What Does It Mean When an Electrical or Optical Channel is PAM-4 or NRZ?


John Doe

Answered on 8:10 am

PAM-4 and NRZ are two different modulation techniques used to transmit data over an electrical or optical channel. Modulation is the process of varying a property of a signal (such as its amplitude, phase, or frequency) to encode information. PAM-4 and NRZ each have advantages and disadvantages depending on the channel characteristics and the target data rate.

PAM-4 stands for 4-level Pulse Amplitude Modulation. The signal can take four different amplitude (voltage) levels, each representing two bits of information. For example, a PAM-4 signal could use 0 V, 1 V, 2 V, and 3 V to encode 00, 01, 11, and 10 respectively (a Gray mapping, so adjacent levels differ by only one bit). PAM-4 therefore transmits twice as much data as NRZ at the same symbol rate (or baud rate), which is the number of times the signal can change per second. However, PAM-4 also has drawbacks: because four levels are packed into the same voltage swing, the eye openings are smaller, giving a lower signal-to-noise ratio (SNR) and a higher bit error rate (BER), and the transceivers consume more power. PAM-4 links rely on more sophisticated signal processing and forward error correction to overcome these challenges. PAM-4 is used for high-speed data transmission such as 400G Ethernet.

Figure: PAM-4 signal levels
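To make the two-bits-per-symbol idea concrete, here is a minimal Python sketch of a PAM-4 mapper. It uses the illustrative 0 V to 3 V levels and Gray mapping from the example above; real transceivers use level spacings and mappings defined by the relevant standard.

```python
# Minimal PAM-4 mapper sketch (illustrative levels, not a standard's levels).
# Each pair of bits is Gray-mapped to one of four amplitudes:
#   00 -> 0 V, 01 -> 1 V, 11 -> 2 V, 10 -> 3 V
PAM4_GRAY_MAP = {"00": 0.0, "01": 1.0, "11": 2.0, "10": 3.0}

def encode_pam4(bits: str) -> list[float]:
    """Map a bit string (even length) onto PAM-4 symbol levels, 2 bits per symbol."""
    if len(bits) % 2:
        raise ValueError("PAM-4 encodes 2 bits per symbol; bit count must be even")
    return [PAM4_GRAY_MAP[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(encode_pam4("00011110"))  # 8 bits -> 4 symbols: [0.0, 1.0, 2.0, 3.0]
```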

NRZ stands for Non-Return-to-Zero. The signal can take two different amplitude (voltage) levels, each representing one bit of information. For example, an NRZ signal could use -1 V and +1 V to encode 0 and 1 respectively. The signal does not return to zero volts between symbols, hence the name. NRZ has some advantages over PAM-4, such as lower power consumption, higher SNR, and lower BER, because its two levels are widely separated. NRZ is simpler and more robust than PAM-4, but it carries half as much data at the same symbol rate. NRZ is used at lower per-lane rates, for example the 25 Gb/s lanes of 100G Ethernet.

Figure: NRZ signal levels
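For comparison, here is an equally minimal NRZ mapper sketch, using the illustrative -1 V / +1 V levels from the example above. Fed the same 8-bit stream as the PAM-4 sketch, it produces eight symbols instead of four, which is the halved data-per-symbol trade-off in code form.

```python
# Minimal NRZ mapper sketch (illustrative -1 V / +1 V levels).
# One bit per symbol, so the same 8-bit stream needs 8 symbols
# where PAM-4 needed only 4.
NRZ_MAP = {"0": -1.0, "1": +1.0}

def encode_nrz(bits: str) -> list[float]:
    """Map a bit string onto NRZ symbol levels, 1 bit per symbol."""
    return [NRZ_MAP[b] for b in bits]

print(encode_nrz("00011110"))  # 8 bits -> 8 symbols
```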

When a signal is referred to as “25Gb/s NRZ” or “25G NRZ”, it carries data at 25 Gbit/s using NRZ modulation. When a signal is referred to as “50G PAM-4” or “100G PAM-4”, it carries data at 50 Gbit/s or 100 Gbit/s respectively, using PAM-4 modulation.
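As a quick sanity check on these names: the symbol (baud) rate is the bit rate divided by the number of bits carried per symbol, so 25G NRZ and 50G PAM-4 both run at 25 GBaud. A small sketch, ignoring FEC and line-coding overhead (which raise the actual line rate slightly):

```python
# Symbol (baud) rate = bit rate / bits per symbol.
# FEC and line-coding overhead are ignored, so real line rates are slightly higher.
def baud_rate_gbaud(bit_rate_gbps: float, bits_per_symbol: int) -> float:
    return bit_rate_gbps / bits_per_symbol

print(baud_rate_gbaud(25, 1))   # 25G NRZ    -> 25.0 GBaud
print(baud_rate_gbaud(50, 2))   # 50G PAM-4  -> 25.0 GBaud
print(baud_rate_gbaud(100, 2))  # 100G PAM-4 -> 50.0 GBaud
```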
