PAM-4 and NRZ are two different modulation techniques used to transmit data over an electrical or optical channel. Modulation is the process of varying a characteristic of a signal (such as its amplitude, phase, or frequency) to encode information. PAM-4 and NRZ have different advantages and disadvantages depending on the channel characteristics and the data rate.
PAM-4 stands for Pulse Amplitude Modulation, 4-level. The signal can take four different amplitude (or voltage) levels, each representing two bits of information. For example, a PAM-4 signal can use 0V, 1V, 2V, and 3V to encode 00, 01, 11, and 10 respectively (a Gray code, so adjacent levels differ by only one bit). PAM-4 transmits twice as much data as NRZ at the same symbol rate (or baud rate), which is the number of times the signal changes per second. However, PAM-4 also has drawbacks: because each eye opening is only one-third of the full signal swing, it suffers an SNR penalty of roughly 9.5 dB relative to NRZ, which raises the bit error rate (BER) and drives up power consumption in the more sophisticated signal processing and error correction needed to compensate. PAM-4 is used for high-speed data transmission such as 400G Ethernet.
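To make the mapping concrete, here is a minimal Python sketch of the Gray-coded PAM-4 encoding described above. The 0–3 V levels are the illustrative values from the text, not levels used by any real transceiver, which would use a small differential swing:

```python
# Minimal sketch of Gray-coded PAM-4 mapping: each pair of bits selects
# one of four levels. Levels 0-3 V mirror the example in the text.
GRAY_PAM4 = {"00": 0.0, "01": 1.0, "11": 2.0, "10": 3.0}  # bits -> volts

def pam4_encode(bits: str) -> list[float]:
    """Map a bit string (even length) to a sequence of PAM-4 levels."""
    assert len(bits) % 2 == 0, "PAM-4 consumes bits two at a time"
    return [GRAY_PAM4[bits[i:i + 2]] for i in range(0, len(bits), 2)]

print(pam4_encode("00011110"))  # [0.0, 1.0, 2.0, 3.0]: 4 symbols carry 8 bits
```

Note how eight bits collapse into four symbols: that halving of the symbol count is the entire appeal of PAM-4.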

NRZ stands for Non-Return-to-Zero. The signal can take two different amplitude (or voltage) levels, each representing one bit of information. For example, an NRZ signal can use -1V and +1V to encode 0 and 1 respectively. The signal does not return to zero volts between symbols, hence the name. NRZ has some advantages over PAM-4: lower power consumption, higher SNR, and lower BER. It is simpler and more robust than PAM-4, but it carries only half the data at the same symbol rate. NRZ is used for lower-rate links such as 100G Ethernet, which typically runs as four lanes of 25G NRZ.
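A companion sketch for NRZ (again using the ±1 V example levels from the text) makes the trade-off visible: one bit per symbol, so the same payload needs twice as many symbols as the PAM-4 sketch above:

```python
# Minimal sketch of NRZ mapping: each single bit selects one of two levels.
NRZ = {"0": -1.0, "1": +1.0}  # bit -> volts

def nrz_encode(bits: str) -> list[float]:
    """Map a bit string to a sequence of NRZ levels, one symbol per bit."""
    return [NRZ[b] for b in bits]

print(nrz_encode("00011110"))  # 8 symbols for 8 bits (PAM-4 needed only 4)
```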

When a signal is referred to as “25Gb/s NRZ” or “25G NRZ”, it means the signal carries data at 25 Gbit/s using NRZ modulation. When a signal is referred to as “50G PAM-4” or “100G PAM-4”, it carries data at 50 Gbit/s or 100 Gbit/s, respectively, using PAM-4 modulation. Because PAM-4 packs two bits into each symbol, a 50G PAM-4 signal runs at the same 25 GBd symbol rate as a 25G NRZ signal.
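The naming convention reduces to simple arithmetic: symbol rate = bit rate ÷ bits per symbol. A small sketch of that calculation (nominal rates only, ignoring FEC and line-coding overhead, which push real lane rates slightly higher):

```python
# Symbol (baud) rate from line rate: 1 bit/symbol for NRZ, 2 for PAM-4.
BITS_PER_SYMBOL = {"NRZ": 1, "PAM-4": 2}

def baud_rate(gbps: float, modulation: str) -> float:
    """Symbol rate in GBd for a given nominal line rate in Gbit/s."""
    return gbps / BITS_PER_SYMBOL[modulation]

for label, rate, mod in [("25G NRZ", 25, "NRZ"),
                         ("50G PAM-4", 50, "PAM-4"),
                         ("100G PAM-4", 100, "PAM-4")]:
    print(f"{label}: {baud_rate(rate, mod)} GBd")
# 25G NRZ and 50G PAM-4 both run at 25 GBd; 100G PAM-4 runs at 50 GBd.
```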