Can the 400G-FR4 and 400G-LR4 Transceivers Interoperate with Each Other?


Harper Ross

Answered on 1:59 am

Yes, the 400G-FR4 and 400G-LR4 transceivers can interoperate, with the link reach limited to 2 km by the FR4. Note that the 400G-FR4's maximum allowed receiver input power (+3.5 dBm) is lower than the 400G-LR4's maximum transmitter output power (+5.1 dBm), so a minimum level of attenuation may be required on the link to avoid overloading the FR4 receiver.
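The worst-case attenuation requirement follows directly from the two power figures above. As a rough illustration (a sketch, not a full link-budget analysis; fiber loss, connector loss, and per-lane variation are ignored here), the minimum attenuation is simply the gap between the LR4's maximum transmit power and the FR4's maximum receive power:

```python
# Illustrative overload check for a 400G-LR4 -> 400G-FR4 link.
# Power figures (dBm) are the aggregate values quoted above.
LR4_MAX_TX_DBM = 5.1   # 400G-LR4 maximum transmitter output power
FR4_MAX_RX_DBM = 3.5   # 400G-FR4 maximum receiver input power

def min_attenuation_db(max_tx_dbm: float, max_rx_dbm: float) -> float:
    """Minimum attenuation (dB) needed so the worst-case Tx power
    does not exceed the receiver's maximum allowed input power."""
    return max(0.0, max_tx_dbm - max_rx_dbm)

atten = min_attenuation_db(LR4_MAX_TX_DBM, FR4_MAX_RX_DBM)
print(f"Minimum required attenuation: {atten:.1f} dB")  # 1.6 dB
```

In practice, the fiber itself contributes some loss (roughly 0.4 dB/km for single-mode at these wavelengths), so an explicit attenuator is mainly a concern on very short patches.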

According to the Cisco 400G QSFP-DD Cable and Transceiver Modules Data Sheet, the 400G-FR4 and 400G-LR4 transceivers are both compliant with the 100G Lambda MSA, which defines a common optical interface for 100G-per-wavelength applications. Both transceivers use four optical lanes, each carrying a 100G PAM4 signal, over duplex single-mode fiber with an LC connector. The main difference between them is transmission distance: the 400G-FR4 reaches up to 2 km, while the 400G-LR4 reaches up to 10 km.

Therefore, for these transceivers to interoperate, they need compatible wavelengths and overlapping power budgets. The 100G Lambda MSA specifies two wavelength grids for 100G-per-wavelength applications: LAN-WDM (1295.56 nm, 1300.05 nm, 1304.58 nm, and 1309.14 nm) and CWDM4 (1271 nm, 1291 nm, 1311 nm, and 1331 nm). Both the 400G-FR4 and the 400G-LR4 use the CWDM4 grid, which is why their optical lanes line up and the two modules can exchange traffic directly.
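The wavelength condition can be sketched as a simple grid comparison. This is purely illustrative (the `GRID` table below is my own mapping based on the MSA wavelength sets quoted above, not data from the cited data sheet):

```python
# Wavelength grids (nm) defined by the 100G Lambda MSA, as quoted above.
LAN_WDM = (1295.56, 1300.05, 1304.58, 1309.14)
CWDM4 = (1271.0, 1291.0, 1311.0, 1331.0)

# Grid used by each module type: both 400G-FR4 and 400G-LR4 sit on the
# CWDM4 grid, which is the basis of their interoperability.
GRID = {"400G-FR4": CWDM4, "400G-LR4": CWDM4}

def wavelengths_compatible(a: str, b: str) -> bool:
    """Two modules can exchange light only if their lanes use the same grid."""
    return GRID[a] == GRID[b]

print(wavelengths_compatible("400G-FR4", "400G-LR4"))  # True
```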
