- Casey
Can CX7 Dual-port 400G reach 800G after bonding? Why can 200G reach 400G after bonding?
FiberMall
Answered on 8:18 am
The CX7 dual-port 400G adapter is available with Octal Small Form Factor Pluggable (OSFP) or Quad Small Form Factor Pluggable (QSFP) connectors. It supports bonding (link aggregation), which combines the two 400G ports into a single logical 800G port, increasing both bandwidth and redundancy. However, to actually reach 800G after bonding, the following conditions must be met:
1. Use OSFP connectors and NDR cables, which provide 400 Gb/s per port.
2. Use switches that support 800G, such as the Cisco Nexus 9800 Series or Cisco Nexus 9232E.
3. Configure the correct bonding mode and parameters, such as LACP, load balancing, and failover; a minimal configuration sketch follows below.
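As a rough illustration of step 3, here is a minimal sketch of creating an LACP (802.3ad) bond over the two CX7 ports on Linux using the iproute2 CLI; the interface names enp1s0f0np0 and enp1s0f1np1 are hypothetical and will differ on your system.

```python
import subprocess

def run(cmd):
    """Echo and execute one iproute2 command, failing loudly on error."""
    print("+", " ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical names for the two CX7 400G ports; check `ip link` for yours.
members = ["enp1s0f0np0", "enp1s0f1np1"]

# Create an LACP (802.3ad) bond; layer3+4 hashing spreads flows across
# both members so the aggregate can approach 2 x 400G = 800G.
run(["ip", "link", "add", "bond0", "type", "bond",
     "mode", "802.3ad", "xmit_hash_policy", "layer3+4"])

for ifname in members:
    run(["ip", "link", "set", ifname, "down"])            # must be down to enslave
    run(["ip", "link", "set", ifname, "master", "bond0"])
    run(["ip", "link", "set", ifname, "up"])

run(["ip", "link", "set", "bond0", "up"])
```

Note that LACP balances traffic per flow: any single flow still rides one 400G member, so 800G is the aggregate across many flows, and the switch side must present a matching LACP port-channel.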
If you use QSFP connectors with HDR or EDR cables, the bandwidth after bonding will be lower, because HDR runs at 200 Gb/s per port and EDR at 100 Gb/s per port. Bonding two HDR ports therefore tops out at 400 Gb/s (2 × 200 Gb/s), which is why 200G can reach 400G after bonding, while two EDR ports top out at 200 Gb/s.
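To see where those bonded figures come from, here is a small sketch, assuming a Linux bond named bond0 is already up, that sums the members' negotiated link speeds as reported in sysfs: two NDR/400G members sum to 800 Gb/s, two HDR/200G members to 400 Gb/s, and two EDR/100G members to 200 Gb/s.

```python
from pathlib import Path

def member_speeds_gbps(bond="bond0"):
    """Read each bond member's negotiated speed (sysfs reports Mb/s)."""
    speeds = {}
    # The kernel exposes each enslaved port as a lower_<ifname> symlink.
    for lower in Path(f"/sys/class/net/{bond}").glob("lower_*"):
        ifname = lower.name.removeprefix("lower_")
        mbps = int((lower / "speed").read_text())
        speeds[ifname] = mbps / 1000  # Mb/s -> Gb/s
    return speeds

speeds = member_speeds_gbps("bond0")
for ifname, gbps in speeds.items():
    print(f"{ifname}: {gbps:g} Gb/s")
# Aggregate bond bandwidth is the sum of its members:
# 2 x 400 = 800, 2 x 200 = 400, 2 x 100 = 200.
print(f"aggregate: {sum(speeds.values()):g} Gb/s")
```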