In recent years, supported by cloud computing, technologies such as artificial intelligence, virtual/augmented reality, and the Internet of Things have flourished. Cloud computing runs on a large-scale distributed computing platform composed of millions of servers located in data centers around the world and connected through networks. Today, a data center is no longer an isolated computer room but a complex of buildings, and a single data center may comprise many branches that sit in different locations yet are interconnected through the network to jointly carry the deployed services.
What ties these data centers together is data center interconnection technology (hereinafter referred to as DCI technology).

DCI network: the network that realizes the interconnection among data centers
According to the cloud index report released by Cisco, interconnection bandwidth among data centers has grown at nearly 33% per year over the past five years and has reached the order of 100 Tb/s.
Figure 1 Annual traffic growth trends in data centers published by Cisco
When several data centers are connected by optical fiber, and optical communication technology carries the traffic between them, a data center interconnection network (DCI network) is formed.
A DCI network has some distinctive characteristics:
- The topology is mainly point-to-point with simple networking and low complexity;
- Interconnection distances between metro data centers are short, so reducing the unit transmission cost is very attractive to data center operators;
- Network delay matters more: low equipment delay eases data center site selection;
- The dominant interconnection service is 100G Ethernet, so the electrical-layer equipment is relatively simple;
- Given the rapid traffic growth, modular equipment and flexible, scalable networking are preferred;
- There are special hardware requirements, such as fitting into a standard server cabinet, front-to-rear airflow, and high-voltage DC power supply.
DCI technology emerged to better build and maintain the interconnection network among data centers and to keep pace with the rapidly growing traffic between them.
From a closed black box to open decoupling
In the traditional operating model, a system vendor provided the complete solution, including equipment installation, system commissioning, and operation and maintenance support. The whole system was a closed black box, and hardware and software from different vendors were incompatible with each other.
Second, there is the issue of cost. Thanks to the continuous evolution of coherent optical transmission technology, the single-wave rate has increased from 100 Gb/s to 800 Gb/s. Since the main cost of electrical-layer equipment lies in its optical components, higher single-wave rates reduce the cost per bit. However, over the past 10 years few system vendors have stayed at the leading edge throughout, which means that a network built on one closed system cannot always benefit from the latest technology as soon as it arrives.
Fig. 2 Evolution of single-wave rate and single-fiber capacity in the electrical layer
In addition, the proprietary network management software in a closed system cannot integrate with the user's existing resource management, permission management, construction workflow, and routine maintenance systems, which makes it difficult to raise the level of end-to-end automation and thereby shorten service provisioning time.
The first breakthrough of DCI technology is to open up the closed system, allowing users to customize their own networks, avoid vendor lock-in, and ensure supply security. The Alibaba Cloud Infrastructure Optical Network team proposed the concept of open, decoupled DCI technology and worked with industry partners to foster the formation and growth of the DCI technology ecosystem, breaking with the traditional closed-system model.
The DCI network can be seen as a combination of underlying hardware devices and upper-layer management and control software. The devices are divided into optical-layer and electrical-layer devices, whose roles are analogous to urban transportation: optical-layer devices are the roads, and electrical-layer devices are the vehicles on them. Compared with the rapid evolution of electrical-layer technology, optical-layer equipment is infrastructure and evolves relatively slowly. Therefore, the first step of decoupling happens right here: separating roads and vehicles, that is, decoupling the optical layer from the electrical layer. After that, optical-layer and electrical-layer equipment can come from different manufacturers, and the "road" built from one set of optical-layer equipment can carry "vehicles" from different electrical-layer vendors.
Fig. 3 People can drive different types of vehicles from different manufacturers on the road, and the open and decoupled DCI network has a similar capability
It is important for the devices to provide a unified interface. With the development of software-defined networking, the Netconf protocol has been adopted by most equipment manufacturers. Alibaba also joined the OpenConfig organization early on to participate in defining data models for optical networks. Based on the Netconf protocol and the OpenConfig models, a third-party cloud software platform can directly manage and control each manufacturer's equipment. Such a completely decoupled system removes intermediate layers from the management and control path and gives operators more initiative and freedom in responding to new network-level feature requirements.
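As a minimal sketch of this kind of model-driven control, the snippet below builds a Netconf `<edit-config>` payload that sets the target output power of an optical channel in an OpenConfig style. The namespace, leaf names, and values are illustrative assumptions for the sketch, not a guaranteed device schema; in practice such a payload would be sent over Netconf with a client library such as ncclient.

```python
# Sketch: build a Netconf <edit-config> payload that sets the target
# output power of an optical channel, following OpenConfig-style paths.
# Namespace and leaf names are illustrative, not a real device schema.
import xml.etree.ElementTree as ET

OC_NS = "http://openconfig.net/yang/terminal-device"  # illustrative namespace

def build_power_config(channel_index: int, target_power_dbm: float) -> str:
    config = ET.Element("config")
    td = ET.SubElement(config, "terminal-device", xmlns=OC_NS)
    channels = ET.SubElement(td, "logical-channels")
    ch = ET.SubElement(channels, "channel")
    ET.SubElement(ch, "index").text = str(channel_index)
    optical = ET.SubElement(ch, "optical-channel")
    cfg = ET.SubElement(optical, "config")
    # OpenConfig encodes target power in increments of 0.01 dBm
    ET.SubElement(cfg, "target-output-power").text = str(int(target_power_dbm * 100))
    return ET.tostring(config, encoding="unicode")

payload = build_power_config(channel_index=1, target_power_dbm=-1.5)
print(payload)
```

Because the payload is plain model-conformant XML rather than a vendor-private format, the same controller code can drive equipment from any manufacturer that implements the model.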
Figure 4 Open and decoupled DCI network
Flexible architecture supports network scalability
After the closed system is opened, the next step is to choose suitable hardware to build a DCI network that can be expanded flexibly. For a long time, the multiplexer/demultiplexer units of optical-layer devices supported only a fixed channel spacing. As the single-wave rate keeps increasing, so does the spectrum width required by electrical-layer devices. To remain compatible with ever-higher single-wave rates, the fixed-spacing multiplexer/demultiplexer unit should be upgraded to a flexible one based on a Wavelength Selective Switch (WSS).
Figure 5 Flexible MUX and DEMUX unit and flexible-grid spectrum
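To make the flexible-grid idea concrete, the sketch below applies the ITU-T G.694.1 flexible-grid rule, in which a channel occupies a frequency slot that is a multiple of 12.5 GHz, and sizes the slot for a given symbol rate. The roll-off factor and guard band are illustrative assumptions.

```python
import math

# Sketch of the ITU-T G.694.1 flexible-grid rule: a channel occupies a
# slot of m x 12.5 GHz. Given the approximate signal bandwidth
# (baud rate x (1 + roll-off) plus a guard band), find the smallest
# compliant slot. Roll-off and guard-band values are illustrative.

SLOT_GRANULARITY_GHZ = 12.5

def flexgrid_slot_ghz(baud_rate_gbd: float, roll_off: float = 0.15,
                      guard_band_ghz: float = 10.0) -> float:
    signal_bw = baud_rate_gbd * (1 + roll_off) + guard_band_ghz
    m = math.ceil(signal_bw / SLOT_GRANULARITY_GHZ)
    return m * SLOT_GRANULARITY_GHZ

# Higher single-wave rates need wider slots, which fixed-grid MUX units
# cannot provide but a WSS-based flexible MUX can.
print(flexgrid_slot_ghz(64))   # ~400G signal: 87.5 GHz slot
print(flexgrid_slot_ghz(128))  # ~800G signal: 162.5 GHz slot
```

The takeaway mirrors the text: doubling the symbol rate roughly doubles the required slot width, so the multiplexing layer must allocate spectrum flexibly rather than in fixed channels.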
In a large-scale DCI network, service distribution is more complex, and a mesh architecture based on the Reconfigurable Optical Add-Drop Multiplexer (ROADM) needs to be considered. In cities where data centers are dispersed, a star architecture is often used. If the main station offers no optical-layer pass-through, traffic between satellite stations must undergo optical-electrical-optical conversion at the main station, which adds both cost and transmission delay. When the main station is a ROADM, services between satellite stations pass straight through it to the far end, and the pass-through wavelengths and routes can be configured through the network management software, which greatly reduces manual operation and maintenance costs and improves service provisioning efficiency.
Figure 6 Synergy between IP network and DCI network supporting ROADM
Note: “station” in the figure refers to “satellite station”
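A rough latency model illustrates why pass-through at the main station matters. Fiber propagation is roughly 5 µs per km; the per-hub OEO latency used here is an illustrative assumption, not a vendor figure.

```python
# Rough latency model for traffic between two satellite stations routed
# via the main station. Fiber propagation is ~5 us per km; the OEO
# latency at the hub is an illustrative assumption, not a measured value.

FIBER_US_PER_KM = 5.0

def hub_path_latency_us(km_a_to_hub: float, km_hub_to_b: float,
                        hub_is_roadm: bool, oeo_latency_us: float = 5.0) -> float:
    latency = (km_a_to_hub + km_hub_to_b) * FIBER_US_PER_KM
    if not hub_is_roadm:
        latency += oeo_latency_us  # extra optical-electrical-optical stage at the hub
    return latency

print(hub_path_latency_us(40, 60, hub_is_roadm=False))  # 505.0
print(hub_path_latency_us(40, 60, hub_is_roadm=True))   # 500.0
```

Beyond the delay itself, the ROADM case also removes the transponder pair at the hub, which is where the cost saving mentioned above comes from.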
In the point-to-point scenario the optical layer is fully built on day one, so optical-electrical decoupling fits naturally. In a mesh DCI network, considering the addition of future sites and the growth of the network, the optical layer itself needs to be further decoupled. We recommend decoupling the ROADM by direction and ensuring that all devices within one Optical Multiplex Section (OMS) come from the same manufacturer. In this way, the optical layer of the DCI network is effectively segmented and excessive inter-vendor coordination is avoided. For example, on day one there is only a link between sites A and B, built with equipment from supplier M; when a new site C is added later, the link between C and B is built by supplier T1, and the link between C and A by supplier T2.
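The decoupling rule above can be sketched as a simple data structure: each OMS between two sites is owned by exactly one supplier, while different OMSs may come from different suppliers. The site and supplier names follow the day-one/day-two example in the text.

```python
# Sketch of the per-OMS decoupling rule: each Optical Multiplex Section
# (OMS) between two sites is built entirely by one supplier, but different
# OMSs may use different suppliers. Names follow the example in the text.
from typing import Dict, Tuple

# (site, site) -> supplier responsible for the OMS connecting them
oms_suppliers: Dict[Tuple[str, str], str] = {
    ("A", "B"): "M",   # day one: the first link, supplier M
    ("B", "C"): "T1",  # later: new site C, supplier T1
    ("A", "C"): "T2",  # later: supplier T2
}

def oms_supplier(site1: str, site2: str) -> str:
    """Look up the single supplier of an OMS, in either direction."""
    key = (site1, site2) if (site1, site2) in oms_suppliers else (site2, site1)
    return oms_suppliers[key]

print(oms_supplier("C", "A"))  # T2
```

Keeping one supplier per OMS bounds the scope of inter-vendor interoperability to the connection points between sections, which is exactly what the universal fiber connection box described next addresses.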
To deal with the inability to interconnect equipment whose connectors differ across manufacturers, we designed a universal fiber connection box that supports flexible plug-in cards, consisting of a fully connected backplane and direction-adaptive plug-in cards. The adapter card for each direction matches that manufacturer's connector specification and "translates" its fiber line sequence into a common line sequence. In this way, any two directions are fully connected through the universal fiber connection box, which elegantly realizes optical-layer heterogeneity and opens the door to free expansion of the DCI network.
Fig. 7 Schematic diagram of heterogeneous ROADM and the optical-layer decoupling scheme based on the universal fiber connection box
Note: “the box” in the figure refers to “universal fiber connection box”, and “D” refers to “direction”.
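Logically, each direction-adaptive card performs a permutation between a vendor-specific fiber line sequence and the common backplane sequence, which is what lets any two directions interconnect. The sketch below shows that translation; the line sequences are made-up examples, not real vendor pinouts.

```python
# Sketch of what the universal fiber connection box does logically: each
# direction card "translates" a vendor-specific fiber line sequence into
# a common backplane sequence, so any two directions can interconnect.
# The sequences below are made-up examples, not real vendor pinouts.

VENDOR_TO_COMMON = {
    # vendor-side line index -> common backplane line index
    "vendorX": [0, 1, 2, 3, 4, 5, 6, 7],
    "vendorY": [1, 0, 3, 2, 5, 4, 7, 6],  # fiber pairs swapped vs. common order
}

def to_common(vendor: str, vendor_line: int) -> int:
    """Map a vendor-side fiber position to the common backplane position."""
    return VENDOR_TO_COMMON[vendor][vendor_line]

def cross_connect(vendor_a: str, line_a: int, vendor_b: str) -> int:
    """Find which line on vendor B's card meets vendor A's line via the backplane."""
    common = to_common(vendor_a, line_a)
    return VENDOR_TO_COMMON[vendor_b].index(common)

print(cross_connect("vendorX", 0, "vendorY"))  # 1
```

Because every card translates to the same common sequence, adding a new vendor direction only requires one new adapter card, not a new pairwise agreement with every existing vendor.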
Control automation improves network efficiency
Compared with IP digital communication systems, optical networks retain many analog properties: how should optical power be adjusted, and how should amplifier gain and slope be configured? Meeting these challenges requires open optical network design tools that third parties can use. A multi-level abstract model describes the behavior and functions of different manufacturers' equipment, with inter-vendor differences captured in the model's key specification parameters. Combining this with the actual topology, service, and resource data, the planner solves an end-to-end optimization problem to obtain the target configuration values for every device and the resulting performance margin.
When adding services or optimizing configurations in a live network, the adjustment path from the current configuration to the target must be chosen as carefully as a rock climber picks holds. Because of optical amplifier nonlinearity, fiber Kerr nonlinearity, and stimulated Raman scattering, not only the channel being adjusted but also its neighboring channels, and channels on nearby related OMSs, need to be monitored.
A real-time status check unit is introduced into the configurator: equipment performance data collected in real time passes through customized check logic that determines whether the current adjustment path carries risk, and the path is continuously updated. By repeating this process, the preset adjustment target is eventually reached safely.
Fig. 8 Open optical network design tools and the automated configuration process available to third parties
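The check-then-step process above can be sketched as a small control loop: move the configuration toward the target in bounded steps, consulting a real-time safety check before committing each step. The check logic and step size here are placeholders for the customized per-device logic described in the text.

```python
# Sketch of the check-then-step adjustment loop: walk a configuration value
# from its current setting toward the target in small steps, consulting a
# real-time status check before each step. The check predicate and step
# size are placeholders for the customized per-device logic.

def adjust_safely(current: float, target: float, is_safe, step: float = 0.5):
    """Move `current` toward `target`; stop and hold if a step is judged risky."""
    while abs(target - current) > 1e-9:
        delta = max(-step, min(step, target - current))  # bounded step toward target
        candidate = current + delta
        if not is_safe(candidate):
            return current, False  # hold position; retry logic or an operator takes over
        current = candidate
    return current, True

# Example: raise a channel power from -3 dBm to -1 dBm, with a check that
# refuses any setting above 0 dBm (an illustrative risk threshold).
final, ok = adjust_safely(-3.0, -1.0, is_safe=lambda p: p <= 0.0)
print(final, ok)  # -1.0 True
```

In the real configurator the `is_safe` predicate is fed by live performance telemetry from the adjusted channel, its neighbors, and related OMSs, so the loop only advances while the whole network stays within its margins.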
Development and Challenges
The continuous emergence of Internet services and the rapidly evolving cloud computing have propelled the DCI network to flourish over the past decade. Open and decoupled systems, simple and flexible architecture, and software automation are the main innovations of DCI. In the near future, 5G networks, Internet of Things (IoT), augmented reality (AR) and virtual reality (VR), and edge cloud computing will continue to drive the rapid growth of DCI networks. An open DCI ecosystem will be more conducive to the development and introduction of new technologies, promote technological innovation and industrial prosperity, better meet customer and business needs, and ultimately propel cloud computing to a new stage!
Related Products:
- 200G Muxponder Service Card: 20x10G SFP+ to 1x200G CFP2, 2 Slot $8835.00
- 2x200G Muxponder Service Card: 4x100G QSFP28 to 2x200G CFP2, 1 Slot $3285.00
- 200G Muxponder Service Card: 2x100G QSFP28 or 1x100G QSFP28 and 10x10G SFP+ to 1x200G CFP2, 2 Slot $8835.00
- 2x400G Muxponder Service Card: 8x100G QSFP28 to 2x400G CFP2, 2 Slot $4725.00
- 400G Muxponder Service Card: 4x100G QSFP28 to 1x400G CFP2, 1 Slot $3285.00
- DCI BOX Chassis, 19", 1U: 4 equal 1/4 slots (also compatible with 2 equal 1/2 slots), including front interface board providing 1 CONSOLE and 3 ETH management ports, 2 standard CRPS power supplies: 220V AC or 48V DC optional $3600.00
- CFP2-400G-DCO 400G Coherent CFP2-DCO C-band Tunable Optical Transceiver Module $8000.00
- CFP2-200G-DCO 200G Coherent CFP2-DCO C-band Tunable Optical Transceiver Module $7000.00