The Future of Cable Infrastructure: How Data Performance Requirements Shape Data Center Architecture

Data center transfer rate requirements have evolved considerably, with the industry moving from 20 Gbps to 56 Gbps and now to 112 Gbps. Consumers and brand manufacturers expect an ever faster and more efficient online experience, which demands continuous change in data centers, and this trend shows no sign of slowing down.

As the industry expects operating frequencies to jump again and data rates to climb to 224 Gbps, data center construction will have to change again. High-speed cables, especially direct attach cables (DACs), have historically been the solution of choice for connecting servers within a rack, and they play a vital role in enabling quick and efficient upgrades in data centers. However, as higher frequency requirements emerge and data rates escalate, data center managers demand versatility in their cabling tool portfolio.

As data rates and operating frequencies continue to rise, data center planners must keep in mind which cables suit each step up in frequency, because the usable reach of a DAC shrinks with every increase. At 224 Gbps, for example, a DAC may only achieve high-quality transmission over distances of up to 1 meter. To bridge longer links, active electrical cables (AECs) and active optical cables (AOCs) must fill the gaps where DACs can no longer support plug-and-play upgrades. In other words, today’s data center managers need a portfolio of cabling products, rather than a single traditional cable type, to serve different rack architectures. Passive cables such as DACs have dominated in-rack connectivity for decades, but as the industry moves to next-generation frequencies, active cables are not just increasingly popular, they are essential.

To prepare for future growth while maximizing data transfer performance today, consider the following key factors when choosing a cabling solution for changing data center needs.

Short reach and power budget:

DACs have become a standard connectivity solution within data center racks. At 56 Gbps PAM-4 (a pulse-amplitude modulation scheme), a DAC can effectively connect servers within a rack over link lengths up to 3.0 m, but higher data rates introduce signal loss in these passive cables. As data rates continue to climb, DACs will be best suited to links of 1.0 m or less, which may not be enough to connect a top-of-rack (TOR) switch to servers lower in the rack.

Because a DAC cable assembly contains no electronics, it is a passive solution that passes data straight through. Beyond in-rack connections, DACs are therefore ideal wherever additional power draw would push up the overall power consumption of the rack.

A cloud-operated data center in North Dakota, USA, demonstrates DAC cables in action. The center’s seven-foot racks are buzzing with data transfers, and the TOR switches communicate via DAC cables with every server in the rows below. Because DAC cabling costs less than other solutions and adds nothing to the thermal budget, it is a good fit for busy data centers operating at lower frequencies. At 56 Gbps PAM-4, a DAC can effectively connect the TOR switch to every server in the rack, from the top-most down to the bottom-most.

Over time, performance expectations have risen and data centers have upgraded to 112 Gbps PAM-4. DACs can still connect TOR switches to servers higher in the rack, but at these higher rates they suffer unacceptable data loss once the distance exceeds 2.0 m. Today’s data center managers therefore need an alternative cabling solution to connect the lower servers to the TOR switch while maintaining acceptable performance.

Bottom line: DACs add nothing to the power budget and remain a viable option for 56 Gbps PAM-4 applications over in-rack link lengths up to 3.0 m. At 112 Gbps PAM-4, they are still valid for link lengths of 0.5 to 1.0 m.
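
To make those reach limits concrete, the short Python sketch below encodes them as a lookup table. The figures are the approximate values quoted in this article, and the dictionary keys and function name (dac_is_viable) are illustrative assumptions rather than any vendor tool or specification.

```python
# Minimal sketch, not vendor data: approximate maximum DAC reach per data rate,
# using the figures quoted in this article.
MAX_DAC_REACH_M = {
    "56G-PAM4": 3.0,   # a DAC can span a full rack at 56 Gbps PAM-4
    "112G-PAM4": 1.0,  # roughly 0.5 to 1.0 m before loss becomes unacceptable
    "224G-PAM4": 1.0,  # expected to be limited to about 1 m
}

def dac_is_viable(rate: str, link_length_m: float) -> bool:
    """Return True if a passive DAC is expected to cover the requested link."""
    limit = MAX_DAC_REACH_M.get(rate)
    return limit is not None and link_length_m <= limit

# Example: a 2.0 m TOR-to-server run at 112 Gbps PAM-4 exceeds the quoted reach.
print(dac_is_viable("112G-PAM4", 2.0))  # False
```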

Filling the gap: Active cables

Where DACs run out of reach at 112 Gbps PAM-4, AECs provide a strong alternative: nearly zero data loss and a smaller cable diameter make them an efficient choice for transmission lengths beyond 2.0 m.

A retimer built into the AEC cleans up the signal at both ends of the cable. As data enters the cable, the retimer removes noise and re-amplifies the signal; as data exits, the same process is repeated. Compared with a DAC, an AEC carries data “cleaner” over longer distances, while remaining less expensive than AOC components.
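
To illustrate the idea of “cleaning” the signal, here is a deliberately simplified Python sketch. It models only the decision step of snapping noisy samples back to ideal PAM-4 levels; the level values, function name, and example data are assumptions for illustration, not an actual retimer implementation.

```python
# Conceptual illustration only: a retimer effectively "re-slices" each received
# symbol back to the nearest ideal PAM-4 level, discarding noise accumulated
# along the copper before driving the signal onward. Real retimers also recover
# the clock and equalize the channel; none of that is modeled here.
PAM4_LEVELS = (-3.0, -1.0, 1.0, 3.0)

def reslice(noisy_symbols):
    """Snap each noisy sample to the closest ideal PAM-4 level."""
    return [min(PAM4_LEVELS, key=lambda level: abs(sample - level))
            for sample in noisy_symbols]

# Noise picked up along the cable is removed at the retimer.
print(reslice([-2.7, 0.8, 1.2, 3.4]))  # [-3.0, 1.0, 1.0, 3.0]
```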

The managers of the North Dakota cloud data center mentioned above have added AECs to their cabling options for the upgrade to 112 Gbps PAM-4. Their racks now use a combination of DAC and AEC components, effectively getting the best of both worlds for a higher-function, higher-frequency data center. DACs remain the most cost-effective way to connect TOR switches to the servers nearest them, but data loss prevents them from reaching the more distant servers lower in the rack. AECs cost less than fiber optic cable and provide an essentially lossless connection from the top of the rack to the bottom. As data rates keep rising, the AEC is the ideal middle option between the DAC and the AOC.

Bottom line: AECs are an excellent choice for clean, high-speed connections across rows of servers at transmission lengths up to 7.0 m. Although they consume some power, their smaller diameter helps improve airflow.

Overcoming Transmission Distance Problems: Fiber Optics

AOCs are another important option. They carry data over optical fiber, so the signal suffers almost no degradation and virtually no data is lost in transit. Because of this construction, AOCs can reliably transmit data over much longer links; those distances can be measured in kilometers.

Of the three cable options, AECs cost more than DACs, and AOCs are the most expensive of all. Although upgrading to fiber can cost up to 10 times as much as the original copper cable, AOCs are ideal for rack-to-rack and row-to-row connections, especially in large data centers with long cable runs where performance thresholds are critical. By the same token, there is no economic benefit to using AOCs to connect servers within a rack, where DAC and AEC cables are the more economical options.

The North Dakota cloud data center in our example now uses AOCs for longer-distance applications (>7.0 m), such as connections to end-of-row switches. At 112 Gbps PAM-4 and higher data rates, AOCs do not suffer from data loss. The facility also uses fiber to connect to data centers in Texas and Virginia, and this long-distance connectivity will eventually extend to data centers in Europe, Asia, and elsewhere around the world.

Bottom line: AOCs, with their powerful fiber capabilities, provide an excellent solution for both row-to-row and data center-to-data center connections.
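
Pulling the three bottom lines together, the sketch below shows how these reach guidelines might be applied as a simple selection rule. The thresholds are the approximate figures quoted in this article and the function name is a hypothetical example, not a real tool or specification.

```python
# Illustrative sketch only: choose a cable family from the reach guidelines
# quoted in this article; the thresholds are approximations from the text.
def select_cable(rate_gbps: int, link_length_m: float) -> str:
    dac_limit_m = 3.0 if rate_gbps <= 56 else 1.0  # DAC reach shrinks with rate
    aec_limit_m = 7.0                              # AECs cover roughly 2-7 m
    if link_length_m <= dac_limit_m:
        return "DAC"  # passive, lowest cost, adds nothing to the power budget
    if link_length_m <= aec_limit_m:
        return "AEC"  # retimed copper for nearly lossless in-rack links
    return "AOC"      # fiber for row-to-row and longer runs

# Example: a 5 m run at 112 Gbps falls in the AEC range; 20 m calls for AOC.
print(select_cable(112, 5.0))   # AEC
print(select_cable(112, 20.0))  # AOC
```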

Molex DAC Components

The Quad Small Form-Factor Pluggable Double Density (QSFP-DD) interconnect solution is designed for high-density applications, making efficient use of space, power, and port density. QSFP-DD/QSFP+ high-speed passive I/O cable assemblies deliver data rates up to 400 Gbps, with a range of lengths and customization options for greater design flexibility.

These components offer superior performance and reliability thanks to innovative manufacturing and shielding processes that minimize external and internal noise. The passive I/O cables comply with IEEE 802.3by, IEEE 802.3bj, and IEEE 802.3cd and are compatible with industry-standard connectors and cages.

AEC 112 Gbps PAM-4 Solutions

Demand for bandwidth-intensive, data-driven services has skyrocketed, driving growth in computing, data storage, and networking capabilities. In response to higher bandwidth demands, the active electrical cable (AEC) solution with QSFP-DD and OSFP interconnects offers a new pluggable connection designed for data rates between 100 and 800 Gbps while extending reach to 5 meters without the use of fiber optic cables.

As data rate requirements increase, signal loss becomes a design issue. Engineers have a variety of options, such as optical connections, linear amplifiers, and retimers, each with its own pros and cons. AECs are an ideal choice when cable lengths beyond 1.5 to 2.0 meters are required. Because the connectors in the AEC assembly regenerate the signal and remove noise, the limitations imposed by in-box signal loss are significantly reduced, even at lengths of 5.0 to 7.0 m.

As speed and functionality increase, thermal management becomes more challenging, and telecom and networking OEMs must minimize the airflow impedance caused by heavy cabling at the front of the chassis. AEC solutions shrink cable bundles by moving from 28 AWG to 32 AWG wire, reducing airflow resistance at the front of the chassis.

Data center managers often require more than 2.0 meters of cable to reach between chassis, but longer runs create greater losses. The AEC 112 Gbps PAM-4 solution enables external lengths of 5.0 to 7.0 meters.

AOC Integrated Cable Solutions

Molex’s AOC solution has significant cost advantages over traditional optical modules. In addition, AOCs can connect to the system via a wide range of standard MSA connectors, including QSFP+, iPass+™ HD, and CXP. These cables are electrically compatible with InfiniBand FDR/QDR/DDR, Ethernet (10, 40, and 56 Gbps), Fibre Channel (8 and 10 Gbps), SAS 3.0 and 2.1 (12 and 6 Gbps), and other protocol applications.

Authors:

Molex Copper Solutions Product Development Manager: Chris Kapuscinski
General Manager, Molex I/O Solutions: Chad Jameson

As an authorized distributor of Molex, Heilind provides product service and support to the market. Heilind also supplies products from many of the world’s top manufacturers across 25 component categories, serving all market segments and all customers while constantly seeking a broad product offering to cover every market.

About Heilind Electronics

Founded in 1974, Heilind Electronics is headquartered in Boston, USA, and has more than 40 offices across China, Singapore, the United States, Brazil, Canada, and Mexico. Heilind supports OEMs and contract manufacturers in all market segments of the electronics industry, supplying products from the industry’s leading manufacturers across 25 component categories, with a special focus on interconnects, electromechanical products, fasteners and hardware, sensor products, and more.

Heilind operates on strong inventories, flexible policies, responsive systems, knowledgeable technical support, and unmatched customer service. In December 2012, Heilind Electronics officially launched its Asia-Pacific business. Headquartered in Hong Kong, China, Heilind Asia Pacific has established a regional distribution center and a value-added service center alongside its sales organization. To date, Heilind Asia Pacific has 24 branches in Hong Kong, Shanghai, Tianjin, Qingdao, Suzhou, Shenzhen, Dongguan, Chengdu, Xiamen, Taipei, Tainan, Singapore, Malaysia, India, Thailand, the Philippines, Vietnam, Indonesia, and elsewhere, plus 4 warehouses (Hong Kong, Singapore, Suzhou, and Taipei), dedicated to bringing the core value of distribution back to the industry. For more information, please visit www.heilind.com or www.heilindasia.com, or follow Heilind on WeChat, Weibo, Facebook, and Twitter.
