The Move to 400Gb is closer than you think

As data center managers look to the horizon, the signs of a cloud-based evolution are everywhere: more high-performance virtualized servers, higher bandwidth and lower latency requirements, and faster switch-to-server connections. Whether you’re running 10Gb or 100Gb today, the transition to 400Gb is closer than you realize.

Many factors are responsible for the drive to 400G/800G and beyond. Data centers have moved from multiple disparate networks to more virtualized, software-driven environments. At the same time, they are deploying more machine-to-machine connections and reducing the number of switches between them. Most of the pieces for supporting tomorrow’s cloud-scale data centers exist. What’s missing is a comprehensive strategy for creating a physical layer infrastructure that can tie it all together.

So, how do we get there? Know thy options.  


Transceivers

Jumping to 400Gb requires doubling the lane density of today's SFP, SFP+ or QSFP+ transceivers. This is where QSFP Double Density (QSFP-DD) and octal small form-factor pluggable (OSFP) technologies come into play. Both use eight lanes instead of four, and both fit 32 ports in a 1RU panel. The most significant differences between them are backward compatibility and power-handling capability. The MSAs list a range of optical connection options: LC, mini-LC, MPO-8, MPO-12, MPO-16, SN, MDC and CS connectors can be chosen depending on the application being supported; more on connectors in a moment.
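
First, a quick bit of faceplate math. The sketch below simply multiplies the figures above (32 ports per 1RU, eight lanes per octal module); the 50Gb-per-lane figure is our own assumption for illustration, typical of first-generation 400G optics, not something stated in the article.

```python
# Rough faceplate math for octal (8-lane) form factors such as QSFP-DD and OSFP.
# ASSUMPTION: 50Gb/s per electrical lane (illustrative only, not from the article).
PORTS_PER_RU = 32      # 32 ports fit in a 1RU panel
LANES_PER_PORT = 8     # octal modules: eight lanes vs. four in QSFP+
GBPS_PER_LANE = 50

port_speed = LANES_PER_PORT * GBPS_PER_LANE            # Gb per port
panel_capacity_tb = PORTS_PER_RU * port_speed / 1000   # Tb per 1RU panel

print(f"{port_speed}Gb per port, {panel_capacity_tb:.1f}Tb per 1RU panel")
# -> 400Gb per port, 12.8Tb per 1RU panel
```

In other words, doubling the lanes per module is what lets a standard 32-port, 1RU faceplate keep pace with the switch silicon behind it.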

Connectors

A variety of connector options provide more ways to distribute the higher octal-based capacity. The 12-fiber MPO was traditionally used to support six two-fiber lanes, but many applications, such as 40GBase-SR4, use only four lanes (eight fibers) and leave four fibers stranded. Because an octal module breaks out into eight two-fiber lanes, the MPO16 is a better match for these modules. Other connectors worth looking into are the SN and MDC, which use 1.25-mm ferrule technology and provide flexible breakout options for high-speed optical modules. The ultimate question is which transceivers will be available with which connectors in the future.
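
To make the fiber accounting concrete, here is a minimal sketch based on the figures above (two fibers per lane, 12 versus 16 connector fibers); the helper function is purely illustrative.

```python
# Illustrative fiber accounting for parallel-optic connectors.
# Each lane uses a transmit/receive fiber pair; figures follow the paragraph above.
def fiber_usage(connector_fibers: int, lanes_used: int) -> tuple[int, int]:
    """Return (fibers used, fibers stranded) for a given connector and lane count."""
    used = lanes_used * 2
    return used, connector_fibers - used

# 40GBase-SR4-style application: four lanes on a 12-fiber MPO
print(fiber_usage(12, 4))   # -> (8, 4): four fibers go unused
# Octal module broken out over an MPO16: all eight lanes land on fibers
print(fiber_usage(16, 8))   # -> (16, 0): no stranded fibers
```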

Chips

Of course, there is no limit to the demand for bandwidth. Today, the limiting factor isn't how much you can plug into the front of the switch but how much capacity the chipset inside can deliver. Higher radix combines with higher SerDes speeds to yield higher capacity. The typical approach circa 2015 for supporting 100Gb applications used 128 lanes at 25Gb each, for 3.2Tb of switch capacity. Getting to 400Gb required increasing the ASIC to 256 lanes at 50Gb, yielding 12.8Tb. The next progression, 256 lanes at 100Gb, takes us to 25.6Tb. Plans to push lane speeds to 200Gb are under consideration, but that is a difficult task that will take a few years to perfect.
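
The arithmetic behind those generations is simple: switch capacity is lane count times SerDes speed. The short sketch below reproduces the figures from the paragraph above; the generation labels are ours.

```python
# Switch ASIC capacity = number of SerDes lanes x lane speed.
# Lane counts and speeds come from the paragraph above; labels are illustrative.
generations = [
    ("circa 2015 (100G era)", 128, 25),   # 128 lanes x 25Gb
    ("400G era",              256, 50),   # 256 lanes x 50Gb
    ("800G era",              256, 100),  # 256 lanes x 100Gb
]

for label, lanes, gbps_per_lane in generations:
    capacity_tb = lanes * gbps_per_lane / 1000  # total switching capacity in Tb
    print(f"{label}: {lanes} x {gbps_per_lane}Gb = {capacity_tb:.1f}Tb")

# Output:
#   circa 2015 (100G era): 128 x 25Gb = 3.2Tb
#   400G era: 256 x 50Gb = 12.8Tb
#   800G era: 256 x 100Gb = 25.6Tb
```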

Take a deeper dive into 400G/800G (Part 1)

Obviously, we have barely skimmed the surface of what data center managers need to consider as they contemplate their strategies and schedules for 400G/800G migration. There’s much more to say about where the technology is and where it’s headed. For a deeper dive into where things stand now, we encourage you to check out the Fact File on this topic. And make sure to check back often with CommScope to keep tabs on how things are evolving.

Migration to 400G/800G: the Fact File - Part I

Hyperscale and multi-tenant data centers need to plan now for 400G/800G migration. Get the insight you need to prepare: optics, fiber, cabling design and more.

