AT&T recently activated a 400-gigabit fiber connection between Dallas and Atlanta and claimed it is the first such connection in the country. This is a milestone because it represents a major upgrade in fiber speeds in our networks. While scientists in the labs have created multi-terabit lasers, our fiber network backbones for the last decade have mostly relied on 100-gigabit or slower laser technology.
Broadband demand has grown by a huge amount over the last decade. We’ve seen double-digit annual growth in residential broadband, business broadband, cellular data, and machine-to-machine data traffic. Our backbone and transport networks are busy and often full. AT&T says it’s going to need the faster fiber transport to accommodate 5G, gaming, and ever-growing video traffic volumes.
I’ve heard concerns from network engineers that some of our long-haul fiber routes, such as the ones along the east coast, are overloaded and in danger of being swamped. Having the ability to upgrade long-haul fiber routes from 100 Gb to 400 Gb is a nice improvement, but not as big a one as you might imagine. If a 100 Gb fiber route is nearly full and is upgraded to 400 Gb, and network traffic volumes are doubling every three years, the life of that route is only stretched by another six years. Still, upgrading is a start and a useful stopgap measure.
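The six-year figure follows from simple doubling math: quadrupling capacity buys two doublings of traffic. A minimal sketch of that arithmetic, assuming the three-year doubling period mentioned above:

```python
import math

def years_of_headroom(capacity_multiplier: float, doubling_period_years: float) -> float:
    """Years until traffic grows by capacity_multiplier, given how often traffic doubles."""
    return math.log2(capacity_multiplier) * doubling_period_years

# Upgrading a nearly full 100 Gb route to 400 Gb quadruples capacity,
# which is two doublings of traffic at three years per doubling:
print(years_of_headroom(400 / 100, 3))  # 6.0
```

The same function shows why the upgrade is a stopgap: even a 10x capacity jump would only buy about ten years at that growth rate.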
AT&T is also touting that they used white box hardware for this new deployment. White box hardware uses inexpensive generic switches and routers controlled by open-source software. AT&T is likely replacing a 100 Gb traditional electronics route with a much cheaper white box solution. Folks who don’t work with long-haul networks probably don’t realize the big cost of electronics needed to light a long fiber route like this one between Dallas and Atlanta. Long-haul fiber requires numerous heated and cooled huts placed along the route that house repeaters needed to amplify the signal. A white box solution doesn’t just mean less expensive lasers at the end points, but at all of the intermediate points along the fiber route.
AT&T views 400 Gb transport as the next generation of technology needed in our networks, and the company submitted specifications to the Open Compute Project for an array of different 400 Gb chassis and backbone fabrics. The AT&T specifications rely on Broadcom’s Jericho2 family of chips.
100 Gb electronics are not used only in long-haul data routes today. I have a lot of clients that operate fiber-to-the-home networks that use a 100 Gb backbone to provide the bandwidth to reach multiple neighborhoods. In local networks that are fiber-rich there is always a trade-off between the cost of upgrading to faster electronics and the cost of lighting additional fiber pairs. As an existing 100 Gb fiber starts getting full, network engineers will consider the cost of lighting a second 100 Gb route versus upgrading to the 400 Gb technology. The fact that AT&T is pushing this as a white box solution likely means that it will be cheaper to upgrade to a new 400 Gb network than it is to buy a second traditional 100 Gb set of electronics.
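One way an engineer might frame that decision is cost per gigabit of added capacity: a second 100 Gb pair adds 100 Gb, while replacing a 100 Gb route with 400 Gb electronics adds 300 Gb. A sketch of that comparison, where the dollar figures are made-up placeholders, not real vendor pricing:

```python
def cost_per_added_gbps(cost_dollars: float, added_gbps: float) -> float:
    """Dollars per gigabit of new capacity an option buys."""
    return cost_dollars / added_gbps

# Hypothetical numbers purely for illustration:
second_100g = cost_per_added_gbps(500_000, 100)   # light a second 100 Gb pair: +100 Gb
whitebox_400g = cost_per_added_gbps(900_000, 300) # replace 100 Gb with 400 Gb: +300 Gb

print(second_100g, whitebox_400g)  # 5000.0 3000.0
```

With these placeholder figures the 400 Gb upgrade wins even at a higher sticker price, because it adds three times the capacity; the real decision of course also depends on fiber availability and the cost of intermediate repeater sites.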
There are other 400 Gb solutions hitting the market from Cisco, Juniper, and Arista Networks, but all will be more expensive than a white box solution. Network engineers always talk about chokepoints in a network, meaning places where the traffic volume exceeds the network capability. One of the most worrisome chokepoints for ISPs is the long-haul fiber networks that connect communities, because those routes are out of the control of the last-mile ISP. It’s reassuring to know there are technology upgrades that will let the industry keep up with demand.