In a recent article in LightReading, Mike Dano quotes Dan Rampton of Meta as saying that the immersive metaverse experience is going to require a customer latency between 10 and 20 milliseconds.
The quote came from a presentation at the Wireless Infrastructure Association Connect (WIAC) trade show. Dano says the presentation there was aimed at big players like American Tower and DigitalBridge, which are investing heavily in major data centers. Meta believes we need a lot more data centers closer to users to speed up the Internet and reduce latency.
Let me put the 10 – 20 millisecond latency into context. Latency in this case is the total signal delay between a user and the data center that is controlling the metaverse experience. Meta is talking about the network that will be needed to support full telepresence, where the people connecting virtually can feel like they are together in real time. That virtual connection might be somebody having a virtual chat with their grandmother or a dozen people gaming.
The latency experienced by anybody connected to the Internet is the accumulation of a number of small delays; a rough example of how they add up follows the list below.
- Transmission delay is the time required to get a customer’s packets onto the network and ready to route to the Internet. This is the latency that starts at the customer’s house and traverses the local ISP network. This delay is caused to some degree by the quality of the routers at the home – but the biggest factor in transmission delay is the technology being used. I polled several clients who tell me the latency inside their fiber networks typically ranges between 4 and 8 milliseconds. Some wireless technologies also have low latency as long as there aren’t multiple hops between a customer and the core. Cable HFC systems are slower and can approach the 20 ms limit, and older technologies like DSL have much larger latencies. Satellite latencies, even on the low-orbit networks, will not be fast enough to meet the 20 ms goal established by Meta because the signal has to travel from the ground to a satellite and back to the Internet interface.
- Processing delay is the time required by the originating ISP to decide where a packet should be sent. The ISP has to sort through all of the packets received from users and route each one appropriately.
- Propagation delay is due to the distance a signal travels outside of the local network. A signal in fiber moves at roughly two-thirds the speed of light, about 1 millisecond for every 125 miles, so it takes far longer for a signal to travel from Tokyo to Baltimore than from Baltimore to Washington, DC.
- Queuing delay is the time required at the terminating end of the transmission. Since a metaverse connection is almost certainly going to be hosted at a data center, this is the time it takes to receive the signal and route it to the right place in the data center.
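To make the arithmetic concrete, here is a minimal back-of-the-envelope sketch (in Python) that adds up the four components and compares the total against Meta’s 20-millisecond target. The specific numbers, a 6 ms access delay, 1 ms of processing, 2 ms of queuing, and the two route distances, are illustrative assumptions rather than measurements from any real network.

```python
# Back-of-the-envelope latency budget for a metaverse connection.
# All component values below are illustrative assumptions, not measurements.

FIBER_MS_PER_KM = 0.005  # signals in fiber cover roughly 200 km per millisecond

def propagation_delay_ms(route_km: float) -> float:
    """One-way propagation delay over a fiber route of the given length."""
    return route_km * FIBER_MS_PER_KM

def total_latency_ms(transmission_ms: float, processing_ms: float,
                     route_km: float, queuing_ms: float) -> float:
    """Sum the four delay components described in the list above."""
    return transmission_ms + processing_ms + propagation_delay_ms(route_km) + queuing_ms

BUDGET_MS = 20

# A fiber customer with a 6 ms access delay reaching a data center 500 km away.
nearby = total_latency_ms(transmission_ms=6, processing_ms=1, route_km=500, queuing_ms=2)
# The same customer reaching a data center 2,500 km away.
distant = total_latency_ms(transmission_ms=6, processing_ms=1, route_km=2500, queuing_ms=2)

for label, latency in (("500 km data center", nearby), ("2,500 km data center", distant)):
    verdict = "within" if latency <= BUDGET_MS else "over"
    print(f"{label}: {latency:.1f} ms ({verdict} the {BUDGET_MS} ms budget)")
```

The comparison makes Meta’s point: with the same access network, moving the data center from 2,500 km away to 500 km away is the difference between blowing the 20 ms budget and meeting it.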
It’s easy to talk about the metaverse as if it’s some far-future technology, but companies are currently investing tens of billions of dollars to develop it. The metaverse will be the next technology that forces ISPs to improve their networks. Netflix and streaming video had a huge impact on cable and telephone company ISPs, which were not prepared to have multiple customers streaming video at the same time. Working and schooling from home exposed the weakness of the upload links in cable company, fixed wireless, and DSL networks. The metaverse will push ISPs again.
Meta’s warning is that ISPs will need an efficient network if they want their customers to participate in the metaverse. Packets need to get out the door quickly. Networks that are overloaded at some times of the day will cause enough delay to make a metaverse connection unworkable. Too much jitter means packets arrive too late to be used and must be resent, which adds significantly to the delay. Networks with low latency like fiber will be preferred. Large data centers that are closer to users can shave time off the latency. Customers are going to figure this out quickly and migrate to ISPs that can support a metaverse connection (or complain loudly about ISPs that can’t). It will be interesting to see if ISPs heed the warnings coming from companies like Meta or if they will wait until the world comes crashing down on their heads (which has been the historical approach to traffic management).
Goodness – further increasing the demand for more data centers. Can the already strained electric grid keep up with this demand? Can we really expect intermittent renewable energy sources combined with limited battery storage capacity to power a fast, reliable, no-carbon Internet?