The Industrial Metaverse

When Facebook changed its name to Meta in 2021, it looked like the company would take the lead in bringing virtual reality into everyday life. The company promised an interconnected and immersive digital world where users could socialize, work, learn, and create. Meta has invested over $60 billion since then to create metaverse technologies.

In the market, Meta released a series of Quest virtual reality headsets that were popular with some gamers but never gained wide acceptance. Its most successful product has been Ray-Ban smart glasses, which are largely a phone you wear as glasses and which don’t include VR technology.

This month, Meta announced its new priorities for the future, which don’t include more exploration of the metaverse. The company laid off 100 workers from Reality Labs, and there are industry predictions that Meta will back out of metaverse research by the end of the year.

At least for now, this kills the idea of having virtual reality meetings at work and of taking virtual vacations. There were predictions in 2021 that this would be the new killer app for broadband usage and that home VR would finally be able to fully utilize home gigabit connections.

However, virtual reality has not been a total bust. A recent article in Wired describes how virtual reality has been embraced by manufacturers. There are some major advantages to being able to build a virtual version of a factory to try new ideas.

Cited in the article is a report from the World Economic Forum that predicts industrial virtual reality will be a $100 billion business by 2030. This new industry might best be described as spatial computing, which combines virtual reality and augmented reality for industrial applications.

The purpose of industrial VR is to create simulations of large industrial settings. The article says that Amazon uses a virtual simulation of its warehouses to train the robots that move and retrieve packages. Lowe’s uses the technology to consider its options before changing store layouts. There is a long discussion of how BMW uses the technology to improve efficiency in its auto factories.

Virtual reality for gaming is far from dead, and a dozen companies are making headsets for gaming. But the idea that we’ll all create avatars of ourselves that will navigate in a virtual world is going to go on the shelf for a while – maybe forever.

Major Outages in 2024

2024 was like most recent years, with a few major broadband outages and a lot of smaller regional ones. Most carriers claim to be investing more money in increased redundancy, and one hopes that is cutting down on major outages.

AT&T suffered a big outage in February when it lost cellular coverage in markets like Dallas, Houston, Los Angeles, and Atlanta. The outage particularly affected first responders served by AT&T’s FirstNet network. The company said the outage was “caused by the application and execution of an incorrect process used as we were expanding our network, not a cyberattack.” Basically, the company messed something up during a network update.

The biggest telco outages for the year came from hurricanes. In western North Carolina alone, 80% of cell sites went out of service by the day after the storm hit. Fiber networks were severed as entire roads washed away, and something like a million trees were damaged. I live in Asheville, and we experienced a total communications blackout with no cellular or landline broadband. It took about a week to get a partial cell signal back and over three weeks to get broadband. Some rural areas were out much longer.

Hurricane Milton caused broadband outages as well, related more to power outages than to destroyed telecom networks. A lot of places didn’t lose cell coverage, and most people were back in service within a few days.

The other big outages in 2024 were not network outages but service provider outages.

  • Microsoft Teams had a seven-hour outage on January 26. The cause of the outage was never disclosed but seems to have been internal to Microsoft.
  • On March 5, Meta had an outage that blocked users from accessing Facebook, Instagram, Messenger, and Threads. The reason for the outage was a glitch in the login process.
  • Google lost service for an hour on May 1. The problem was a failure in the verification process that couldn’t identify users.
  • The biggest outage of the year happened on July 19 and affected 8.5 million Microsoft Windows devices worldwide. Flights were canceled, customers couldn’t access banks, surgeries were canceled, and there were widespread 911 outages. The cause of the problem was a faulty section of code from CrowdStrike, the cybersecurity firm that many large Windows customers use to protect their devices. In retrospect, the outage was blamed on CrowdStrike’s failure to test the software update before pushing it out.
  • Microsoft had an outage on November 25 that left users intermittently unable to use Outlook or reach the web. Microsoft admitted the source of the problem was a configuration change – another software update problem.
  • On December 11, OpenAI had an outage of its video service Sora. This was caused by a cascading error when a telemetry service overwhelmed the platform.

Interestingly, most of the service outages were the result of configuration changes made during software upgrades.

These big companies should learn a lesson from smaller telcos. I’ve had many clients who learned the hard way to never push new software to customers without first testing the update. That means NEVER EVER, NEVER EVER, NEVER EVER (did I say that enough?). Many telcos have software test labs with a setup that mimics the live network. They try updates in the test lab before ever subjecting their customers to an untested update. This is software update 101 stuff, but apparently, the smart guys at some of the biggest companies don’t think they need to take this extra precaution.
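For what it’s worth, the discipline is simple enough to sketch in a few lines of code. Below is a minimal illustration of the test-first, staged-rollout idea – the function names and rollout stages are hypothetical stand-ins for a telco’s real tooling, not anybody’s actual product.

    # A minimal sketch of the update discipline described above. The deploy
    # and health-check functions are hypothetical stubs standing in for a
    # telco's real tooling; the point is the order of operations.
    def deploy(package: str, environment: str) -> None:
        print(f"Deploying {package} to {environment}...")  # stand-in for real tooling

    def health_checks_pass(environment: str) -> bool:
        print(f"Running health checks in {environment}...")
        return True  # a real check would exercise a lab that mimics production

    def roll_back(environment: str) -> None:
        print(f"Rolling back {environment}.")

    def safe_update(package: str) -> None:
        """Test in the lab first, then roll out to customers in stages."""
        deploy(package, "lab")
        if not health_checks_pass("lab"):
            roll_back("lab")
            raise RuntimeError("Update failed lab testing - never ship it.")
        # Only after the lab signs off does the update touch customers,
        # and even then in stages so a bad update affects few of them.
        for stage in ("canary", "region-1", "region-2", "everywhere"):
            deploy(package, stage)
            if not health_checks_pass(stage):
                roll_back(stage)
                raise RuntimeError(f"Update failed at {stage}; halting rollout.")

    safe_update("router-firmware-v2.1")  # hypothetical update package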

Getting Ready for the Metaverse

In a recent article in LightReading, Mike Dano quotes Dan Rampton of Meta as saying that the immersive metaverse experience is going to require a customer latency between 10 and 20 milliseconds.

The quote came from a presentation at the Wireless Infrastructure Association Connect (WIAC) trade show. Dano says the presentation there was aimed at big players like American Tower and DigitalBridge, which are investing heavily in major data centers. Meta believes we need a lot more data centers closer to users to speed up the Internet and reduce latency.

Let me put the 10 – 20 millisecond latency into context. Latency in this case would be the total delay of signal between a user and the data center that is controlling the metaverse experience. Meta is talking about the network that will be needed to support full telepresence where the people connecting virtually can feel like they are together in real time. That virtual connection might be somebody having a virtual chat with their grandmother or a dozen people gaming.

The latency experienced by anybody connected to the Internet is the accumulation of a number of small delays.

  • Transmission delay is the time required to get packets from a customer ready to route to the Internet. This is the latency that starts at the customer’s house and traverses the local ISP network. This delay is caused to some degree by the quality of the routers at the home – but the biggest factor in transmission delay is the technology being used. I polled several clients who tell me the latency inside their fiber networks typically ranges between 4 and 8 milliseconds. Some wireless technologies also have low latency as long as there aren’t multiple hops between a customer and the core. Cable HFC systems are slower and can approach the 20 ms limit, and older technologies like DSL have much larger latencies. Satellite latencies, even on the low-orbit networks, will not be fast enough to meet the 20 ms goal established by Meta due to the signal having to travel from the ground to a satellite and back to the Internet interface.
  • Processing delay is the time required by the originating ISPs to decide where a packet is to be sent. ISPs have to sort between all of the packets received from users and route each appropriately.
  • Propagation delay is due to the distance a signal travels outside of the local network. It takes a lot longer for a signal to travel from Tokyo to Baltimore than from Baltimore to Washington, DC.
  • Queuing delays are the time required at the terminating end of the transmission. Since a metaverse connection is almost certainly going to be hosted at a data center, this is the time it takes to receive and appropriately route the signal to the right place in the data center.
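To put rough numbers on how these delays accumulate, here’s a back-of-the-envelope sketch in Python. All of the component values are illustrative assumptions drawn from the ranges above, and the figure of roughly 200,000 kilometers per second for light in fiber (about two-thirds of the speed of light in a vacuum) is a standard rule of thumb, not a measurement.

    # Back-of-the-envelope latency budget. All component values are
    # illustrative assumptions, not measurements from any real network.
    FIBER_KM_PER_MS = 200.0  # light in fiber covers ~200 km per millisecond

    def propagation_delay_ms(route_km: float) -> float:
        """One-way propagation delay over a fiber route of a given length."""
        return route_km / FIBER_KM_PER_MS

    transmission_ms = 6.0  # last-mile fiber, per the 4-8 ms range cited above
    processing_ms = 1.0    # assumed time for the ISP to sort and route packets
    queuing_ms = 1.0       # assumed routing time inside the data center
    propagation_ms = propagation_delay_ms(600)  # a data center ~600 route-km away

    total_ms = transmission_ms + processing_ms + propagation_ms + queuing_ms
    print(f"Total one-way latency: {total_ms:.1f} ms")  # 11.0 ms - inside the goal

    # Distance is unforgiving: Tokyo to Baltimore is roughly 11,000 route-km,
    # which is about 55 ms of propagation delay alone - far outside the budget.
    print(f"Tokyo-Baltimore propagation: {propagation_delay_ms(11000):.0f} ms")

The takeaway is that a well-run fiber network can hit the target, but only when the data center is relatively close – which is exactly why Meta is pitching the companies that build data centers.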

It’s easy to talk about the metaverse as if it’s some far future technology. But companies are currently investing tens of billions of dollars to develop the technology. The metaverse will be the next technology that will force ISPs to improve networks. Netflix and streaming video had a huge impact on cable and telephone company ISPs, which were not prepared to have multiple customers streaming video at the same time. Working and schooling from home exposed the weakness of the upload links in cable company, fixed wireless, and DSL networks. The metaverse will push ISPs again.

Meta’s warning is that ISPs will need to have an efficient network if they want their customers to participate in the metaverse. Packets need to get out the door quickly. Networks that are overloaded at some times of the day will cause enough delay to make a metaverse connection unworkable. Too much jitter will mean resending missed packets, which adds significantly to the delay. Networks with low latency like fiber will be preferred. Large data centers that are closer to users can shave time off the latency. Customers are going to figure this out quickly and migrate to ISPs that can support a metaverse connection (or complain loudly about ISPs that can’t). It will be interesting to see if ISPs will heed the warnings coming from companies like Meta or if they will wait until the world comes crashing down on their heads (which has been the historical approach to traffic management).

Network Requirements for the Metaverse

I’ve often joked that I don’t play computer games because I’m holding out for a holodeck. While that may sound ridiculously far-future, we’re on the verge of seeing web-based virtual reality that will be a major step toward a holodeck. There is already some awesome virtual reality software and games where a person can get immersed in another world using a headset. But it will be a big leap to move virtual reality online where people from anywhere can join in a game together, as is done in the movie Ready Player One. If you haven’t seen it, it’s a 2018 movie about a believable future worldwide gaming phenomenon.

Meta (formerly Facebook) is investing heavily in creating a platform that can host game designers and others to launch virtual reality apps. When Meta first announced that it was going to tackle the metaverse, people assumed the company was off designing games, but the company is instead tackling the technology that will enable the use of online virtual reality.

Meta says there are some key requirements that will be needed to support the metaverse.

  • Fast symmetrical broadband speeds. And they aren’t just talking about one gigabit bandwidth – faster speeds will be needed to transmit the huge amounts of data needed to create real-time virtual reality worlds.
  • Low latency, under 10 milliseconds. Well-designed last-mile fiber networks have latencies in this range today. But Meta isn’t talking only about the last-mile network, but also about the middle-mile network used to connect users to the cloud. The company says that middle-mile carriers will need to step up their game. Some networks are already this fast, but many are not.
  • We’re going to need higher resolution video – 4K doesn’t provide enough pixels to create immersive worlds. And that means big data files.
  • With big data files, we’re going to need the next generation of video compression that can compress huge data files in real time and decompress them without adding delay to the signal (the sketch after this list shows why).
  • Making everything work together in real time will require cooperation between the entities in the network. Some traffic optimization is done today by network operators, while content providers do their own optimization – it’s going to take a coordinated, real-time, integrated process of network optimization that includes all parties to the metaverse.
  • Metaverse software must be able to adapt to the user. While designed for high-bandwidth, low-latency fiber customers, the metaverse system must be able to adapt to local network conditions. We do this in a minor way today when Netflix dumbs down the video signal to match a user’s bandwidth.
  • What Meta didn’t say is that we’ll need ISPs willing to deliver the fast two-way traffic needed to make the metaverse work in homes.
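To see why the list calls for more than a gigabit and for a new generation of compression, consider the raw size of a VR video stream. The resolution, frame rate, color depth, and compression ratio below are my illustrative assumptions, not Meta’s published numbers.

    # Rough data-rate math for a stereoscopic VR stream. All parameters are
    # illustrative assumptions, not published metaverse specifications.
    def raw_rate_gbps(width: int, height: int, bits_per_pixel: int, fps: int) -> float:
        """Uncompressed video data rate in gigabits per second."""
        return width * height * bits_per_pixel * fps / 1e9

    per_eye = raw_rate_gbps(3840, 2160, 24, 90)  # one 4K eye at 90 fps: ~17.9 Gbps
    both_eyes = 2 * per_eye                      # ~35.8 Gbps uncompressed

    # Even an aggressive 100:1 real-time codec leaves a very large stream.
    compressed_mbps = both_eyes * 1000 / 100
    print(f"Raw stream: {both_eyes:.1f} Gbps")                     # 35.8 Gbps
    print(f"After 100:1 compression: {compressed_mbps:.0f} Mbps")  # 358 Mbps

And that’s with 4K per eye, which the list says isn’t good enough – double the resolution in each dimension and all of these numbers quadruple.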

This may all sound out of reach, but Meta already has early prototypes of the concepts working in the lab. We’re seeing last-mile fiber builders now using XGS-PON that can deliver 10-gigabit symmetrical broadband. We’re seeing new middle-mile routes with 300-gigabit pipes reaching out to smaller and smaller cities.

The metaverse and web-based virtual reality will only become possible when there are enough people in the world connected to a fast fiber connection. We’re certainly on that path in the U.S., with plans for various ISPs to build fiber to pass nearly 50 million more homes in just the next few years. Meta envisions a platform where it supplies the muscle and tens of thousands of developers independently create metaverse worlds. That’s not quite a holodeck – but I might just give it a try.