There is No Artificial Intelligence

It seems like most new technology today comes with a lot of hype. Just a few years ago, the press was full of predictions that we’d be awash with Internet of Things sensors that would transform the way we live. We’ve heard similar claims for technologies like virtual reality, blockchain, and self-driving cars. I’ve written a lot about the massive hype surrounding 5G – in my way of measuring things, there isn’t any 5G in the world yet, but the cellular carriers are loudly proclaiming it’s everywhere.

The other technology with hype that nearly equals 5G is artificial intelligence. I see articles every day talking about the ways that artificial intelligence is already changing our world, with predictions about the big changes on the horizon due to AI. A majority of large corporations claim to now be using AI. Unfortunately, this is all hype, and there is no artificial intelligence today, just like there is not yet any 5G.

It’s easy to understand what real 5G will be like – it will include the many innovations embedded in the 5G specifications, like network slicing and dynamic spectrum sharing. We’ll finally have 5G when a half dozen new 5G technologies are working on my phone. Defining artificial intelligence is harder because there is no specification for AI. Artificial intelligence will be here when a computer can solve problems in much the way that humans do. Our brains evaluate the data on hand to see if we know enough to solve a problem, and if not, we seek the additional data we need. Our brains can consider data from disparate and unrelated sources to solve problems. There is no computer today that is within a light-year of that ability – there are not yet any computers that can ask for the specific additional data needed to solve a problem. An AI computer doesn’t need to be self-aware – it just has to be able to ask the questions and seek the right data needed to solve a given problem.

We use computer tools today that get labeled as artificial intelligence, such as complex algorithms, machine learning, and deep learning. We’ve paired these techniques with faster and larger computers (such as in data centers) to quickly process vast amounts of data.

One of the techniques we think of as artificial intelligence is nothing more than using brute force to process large amounts of data. This is how IBM’s Deep Blue worked. The approach can produce impressive results and shocked the world in 1997 when the computer beat Garry Kasparov, the world chess champion. Since then, the IBM Watson system has beaten the best Jeopardy players and is being used to diagnose illnesses. These computers achieve their results by processing vast amounts of data quickly. A chess computer can consider huge numbers of possible moves and put a value on the ones with the best outcomes. The Jeopardy computer had massive databases of human knowledge available, like Wikipedia and Google search – it looked up the answer to a question faster than a human mind could pull it out of memory.
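To make the brute-force idea concrete, here is a tiny sketch of minimax game-tree search, the general technique behind chess engines. This is purely illustrative – it is not Deep Blue’s actual code, and the toy game and scoring below are my own assumptions.

```python
# A minimal sketch of brute-force game-tree search (minimax), the general idea
# behind chess engines. This is not Deep Blue's actual algorithm -- the toy
# game, the scoring, and every name here are illustrative assumptions.

def minimax(state, depth, maximizing, moves, score):
    """Exhaustively score every line of play down to `depth` moves."""
    options = moves(state)
    if depth == 0 or not options:
        return score(state)
    if maximizing:
        return max(minimax(s, depth - 1, False, moves, score) for s in options)
    return min(minimax(s, depth - 1, True, moves, score) for s in options)

# Toy "game": the state is a running total; each move adds 1, 2, or 3.
# The maximizing player wants a high total, the minimizer a low one.
toy_moves = lambda total: [total + n for n in (1, 2, 3)] if total < 10 else []
toy_score = lambda total: total

if __name__ == "__main__":
    best = max(
        (minimax(move, 3, False, toy_moves, toy_score), move)
        for move in toy_moves(0)
    )
    print("best opening move adds", best[1], "for a guaranteed score of", best[0])
```

A real chess engine works the same way in principle, just with a vastly larger move tree and a far more sophisticated scoring function.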

Much of what is thought of as AI today uses machine learning. Perhaps the easiest way to describe machine learning is with an example. Machine learning uses complex algorithms to analyze and rank data. Netflix uses machine learning to suggest shows that it thinks a given customer will like. Netflix knows what a viewer has already watched. Netflix also knows what millions of others who watch the same shows seem to like, and it looks at what those millions of others watched to make a recommendation. The algorithm is far from perfect because the data set of what any individual viewer has watched is small. I know in my case, I look at the shows recommended for my wife and see all sorts of shows that interest me, but which I am not offered. This highlights one of the problems of machine learning – it can easily be biased and draw wrong conclusions instead of right ones. Netflix’s suggestion algorithm can become a self-fulfilling prophecy unless a viewer makes the effort to look outside of the recommended shows – the more a viewer watches what is suggested, the more they are pigeonholed into a specific type of content.
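For readers who want to see the mechanics, here is a minimal sketch of the “viewers like you also watched” approach. It is not Netflix’s actual algorithm – the viewing histories and the scoring are invented purely to illustrate the idea.

```python
# A minimal sketch of collaborative-filtering style recommendations: score shows
# by how often they were watched by people with similar viewing histories.
# The histories below are made up for illustration.
from collections import Counter

histories = {
    "me":    {"Show A", "Show B"},
    "user2": {"Show A", "Show B", "Show C"},
    "user3": {"Show B", "Show D"},
    "user4": {"Show E"},
}

def recommend(user, histories, top_n=2):
    mine = histories[user]
    scores = Counter()
    for other, watched in histories.items():
        if other == user:
            continue
        overlap = len(mine & watched)      # how similar their taste is to mine
        for show in watched - mine:        # only suggest shows I haven't seen
            scores[show] += overlap
    return [show for show, _ in scores.most_common(top_n)]

print(recommend("me", histories))   # ['Show C', 'Show D']
```

Notice the bias problem baked right into the sketch: the only shows that can ever be recommended are ones already popular with people who watch what you watch.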

Deep learning is a form of machine learning that can produce better results by passing data through multiple layers of algorithms. For example, there are numerous forms of English spoken around the world. A customer service bot can begin each conversation in standard English and then use layered algorithms to analyze the speaker’s dialect and adjust its responses to more closely match that speaker.
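Here is a minimal sketch of that layering idea – one stage guessing the dialect and a second stage picking a matching response. The crude keyword matching and canned responses are made-up stand-ins for the trained models a real system would use at each layer.

```python
# A minimal sketch of a layered pipeline: the output of one stage (dialect
# detection) feeds the next stage (response selection). The keyword rules and
# responses are illustrative assumptions, not a real system.

def detect_dialect(utterance: str) -> str:
    """First layer: a deliberately naive dialect guess."""
    text = utterance.lower()
    if "y'all" in text:
        return "us_southern"
    if "whilst" in text or "fortnight" in text:
        return "uk"
    return "generic"

RESPONSES = {
    "us_southern": "Sure thing, y'all -- let me pull up that account.",
    "uk":          "Right then, let me pull up that account for you.",
    "generic":     "Okay, let me pull up that account.",
}

def respond(utterance: str) -> str:
    """Second layer: choose a response style based on the first layer's output."""
    return RESPONSES[detect_dialect(utterance)]

print(respond("Can y'all check my bill?"))
print(respond("I was charged twice in the last fortnight."))
```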

I’m not implying that today’s techniques are not worthwhile. They are being used to create numerous automated applications that could not be done otherwise. However, almost every algorithm-based technique in use today will become instantly obsolete when a real AI is created.

I’ve read several experts who predict that we are only a few years away from an AI desert – meaning that we will have milked about all that can be had out of machine learning and deep learning. Developments with those techniques are not leading towards a breakthrough to real AI – machine learning is not part of the evolutionary path to AI. At least for today, both AI and 5G are largely non-existent, and the things passed off as these two technologies are pale versions of the real thing.

5G and Rural America

FCC Chairman Ajit Pai recently told the crowd at CES that 5G would be a huge benefit to rural America and would help to close the rural broadband divide. I have to imagine he’s saying this to keep rural legislators on board to support the FCC’s emphasis on promoting 5G. I’ve thought hard about the topic, and I have a hard time seeing how 5G will make much difference in rural America – particularly with broadband.

There is more than one use of 5G, and I’ve thought through each one of them. Let me start with 5G cellular service. The major benefit of 5G cellular is that a cell site will be able to handle up to 100,000 simultaneous connections. 5G also promises slightly faster cellular data speeds. The specification calls for speeds up to 100 Mbps with the normal cellular frequencies – which also happens to have been the specification for 4G, although it was never realized.

I can’t picture a scenario where a rural cell site might need 100,000 simultaneous connections within a circle of a few miles. There aren’t many urban places that need that many connections today other than stadiums and other crowded locations where a lot of people want connectivity at the same time. I’ve heard farm sensors mentioned as a reason for needing 5G, but I don’t buy it. The normal crop sensor might dribble out tiny amounts of data a few times per day. These sensors cost close to $1,000 today, but even if they somehow get reduced to a cost of pennies, it’s hard to imagine a situation where any given rural cell site is going to need more capacity than is available with 4G.

It’s great if rural cell sites get upgraded, but there can’t be many rural cell sites that are overloaded enough to demand 5G. There are also the economics to consider. It’s hard to imagine the cellular carriers being willing to invest in a rural cell site that might support only a few farmers – and it’s hard to think the farmers are willing to pay enough to justify their own cell site.

There has also been talk of lower frequencies benefitting rural America, and there is some validity to that. For example, T-Mobile’s 600 MHz frequency travels farther and penetrates obstacles better than higher frequencies. Using this frequency might extend good cellular data coverage as much as an extra mile and might support voice for several additional miles from a cell site. However, low frequencies don’t require 5G to operate. There is nothing stopping these carriers from introducing low frequencies with 4G (and in fact, that’s what they have done in the first-generation cellphones capable of using the lower frequencies). The cellular carriers are loudly claiming that their introduction of new frequencies is the same thing as 5G – it’s not.
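A rough free-space path loss calculation shows why the lower frequency travels farther. The formula ignores terrain, antennas, and obstructions, and the 1900 MHz and 2500 MHz comparison bands are just typical mid-band cellular frequencies I picked for contrast, so treat the numbers as illustrative of the trend only.

```python
# Back-of-the-envelope illustration of why lower frequencies carry farther,
# using the standard free-space path loss formula:
#   FSPL(dB) = 20*log10(d_km) + 20*log10(f_MHz) + 32.44
# Real coverage also depends on antennas, terrain, and obstructions.
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

for freq in (600, 1900, 2500):                   # 600 MHz vs typical mid-band cellular
    loss = free_space_path_loss_db(3.0, freq)    # 3 km from the tower
    print(f"{freq:>4} MHz at 3 km: {loss:.1f} dB path loss")
```

The 600 MHz signal arrives roughly 10-12 dB stronger than the mid-band signals at the same distance, which is the physics behind the extra mile or more of usable coverage.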

5G can also be used to provide faster data using millimeter wave spectrum. The big carriers are all deploying 5G hot spots with millimeter wave technology in dense urban centers. This technology broadcasts super-fast broadband for up to 1,000 feet. The spectrum is also super-squirrely in that it doesn’t pass through anything, even a pane of glass. Try as I might, I can’t find a profitable application for this technology in suburbs, let alone rural places. If a farmer wants fast broadband in the barnyard, I suspect we’re only a few years away from people being able to buy a 5G/WiFi 6 hot spot that could satisfy this purpose without paying a monthly fee to a cellular company.

Finally, 5G can be used to provide gigabit wireless loops from a fiber network. This is the technology trialed by Verizon in a few cities like Sacramento. In that trial, speeds were about 300 Mbps, but there is no reason speeds can’t climb to a gigabit. For this technology to work, there has to be a transmitter on fiber within 1,000 feet of a customer. It seems unlikely to me that somebody spending the money to get fiber close to farms would use electronics for the last few hundred feet instead of a fiber drop. The electronics are always going to have problems and require truck rolls, and the electronics will likely have to be replaced at least once per decade. The small telcos and electric coops I know would scoff at the idea of adding another set of electronics into a rural fiber network.

I expect some of the 5G benefits to find uses in larger county seats – but those towns have the same characteristics as suburbia. It’s hard to think that rural America outside of county seats will ever need 5G.

I’m at a total loss as to why Chairman Pai and many politicians keep extolling the virtues of rural 5G. I have no doubt that rural cell sites will be updated to 5G over time, but the carriers will be in no hurry to do so. It’s hard to find situations in rural America that demand a 5G solution that can’t be handled with 4G – and it’s even harder to justify the cost of 5G upgrades that benefit only a few customers. I can’t find a business case, or even an engineering case, for pushing 5G into rural America. I most definitely can’t foresee a 5G application that will solve the rural broadband divide.

Is 5G Radiation Safe?

There is a lot of public sentiment against placing small cell sites on residential streets. There is a particular fear of broadcasting the higher millimeter wave frequencies near homes since these frequencies have never been in widespread use before. In the public’s mind, higher frequencies mean a higher danger of health problems related to exposure to radiofrequency emissions. The public’s fears are further stoked when they hear that Switzerland and Belgium are limiting the deployment of millimeter wave radios until there is better proof that they are safe.

The FCC released a report and order on December 4 that is likely to add fuel to the fire. The agency rejected all claims that there is any public danger from radiofrequency emissions and affirmed the existing frequency exposure rules. The FCC said that none of the thousand filings made in the docket provided any scientific evidence that millimeter wave and other 5G frequencies are dangerous.

The FCC is right in their assertion that there are no definitive scientific studies linking cellular frequencies to cancer or other health issues. However, the FCC misses the point that most of those asking for caution, including scientists, agree with that. The public has several specific fears about the new frequencies being used:

  • First is the overall range of new frequencies. In the recent past, the public was widely exposed to relatively low frequencies from radio and TV stations, to a fairly narrow range of cellular frequencies, and to two bands of WiFi. The FCC is in the process of approving dozens of new frequency bands that will be widely used where people live and work. The fear is not so much about any given frequency being dangerous, but rather a fear that being bombarded by a large range of frequencies will create unforeseen problems.
  • People are also concerned that cellular transmitters are moving from tall towers, which normally have been located away from housing, to small cell sites on poles that are located on residential streets. The fear is that these transmitters generate a lot of radiation close to the transmitter – which is true. The amount of radiated energy that strikes a given area decreases rapidly with distance from the transmitter (see the sketch after this list). The anecdote that I’ve seen repeated on social media is of placing a cell site fifteen feet from the bedroom of a child. I have no idea if there is a real small cell site that is the genesis of this claim – but there could be. In dense urban neighborhoods, there are plenty of streets where telephone poles are within a few feet of homes. I admit that I would be leery about having a small cell site directly outside one of my windows.
  • The public worries when they know that there will always be devices that don’t meet the FCC guidelines. As an example, the Chicago Tribune tested eleven smartphones in August and found that a few of them were emitting radiation at twice the FCC’s maximum allowable limit. The public understands that vendors play loose with regulatory rules and that the FCC largely ignores such violations.
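On the second point above about distance, here is a simplified illustration of how quickly exposure falls off, using the free-space inverse-square relationship. The 5-watt transmit power and the distances are assumptions for illustration only; real small cells use directional antennas, so actual exposure depends on much more than this.

```python
# Free-space power density for an ideal isotropic radiator: power spreads over a
# sphere, so density falls with the square of the distance. The 5 W figure and
# the distances are illustrative assumptions, not measurements of any real site.
import math

def power_density_w_per_m2(tx_watts: float, distance_m: float) -> float:
    return tx_watts / (4 * math.pi * distance_m ** 2)

for feet in (5, 15, 50, 150):
    meters = feet * 0.3048
    print(f"{feet:>3} ft: {power_density_w_per_m2(5.0, meters):.4f} W/m^2")
```

Tripling the distance cuts the power density by a factor of nine, which is why the distance from the pole to the nearest bedroom window matters so much in this debate.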

The public has no particular reason to trust this FCC. The FCC under Chairman Pai has sided with the large carriers on practically every issue in front of the Commission. This is not to say that the FCC didn’t give this docket the full consideration that should be given to all dockets – but the public perception is that this FCC would side with the cellular carriers even if there was a public health danger.

The FCC order is also not particularly helped by citing the buy-in from the Food and Drug Administration on the safety of radiation. That agency has licensed dozens of medicines that later proved to be harmful, so that agency also doesn’t garner a lot of public trust.

The FCC made a few changes with this order. They have mandated a new set of warning signs to be posted around transmitters. It’s doubtful that anybody outside of the industry will understand the meaning of the color-coded warnings. The FCC is also seeking comments on whether exposure standards should be changed for frequencies below 100 kHz and above 6 GHz. The agency is also going to exempt certain kinds of transmitters from FCC testing.

I’ve read extensively on both sides of the issue and it’s impossible to know the full story. For example, a majority of scientists in the field signed a petition to the United Nations warning against using higher frequencies without more testing. But it’s also easy to be persuaded by other scientists who say that higher frequencies don’t even penetrate the skin. I’ve not heard of any studies that look at exposing people to a huge range of different low-power frequencies.

This FCC is in a no-win position. The public properly perceives the agency as being pro-carrier, and anything the FCC says is not going to persuade those worried about radiation risks. I tend to side with the likelihood that the radiation is not a big danger, but I also have to wonder if there will be any impact after expanding by tenfold the range of frequencies we’re exposed to. The fact is that we’re not likely to know until after we’ve all been exposed for a decade.

Killing 3G

I have bad news for anybody still clinging to their flip phones. All of the big cellular carriers have announced plans to end 3G cellular service, and each has a different timeline in mind:

  • Verizon previously said they would stop supporting 3G at the end of 2019, but now says it will end service at the end of 2020.
  • AT&T has announced the end of 3G to be coming in early 2022.
  • Sprint and T-Mobile have not expressed a specific date but are both expected to stop 3G service sometime in 2020 or 2021.

The amount of usage on 3G networks is still significant. GSMA reported that at the end of 2018 as many as 17% of US cellular customers still made 3G connections, which accounted for as much as 19% of all cellular connections.

The primary reason cited for ending 3G is that the technology is far less efficient than 4G. A 3G connection to a cell site chews up the same amount of frequency resources as a 4G connection yet delivers far less data to customers. The carriers are also anxious to free up mid-range spectrum for upcoming 5G deployment.

Opensignal measures actual speed performance for millions of cellular connections and recently reported the following statistics for the average 3G and 4G download speeds as of July 2019:

Carrier     4G (2019)     3G (2019)
AT&T        22.5 Mbps     3.3 Mbps
Sprint      19.2 Mbps     1.3 Mbps
T-Mobile    23.6 Mbps     4.2 Mbps
Verizon     22.9 Mbps     0.9 Mbps

The carriers have been hesitating on ending 3G because there are still significant numbers of rural cell sites that don’t offer 4G. The cellular carriers were counting on funding from the FCC’s Mobility Fund Phase II to upgrade rural cell sites. However, that funding program got derailed and delayed when the FCC found there were massive errors in the data provided for distributing the fund. The big carriers were accused by many of rigging the data in a way that would give more funding to themselves instead of to smaller rural cellular providers.

The FCC staff conducted significant testing of the reported speed and coverage data and released a report of their findings in December 2019. The testing showed that the carriers have significantly overreported 4G coverage and speeds across the country. This report is worth reading for anybody who needs to be convinced of the garbage data that has been used for the creation of the FCC broadband maps. I wish the FCC Staff would put the same effort into investigating the landline broadband data provided to the FCC. The FCC Staff recommended that the agency release a formal Enforcement Advisory including ‘a detailing of the penalties associated with carrier filings that violate federal law’.

The carriers are also hesitant to end 3G since a lot of customers still use the technology. Opensignal says there are several reasons for the continued use of 3G. First, 12.7% of 3G users live in rural areas where 3G is the only cellular technology available. Opensignal says that 4.1% of 3G users still own old flip phones that are not capable of receiving 4G. The biggest category of 3G users is customers who own a 4G-capable phone but still subscribe to a 3G data plan. AT&T is the largest provider of such plans and has not forced customers to upgrade to 4G plans.

The carriers need to upgrade rural cell sites to 4G before they can reasonably cut 3G dead. In doing so, they need to migrate customers to 4G data plans and also notify customers who still use 3G-only flip phones that it’s finally time to upgrade.

One aspect of the 3G issue that nobody is talking about is that AT&T says it is using fixed wireless connections to meet its CAF II buildout requirements. Since the CAF II areas include some of the most remote landline customers, it stands to reason that these are the same areas that are likely to still be served with 3G cell towers. AT&T can’t deliver 10/1 Mbps or faster speeds using 3G technology. This makes me wonder what AT&T has been telling the FCC in terms of meeting their CAF II build-out requirements.

US Has Poor Cellular Video

Opensignal recently published a report that looks around the world at the quality of cellular video. Video has become a key part of the cellular experience as people are using cellphones for entertainment, and since social media and advertising have migrated to video.

The use of cellular video is exploding. Netflix reports that 25% of its total streaming worldwide is sent to mobile devices. The newly launched Disney+ service saw over 3 million downloads of its mobile app in just the first 24 hours. The Internet Advertising Bureau says that 62% of video advertisements are now seen on cellphones. Social media sites that are video-heavy, like Instagram and TikTok, are growing rapidly.

The pressure on cellular networks to deliver high-quality video is growing. Ericsson recently estimated that video will grow to almost 75% of all cellular traffic by 2024, up from 60% today. Look back five years, and video was a relatively small component of cellular traffic. To some extent, US carriers have contributed to the issue. T-Mobile includes Netflix in some of its plans; Sprint includes Hulu or Amazon Prime; Verizon just started bundling Disney+ with cellular plans; and AT&T offers premium movie services like HBO or Starz with premium plans.

The quality of US cellular video was ranked 68th out of 100 countries, the equivalent of an F grade. That places our wireless video experience far behind other industrialized countries and puts the US in the same category as a lot of countries in Africa and South and Central America. One of the most interesting statistics about US video watching is that 38% of users watch video at home using a cellular connection rather than their WiFi connection. This also says a lot about the poor quality of broadband connections in many US homes.

Interestingly, the ranking of video quality is not directly correlated with cellular data speeds. For example, South Korea has the fastest cellular networks but ranked 21st in video quality. Canada has the third-fastest cellular speeds and was ranked 22nd in video quality. The video quality rankings are instead based upon measurable metrics like picture quality, video loading times, and stall rates. These factors together define the quality of the video experience.

One of the reasons that US video quality is rated so low is that the US cellular carriers transmit video at the lowest bitrate (and thus the heaviest compression) they can get away with in order to save on network bandwidth. The Opensignal report speculates that the primary culprit for poor US video quality is the lack of cellular spectrum. US cellular carriers are now starting to implement new spectrum bands into phones, and there are more auctions for mid-range spectrum coming next year. But it takes 3-4 years to fully integrate new spectrum since it takes time for the cellular carriers to upgrade cell sites and even longer for handsets using the new spectrum to widely penetrate the market.

Only six countries got an excellent rating for video quality – Norway, Czech Republic, Austria, Denmark, Hungary, and the Netherlands. Meanwhile, the US is bracketed on the list between Kyrgyzstan and Kazakhstan.

Interestingly, the early versions of 5G won’t necessarily improve video quality. The best example of this is South Korea, which already has millions of customers using what are touted as 5G phones. The country is still ranked 21st in terms of video quality. Cellular carriers treat video traffic differently than other data, and it’s often the video delivery platform that contributes to video problems.

The major fixes to the US cellular networks are at least a few years away for most of the country. The introduction of more small cells, the implementation of more spectrum, and the eventual introduction of the features from the 5G specifications will all contribute to a better US cellular video experience. However, with US cellular data volumes doubling every two years, the chances are that the US video rating will drop further before improving significantly. The network engineers at the US cellular companies face an almost unsolvable problem of maintaining network quality while dealing with unprecedented growth.

Modems versus Routers

I have to admit that today’s blog is the result of one of my minor pet peeves – I find myself wincing a bit whenever I hear somebody interchange the words modem and router. That’s easy enough to do since today there are a lot of devices in the world that include both a modem and a router. But for somebody who’s been around since the birth of broadband, there is a big distinction. Today’s blog is also a bit nostalgic as I recalled the many kinds of broadband I’ve used during my life.

Modems. A modem is a device that connects a user to an ISP. Before there were ISPs, a modem made a data connection between two points. Modems are specific to the technology being used to make the connection.

In the picture accompanying this blog is an acoustic coupler, which is a modem that makes a data connection using the acoustic signals from an analog telephone. I used a 300 baud modem (which communicated at 300 bps – bits per second) around 1980 at Southwestern Bell when programming in BASIC. The modem allowed me to connect my telephone to a company mainframe modem and ‘type’ directly into programs stored on the mainframe.

Modems grew faster over time, and by the 1990s we could communicate with a dial-up ISP. The first such modem I recall using communicated at 28.8 kbps (28,800 bits per second). The technology was eventually upgraded to 56 kbps.

Around 2000, I upgraded to a 1 Mbps DSL modem from Verizon. This was a device that sat next to an existing telephone jack. If I recall, this first modem used ADSL technology. The type of DSL matters, because a customer upgrading to a different variety of DSL, such as VDSL2, has to swap to the appropriate modem.

In 2006 I was lucky enough to live in a neighborhood that was getting Verizon FiOS on fiber and I upgraded to 30 Mbps service. The modem for fiber is called an ONT (Optical Network Terminal) and was attached to the outside of my house. Verizon at the time was using BPON technology. A customer would have to swap ONTs to upgrade to newer fiber technologies like GPON.

Today I use broadband from Charter, delivered over a hybrid fiber-coaxial network. Cable modems use the DOCSIS standards developed by CableLabs. I have a 135 Mbps connection that is delivered using a DOCSIS 3.0 modem. If I want to upgrade to faster broadband, I’d have to swap to a DOCSIS 3.1 modem – the newest technology on the Charter network.
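To put those generations of modems in perspective, here is a quick calculation of how long a single 10-megabyte file would take at each of the speeds mentioned above. The 10 MB file size is my own choice for illustration, and the speeds are the nominal figures – real-world throughput was always a bit lower.

```python
# How long a 10 MB file takes at each of the connection speeds mentioned in the
# text. Nominal speeds only; real throughput was always somewhat lower.

FILE_MEGABYTES = 10
FILE_BITS = FILE_MEGABYTES * 8 * 1_000_000

connections = [
    ("300 bps acoustic coupler", 300),
    ("28.8 kbps dial-up",        28_800),
    ("56 kbps dial-up",          56_000),
    ("1 Mbps ADSL",              1_000_000),
    ("30 Mbps FiOS (BPON)",      30_000_000),
    ("135 Mbps DOCSIS 3.0",      135_000_000),
]

for name, bps in connections:
    seconds = FILE_BITS / bps
    if seconds >= 3600:
        print(f"{name:<26} {seconds / 3600:6.1f} hours")
    elif seconds >= 60:
        print(f"{name:<26} {seconds / 60:6.1f} minutes")
    else:
        print(f"{name:<26} {seconds:6.1f} seconds")
```

The same file that takes the better part of three days over an acoustic coupler downloads in well under a second on a modern cable modem.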

Routers. A router allows a broadband connection to be split to connect to multiple devices. Modern routers also contain other functions such as the ability to create a firewall or the ability to create a VPN connection.

The most common kind of router in homes is a WiFi router that can connect multiple devices to a single broadband connection. My first WiFi router came with my Verizon FiOS service. It was a single WiFi device intended to serve the whole home. Unfortunately, my house at the time was built in the 1940s and had plaster walls with metal lathing, which created a complete barrier to WiFi signals. Soon after I figured out the limitations of the WiFi, I bought my first Ethernet router and used it to string broadband connections using Cat 5 cables to other parts of the house. It’s probably good that I was single at the time because I had wires running all over the house!

Today it’s common for an ISP to combine the modem (which talks to the ISP network) and the router (which talks to the devices in the home) into a single device. I’ve always advised clients not to combine the modem and the WiFi router, because if you want to upgrade only one of those two functions you have to replace the whole device. With separate devices, an ISP can upgrade just one function. That’s going to become an issue soon for many ISPs when customers start asking them to provide WiFi 6 routers.

Some ISPs go beyond a simple modem and router. For example, most Comcast broadband service to single-family homes provides a WiFi router for the home and a second WiFi router that broadcasts to nearby customers outside the home. These dual routers allow Comcast to claim to have millions of public WiFi hotspots. Many of my clients are now installing networked router systems for customers, where multiple routers share the same network. These networked systems can provide strong WiFi throughout a home, with the advantage that the same passwords are usable at each router.

FCC Proposes New WiFi Spectrum

On December 17 the FCC issued a Notice of Proposed Rulemaking for the 5.9 GHz spectrum band that would create new public spectrum that can be used for WiFi or other purposes. The 5.9 GHz band was previously assigned in 2013 to support DSRC (Dedicated Short Range Communications), a technology for communicating between cars, and between cars and infrastructure. The spectrum covered by the order is 75 megahertz wide. The FCC suggests that the lower 45 megahertz be made available to anybody as new public spectrum. They’ve assigned the highest 20 megahertz for a newer smart car technology called C-V2X. The FCC tentatively kept the remaining 10 megahertz for the older DSRC technology, dependent upon the users of that technology convincing the agency that it’s viable – otherwise, it also converts to C-V2X usage.

DSRC technology has been around for twenty years. The goal of the technology is to allow cars to communicate with each other and to communicate with infrastructure like toll booths or traffic measuring sensors. One of the biggest benefits touted for DSRC is increased safety so that cars will know what’s going on around them, such as when a car ahead is braking suddenly.

For the new technology, the V2X stands for vehicle-to-everything. Earlier this year Ford broke from the rest of the industry and dropped research in DSRC communications in favor of C-V2X. Ford says they will introduce C-V2X into their whole fleet in 2022. Ford touts the technology as enabling cars to ‘see around corners’ due to the ability to gather data from other cars in the neighborhood. They believe the new technology will improve safety, reduce accidents, allow things like safely forming convoys of vehicles on open highways, and act as an important step towards autonomous cars. C-V2X uses the 3GPP standard and provides an easy interface between 5G and vehicles.

This decision was not without controversy. The Department of Transportation strenuously opposed the reduction of spectrum assigned for vehicle purposes. The DOT painted the picture of the spectrum providing a huge benefit for traffic safety in the future, while the FCC argued that the auto industry has done a poor job of developing applications to use the spectrum.

This is an NPRM, meaning that there will be a cycle of public comments before the FCC votes on the order. I think we can expect major filings by the transportation industry describing reasons why taking away most of this spectrum is a bad idea. On the day of the FCC vote, Elaine Chao, the Secretary of Transportation said that the FCC is valuing Netflix over public safety – so this could yet become an ugly fight.

Perhaps the biggest news from the announcement is the big slice of the spectrum that will be repurposed for public use – a decision praised by the WiFi Alliance. The FCC proposes to make this public spectrum that is open to everybody, not just specifically for WiFi. The order anticipates that 5G carriers might use the spectrum for cellular offload. If the cellular carriers heavily use the spectrum in urban areas, then the DOT might be right, and this might be a giveaway of 5G spectrum without an auction.

There is no guarantee that the cellular carriers will heavily use the spectrum. Recall a few years ago there was the opportunity for the cellular carriers to dip into the existing WiFi spectrum using LTE-U to offload busy cellular networks. The carriers used LTE-U much less than anticipated by the WiFi industry, which had warned that cellular offload could overwhelm WiFi. It turns out the cellular carriers don’t like spectrum where they have to deal with unpredictable interference.

Even if the cellular carriers use the spectrum for cellular offload in urban areas, the new public block ought to be mostly empty in rural America. That will create an additional spectrum band to help boost point-to-multipoint radios.

Regardless of how the new spectrum might be used outdoors, it ought to provide a boost to indoor WiFi. The spectrum sits just a little higher than the existing 5 GHz WiFi band and should significantly boost home WiFi speeds and capacity. The new spectrum will also provide an opportunity to reduce interference with existing WiFi networks by providing more channels across which home use can be spread.

This particular docket shows why spectrum decisions at the FCC are so difficult. Every potential use for this mid-range spectrum creates significant public good. How do you weigh safer driving against better 5G or against better rural broadband?

Immersive Virtual Reality

In case you haven’t noticed, virtual reality has moved from headsets to the mall. At least two companies now offer an immersive virtual reality experience that goes far beyond what can be experienced with only a VR headset at home.

The newest company is Dreamscape Immersive, which has launched virtual reality studios in Los Angeles and Dallas, with more outlets planned. The virtual reality experience is enhanced by the use of a headset, hand and foot trackers, and a backpack holding the computers. The action occurs within a 16-foot by 16-foot room with a vibrating haptic floor that responds to the actions of the participant. This all equates to an experience where a user can reach out and touch objects or can walk around all sides of a virtual object in the environment.

The company has launched with three separate adventures, each lasting roughly 15 minutes. In Alien Zoo the user visits a zoo populated by exotic and endangered animals from around the galaxy. In The Blu: Deep Rescue users try to help reunite a lost whale with its family. The Curse of the Lost Pearl feels like an Indiana Jones adventure where the user tries to find a lost pearl.

More established is The Void, which has launched virtual reality adventure sites in sixteen cities, with more planned. The company is creating virtual reality settings based upon familiar content. The company’s first VR experience was based on Ghostbusters. The current theme is Star Wars: Secrets of the Empire.

The Void lets users wander through a virtual reality world. The company constructs elaborate sets where the walls and locations of objects in the real-life set correspond to what is being seen in the virtual reality world. This provides users with real tactile feedback that enhances the virtual reality experience.

You might be wondering what these two companies and their virtual reality worlds have to do with broadband. I think they provide a peek at what virtual reality in the home might become in a decade. Anybody who’s followed the growth of video games can remember how the games started in arcades before they were shrunk to a format that would work in homes. I think the virtual reality experiences of these two companies are a precursor to the virtual reality we’ll be having at home in the not-too-distant future.

There is already a robust virtual reality gaming industry, but it relies entirely on providing a virtual reality experience through the use of goggles. There are now many brands of headsets on the market, ranging from the simple cardboard headset from Google to more expensive headsets from companies like Oculus, Nintendo, Sony, HTC, and Lenovo. If you want to spend an interesting half hour, you can see the current most popular virtual reality games in this review from PCGamer. To a large degree, virtual reality gaming has been modeled on existing traditional video games, although there are some interesting VR games that are now offering content that only makes sense in 3D.

The whole video game market is in the process of moving content online, with the core processing of the gaming experience done in data centers. While most games are still available in more traditional formats, gamers are increasingly connecting to a gaming cloud and need a broadband connection akin in size to a 4K video stream. Historically, many games have been downloaded, causing headaches for gamers with data caps. Playing the games in the cloud can still chew up a lot of bandwidth for active gamers but avoids the giant gigabyte downloads.
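To put the data-cap concern in rough numbers, assume a cloud gaming stream runs around 20 Mbps – roughly the size of a 4K video stream – and that a household has a 1 TB monthly cap. The 20 Mbps rate, the 1 TB cap, and the 80 GB download size are all assumptions for illustration; actual bitrates, caps, and game sizes vary widely.

```python
# Rough comparison of cloud gaming against a data cap versus a one-time game
# download. All three numbers below are illustrative assumptions.
STREAM_MBPS = 20       # assumed cloud-gaming stream, roughly a 4K video stream
CAP_GB = 1000          # assumed 1 TB monthly data cap
DOWNLOAD_GB = 80       # assumed size of a large modern game download

gb_per_hour = STREAM_MBPS * 3600 / 8 / 1000   # Mbps -> GB per hour
hours_to_cap = CAP_GB / gb_per_hour

print(f"Cloud gaming at {STREAM_MBPS} Mbps uses about {gb_per_hour:.0f} GB per hour")
print(f"A {CAP_GB} GB cap allows roughly {hours_to_cap:.0f} hours of play per month")
print(f"versus a one-time {DOWNLOAD_GB} GB download for the traditional version")
```

The point is that a serious gamer streaming a few hours a night can burn through a sizable share of a monthly cap, even without the giant downloads.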

If history is a teacher, the technologies used by these two companies will eventually migrate to homes. We saw this migration occur with first-generation video games – there were video arcades in nearly every town, but within a decade those arcades got displaced by the gaming boxes in the home that delivered the same content.

When the kind of games offered by The Void and Dreamscape Immersive reach the home they will ramp up the need for home broadband. It’s not hard to imagine immersive virtual reality needing 100 Mbps speeds or greater for one data stream. These games are the first step towards eventually having something resembling a home holodeck – each new generation of gaming is growing in sophistication and the need for more bandwidth.

Improving Rural Wireless Broadband

Microsoft has been implementing rural wireless broadband using white space spectrum – the slices of spectrum that sit between traditional TV channels. The company announced a partnership with ARK Multicasting to introduce a technology that will boost the efficiency of fixed wireless networks.

ARK Multicasting does just what its name implies. Today about 80% of home broadband usage is for video, and ISPs unicast video, meaning that they send a separate stream of a given video to each customer who wants to watch it. If ten customers in a wireless node are watching the same new Netflix show, the ISP sends out ten copies of the program. Today, even in a small wireless node of a few hundred customers, an ISP might be transmitting dozens of simultaneous copies of the most popular content in an evening. The ARK Multicasting technology will instead send out just one copy of the most popular content on the various OTT services like Netflix, Amazon Prime, and Apple TV. That one copy will be cached in an end-user storage device, and if a customer elects to watch the new content, they view it from the local cache.
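Here is a minimal sketch of the concept: one broadcast copy lands in every subscriber’s storage box, and playback checks the local cache before pulling a unicast stream over the backhaul. This is only an illustration of the idea, not ARK Multicasting’s actual design.

```python
# Sketch of last-mile caching: the network pushes one copy of popular titles to
# every subscriber's box, and playback checks that cache before using unicast.
# Illustrative only -- not ARK Multicasting's real system.

class HomeCacheBox:
    def __init__(self):
        self.cached_titles = set()

    def receive_multicast(self, title: str):
        """One broadcast copy lands in every box on the node."""
        self.cached_titles.add(title)

    def play(self, title: str) -> str:
        if title in self.cached_titles:
            return f"playing '{title}' from local cache (no backhaul used)"
        return f"streaming '{title}' over the backhaul (unicast)"

# One multicast transmission serves every subscriber on the node.
node = [HomeCacheBox() for _ in range(200)]
for box in node:
    box.receive_multicast("Popular New Show")

print(node[0].play("Popular New Show"))   # served locally
print(node[0].play("Obscure Old Movie"))  # still needs a unicast stream
```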

The net impact of multicasting should be a huge decrease in demand for video content during peak network hours. It would be interesting to know what percentage of video viewing in a given week comes from watching newly released content. I’m sure all of the OTT providers know that number, but I’ve never seen anybody talk about it. If anybody knows that statistic, please post it in the reply comments to this blog. Anecdotal evidence suggests the percentage is significant because people widely discuss new content on social media soon after it’s released.

The first trial of the technology is being done in conjunction with a Microsoft partner wireless network in Crockett, Texas. ARK Multicasting says that it is capable of transmitting 7-10 terabytes of content per month, which equates to 2,300 – 3,300 hours of HD video. We’ll have to wait to see the details of the deployment, but I assume that Microsoft will provide the hefty CPE capable of multi-terabyte storage – there are no current consumer set-top boxes with that much capacity. I also assume that cellphones and tablets will grab content using WiFi from the in-home storage device since there are no tablets or cellphones with terabyte storage capacity.
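Those hour figures are easy to sanity-check if you assume roughly 3 GB per hour for HD video – a common rule of thumb, though actual bitrates vary by service.

```python
# Sanity check of the storage figures above, assuming ~3 GB per hour of HD video
# (a rule-of-thumb assumption; actual bitrates vary by service and title).
GB_PER_HOUR_HD = 3

for terabytes in (7, 10):
    hours = terabytes * 1000 / GB_PER_HOUR_HD
    print(f"{terabytes} TB of cached content is about {hours:,.0f} hours of HD video")
```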

To be effective, ARK must delete older programming to make room for the new, meaning that the available local cache will always contain the latest and most popular content on the various OTT platforms.

There is an interesting side benefit of the technology. Viewers should be able to watch cached content even if they lose the connection to the ISP. Even after a big network outage due to a storm, ISP customers should still be able to watch many hours of popular content.

This is a smart idea. The weakest part of the network for many fixed wireless systems is the backhaul connection. When a backhaul connection gets stressed during the busiest hours of network usage all customers on a wireless node suffer from dropped packets, pixelization, and overall degraded service. Smart caching will remove huge amounts of repetitive video signals from the backhaul routes.

Layering this caching system onto any wireless system should free up peak evening network resources for other purposes. Fixed wireless systems are like most other broadband technologies where the bandwidth is shared between users of a given node. Anything that removes a lot of video downloading at peak times will benefit all users of a node.

The big OTT providers already do edge-caching of content. Providers like Netflix, Google, and Amazon park servers at or near ISPs to serve local copies of the latest content. That caching saves a lot of bandwidth on the internet transport network. The ARK Multicasting technology will carry caching down to the customer level and bring the benefits of caching to the last-mile network.

A lot of questions come to mind about the nuances of the technology. Hopefully the downloads are done during the slow hours of the network so as not to add to network congestion. Will all popular content be sent to all customers – or just content from the services they subscribe to? The technology isn’t going to work for an ISP with data caps, because the caching means customers might be downloading multiple terabytes of data that may never be viewed.

I assume that if this technology works well, ISPs of all kinds will consider it. One interesting aspect of the concept is that it means getting ISPs back into the business of supplying boxes to customers – something that many ISPs avoid as much as possible. However, if it works as described, this caching could create a huge boost for last-mile networks by relieving a lot of repetitive traffic, particularly at peak evening hours. I remember local caching being tried a decade or more ago, but it never worked as promised. It will be interesting to see if Microsoft and ARK can pull this off.