The Upload Speed Lie

In the 2020 Broadband Deployment Report, the FCC made the following claim: “The vast majority of Americans – surpassing 85% – now have access to fixed terrestrial broadband service at 250/25 Mbps.” The FCC makes this claim based upon the data provided to it by the country’s ISPs on Form 477. We know the data reported by the ISPs is badly flawed, with download speeds routinely over-reported, but we’ve paid little attention to the second number the FCC cites – the 25 Mbps upload speed that is supposedly available to everybody. I think the FCC’s claim that 85% of homes have access to 25 Mbps upload speeds is massively overstated.

The vast majority of the customers covered by the FCC statement are served by cable companies using hybrid fiber-coaxial (HFC) technology. I don’t believe that cable companies are widely delivering upload speeds greater than 25 Mbps. I think the FCC has the story partly right: cable companies tell customers that the broadband products they buy have upload speeds of 25 Mbps, and the cable companies largely report these marketing speeds on Form 477.

But do cable companies really deliver 25 Mbps upload speeds? One of the services my consulting firm provides is helping communities conduct speed tests. We’ve recently done speed tests in cities where only a tiny fraction of customers measured upload speeds greater than 25 Mbps on a cable HFC network.

It’s fairly easy to understand the upload capacity of a cable system. The first thing to understand is how much of the network is devoted to upload in the way the technology is deployed. Most cable systems carry upload traffic in the frequencies between 5 MHz and 42 MHz. This is a relatively small slice of bandwidth that sits in the noisiest part of the cable spectrum. I remember back in the days of analog broadcast TV and analog cable systems when somebody running a blender or a microwave would disrupt the signals on channels 2 through 5 – the cable companies are now using these same frequencies for upload broadband. The DOCSIS 3.0 specification assigned upload to the worst part of the spectrum because, before the pandemic, almost nobody cared about upload speeds.
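
To put rough numbers on that sliver of spectrum, here is a back-of-the-envelope sketch. The channel width, modulation, overhead, and the assumption that only about four upstream channels fit cleanly above the noisy low end are illustrative assumptions, not figures for any particular cable system.

```python
# Rough upstream capacity of a DOCSIS 3.0 node using the 5-42 MHz return band.
# All inputs are illustrative assumptions, not figures from any specific operator:
#   - 6.4 MHz wide ATDMA channels running 64-QAM (~5.12 Msym/s * 6 bits/symbol)
#   - only ~4 channels fit cleanly, since the low end of the band is too noisy
#   - ~15% of the raw bit rate lost to framing, FEC, and scheduling overhead

SYMBOL_RATE_MSYM = 5.12          # million symbols per second in a 6.4 MHz channel
BITS_PER_SYMBOL = 6              # 64-QAM
USABLE_CHANNELS = 4              # assumption: the cleaner, upper part of 5-42 MHz
OVERHEAD = 0.15                  # assumption: protocol + FEC overhead

raw_per_channel = SYMBOL_RATE_MSYM * BITS_PER_SYMBOL          # ~30.7 Mbps
usable_per_channel = raw_per_channel * (1 - OVERHEAD)         # ~26 Mbps
node_capacity = usable_per_channel * USABLE_CHANNELS          # ~104 Mbps

print(f"Usable per channel: {usable_per_channel:.0f} Mbps")
print(f"Total upstream shared by the whole node: {node_capacity:.0f} Mbps")
```

Under those assumptions the entire upstream for a node is on the order of 100 Mbps – and that total is shared by every home served by the node, which is why a single customer rarely sees a sustained 25 Mbps upload when the neighborhood is busy.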

The second factor affecting upload speeds is the nature of the upload requests from customers. Before the pandemic, the upload link was mostly used to send email attachments or to back up data from a computer to the cloud. These are largely transient uses of the upload link and are also non-critical – it didn’t matter to most folks if a file was uploaded in ten seconds or five minutes. During the pandemic, however, the new uses for uploading require a steady and dedicated upload data stream. People now use the upload link to connect to school servers, to connect to work servers, to take college classes online, and to sit on video call services like Zoom. These are critical applications – if the upload bandwidth is not steady and sufficient, the user loses the connection. The new upload applications can’t tolerate best effort – a connection to a school server either works or it doesn’t.

The final big factor that affects the bandwidth on a cable network is demand. Before the pandemic, a user had a better chance of hitting 25 Mbps upload because they might have been one of only a few people trying to upload at any given time. But today a lot of homes are trying to make upload connections at the same time. This matters because a cable system shares bandwidth both in the home and in the neighborhood.

The upload link from a home can get overloaded if more than one person tries to use it at the same time. Homes with a poor upload connection will find that a second or third user cannot establish a connection. The same thing happens at the neighborhood level – if too many homes in a given neighborhood are trying to upload at once, the bandwidth for the whole neighborhood becomes overloaded and starts to fail. Remember a decade ago when it was common for streaming video to freeze or pixelate in the evening when a lot of homes were using broadband? The cable companies have largely solved the download problem, but now we’re seeing neighborhoods overloading on the upload side. The result is people being unable to establish a connection to a work server or getting booted off a Zoom call.
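
As a hedged illustration of how quickly a shared upstream fills up, assume the roughly 100 Mbps of node capacity from the earlier sketch, a node serving 150 homes, and a steady 2.5 Mbps upstream per video call – all assumed numbers chosen only to show the arithmetic.

```python
# How many simultaneous video calls can a shared upstream support?
# All numbers are illustrative assumptions.

NODE_UPSTREAM_MBPS = 100     # assumed shared upstream capacity for the node
HOMES_ON_NODE = 150          # assumed number of homes sharing the node
MBPS_PER_CALL = 2.5          # assumed steady upstream per Zoom-style call

max_calls = NODE_UPSTREAM_MBPS / MBPS_PER_CALL
share_of_homes = max_calls / HOMES_ON_NODE

print(f"Concurrent calls before the node saturates: {max_calls:.0f}")
print(f"That is only {share_of_homes:.0%} of homes on one call each")
```

With only about a quarter of the homes on a single call each, the shared upstream is already full – and at that point nobody on the node is getting anywhere near 25 Mbps.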

The net result of the overloaded upload links is that the cable companies cannot deliver 25 Mbps to most homes during the times when people are busy on the upload links. The cable companies have ways to fix this – but most fixes mean expensive upgrades. I bet that the cable companies are hoping this problem will magically go away at the end of the pandemic. But I’m guessing that people are going to continue to use upload speeds at levels far higher than before the pandemic. Meanwhile, if the cable companies were being honest, they would not be reporting 25 Mbps upload speeds to the FCC. (Just typing that made me chuckle because it’s not going to happen.)

Network Outages Go Global

On August 30, CenturyLink experienced a major network outage that lasted over five hours and disrupted CenturyLink customers nationwide as well as traffic on many other networks. What was unique about the outage was the scope of the disruption, which affected video streaming services, game platforms, and even webcasts of European soccer.

This is an example of how telecom network outages have expanded in size and scope and can now be global in scale. This is a development that I find disturbing because it means that our telecom networks are growing more vulnerable over time.

The story of what happened that day is fascinating, and I’m including two links for those who want to peek into how the outage was viewed by outsiders who monitor Internet traffic flows. First is this report from a Cloudflare blog written on the day of the outage. Cloudflare is a company that specializes in protecting large businesses and networks from attacks and outages. The blog describes how Cloudflare dealt with the outage by rerouting traffic away from the CenturyLink network. This story alone is a great example of the modern protections that have been put into place to deal with major Internet traffic disruptions.

The second report comes from ThousandEyes, which is now owned by Cisco. The company is similar to Cloudflare and helps clients deal with security issues and network disruptions. The ThousandEyes report comes from the day after the outage and discusses the likely reasons for it. Again, this is an interesting story for those who don’t know much about the operations of the large fiber networks that constitute the Internet. ThousandEyes confirms the suspicion that Cloudflare expressed the day before: the problem was caused by a powerful network command issued by CenturyLink using Flowspec, which resulted in a logic loop that shut down and restarted BGP (Border Gateway Protocol) sessions over and over again.
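
For readers unfamiliar with the mechanism, here is a deliberately simplified sketch of the kind of feedback loop the reports describe: a rule distributed over BGP that disrupts the very sessions that carry it. It is conceptual only – the timing and behavior are assumptions for illustration, not CenturyLink’s actual configuration or real router logic.

```python
# Conceptual model of a faulty Flowspec rule that keeps knocking down the BGP
# sessions that distribute it. Purely illustrative - not real router behavior.

sessions_up = True

for cycle in range(1, 7):
    if sessions_up:
        # BGP is up, so the faulty rule propagates across the network...
        # ...and the rule disrupts the very sessions that carried it.
        sessions_up = False
        print(f"cycle {cycle}: faulty rule propagates -> BGP sessions drop")
    else:
        # With the sessions down, routers recover and BGP re-establishes...
        # ...which immediately re-advertises the same faulty rule.
        sessions_up = True
        print(f"cycle {cycle}: BGP sessions restored -> rule is re-advertised")

# The loop never converges until the rule is withdrawn at its source.
```

The point of the toy loop is simply that the network keeps re-injuring itself until somebody removes the offending rule at the source – which is consistent with why the outage took hours to clear.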

It’s reassuring to know that there are companies like Cloudflare and ThousandEyes that can stop network outages from spreading into other networks. But what is also clear from the reporting of the event is that a single incident or bad command can take out huge portions of the Internet.

That is something worth examining from a policy perspective. It’s easy to understand how this happens at companies like CenturyLink. The company has acquired numerous networks over the years from the old Qwest network up to the Level 3 networks and has integrated them all into a giant platform. The idea that the company owns a large global network is touted to business customers as a huge positive – but is it?

Network owners like CenturyLink have consolidated and concentrated the control of the network to a few key network hubs controlled by a relatively small staff of network engineers. ThousandEyes says that the CenturyLink Network Operation Center in Denver is one of the best in existence, and I’m sure they are right. But that network center controls a huge piece of the country’s Internet backbone.

I can’t find anywhere that CenturyLink gave the exact reason why the company issued a faulty Flowspec command. It may have been an attempt to tamp down a problem for a single customer, or it may have been part of routine network upgrades implemented early on a Sunday morning when the Internet is at its quietest. From a policy perspective, it doesn’t matter – what matters is that a single faulty command could take down such a large part of the Internet.

This should cause concern for several reasons. First, if one unintentional faulty command can cause this much damage, then the network is susceptible to somebody doing the same thing deliberately. I’m sure the network engineers running the Internet will say that’s not likely to happen, but they also would have expected this particular outage to have been stopped much sooner and more easily.

I think the biggest concern is that the big network owners have embraced centralization to such an extent that outages like this one are more and more likely. Centralization of big networks means that outages can now reach globally instead of staying local, as they did just a decade ago. Our desire to be as efficient as possible through centralization has increased the risk to the Internet, not decreased it.

A good analogy for understanding the risk in our Internet networks comes from looking at the nationwide electric grid. It used to be routine to purposefully allow neighboring grids to interact automatically, until it became obvious after some giant rolling blackouts that we needed firewalls between grids. The electric industry reworked the way that grids interact, and the big rolling regional outages disappeared. It’s time to have that same discussion about the Internet infrastructure. Right now, the security of the Internet is in the hands of a few corporations that stress the bottom line first, and that have willingly accepted increased risk to our Internet backbones as the price of cost efficiency.

5G in China

There is an interesting recent article in the English-language version of a South Korean newspaper, the Chosun Ilbo, that talks about 5G in China. According to the article, the Chinese 5G rollout is an expensive bust.

There are a number of interesting facts disclosed about the Chinese 5G rollout. First, it’s clear that the rollout is using millimeter wave spectrum. The article says that the 5G transmitters in the Chinese networks are being installed about 200 meters (roughly 650 feet) apart, since the signal from each transmitter travels between 100 and 300 meters. That’s consistent with the millimeter wave hot spots being deployed in downtown areas by Verizon.

It takes a huge number of millimeter wave cell sites to cover a city, and the article says that by the end of June 2020 the Chinese had installed 410,000 cell sites. The article estimates that to get the same coverage as today’s 4G, the network would eventually need over 10 million cell sites. The article quotes Xiang Ligang, the director-general of the Information Consumption Alliance, a Chinese telecom industry association, who said the plan is to build one million new cell sites in each of the next three years.
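
A quick bit of geometry shows why the site counts balloon so fast. The metro footprint used below is an assumption chosen only to illustrate the math.

```python
# How many small cells does it take to blanket a city at 200-meter spacing?
# The city area is an illustrative assumption.

SPACING_KM = 0.2                      # ~200 meters between sites
CITY_AREA_KM2 = 1_000                 # assumed footprint of a large metro area

area_per_site = SPACING_KM ** 2       # each site covers roughly a 200m x 200m square
sites_needed = CITY_AREA_KM2 / area_per_site

print(f"Each site covers ~{area_per_site:.2f} sq km")
print(f"Sites for one {CITY_AREA_KM2:,} sq km metro: {sites_needed:,.0f}")
```

At roughly 25,000 sites for a single large metro footprint, a nationwide build quickly climbs into the millions of cell sites the article describes.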

The 5G coverage isn’t seeing wide acceptance. The article cites a recent Chinese survey in which over 73% of the public said there is no need to buy a 5G phone. This matched the findings of another survey that also said the public sees no need for 5G.

One of the more interesting things cited in the article is that the 5G cell sites use a lot of energy and that starting in August, China Unicom has taken to shutting the cell sites down from 9 PM until 9 AM daily to save on electricity costs. They say each cell site is using triple the power of a 4G cell site, and there are a lot of sites to power. The new 5G specifications include a provision to significantly reduce power consumption for 5G cell sites, but in the early days of deployment, it looks like this has gone in the wrong direction.
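
The scale of the electric bill becomes clearer with some rough arithmetic. The per-site power draws and the electricity rate below are assumptions for illustration only, not figures from the article.

```python
# Rough extra electricity cost of running 5G radios around the clock.
# All inputs are illustrative assumptions.

SITES = 410_000            # cell sites reported as of June 2020
EXTRA_KW_PER_SITE = 2.0    # assumed extra draw of a 5G site vs. a 4G site
HOURS_PER_YEAR = 8_760
PRICE_PER_KWH = 0.08       # assumed average electricity price in dollars

extra_kwh = SITES * EXTRA_KW_PER_SITE * HOURS_PER_YEAR
annual_cost = extra_kwh * PRICE_PER_KWH

print(f"Extra energy: {extra_kwh / 1e9:.1f} billion kWh per year")
print(f"Extra cost: ${annual_cost / 1e6:,.0f} million per year")
```

Under those assumed inputs the always-on premium runs to hundreds of millions of dollars a year – and shutting the radios down for twelve hours a night cuts that bill roughly in half, which is presumably exactly the point.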

The article concludes that the Chinese 5G experiment might end up as an economic bust. What’s interesting about this article is that a lot of the same things can be said about 5G in South Korea. It’s been reported that South Korea has the biggest percentage penetration of 5G handsets, but that the public has largely been panning the service.

None of this is surprising. 5G deployed on millimeter wave spectrum is an outdoor technology and can only be brought indoors by installing numerous 5G transmitters inside a building, since the spectrum won’t pass through walls. There is no doubt that millimeter wave signals are fast, but as has been demonstrated here in the US, the reception is squirrelly. Apparently, bandwidth comes and goes with a simple twist of the hand, and the user’s body can block the millimeter wave signal. Add that to the inability to keep a connection when walking into a building or around the corner of a building, and the millimeter wave product doesn’t sound particularly user friendly.

The outdoor product possibly makes sense in places where people stay and work outside, such as public markets. But it’s not an inviting technology for people who are only outside to go between buildings or to commute.

There are no indications that Verizon intends to deploy the product widely in the US, or at least not in the same manner, covering a city with cell sites every 600 feet.

There has been a huge amount of hype in this country about being in a race with the Chinese over the deployment of 5G. But after seeing articles like this, perhaps our best strategy is to sit back and wait until 5G equipment gets cheaper and until the new 5G cell sites are made energy efficient. For now, it doesn’t sound like a race we want to win.

Network Function Virtualization

Comcast recently did a trial of DOCSIS 4.0 at a home in Jacksonville, Florida, and was able to combine various new techniques and technologies to achieve a symmetrical 1.25 Gbps connection. Comcast says this was achieved using DOCSIS 4.0 technology coupled with network function virtualization (NFV), and distributed access architecture (DAA). Today I’m going to talk about the NFV concept.

The simplest way to explain network function virtualization is that it brings the lessons learned in creating efficient data centers to the edge of the network. Consider a typical data center task: providing computing for a large business customer. Before the conversion to the cloud, the large business network likely contained a host of different devices such as firewalls, routers, load balancers, VPN servers, and WAN accelerators. In a fully realized cloud deployment, all of these devices are replaced with software that mimics the function of each device, all operated remotely in a data center consisting of banks of super-fast computer chips.
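
A toy sketch helps make the idea concrete: each former hardware appliance becomes an ordinary software function, and the “network” becomes a chain of those functions running on generic servers. The function names and rules here are invented purely for illustration and don’t correspond to any vendor’s product or to Comcast’s design.

```python
# A toy network-function-virtualization chain: each former hardware appliance
# is just a software function that takes a packet and returns a packet (or None).
# Names and rules are illustrative only.

def firewall(packet):
    # Drop traffic to ports we haven't opened.
    return packet if packet["dst_port"] in {80, 443} else None

def load_balancer(packet):
    # Spread traffic across backend servers with a simple hash.
    packet["backend"] = f"server-{hash(packet['src']) % 3}"
    return packet

def router(packet):
    # Pick an egress interface for the chosen backend.
    packet["egress"] = "core-uplink-1"
    return packet

SERVICE_CHAIN = [firewall, load_balancer, router]

def process(packet):
    for function in SERVICE_CHAIN:
        packet = function(packet)
        if packet is None:          # dropped by an earlier function
            return None
    return packet

print(process({"src": "10.1.1.7", "dst_port": 443}))
print(process({"src": "10.1.1.9", "dst_port": 23}))   # blocked by the firewall
```

Swapping a hardware upgrade for a software change is the whole point – adding or modifying a function in the chain is a code deployment, not a truck roll or a new appliance.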

There are big benefits from a conversion to the cloud. Each of the various devices used in the business IT environment is expensive and proprietary. The host of expensive devices, likely from different vendors, is replaced with lower-cost generic servers that run on fast chips. A pile of expensive electronics sitting at each large business is replaced by much cheaper servers sitting in a data center in the cloud.

There is also a big efficiency gain from the conversion, because inevitably the devices in the historic network ran different software systems that were never 100% compatible. Everything was cobbled together and made to work, but the average IT department at a large corporation never fully understood everything going on inside the network. There were always unexplained glitches when the software systems of different devices interacted on the corporate network.

In this trial, Comcast applied the same concept to the cable broadband network. Network function virtualization was used to replace the various electronic devices in the traditional Comcast network, including the CMTS (cable modem termination system), various network routers, the transport electronics that send the broadband signal to neighborhood nodes, and likely everything the whole way down to the set-top box. All of these electronic components were virtualized and run in the data center, or closer to the edge on devices using the same generic chips used in the data center.

There are some major repercussions for the industry if the future is network function virtualization. First, all of the historic telecom vendors in the industry disappear. Comcast would operate a big data center composed of generic servers, as is done today in other data centers all over the country. Gone would be different brands of servers, transport electronics, and CMTS servers – all replaced by sophisticated software that will mimic the performance of each function performed by the former network gear. The current electronics vendors are replaced by one software vendor and cheap generic servers that can be custom built by Comcast without the need for an external vendor.

This also means a drastically reduced need for electronics technicians at Comcast, replaced by a handful of folks operating the data center. We’ve seen this same transition roll through the IT world as IT staffs have been downsized due to the conversion to the cloud. There is no longer a need for technicians who understand proprietary hardware such as Cisco servers, because those devices no longer exist in the virtualized network.

NFV should mean that a cable company becomes more nimble, able to introduce a new feature for a set-top box or a new efficiency in data traffic routing instantly by upgrading the software that now operates the cable network.

But there are also two downsides for a cable company. First, conversion to a cloud-based network means an expensive rip and replacement of every electronics component in the network. There is no slow migration into DOCSIS 4.0 if it means a drastic redo of the underlying way the network functions.

There is also the new danger that comes from reliance on one set of software to do everything in the network. Inevitably there are going to be software problems that arise – and a software glitch in an NFV network could mean a crash of the entire Comcast network everywhere. That may sound extreme, and companies operating in the cloud will work hard to minimize such risks – but we’ve already seen a foreshadowing of what this might look like in recent years. The big fiber providers have centralized network functions across their national fiber networks, and we’ve seen network outages in recent years that have knocked out broadband networks in half of the US. When a cloud-based network crashes, it’s likely to crash dramatically.

What’s the Best Way to Help Precision Agriculture?

The FCC is going to take a fresh look at the $9 billion 5G Fund this month, and it sounds like the grant program will get delayed again while the FCC figures out where to deploy the money. The fund has been mired in controversy from the beginning, when it became clear that the big cellular companies were providing false data about existing cellular coverage.

Buried inside this fund is $1 billion in grants intended to help precision farming. Precision farming needs bandwidth, and apparently, the FCC has decided that the bandwidth should be cellular. I was frankly surprised to see such a specific earmark. The current FCC and administration have clearly climbed on the 5G bandwagon, but it seems premature to me to assume that cellular will be the winning technology for precision agriculture.

This funding means that the cellular companies will get a free, or highly subsidized network and will then be able to bill farmers for providing the bandwidth needed for smart tractors and for the millions of field sensors that the industry predicts will be deployed to monitor crops and livestock.

This all sounds great and shows that the government is working to help solve one of our biggest broadband needs. But it also means that the FCC hopes to hand the agribusiness revenue stream to cellular companies. This feels to me like another victory for the cellular lobbyists – their companies get free government handouts that will lead to lucrative long-term monopoly revenue streams.

If the FCC was doing its job right, we’d be seeing a far different approach. There are multiple wireless technologies that can be leveraged for smart agriculture.

  • Cellular technology is an option, but it’s not necessarily the best technology for covering big swaths of farmland. The coverage area around a cell tower is only a few miles, and it would take a huge number of rural cell sites to provide universal cellular broadband coverage in farming areas (see the coverage sketch after this list).
  • Another option is LoRaWAN, a technology that is perfect for providing small bandwidth to huge numbers of sensors over a large area. This technology was discussed in a recent blog talking about the deployment of a LoRaWAN blimp in Indiana.
  • By default, early farm sensors are using WiFi, which is something farms can implement locally, at least in barns and close to farm buildings.
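
As referenced in the first bullet, here is a rough comparison of how many radio sites it would take to cover a farming county with each approach. The coverage radii and the county size are assumptions chosen for illustration, not engineering figures.

```python
# Rough count of radio sites needed to cover a farming county.
# Coverage radii and county size are illustrative assumptions.
import math

COUNTY_SQ_MILES = 1_000        # assumed rural county footprint
CELL_RADIUS_MILES = 3          # assumed usable radius of a rural cell site
LORA_RADIUS_MILES = 8          # assumed rural range of a LoRaWAN gateway

def sites_needed(radius_miles):
    return math.ceil(COUNTY_SQ_MILES / (math.pi * radius_miles ** 2))

print(f"Cell sites needed:       {sites_needed(CELL_RADIUS_MILES)}")
print(f"LoRaWAN gateways needed: {sites_needed(LORA_RADIUS_MILES)}")
```

The tradeoff, of course, is that LoRaWAN carries only tiny sensor payloads while cellular can also serve smart tractors and video – which is exactly the kind of comparison an FCC trial program could quantify before handing out grants.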

All these technologies require broadband backhaul, and that could be provided by fiber or satellites. If the 5G grants and the current RDOF grants are spent wisely, there will be fiber built deep into farming counties. Satellite broadband could fill in for the most remote farms.

Ideally, the FCC would be considering all of the above technologies and any others that could help agribusiness. Agriculture is one of our largest industries, and it seems shortsighted to stuff the money to solve this problem inside an FCC grant program that might not even be awarded for several years and that will then allow six more years to build the networks – pushing solutions out at least a decade into the future.

Instead, the FCC should be establishing a smart farming grant program to see what could be done now for this vital sector of our economy. The FCC should be funding experimental test trials to understand the pros and cons of using cellular, WiFi, satellite, or LoRaWAN bandwidth to talk to farm devices. The results of such trials would then be used to fund a farming broadband grant program that would deploy farm broadband in an expeditious manner – a lot sooner than a decade from now.

The FCC should not be automatically awarding money to cellular companies to control the budding smart farming industry. If we took the time to look at this scientifically, we’d find out which technology is the most suitable and sustainable. For example, one of the driving factors in creating smart farming is going to be the power needs for sensors using the different wireless technologies. It may turn out that the best solution is cellular – but we don’t know that. But that’s not going to stop the FCC from marching forward with $1 billion in grants without ever having looked hard at the issue. This sounds like just another giveaway to the big carriers to me.

Reaching Critical Mass for Gigabit Connections

The statistics concerning the number of gigabit fiber customers are eye-opening. Openvault tracks the percentage of customers provisioned at various broadband speeds. At the end of 2019, the company reported that 2.81% of all households in the US subscribed to gigabit service. By the end of the first quarter of 2020, just after the onset of the pandemic, the percentage of gigabit subscriptions had climbed to 3.75% of total broadband subscribers. By the end of the second quarter, this exploded to 4.9% of the total market.

It’s clear that households are finally migrating to gigabit broadband. The gigabit product has been around for a while. The earliest places I remember it being sold to homes were municipal systems like Lafayette, Louisiana, and Chattanooga, Tennessee. Some small fiber overbuilders and small telcos also sold early gigabit products. But the product didn’t really take off until Google Fiber announced it was going to overbuild Kansas City in 2011 and offered a $70 gigabit product. That put gigabit broadband into the daily conversation in the industry.

Since then, a lot of ISPs have begun offering gigabit products. Big telcos like AT&T and CenturyLink push the product where they have fiber. Most of the big cable companies now offer gigabit download products, although they are only priced to sell in markets where there is a fiber competitor. Google Fiber expanded to a bunch of additional markets, and a few dozen overbuilders like Ting are selling gigabit broadband. There are now over 150 municipal fiber broadband utilities that sell gigabit broadband. And smaller telcos and cooperatives have expanded gigabit broadband into smaller towns and rural areas all around the country.

The title of the blog uses the phrase ‘critical mass’. By that, I mean there are probably now enough gigabit homes to finally have a discussion about gigabit applications on the Internet. Back after Google Fiber stirred up the industry, there was a lot of talk about finding a gigabit application that needed that much bandwidth. But nobody’s ever found one for homes for the simple reason that there was never a big enough quantity of gigabit customers to justify the cost of developing and distributing large bandwidth applications.

Maybe we are finally getting to the point when it’s reasonable to talk about developing giant bandwidth applications. The most obvious candidate product for using giant bandwidth is telepresence – and that’s been at the top of the list of candidates for a long time as shown by this article from Pew Research in 2014 asking how we might use a gigabit in the home – almost every answer from industry experts then talked about some form of telepresence.

Telepresence is the technology for bringing realistic images into the home in real time. This would mean having images of people, objects, or places in your home that seem real. It could mean having a work meeting, seeing a doctor, talking to distant family members, or playing cards with friends, as recently suggested by Mark Zuckerberg. Telepresence also means interactive gaming with holographic opponents. Telepresence might mean immersion in a tour of distant lands as if you were there.

Early telepresence technology is still going to be a long way from a Star Trek holodeck, but it will be the first step in that direction. The technology will be transformational. We’ve quickly gotten used to meetings by Zoom, but telepresence is going to be more like sitting across the table from somebody while you talk to them. I can think of a dozen sci-fi movies that include scenes of telepresence board meetings – and that will soon be possible with enough broadband.

I’m looking forward to Openvault’s third-quarter report to see the additional growth in gigabit subscribers. We might already be reaching the critical mass needed to create a market for gigabit applications. A 5% market penetration of gigabit users means that we’re approaching 7 million gigabit households. I have to think that a decent percentage of the people who will pony up for gigabit broadband will be willing to tackle cutting-edge applications.

This isn’t something that will happen overnight. Somebody has to develop portals and processors to handle telepresence streams in real-time – it’s a big computing challenge to make affordable in a home environment. But as the number of gigabit subscribers keeps growing, the opportunity is there for somebody to finally monetize and capitalize on the capability of a gigabit connection. As somebody who now spends several hours of each day in online video chats, I’m ready to move on to telepresence, even if that means I have to wear something other than sweatpants to have a business meeting!

Breakthroughs in Laser Research

Since the fiber industry relies on laser technology, I periodically look to see the latest breakthroughs and news in the field of laser research.

Beaming Lasers Through Tubes. Luc Thévenaz and a team from the Fiber Optics Group at the École Polytechnique Fédérale de Lausanne in Switzerland have developed a technology that amplifies light through hollow-tube fiber cables.

Today’s fiber has a core of solid glass. As light moves through the glass, the light signal naturally loses intensity due to impurities in the glass, losses at splice points, and light that bounces astray. Eventually, the light signal must be amplified and renewed if the signal is to be beamed for great distances.
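
To see why amplification matters, here is a quick loss calculation using a typical attenuation figure for modern single-mode fiber. The 0.2 dB/km figure and the 1 mW launch power are stated assumptions for illustration, not numbers from the paper.

```python
# How much optical power survives a long run of conventional solid-core fiber?
# Assumes ~0.2 dB/km attenuation (typical for single-mode fiber at 1550 nm)
# and a 1 mW launch power - both illustrative assumptions.

ATTENUATION_DB_PER_KM = 0.2
LAUNCH_POWER_MW = 1.0

for distance_km in (40, 80, 120):
    loss_db = ATTENUATION_DB_PER_KM * distance_km
    received_mw = LAUNCH_POWER_MW * 10 ** (-loss_db / 10)
    print(f"{distance_km:>3} km: {loss_db:4.0f} dB loss, "
          f"{received_mw * 1000:6.1f} microwatts received")
```

After a bit more than 100 km the signal has dropped to a fraction of a percent of its launch power, which is why long-haul routes need periodic amplification – and why a lower-loss medium is such a big deal.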

Thévenaz and his team reasoned that the light signal would travel further if it could pass through a medium with less resistance than glass. They created hollow fiber glass tubes with the center filled with air. They found that there was less attenuation and resistance as the light traveled through the air tube and that they could beam signals for a much greater distance before needing to amplify the signal. However, at normal air pressure, they found that it was challenging to intercept and amplify the light signal.

They finally struck on the idea of adding pressure to the air in the tube. They found that as air is compressed in the tiny tubes, the air molecules form into regularly spaced clusters, and the compressed air acts to strengthen the light signal, similar to the way sound waves propagate through air. The results were astounding – they found that they could amplify the light signal as much as 100,000 times. Best of all, this can be done at room temperature. It works for all frequencies of light from infrared to ultraviolet, and it seems to work with any gas.

The implication of the breakthrough is that light signals will be able to be sent great distances without amplification. The challenge will be to find ways to pressurize the fiber cable (something we used to do fifty years ago with air-filled copper cable). The original paper is available for purchase in Nature Photonics.

Bending the Laws of Refraction. Ayman Abouraddy, a professor in the College of Optics and Photonics at the University of Central Florida, along with his team, has developed a new kind of laser that doesn’t obey the understood principles of how light refracts as it travels through different substances.

Light normally slows down when it travels through denser materials. This is something we all instinctively understand, and it can be seen by putting a spoon into a glass of water. To the eye, it looks like the spoon bends at that point where the water and air meet. This phenomenon is described by Snell’s Law, and if you took physics you probably recall calculating the angles of incidence and refraction predicted by the law.
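
For readers who want the refresher, here is Snell’s law applied to the spoon-in-water example; the 45-degree angle of incidence is an arbitrary choice for illustration.

```python
# Snell's law: n1 * sin(theta1) = n2 * sin(theta2)
# Light entering water from air bends toward the normal.
import math

n_air, n_water = 1.00, 1.33
theta1_deg = 45.0                       # arbitrary angle of incidence

sin_theta2 = n_air * math.sin(math.radians(theta1_deg)) / n_water
theta2_deg = math.degrees(math.asin(sin_theta2))

print(f"Incidence {theta1_deg:.0f} degrees -> refraction {theta2_deg:.1f} degrees")
```

A 45-degree beam bends to roughly 32 degrees inside the water – the new spacetime wave packets are remarkable precisely because they sidestep this bending and the accompanying change in speed.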

The new lasers don’t follow Snell’s law. Light is arranged into what the researchers call spacetime wave packets. The packets can be arranged in such a way that they don’t slow down or speed up as they pass through materials of different density. That means that the light signals taking different paths can be timed to arrive at the destination at the same time.

The scientists created the light packets using a device known as a spatial light modulator, which arranges the energy of a pulse of light in a way that the normal properties of space and time are no longer separate. I’m sure that, like me, you have no idea what that means.

This creates a mind-boggling result in that light can pass through different mediums and yet act as if there is no resistance. The packets still follow another age-old rule, Fermat’s Principle, which says that light always travels along the path that takes the least time. The findings are leading scientists to look at light in a new way and to develop new concepts for the best way to transmit light beams. The scientists say it feels as if the old restrictions of physics have been lifted, giving them a host of new avenues for light and laser research.

The research was funded by the U.S. Office of Naval Research. One of the most immediate uses of the technology would be the ability to communicate simultaneously from planes or satellites with submarines in different locations. The research paper is also available from Nature Photonics.

The Need for Fiber Technicians

I foresee a coming shortage of trained technicians to work on fiber optic networks. This shortfall has come about for a few reasons. One is the labor practices of some of the biggest owners of fiber networks like AT&T, Verizon, CenturyLink, and Frontier. All of the big telcos have been downsizing technical staff, much of it due to the phasing out of traditional copper networks. The technical staff of the telcos has been systematically downsized for well over a decade, and in the process these companies have not been hiring many new technicians, but rather retraining existing copper technicians to become fiber technicians. This has an impact on the whole industry, since in the past many of the trained technicians working throughout the industry began their careers at the big telcos. That funnel of newly trained technicians has largely dried up.

The other reason for a shortage of trained telecom technicians is the recent explosion of new fiber construction. Companies everywhere are building fiber networks. The big carriers have been investing heavily in fiber. For example, over the past four years, AT&T built fiber to pass over 12 million homes and businesses. Verizon has been building fiber across the country to provide fiber to its cellular towers – including small cell sites that are scattered throughout most urban areas. Verizon says it also plans to pass 30 million homes with what is essentially fiber-to-the-curb technology using wireless loops.

There is also a huge amount of fiber being built by smaller companies. The FCC’s ACAM program from the Universal Service Fund spurred the construction of rural fiber in areas served by small telephone companies and cooperatives. Electric cooperatives have joined the fray in many rural markets. Various independent fiber overbuilders have been building fiber in small towns and in a few urban markets of the country.

The FCC is helping to fuel the demand for fiber construction. For example, they will soon be awarding the two biggest telecom grant programs ever. In October the FCC will hold a reverse auction to award $16.4 billion to construct rural broadband networks over the next six years. Another $4 billion will be awarded from that program next year. The FCC will also be awarding $9 billion for the 5G Fund, and much of that money will be used to build fiber networks to beef up rural cellular coverage. Meanwhile, a majority of states now have broadband grant programs, and the level of funding to these programs is increasing due to the recognition during the pandemic that millions of students don’t have access to broadband at their homes.

All of this fiber construction has already resulted in a shortage of the trained fiber technicians needed for construction. Almost all of the ISPs I’m working with are seeing increased bids for the labor cost of fiber construction. It’s becoming clear that the demand for trained construction crews is outpacing the number of available crews nationwide. Already in 2020, we don’t have enough trained fiber technicians to meet the demand for fiber construction – and this is going to get worse.

But construction is only half the story. We also need fiber technicians to maintain and operate fiber networks after they are constructed. Operational fiber networks require fiber technicians in trucks as well as electronics technicians to connect customers to fiber, respond to trouble calls, and maintain the network. All of the billions being poured into building fiber networks will require an army of new technicians to maintain and service the new networks.

The US is not equipped to easily double the number of fiber technicians over the next decade – but we’re going to have to find a way to do that. There are some formal training programs for fiber technicians, mostly run by trade schools or technical colleges that sponsor apprenticeship programs leading to CFOT or CPCT certification. But the majority of fiber technicians are trained on the job, working their way up through hands-on experience.

The bottom line is that this is a growing field for people looking for a career. The high demand for technicians is going to drive up salaries, particularly for well-trained technicians. Unfortunately, this kind of shortage also means that the cost of building fiber is going to increase due to the excess of demand over supply for qualified technicians.

Apple Buys into 5G

Apple is coming out with a full range of new 5G iPhones. The phones have been designed to use the full range of new frequencies that the various cellular companies are touting as 5G, up to and including the millimeter wave spectrum offered in center cities by Verizon. In addition to 5G, the phones have new features like a better camera, easier wireless charging, and a lidar scanner. The last change is the most revolutionary, since lidar allows apps on the phone to better see and react to the surrounding environment.

But Apple is going all-in on the 5G concept. It’s a natural thing to do since the cellular carriers have been talking non-stop about 5G for the last few years. However, by heavily advertising the new phones as 5G capable, Apple is possibly setting itself up to bear the brunt of consumer dissatisfaction when the public realizes that what’s being sold as 5G is just a repackaged version of 4G. The new features from an upgrade in cellular specifications get rolled out over a decade, as we saw with the transition from 3G to 4G. In terms of the improvements in these new phones, we’re probably now at 4.1G, which is a far cry from what 5G will look like in ten years.

What I find most disturbing about the whole 5G phenomenon is that the cellular companies have essentially sold the public on the advantages of faster cellular speeds without anybody ever asking the big question of why cellphones need faster speeds. A cellphone is, by definition, a single-user device. The biggest data application most people ever run on a cellphone is watching video. If a 4G phone is sufficient for watching video, then what’s the advantage of spending a lot of money to upgrade to 5G? A home broadband connection needs to be fast to allow multiple people to use it at the same time, but that isn’t true for a cellphone.

People do get frustrated with smartphones that get poor coverage inside big buildings, in elevators, in the inevitable cellular dead zones in every town, or in rural areas too far away from cell towers. 5G phones won’t fix any of these problems, because poor cellular coverage happens in places that block or simply can’t receive wireless signals. No technology can make up for a lack of signal.

The big new 5G feature in the iPhones is the ability to use all of the different frequencies that the cellular companies are now transmitting. However, these frequencies aren’t additive – if somebody grabs a new ‘5G’ frequency, the bandwidth on that frequency doesn’t add to what they were receiving on 4G. Instead, the user gets whatever bandwidth is available on the new spectrum channel. In many cases, the new 5G frequencies are lower than traditional cellular frequencies, and so data speeds are going to be a little slower.

The cellular companies are hoping that Apple is successful. The traditional frequencies used for 4G have been getting crowded, particularly in urban areas. Cellular data traffic has been growing at the torrid pace of 24% per year, and the traditional cellular network using big towers is getting overwhelmed.
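
To put that growth rate in perspective, a quick compounding calculation shows how fast traffic doubles at 24% per year.

```python
# Doubling time of cellular data traffic growing 24% per year.
import math

GROWTH_RATE = 0.24
doubling_years = math.log(2) / math.log(1 + GROWTH_RATE)
ten_year_multiple = (1 + GROWTH_RATE) ** 10

print(f"Traffic doubles roughly every {doubling_years:.1f} years")
print(f"After a decade it is {ten_year_multiple:.1f}x today's volume")
```

At that pace the spectrum that comfortably carries today’s traffic is swamped within a few years, which is why the carriers are so eager to push phones onto the new bands.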

Cellular companies have been trying to offload the 4G traffic volumes from the traditional cellular networks by opening up thousands of small cell sites. But their biggest hope for relieving 4G was to open up new bands of spectrum – which they have done. Every data connection made on a new frequency band is one that isn’t going to clog up the old and overfull cellular network. Introducing new bands of frequency doesn’t do the cellular networks any good unless people start using the new frequency bands – and that’s where the iPhone is a godsend to cellular companies. Huge volumes of data will finally migrate to the newly opened frequency bands as these new iPhones hit the market.

Unfortunately, users will likely not see any advantage from the change. Users will be migrating connections to a different frequency band, but it’s still 4G. It will be interesting to see who takes the heat when the expensive new phones don’t outperform the old phones – will it be Apple or the cellular carriers?

The Regulatory Struggle to Maintain Copper Networks

The California Public Utilities Commission has been investigating the quality of service performance on the telco networks operated by AT&T and Frontier. The agency hired the consulting firm Economics and Technology, Inc. to investigate numerous consumer complaints made against the two telcos. Thanks go to Steve Blum for following this issue in his blog.

Anybody who still has service on the two carriers will not be surprised by the findings. The full study findings have not yet been released by the CPUC, but the portions that have been made public are mostly what would be expected.

For example, the report shows a correlation between household incomes in neighborhoods and the quality of service. As an example, the average household incomes are higher in neighborhoods where AT&T has replaced copper with fiber. More striking is a correlation between service calls and household income. The annual frequency of repair calls is double for neighborhoods where the average household income is $42,000 per year or less compared to neighborhoods with household incomes of $88,000 or more.

Part of that difference is likely because more high-income neighborhoods have fiber, which has fewer problems and generally requires less maintenance. But there are also hints in the report that this might be due to economic redlining where higher-income neighborhoods get a higher priority from AT&T.

This is not the first time that AT&T has been accused of redlining. I wrote a blog a few years ago about a detailed study made in Dallas, Texas that showed a direct correlation between the technology being delivered and household incomes. That study followed up on a similar report from Cleveland, Ohio, and the same things could likely be said for the older telco networks in almost every big city.

The big telcos are in a rough spot. The older copper networks have largely outlived their economic lives and are full of problems. Over the years copper pairs of wire in the outdoor cables have gone bad and the remaining number of working copper pairs decreases each year. The electronics used to deliver older versions of DSL are long out of production by the telco vendors.

I’m not defending the big telcos, because the telcos caused a lot of their own problems. The telcos have deemphasized copper maintenance for decades. The copper networks would be in bad shape today even if they had been maintained perfectly, but purposefully neglected maintenance has hastened their deterioration. Additionally, the big telcos have been laying off copper-based technicians over the last decade, and the folks who knew how to best diagnose problems on copper networks are long gone from the companies. Consumers have painfully learned that the most important factor in getting a DSL or copper repair made is the knowledge of the technician who shows up to investigate the issue.

The California Commission is likely at some point to threaten the big telcos with penalties or sanctions, as has been done in the past and by regulators in other states. But regulators have little power to effect improvements in the situation. Regulators can’t force the telcos to upgrade to fiber. And no amount of documentation and complaining is going to make the obsolete copper networks function any better. AT&T just announced that as of October 1 it is no longer going to add new customers to its DSL network – that’s likely to really rile the California Commission.

I’m not sure exactly how it will happen, but the day is going to come, likely during the coming decade, when the telcos will just throw up their hands and declare they are walking away from copper, with zero pretense that they are going to replace it with something else. Regulators will rant and rave, but I can’t see any way that they can stop the inevitable – at some point copper networks won’t work well enough to be worth pretending otherwise.