Is Space Getting Too Busy?

Satellite broadband made the news again recently when the Chinese government said it had to adjust the orbit of its space station to avoid collisions with Starlink satellites. China claims it had to make these adjustments in July and October of last year.

The Chinese are not the only ones making this claim. In 2020, the CEO of Rocket Lab said that it is becoming increasingly difficult to plot a clear trajectory when launching a rocket. The head of the European Space Agency recently accused Starlink of “making the rules” for everybody else by the way the company is launching satellites. Elon Musk’s recent response to these criticisms is that space is huge and can accommodate tens of billions of satellites.

What seems to be in play here is that there are no international regulations in place to define parameters for space launches. The last international treaty on space is over fifty years old and never envisioned the huge number of satellites we’re already starting to see. Starlink alone already has over 1,700 satellites and plans to launch new satellites twice per month throughout 2022. One earlier Starlink business plan called for over 30,000 satellites.

There have already been a few notable collisions between satellites. The most recent was in March 2021, when the Chinese Yunhai-1 satellite was apparently destroyed by pieces of debris from a Russian satellite. There is a huge amount of space debris. There are over a million pieces of debris between 1 and 10 centimeters (about half an inch to 4 inches) in size. The U.S. Space Surveillance Network was actively tracking 15,000 objects larger than 4 centimeters as of November 2021.

Debris matters because orbiting objects are moving fast – at 150 miles above the earth, a satellite needs to be going 17,500 miles per hour to maintain orbit. A collision with even a small object can be devastating.
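As a rough sanity check on those numbers, here’s a short back-of-the-envelope calculation in Python. The orbital speed falls out of standard physics constants; the fragment size and the 10 km/s closing speed are illustrative assumptions rather than figures from any tracking agency.

```python
import math

# Standard gravitational parameter (GM) and mean radius of Earth
GM = 3.986e14           # m^3/s^2
EARTH_RADIUS = 6.371e6  # meters

# Circular orbital speed at roughly 150 miles (~241 km) of altitude
altitude_m = 150 * 1609.34
orbital_speed_ms = math.sqrt(GM / (EARTH_RADIUS + altitude_m))
print(f"Orbital speed: {orbital_speed_ms * 2.23694:,.0f} mph")  # roughly 17,400 mph

# Kinetic energy of a 1 cm aluminum fragment at an assumed 10 km/s closing speed
diameter_m = 0.01          # 1 centimeter (illustrative assumption)
density_aluminum = 2700    # kg/m^3
mass_kg = density_aluminum * (4 / 3) * math.pi * (diameter_m / 2) ** 3
closing_speed_ms = 10_000  # assumed head-on closing speed in orbit
energy_kj = 0.5 * mass_kg * closing_speed_ms ** 2 / 1000
print(f"Impact energy: {energy_kj:,.0f} kJ")  # tens of kilojoules - far more than a rifle bullet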

Scientists have been warning about space debris for a long time. In 1978, NASA scientist Donald Kessler warned that collisions in space could result in a cloud of debris that would create an effective barrier to launching rockets or sending people into space.

This is no longer a theoretical problem since much of what we do on earth is now reliant on satellites. Most of our cable TV programming is relayed through satellites. GPS relies on a series of satellites. Ships and airplanes navigate with support from satellites. Satellites are used to track weather patterns. There are now satellites tracking and monitoring everything from the movement of foreign armies to the water temperature of the oceans. There will soon be millions of broadband customers using low-orbit satellites.

It’s hard for any layman to understand the real risks. Some of the controversy likely stems from international wrangling between nations. But there are also a lot of notable scientists who worry that we might make space unusable.

It will be ironic if the world solves rural broadband with satellites only to find one day that there is too much debris to launch more satellites. It seems like a remote possibility, but some scientists say it’s possible. It makes sense for the international community to come together and work out rules that everybody can agree to.

Pushing Back Against Municipal Broadband

Here’s a cautionary tale for any city that provides broadband – the incumbent ISPs are always going to push back on city initiatives. The following is a story from this past summer that slipped off my radar. The city of Tucson, Arizona, launched a free wireless network to bring broadband to students in homes without broadband. As would be expected, the incumbent cable company, Cox Communications, fought against the city-provided broadband.

The city recognized the need for the network when it got requests for over 7,000 wireless access points from students during the pandemic. The city decided that the best long-term solution for the large number of unserved students was to create a private network using CBRS spectrum. We tend to think of municipal wireless networks as slow, but the city’s network rivals the broadband speeds offered by the cellular carriers in the city.

The city is using 4G LTE technology, which provides the same indoor coverage that cell phones receive. The city identified the 20 square miles of the city with the greatest number of students without home broadband. The initial network consisted of 40 small cell sites, and there are plans to add more. Broadband is received in the home through a typical cellular receiver and a SIM card that identifies the network. Broadband speeds are more than adequate to support a single student, with download speeds over 50 Mbps and upload speeds over 3 Mbps. This network avoids the problem of multiple students in a household sharing one connection because it provides a receiver for each student.

The network has some interesting features. It supports basic network slicing, which gives the school board the ability to prioritize school broadband traffic over other uses by students. The city is now looking at how to use this network for smart city purposes since the network provides broadband everywhere. The city is considering using the technology for monitoring the water system (critical infrastructure in arid Tucson), for providing ubiquitous broadband in parks, for connecting to all firefighters and other first responders, and for controlling traffic lights.

As might be expected, Cox Communications, the incumbent cable company, pushed back against the city network. When the wireless network was first discussed publicly, Cox made a proposal to provide 10 Mbps broadband to students in some selected parts of the city. When told that the wireless network would be delivering speeds of at least 50 Mbps, Cox countered that it would also be able to match the higher speed. But the first Cox offer is typical of most cable company low-income broadband programs – the speeds offered are far slower than what is delivered to a basic broadband customer.

Cox also sent a letter to the Tucson city council that warned about the problems that would be caused by broadband competition from the city. The letter included the same refrains we have seen elsewhere. The city shouldn’t be competing against the private sector. Cox warned that the city would have a hard time maintaining its new network. Cox also offered to partner with the city to build broadband in parts of the city not reached by Cox (with the city paying for the expansion).

I’m not sure that we should expect incumbents to act differently. As the cable company, Cox has a virtual monopoly on broadband since it largely competes only against DSL – and monopolies always fight to maintain monopoly power. Cable companies fight against all competition. They try every trick in the book to delay new commercial ISPs from building networks. But cable companies roll out a full-court press against city initiatives because they hope there is a political pressure point that will cause the city to reconsider. They know it’s a smart tactic because there are many cities that have canceled broadband plans after heavy lobbying by the incumbents.

In this case, the city didn’t back down and has launched the first phase of the wireless network. This became much easier for the city to finance after it received ARPA money from Congress that can be used to pay for broadband infrastructure. I am positive that the city will derive huge benefits from this network far past the day when the pandemic is behind us.

Fixed Cellular Broadband Performance

One of the first in-depth reviews I’ve found of T-Mobile’s fixed cellular broadband was published in The Verge. It’s not particularly flattering to T-Mobile, and this particular customer found the performance to be unreliable – fast sometimes and barely functioning at other times. But I’ve seen other T-Mobile customers raving about the speeds they are receiving.

We obviously can’t draw any conclusions based upon a single review by one customer, but his experience and the contrasting good reviews by others prompted me to talk about why performance on cellular broadband networks can vary so significantly.

I’ve always used the word wonky to describe cellular performance. It’s something I’ve tracked at my own house, and for years the reception of the cellular signal in my home office has varied hour-by-hour and day-by-day. This is a basic characteristic of cellular networks that you’ll never hear the cellular carriers talk about or admit.

The foremost issue with cellular signal strength is the distance of a customer from the local cellular tower. All wireless data transmissions weaken with distance. This is easy to understand – wireless transmissions spread out after they leave a transmitter. If two customers have the same receiver, the customer who is closer to the tower will receive a stronger signal and more data bits than somebody who is farther away, where the signal has spread and weakened. The customer in the bad review admitted he wasn’t super close to a cell tower, and somebody in his own neighborhood who lives closer to the cell site might have a stronger signal and a better opinion of the product.
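To put rough numbers on how quickly a signal fades with distance, here’s a minimal sketch using the textbook free-space path loss formula. The frequencies and distances are illustrative assumptions, and real-world losses are worse once buildings, foliage, and terrain get involved.

```python
import math

def free_space_path_loss_db(distance_km: float, freq_mhz: float) -> float:
    """Free-space path loss in dB: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# Illustrative cellular frequencies (low-band 600 MHz, mid-band 2500 MHz)
# and distances from the tower
for freq_mhz in (600, 2500):
    for distance_km in (1, 3, 8):
        loss = free_space_path_loss_db(distance_km, freq_mhz)
        print(f"{freq_mhz} MHz at {distance_km} km: {loss:.0f} dB path loss")
```

Every doubling of the distance adds about 6 dB of loss, which is why a neighbor a mile closer to the tower can see a noticeably stronger signal with the exact same receiver.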

There are other factors that create variability in a cellular signal. One is basic physics and the way radio waves behave outdoors. The cellular signal emanating from your local cell tower varies with the conditions in the atmosphere – the temperature, humidity, precipitation, and even wind. Things that stir up the air will affect the cellular signal. A wireless signal in the wild is unpredictable and variable.

Another issue is interference. Cellular companies that use licensed spectrum don’t want to talk about interference, but it exists everywhere. Some interference comes from natural sources like sunspots. But the biggest source of interference is the signal from other cell towers. Interference occurs any time there are multiple sources of the same frequency being used in the same area.

The customer in the review talks about the performance differing by the time of day. That is a phenomenon that can affect any broadband network, and here it depends on the local robustness of the T-Mobile network. Performance drops when networks start getting too busy. Every DSL or cable broadband customer has witnessed the network slowing at some times of the day. This can be caused by too many customers sharing the local network – in this case, the number of customers using a cell tower at the same time. The problem can also be caused by high regional usage if multiple cell towers share the same underlying broadband backbone.

The final issue that is somewhat unique to cellular networks is carrier priority. It’s highly likely that T-Mobile is giving first priority to customers using cell phones. That’s the company’s primary source of revenue, so cell phones get first dibs at the bandwidth. That means that in busy times the data left over for the fixed cellular customers might be greatly pinched. As T-Mobile and other carriers sell more of the fixed product, I predict the issue of having second priority will become a familiar phenomenon.

This blog is not intended to be a slam against fixed cellular broadband. The customer who wrote the review switched to cellular broadband to get a less expensive connection than from his cable company. This customer clearly bought into the T-Mobile advertising hype, because a cellular broadband signal will never be as reliable as a signal delivered through wires.

We can’t forget the real promise of fixed cellular broadband – bringing broadband to folks who have no alternatives. Somebody who switched to T-Mobile from a 1 Mbps rural DSL product would have written a different and more glowing review of the same product. The bottom line is that anybody buying cellular broadband should recognize that it’s a wireless product – and that means the product comes with the quirks and limitations that are inherent with wireless broadband. I imagine that we’re going to continue to see bad reviews from customers who want to save money but still expect the performance that comes with wired broadband. This is another reminder that it’s a mistake to judge a broadband product strictly by the download speed – a 100 Mbps cellular broadband product is not the same as a 100 Mbps cable company connection.

Courts Uphold 6 GHz WiFi Order

The right to use spectrum is turning into one of the most valuable pieces of real estate in the country. Cellular carriers have been paying huge sums in FCC auctions to get the rights to use spectrum. Perhaps the biggest sign of the value of spectrum is that there is seemingly a lawsuit every time the FCC makes a spectrum decision by those who want to see the spectrum used in other ways.

The United States Court of Appeals for the District of Columbia recently upheld the FCC’s April 2020 order that assigned 1,200 MHz of the 6 GHz spectrum band for public use. That order was challenged by a coalition of Apple and cellular carriers like AT&T. The challengers wanted some of the 6 GHz spectrum to be auctioned to those willing to pay the most for it – presumably the cellular carriers. Not surprisingly, the intervenors supporting the FCC decision were the big cable companies who take the most advantage of WiFi.

The original FCC order clearly supports the idea that the public needs better WiFi. The 6 GHz spectrum band will revolutionize the way we use WiFi in homes and businesses. WiFi performance is already slated to improve due to the new WiFi 6 technology. But adding the 6 GHz spectrum will drive performance to yet another level by adding seven 160 MHz channels to the WiFi environment.

The legal challenge followed the lines of other recent spectrum challenges that question the technical assumptions the FCC used in making the order. Since this new spectrum band is open to everybody, including the cellular carriers, the challengers argued, among other technical points, that there would be too much interference to make the spectrum useful for cellular data.

The Court came down clearly on the side of the FCC. The court said that the courts owe ‘significant deference’ to the FCC and its technical staff in deciding complicated technical issues. The same interference issues had been raised at the FCC during its deliberations – and the court was not interested in rehashing issues that the FCC had already considered.

The court did remand one minor issue back to the FCC – a concern raised by the National Association of Broadcasters about interference in the 2.4 GHz WiFi band. The FCC will revisit that issue.

The court decision finally frees up the 6 GHz spectrum for WiFi use. Vendors have assumed this would be ordered and have been building the capability to use the spectrum into devices over the last few years.

I think we’re going to look back at the FCC’s decision to expand WiFi and the Court’s backing of that order as the most important spectrum decision of our time. The current WiFi spectrum is overtaxed and growing busier by the day. This new spectrum will deliver the revitalized distribution of bandwidth around the home and the office that we’ve all been wanting.

Industry vendors haven’t been sitting still and have already started to develop the next generation of WiFi that will create another big leap in performance.

The Fixation with Broadband Speeds

Leichtman Research Group recently conducted a nationwide poll of 2,000 households asking about broadband usage. LRG has been tracking broadband for many years and reports that overall broadband subscriptions are at 87% of all households in 2021, up from 83% in 2016, and 69% in 2006. There are a few results of the survey that I think warrant additional examination.

According to the LRG survey, 63% of broadband subscribers rate the speed of their Internet connection as 8 to 10 on a 10-point scale, with 10 being excellent. In a similar question, 69% of respondents who subscribe to speeds of at least 100 Mbps are satisfied with their broadband service.

The big news here isn’t that many homes are satisfied with broadband speeds – it’s that more than a third of all households don’t think their broadband speeds are great, and that over 30% of homes subscribed to speeds over 100 Mbps are not satisfied with their broadband.

My consulting firm conducts surveys at the community level, and I often see similar results. LRG only released the high-level summary responses to the survey, so we don’t know all of the questions they asked. But if LRG only asked about broadband speeds, they asked the wrong question. This was borne out by the response to a different survey question, where 45% of the respondents in the LRG poll didn’t even know their subscribed broadband speed.

What I’ve found through surveys is that people don’t really care about broadband speeds – they care if their broadband connection works. Most people haven’t the slightest idea at any given time how much broadband speed is being delivered to their home. I sometimes hear dismay when people finally take a speed test and find out that they are only receiving a portion of what they are paying for – but even these people might not be unhappy with broadband if it works.

Here are the things I hear from the public when we ask the same kinds of questions that LRG asked:

  • One of the most common complaints I hear about big cable company broadband is outages. The issue in most markets is not hours-long outages but frequent small outages of a few minutes in duration. These small outages drive people mad because they invariably disrupt whatever people were doing with the broadband.
  • Right behind unhappiness with outages is unhappiness with slowdowns. The complaint I hear is that broadband works most of the time but then gets maddeningly slow at times. It’s almost as disruptive as an outage when broadband slows to a crawl.
  • The other big recurring complaint I hear is when broadband won’t perform an expected function. People become quickly unhappy with their broadband connection when they can’t do something like maintain a Zoom call or if they get kicked off a school or office connection. Somebody might have no trouble streaming Netflix movies but find that they can’t stream the more demanding live sports broadcasts.

This survey reminded me of something that has become clear to me over the last year – policymakers are fixated on broadband speeds but people care about broadband performance. These are not the same thing. I’ve never talked to anybody outside the industry who cares one iota about the definition of broadband – they only care if everybody in the household can use the Internet at the same time.

From a policy perspective, it seems like we’ve decided that there are no urban broadband problems because everybody can buy Internet faster than 100 Mbps download. Even if we set aside the issue that many homes can’t afford broadband, this survey points out that a lot of urban households find their broadband connection to be inadequate.

Our broadband policies are all built around the fixation with broadband speeds. Concentrating on speeds as the only way to measure broadband means that policymakers can yield to cable company lobbying that says we have no urban broadband issues.

I am absolutely thrilled that we are finally going to use some money to bring faster broadband to rural areas that have little or no broadband. But policymakers need to understand that this will not eliminate broadband problems elsewhere. A huge number of people in urban areas are still not happy with their broadband connection – and that’s a problem that’s not going to go away by throwing grant money at rural markets. If anything, building rural fiber is going to remind urban residents that they have something of lesser quality.

Final Treasury Rules for ARPA

The U.S. Department of the Treasury released the final rules applicable to using ARPA funding. This was the giant pile of $350 billion that was paid out to local governments, counties, and states to address issues related to the pandemic. These rules take effect on April 1, 2022. As usual, this is not a simple document and is 437 pages long.

There has been a lot of confusion in cities and counties about how they can use these funds. This final rule should answer any open questions about broadband because the final rules are clear about how these funds can be used. Following are the most important provisions of the final rules that relate to building broadband infrastructure.

Broadband Speed Tests. The interim Treasury rules had required that broadband could only be constructed in areas that were considered unserved or underserved, using the typical definition of speeds of 25/3 Mbps or less as unserved. The final rules eliminate any consideration of existing broadband speeds. The final rules allow broadband to be constructed to reach households and businesses with an identified need for additional broadband infrastructure investment.

There still must be a justification that the project addresses a problem highlighted by the pandemic. But rather than relying on speed as the justification, localities can consider broadband reliability, affordability, or access to a connection that meets or exceeds symmetrical 100 Mbps. Localities can document this need using any available data, including local speed tests, federal or state data, interviews with residents and businesses in the affected areas, and just about any other way that proves there is an existing broadband need.

This is an important clarification because it means local governments don’t have to spend energy interpreting the FCC maps or fighting with incumbent ISPs over them.

Matching Funds. As usual, the rules are written by lawyers and are never crystal clear. Following is the specific language in the order about using ARPA as matching funds for other grants:

Given the final rule’s revised requirements on eligible areas for investment, the final rule also modifies the interim final rule’s requirements around duplication of resources. Since recipients must ensure that the objective of the broadband projects is to serve locations with an identified need for additional broadband investment, the final rule provides that, to the extent recipients are considering deploying broadband to locations where there are existing enforceable federal or state funding commitments for reliable service at speeds of at least 100 Mbps download speed and 20 Mbps upload speed, recipients must ensure that SLFRF funds are designed to address an identified need for additional broadband investment that is not met by existing federal or state funding commitments. Recipients must also ensure that SLFRF funds will not be used for costs that will be reimbursed by the other federal or state funding streams.

I read this to mean that ARPA (SLFRF) funds can be used for matching, but with some important caveats. There is a two-part test for using ARPA along with another grant. First, ARPA must address a need not met by the existing federal or state grants, and second, it must not duplicate any payments for the same infrastructure.

The second test is the easy one to make and applies universally to all matching grants. For example, if one grant pays for 50% of an asset, the matching can be used to pay for the remainder – but the two grants together can’t pay more than 100% of the cost of the asset.

But the first test is going to require some legal gymnastics. The obvious justification for using ARPA is that the project won’t work without it. But there are ways to structure this that more easily meet the final rules. If ARPA is used for assets not included in the original state or federal grant, there is no duplication. That might mean using ARPA to build to additional households, or using the original grant to build the last-mile network while using ARPA to build drops. Folks are going to have to get creative to use ARPA money as matching – and this language lays out how to do so.
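As a concrete illustration of that two-part test, here’s a hypothetical sketch – the dollar figures and the helper function are invented for the example and aren’t drawn from the Treasury rule itself.

```python
def arpa_match_allowed(asset_cost: float, other_grant: float, arpa_funds: float,
                       arpa_covers_unmet_need: bool) -> bool:
    """Rough check of the two-part test described above (illustrative only).

    Test 1: ARPA must address a need not already met by the other federal/state funding.
    Test 2: The combined funding can't pay for more than 100% of the asset.
    """
    no_duplication = (other_grant + arpa_funds) <= asset_cost
    return arpa_covers_unmet_need and no_duplication

# Example: a $10M build where a state grant covers $5M of last-mile network and
# ARPA covers $5M of drops and extra households the state grant doesn't reach.
print(arpa_match_allowed(10_000_000, 5_000_000, 5_000_000, arpa_covers_unmet_need=True))  # True
```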

Low-Income. The final rules require ARPA-funded ISPs to participate in the Affordable Connectivity Program.

The Future of Data Storage

One of the consequences of our increased use of broadband is a big increase in the amount of data that we store outside our homes and businesses. The numbers are becoming staggering. There are currently about 3.7 billion people using the Internet, and together we generate 2.5 quintillion bytes of online data every day. The trend is that by 2025 we’ll be storing 160 zettabytes of data per year – a zettabyte is one trillion gigabytes.

I store a lot more data online than I used to. I now store things in the cloud all day long. When I edit a Word or Excel file, my changes are all stored in the cloud. I also back up every change on my computer every day. I write and store these blogs on a WordPress server. Copies of my blogs are automatically posted and stored on Twitter and LinkedIn. My company’s accounting records are stored online. When my car pulls into a driveway, it uploads diagnostics into the cloud. Pictures I take on my cellphone are automatically saved. I have no idea what else is being shared and saved by apps and software that I routinely use. As recently as a few years ago, I had very little interaction with the cloud, but I now seemingly live and work in the cloud.

It may be hard to believe, but in the foreseeable future, we’ll be facing a data storage crisis. We can’t afford the resources to be able to store data in the same way we do today. Data centers now use nearly 20% of the electricity used by technology. A single data center uses more electricity than a small town. We’re consuming electric generation resources and spinning off huge amounts of carbon dioxide to be able to save the 45 pictures taken at the birthday party you attended last night.

One of the obvious solutions to the data storage challenge is to throw away data. But who gets to decide what gets kept? The alternative is to find better methods of data storage that don’t require as much energy or take as much space. There are several areas of research into better storage – none is yet ready for prime time, but the ideas are intriguing.

5D Optical Storage.

Researchers at the University of Southampton are exploring data storage that uses lasers to etch data into small cubes of silica glass. The technique is being called 5D because, in addition to using the normal three spatial dimensions as storage parameters, it also uses the size and orientation of each recorded mark. Think of this as a 3D version of the way we used to store data on compact disks. This technology would be used for long-term storage since something that is etched into the glass is permanent. Storing data in glass requires no power, and the glass cubes are nearly indestructible. One small cube could store hundreds of terabytes of data.

Cold Storage.

Researchers at the University of Manchester are taking a different approach and looking at the benefits of storing data at super-cold temperatures. They have developed man-made molecules that can store several hundred times more data than the equivalent space on current hard drives. The key is to store the molecules at low temperatures. This is the same research group that discovered graphene and that works with unique molecular structures. Scientists have known that storage at lower temperatures can work, and the big breakthrough is having this technology work at 80 Kelvin using liquid nitrogen (which is significantly warmer than past work near absolute zero using liquid helium). Since our atmosphere is mostly nitrogen, the liquefied gas is inexpensive to produce. Scientists are hoping that the molecules will be able to store data for a long time, even if power is lost.

DNA Storage. Scientists have been intrigued for over a decade about using DNA as a storage medium. DNA could be an ideal storage medium because its base pairs and convoluted coiled structure provide a lot of storage capacity in a condensed space. A team at Harvard was able to store the code for a video on a strand of bacterial DNA. Since then, the commercial company Catalog has been working to perfect the technology. The company believes it is close to a breakthrough by using a synthetic version of a DNA molecule rather than living tissue. Data could be written to the molecule as it’s being assembled. Like etched glass, this is permanent storage and highly promising. This past summer, the company announced it was able to record the full 16 gigabytes of Wikipedia into a tiny vial of the material.

We need these technologies and others to work if we don’t want to drown in our own data.

Regulatory Capture

Regulatory capture is an economic principle that describes a situation where regulatory agencies are dominated by the industries they are supposed to be regulating. Economic theory predicts that regulators caught by regulatory capture act in ways that protect incumbent providers instead of the public interest. Unfortunately, the broadband industry is one of the best (or worst) examples of regulatory capture.

Economic theory says that it’s necessary to regulate any industry where a handful of large players control the market. Good regulation is not supposed to be antagonistic to large corporations but should strike a balance between what’s good for the industry and what’s good for the public. In a perfectly regulated industry, both the industry and the public should be miffed at regulators for not fully supporting their issues.

The concept of regulatory capture was proposed in the 1970s by George Stigler, a Nobel prize-winning economist. He described the characteristics of regulatory capture as follows. His list matches what’s happening in the broadband industry to a tee.

  • Regulated industries devote a large budget to influence regulators at the federal, state, and local levels. It’s typical that citizens don’t have the wherewithal to effectively lobby the public’s side of issues.
  • Regulators tend to come from the regulated industry, and they tend to take advantage of the revolving door to return to industry at the end of their stint as a regulator.
  • Regulation from the legislative process tends to become corrupt, such as when politicians vote for bills they don’t understand in return for contributions. Actual regulators can also be corrupt – but often regulators side with the industry over the public because they have an industry perspective.
  • In the extreme case of regulatory capture, the incumbents are deregulated from any onerous regulations while new market entrants have hoops to jump through.

There are many examples throughout history of economic cartels that successfully captured regulators. For example, the railroads in the 19th century ran roughshod over the economy and regulators. Unfortunately, the best current example of regulatory capture is the broadband industry, perhaps closely followed by big agriculture and big pharmaceuticals. There is no question that the power of the broadband industry is concentrated among only a few firms. Comcast, Charter, AT&T, and Verizon together serve 75% of all broadband customers in the country.

The FCC is a textbook example of a captured regulator. The FCC under Ajit Pai went so far as to deregulate broadband and to wash the FCC’s hands of broadband as much as possible by theoretically passing the little remaining regulation to the FTC. It’s hard to imagine an FCC more under the sway of the broadband industry than the last one.

But federal regulators are only the tip of the iceberg. The large ISPs have convinced most state regulators to deregulate (or never regulate) broadband. The ISPs spend an immense amount of money in state legislatures trying to get laws passed that favor the big ISPs or that disfavor any potential competitors. The surest sign of regulatory capture is that the big ISPs are also active at the local level and pressure City and County Councils to not consider local broadband projects. There is an immense lobbying effort currently underway to dissuade local politicians from using ARPA grant money for broadband.

We don’t have to look far to see how the industry has gotten its way with regulators. The U.S. has some of the most expensive broadband in the world. Tens of millions of homes have little or no broadband. The broadband industry has the worst overall customer service among all industries – and that’s saying something. The big ISPs abuse customers in other ways, such as quietly monetizing customers’ private data.

There is no real fix for regulatory capture other than a loud public outcry that brings back strong regulations. That can start at the FCC, but even that isn’t going to put a dent in the influence of the ISPs at the state and local level.

WiFi 7

The WiFi 6 standard was just approved in 2020 and is starting to find its way into home and business WiFi networks. If you’ve purchased a new WiFi router recently, there is a decent chance that it can support WiFi 6. However, the benefits of the new WiFi aren’t going to benefit a home until you’ve upgraded devices like TVs, computers, and various IoT devices to use the new standard. It’s likely to take years for WiFi 6 to get fully integrated into most homes.

But that hasn’t stopped vendors from already working on the next generation of WiFi technology, naturally being called WiFi 7. WiFi 7 promises faster speeds and lower latency and will be aimed at maximizing video performance. Qualcomm says it expects full WiFi 7 to become available after 2024. WiFi 7 will be using the new WiFi specification 802.11be.

The speed capabilities have climbed with each subsequent generation of WiFi. WiFi 5, which most of you are running in your homes today, has a maximum speed capability of 3.5 Gbps. WiFi 6 stepped maximum speeds up to 9.6 Gbps. The early specifications for WiFi 7 call for maximum data speeds of 30 Gbps. While most of us will never tax the capabilities of WiFi 5, faster speeds are important because it means a WiFi signal can burst huge amounts of data in a short period of time.
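Those headline maximums come from multiplying a handful of parameters: how many data subcarriers fit in the channel, how many bits each modulation symbol carries, the coding rate, the number of spatial streams, and the symbol duration. Here’s a minimal sketch of that arithmetic using the published 802.11ax numbers for WiFi 6; it ignores preamble and protocol overhead, and I haven’t plugged in WiFi 7 numbers because the 802.11be parameters aren’t final.

```python
def wifi_peak_phy_rate_gbps(data_subcarriers: int, bits_per_symbol: int,
                            coding_rate: float, spatial_streams: int,
                            symbol_duration_us: float = 13.6) -> float:
    """Approximate peak PHY rate: subcarriers x bits/symbol x coding rate x streams / symbol time."""
    bits_per_symbol_period = data_subcarriers * bits_per_symbol * coding_rate * spatial_streams
    return bits_per_symbol_period / symbol_duration_us / 1000  # Mbps -> Gbps

# WiFi 6 (802.11ax): a 160 MHz channel has 1,960 data subcarriers, 1024-QAM carries
# 10 bits per symbol, 5/6 coding, up to 8 spatial streams, 13.6 microsecond symbols
print(f"WiFi 6 peak: {wifi_peak_phy_rate_gbps(1960, 10, 5/6, 8):.1f} Gbps")  # ~9.6 Gbps

# WiFi 7 turns several of these knobs at once: wider channels (more subcarriers),
# 4096-QAM (12 bits per symbol), and up to 16 spatial streams.
```

Wider channels, 4096-QAM, and more spatial streams each multiply that peak figure, which is how the WiFi 7 projections get so large.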

WiFi 7 isn’t going to require additional WiFi spectrum – but more spectrum helps. The federal Court of Appeals for the District of Columbia just recently confirmed the FCC’s allocation of 6 GHz spectrum for WiFi use. The NCTA, representing the big cable companies, recently filed a request with the FCC asking the agency to consider opening additional new bands of free public spectrum for WiFi in the 7 GHz band and the lower 3 GHz band. The trade group argues that WiFi has created the largest public benefit of any spectrum band the FCC has ever authorized. The trade association argues that the world is finally becoming awash in Internet of Things devices, with Charter alone connecting to half a billion IoT devices.

There are two big changes that will differentiate WiFi 7 from WiFi 6. First is a major upgrade to the WiFi upload link. WiFi 7 will incorporate uplink multiuser multiple-input multiple-output (UL MU-MIMO) technology. The new technology creates multiple paths between a router and a WiFi-connected device. Connecting multiple paths to a device will significantly increase the amount of data that can be transmitted in a short period of time. WiFi 6 allows for a theoretical eight simultaneous paths – WiFi 7 increases that to sixteen paths.

WiFi 7 will also bring another improvement labeled coordinated multiuser MIMO (CMU-MIMO). CMU-MIMO will let a home device connect to more than one WiFi router at the same time. Picture your computer connected to several channels from different home routers. This coordination should result in faster connections, lower latency, and the ability to deliver high bandwidth to every corner of a home that is equipped with multiple WiFi access points. This is the most complicated challenge in the WiFi 7 specification.

WiFi 7 promises other improvements as well. The 802.11be specification allows for combining spectrum paths. Today’s WiFi routers use one channel of spectrum for a single device, and the planned upgrade would allow devices to combine signal paths from different WiFi frequencies at the same time. Another slated improvement is an upgrade to allow the use of 4096-QAM, which packs more data bits into each transmitted symbol than the 1024-QAM used in WiFi 6.

The 802.11be specification is pushing the limits of physics in a few places and may never fully achieve everything being promised. But it represents another huge upgrade for WiFi. There are a few vendors that will be previewing early versions of WiFi 7 technology at CES 2022. Maybe most of us will at least have made the transition to WiFi 6 before this latest and greatest WiFi is available.

FCC – Please Do the Right Thing with RDOF

The $42.5 billion in federal BEAD broadband grants being funded from the Infrastructure Investment and Jobs Act should be a gamechanger for rural broadband. Many hundreds of millions of dollars in grants will be given to each state to fund the construction of broadband networks. This is likely once-in-a-generation funding, so there will only be one chance to do this right.

There is one pending issue that could really gum up the BEAD grants – there are pending RDOF awards that should not be funded. These pending RDOF grants fall into three categories.

First are RDOF auction winners that have probably bitten off more than they can chew. An example of this might be LTD Broadband. I don’t have any inside knowledge of the company, but I’ve seen estimates that the company would need to raise something north of $7 billion to go along with the $1 billion RDOF award. There are likely other similar companies in the auction. The FCC has had almost a year to determine the financial ability of grant winners to fund the rest of the projects they won. If these companies don’t have the needed funding, it’s time for the FCC to cut them loose. This shouldn’t be a hard determination.

The second category is unique. Starlink won nearly a billion dollars of RDOF funding. There are still a lot of unknowns about the company’s capabilities. I know some of the RDOF areas won by Starlink are heavily wooded, and from what I hear, that’s a big problem for the technology. There are also still questions about the ability of Starlink to serve every home in a grant area – which is what the RDOF requires. I have nothing against Starlink, and if I lived in a rural area, I would have been first in line for the beta test. But the company is still an unproven technology in terms of being able to serve everybody. The company is still a start-up with no guarantee of success or longevity. At the end of the day, Starlink doesn’t meet the basic requirement that federal funding should only go to companies that can guarantee to meet the requirements of the award.

Finally, there are the RDOF auction winners that claim to be able to deliver gigabit wireless technology. Like Starlink, these are not field-proven technologies and likely will never deliver what is being promised. Over the last year, I haven’t talked to a single engineer who thinks it’s possible to deliver a wireless gigabit to every customer in rural Census blocks. I have no doubt that the new wireless technologies have the capability of being a lot faster than current fixed wireless technology. But these grants weren’t awarded to deliver a few hundred megabits per second. These grant winners should be tossed for overclaiming the technology, since doing so gave them an unfair advantage in the auction. If they had bid with the ability to deliver 200 Mbps, the auction results would have been very different. These companies gamed the auction rules, and that alone should have invalidated the awards. Unfortunately, the FCC might be ready to make these awards, having recently awarded funding to Resound Networks to provide gigabit wireless broadband.

It’s obvious that the FCC is already wrestling with all of these issues because it’s been eleven months since the RDOF winners filed their long-form information. But the FCC must know that the BEAD grants change everything. If it had known that BEAD grants were coming, the FCC probably would not have held the reverse auction. This new federal grant money changes the equation and brings a new paradigm that should make it easier for the FCC to make up its mind about questionable RDOF awards.

If the FCC gets this wrong, then the RDOF areas in question won’t be seeing the same broadband solutions that are coming everywhere else. The BEAD grants make it easy for the FCC to reject applicants that have not demonstrated the financial wherewithal to fund the promised RDOF solution. The BEAD grants should make it easy to reject Starlink – the company is still free to market broadband to all of rural America, and it already has a huge waiting list of people willing to buy service. The BEAD grants should make it easier for the FCC to admit it erred in letting bidders overclaim technology.

It’s not going to be easy for the FCC to publicly admit that it made some big mistakes in the RDOF auction. Most of these issues could have been avoided if the FCC had pre-screened applicants. Any technology that was not already proven to work in the real world should have been excluded from the auction. Applicants should have been given a dollar limit for participation in the auction based on their balance sheet. But the FCC has a chance to set this right by rejecting the questionable awards and letting the folks that live in these areas have a chance for a better and more permanent broadband solution through BEAD grants. FCC – please do the right thing.