Broadband on Tribal Lands

The American Indian Policy Institute recently issued a report titled Tribal Technology Assessment – The State of Internet Service on Tribal Lands. The report takes an in-depth look at broadband issues on tribal lands and reports on a survey of tribal members that is the first attempt to quantify the state of Internet access on tribal lands.

The FCC has often noted in various reports that tribal areas suffer from poor broadband. However, the FCC has been relying on the same faulty data to describe tribal lands that it uses to assess rural broadband in general. The data collected from ISPs in the Form 477 process has been discredited in numerous ways, the latest being a test of the FCC data in Virginia and Missouri by USTelecom which showed that the 477 data had underestimated unserved homes by 38%. This AIPI report takes the first real stab at understanding the true state of broadband on tribal lands.

According to the FCC’s 2018 Broadband Progress Report, 35% of the 1.5 million people living on tribal lands lack access to 25/3 Mbps broadband, compared to 8% for the country as a whole. The older 2016 Broadband Progress Report showed that 41% of tribal lands lacked access to 25/3 Mbps compared to 10% of the whole country. Not all tribal lands are rural, and the FCC report showed that 72% of rural tribal residents in the lower 48 states lack broadband access while 33% of urban ones lack access. It showed that 70% of rural tribal lands in Alaska lack broadband access while 15% of urban areas there lack access.

The AIPI study included a survey of numerous tribal members from around the country. Following is a small sample of the responses to the survey, which are covered in more depth in the report.

  • 35% of respondents own or use a smartphone. 24% own or use a desktop or laptop computer. 16% own or use a tablet. All of these results are far lower than the US average.
  • For survey respondents with access to the Internet, 36% have a connection through a telco like CenturyLink, Frontier or Windstream, 29% use a cellphone, 12% buy broadband from a cable company, 6% use satellite, and 1% still use dial-up. The rest of the respondents get access to the Internet at work, from libraries or at public hotspots.
  • Only 47% of respondents subscribe to a cellular plan, far below the 94% penetration for the country as a whole. 22% of respondents said that they have spotty access to home cellular coverage and 6% said they have no home cellphone coverage.
  • 50% of respondents said they feel limited by the broadband choices available to them.

The report makes specific recommendations for getting better broadband to tribal lands. Some of the recommendations include:

  • The FCC should earmark and prioritize some of the funding from the Universal Service Fund to solve the tribal broadband shortfalls instead of generically targeting hard-to-serve areas.
  • The RUS and USDA should identify and recommend pathways for Tribes to create rural cooperatives, consortia or creative partnerships to provide affordable broadband.
  • The FCC should prioritize spectrum licensing directly to Tribes or those who want to serve tribal lands.
  • Tribes should be allowed to challenge Form 477 data that misstates the broadband available on tribal lands.
  • Congress should provide an annual budget and provide more independence to the Office of Native Affairs and Policy at the FCC.

The report also includes numerous other recommendations for Congress, the FCC, large telcos and service providers, and tribal governments.

It’s clear that, in aggregate, tribal lands are more poorly served than rural America as a whole. The report describes a broadband environment on tribal lands that is lacking in both landline and cellular broadband. I’ve seen numerous papers and articles on the topic over the years, but this report goes into more depth than anything else I’ve read.

Welcome, Merit Network!

The rural broadband community has a new ally in Merit Network of Michigan. Merit Network is a non-profit network that is governed by Michigan’s public universities. The organization was founded in 1966 and was an early player that helped to develop some of the practices and protocols still used on the Internet. Their early mission was to find ways for universities to network together, something they accomplished by connecting Michigan and Michigan State in 1971. Merit went on to manage NSFNET, a nationwide network sponsored by the National Science Foundation that was used to connect advanced research labs and universities.

Over time, the organization collaborated with the Internet2 project and also turned its attention to Michigan, where it cobbled together a network comprised of owned and leased fiber used to provide bandwidth to K-12 schools around the state.

In the last year, Merit decided to further expand their mission. They now see that the biggest problem in Michigan education is the lack of home broadband for students. 70% of the teachers in Michigan assign computer-based homework, and yet 380,000 homes in Michigan don’t have a broadband connection. They are convinced, like many of us, that this homework gap is creating permanent harm and disadvantaging students without broadband.

The organization recently held their first statewide broadband summit and invited communities, service providers, anchor institutions, and broadband ‘activists’ to attend. I’m pleased to have been invited to be a speaker. The goal of the conference was to describe the homework gap and to talk about real solutions for solving the problem in the state. The summit hoped to bring together stakeholders in rural broadband to form alliances to tackle the problem.

Merit has also taken several extraordinary steps that are going to make them a major player in the national effort to solve the homework gap. They’ve undertaken what they call the Michigan Moonshot. This is an intensive effort to map and understand the availability of broadband around the state. The effort is being undertaken in collaboration with M-Lab and the Quello Center at Michigan State University. The concept is to encourage state educators to get students to take a specific speed test and to pair that effort with a program that teaches students about gathering scientific data.

The Moonshot effort is also going to correlate student test scores with broadband availability, in a way that guarantees student anonymity. This has been done before, but never on such a large scale. The project solicited participation from several school districts in Spring 2019 but expects to include many more in the future. The data collected will be analyzed by scientists at Michigan State, and the results of the Moonshot studies should be of interest to educators and rural broadband proponents all over the country. Preliminary results suggest there will be a strong, measurable negative impact for students without home broadband. The study will provide peer-reviewed statistical evidence of that impact and should be a useful tool to educate legislators and to goad communities into action to find a broadband solution.
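To give a sense of what the Moonshot data crunching might look like, here’s a minimal sketch of summarizing crowd-sourced speed tests by school district. It assumes a hypothetical CSV export named speed_tests.csv with district, download_mbps, and upload_mbps columns – not the actual M-Lab or Quello Center data pipeline, which hasn’t been published.

```python
# Minimal sketch: summarizing crowd-sourced speed tests by school district.
# Assumes a hypothetical CSV export ("speed_tests.csv") with columns
# district, download_mbps, upload_mbps -- not the actual M-Lab schema.
import csv
from collections import defaultdict
from statistics import median

def summarize(path="speed_tests.csv", threshold_mbps=25.0):
    by_district = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            by_district[row["district"]].append(float(row["download_mbps"]))

    for district, speeds in sorted(by_district.items()):
        below = sum(1 for s in speeds if s < threshold_mbps)
        print(f"{district}: {len(speeds)} tests, "
              f"median {median(speeds):.1f} Mbps down, "
              f"{below / len(speeds):.0%} below {threshold_mbps} Mbps")

if __name__ == "__main__":
    summarize()
```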

Merit is also nearing completion of a lengthy document they call the Michigan Moonshot Broadband Framework, which they hope will be a living document (meaning that collaborators can make edits) that lays forth a guide for communities that want to find a local broadband solution. This document is a step-by-step roadmap for how a community can tackle the lack of broadband.

It’s always good to have another major player in the national battle to tackle the lack of household broadband. I have high hopes that Merit Network will spur finding broadband solutions for rural and urban students in Michigan.

FCC Modifies CBRS Spectrum Rules

The FCC adopted Report and Order 18-149, which modifies the rules for using the 3.5 GHz spectrum band known as the Citizens Broadband Radio Service, or CBRS. This is a huge swath of spectrum covering 150 MHz between 3550 and 3700 MHz. The order initiates the process of activating the spectrum for widespread use. This spectrum sits between the two Wi-Fi bands and has great operating characteristics for wireless broadband.

The FCC plans to auction 70 MHz of the spectrum in June 2020 while authorizing the remaining 80 MHz for public use. In all cases, the spectrum must be shared with the military, which gets priority access to the spectrum at any time.

The spectrum also must be shared among users – something that will be monitored by authorized SAS (Spectrum Access System) administrators. The FCC named five administrators in the docket: Amdocs, CommScope, Federated Wireless, Google, and Sony. The administrators must report back to the FCC after 30 days on how their software is handling the tracking and sharing of the spectrum.
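For readers who wonder what the SAS administrators actually do, here’s a toy illustration of the core bookkeeping: handing out non-overlapping channels within the 3550–3700 MHz band and revoking grants when an incumbent such as the military claims priority. The class and method names are invented for this example; this is not the real SAS-CBSD protocol that the named administrators implement.

```python
# Toy illustration of the bookkeeping a Spectrum Access System performs:
# hand out non-overlapping 10 MHz channels and revoke grants that would
# collide with an incumbent (e.g. military) reservation. This is a made-up
# model, not the actual SAS-CBSD protocol used by the named administrators.
CBRS_LOW, CBRS_HIGH, CHANNEL_MHZ = 3550, 3700, 10

class ToySAS:
    def __init__(self):
        self.grants = {}        # device_id -> (low, high) in MHz
        self.incumbent = []     # list of (low, high) reserved for priority users

    def reserve_incumbent(self, low, high):
        self.incumbent.append((low, high))
        # Revoke any grant that now overlaps the incumbent band.
        self.grants = {d: g for d, g in self.grants.items()
                       if g[1] <= low or g[0] >= high}

    def request_grant(self, device_id):
        taken = list(self.grants.values()) + self.incumbent
        for low in range(CBRS_LOW, CBRS_HIGH, CHANNEL_MHZ):
            high = low + CHANNEL_MHZ
            if all(high <= lo or low >= hi for lo, hi in taken):
                self.grants[device_id] = (low, high)
                return (low, high)
        return None  # no spectrum currently available

sas = ToySAS()
print(sas.request_grant("wisp-1"))      # (3550, 3560)
sas.reserve_incumbent(3550, 3600)       # military priority window
print(sas.request_grant("wisp-2"))      # lands above 3600
```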

The FCC changed the plans for the auction and use of the spectrum that were originally proposed in 2015. The size of a license footprint is now set at the county level rather than the smaller census blocks. Licenses will now be issued for 10 years, with a provision to renew, instead of 3 years. Small businesses and rural bidders can get bidding credits. The FCC is also establishing a secondary market by allowing license holders to sell or lease the spectrum to others.

Of most interest to rural carriers are the bidding credits. Small businesses with gross revenues less than $55 million will get a 15% bidding credit. Very small businesses with annual gross revenues under $20 million will get a 25% bidding credit. Rural carriers with less than 250,000 customers will get a 15% credit. There will be additional credits given for serving tribal lands.
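As a quick bit of arithmetic, here’s how those credits would discount a hypothetical gross bid. The credit percentages come from the order as summarized above; the bid amount and the assumption that only the single largest credit applies are mine, for illustration only.

```python
# Applying the bidding credits described above to a hypothetical gross bid.
# The credit percentages follow the order as summarized here; the bid amount
# and the (non-stacking) treatment of multiple credits are illustrative
# assumptions, not a reading of the auction rules.
CREDITS = {
    "small_business": 0.15,       # gross revenues under $55 million
    "very_small_business": 0.25,  # gross revenues under $20 million
    "rural_carrier": 0.15,        # fewer than 250,000 customers
}

def net_payment(gross_bid, credit_keys):
    # Assume only the single largest applicable credit is used.
    credit = max((CREDITS[k] for k in credit_keys), default=0.0)
    return gross_bid * (1 - credit)

print(net_payment(100_000, ["rural_carrier"]))          # 85000.0
print(net_payment(100_000, ["very_small_business"]))    # 75000.0
```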

Much of the public comment in the docket centered on the size of the service areas. The FCC had originally considered using census blocks, reasoning that rural carriers could pursue reasonably small licenses for offering rural fixed wireless. The cellular carriers wanted much larger areas referred to as partial economic areas (PEAs). The FCC finally chose a license size in the middle, using county borders.

The spectrum in each county will be auctioned off in seven 10-MHz bands. Any bidder will be limited to using no more than 4 of the bands, or 40 MHz of spectrum. Interestingly, the ability to lease spectrum from others might mean that a wireless carrier could put together an even bigger swath of the spectrum.

The auction is going to be interesting to watch to see who shows up to bid. The cellular carriers have said that this spectrum is key to their mid-band 5G plans. The industry was already anticipating this order and this spectrum is already built into the new iPhone and a few other devices. The cellular carriers have plans to heavily use the 80 MHz of public spectrum, and they will certainly also chase the licensed spectrum in urban counties. We’ll have to see if they have any need or interest to pursue the licensed spectrum in rural counties, where one might think that the public bands of CBRS ought to satisfy their needs. If the big companies pursue the rural bands, they can drive prices up, even considering the rural company bidding credit.

Spectrum licenses have historically been awarded for much larger footprints and it will be interesting to see how awarding spectrum at the county level will impact the auction. There are already rural carriers using the public portions of the spectrum for fixed wireless service. What is unknown is how much the cellular carriers will also use the public spectrum in rural places, perhaps making the public band too crowded for getting the best use of fixed wireless. WISPs will likely prefer licensed spectrum if they can get it affordably since there will be zero interference. We’ll also have to see if many WISPs will have the financial wherewithal to pursue licensed spectrum.

One of the most interesting aspects of the order is allowing spectrum buyers to lease or sell spectrum. I fear this provision will attract speculators to the auction, which could drive up the cost of buying spectrum and also then drive up the cost of leasing the spectrum. But this also might give small ISPs that couldn’t qualify for the auction the ability to use licensed spectrum.

Maine Legislates a la Carte Programming

The Maine legislature passed a law that would create a la carte cable TV programming in the state. Titled “An Act to Expand Options for Consumers of Cable Television in Purchasing Individual Channels and Programs”, the act would require cable companies to offer individual channels to consumers starting September 19.

Comcast, and many of the primary programmers like A&E, C-Span, CBS, Discovery, Disney, Fox, Viacom, and New England Sports Network recently went to court to try to stop implementation of the law.

Consumers have been asking for a la carte programming for decades. People don’t like the idea of having to pay for channels they don’t watch. The cost of a cable subscription is the number one issue cited by the majority of the several million cord cutters who are now dropping cable every year.

It’s hard to think that the law can stand up to a legal challenge. There is a question of jurisdiction since Congress has enacted numerous laws governing cable TV that are still on the books and still enforced by the FCC. Those laws require cable companies that have enough capacity to carry a lot of channels to offer several tiers of programming. The courts will have to decide if the Maine legislature can override federal law.

Probably more salient are the contracts between programmers and cable companies. Those contracts are extremely specific about how the cable companies must carry their content. Any cable company that tries to enforce the law will be in direct conflict with those contracts. Laws often preempt contracts, but I find it likely that the programmers would yank programming from Maine cable companies rather than see a la carte programming go into effect. If this law is allowed to stand in Maine it would likely quickly appear in other states as well.

The next problem is technical. Cable companies would find it difficult to deliver only those channels a customer wants. Cable TV networks act like a big radio system and every channel on a cable system is broadcast to every home on the network. A cable company uses filters to block channels that a customer doesn’t subscribe to. This is fairly easily done today because channels are delivered in large blocks. If a customer doesn’t want to pay for a digital programming tier, the cable company blocks that whole tier. From my understanding, the blocking software used today doesn’t provide the ability to establish a custom blocking screen for each customer. This could probably be made to work with some assistance from the manufacturers of headend equipment and from CableLabs, but nobody has ever created the software to allow for custom blocking down to the individual channel level – it’s never been needed.
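To make the difference concrete, here’s a hypothetical data model contrasting today’s tier-level blocking with the per-customer channel entitlements that a la carte would require. Real conditional-access systems in a headend are far more involved; the point is only that the bookkeeping changes from ‘a few tiers’ to ‘a separate list for every subscriber’.

```python
# Hypothetical data model contrasting today's tier-level blocking with the
# per-customer channel entitlements a la carte would require. Real headend
# conditional-access systems are far more involved; this only shows why the
# bookkeeping changes from "a few tiers" to "a list per subscriber".
TIERS = {
    "basic":   {"ABC", "CBS", "NBC", "FOX", "PBS"},
    "digital": {"ESPN", "HGTV", "AMC", "Discovery"},
}

def allowed_by_tier(subscribed_tiers, channel):
    # Today: the filter only needs to know which tiers a home bought.
    return any(channel in TIERS[t] for t in subscribed_tiers)

def allowed_a_la_carte(subscribed_channels, channel):
    # A la carte: every home carries its own arbitrary channel list.
    return channel in subscribed_channels

print(allowed_by_tier({"basic"}, "ESPN"))                 # False
print(allowed_a_la_carte({"ESPN", "AMC", "PBS"}, "ESPN")) # True
```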

Individual channels are more easily delivered by companies that provide cable TV over an all-digital network like fiber or DSL. The technology for delivering cable TV over these networks is IPTV, and with IPTV the provider only transmits a channel to a home when somebody there wants to watch it. But even IPTV providers would need to buy modified software to give each customer a custom choice of channels.

It’s worth noting that this idea has been tried in a controlled way. Last year Charter began offering what they call Spectrum TV Choice, where customers can pick ten channels out of a list of 65 choices and bundle them with local channels. This package is priced at $25. Charter is able to provide this product because they step outside of their normal network topology and deliver the channels over the customer’s broadband connection using a Roku box at the customer end. Charter has not reported on the success of this package and I’ve not seen it advertised for a while.

The final issue to consider is the price. Even if a cable company unbundles channels, they would likely charge a lot per channel. Are consumers going to be better off if they buy a dozen unbundled channels priced at $5 each for $60 or get a 150-channel bundle for that same price? I believe that the cable companies would make buying single channels a costly endeavor.

It’s easy to understand why Maine legislators crafted this law. Cable programming has been increasing in price for years and is growing out of the range of affordability for many homes. Many homes don’t have sufficient broadband to cut the cord and feel trapped by the expensive cable options available to them. Surveys have also shown that the average home watches a dozen or fewer channels. My bet is that the legislation won’t survive a legal challenge, but I guess that’s why they have lawsuits.

Happy Birthday Wi-Fi

This year is the twentieth anniversary of the formation of the Wi-Fi Alliance and the launch of commercial Wi-Fi. Wi-Fi has become so ubiquitous in our lives that it’s hard to believe that it’s only been twenty years since all broadband connections came with wires.

In 1999 most people were still using dial-up and that’s the year when early adopters started buying DSL. I remember having incredibly long phone cords so that I could use my laptop at different places around the house. When I bought DSL, I became tied to the desk with the DSL modem because I couldn’t find equally long cords to carry DSL all over the house.

I remember the day I bought my first Linksys Wi-Fi router. At that time, I think the only device in my home that would talk to Wi-Fi was my laptop. I was able to use that laptop everywhere around the house, and I remember how liberating it felt to be able to use the laptop on the front porch. I got my first carrier-class Wi-Fi router when I upgraded to fiber on Verizon FiOS. Even then I think the only devices in my house that communicated with the Wi-Fi router were a desktop and some laptops – the world had not yet started to build Wi-Fi into numerous devices. Today my home is crammed full of Wi-Fi-capable devices and it’s hard to imagine going without the wireless technology.

There’s an article in the current Wired by Jeff Abramowitz discussing how Wi-Fi as we know it almost didn’t happen. At the time that 802.11b was introduced there was a competing technology called HomeRF that was being pushed as a home wireless solution. We easily could have ended up with HomeRF used in the home and 802.11b used in the office. That would have meant no easy transition of devices from office to home, which would likely have stymied the ubiquitous Wi-Fi we have today.

The growth of Wi-Fi required free spectrum to thrive, and for that, we can thank microwave ovens. Microwave ovens were first developed in the 1940s and emit radiation at the 2.45 GHz frequency. Over the following decades practically every home bought a microwave oven, and early models didn’t have great shielding. Since the microwave ovens polluted the spectrum on both sides of the 2.45 GHz band, the FCC decided in 1985 to open the frequency bands on both sides of that spectrum for unlicensed use, creating the ISM band that anybody can use. With the radio technology available at the time, nobody wanted to put any commercial usage too close to leaky microwave ovens. Since then, microwave ovens have gotten better shielding and radios have gotten better at pinpointing narrow channels, and we can now use most of what the FCC had considered in 1985 to be junk spectrum.

I am amused every time I hear somebody in the industry say that broadband is going wireless – and by that, they mean 5G cellular. Today the average cellphone customer uses about 6 GB of cellular data per month. What the cellphone companies don’t talk about is that the average cellphone user also consumes three times that much data each month over Wi-Fi connections. The fact is that our cellphones are mostly Wi-Fi devices that switch to cellular data when we’re out of reach of our homes, schools, and offices.

Wi-Fi is about to take another big leap forward as Wi-Fi 6 is officially released this month. This newest version of Wi-Fi uses less energy, reduces latency, increases performance in crowded wireless environments, and allows for faster speeds. Wi-Fi has gotten a lot more sophisticated with the introduction of techniques like beamforming, and the technology is light years ahead of what first came out in 1999. In those early days, a Wi-Fi router was just good enough to handle the 1 Mbps DSL and cable modem broadband of the day.

Device manufacturers love Wi-Fi. Estimates vary, but there are predictions that there will be something like 10 billion worldwide Wi-Fi connected devices in 2020 and 22 billion by 2025 – which would be nearly three Wi-Fi devices for every person on the planet. Those are unbelievable numbers for a technology that only came into existence twenty years ago. The manufacturers must be thrilled knowing that we’ll all be buying new devices to upgrade to Wi-Fi 6 over the next few years.

If Wi-Fi was a person, I’d bake them a cake or buy them a drink to honor this birthday. I’ll have to settle for thanking all of those who have contributed over the years to turn the Wi-Fi concept into the robust products that have changed all of our lives.

The Downside to Smart Cities

I read almost daily about another smart city initiative somewhere in the country as cities implement ideas that they think will improve the quality of life for citizens. I just saw a statistic that says that over two-thirds of cities have now implemented some form of smart city technology. Some of the applications make immediately noticeable differences like smart electric grids to save power, smart traffic lights to improve traffic flow, and smart streetlights to save electricity.

But there are a few downsides to smart city technology that can’t be ignored. The two big looming concerns are privacy and security. There was an article in Forbes earlier this year that asked the question, “Are Privacy Concerns Halting Smart Cities Indefinitely?” Citizens are pushing back against smart city initiatives that indiscriminately gather data about people. People don’t trust the government to not misuse personal data.

Some smart city initiatives don’t gather data. For instance, having streetlights that turn off when there is nobody in the area doesn’t require gathering any data on people. But many smart city applications gather mountains of data. Consider smart traffic systems which might gather massive amounts of data if implemented poorly. Smart traffic systems make decisions about when to change lights based upon looking at images of the cars waiting at intersections. If the city captures and stores those images, it accumulates a massive database of where drivers were at specific times. If those images are instantly discarded, never stored and never available for city officials to view then a smart traffic system would not be invading citizen privacy. But the natural inclination is to save this information. For instance, analysts might want to go back after a traffic accident to see what happened. And once the records are saved, law enforcement might want to use the data to track criminal behavior. It’s tempting for a city to collect and store data – all for supposedly good reasons – but eventually, the existence of the data can lead to abuse.
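A sketch of the ‘use the image, then discard it’ approach described above might look like the following. The detect_vehicles function is a stand-in for whatever computer vision a real traffic controller would use – the point is that only the derived vehicle count is retained, never the frame itself.

```python
# Sketch of the "use the image, don't keep the image" pattern described
# above. The detect_vehicles() call is a stand-in for whatever computer
# vision a real traffic controller uses; the point is that only the derived
# count is retained, never the frame itself.
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class IntersectionReading:
    intersection_id: str
    timestamp: datetime
    vehicle_count: int      # derived data only -- no image retained

def detect_vehicles(frame_bytes: bytes) -> int:
    # Placeholder for an image-recognition step.
    return frame_bytes.count(b"car")

def process_frame(intersection_id: str, frame_bytes: bytes) -> IntersectionReading:
    count = detect_vehicles(frame_bytes)
    reading = IntersectionReading(intersection_id,
                                  datetime.now(timezone.utc), count)
    del frame_bytes   # discard the raw image immediately after use
    return reading

print(process_frame("5th-and-Main", b"car bus car bike"))
```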

Many people are very leery of systems that capture public video images. If you look at smart city sales literature, it’s hard to find sensor systems that don’t toss in video cameras as part of any street sensor device. I just saw a headline saying that over 400 police departments now partner with Ring, the video cameras people install at their front door – which allow police to have massive numbers of security cameras in a city. It’s incredibly easy for such systems to be abused. Nobody is uncomfortable with using surveillance systems to see who broke into somebody’s home, but it’s highly disturbing if a policeman is using the same system to stalk an ex-wife. Video surveillance isn’t the only sensitive issue and smart city technology can gather all sorts of data about citizens.

What I find scarier is security since smart city systems can be hacked. Security experts recently told Wired that smart city networks are extremely vulnerable to hacking. Municipal computer systems tend to be older and not updated as regularly. Municipal computer systems have the same problems seen in corporations – weak passwords, outdated and ignored security patches, and employees that click on spam emails.

Smart city networks are more vulnerable to attack than corporate networks that sit behind layered firewalls because a smart city network can be attacked at the sensor edge devices. It’s well known that IoT devices are not as rigorously updated for security as other components of computer networks. I’ve seen numerous articles about hackers who were able to quickly defeat the security of IoT devices.

While there might be a concern that city employees will abuse citizen data, there is no doubt that hackers will. It’s not hard to envision hackers causing mischief by messing with traffic lights. It’s not hard to envision terrorists paralyzing a city by shutting down everything computer-related.

But the more insidious threat is hackers who quietly gain access to city systems and don’t overtly cause damages. I have one city client that recently found a system they believe has been compromised for over a decade. It’s not hard to envision bad actors accessing video data as a tool to use for burglary or car theft. It’s not hard to imagine a bad actor selling the data gathered on city networks to players on the dark web.

I’m not against smart city technology, and that’s not the point of this blog. But before a city deploys networks of hundreds of thousands of sensors, they need to have planned well to protect citizen data from misuse by city employees and by abuse from hackers. That sounds like a huge challenge to me and I have to wonder how many cities are capable of doing it right. We’ve seen numerous large corporations get hacked. Smart city networks with huge numbers of sensors are far less secure and look to be an open invitation to hackers.

Trusting Big Company Promises

When AT&T proposed to merge with Time Warner in 2016, attorneys at the Justice Department argued against the merger and said that the combined company would have too much power since it would be both a content provider and a content purchaser. Justice Department lawyers and various other antitrust lawyers warned that the merger would result in rate hikes and blackouts. AT&T counterargued that they are good corporate citizens and that the merger would be good for consumers.

In retrospect, it looks like the Justice Department lawyers were right. Soon after the merger, AT&T raised the prices for DirecTV and its online service DirecTV Now by $5 per month. The company raised the rates on DirecTV Now again in April of this year by $10 per month. AT&T accompanied the price increases with a decision to no longer negotiate promotional prices with TV customers. In the first two quarters of this year DirecTV lost over 1.3 million customers as older pricing packages expired and the company insisted that customers move to the new prices. AT&T says they are happy to be rid of customers that were not contributing to their bottom line.

In July of this year, CBS went dark for 6.5 million DirecTV and AT&T U-verse cable customers. AT&T said that CBS wanted too much money to renew a carriage deal. The two companies resolved the blackout in August.

Meanwhile, AT&T and Dish Network got into a dispute in late 2018 which resulted in turning off HBO and Cinemax on Dish Network. This blackout has carried into 2019 and the two sides still have not resolved the issue. The dispute cost Dish a lot of customers when the company was unable to carry Game of Thrones. Dish says that half of its 334,000 customer losses in the fourth quarter of 2018 were due to not having Game of Thrones.

I just saw headlines that AT&T is headed towards a rate fight with ESPN and warns there could be protracted blackouts.

It’s hard to fully fault any one of the AT&T decisions since they can be justified to some degree as smart business practices. But that’s how monopoly abuses generally work. AT&T wants to pay as little as possible when buying programming from others and wants to charge as much as possible when selling content. In the end, it’s consumers who pay for the AT&T practices – something the company had promised would not happen just months before the blackouts.

Programming fights don’t have to be so messy. Consider Comcast, which is also a programmer and the biggest cable TV company. Comcast has gotten into a few disputes over programming, particularly with regional sports programming. In a few of these disputes, Comcast was leveraging its programming power since it also owns NBC and other programming. But these cases mostly got resolved without blackouts.

Regulators are most worried about AT&T’s willingness to allow prolonged blackouts because during blackouts the public suffers. Constantly increasing programming costs have caused a lot of angst for cable TV providers, and yet most disputes over programming don’t result in turning off content. AT&T is clearly willing to flex its corporate muscles since it is operating from a position of power in most cases, as either an owner of valuable content or as one of the largest buyers of content.

From a regulatory perspective this raises the question of how the government can trust the big companies that have grown to have tremendous market power. The Justice Department sued to challenge the AT&T and Time Warner merger even after the merger was approved. That was an extraordinary suit that asked to undo the merger. The Justice Department argued that the merger was clearly against the public interest. The courts quickly ruled against that suit and it’s clear that it’s nearly impossible to undo a merger after it has occurred.

The fact is that companies with monopoly power almost always eventually abuse that power. It’s incredibly hard for a monopoly to decide not to act in its own best interest, even if those actions are considered as monopoly abuses. Corporations are made up of people who want to succeed and it’s human nature for people to take any market advantages their corporation might have. I have to wonder if AT&T’s behavior will make regulators hesitate before the next big merger. Probably not, but AT&T barely let the ink dry on the Time Warner merger before doing things they promised they wouldn’t do.

The Hidden World of Undersea Fiber

Since the first undersea cable was completed in 1858 to deliver telegraph messages between the US and England, we’ve built an extensive network of undersea cables that enables communications between continents.

Earlier this year there were 378 undersea fiber cables in place, stretching over 745,000 miles. Here’s an interactive map that shows all of the cables and also allows highlighting of individual cables. What’s most intriguing about the map is that there are a few cities around the world where numerous cables terminate. One such place is New York City; when Superstorm Sandy cut several of those fiber connections, the link between the US and Europe went dark for a few hours. The companies building the cables are now considering diversifying the terminal locations of the fiber cables.

Cables also routinely get cut by other events such as earthquakes, underwater mudslides, ship anchors and, in a tiny number of cases, sharks. There is an average of about 25 undersea fiber cuts per year. Repairs are made by ships that pull the cut ends of the fiber to the surface and splice them back together. There have been a few fiber cuts where there was suspicion of sabotage, but it’s never been proven. There is no real way to provide security for undersea cables and the routes of the cables are well known. It’s a poorly kept secret that spy agencies around the world tap into various cables to monitor traffic.

Undersea fiber is made differently than other fiber. Since the biggest danger of fiber cuts is in shallow water, the cable for shallow locations is as thick as a Coke can and is routinely buried under the seabed. At depths below 8,000 feet, where the dangers of fiber cuts are minimal, the cables are only as thick as a magic marker. There are cables laid as deep as 25,000 feet below the surface. One unusual aspect of undersea fiber is an underlying copper layer used to transmit the electricity needed to power the repeaters along the long undersea paths. The cables can be powered with as much as 10,000 volts to push the power along the longest Pacific routes.

The undersea fiber paths carry over 99% of the traffic between continents, with the small remainder carried by satellites. Satellites are never expected to carry more than a tiny fraction of the traffic due to the gigantic and constantly growing volume of worldwide data traffic. The FCC estimated that only 0.37% of US international data traffic is carried by satellite. The capacity of the newer cables is mind-boggling – the Marea cable completed between Spain and Virginia in 2018 has a capacity of 208 terabits per second. No satellite network is ever going to carry more than a tiny fraction of that kind of capacity. Worldwide bandwidth usage is exploding as the number of Internet users continues to grow (1 million new users were added to the web every day in 2018). And just like in the US, usage per person is growing everywhere at an exponential rate.

One thing that is obvious from the fiber map is that some routes simply don’t exist. The companies that fund the cables build them to satisfy existing broadband needs, which is why there are so many cables between places like the US and Europe or between countries in the Mediterranean. There are no routes between places like Australia and South America because there is not enough traffic between the two places to justify the cost of a new cable route. While cable routes terminate in India and China, one would expect to see more fibers added in coming years. These two countries are currently seeing the biggest numbers of new Internet users (in 2018 there were 100 million new users in India and 80 million new users in China).

The cables have traditionally been built and owned by the world’s biggest telecom companies. But in recent years, companies like Google, Facebook, Microsoft, and Amazon have been investing in new undersea fibers. This will allow them to carry their own traffic between continents in the same way they are also now carrying terrestrial traffic.

Undersea cables are designed for a 25-year life, and so cables are regularly being retired and replaced. Many cables aren’t reaching the 25-year life because the built-in repeaters become obsolete and it’s often more profitable to lay a larger capacity newer cable.

Funding the USF

The Universal Service Fund (USF) has a bleak future if the FCC continues to ignore the crisis in the funding mechanism that supports the program. The fund is fed by a fee levied against the combined interstate and international portion of landlines, cellphones and certain kinds of traditional data connections sold by the big telcos. The ‘tax’ on interstate services has grown to an indefensible 25% of the retail cost of the interstate and international portion of these products.

The FCC maintains arcane rules to determine the interstate portion of things like a local phone bill or a cellular bill. There are only a tiny handful of consultants that specialize in ‘separations’ – meaning the separation of costs into jurisdictions – who understand the math behind the FCC’s determination of the base for assessing USF fees.
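For readers who haven’t seen a bill calculation, here’s the arithmetic in simplified form. The 25% contribution factor is the figure cited above; the $50 bill and the 60% interstate allocation are made-up numbers, since the real jurisdictional split comes out of the separations process.

```python
# Arithmetic sketch of how the ~25% USF contribution factor applies only to
# the interstate and international portion of a bill. The $50 bill and the
# 60% interstate allocation are made-up illustration numbers; the actual
# jurisdictional split comes out of the separations process described above.
def usf_fee(total_bill, interstate_share, contribution_factor=0.25):
    interstate_revenue = total_bill * interstate_share
    return interstate_revenue * contribution_factor

bill = 50.00
fee = usf_fee(bill, interstate_share=0.60)
print(f"USF fee: ${fee:.2f} on a ${bill:.2f} bill "
      f"({fee / bill:.1%} of the total)")   # USF fee: $7.50 ... (15.0%)
```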

The USF has done a lot of good in the past and is poised to do even more. The segment of the program that brings affordable broadband to poor schools and libraries is a success in communities across the country. The USF is also used to subsidize broadband to non-profit rural health clinics and hospitals. I would argue that the Lifeline program that provides subsidized phone service has done a huge amount of good. The $9.25 per month savings on a phone or broadband bill isn’t as effective today as it once was because the subsidy isn’t pegged to inflation. But I’ve seen firsthand the benefits of this program, which has provided low-cost cellphones to the homeless and connected them to the rest of society. There are numerous stories of how the subsidized cellphones helped homeless people find work and integrate back into society.

The biggest potential benefit of the fund is bringing broadband solutions to rural homes that still aren’t connected to workable broadband. We’ve gotten a hint of this potential in some recent grant programs, like the CAF II reverse auction. We’re ready to see the USF create huge benefits as the FCC starts awarding $20.4 billion in grants from the USF, to be disbursed starting in 2021. If that program is administered properly, then huge numbers of homes are going to get real broadband.

This is not to say that the USF hasn’t had some problems. There are widespread stories about fraud in the Lifeline program, although many of those stories have been exaggerated in the press. A decent amount of what was called fraud was due to the ineptitude of the big phone companies, which continued to collect USF funding for people who had died or were no longer eligible for the subsidy. The FCC has taken major steps to fix this problem by creating a national database of those who are eligible for the Lifeline program.

The biggest recent problem with the USF came when the FCC used the fund to award $11 billion to the big telcos in the CAF II program to upgrade rural broadband to speeds of at least 10/1 Mbps. I’ve heard credible rumors that some of the telcos pocketed much of that money and only made token efforts to tweak rural DSL speeds up to a level that households still don’t want to buy. It’s hard to find anybody in the country who will defend this colossal boondoggle.

However, we’ve learned that, if used in a smart way, the USF can bring permanent broadband to rural America. Every little pocket of customers that gets fiber due to this funding can be taken off the list of places with no broadband alternatives. Areas that get fixed wireless are probably good for a decade or more, and hopefully the companies operating these networks will pour profits back into bringing fiber (which I know some USF fund recipients are doing).

But the USF is in real trouble if the FCC doesn’t fix the funding mechanism. As traditional telephone products with an interstate component continue to disappear, the funds going into the USF will shrink. If the funding shrinks, the FCC is likely to respond by cutting awards. Somebody might win $1 million from the upcoming grant program but then collect something less as the fund decreases over time.

The fix for the USF is obvious and easy. If the FCC expands the funding base to include broadband products, the percentage contribution would drop significantly from the current 25% and the fund could begin growing again. The current FCC has resisted this idea vigorously and it’s hard to ascribe any motivation other than that they want to see the USF shrink over time. This FCC hates the Lifeline program and would love to kill it. This FCC would prefer not to be in the business of handing out grants. At this point, I don’t think there is any alternative other than waiting for the day when there is a new FCC in place that embraces the good done by the USF rather than fighting against it.
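The arithmetic behind that fix is straightforward: the fund size stays roughly the same, so the contribution factor is simply the fund divided by the assessable revenue base. The dollar figures below are invented for illustration – only the direction of the effect (bigger base, smaller percentage) comes from the argument above.

```python
# Illustration of why broadening the contribution base lowers the rate:
# the fund size stays the same, so the factor is fund / base. The dollar
# figures below are invented for illustration; only the direction of the
# effect (bigger base, smaller percentage) comes from the argument above.
def contribution_factor(annual_fund_size, assessable_revenue_base):
    return annual_fund_size / assessable_revenue_base

fund = 9e9   # hypothetical annual USF disbursements

narrow_base = 36e9               # interstate voice/data only (hypothetical)
broad_base = narrow_base + 90e9  # add broadband revenues (hypothetical)

print(f"narrow base: {contribution_factor(fund, narrow_base):.1%}")  # 25.0%
print(f"broad base:  {contribution_factor(fund, broad_base):.1%}")   # 7.1%
```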

Farms Need Broadband Today

I recently saw a presentation by Professor Nicholas Uilk of South Dakota State University. He is the head of the first bachelor’s degree program in the country for Precision Agriculture. That program does just what the name suggests – it teaches budding farmers how to use technology in farming to increase crop yields – and those technologies depend upon broadband.

Precision agriculture is investigating many different aspects of farming. Consider the following:

  • There has been a lot of progress creating self-driving farm implements. These machines have been tested for a few years, but there are not a lot of farmers yet willing to set machines loose in the field without a driver in the cab. But the industry is heading towards the day when driverless farming will be an easily achievable reality.
  • Smart devices have moved past tractors and now include things like automated planters, fertilizer spreaders, manure applicators, lime applicators, and tillage machines.
  • The most data-intensive farming need is the creation of real-time variable rate maps of fields. Farmers can use smart tractors or drones to measure and map important variables that can affect a current crop, like the relative amounts of key nutrients, moisture content, and the amount of organic matter in the soil. This mapping creates massive data files that are sent off-farm. Expert agronomists review the data and prepare a detailed plan to get the best yields from each part of the field. The problem farms have today is promptly getting the data to and from the experts. Without fast broadband, the time required to exchange these files renders the data unusable because the crop grows too large for machines to make the suggested changes.
  • Farmers are measuring yields as they harvest so they can record exactly which parts of their fields produced the best results.
  • SDSU is working with manufacturers to develop and test soil sensors that could wirelessly transmit real-time data on pH, soil moisture, soil temperature, and transpiration. These sensors are too expensive today to be practical – but the cost of sensors should drop over time.
  • Research is being done to create low-cost sensors that can measure the health of individual plants.
  • Using sensors for livestock is the most technologically advanced area and there are now dairy farms that measure almost everything imaginable about every milking cow. The sensors for monitoring pigs, chickens, and other food animals are also advanced.
  • The smart farm today measures an immense amount of data on all aspects of running the business. This includes gathering data for non-crop parts of the business such as the performance of vehicles, buildings, and employees. The envisioned future is that sensors will be able to sense a problem in equipment and order a replacement part before a working machine fails.
  • One of the more interesting trends in farming is to record and report on every aspect of the food chain. When the whole country stopped eating romaine last year because of contamination at one farm, the industry started to develop a process where each step of the production of a crop is recorded, with the goal of reporting the history of the food to the consumer. In the not-too-distant future, a consumer will be able to scan a package of lettuce and know where the crop was grown, how it was grown (organically or not), when it was picked, and when it was shipped and brought to the store. This all requires creating a blockchain with an immutable history of each crop, from farm to store.

The common thread of all of these developments in precision agriculture is the need for good broadband. Professor Uilk says that transmitting the detailed map scans for crop fields realistically requires 100 Mbps upload to get the files to and from the experts in a timely exchange. That means fiber to the farm.
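A quick back-of-the-envelope calculation shows why. The 100 Mbps target comes from Professor Uilk’s estimate above; the 20 GB file size and the 3 Mbps figure for a typical rural DSL upload are my own assumptions for illustration.

```python
# Back-of-the-envelope transfer times for a field-map upload. The 100 Mbps
# target comes from the claim above; the 20 GB file size and the 3 Mbps
# "typical rural DSL upload" figure are assumptions for illustration only.
def upload_hours(file_gigabytes, upload_mbps):
    bits = file_gigabytes * 8e9        # GB -> bits (decimal)
    return bits / (upload_mbps * 1e6) / 3600

file_gb = 20
for label, mbps in [("rural DSL upload (assumed)", 3),
                    ("fiber upload (target)", 100)]:
    print(f"{label:28s}: {upload_hours(file_gb, mbps):6.1f} hours")
# rural DSL upload (assumed)  :   14.8 hours
# fiber upload (target)       :    0.4 hours
```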

A lot of the other applications require reliable wireless connections around the farm, and that implies a much better use of rural spectrum. Today the big cellular carriers buy the rights to most spectrum and then let it lie fallow in rural areas. We need to find a way to bring spectrum to the farm to take advantage of measuring sensors everywhere and for directing self-driving farm equipment.