CoBank Supports Telemedicine

For those who don’t know CoBank, it’s a bank that specializes in loans to telecom and electric cooperatives but has also funded numerous rural fiber projects for other borrowers over the years. In August CoBank filed comments in FCC Docket 18-213 in support of expanded use of the Universal Service Fund for rural telemedicine. CoBank is a huge supporter of telemedicine and has made substantial grants to telemedicine projects dealing with diabetes management, opioid abuse, prenatal maternity care, and veteran care.

As part of that filing, CoBank discussed a telemedicine trial they had sponsored in rural Georgia. The trial was conducted in conjunction with Perry Health, a software provider, and Navicent Health, a healthcare provider in Macon, Georgia. The trial covered 100 low-income patients with uncontrolled Type 2 diabetes. These patients were on a path towards kidney failure, amputation, loss of vision, and numerous other major related health problems. These are patients who would normally be making numerous emergency room visits and needing other costly medical procedures.

In the trial, the patients were provided with tablets containing Perry Health software that provided for daily interaction between patients and Navicent. Patients were asked to provide daily feedback on how they were sticking to the treatment regimen and provided information like the results of blood sugar tests, the food they ate each day, the amount of daily exercise, etc. The tablet portal also provided for communication from Navicent asking patients how they generally felt and providing recommendations when there was a perceived need.

The results of the trial were hugely positive. Of the 100 patients in the trial, 75% showed a marked improvement in their condition compared to the average diabetes patient. The improvements for these patients equated to reduced health care costs of $3,855 per patient per year through reduced doctor visits and reduced needs to make emergency room visits. The American Diabetes Association says that patients with Type 2 diabetes have 2-3 times the normally expected medical costs, which they estimate totals to $327 billion per year.
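
As a rough back-of-the-envelope check, here is a minimal sketch in Python using only the figures quoted above (and assuming the $3,855 savings applies to the 75 patients who improved):

    # Rough savings estimate for the trial cohort, using only the
    # figures quoted in the CoBank filing. Assumption: the per-patient
    # savings applies to the 75 patients who showed improvement.
    patients_in_trial = 100
    share_improved = 0.75
    savings_per_patient_per_year = 3855  # dollars

    improved_patients = int(patients_in_trial * share_improved)
    annual_savings = improved_patients * savings_per_patient_per_year
    print(f"{improved_patients} patients x ${savings_per_patient_per_year:,} "
          f"= ${annual_savings:,} per year")   # 75 patients x $3,855 = $289,125 per year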

Patients in the trial liked the daily interaction which forced them to concentrate on following treatment plans. They felt like their health care provider cared about how they were doing, and that led them to do better. After the trial, Navicent Health expanded the telemedicine plan to hundreds of other patients with Type 2 diabetes, heart failure, and Chronic Obstructive Pulmonary Disease (COPD).

One of the interesting outcomes of the trial was that patients preferred to use cellphones rather than the special tablets. The trial also showed the need for better broadband. One of the challenges of the trial was the effort required by Navicent Health to make sure that a patient had the needed access to broadband. To some degree using cellphones gives patients easier access to broadband. However, there are plenty of rural areas with poor cellular data coverage, and even where patients can use cellular data, the cost of cellular data can be prohibitive if heavily used. Landline broadband is still the preferred connection to take advantage of unlimited WiFi connections to the healthcare portal.

One thing that struck me about this study is that it sounds like it would be equally useful in urban areas. I’ve read that a lot of healthcare costs are due to patients who don’t follow through on a treatment plan after they go home from a procedure. The Navicent Health process could be applied to patients anywhere since the biggest benefit of the trial looks to be the daily interface between patient and doctor.

The FCC has already pledged to increase funding for the rural medicine component of the Universal Service Fund. However, that funding is restricted. For example, funding can only be granted to rural non-profit health care providers.

Telemedicine has been picking up steam and is seeing exponential growth. But telemedicine still represents only a few percent of rural healthcare visits. The primary barrier seems to be acceptance of the process and the willingness of health care providers to tackle telemedicine.

The Market Uses for CBRS Spectrum

Spencer Kurn, an analyst for New Street Research, recently reported on how various market players plan to use the 3.5 GHz CBRS spectrum recently approved by the FCC. I described the FCC’s order in this recent blog. As a quick refresher, this is a large swath of spectrum: the FCC has approved 80 MHz of it for public use and will auction another 70 MHz in 2020.

Cellular Bandwidth. Kurn notes that Verizon plans to use the new spectrum to beef up 4G bandwidth now and eventually 5G. Verizon plans to use the spectrum in dense markets and mostly outdoors. Cable companies like Comcast and Charter that have entered the wireless business are also likely to use the spectrum in this manner.

I’ve been writing for a while about the crisis faced by cellular networks. In urban areas they are seeing broadband usage double almost every two years, and keeping up with that growth is a huge challenge. It’s going to require a combination of new spectrum, more cell sites (mostly small cells), and the improvements that come with 5G, mostly the frequency slicing.
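
To put that doubling rate in perspective, here’s a small arithmetic sketch of how traffic compounds if urban cellular usage really does double every two years (the two-year doubling period from above is the only input):

    # Growth of cellular data demand if usage doubles every two years.
    doubling_period_years = 2

    for years in (2, 4, 6, 8, 10):
        growth = 2 ** (years / doubling_period_years)
        print(f"after {years:2d} years: {growth:4.0f}x today's traffic")
    # after 10 years: 32x today's traffic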

It’s interesting that Verizon only sees this as an outdoor solution, but that makes sense because this spectrum is close in characteristics to the existing WiFi bands and will lose most of its strength passing through a wall. It also makes sense that Verizon will only do this in metro areas where there is enough outdoor traffic for the spectrum to make a difference. I’ve seen several studies that say that the vast majority of cellular usage happens indoors in homes, businesses, and schools. But this spectrum still becomes one more piece of the solution to help relieve the pressure on urban cell sites.

For this to be of use the spectrum has to be built into cellular handsets. Apple recently announced that they are building the ability to receive Band 48 of CBRS into their new models. They join the Samsung Galaxy S10 and the Google Pixel 3 with the ability to use the spectrum. Over time it’s likely to be built into many phones, although handset manufacturers are always cautious because adding new spectrum bands to a handset increases the draw on the batteries.

Point-to-Multipoint Broadband. Numerous WISPs and other rural ISPs have been lobbying for the use of the spectrum since it can beef up point-to-multipoint broadband networks. These are networks that put a transmitter on a tower and then beam broadband to a dish at the subscriber premises. This technology is already widely in use, mostly with the 2.4 GHz and 5.0 GHz WiFi spectrum. Layering on CBRS will beef up the broadband that can be delivered over a customer link.

It will be interesting to see how that works in a crowded competitive environment. I am aware of counties today where there are half a dozen WISPs all using WiFi spectrum and the interference degrades network performance for everybody. The FCC has named five SAS Administrators that will monitor bandwidth usage and interference. The FCC rules don’t allow for indiscriminate deployment of public CBRS spectrum, and we’ll have to see how interference problems are dealt with.
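
The coordination rests on a three-tier priority ladder: federal incumbents first, the auctioned Priority Access Licenses (PAL) next, and the public General Authorized Access (GAA) tier last. The snippet below is a deliberately simplified sketch of that idea only – a real SAS also models propagation, power limits, and protection zones, so treat the function and data structures here as illustrative assumptions:

    # Toy illustration of CBRS tiered access: a grant request is only
    # approved if no higher-priority user already occupies the channel.
    # This is a simplification, not the actual SAS protocol.
    PRIORITY = {"incumbent": 0, "PAL": 1, "GAA": 2}  # lower number = higher priority

    def can_grant(requested_tier, channel, active_grants):
        """active_grants: list of (tier, channel) already authorized."""
        for tier, ch in active_grants:
            if ch == channel and PRIORITY[tier] < PRIORITY[requested_tier]:
                return False  # a higher tier already holds this channel
        return True

    active = [("incumbent", 1), ("PAL", 3)]
    print(can_grant("GAA", 3, active))   # False - a PAL holder has channel 3
    print(can_grant("GAA", 5, active))   # True  - channel 5 is clear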

One interesting player in the space will be AT&T, which intends to layer the frequency onto its fixed wireless product. AT&T widely used the technology to meet its CAF II buildout requirements and has mostly used PCS spectrum to meet the FCC requirement to deliver at least 10/1 Mbps speeds to customers. Adding the new spectrum should significantly increase rural customer speeds – at least for those within a few miles of AT&T towers.

Cable Company Edge-out. The most interesting new players considering the market are the cable companies. Kurn believes that the big cable companies will use the spectrum to edge out from their existing cable networks to serve nearby rural customers with fixed wireless. He says the cable networks could theoretically pass 6 – 7 million new homes if this is deployed everywhere. This is an ideal application for a cable company because they typically have fiber fairly close to the edge of their service areas. The point-to-multipoint wireless product operates best when the radios are fiber-fed, and cable companies could deliver a product in the 50-100 Mbps range where they have line-of-sight to customers.

We’ve already seen one cable company tackle this business plan. Midco was awarded $38.9 million in the CAF II reverse auctions to deploy 100 Mbps broadband in Minnesota and the Dakotas. Midco is going to need this spectrum, and probably even more, to deliver 100 Mbps to every customer. Their deployment is not really an edge-out, though, as the company plans to build networks that will cover entire rural counties with fixed wireless broadband.

Broadband on Tribal Lands

The American Indian Policy Institute recently issued a report titled Tribal Technology Assessment – The State of Internet Service on Tribal Lands. The report looks in-depth at broadband issues on tribal lands and reports on a survey of tribal members that is the first attempt ever at quantifying the issues of Internet access on tribal lands.

The FCC has often noted in various reports that tribal areas suffer from poor broadband. However, the FCC has been relying on the same faulty data to describe tribal lands that is used to look at rural broadband in general. The data collected from ISPs in the Form 477 process has been discredited in numerous ways, the latest being a test of the FCC data in Virginia and Missouri by USTelecom that showed that the 477 data had underestimated unserved homes by 38%. This AIPI report takes the first stab ever at understanding the real nature of broadband on tribal lands.

According to the FCC’s 2018 Broadband Progress Report, 35% of the 1.5 million people living on tribal lands lack access to 25/3 Mbps broadband, compared to 8% for the country as a whole. The older 2016 Broadband Progress Report showed that 41% of tribal lands lacked access to 25/3 Mbps compared to 10% of the whole country. Not all tribal lands are rural, and the FCC report showed that 72% of rural tribal residents in the lower 48 states lack broadband access while 33% of urban ones lack access. It showed that 70% of rural tribal lands in Alaska lack broadband access while 15% of urban areas there lack access.
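
Put in absolute terms – a quick calculation from the percentages above, taking the 1.5 million population figure at face value – that 35% works out to roughly half a million people:

    # Converting the FCC's percentages into people, using the figures
    # quoted from the 2018 Broadband Progress Report.
    tribal_population = 1_500_000
    share_lacking_25_3 = 0.35

    people_lacking = int(tribal_population * share_lacking_25_3)
    print(f"~{people_lacking:,} people on tribal lands lack 25/3 Mbps broadband")
    # ~525,000 people on tribal lands lack 25/3 Mbps broadband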

The AIPI study included a survey of numerous tribal members from around the country. Following is a small sample of the responses to the survey, which are covered in more depth in the report.

  • 35% of respondents own or use a smartphone. 24% own or use a desktop or laptop computer. 16% own or use a tablet. All of these results are far lower than the US average.
  • For survey respondents with access to the internet, 36% have a connection through a telco like CenturyLink, Frontier or Windstream, 29% use a cellphone, 12% buy broadband from a cable company, 6% use satellite, and 1% still use dial-up. The rest of the respondents get access to the Internet at work, from libraries or at public hotspots.
  • Only 47% of respondents subscribe to a cellular plan, far below the 94% penetration for the country as a whole. 22% of respondents said that they have spotty access to home cellular coverage and 6% said they have no home cellphone coverage.
  • 50% of respondents said they feel limited by the broadband choices available to them.

The report makes specific recommendations for getting better broadband to tribal lands. Some of the recommendations include:

  • The FCC should earmark and prioritize some of the funding from the Universal Service Fund to solve the tribal broadband shortfalls instead of generically targeting hard-to-serve areas.
  • The RUS and USDA should identify and recommend pathways for Tribes to create rural cooperatives, consortia or creative partnerships to provide affordable broadband.
  • The FCC should prioritize spectrum licensing directly to Tribes or those who want to serve tribal lands.
  • Tribes should be allowed to challenge Form 477 data that misstates the broadband available on tribal lands.
  • Congress should provide an annual budget and provide more independence to the Office of Native Affairs and Policy at the FCC.

The report also includes numerous other recommendations for Congress, the FCC, large telcos and service providers, and tribal governments.

It’s clear that, in aggregate, tribal lands are more poorly served than rural America as a whole. The report describes a broadband environment on tribal lands that is lacking in both landline and cellular broadband. I’ve seen numerous papers and articles on the topic over the years, but this report goes into more depth than anything else I’ve read on the subject.

Welcome, Merit Network!

The rural broadband community has a new ally in Merit Network of Michigan. Merit Network is a non-profit network governed by Michigan’s public universities. The organization was founded in 1966 and was an early player that helped develop some of the practices and protocols still used on the Internet. Their early mission was to find ways for universities to network together, something they accomplished by connecting Michigan and Michigan State in 1971. Merit went on to manage NSFNET, a nationwide network sponsored by the National Science Foundation that was used to connect advanced research labs and universities.

Over time, the organization collaborated with the Internet2 project and also turned its attention to Michigan, where it cobbled together a network comprised of owned and leased fibers used to provide bandwidth to K-12 schools around the state.

In the last year, Merit decided to further expand their mission. They now see that the biggest problem in Michigan education is the lack of home broadband for students: 70% of the teachers in Michigan assign computer-based homework, yet 380,000 homes in Michigan don’t have a broadband connection. They are convinced, like many of us, that this homework gap is creating permanent harm and disadvantaging students without broadband.

The organization recently held their first statewide broadband summit and invited communities, service providers, anchor institutions, and broadband ‘activists’ to attend the summit. I’m pleased to have been invited to be a speaker. The goal of the conference was to describe the homework gap and to talk about real solutions for solving the problem in the state. The summit hoped to bring together stakeholders in rural broadband to form alliances to tackle the problem.

Merit has also taken several extraordinary steps that are going to make them a major player in the national effort to solve the homework gap. They’ve undertaken what they call the Michigan Moonshot. This is an intensive effort to map and understand the availability of broadband around the state. The effort is being undertaken in collaboration with M-Lab and the Quello Center at Michigan State University. The concept is to encourage state educators to get students to take a specific speed test and to pair that effort with a program that teaches students about gathering scientific data.

The Moonshot effort is also going to correlate student test scores with broadband availability. This will be done in such a way as to guarantee student anonymity. This has been done before, but not on such a large scale. The project solicited participation from several school districts in Spring 2019 but expects to include many more in the future. The results of the data collection will be analyzed by scientists at Michigan State. The results of Moonshot studies should be of interest to educators and rural broadband proponents all over the country. Preliminary results show that it’s likely that there will be a strong measurable negative impact for students without home broadband. This study will provide peer-reviewed statistical evidence of that impact and should be a useful tool to educate legislators and to goad communities into action to find a broadband solution.

Merit is also nearing completion of a lengthy document they call the Michigan Moonshot Broadband Framework, which they hope will be a living document (meaning that collaborators can make edits) that lays out a guide for communities that want to find a local broadband solution. This document is a step-by-step roadmap for how a community can tackle the lack of broadband.

It’s always good to have another major player in the national battle to tackle the lack of household broadband. I have high hopes that Merit Network will spur finding broadband solutions for rural and urban students in Michigan.

Happy Birthday Wi-Fi

This year is the twentieth anniversary of the formation of the Wi-Fi Alliance and the launch of commercial Wi-Fi. Wi-Fi has become so ubiquitous in our lives that it’s hard to believe that it’s only been twenty years since all broadband connections came with wires.

In 1999 most people were still using dial-up, and that’s the year when early adopters started buying DSL. I remember having incredibly long phone cords so that I could use my laptop at different places around the house. When I bought DSL, I became tied to the desk with the DSL modem because I couldn’t find equally long cords to carry DSL all over the house.

I remember the day I bought my first Linksys Wi-Fi router. At that time, I think the only device in my home that would talk to Wi-Fi was my laptop. I was able to use that laptop everywhere around the house, and I remember how liberating it felt to be able to use the laptop on the front porch. I got my first carrier-class Wi-Fi router when I upgraded to fiber on Verizon FiOS. Even then I think the only devices in my house that communicated with the Wi-Fi router were a desktop and some laptops – the world had not yet started to build Wi-Fi into numerous devices. Today my home is crammed full of Wi-Fi-capable devices and it’s hard to imagine going without the wireless technology.

There’s an article in the current Wired by Jeff Abramowitz discussing how Wi-Fi as we know it almost didn’t happen. At the time that 802.11b was introduced there was a competing technology called HomeRF that was being pushed as a home wireless solution. We easily could have ended up with HomeRF used in the home and 802.11b used in the office. That would have meant no easy transition of devices from office to home, which would likely have stymied the ubiquitous Wi-Fi we have today.

The growth of Wi-Fi required free spectrum to thrive, and for that, we can thank microwave ovens. Microwave ovens were first developed in the 1940s and emit radiation at the 2.45 GHz frequency. In the 1960s practically every home bought a microwave oven, and at the time the devices didn’t have great shielding. Since microwave ovens polluted the spectrum on both sides of the 2.45 GHz band, the FCC decided in 1985 to open the frequency bands on both sides of that spectrum, creating the ISM band that anybody can use. With the radio technology available at the time, nobody wanted to put any commercial usage too close to leaky microwave ovens. Since then, microwave ovens have gotten better shielding, radios have become more accurate at pinpointing narrow channels, and we can now use most of what the FCC in 1985 had considered to be junk spectrum.
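
For the curious, the 2.4 GHz Wi-Fi channels sit right on top of that old ‘junk’ band. Here is a small sketch of the standard channel-center arithmetic (channels 1 through 13 are spaced 5 MHz apart starting at 2412 MHz, with channel 14 a special case at 2484 MHz):

    # Center frequencies of the 2.4 GHz Wi-Fi channels that occupy the
    # ISM band surrounding the 2.45 GHz microwave-oven frequency.
    def channel_center_mhz(channel):
        if channel == 14:              # Japan-only special case
            return 2484
        return 2412 + 5 * (channel - 1)

    for ch in (1, 6, 11, 13):
        print(f"channel {ch:2d}: {channel_center_mhz(ch)} MHz")
    # channel 1: 2412 MHz ... channel 11: 2462 MHz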

I am amused every time I hear somebody in the industry say that broadband is going wireless – and by that, they mean 5G cellular. Today the average cellphone customer uses about 6 GB of cellular data per month. What the cellphone companies don’t talk about is that the average cellphone user also consumes roughly three times that much data each month over Wi-Fi connections. The fact is that our cellphones are mostly Wi-Fi devices that switch to cellular data when we’re out of reach of our homes, schools, and offices.
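
That ratio is worth spelling out. A quick sketch using the rough figures above (6 GB of cellular data plus about three times that over Wi-Fi) shows that only about a quarter of a typical phone’s traffic actually rides the cellular network:

    # Share of a typical cellphone's monthly traffic that is cellular
    # vs. Wi-Fi, using the rough figures quoted above.
    cellular_gb = 6
    wifi_gb = 3 * cellular_gb          # roughly 3x the cellular usage
    total_gb = cellular_gb + wifi_gb

    print(f"cellular share: {cellular_gb / total_gb:.0%}")  # 25%
    print(f"Wi-Fi share:    {wifi_gb / total_gb:.0%}")      # 75%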

Wi-Fi is about to take another big leap forward as Wi-Fi 6 is officially released this month. This newest version of Wi-Fi uses less energy, reduces latency, increases performance in crowded wireless environments, and allows for faster speeds. Wi-Fi has gotten a lot more sophisticated with the introduction of techniques like beamforming, and the technology is light years ahead of what first came out in 1999. In those early days, a Wi-Fi router was just good enough to handle the 1 Mbps DSL and cable modem broadband of the day.

Device manufacturers love Wi-Fi. Estimates vary, but there are predictions that there will be something like 10 billion worldwide Wi-Fi connected devices in 2020 and 22 billion by 2025 – which would be nearly three Wi-Fi devices for every person on the planet. Those are unbelievable numbers for a technology that only came into existence twenty years ago. The manufacturers must be thrilled knowing that we’ll all be buying new devices to upgrade to Wi-Fi 6 over the next few years.

If Wi-Fi was a person, I’d bake them a cake or buy them a drink to honor this birthday. I’ll have to settle for thanking all of those who have contributed over the years to turn the Wi-Fi concept into the robust products that have changed all of our lives.

The Downside to Smart Cities

I read almost daily about another smart city initiative somewhere in the country as cities implement ideas that they think will improve the quality of life for citizens. I just saw a statistic that says that over two-thirds of cities have now implemented some form of smart city technology. Some of the applications make immediately noticeable differences like smart electric grids to save power, smart traffic lights to improve traffic flow, and smart streetlights to save electricity.

But there are a few downsides to smart city technology that can’t be ignored. The two big looming concerns are privacy and security. There was an article in Forbes earlier this year that asked the question, “Are Privacy Concerns Halting Smart Cities Indefinitely?” Citizens are pushing back against smart city initiatives that indiscriminately gather data about people. People don’t trust the government to not misuse personal data.

Some smart city initiatives don’t gather data. For instance, having streetlights that turn off when there is nobody in the area doesn’t require gathering any data on people. But many smart city applications gather mountains of data. Consider smart traffic systems, which might gather massive amounts of data if implemented poorly. Smart traffic systems make decisions about when to change lights by looking at images of the cars waiting at intersections. If the city captures and stores those images, it accumulates a massive database of where drivers were at specific times. If those images are instantly discarded – never stored and never available for city officials to view – then a smart traffic system does not invade citizen privacy. But the natural inclination is to save this information. For instance, analysts might want to go back after a traffic accident to see what happened. And once the records are saved, law enforcement might want to use the data to track criminal behavior. It’s tempting for a city to collect and store data – all for supposedly good reasons – but eventually, the existence of the data can lead to abuse.
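
The privacy-friendly version of that design is easy to describe: use each camera frame to make the light-timing decision, then throw the frame away instead of writing it to storage. The sketch below is a hypothetical illustration of that ‘process and discard’ pattern – the function names and the simple timing rule are made up, and a real traffic system would be far more complex:

    # Hypothetical "process and discard" pattern for a smart traffic
    # light: each frame is used to count waiting cars and then dropped,
    # so no database of images ever accumulates.
    def count_waiting_cars(frame):
        # Stand-in for a real computer-vision model (assumption).
        return frame.count("car")

    def decide_green_time(frame):
        cars = count_waiting_cars(frame)
        green_seconds = min(10 + 2 * cars, 60)   # simple illustrative rule
        # The frame goes out of scope here - nothing is written to disk,
        # so there is no stored record of who was at the intersection.
        return green_seconds

    print(decide_green_time("car car car truck"))  # 16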

Many people are very leery of systems that capture public video images. If you look at smart city sales literature, it’s hard to find sensor systems that don’t toss in video cameras as part of any street sensor device. I just saw a headline saying that over 400 police departments now partner with Ring, the video cameras people install at their front door – which allow police to have massive numbers of security cameras in a city. It’s incredibly easy for such systems to be abused. Nobody is uncomfortable with using surveillance systems to see who broke into somebody’s home, but it’s highly disturbing if a policeman is using the same system to stalk an ex-wife. Video surveillance isn’t the only sensitive issue and smart city technology can gather all sorts of data about citizens.

What I find scarier is security, since smart city systems can be hacked. Security experts recently told Wired that smart city networks are extremely vulnerable to hacking. Municipal computer systems tend to be older and not updated as regularly, and they have the same problems seen in corporations – weak passwords, outdated and ignored security patches, and employees who click on spam emails.

Smart city networks are more vulnerable to attack than corporate networks that sit behind layered firewalls because a smart city network can be attacked at the sensor edge devices. It’s well known that IoT devices are not as rigorously updated for security as other components of computer networks. I’ve seen numerous articles of hackers who were able to quickly defeat the security of IoT devices.

While there might be a concern that city employees will abuse citizen data there is no doubt that hackers will. It’s not hard to envision hackers causing mischief by messing with traffic lights. It’s not hard to envision terrorists paralyzing a city by shutting down everything computer-related.

But the more insidious threat is hackers who quietly gain access to city systems and don’t overtly cause damage. I have one city client that recently found a system they believe has been compromised for over a decade. It’s not hard to envision bad actors accessing video data as a tool for burglary or car theft. It’s not hard to imagine a bad actor selling the data gathered on city networks to players on the dark web.

I’m not against smart city technology, and that’s not the point of this blog. But before a city deploys networks of hundreds of thousands of sensors, they need to have planned well to protect citizen data from misuse by city employees and by abuse from hackers. That sounds like a huge challenge to me and I have to wonder how many cities are capable of doing it right. We’ve seen numerous large corporations get hacked. Smart city networks with huge numbers of sensors are far less secure and look to be an open invitation to hackers.

Trusting Big Company Promises

When AT&T proposed to merge with Time Warner in 2016, attorneys at the Justice Department argued against the merger and said that the combined company would have too much power since it would be both a content provider and a content purchaser. Justice Department lawyers and various other antitrust lawyers warned that the merger would result in rate hikes and blackouts. AT&T counterargued that they are good corporate citizens and that the merger would be good for consumers.

In retrospect, it looks like the Justice Department lawyers were right. Soon after the merger, AT&T raised the prices for DirecTV and its online service DirecTV Now by $5 per month. The company raised the rates on DirecTV Now again in April of this year by $10 per month. AT&T accompanied the price increases with a decision to no longer negotiate promotional prices with TV customers. In the first two quarters of this year DirecTV lost over 1.3 million customers as older pricing packages expired and the company insisted that customers move to the new prices. AT&T says they are happy to be rid of customers that were not contributing to their bottom line.

In July of this year, CBS went dark for 6.5 million DirecTV and AT&T U-verse cable customers. AT&T said that CBS wanted too much money to renew a carriage deal. The two companies resolved the blackout in August.

Meanwhile, AT&T and Dish Network got into a dispute in late 2018 that resulted in HBO and Cinemax going dark on Dish Network. This blackout has carried into 2019 and the two sides still have not resolved the issue. The dispute cost Dish a lot of customers when the company was unable to carry Game of Thrones. Dish says that half of its 334,000 customer losses in the fourth quarter of 2018 were due to not having Game of Thrones.

I just saw headlines that AT&T is headed towards a rate fight with ESPN and warns there could be protracted blackouts.

It’s hard to fully fault any one of the AT&T decisions since they can be justified to some degree as smart business practices. But that’s how monopoly abuses generally work. AT&T wants to pay as little as possible when buying programming from others and wants to charge as much as possible when selling content. In the end, it’s consumers who pay for the AT&T practices – something the company had promised would not happen just months before the blackouts.

Programming fights don’t have to be so messy. Consider Comcast, which is both a programmer and the biggest cable TV company. Comcast has gotten into a few disputes over programming, particularly with regional sports programming. In a few of these disputes, Comcast was leveraging its programming power since it also owns NBC and other programming. But these cases mostly got resolved without blackouts.

Regulators are most worried about AT&T’s willingness to allow prolonged blackouts because during blackouts the public suffers. Constantly increasing programming costs have caused a lot of angst for cable TV providers, and yet most disputes over programming don’t result in turning off content. AT&T is clearly willing to flex its corporate muscles since it is operating from a position of power in most cases, as either an owner of valuable content or as one of the largest buyers of content.

From a regulatory perspective this raises the question of how the government can trust the big companies that have grown to have tremendous market power. The Justice Department sued to challenge the AT&T and Time Warner merger even after the merger was approved. That was an extraordinary suit that asked to undo the merger. The Justice Department argued that the merger was clearly against the public interest. The courts quickly ruled against that suit and it’s clear that it’s nearly impossible to undo a merger after it has occurred.

The fact is that companies with monopoly power almost always eventually abuse that power. It’s incredibly hard for a monopoly to decide not to act in its own best interest, even if those actions are considered as monopoly abuses. Corporations are made up of people who want to succeed and it’s human nature for people to take any market advantages their corporation might have. I have to wonder if AT&T’s behavior will make regulators hesitate before the next big merger. Probably not, but AT&T barely let the ink dry on the Time Warner merger before doing things they promised they wouldn’t do.

The Hidden World of Undersea Fiber

Since the first undersea cable was completed in 1858 to deliver telegraph messages between the US and England, we’ve had an extensive network of undersea cables enabling communications between continents.

Earlier this year there were 378 undersea fiber cables in place, stretching over 745,000 miles. Here’s an interactive map that shows all of the cables and also allows highlighting of individual cables. What’s most intriguing about the map is that there are a few cities around the world where numerous cables terminate. One such place is New York City, where Superstorm Sandy cut several fiber connections and the link between the US and Europe went dark for a few hours. The companies building the cables are now considering diversifying the terminal locations of the cables.

Cables also routinely get cut by other events such as earthquakes, underwater mudslides, ship anchors, and even, in a tiny number of cases, sharks. There are an average of about 25 undersea fiber cuts per year. Repairs are made by ships that pull the cut ends of the fiber to the surface and splice them back together. There have been a few fiber cuts where sabotage was suspected, but it’s never been proven. There is no real way to provide security for undersea cables, and the routes of the cables are well known. It’s a poorly kept secret that spy agencies around the world tap into various cables to monitor traffic.

Undersea fibers are made differently than other fiber. Since the biggest danger of fiber cuts is in shallow water, the cable for shallow locations is as thick as a coke can and is routinely buried beneath the sea floor. At depths below 8,000 feet, where the dangers of fiber cuts are minimal, the cables are only as thick as a magic marker. There are cables laid as deep as 25,000 feet below the surface. One unusual aspect of undersea fibers is an underlying copper layer that is used to transmit the electricity needed to power the repeaters along the long undersea paths. The cables can be powered with as much as 10,000 volts to push the power along the longest Pacific routes.

The undersea fiber paths carry over 99% of the traffic between continents, with the small remainder carried by satellites. Satellites are never expected to carry more than a tiny fraction of the traffic due to the gigantic and constantly growing volume of worldwide data traffic. The FCC estimated that only 0.37% of US international data traffic is carried by satellite. The capacity of the newer cables is mind-boggling – the Marea cable completed between Spain and Virginia in 2018 has a capacity of 208 terabits per second. No satellite network is ever going to carry more than a tiny fraction of that kind of capacity. Worldwide bandwidth usage is exploding as the number of Internet users continues to grow (1 million new users were added to the Web every day in 2018). And just like in the US, usage per person is growing everywhere at an exponential rate.
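
The scale difference is easy to illustrate with the two numbers quoted above: at 208 terabits per second, the Marea cable alone could move on the order of two exabytes per day, which is why satellite’s 0.37% share is unlikely to grow:

    # What 208 Tbps of capacity means per day (the Marea cable figure).
    capacity_tbps = 208
    bits_per_day = capacity_tbps * 1e12 * 86_400   # seconds in a day
    exabytes_per_day = bits_per_day / 8 / 1e18

    print(f"~{exabytes_per_day:.1f} exabytes per day")  # ~2.2 exabytes per day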

One thing that is obvious from the fiber map is that there are parts of the world where routes simply don’t exist. The companies that fund the cables build them to satisfy existing broadband needs, which is why there are so many cables between places like the US and Europe or between countries in the Mediterranean. There are no routes between places like Australia and South America because there is not enough traffic between the two places to justify the cost of a new cable route. While cable routes terminate in India and China, one would expect to see more fibers added in coming years, since these two countries are currently seeing the largest numbers of new Internet users (in 2018 there were 100 million new users in India and 80 million new users in China).

The cables have traditionally been built and owned by the world’s biggest telecom companies. But in recent years, companies like Google, Facebook, Microsoft, and Amazon have been investing in new undersea fibers. This allows them to carry their own traffic between continents in the same way they now carry terrestrial traffic.

Undersea cables are designed for a 25-year life, and so cables are regularly being retired and replaced. Many cables aren’t reaching the 25-year life because the built-in repeaters become obsolete and it’s often more profitable to lay a larger capacity newer cable.

Farms Need Broadband Today

I recently saw a presentation by Professor Nicholas Uilk of South Dakota State University. He is the head of the first bachelor’s degree program in the country in Precision Agriculture. The program does just what the name suggests – it teaches budding farmers how to use technology in farming to increase crop yields – and those technologies depend upon broadband.

Precision agriculture is investigating many different aspects of farming. Consider the following:

  • There has been a lot of progress creating self-driving farm implements. These machines have been tested for a few years, but there are not a lot of farmers yet willing to set machines loose in the field without a driver in the cab. But the industry is heading towards the day when driverless farming will be an easily achievable reality.
  • Smart devices have moved past tractors and now include things like automated planters, fertilizer spreaders, manure applicators, lime applicators, and tillage machines.
  • The most data-intensive farming need is the creation of real-time variable-rate maps of fields. Farmers can use smart tractors or drones to measure and map important variables that affect a current crop, like the relative amounts of key nutrients, moisture content, and the amount of organic matter in the soil. This mapping creates massive data files that are sent off-farm. Expert agronomists review the data and prepare a detailed plan to get the best yields from each part of the field. The problem farms have today is getting the data to and from the experts promptly. Without fast broadband, the time required to exchange these files renders the data unusable because the crop grows too large for machines to make the suggested changes.
  • Farmers are measuring yields as they harvest so they can record exactly which parts of their fields produced the best results.
  • SDSU is working with manufacturers to develop and test soil sensors that could wirelessly transmit real-time data on pH, soil moisture, soil temperature, and transpiration. These sensors are too expensive today to be practical – but the cost of sensors should drop over time.
  • Research is being done to create low-cost sensors that can measure the health of individual plants.
  • Using sensors for livestock is the most technologically advanced area and there are now dairy farms that measure almost everything imaginable about every milking cow. The sensors for monitoring pigs, chickens, and other food animals are also advanced.
  • The smart farm today measures an immense amount of data on all aspects of running the business. This includes gathering data for non-crop parts of the business such as the performance of vehicles, buildings, and employees. The envisioned future is that sensors will be able to detect a problem in equipment and order a replacement part before a working machine fails.
  • One of the more interesting trends in farming is to record and report on every aspect of the food chain. After the whole country stopped eating romaine last year because of contamination at one farm, the industry started to develop a process whereby each step of the production of a crop is recorded, with the goal of reporting the history of food to the consumer. In the not-too-distant future, a consumer will be able to scan a package of lettuce and know where the crop was grown, how it was grown (organically or not), and when it was picked, shipped, and brought to the store. This all requires creating a blockchain with an immutable history of each crop, from farm to store – see the sketch after this list.
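
Here is a minimal sketch of the ‘immutable history’ idea behind that kind of farm-to-store record, using nothing more than a hash chain – an illustration of the concept rather than any particular industry system:

    import hashlib, json

    # Minimal hash chain: each record carries the hash of the previous
    # one, so any later edit to the history is detectable.
    def add_record(chain, data):
        prev_hash = chain[-1]["hash"] if chain else "0" * 64
        body = {"data": data, "prev": prev_hash}
        body["hash"] = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append(body)
        return chain

    chain = []
    add_record(chain, {"event": "planted", "field": "A7", "crop": "romaine"})
    add_record(chain, {"event": "harvested", "date": "2019-06-01"})
    add_record(chain, {"event": "shipped", "to": "distributor"})
    print(chain[-1]["hash"][:16], "...")  # the final link depends on every prior record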

The common thread in all of these developments in precision agriculture is the need for good broadband. Professor Uilk says that transmitting the detailed map scans of crop fields realistically requires a 100 Mbps upload to exchange the files with the experts in a timely manner. That means fiber to the farm.
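
The ‘timely exchange’ point is really about transfer time. A quick sketch – with a 2 GB file size as a purely hypothetical stand-in for one of those field scans – shows why a typical rural 1 Mbps upload doesn’t work while 100 Mbps does:

    # Time to upload a field-scan file at different upload speeds.
    # The 2 GB file size is a hypothetical example, not a figure
    # from the SDSU program.
    file_gb = 2
    file_bits = file_gb * 8e9

    for mbps in (1, 10, 100):
        seconds = file_bits / (mbps * 1e6)
        print(f"{mbps:3d} Mbps upload: {seconds / 60:6.1f} minutes")
    # 1 Mbps: ~266.7 minutes ... 100 Mbps: ~2.7 minutes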

A lot of the other applications require reliable wireless connections around the farm, and that implies a much better use of rural spectrum. Today the big cellular carriers buy the rights to most spectrum and then let it lie fallow in rural areas. We need to find a way to bring spectrum to the farm to take advantage of measuring sensors everywhere and for directing self-driving farm equipment.

Gaining Access to Multi-tenant Buildings

In 2007 the FCC banned certain kinds of exclusivity arrangements between ISPs and owners of multi-tenant buildings. At the time of the order, the big cable companies had signed contracts with apartment owners giving them exclusive access to buildings. The FCC order in 2007 got rid of the most egregious types of contracts – in many cases, cable company contracts were so convoluted that building owners didn’t even understand the agreements were exclusive.

However, the FCC order was still a far cry from ordering open access for ISPs to buildings, and there are still many landlords today who won’t allow in competitors. The arrangements most liked by landlords are revenue-share deals where the building owner makes money from an arrangement with an ISP. While such arrangements aren’t legally exclusive, they can be lucrative enough to make landlords favor one ISP and give it exclusive access.

WISPA, the industry association for wireless ISPs, has asked the FCC to force apartment owners to allow access to multiple ISPs. WISPA conducted a survey of its members and found that wireless companies are routinely denied access to apartment buildings. Some of the reasons for denying access include:

  • Existing arrangements with ISPs that make the landlord not want to grant access to an additional ISP.
  • Apartment owners often deny access because wireless ISPs (WISPs) are often not considered to be telephone or cable companies – many WISPs offer only broadband and have no official regulatory status.
  • Building owners often say that an existing ISP serving the building has exclusive rights to the existing wiring, including conduits that might be used to string new wiring to reach units. This is often the case if the original cable or telephone company paid for the inside wiring when the building was first constructed.
  • Many landlords say that they already have an existing marketing arrangement with an ISP, meaning they get rewarded for sending tenants to that ISP.
  • Many landlords will only consider revenue-sharing arrangements since that’s what they have with an existing ISP. Some landlords have even insisted that a WISP sign a revenue-sharing arrangement before even discussing pricing and logistics.

These objections by landlords fall into two categories. One is compensation-based, where a landlord is happy with the financial status quo with an existing ISP. The other is some contractual relationship with an existing ISP that is hard or impossible for a landlord to preempt.

The concerns of WISPs are all valid, and in fact, the same list can be made by companies that want to build fiber to apartment buildings. However, landlords seem more open to fiber-based ISPs since saying that their building has fiber adds cachet and is valued by many tenants.

WISPs sometimes have unusual issues not faced by other ISP overbuilders. For example, one common wireless model is to beam broadband to a roof of an apartment building. That presents a challenge for gaining access to apartments since inside wiring generally begins in a communications space at the base of a building.

The issue is further clouded by the long history of FCC regulation of inside wiring. The topic of ownership and rights for inside wiring has been debated in various dockets since the 1990s and there are regulatory rulings that can give ammunition to both sides of wiring arguments.

The WISPs are facing an antagonistic FCC on this issue. The agency recently preempted a San Francisco ordinance that would have made all apartment buildings open access – meaning available to any ISP. This FCC has been siding with large incumbent cable and telephone companies on most issues and is not likely to go against them by allowing open access to all apartment buildings.