OneWeb Launches Broadband Satellites

Earlier this month OneWeb launched six test satellites, the first of an eventual fleet intended to provide broadband. The six satellites were carried on a Soyuz launch vehicle from the Guiana Space Center in Kourou, French Guiana.

OneWeb was started by Greg Wyler of Virginia in 2012, originally under the name WorldVu. Since then the company has picked up heavy-hitter investors like Virgin, Airbus, SoftBank and Qualcomm. The company's plan is to launch an initial constellation of 650 satellites that will blanket the earth, with ultimate deployment of 1,980 satellites. The plan is to deploy thirty of the sixty-five-pound satellites with each launch, which means twenty-two successful launches are needed to deploy the first round.

Because the satellites will be in low-earth orbit, at about 745 miles above the earth, OneWeb will avoid the huge latency that is inherent in current satellite broadband providers like HughesNet, which uses satellites orbiting at 22,000 miles above the earth. The OneWeb specifications filed with the FCC talk about latency in the same range as cable TV networks, around 25-30 milliseconds. But where a few high-orbit satellites can see the whole earth, a big fleet of low-orbit satellites is needed just to be able to see everywhere.
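
The latency difference comes straight from the physics of the two orbits. Here is a minimal back-of-the-envelope sketch in Python showing why a geostationary link can never get close to 25-30 milliseconds while a low-earth orbit can; it assumes idealized straight-up-and-down paths and ignores processing delay, both simplifications on my part.

```python
# Back-of-the-envelope propagation delay for satellite broadband.
# Assumes a simple bent-pipe path (user -> satellite -> ground gateway) and
# ignores processing, queuing, and the slant angle of the radio links.

SPEED_OF_LIGHT_MILES_PER_SEC = 186_282

def round_trip_ms(orbit_miles: float) -> float:
    """Round-trip time for a request and its reply.

    Each direction traverses the user -> satellite -> gateway path,
    so the signal covers roughly four times the orbital altitude in total.
    """
    return 4 * orbit_miles / SPEED_OF_LIGHT_MILES_PER_SEC * 1000

print(f"Geostationary (~22,000 miles): {round_trip_ms(22_000):.0f} ms")  # ~472 ms
print(f"Low-earth orbit (~745 miles):  {round_trip_ms(745):.0f} ms")     # ~16 ms
```

Add a handful of milliseconds for processing and routing and the low-orbit number lands right in the 25-30 millisecond range OneWeb claims, while the geostationary number explains why traditional satellite broadband feels so sluggish.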

The company is already behind schedule. It had originally promised coverage across Alaska by the end of 2019. It is now talking about customer demos sometime in 2020, with live broadband service in 2021. The timeline matters for a satellite company because the company's FCC license requires that it launch 50% of its satellites within six years and all of them within nine years. Right now, both OneWeb and Elon Musk's SpaceX have fallen seriously behind the needed deployment timeline.

The company’s original goal was to bring low-latency satellite broadband to everybody in Alaska. While they are still talking about bringing broadband to those who don’t have it today, their new business plan is to sell directly to airlines and cruise ship lines and to sell wholesale to ISPs who will then market to the end user.

It will be interesting to see what kinds of speeds will really be delivered. The company talks today about a maximum speed of 500 Mbps. But I compare that number to the claim that 5G cellphones can work at 600 Mbps, as demonstrated last year by Sprint – a speed that is possible only in a perfect lab setting. The best analog to a satellite network is a wireless transmitter on a tower in a point-to-multipoint network. That transmitter is capable of making a relatively small number of big-bandwidth connections or many more low-bandwidth connections. The economic sweet spot will likely be to offer many connections at 50 – 100 Mbps rather than fewer connections at a higher speed.
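
To make that tradeoff concrete, here is a small sketch with hypothetical numbers of my own – the per-transmitter capacity and oversubscription ratio below are assumptions for illustration, not OneWeb figures – showing how a fixed pool of shared bandwidth can be carved into a few fast connections or many more modest ones.

```python
# Hypothetical illustration of the capacity tradeoff on a shared wireless link.
# The capacity and oversubscription figures are assumptions for illustration,
# not published OneWeb (or WISP) specifications.

LINK_CAPACITY_MBPS = 7_500   # assumed usable capacity of one satellite beam or tower sector
OVERSUBSCRIPTION = 4         # assumed ratio; not every subscriber peaks at the same time

def subscribers_supported(plan_speed_mbps: float) -> int:
    """Rough count of subscribers one shared link could plausibly carry."""
    return int(LINK_CAPACITY_MBPS * OVERSUBSCRIPTION / plan_speed_mbps)

for plan in (500, 100, 50):
    print(f"{plan:>3} Mbps plans: ~{subscribers_supported(plan):,} subscribers")
```

The same arithmetic applies to a point-to-multipoint transmitter on a tower: every 500 Mbps customer consumes the shared pool roughly ten times faster than a 50 Mbps customer, which is why the sweet spot is likely to be the more modest speeds.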

It's an interesting business model. The upfront cost of manufacturing and launching the satellites is high. It's likely that a few launches will go awry and destroy satellites. But other than replacing satellites that go bad over time, the maintenance costs are low. The real issue will be the bandwidth that can be delivered. Speeds of 50 – 100 Mbps will be welcomed in the rural US by those with no better option. But as with all lower-bandwidth technologies, adequate broadband that feels okay today will feel a lot slower in a decade as household bandwidth demand continues to grow. The best long-term market for the satellite providers will be those places on the planet that are not likely to have a landline alternative – which is why they first targeted rural Alaska.

Assuming that the low-earth satellites deliver as promised, they will become part of the broadband landscape in a few years. It’s going to be interesting to see how they play in the rural US and around the world.

Ideas for Better Broadband Mapping

The FCC is soliciting ideas on better ways to map broadband coverage. Everybody agrees that the current broadband maps are dreadful and misrepresent broadband availability. The current maps are created from data that the FCC collects from ISPs on Form 477, where each ISP lists broadband coverage by census block. One of the many problems with the current mapping process (I won't list them all) is that census blocks can cover a large geographic area in rural America, and reporting at the census block level tends to blur together different circumstances where some folks have broadband and others have none.

There have been two interesting proposals so far. Several parties have suggested that the FCC gather broadband speed availability by address. That sounds like the ultimate database, but there are numerous reasons why this is not practical.

The other is a three-stage process recommended by NCTA. First, data would be collected by polygon shapefiles. I'm not entirely sure what that means, but I assume it means using smaller geographic footprints than census blocks. Collecting the same data as today using a smaller footprint ought to be more accurate. Second, and the best idea I've heard suggested, is to allow people to challenge the data in the mapping database. I've been suggesting that for several years. Third, NCTA wants to focus on pinpointing unserved areas. I'm not sure what that means, but perhaps it means creating shapefiles to match the different availability of speeds.
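
For readers unfamiliar with the term, a shapefile is simply a standard GIS file of geographic shapes. The sketch below, written in Python with the geopandas library, shows the basic idea of reporting coverage as polygons and then checking which addresses fall inside them; the file names and column names are made up for illustration.

```python
# Rough sketch of working with coverage polygons instead of census blocks.
# File names and column names are hypothetical; the point is only to show how
# a polygon shapefile ties a claimed speed to an arbitrary geographic footprint.
import geopandas as gpd

# Each row: one coverage polygon with the ISP's claimed speed (hypothetical file).
coverage = gpd.read_file("isp_coverage_polygons.shp")

# One point per home or business address (hypothetical file).
addresses = gpd.read_file("address_points.shp")

# Spatial join: attach the claimed download speed to every address that falls
# inside a coverage polygon; addresses outside every polygon are unserved.
served = gpd.sjoin(
    addresses,
    coverage[["geometry", "claimed_down_mbps"]],
    how="left",
    predicate="within",
)

unserved = served[served["claimed_down_mbps"].isna()]
print(f"{len(unserved)} of {len(addresses)} addresses fall outside any claimed coverage")
```

Smaller, carrier-drawn polygons would at least stop one served home from marking an entire census block as covered, though the speed inside a polygon is still whatever the ISP claims it to be.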

These ideas might provide better broadband maps than we have today, but I’m guessing they will still have big problems. The biggest issue with trying to map broadband speeds is that many of the broadband technologies in use vary widely in actual performance in the field.

  • Consider DSL. We've always known that DSL performance decreases with distance from a DSL base station. However, DSL performance is not as simple as that. DSL also varies for other reasons, like the gauge of the copper serving a customer or the quality of that copper. Next-door neighbors can have a significantly different DSL experience if they have different wire gauges in their copper drops, or if the wires at one of the homes have degraded over time. DSL also differs by technology. A telco might operate different DSL technologies out of the same central office and see different performance from ADSL versus VDSL. There really is no way for a telco to predict the DSL speed available at a home without installing it and testing the actual speed achieved (the sketch after this list illustrates why distance alone isn't enough to make that prediction).
  • Fixed wireless and fixed cellular broadband have similar issues. Just like DSL, the strength of a signal from a wireless transmitter decreases over distance. However, distance isn't the only issue, and things like foliage affect a wireless signal. Neighbors might have a very different fixed wireless experience if one has a maple tree and the other has a pine tree in the front yard. To make defining the speed even harder, the speeds on wireless systems are affected to some degree by precipitation, humidity and temperature. Anybody who's ever lived with fixed wireless broadband understands this variability. WISPs these days also use multiple spectrum blocks, and so the speed delivered at any given time is a function of the particular mix of spectrum being used.
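
To show why per-address predictions are so shaky, here is a toy model – my own simplification, not any telco's engineering tool – that estimates DSL speed purely from copper loop length. The point is not the numbers but everything the model has to leave out.

```python
# Toy model: estimated DSL download speed as a function of copper loop length.
# The rate table is a rough simplification of typical ADSL loop-length behavior,
# not any carrier's actual engineering data.

LOOP_LENGTH_FT_TO_MBPS = [
    (3_000, 20.0),   # short loops: near full rate
    (6_000, 12.0),
    (9_000, 6.0),
    (12_000, 3.0),
    (15_000, 1.5),
    (18_000, 0.5),   # approaching the practical limit of ADSL
]

def estimated_dsl_mbps(loop_length_ft: float) -> float:
    """Crude distance-only estimate of DSL download speed."""
    for max_length, mbps in LOOP_LENGTH_FT_TO_MBPS:
        if loop_length_ft <= max_length:
            return mbps
    return 0.0   # beyond roughly 18,000 feet DSL effectively doesn't work

# Two next-door neighbors the same distance from the DSLAM get the same estimate
# here, yet wire gauge, splices, corroded drops, inside wiring, and the DSL flavor
# (ADSL vs. VDSL) can leave them with very different real-world speeds.
print(estimated_dsl_mbps(4_500))    # 12.0
print(estimated_dsl_mbps(16_000))   # 0.5
```

A distance-only model like this is about the best anyone can do without a truck roll, which is exactly why asking ISPs to report a precise speed for every address is asking for data they simply don't have.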

Regardless of the technology being used, one of the biggest issues affecting broadband speeds is the customer home. Customers (or ISPs) might be using outdated and obsolete WiFi routers or modems (like Charter did for many years in upstate New York). DSL speeds are just as affected by the condition of the inside copper wiring as the outdoor wiring. The edge broadband devices can also be an issue – when Google Fiber first offered gigabit fiber in Kansas City almost nobody owned a computer capable of handling that much speed.

Any way we try to define broadband speeds – even by individual home – is still going to be inaccurate. Trying to map broadband speeds is a perfect example of trying to fit a round peg in a square hole. It's obvious that we can do a better job of this than we are doing today. But I pity a fixed wireless ISP if they are somehow required to report broadband speeds by address, or even by a small polygon. They only know the speed at a given address after going to the roof of a home and measuring it.

The more fundamental issue here is that we want to use the maps for two different policy purposes. One goal is to be able to count the number of households that have broadband available. The improved mapping ideas will improve this counting function – within all of the limitations of the technologies I described above.

But mapping is a dreadful tool when we use it to start drawing lines on a map defining which households can get grant money to improve their broadband. At that point the mapping is no longer a theoretical exercise and a poorly drawn line will block homes from getting better broadband. None of the mapping ideas will really fix this problem and we need to stop using maps when awarding grants. It’s so much easier to decide that faster technology is better than slower technology. For example, grant money ought to be available for anybody that wants to replace DSL on copper with fiber. I don’t need a map to know that is a good idea. The grant process can use other ways to prioritize areas with low customer density without relying on crappy broadband maps.

We need to use maps only for what they are good for – to get an idea of what is available in a given area. Mapping is never going to be accurate enough to use to decide which customers can or cannot get better broadband.

Regulatory Sleight of Hand

I was looking through a list of ideas for blogs and noticed that I had never written about the FCC’s odd decision to reclassify commercial mobile broadband as private mobile broadband service in WC Docket No. 17-108 – The Restoring Internet Freedom order that was used to kill net neutrality and to eliminate Title II regulation of broadband. There was so much industry stir about those larger topics that the reclassification of the regulatory nature of mobile broadband went largely unnoticed at the time by the press.

The reclassification was extraordinary in the history of FCC regulation because it drastically changed the definition of one of the major industries regulated by the agency. In 1993 Congress enacted amendments to Section 332 of the Communications Act to clarify the regulation of the rapidly burgeoning cellular industry.

At that time there were about 16 million cellular subscribers that used the public switched telephone network (PSTN) and another two million private cell phones that used private networks, primarily for corporate dispatch. Congress made a distinction between the public and private use of cellular technology and coined the term CMRS (Commercial Mobile Radio Service) to define the public service we still use today for making telephone calls on cell phones. That congressional act defined CMRS service as having three characteristics: a) the service is for profit, b) it's available to the entire public, and c) it is interconnected to the PSTN. Private mobile service was defined as any cellular service that fails any one of the three tests.

The current FCC took the extraordinary step of declaring that cellular broadband is private cellular service. The FCC reached this conclusion using what I would call a regulatory sleight-of-hand. Mobile broadband is obviously still for profit and also available to the public, and so the FCC tackled the third test and said that mobile broadband is part of the Internet and not part of the public telephone network. It’s an odd distinction because the path of a telephone call and a data connection from a cellphone is usually identical. A cellphone first delivers the traffic for both services to a nearby cellular tower (or more recently to pole-mounted small cell sites). The traffic for both services is transported from the cell tower using ethernet transport that the industry calls trunking. At some point in the network, likely a switching hub, the voice and data traffic are split and the voice calls continue inside the PSTN while data traffic is peeled off to the Internet. There is no doubt that the user end of every cellular call or cellular data connection uses the network components that are part of the PSTN.

Why did the FCC go through these mental gymnastics? This FCC had two primary goals for this particular order. First, they wanted to kill the net neutrality rules established by the prior FCC in 2015. Second, they wanted to do this in such a way as to make it extremely difficult for a future FCC to reverse the decision. They ended up with a strategy of declaring that broadband is not a Title II service. Title II refers to the set of rules established by the Communications Act of 1934 that was intended as the framework for regulating common carriers. Until the 2017 FCC order, most of the services we think of as telecommunications – landline telephone, cellular telephones, and broadband – were all considered common carrier services. The current FCC strategy was to reclassify landline and mobile broadband as a Title I information service and essentially wash their hands of regulating broadband at all.

Since net neutrality rules applied to both landline and mobile data services, the FCC needed to first decree that mobile data was not a public and commercial service before they could remove it from Title II regulation.

The FCC’s actions defy logic and it’s clear that mobile data still meets the definition of a CMRS service. It was an interesting tactic by the FCC and probably the only way they could have removed mobile broadband from Title II regulation. However, they also set themselves up for some interesting possibilities from the court review of the FCC order. For example, a court might rule that mobile broadband is a CMRS service and drag it back under Title II regulation while at the same time upholding the FCC’s reclassification of landline broadband.

Why does this matter? Regulatory definitions matter because the regulatory process relies on an accumulated body of FCC orders and court cases that define the actual nature of regulating a given service. Congress generally defines regulation at a high level and later FCC decisions and court cases better define issues that are disputed. When something gets reclassified in this extreme manner, most of the relevant case law and precedents go out the window. That means we start over with a clean slate and much that was adjudicated in the past will likely have to be adjudicated again, but now based upon the new classification. I can’t think of any time in our industry where regulators decided to arbitrarily redefine the basic nature of a major industry product. We are on new regulatory ground, and that means uncertainty, which is never good for the industry.

Streamlining Regulations

Jonathan Spalter of USTelecom wrote a recent blog calling on Congress to update regulations for the telecom industry. USTelecom is a lobbying arm representing the largest telcos, though surprisingly it still has a few small telco members. I found the tone of the blog interesting, in that somebody who didn't know our industry would read it and think that the big telcos are suffering under crushing regulation.

Nothing could be further from the truth. We currently have an FCC that seems to be completely in the pocket of the big ISPs. The current FCC walked in the door with the immediate goal of killing net neutrality, and in the process decided to completely deregulate the broadband industry. The American public hasn't really grasped yet that ISPs are now free to endlessly raise broadband prices and to engage in network practices that benefit the carriers instead of customers. Deregulation of broadband has to be the biggest regulatory giveaway in the history of the country.

Spalter goes on to praise the FCC for its recent order on poles that set extremely low rates for wireless pole connections and lets wireless carriers place devices anywhere in the public rights-of-way. He says that order brought "fairness" to the pole attachment process when in fact the order was massively unbalanced in favor of cellular companies and squashes any local input or authority over rights-of-way – something that has always been a local prerogative. It's ironic to see USTelecom praising fairness for pole attachments when their members have been vehemently trying to stop Google Fiber and others from gaining access to utility poles.

To be fair, Spalter isn't completely wrong, and there are regulations that are out of date. Our last major telecom legislation was in 1996, at a time when dial-up Internet access was spreading across the country. The FCC regulatory process relies on rules set by Congress, and since Congress hasn't acted since 1996, Spalter accuses it of "a reckless abdication of government responsibility."

I find it amusing that the number one regulation that USTelecom most dislikes is the requirement that the big telcos make their copper wires available to other carriers. That requirement of the Telecommunications Act of 1996 was probably the most important factor in encouraging other companies to compete against the monopoly telephone companies. In the years immediately after the 1996 Act, competitors ordered millions of wholesale unbundled network elements on the telco copper networks.

There are still competitors using the telco copper to provide far better broadband than the telcos are willing to provide, so we need to keep these regulations as long as copper remains hanging on poles. I would also venture a guess that the telcos are making more money leasing this copper to the competitors than they would make if the competitors went away – the public is walking away from telco DSL in droves.

I find it curious that the telcos keep harping on this issue. In terms of the total telco market, the sale of unbundled elements is a mere blip on the telco books. This is the equivalent of a whale complaining about a single barnacle on his belly. But the big telcos never miss an opportunity to harp about the issue and have been working hard to eliminate the sale of copper access to competitors since the passage of the 1996 Act. This is not a real issue for the telcos – they just have never gotten over the fact that they lost a regulatory battle in 1996 and they are still throwing a hissy fit over that loss.

The reality is that big telcos are less regulated than ever before. Most states have largely deregulated telephone service. The FCC completely obliterated broadband regulation. While there are still cable TV regulations, the big telcos like AT&T are bypassing those regulations by moving video online. The big telcos have already won the regulatory war.

There are always threats of new regulation – but the big telcos always lobby against new rules far in advance to weaken any new regulations. For example, they are currently supporting a watered-down set of privacy rules that won’t afford much protection of customer data. They have voiced support for a watered-down set of net neutrality rules that doesn’t obligate them to change their network practices.

It's unseemly to see USTelecom railing against regulation after the telcos have already been so successful in shedding most regulations. I guess they want to strike while the iron is hot and are hoping to goad Congress and the FCC into finishing the job by killing all remaining regulation. The USTelecom blog is the same song and dance they've been repeating since I've been in the industry – which boils down to "regulation is bad." I didn't buy this story forty years ago and I still don't buy it today.

The Cost of Siting Small Cells

One of the more unusual things ordered by the current FCC was setting a low cap on the local fees that a city can charge to review an application for placing a small cell site. The FCC capped the application fee at $500 for a request covering up to five small cell sites and $100 per site after that. The FCC also set a cap of $270 for the annual fee to use the rights-of-way for each small cell site.
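
As a quick worked example of what those caps mean in practice – the fee levels come from the FCC order, while the twenty-site deployment is just a hypothetical of mine:

```python
# Application and recurring fees under the FCC's small cell fee caps.
# The fee levels come from the FCC order; the 20-site deployment is hypothetical.

APP_FEE_FIRST_BATCH = 500      # covers an application for up to five sites
APP_FEE_PER_EXTRA_SITE = 100   # each site beyond the first five
ANNUAL_ROW_FEE_PER_SITE = 270  # yearly rights-of-way fee per small cell

def application_fee(sites: int) -> int:
    """One-time application fee for a single request covering `sites` small cells."""
    if sites <= 5:
        return APP_FEE_FIRST_BATCH
    return APP_FEE_FIRST_BATCH + (sites - 5) * APP_FEE_PER_EXTRA_SITE

sites = 20
print(f"Application fee for {sites} sites: ${application_fee(sites):,}")            # $2,000
print(f"Annual rights-of-way fees: ${sites * ANNUAL_ROW_FEE_PER_SITE:,} per year")  # $5,400 per year
```

Spread across twenty poles, the application money works out to roughly $100 per site – the number the rest of this post questions.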

Cities have the option to charge more and can bill a 'reasonable approximation' of actual costs, but a city can expect a legal fight from wireless carriers over fees that are much higher than the FCC caps.

It’s worth looking back at the history of the issue. Wireless carriers complained to the FCC that they were being charged exorbitant fees to put equipment on utility poles in the public rights-of-way. The wireless carriers cited examples of having to pay north of $10,000 per small cell site. In most cases, fees have been far smaller than that, but citing the worst examples gave cover to the FCC for capping fees.

However, some of the examples of high fees cited by the carriers were for installations that would not be considered a small cell. I've seen application requests for hanging devices the size of a refrigerator on poles and for placing a large cabinet on the sidewalk under a pole. The FCC acknowledged this in their order and set a size limit on what constitutes a small cell – a device occupying something less than 28 cubic feet.

It's worth noting that much of the FCC's order for small cell sites is under appeal. The most controversial issues being challenged are aspects of the order that stripped cities of the ability to set local rules on what can and cannot be hung on poles. The FCC basically said that cellular carriers are free to do what they want anywhere in the public rights-of-way, and cities are arguing that the order violates the long-standing precedent that rights-of-way issues should be decided locally.

Communities all over the country are upset with the idea that they have to allow a small cell site any place the carriers want to put one. There are also active citizens' groups protesting the implementation of millimeter wave cell sites due to public health concerns. A number of prominent radio scientists from around the world have warned of the potential public health consequences of prolonged exposure to millimeter wave spectrum – similar to the spectrum used in airport scanners, but which would be broadcast continuously from poles in front of homes. There is also a lot of concern that carriers that hang millimeter wave transmitters are going to want aggressive tree trimming to maintain lines-of-sight to homes. Finally, there are concerns about the wild proliferation of devices if multiple wireless providers install them on the same street.

The cap on local fees has already been implemented, and cities are now obligated to charge the low rates unless they undertake the effort (and the likely legal fight) of setting higher fees. The setting of low fees is the most puzzling aspect of the FCC order. It seems that the FCC has accepted the wireless carriers' claim that high fees would kill the deployment of 5G small cell sites everywhere.

I live in a city that is probably pretty typical, with an application process and inspectors for a huge range of activities – building inspections, restaurant inspections, electrical and gas installation inspections, and inspections of anything that disturbs a city street surface or is hung in the public rights-of-way. The city takes a strong position in assuring that the public rights-of-way are maintained in a way that provides the best long-term opportunity for the many uses of the rights-of-way. They don't let any utility or entity take steps that make it harder for the next user to gain the needed access.

The $100 fee is supposed to compensate the city for processing the application, surveying the site of the requested access, and then inspecting that the wireless carrier really did what they promised and didn't create unsafe conditions or physical hindrances in the right-of-way. It's hard to think that $100 will compensate any city for the effort required. It will be interesting to see how many cities acquiesce to the low FCC rates instead of fighting to implement fair rates. Cities know that fights with carriers can be costly and they may not be willing to tackle the issue. But they also need to realize that the wireless carriers could pepper their rights-of-way with devices that are likely to hang in place for decades. If they don't tackle the issue up front they will have no latitude later to rectify small cell sites that were hung incorrectly or unsafely. I've attended hundreds of city council meetings and have always been amazed at the huge number of different issues that local politicians have to deal with. This is just one more issue added to that long list, and it will be understandable if many cities acquiesce to the low fees.

The Status of the CAF II Deployments

The Benton Foundation noted last month that both CenturyLink and Frontier have not met all of their milestones for deployment of CAF II. This funding from the FCC is supposed to be used to improve rural broadband to speeds of at least 10/1 Mbps. As of the end of 2018, the CAF II recipients were to have completed upgrades to at least 60% of the customers in each state covered by the funding.

CenturyLink took funding to improve broadband in 33 states covering over 1 million homes and businesses. CenturyLink claims to have met the 60% milestone in twenty-two states but didn't make the goal in eleven states: Colorado, Idaho, Kansas, Michigan, Minnesota, Missouri, Montana, Ohio, Oregon, Washington, and Wisconsin.

Frontier received CAF II funding to improve broadband to over 774,000 locations in 29 states. Frontier says they've met the milestone in 27 states but haven't reached the 60% deployment milestone in Nebraska and New Mexico. There were a number of other large telcos that took CAF II funding, like AT&T, Windstream, and Consolidated, and I have to assume that they've reported meeting the 60% milestone.

Back in 2014, when it looked like the CAF II program might be awarded by reverse auction, we helped a number of clients take a look at the CAF II service areas. In many cases, these are large rural areas that cover 50% or more of most of the rural counties in the country. Most of my clients were interested in the CAF II money as a funding mechanism to help pay for rural fiber, but all of the big telcos other than AT&T announced originally that they planned to upgrade existing DSL. AT&T announced a strategy early on to use fixed cellular wireless to satisfy their CAF II requirements. Since then a few big telcos like Frontier and Windstream have said that they are also using fixed wireless to meet their obligations.

To us, the announcement that the telcos were going to upgrade DSL raised red flags. In a lot of rural counties there are only a small number of towns, and those towns are the only places where the big telcos have DSLAMs (the DSL hubs). Rural telephone exchanges tend to be large, and the vast majority of rural customers have always been far out of range of DSL that originates in the small towns. One only has to go a few miles – barely outside the towns – to see DSL speeds fall off to nothing.

The only way to make DSL work in the CAF II areas would be to build fiber to rural locations and establish new DSL hub sites. As any independent telco that deployed DSL the right way can tell you, this is expensive because it takes a lot of rural DSLAMs to get within range of every customer. By electing DSL upgrades, the big telcos like CenturyLink and Frontier had essentially agreed to build a dozen or more fiber DSLAMs in each of the rural counties covered by CAF II. My back-of-the-envelope math showed that was going to cost a lot more than what the companies were receiving from the CAF fund. Since I knew these telcos didn't want to spend their own money in rural America, I predicted execution failures for many of the planned DSL deployments.

I believe the big telcos are now facing a huge dilemma. They've reached 60% of customers in many places (but not all). However, it is going to cost two to three times more per home to reach the remaining 40% of homes. The remaining customers are the ones on extremely long copper loops, and DSL is an expensive technology to use for reaching these last customers. A DSLAM built to serve the customers at the ends of these loops might only serve a few customers – and it's hard to justify the cost of the fiber and electronics needed to reach them.

I've believed from the beginning that the big telcos building DSL for the CAF II program would take the approach of covering the low-hanging fruit – those customers that can be reached by the deployment of a few DSLAMs in a given rural area. If that's true, then the big telcos aren't going to spend the money to reach the most remote customers, meaning a huge number of CAF II customers are going to see zero improvement in broadband. The telcos mostly met their 60% targets by serving the low-hanging fruit. They are going to have a huge challenge meeting the next milestones of 80% and 100%.

Probably because I write this blog, I hear from folks at all levels of the industry about rural broadband. I’ve heard a lot of stories from technicians telling me that some of the big telcos have only tackled the low-hanging fruit in the CAF builds. I’ve heard from others that some telcos aren’t spending more than a fraction of the CAF II money they got from the FCC and are pocketing much of it. I’ve heard from rural customers who supposedly already got a CAF II upgrade and aren’t seeing speeds improved to the 10/1 threshold.

The CAF II program will be finished soon and I’m already wondering how the telcos are going to report the results to the FCC if they took shortcuts and didn’t make all of the CAF II upgrades. Will they say they’ve covered everybody when some homes saw no improvement? Will they claim 10/1 Mbps speeds when many households were upgraded to something slower? If they come clean, how will the FCC react? Will the FCC try to find the truth or sweep it under the rug?

ISPs Are Violating the Old Net Neutrality Rules

It's been just over a year since the FCC repealed net neutrality. The FCC's order is being appealed, and oral arguments are underway in the appeal as I write this blog. One would have to assume that until that appeal is finished the big ISPs will be on their best behavior. Even so, the press has covered a number of ISP actions during the last year that would have violated net neutrality if the old rules were still in place.

It’s not surprising that the cellular carriers were the first ones to violate the old net neutrality rules. This is the most competitive part of the industry and the cellular carriers are not going to miss any opportunity to gain a marketing edge.

AT&T is openly advertising that cellular customers can stream the company's DirecTV Now product without it counting against monthly data caps. Meanwhile, all of the competing video services like Sling TV, PlayStation Vue, YouTube TV, Netflix or Amazon Prime count against AT&T data caps – and video can quickly kill a monthly data plan download allotment. AT&T's behavior is almost a pure textbook example of why net neutrality rules were put into place – to stop ISPs from putting competitors' products at an automatic disadvantage. AT&T is the biggest cellular provider in the country and this creates a huge advantage for DirecTV Now. All of the major cellular carriers are doing something similar in allowing some video to not count against the monthly data cap, but AT&T is the only one pushing its own video product.

In November a large study of 100,000 cellphone users by Northeastern University and the University of Massachusetts showed that Sprint was throttling Skype. This is not something that the carrier announced, but it’s a clear case of pushing web traffic to the ‘Internet slow lane’. We can only speculate why Sprint would do this, but regardless of their motivation this is clearly a violation of net neutrality.

This same study showed numerous incidents where all of the major cellular carriers throttled video services at times. YouTube was the number one target of throttling, followed by Netflix, Amazon Prime, and the NBC Sports app. This throttling wasn't as widespread as Sprint's throttling of Skype, but the carriers must have algorithms in their networks that throttle specific video traffic when cell sites get busy. In contrast to the big carriers, the smaller independent cellular carrier C Spire had almost no instances of differentiation among video streams.

Practices that might violate net neutrality were not limited to cellular carriers. For example, Verizon FiOS recently began giving free Netflix for a year to new broadband customers. AT&T also started giving out free HBO to new customers last year. This practice is more subtle than the cellular carriers' practice of blocking or throttling content. One of the purposes of net neutrality was to stop ISPs from discriminating among web traffic. By giving away free video services, the landline broadband companies are promoting specific web services over competitors.

This doesn’t sound harmful, but the discussions in the net neutrality order warned about a future where the biggest ISPs would partner with a handful of big web services like Facebook or Netflix to the detriment of all smaller and start-up web services. A new video service will have a much harder time gaining customers if the biggest ISPs are giving away their competitors for free.

There are probably more bad practices going on that we don’t know about. We wouldn’t have known about the cellular throttling of services without the big study. A lot of discrimination can be done through the network routing practices of the ISPs, which are hard to prove. For example, I’ve been seeing a growing number of complaints from consumers recently who are having trouble with streaming video services. If you recall, net neutrality first gained traction when it became known that the big ISPs like Comcast were blatantly interfering with Netflix streaming. There is nothing today to stop the big ISPs from implementing network practices that degrade certain kinds of traffic. There is also nothing stopping them from demanding payments from web services like Netflix so that their product is delivered cleanly.

Interestingly, most of the big ISPs made a public pledge to not violate the spirit of net neutrality even if the rules were abolished. That seems to have been a hollow promise intended to soothe a public worried about the end of net neutrality. The FCC implemented net neutrality to protect the open Internet. The biggest ISPs have virtual monopolies in most markets, and public opinion is rarely going to change an ISP's behavior if the ISP decides that the monetary gain is worth the public unhappiness. Broadband customers don't have a lot of options to change providers, and cable broadband is becoming a near-monopoly in urban areas. There is no way for a consumer to avoid the bad practices of the cellular companies if they all engage in the same bad practices.

There is at least some chance that the courts will overturn the FCC repeal of net neutrality, but that seems unlikely to me. If the ISPs win in court and start blocking traffic and discriminating against web traffic, it seems likely that some future FCC or Congress will reinstitute net neutrality and start the fight all over again. Regardless of the court's decision, I think we are a long way from hearing the last about net neutrality.

We Need a Challenge Process for Broadband Maps

We all know that the broadband maps maintained by the FCC are terrible. Some of the inaccuracy is due to the fact that the data in the maps come from ISPs. For example, there are still obvious examples where carriers are reporting their marketing speeds rather than actual speeds, which they might not know. Some of the inaccuracy is due to the mapping rules, such as showing broadband by census block – when a few customers in a block have decent broadband it’s assumed that the whole census block has it. Some of the inaccuracy is due to the vagaries of technology – DSL can vary significantly from one house to the next due to the condition of local copper; wireless broadband can vary according to interference and impediments in the line-of-sight. The maps can be wrong due to bad behavior of an ISP who has a reason to either overstate or understate their actual speeds (I’ve seen both cases).

None of this would matter if the maps were just our best guess at seeing the state of broadband in the country. Unfortunately, the maps are used for real-life purposes. First, the maps are used by the FCC and state legislators to develop and support various policies related to broadband. It's been my contention for a long time that the FCC has been hiding behind the bad maps because those maps grossly overstate the availability of rural broadband. The FCC has a good reason to do so because they are tasked by Congress with fixing inadequate broadband.

Recently the maps have been used in a more concrete way – to define where grants can or cannot be awarded. Used in this manner, the maps identify groups of homes that don't already have adequate broadband. The maps were the basis for determining eligible areas for the CAF II reverse auction and now for the e-Connectivity grants.

This is where bad mapping really hurts. Every rural county in the country knows where broadband is terrible or non-existent. When I show the FCC maps to local politicians they are aghast at how inaccurate the maps are for their areas. The maps often show large swaths of phantom broadband that doesn’t exist. The maps will show towns that supposedly have universal 25/3 Mbps broadband or better when the real speeds in the town are 10 Mbps or less. The bad maps hurt every one of these places because if these maps were accurate these places would be eligible for grants to help fix the poor broadband. A lot of rural America is being royally screwed by the bad maps.

Even more dismaying, the maps seem to be getting worse instead of better. For example, in the CAF II program, the big telcos were supposed to bring broadband of at least 10/1 Mbps to huge swaths of rural America. A lot of the areas covered by the CAF II program are not going to see any improvement in broadband speeds. In some cases the technology used – such as AT&T's fixed cellular – can't deliver the desired speeds to customers who live too far from a tower. I also believe we're going to find that in many cases the big carriers are electing to only upgrade the low-hanging fruit and are ignoring homes where the CAF upgrade costs too much. These carriers are likely to claim they've made the upgrades on the maps rather than admit to the FCC that they pocketed the subsidy money instead of spending it to improve broadband.

There have been a few suggested fixes for the problem. A few states have tried to create their own, more accurate broadband maps, but they can't get access to any better data from the ISPs. There are a few states now that are asking citizens to run speed tests to try to map the real broadband situation, but unless the speed tests are run under specific and rigorous conditions they won't, by themselves, serve as proof of poor broadband.
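
Here is a sketch of how crowdsourced tests could still be made useful – entirely my own illustration, with an assumed data layout and an assumed 50% threshold: aggregate many tests per area and flag the places where the measured median falls far below the speed the ISP reports.

```python
# Sketch: flag areas where crowdsourced speed tests contradict reported speeds.
# The data layout and the 50% threshold are assumptions for illustration.
from statistics import median

# (area_id, measured download Mbps) - many tests per area, gathered from residents
speed_tests = [
    ("block_A", 4.1), ("block_A", 3.8), ("block_A", 5.0),
    ("block_B", 27.0), ("block_B", 31.5), ("block_B", 24.8),
]

# Download speed the ISP reports for each area on Form 477 (hypothetical values)
reported_mbps = {"block_A": 25.0, "block_B": 30.0}

tests_by_area: dict[str, list[float]] = {}
for area, mbps in speed_tests:
    tests_by_area.setdefault(area, []).append(mbps)

for area, results in tests_by_area.items():
    measured = median(results)
    claimed = reported_mbps[area]
    if measured < 0.5 * claimed:   # flag areas where reality is under half the claim
        print(f"{area}: reported {claimed} Mbps, measured median {measured} Mbps - challenge candidate")
```

By itself this wouldn't be proof – test conditions inside homes vary wildly – but a persistent pattern like this is exactly the kind of evidence a local government could bring to a formal challenge process like the one described below.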

The easiest fix for the problem is staring us right in the face. Last year the FCC got a lot of complaints about the soon-to-be-awarded Mobility Fund Phase II grants. This money was to go to cellular carriers to bring cell coverage to areas that don’t have it. The FCC maps used for those efforts were even worse than the broadband maps and the biggest cellular companies were accused of fudging their coverage data to try to stop smaller rival cell providers from getting the federal money. The outcry was so loud that the FCC created a challenge process where state and local governments could challenge the cellular coverage maps. I know a lot of governments that took part in these challenges. The remapping isn’t yet complete, but it’s clear that local input improved the maps.

We need the same thing for the FCC broadband maps. There needs to be a permanent challenge process where a state or local government can challenge the maps and can supply what they believe to be a more accurate map of coverage. Once counties understand that they are getting bypassed for federal grant money due to crappy maps they will jump all over a challenge process. I know places that will go door-to-door if the effort can help bring funds to get better broadband.

Unfortunately, only the FCC can order a challenge process, and I don't think they will even consider it unless they get the same kind of outcry that came with the Mobility Fund Phase II. It's sad to say, but the FCC has a vested interest in burying their head in the sand and pretending that rural broadband is okay – otherwise they have to try to fix it.

I think states ought to consider this. If a state undertakes a program to allow challenges to the map, then governors and federal legislators can use the evidence gathered to pressure the USDA to accept alternate maps for areas with poor broadband. These challenges have to come from the local level where people know the broadband story. This can’t come from a state broadband mapping process that starts with carrier data. If local people are allowed to challenge the maps then the maps will get better and will better define areas that deserve federal grants. I believe a lot of county governments and small towns would leap at the opportunity to tell their broadband story.

Looking Back at the Net Neutrality Order

Chairman Ajit Pai used three arguments to justify ending net neutrality. First, he claimed that the net neutrality rules in effect were a disincentive for big ISPs to make investments and that ending net neutrality would lead to a boom in broadband investment. He also argued that ending net neutrality would free the big ISPs to make broadband investments in underserved rural parts of the US. Finally, he argued that the end of net neutrality would spark the growth of telecom jobs. It's been two years since he used those arguments to justify the repeal of net neutrality, and it's easy to see that none of those things have come to pass.

The investment claim is easy to check. The big ISPs are starting to release their 2018 financial results, and it looks like capital spending in 2018 – the first year after the end of net neutrality – was lower than in 2017. We've already heard from Comcast and Charter that capital spending was down in 2018 compared to 2017. The industry analyst MoffettNathanson has already predicted that capital spending for the four biggest cable companies – Comcast, Charter, Altice, and CableONE – is expected to drop by another 5.8% in 2019. Anybody who watches the cable companies understands that they all just made big investments in upgrading to DOCSIS 3.1 and that capital spending ought to drop significantly for the next several years.

MoffettNathanson also predicts that wireline capital spending for Verizon and AT&T will drop from $20.3 billion in 2018 to $19.6 billion in 2019. The press is also full of articles lamenting that investments in 5G by these companies are far smaller than industry vendors hoped for. It seems that net neutrality had no impact on telecom spending (as anybody who has spent time at an ISP could have told you). It's virtually unheard of for regulation to drive capital spending.

The jobs claim was a ludicrous one because the big companies have been downsizing for years and have continued to do so after net neutrality was repealed. The biggest layoff came from Verizon in October 2018 when the company announced that it was eliminating 44,000 jobs and transferring another 2,500 to India. That is an astronomical 30% of its workforce. AT&T just announced on January 25 that it would eliminate 4,600 jobs, the first part of a 3-year plan to eliminate 10,000 positions. While the numbers are smaller for Comcast, they laid off 500 employees on January 4 and also announced the closure of a facility with 405 employees in Atlanta.

Pai's claim that net neutrality was stopping the big ISPs from investing in underserved areas might be the most blatantly false claim the Chairman has made since taking the position. The big ISPs haven't made investments in rural America in the last decade. They have been spending money in rural America in the last few years – but only funds handed to them by the FCC through the CAF II program to expand rural broadband and the FCC's Mobility Fund to expand rural cellular coverage. I've been hearing rumors all over the industry that most of the big ISPs aren't even spending a lot of the money from those two programs – something I think will soon surface as a scandal. There is no regulatory policy that is going to get the big ISPs to invest in rural America, and it was incredibly unfair to rural America for the Chairman to imply they ever would.

Chairman Pai’s arguments for repealing net neutrality were all false and industry insiders knew it at the time. I probably wrote a dozen blog posts about the obvious falsehoods being peddled. The Chairman took over the FCC with the goal of eliminating net neutrality at the top of his wish list and he adopted these three talking points because they were the same ones being suggested by big ISP lobbyists.

What bothers me is this is not how regulation is supposed to work. Federal and state regulatory agencies are supposed to gather the facts on both sides of a regulatory issue, and once they choose a direction they are expected to explain why. The orders published by the FCC and other regulatory bodies act much like court orders in that the language in those orders becomes part of the ongoing record used later to understand the 'why' behind an order. In later years courts rely on the discussion in regulatory orders to evaluate disputes based upon the new rules. The order that repeals net neutrality sadly repeats the same falsehoods that were used to justify the repeal.

There are always two sides to every regulatory issue, and there are arguments that could be made against net neutrality. However, the Chairman and the big ISPs didn't want to publicly make the logical arguments against net neutrality because they knew those arguments would be unpopular. For example, there is a legitimate argument to be made for allowing ISPs to discriminate against certain kinds of web traffic – any network engineer will tell you that it's nearly mandatory to give priority to some bits over others. But the ISPs know that making that argument makes it sound like they want the right to shuttle customers into the 'slow lane', and that's a PR battle they didn't want to fight. Instead, telecom lobbyists cooked up the false narrative peddled by Chairman Pai. They hoped the public would swallow these false arguments rather than argue for the end of net neutrality on its merits.