We Need a Challenge Process for Broadband Maps

We all know that the broadband maps maintained by the FCC are terrible. Some of the inaccuracy is due to the fact that the data in the maps comes from the ISPs themselves. For example, there are still obvious cases where carriers report their marketing speeds rather than actual speeds – which they may not even know. Some of the inaccuracy is due to the mapping rules, such as showing broadband by census block – when a few customers in a block have decent broadband, the whole census block is assumed to have it. Some of the inaccuracy is due to the vagaries of technology – DSL can vary significantly from one house to the next due to the condition of the local copper, and wireless broadband can vary according to interference and impediments in the line-of-sight. The maps can also be wrong due to bad behavior by an ISP that has a reason to either overstate or understate its actual speeds (I’ve seen both cases).
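
To see why census block reporting inflates coverage, here is a toy sketch (the block names and speeds are made up, and this is not the FCC's actual code) showing how counting a block as served whenever a single location clears 25 Mbps makes coverage look far better than the home-by-home reality:

```python
# A toy illustration (hypothetical numbers) of census-block reporting: if any
# location in a block is reported at the broadband threshold, the whole block
# counts as served, even when most homes in it are not.
THRESHOLD = 25  # FCC download definition of broadband, in Mbps

blocks = {
    "block_A": [25, 3, 3, 1],    # reported download speeds at four locations
    "block_B": [100, 10, 5, 5],
    "block_C": [3, 1, 1, 1],
}

served_blocks = sum(1 for speeds in blocks.values() if max(speeds) >= THRESHOLD)
served_homes = sum(s >= THRESHOLD for speeds in blocks.values() for s in speeds)
total_homes = sum(len(speeds) for speeds in blocks.values())

print(f"Blocks counted as fully served: {served_blocks} of {len(blocks)}")      # 2 of 3
print(f"Homes actually at 25 Mbps or better: {served_homes} of {total_homes}")  # 2 of 12
```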

None of this would matter if the maps were just our best guess at the state of broadband in the country. Unfortunately, the maps are used for real-life purposes. First, the maps are used by the FCC and state legislatures to develop and support various policies related to broadband. It’s been my contention for a long time that the FCC has been hiding behind the bad maps because those maps grossly overstate the availability of rural broadband. The FCC has a good reason to do so because they are tasked by Congress with fixing inadequate broadband, and maps that overstate coverage make that job look smaller than it really is.

Recently the maps have been put to a more concrete use: defining where grants can or cannot be awarded. Used this way, the maps identify groups of homes that supposedly don’t already have adequate broadband. The maps were the basis for determining eligible areas for the CAF II reverse auction and now for the e-Connectivity grants.

This is where bad mapping really hurts. Every rural county in the country knows where broadband is terrible or non-existent. When I show the FCC maps to local politicians they are aghast at how inaccurate the maps are for their areas. The maps often show large swaths of phantom broadband that doesn’t exist. The maps will show towns that supposedly have universal 25/3 Mbps broadband or better when the real speeds in town are 10 Mbps or less. The bad maps hurt every one of these places, because if the maps were accurate these communities would be eligible for grants to help fix their poor broadband. A lot of rural America is being royally screwed by the bad maps.

Even more dismaying, the maps seem to be getting worse instead of better. For example, in the CAF II program the big telcos were supposed to bring broadband of at least 10/1 Mbps to huge swaths of rural America. A lot of the areas covered by the CAF II program are not going to see any improvement in broadband speeds. In some cases the technology being used, such as AT&T’s fixed cellular, can’t deliver the required speeds to customers who live too far from a tower. I also believe we’re going to find that in many cases the big carriers are electing to upgrade only the low-hanging fruit and are ignoring homes where the CAF upgrade costs too much. These carriers are likely to claim on the maps that they’ve made the upgrades rather than admit to the FCC that they pocketed the subsidy money instead of spending it to improve broadband.

There have been a few suggested fixes for the problem. A few states have tried to build their own, more accurate broadband maps, but they can’t get access to any better data from the ISPs. A few states are now asking citizens to run speed tests to try to map the real broadband situation, but unless the speed tests are run under specific and rigorous conditions they won’t, by themselves, serve as proof of poor broadband.

The easiest fix for the problem is staring us right in the face. Last year the FCC got a lot of complaints about the soon-to-be-awarded Mobility Fund Phase II grants. This money was to go to cellular carriers to bring cell coverage to areas that don’t have it. The FCC maps used for those efforts were even worse than the broadband maps and the biggest cellular companies were accused of fudging their coverage data to try to stop smaller rival cell providers from getting the federal money. The outcry was so loud that the FCC created a challenge process where state and local governments could challenge the cellular coverage maps. I know a lot of governments that took part in these challenges. The remapping isn’t yet complete, but it’s clear that local input improved the maps.

We need the same thing for the FCC broadband maps. There needs to be a permanent challenge process where a state or local government can challenge the maps and can supply what they believe to be a more accurate map of coverage. Once counties understand that they are getting bypassed for federal grant money due to crappy maps they will jump all over a challenge process. I know places that will go door-to-door if the effort can help bring funds to get better broadband.

Unfortunately, only the FCC can order a challenge process, and I don’t think they will even consider it unless they get the same kind of outcry that came with the Mobility Fund Phase II. It’s sad to say, but the FCC has a vested interest in burying their heads in the sand and pretending that rural broadband is okay – otherwise they have to try to fix it.

I think states ought to consider this. If a state undertakes a program to allow challenges to the map, then governors and federal legislators can use the evidence gathered to pressure the USDA to accept alternate maps for areas with poor broadband. These challenges have to come from the local level where people know the broadband story. This can’t come from a state broadband mapping process that starts with carrier data. If local people are allowed to challenge the maps then the maps will get better and will better define areas that deserve federal grants. I believe a lot of county governments and small towns would leap at the opportunity to tell their broadband story.

Looking Back at the Net Neutrality Order

Chairman Ajit Pai used three arguments to justify ending net neutrality. First, he claimed that the net neutrality rules in effect were a disincentive for big ISPs to make investments and that ending net neutrality would lead to a boom in broadband investment. He also argued that ending net neutrality would free the big ISPs to make broadband investments in rural parts of the US that were underserved. Finally, he argued that the end of net neutrality would spark the growth of telecom jobs. It’s been two years since he used those arguments to justify the repeal of net neutrality, and it’s easy to see that none of those things have come to pass.

The investment claim is easy to check. The big ISPs are starting to release their 2018 financial results, and it looks like capital spending in 2018 – the first year after the end of net neutrality – was lower than in 2017. We’ve already heard from Comcast and Charter that their capital spending was down in 2018 compared to 2017. The industry analyst firm MoffettNathanson has already predicted that capital spending for the four biggest cable companies – Comcast, Charter, Altice, and CableONE – will drop by a further 5.8% in 2019. Anybody who watches the cable companies understands that they all just made big investments in upgrading to DOCSIS 3.1 and that capital spending ought to drop significantly for the next several years.

MoffettNathanson also predicts that wireline capital spending for Verizon and AT&T will drop from $20.3 billion in 2018 to $19.6 billion in 2019. The press is also full of articles lamenting that investments in 5G by these companies are far smaller than industry vendors had hoped for. It seems that net neutrality had no impact on telecom spending (as anybody who has spent time at an ISP could have told you). It’s virtually unheard of for regulation to drive capital spending.

The jobs claim was a ludicrous one because the big companies have been downsizing for years and have continued to do so after net neutrality was repealed. The biggest layoff came from Verizon in October 2018 when the company announced that it was eliminating 44,000 jobs and transferring another 2,500 to India. That layoff amounts to an astronomical 30% of its workforce. AT&T just announced on January 25 that it would eliminate 4,600 jobs, the first part of a 3-year plan to eliminate 10,000 positions. While the numbers are smaller for Comcast, they laid off 500 employees on January 4 and also announced the closure of a facility with 405 employees in Atlanta.

Pai’s claim that net neutrality was stopping the big ISPs from investing in underserved areas might be the most blatantly false claim he has made since taking the Chairman position. The big ISPs haven’t made investments in rural America in the last decade. They have been spending money in rural America in the last few years – but only funds handed to them by the FCC through the CAF II program to expand rural broadband and the FCC’s Mobility Fund to expand rural cellular coverage. I’ve been hearing rumors all over the industry that most of the big ISPs aren’t even spending a lot of the money from those two programs – something I think will soon surface as a scandal. There is no regulatory policy that is going to get the big ISPs to invest in rural America, and it was incredibly unfair to rural America for the Chairman to imply that they ever would.

Chairman Pai’s arguments for repealing net neutrality were all false and industry insiders knew it at the time. I probably wrote a dozen blog posts about the obvious falsehoods being peddled. The Chairman took over the FCC with the goal of eliminating net neutrality at the top of his wish list and he adopted these three talking points because they were the same ones being suggested by big ISP lobbyists.

What bothers me is that this is not how regulation is supposed to work. Federal and state regulatory agencies are supposed to gather the facts on both sides of a regulatory issue, and once they choose a direction they are expected to explain why. The orders published by the FCC and other regulatory bodies act much like court orders in that the language in these orders becomes part of the ongoing record that is used later to understand the ‘why’ behind a decision. In later years courts rely on the discussion in regulatory orders to evaluate disputes arising under the new rules. The order repealing net neutrality sadly enshrines the same falsehoods that were used to justify the repeal.

There are always two sides to every regulatory issue, and there are arguments that could be made against net neutrality. However, the Chairman and the big ISPs didn’t want to make the logical arguments against net neutrality publicly because they knew those arguments would be unpopular. For example, there is a legitimate argument to be made for allowing ISPs to discriminate against certain kinds of web traffic – any network engineer will tell you that it’s nearly mandatory to give priority to some bits over others. But the ISPs know that making that argument sounds like they want the right to shuttle customers into the ’slow lane’, and that’s a PR battle they didn’t want to fight. Instead, telecom lobbyists cooked up the false narrative peddled by Chairman Pai. They hoped the public would swallow these false arguments rather than having to argue against net neutrality on the merits.

Broadening the USF Funding Base

The funding mechanism that pays for the Universal Service Fund is broken. The USF is funded from fees added to landline telephones, cell phones, and the large business data connections that are still billed using telco special access products (T1s and larger circuits). The USF fee has now climbed to an exorbitant monthly tax of 20% of the portion of those services that the FCC deems to be interstate. This equates to a monthly fee of a dollar or more for every landline phone and cellphone (the amount charged varies by carrier).
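
As a rough sketch of the arithmetic described above (the bill size and the interstate percentage in the example are purely illustrative assumptions, not any carrier's actual figures):

```python
# A rough sketch of the fee mechanics described above. The bill amount and the
# interstate share below are hypothetical placeholders; carriers determine the
# interstate portion differently, which is part of why the fee varies by carrier.
def usf_fee(monthly_bill: float, interstate_share: float, contribution_factor: float = 0.20) -> float:
    """Apply the USF contribution factor only to the interstate portion of the bill."""
    return monthly_bill * interstate_share * contribution_factor

# Example: a $40 cellphone plan where the carrier treats 25% of the bill as interstate.
print(f"USF fee: ${usf_fee(40.00, 0.25):.2f} per month")  # -> USF fee: $2.00 per month
```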

The funding mechanism made sense when it was originally created. The fee at that time was assessed on landlines and was used to build and strengthen landline service in rural America. When the USF fee was introduced, the nationwide penetration rate of landlines in urban America was over 98%, and the reasoning was that those with phone service ought to be charged a small fee to help bring phone service to rural America. The concept behind universal service is that everybody in the country is better off when we’re all connected to the communications network.

However, over time the use of the Universal Service Fund has changed drastically, and this money is now the primary mechanism the FCC uses to pay for the expansion of rural broadband. This pot of money funded the original CAF II programs for the big telcos and the A-CAM program for the smaller ones. It’s also the source of the Mobility Fund, which is used to expand rural cellular coverage.

Remember the BDAC? That’s the Broadband Deployment Advisory Committee that was created by Chairman Ajit Pai when he first took the reins at the FCC. The BDAC was split into numerous subcommittees that each looked at a specific topic. Each subcommittee issued a report of recommendations on its topic, and since then little has been heard about them publicly. But the BDAC subcommittees are still meeting and churning out recommendations.

The BDAC subcommittee tasked with creating a State Model Code has suggested broadening the funding base for the USF. This is the one subcommittee that is not making recommendations to the FCC but rather suggesting ideas that states ought to consider. The subcommittee has suggested that states establish a fee similar to the federal USF fee and use it to expand broadband in each state. Many states have already done something similar and have created state Universal Service Funds.

The recommendation further suggests that states tax anybody that benefits from broadband. This would include not just ISPs and customers of ISPs, but also the big users of the web like Netflix, Google, Amazon, Facebook, etc. The reasoning is that those who benefit from broadband ought to help pay to expand broadband to everybody. The BDAC’s recommended language has been modified a few times because the original language was so broad that almost everybody in the country would have been subject to the tax, and we’ve learned over the years that taxation language needs to be precise.

This is not the first time that this idea has been floated. Many have suggested to the FCC in the past that USF funding should be expanded to include broadband customers. Just as telephone customers were charged to fund the expansion of the telephone network, it makes sense to tax broadband customers to expand broadband. But this idea has always been shot down because early in the life of the Internet the politicians in DC latched onto the idea of not taxing the Internet. That made sense at the time, when we needed to protect the fledgling ISP industry – but the concept is now quaintly obsolete since Internet-related companies are probably collectively the world’s biggest industry and hardly need shielding from taxation.

AT&T is a member of this BDAC subcommittee and strongly supports the idea. However, AT&T’s motivations are suspect since they might be the biggest recipient of state USF funds. We saw AT&T lobbyists hijack the state broadband grant program in California and grab all of the money that would have been used to build real rural broadband in the state. The big carriers have an outsized influence in statehouses due to decades of lobbying, so there is a concern that they support this idea for their own gain rather than to spread broadband. We just saw AT&T lobbyists at the federal level sneak in language that makes it hard for the e-Connectivity grants to be used to compete against them.

But no matter how tainted the motivations of those on the BDAC committee, this is an idea with merit. It’s hard to find politicians anywhere who don’t think we should close the broadband gap, and it’s clear that it’s going to take some government support to make that happen. Currently there are a number of state broadband grant programs, but these programs generally rely on annual allocations from the legislature – something that is always used as a bargaining chip against other legislative priorities. None of these grant programs have allocated enough money to make a real dent in the broadband shortfalls in their states. If states are going to help solve the broadband gap they need to come up with a lot more money.

Setting up state USF funds with a broad funding base is one way to help solve the rural broadband divide. This needs to be done in such a way that the money is used to build the fiber infrastructure needed to guarantee broadband for the rest of the century – such funds will be worthless if the money is instead siphoned into the pockets of the big telcos. It makes sense to assess the fees on a wider base, and I can’t see any reasonable objection to charging not just broadband customers but also the big broadband-reliant companies like Netflix, Google, Amazon, and Facebook. The first state to try this will get a fight from those companies, but hopefully the idea of equity will win, since it’s traffic from these companies that is driving the need for better broadband infrastructure.

An End-run Around Broadband Regulation

In a recent article in Wired, Susan Crawford, the Harvard law professor who follows tech policy and telecom, writes about the long-term strategy of the big ISPs to avoid regulation. She discusses the ISPs’ attempt to equate some of their actions with free speech – thus removing any such actions from regulation.

The big ISPs aren’t the only big corporations adopting this strategy, which has been enabled by the Citizens United v. Federal Election Commission decision in 2010. That landmark Supreme Court decision ruled that the free speech clause of the First Amendment to the Constitution prohibits the government from regulating independent expenditures for communications by corporations – in that case, independent political spending such as election advertising. Corporations have been emboldened by the ruling to push to widen the definition of First Amendment rights for corporations. While not entirely accurate, the most common interpretation of the case is that corporations now have some of the same First Amendment rights as people, and corporations want to expand that list of rights.

The heart of the big ISP argument is that transmitting speech is protected by the First Amendment. The ISPs want to equate the act of transmitting a voice call or any transmission of data with protected speech – the same as speech between two people. Susan Crawford’s article describes the big ISP argument, and to a non-lawyer the logic is a bit hard to understand. However, what matters is that the big ISPs are hoping to get a favorable hearing of the issue should it ever make it to the Supreme Court – a ruling in their favor would effectively eliminate the possibility of regulating ISP broadband transmissions.

To anybody who is not a constitutional lawyer this seems like a silly argument. It’s clear to most of us that big ISPs can best be classified as utilities. They sell services that we think of as utility products. The degree of competition differs by market, but even where there aren’t outright telecom monopolies, we understand that the big cable companies and telcos act like oligopoly providers and don’t vigorously compete with each other on price. I think the average person believes that the big ISPs’ services ought to be regulated to some extent, since we are all aware of ways that the big ISPs have abused their customers in the past.

The big ISPs are currently enjoying the least amount of regulation they’ve ever seen. The current FCC effectively walked away from regulating broadband. While there are still telephone and cable TV regulations on the books that derive from acts of Congress, the current FCC is regulating those products in the lightest manner possible.

However, the big ISPs know this could change in a hurry. The courts upheld the prior FCC’s authority to impose net neutrality rules using Title II regulation. The ISPs understand that as administrations change, they could get a future FCC that is pro-consumer rather than pro-ISP. They also understand that a future Congress could pass new laws that provide for stricter regulation.

In fact, it’s almost inevitable that the regulatory pendulum will swing the other way – that’s how regulation of every industry has always worked. The government implements new regulations, and the companies that are regulated challenge those regulations and weaken them over time. When regulation becomes too lax, the government restarts the cycle with a new round of tougher regulations. The very nature of regulation leads to this cycle of swings between tougher and weaker rules.

ISPs are their own worst enemy, because like all monopolies they can’t seem to stop themselves from going too far in the way they treat customers. Just in recent news we saw the State of Minnesota suing Comcast for lying about hidden fees on cable bills. We also heard about the big wireless carriers selling real-time customer cellphone location data to the highest bidder, including bounty hunters, after promising the government they would stop the practice. The big ISPs (and all monopolies) are unable to police themselves because the desire for greater profits always overrides common sense – which is the primary reason we regulate big corporations.

As a consumer I feel that the current FCC has gone too far towards deregulation, and as someone who understands regulation, I’ve always assumed the pendulum would swing the other way. You have to give the big ISP lawyers credit for thinking out of the box, and they have found a tactic that they hope might remove them from the regulatory cycle. I think anybody that buys services from these big ISPs hopes that they are unsuccessful in this effort.

Minnesota Sues Comcast

Lori Swanson, the Attorney General of Minnesota, sued Comcast on December 21, seeking refunds for all customers who were harmed by the company’s alleged violations of the state’s Prevention of Consumer Fraud Act and Uniform Deceptive Trade Practices Act. The complaint details the sort of practices that we’ve come to expect from most of the big cable companies – and hopefully it serves as a warning to smaller ISPs that might be following similar practices. It’s an interesting read.

The most significant dollar complaint is that Comcast has defrauded customers about the true nature of two fees – the ‘Regional Sports Network Fee’ and the ‘Broadcast TV Fee’. These two fees now total $18.25 per month. Both fees are part of every cable package and are not optional for customers, but Comcast does not mention them when advertising its cable products. Further, Comcast customer service has repeatedly told the public that these fees are mandated by the government and are some sort of tax that is not set by Comcast.

Comcast only started charging separately for these two fees in 2014, but the size of these line items has skyrocketed on bills. In recent years the company has put a lot of its annual rate increases into these fees, allowing the company to continue to advertise low prices. The Regional Sports fee passes along the cost of Fox Sports North, and perhaps other regional sports networks. The Broadcast TV fee covers the amounts that Comcast pays local affiliate stations of ABC, CBS, FOX and NBC.

Interestingly, Comcast was previously sued over this same issue and settled the case without a verdict. As part of that suit the company promised to fix the problems, but they continued into 2017. In a pleading that is sure to displease company employees, Comcast threw its customer service reps under the bus and blamed the issue on them. Comcast argues that breaking out these fees makes it easier for customers to know what they are paying for – but the complaint cites numerous examples of new customers who were surprised at the size of the first bill they received from the company.

The complaint also says that the company often misrepresents the fees for equipment rental such as cable set-top boxes, digital adapters and broadband modems. The complaint says that for some packages these fees add 30% to the cost of the product and are not fully disclosed to customers.

The complaint also says that Comcast routinely adds unwanted fees to customer bills. Customers who are visited by Comcast field technicians, who visit a business office, or who buy from a Comcast door-to-door salesperson are often surprised to see additional products added to their bills. The complaint blames this on the practice of paying commissions to employees for sales.

The complaint notes that Comcast is well aware of these issues. The company settled an FCC complaint about the same issues in 2016 and late last year made refunds to more than 20,000 customers in Massachusetts over these same issues.

It’s not hard to verify some of these issues. If you go to the Comcast website you’ll find that it’s almost impossible to find the real cost of their cable and broadband products. The company constantly advertises low-priced specials that don’t mention the extra programming fees or the equipment fees.

This is a cautionary tale for smaller ISPs that compete with Comcast or other large cable companies. It’s always tempting to advertise cheap special prices in response to big cable company advertising. I know many smaller cable providers that have also separated out the sports and broadcast fees and who are not always fully forthcoming about equipment charges and other fees. It’s hard to watch customers leave who are lured by falsely advertised low prices – but most small ISPs have elected to deal with customers fairly as a way to differentiate themselves from the big companies.

The Huge CenturyLink Outage

At the end of December CenturyLink had a widespread network outage that lasted over two days. The outage disrupted voice and broadband service across the company’s wide service territory.

Probably the most alarming aspect of the outage is that it knocked out the 911 systems in parts of fourteen states. It was reported that calls to 911 might get a busy signal or a recording saying that ‘all circuits are busy’. In other cases, 911 calls were routed to the wrong 911 center. Some jurisdictions responded to the 911 problems by sending out emergency text messages giving citizens alternate telephone numbers to dial during an emergency. The 911 outages prompted FCC Chairman Ajit Pai to call CenturyLink and to open a formal investigation into the outage.

I talked last week to a resident of a small town in Montana who said that the outage was locally devastating. Credit cards wouldn’t work for most of the businesses in town, including the gas stations. Businesses like hotels that rely on cloud software for daily operations were unable to function. Bank ATMs weren’t working. Customers with CenturyLink landlines had spotty service and mostly could not make or receive phone calls. Worse yet, cellular service in the area largely died, meaning that CenturyLink must have been supplying the broadband circuits feeding the local cell towers.

CenturyLink reported that the outage was caused by a faulty network management card in a Colorado data center that was “propagating invalid frame packets across devices”. It took the company a long time to isolate the problem, and the final fix involved rebooting much of the network electronics.

Every engineer I’ve spoken to about this says that in today’s world it’s hard to believe it would take two days to isolate and fix a network problem caused by a faulty card. Most network companies operate a system of alarms that instantly notify them when any device or card is having problems. Further, complex networks today are generally built with significant redundancy that allows troubled components to be isolated in order to stop the kind of cascading outage that occurred in this case. The engineers all said that it’s almost inconceivable that a single component like one card in a modern network could cause such a huge problem. While network centralization can save money, few companies route their whole network through choke points – there are a dozen different strategies to create redundancy and protect against this kind of outage.
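
For what it's worth, the alarm systems the engineers describe are not exotic. Here is a minimal sketch of a polling alarm loop (the device names and the health check are invented placeholders, not CenturyLink's systems):

```python
# A minimal sketch of a network alarm loop: poll every device on a short interval
# and raise an alarm the moment a health check fails so the bad component can be
# isolated. Real NMS platforms poll SNMP or streaming telemetry instead.
import time

DEVICES = ["core-switch-1", "core-switch-2", "mgmt-card-7"]  # hypothetical inventory

def device_healthy(name: str) -> bool:
    """Placeholder health check; a real system would query the device itself."""
    return name != "mgmt-card-7"  # simulate a single failing management card

def poll_network() -> None:
    for device in DEVICES:
        if not device_healthy(device):
            # In practice this would page the NOC and trigger failover to a
            # redundant path rather than just printing a message.
            print(f"ALARM: {device} failed its health check - isolate it")

if __name__ == "__main__":
    for _ in range(3):  # a real monitoring loop runs continuously
        poll_network()
        time.sleep(5)
```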

Obviously none of us knows any of the facts beyond the short notifications issued by CenturyLink at the end of the outage, so we can only speculate about what happened. Hopefully the FCC inquiry will uncover the facts – and it’s important that it does, because it’s always possible that the cause of the outage is something that others in the industry need to be concerned about.

I’m only speculating, but my guess is that we are going to find that the company has not implemented best network practices in the legacy telco network. We know that CenturyLink and the other big telcos have been ignoring their legacy networks for decades. We see this all of the time when looking at the condition of last-mile networks, and we’ve always figured that the telcos were also not making the needed investments at the network core.

If this outage was caused by outdated technology and legacy network practices, then such outages are likely to recur. Interestingly, CenturyLink also operates one of the more robust enterprise cloud services in the country. That business got a huge shot in the arm through the merger with Level 3, with new management saying that all of their future focus is going to be on the enterprise side of the house. I have to think that this outage didn’t touch that network much and more likely affected just the legacy network.

One thing that’s for sure is that this outage is making CenturyLink customers look for an alternative. A decade ago the local government in Cook County, Minnesota – the northernmost county in the state – was so frustrated by continued prolonged CenturyLink network outages that they finally built their own fiber-to-the-home network and found alternate routing into and out of the county. I talked to one service provider in Montana who said they’ve been inundated after this recent outage by businesses looking for an alternative to CenturyLink.

We have become so reliant on the Internet that major outages are unacceptable. Much of what we do every day relies on the cloud. The fact that this outage extended to cellular outages, a crash of 911 systems and the failure of credit card processing demonstrates how pervasive the network is in the background of our daily lives. It’s frightening to think that there are poorly maintained legacy telco networks that can still cause these kinds of widespread problems.

I’m not sure what the fix is for this problem. The FCC has supposedly washed its hands of responsibility for broadband networks – so they might not be willing to tackle any meaningful solutions to prevent future network crashes. Ultimately the fix might be the one found by Cook County, Minnesota – communities finding their own network solutions that bypass the legacy networks.

How Bad is the Digital Divide?

The FCC says that approximately 25 million Americans living in rural areas don’t have access to an ISP product that would be considered broadband – currently defined as 25/3 Mbps. That number comes out of the FCC’s mapping efforts using data supplied by ISPs.

Microsoft tells a different story. They say that as many as 163 million Americans do not use the Internet at speeds that the FCC considers broadband. Microsoft might be in the best position of anybody in the industry to understand actual broadband performance because the company can see data speeds for every customer that updates Windows or Microsoft Office – that’s a huge percentage of all computer users and covers every corner of the country.

Downloading a big software update is probably one of the best ways possible to measure actual broadband performance. Software updates tend to be large files, and the Microsoft servers will transmit them at the fastest speed a customer can accept. Because the files are so large, Microsoft gets to see real ISP performance – not just the performance for the first minute of a download. Many ISPs use a burst technology that downloads relatively fast for the first minute or so but then slows for the rest of the download – a customer’s true broadband speed is the one that kicks in after the burst is finished. The burst technology has a side benefit for ISPs in that it inflates performance on standard speed tests – but Microsoft gets to see the real story.
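
A minimal sketch of that idea (this is not Microsoft's methodology; the test URL and window sizes are assumptions for illustration) is to sample throughput over the course of a long download and compare the first window to the steady state:

```python
# Sample download throughput in time windows so the initial "burst" window can be
# compared to the sustained, steady-state speed that follows it.
import time
import urllib.request

TEST_URL = "https://example.com/large-file.bin"  # hypothetical large test file

def sample_throughput(url, window_secs=10, max_secs=120):
    """Return one Mbps measurement per elapsed time window while downloading."""
    samples, window_bytes = [], 0
    start = window_start = time.time()
    with urllib.request.urlopen(url) as response:
        while True:
            chunk = response.read(64 * 1024)
            now = time.time()
            if not chunk or now - start > max_secs:
                break
            window_bytes += len(chunk)
            if now - window_start >= window_secs:
                samples.append(window_bytes * 8 / 1_000_000 / (now - window_start))
                window_bytes, window_start = 0, now
    return samples

if __name__ == "__main__":
    mbps = sample_throughput(TEST_URL)
    if len(mbps) > 1:
        print(f"First-window (burst) speed: {mbps[0]:.1f} Mbps")
        print(f"Steady-state average speed: {sum(mbps[1:]) / len(mbps[1:]):.1f} Mbps")
```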

I’ve ranted about the FCC’s broadband statistics many times. There are numerous reasons why the FCC data is bad in rural America. Foremost, the data is self-reported by the big ISPs who have no incentive to tell the FCC or the public how poorly they are doing. It’s also virtually impossible to accurately report DSL speeds that vary from customer to customer according to the condition of specific copper wires and according to distance from the DSL core router. We also know that much of the reporting to the FCC represents marketing speeds or ‘up-to’ speeds that don’t reflect what customers really receive. Even the manner of reporting to the FCC, by Census block, distorts the results because when a few customers in a block get fast speeds the FCC assumes that everyone does.

To be fair, the Microsoft statistics measure the speeds customers are actually achieving, while the FCC is trying to measure broadband availability. The Microsoft data includes any households that elect to buy slower broadband products to save money. However, there are not 138 million Americans (the difference between 163 million and 25 million) who purposely buy slow broadband. The Microsoft numbers tell us that actual speeds in the country are far worse than the FCC describes – for roughly half of us, slower than 25/3 Mbps. That is a sobering statistic, and it doesn’t just mean that rural America is getting poor broadband, but also that many urban and suburban households aren’t achieving 25/3 Mbps.

I’ve seen many real-life examples of what Microsoft is telling us. At CCG Consulting we do community broadband surveys, and we sometimes see whole communities where the speeds customers actually achieve are lower than the speeds advertised by the ISPs. We often see a lot more households claim to have no broadband or poor broadband than the FCC mapping data would suggest. We constantly see residents in urban areas complain that broadband with a relatively fast advertised speed seems slow and sluggish.

Microsoft reported their findings to the FCC, but I expect the FCC to ignore their story. This is a drastic departure from the narrative that the FCC is telling Congress and the public. I wrote a blog just a few weeks ago describing how the FCC is claiming that big ISPs are delivering the speeds that they market. Deep inside the recent reports the FCC admitted that DSL often wasn’t up to snuff – but the Microsoft statistics mean that a lot of cable companies and other ISPs are also under-delivering.

In my mind the Microsoft numbers invalidate almost everything that we think we know about broadband in this country. We are setting national broadband policy and goals based upon false numbers – and not numbers that are a little off, but numbers that are largely a fabrication. We have an FCC that is walking away from broadband regulation because they have painted a false narrative that most households in the country have good broadband. It would be a lot harder for politicians to allow broadband deregulation if the FCC admitted that over half of the homes in the country aren’t achieving the FCC definition of broadband.

The FCC has been tasked by Congress to find ways to improve broadband in areas that are unserved or underserved – with those categories being defined by the FCC maps. The Microsoft statistics tell us that there are huge numbers of underserved households, far more than the FCC is recognizing. If the FCC were to acknowledge the Microsoft numbers, they’d have to declare a state of emergency for broadband. Sadly, the FCC has instead doomed millions of homes to poor broadband by declaring them already served with adequate broadband – something the Microsoft numbers say is not true.

The current FCC seems hellbent on washing their hands of broadband regulation, and the statistics they use to describe the industry provide the needed cover for them to do so. To be fair, this FCC didn’t invent the false narrative – it’s been in place since the creation of the national broadband maps in 2009. I, and many others, predicted back then that allowing the ISPs to self-report performance would put us right where we are today – with statistics that don’t tell the true story. Microsoft has now pulled back the curtain – but is there anybody in a position of authority willing to listen to the facts?

FCC Urban Rate Survey

The FCC collects retail prices annually from urban carriers for landline telephone and broadband services. These prices are used to determine benchmark rates for rural areas for incumbent local exchange rate-of-return carriers, incumbent price-cap carriers receiving CAF Phase II support, recipients of the Rural Broadband Experimental grants, and winners of the recent Connect America Fund Phase II Auction.

I find it ironic that the FCC says they no longer regulate broadband, yet they still define maximum broadband rates allowed for various classes of carriers. The fact is that there are still numerous ways that the FCC is regulating broadband and since many of these mandates come from Congress the FCC will never be able to back out of broadband regulations entirely.

The FCC publishes spreadsheets summarizing the rates they collected. The voice benchmarks define the highest and lowest rates that the affected carriers are allowed to charge. Starting in 2019 the lowest rate that can be charged for residential voice is $26.98 and the highest is $51.61.

The following table shows the residential broadband rates listed by AT&T in North Carolina, where I live. The rates listed are non-discounted rates, and many customers pay less due to bundling or to negotiating a lower rate. It is striking to me that AT&T charges $70 per month for a 10/1 Mbps connection on DSL and also for a 100/100 Mbps connection on fiber. This is one of the issues that has rural customers up in arms – they pay high prices for less performance, particularly considering that they often receive only a fraction of the published speeds shown in the table. It’s also worth noting that AT&T has a monthly data cap on every product other than its symmetrical gigabit product.

Download (Mbps)   Upload (Mbps)   Data Cap (GB)   Price   Technology
3                 0.384           150             $56     DSL
5                 1               1,000           $60     DSL
5                 5               1,000           $60     FTTP
6                 0.512           150             $61     DSL
10                1               1,000           $70     DSL
18                1.5             1,000           $70     DSL
25                25              1,000           $70     DSL
25                5               1,000           $70     FTTP
25                25              1,000           $60     FTTP
50                50              1,000           $70     DSL
50                10              1,000           $70     DSL
50                50              1,000           $70     FTTP
75                20              1,000           $70     DSL
100               20              1,000           $70     DSL
100               100             1,000           $70     FTTP
300               300             1,000           $90     FTTP
1,000             1,000           Unlimited       $100    FTTP

The benchmarks for broadband are extremely high and it’s doubtful that many carriers are even trying to charge the rates shown in the table below. There are separate rate caps calculated for Alaska and the rest of the US.

Download (Mbps)   Upload (Mbps)   Capacity Allowance (GB)   2019 US ($)   2019 AK ($)
4                 1               200                       66.12         113.19
4                 1               Unlimited                 70.76         119.06
10                1               200                       72.31         121.54
10                1               Unlimited                 77.30         127.75
25                3               200                       77.65         129.52
25                3               Unlimited                 82.66         135.75
25                5               200                       78.49         129.78
25                5               Unlimited                 83.50         136.01
50                5               Unlimited                 100.85        153.64
100               10              Unlimited                 106.23        161.16
250               25              Unlimited                 128.69        203.67
500               50              Unlimited                 148.35        223.87
1,000             100             Unlimited                 162.33        232.38

This is one of those exercises the FCC must go through that seems largely meaningless. They set a really high rate cap for carriers that participate in various FCC subsidy programs – realistically it’s unlikely that many carriers would want to charge more than $100.85 for a 50/5 Mbps connection, but if one did, customers would have legal recourse. What’s more valuable from this exercise is seeing the list prices of the larger urban ISPs – something that’s getting harder to find online.

The FCC Looks at 911

The FCC recently released its tenth annual report to Congress on the collection and distribution of 911 fees nationwide. The report includes a number of interesting statistics, a few of which are listed below.

But first I’d like to look backwards a bit, because we now take 911 for granted, but it hasn’t always been so. 911 has been implemented during my adult lifetime. The idea of having an emergency phone number was first introduced in 1967 by Lyndon Johnson’s Commission on Law Enforcement. AT&T selected the 9-1-1 digits the following year. An independent telco, the Alabama Telephone Company, leaped on the concept and introduced 911 in Haleyville, Alabama in 1968 – but it then took decades for implementation nationwide, since this was deemed a local issue to be implemented by local governments. I recall the introduction of 911 in the DC suburbs in the mid-70s, accompanied by a flurry of radio, newspaper and TV ads to inform the public of the new safety service. There were major metropolitan areas like the Chicago suburbs that didn’t get 911 until the early 1980s.

911 service has been enhanced over the years. For example, by 2015, 96% of homes in the US were covered by E-911 (enhanced 911), where the 911 operator knows the caller’s location – from the phone number for landlines or by triangulating cell sites for mobile phones. Currently 911 systems are upgrading to NG911 (next-generation 911), which ties 911 systems into broadband so they can relay text messages, photos and videos as part of the 911 process.

Some of the interesting statistics from the FCC report:

  • In 2017 almost $3 billion was collected in 911 fees to fund local 911 efforts. The total cost to provide 911 was reported at $4.8 billion, with 911 services in many states also funded partially by tax revenues.
  • States collect 911 fees in different ways, including flat rates per telephone or cellular line, percentages of telecommunications bills, and flat rates per subscriber. Fees vary widely and range from $0.20 per residential landline in Arizona to $3.34 per cell phone in West Virginia. Some states charge even more for business landlines.
  • Most states use the 911 fees to fund 911 service, but six states – including Montana, New Jersey, New York, Rhode Island and West Virginia – use some of their 911 fees for non-safety purposes or even just to feed the general funds of the state. In total, $284 million was diverted from collected 911 fees.
  • Thirty-five states, Puerto Rico and the District of Columbia have begun the process of upgrading to NG911.
  • Sixteen states have deployed statewide Emergency Services IP Networks (ESInets) for exclusive use of public safety agencies.
  • Thirty states, Guam, Puerto Rico and the US Virgin Islands have not taken any steps for cybersecurity for 911 centers (PSAPs).
  • There are 5,232 PSAPs in the country. These range from tiny centers in sheriff stations in rural counties to massive 911 centers in major metropolitan areas. For example, Washington DC has one PSAP while there are 586 in Texas.
  • 1,381 PSAPs now have the ability to communicate with the public by text message. Another 1,103 PSAPs were slated to implement that capability during 2018.
  • There were over 39,000 operators employed to take 911 calls in 2017.
  • Only 44 states reported 911 call volumes, and in those states there were over 211 million calls to 911. Over 70% of those calls now come from cellular phones.

I know it’s easy to hate regulation, but without it we wouldn’t have a 911 system that works so well. People in most of the country feel a lot safer knowing they can dial 911 and get help when needed.

A Corporate Call for Privacy Legislation

Over 200 of the largest companies in the country are proposing a new set of national privacy laws that would apply to large companies nationwide. They are pushing to have this considered by the upcoming Congress.

The coalition includes some of the largest companies in Silicon Valley like Apple and Oracle, but it doesn’t include the big three of Facebook, Google and Amazon. Among the other big businesses included in the group are the largest banks like Bank of America and Wells Fargo, big carriers like AT&T, and big retailers like Walmart.

As you might expect, a proposed law coming from the large corporations would be favorable to them. They are proposing the following:

  • Eliminate Conflicting Regulations. They want one federal set of standards. States have developed differing standards for privacy and for issues like defining sensitive information. There are also differing standards by industry, such as for medical, banking and general corporations;
  • Self-regulation. The group wants the government to define the requirements that must be met but doesn’t want specific methodologies or processes mandated. They argue that there is a history of government technical standards becoming obsolete before they are even published;
  • Companies Can Determine the Interface with Consumers. The big companies want to decide how many rights to give their customers. They don’t want mandates defining how customer data can be used or requiring consumer consent to use data. They don’t want mandates giving consumers the right to access, change or delete their data;
  • National Standard for Breach Notification. They want federal, rather than differing state rules on how and when a corporation must notify customers if their data has been breached by hackers;
  • Put the FTC in Charge of these Issues. They want the FTC to enforce these laws rather than state attorneys general;
  • Apply the Laws Only to Large Corporations. They don’t want rigid new requirements imposed on small businesses that don’t process much personal data.

There are several reasons big companies are pushing for legislation. There are currently different privacy standards around the country due to actions brought by various state attorneys general, and the companies would like to see one federal standard. But as with most legislation, the primary driver is monetary. Corporations are seeing some huge hits to the bottom line as a result of data breaches, and they hope that having national rules will provide a shield against damages – they hope that a company that meets federal standards would be shielded from large lawsuits after data breaches.

I look at this legislation both as a consumer and as somebody working in the small carrier industry. With my consumer hat on there are both good and bad aspects of the proposed rules. On the positive side a set of federal regulations ought to be in place for a complex issue that affects so many different industries. For example, it is hard for a corporation to know what to do about a data breach if they have to satisfy differing rules by state.

But the negatives are huge from a consumer perspective. It’s typical political obfuscation to call this a privacy law, because it doesn’t provide any extra privacy for consumers. Instead it would let each corporation decide what it wants to disclose to the public and how it uses consumer data. A better name for the plan might be the Data Breach Lawsuit Protection Act.

There are also pros and cons in this for small carriers. I think all of my clients would agree that we don’t need a new set of regulations and obligations for small carriers, so small carriers will favor the concept of excusing smaller companies from some aspects of the regulations.

However, all ISPs are damaged if the public comes to distrust ISPs because of the behavior of the largest ones. Small ISPs already provide consumer privacy. I’ve never heard of a small ISP that monitors customer data, let alone one that is trying to monetize its customers’ data. Small ISPs already afford significant privacy rights to customers compared to the practices of AT&T, Verizon or Comcast, which clearly view customer data as a valuable asset to be exploited rather than something to protect. The ISP industry as a whole would benefit by having rules that foster greater customer trust.

I’m not sure, however, that many small ISPs would automatically notify customers after a data breach – it’s a hard question for every corporation to deal with. I think customers would trust us more if there were clear rules about what to do in the case of a breach. This proposed law reminds me that this is something we should already be talking about because every ISP is vulnerable to hacking. Every ISP ought to be having this conversation now to develop a policy on data breaches – and we ought to tell our customers our plans. Small ISPs shouldn’t need a law to remind us that our customers want to trust us.