Categories
Regulation - What is it Good For?

FCC Considering New Rules for Data Breaches

Back in January of this year, the FCC issued a Notice of Proposed Rulemaking in WC Docket No. 22-21 that proposes to change the way that ISPs and carriers report data breaches to the FCC and to customers. The proposed new rules would modify some of the requirements of the customer proprietary network information (CPNI) rules that were originally put into place in 2007.

Since the 2007 CPNI order, all fifty states have adopted their own versions of data breach rules, as have federal agencies like the Federal Trade Commission, the Cybersecurity and Infrastructure Security Agency, and the Securities and Exchange Commission. The FCC is hoping to strengthen the rules on reporting data breaches since it recognizes that breaches are increasingly common and can be damaging to customers.

The FCC completed a round of initial and reply comments by the end of March 2023 but is not expected to issue a final order before the end of this year.

The current FCC rules for data breaches require carriers to notify law enforcement within seven days of a breach using an FCC portal that forwards a report to the Secret Service and the FBI. After a carrier has notified law enforcement, it can opt to notify customers, although that is not mandatory. One of the reasons this docket was initiated is that carriers have kept quiet about some major data breaches. The new rules would require carriers to provide additional information to the FCC and law enforcement. The new requirements also eliminate any waiting period, and carriers would be required to notify law enforcement and customers “without unreasonable delay”. The only exception to rapid customer notification would be if law enforcement asks for a delay.

The FCC is proposing new reporting rules that it says will better protect consumers, increase security, and reduce the impact of future breaches. There was a lot of pushback from carriers in comments to the docket that centered on two primary topics – the definition of what constitutes a data breach, and the requirement of what must be told to customers.

The FCC wants to expand the definition of data breach to include the inadvertent disclosure of customer information. The FCC believes that requiring the disclosure of accidental breaches will incentivize carriers to adopt more strenuous data security practices. Carriers oppose the expanded definition since disclosure would be required even when there is no apparent harm to customers.

Carriers also oppose the quick notification requirements. Carriers argue that it takes time to understand the breadth and depth of a data breach and to determine if any customers were harmed. Carriers also need to work immediately after discovering a breach to contain and stop the problem.

Carriers are opposed to the FCC suggestions of what must be disclosed to customers. The FCC wants to make sure that customer notices include everything needed for customers to react to the breach. Carriers say that assembling the details by customer will take too long and could leave customers open to further problems. Carriers would rather make a quick blanket announcement instead of a detailed notice to specific customers.

One of the interesting nuances of the proposed rules is that there would be two types of notifications required – one for inadvertent leaks and another for what the FCC calls a harms-based notification. This would require a carrier to notify customers based on the specific harm that was caused. Carriers were generally in favor of the harms-based approach but didn’t want to confuse customers by notifying them of every inadvertent breach that doesn’t cause any harm.

Consumer advocates opposed allowing only the harm-based trigger, because it allows a carrier to decide when a breach causes harm. They fear that carriers will under-report harm-based breaches.

These rules would apply to all ISPs and carriers, regardless of size. While it might still be some months before any new rules become effective, small ISPs ought to use this impending change as a reason to review data security practices and the ability to notify customers.

Increasing the ACP Subsidy

I’m puzzled by the recent change to the Affordable Connectivity Program (ACP). The FCC recently implemented an increase in the monthly ACP subsidy in qualifying high-cost areas from $30 to $75. The reason for the change is easy to understand – it was codified in the Infrastructure Investment and Jobs Act. The legislation required that ACP payments be higher in areas of the country designated as high-cost.

The NTIA has been working with State Broadband Offices to designate the high-cost areas in each state – because such areas are also eligible for special treatment and consideration in the upcoming BEAD grants. Now that high-cost areas are being defined, the FCC can implement the legislatively mandated ACP change.

What puzzles me is why this was in the legislation. The concept seems to be that areas with higher costs need additional support. To quote the recent FCC order on the increase, “the $75 monthly benefit would support providers that can demonstrate that the standard $30 monthly benefit would cause them to experience “particularized economic hardship” such that they would be unable to maintain part or all of their broadband network in a high-cost area”.

I agree with the concept that areas with particularly high costs might need some kind of broadband subsidy. For example, this is a big piece of the rationale for subsidy programs like ACAM.

But the extra ACP subsidy doesn’t help ISPs. ISPs use the ACP program to discount customer rates and then get reimbursed for the customer discount from the ACP funding provided by Congress. Whether the discount is $30 or $75, this is a net wash for the ISP. None of this support goes to the ISP and all of the benefit flows directly to the customer. It appears to me that the folks who wrote the legislation thought the ACP benefits ISPs and not low-income households.
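The net-wash point can be shown with a quick sketch (the $80 monthly rate below is a hypothetical number for illustration, not from the FCC order):

```python
# Hypothetical broadband plan priced at $80/month. The ISP discounts the
# customer's bill by the ACP subsidy, then is reimbursed that same amount
# from the ACP fund -- so the ISP's total collection never changes.
RATE = 80.0

def isp_monthly_revenue(acp_subsidy: float) -> float:
    reimbursement = min(acp_subsidy, RATE)   # ACP pays up to the billed rate
    customer_payment = RATE - reimbursement  # the remainder comes from the customer
    return customer_payment + reimbursement

print(isp_monthly_revenue(30.0))  # 80.0
print(isp_monthly_revenue(75.0))  # 80.0
```

Either way the ISP collects the same $80 – the entire benefit of the larger subsidy flows to the customer.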

I have a hard time rationalizing why this extra discount is only given in high-cost areas. Isn’t a low-income household located elsewhere just as worthy of extra help?

I guess you can make the argument that having a larger discount will make it easier to add more low-income customers to the network – and that would improve revenues for a rural ISP.

But realistically, having a higher customer discount also puts an ISP at greater risk if the ACP subsidy ever stops. The ACP discount only applies to customers who can demonstrate they are low-income or that they take part in one of several social programs. If a customer is getting free broadband because of a $75 ACP subsidy, is that customer going to be able to suddenly start paying for broadband if the ACP subsidy ends? That’s a valid question to ask since it looks like the ACP fund will run out of money some time in the second quarter of next year.

This extra subsidy would make a little more sense if ACP was a permanently funded program. But it seems like a rural ISP can be badly harmed if it relies on ACP and suddenly loses a lot of customers if the ACP fund runs dry.

I’m sure that the folks who drafted this requirement had good intentions, and some of the envisioned benefit might materialize if ACP is permanently funded. With a permanent ACP, ISPs in high-cost areas could justify making the effort to connect low-income households to broadband. But I have to advise ISPs not to aggressively pursue getting folks connected to the $75 ACP subsidy because the ISP stands to lose most such customers if the ACP program ends. There is a fixed cost to add a new customer to the network, and an ISP adding a new customer today won’t even recover that initial cost if the ACP subsidy ends early next year.
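The payback risk can be sketched with simple arithmetic (the install cost and monthly margin below are illustrative assumptions, not industry figures):

```python
import math

# Hypothetical figures: $300 to connect a new customer (drop, equipment,
# truck roll) and a $25/month margin on that customer after operating costs.
INSTALL_COST = 300.0
MONTHLY_MARGIN = 25.0

# Months of margin needed to recover the up-front cost of the connection.
payback_months = math.ceil(INSTALL_COST / MONTHLY_MARGIN)

# If the ACP fund runs dry sooner -- say in 8 months -- and the customer
# drops service, the ISP never recovers the install cost.
months_until_acp_ends = 8
shortfall = INSTALL_COST - MONTHLY_MARGIN * months_until_acp_ends

print(payback_months, shortfall)  # 12 100.0
```

Under these assumed numbers, an ISP needs a customer to stay connected for a year just to break even on the connection cost.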

Perhaps the folks who inserted this language into the IIJA assumed that ACP would be so beneficial that Congress would permanently fund it past the end of the IIJA funding. But unless that commitment is made soon by Congress, I find it impossible to advise small ISPs to enroll new ACP customers.

Protecting Broadband Customer Data

At the end of July, the FCC proposed a $20 million penalty against Q Link and Hello Mobile for not complying with the Customer Proprietary Network Information (CPNI) rules. The FCC concluded that the two companies violated the CPNI rules when they failed to protect confidential user data. The companies both had security flaws in their apps that allowed outside access to customer account information.

Today’s blog is not about these two carriers, although their security measures must be terrible to invite fines of that magnitude. Today’s blog will use these fines to highlight that there are still stringent privacy rules in place for voice providers, but nothing similar for broadband. Other than perhaps triggering an investigation from the Federal Trade Commission for allowing leaks of broadband customer information, there are no specific prohibitions in place to stop ISPs from misusing customer data.

There is an interesting history of regulations for the protection of broadband customer information. The FCC, under Chairman Tom Wheeler, had implemented CPNI rules for broadband in 2016 along with other broadband regulations like net neutrality. These regulations went into effect near the end of 2016 and included a provision to allow customers to opt in or out of allowing an ISP to use and share their personal data.

In 2017, Congress eliminated the CPNI protections for broadband in response to a request by FCC Chairman Ajit Pai. Pai argued that it wasn’t fair to enforce privacy rules on big ISPs that weren’t also required for web companies like Google and Facebook. He also argued that CPNI rules made no sense after the Pai FCC eliminated Title II regulation, a change that reclassified broadband as an information service rather than a telecommunications service. Congress used the Congressional Review Act to eliminate the CPNI requirement along with other broadband regulations, and the FCC implemented the change in September 2017.

This has resulted in an unusual regulatory environment where two cellular carriers can be heavily penalized for not protecting customer data while ISPs cannot.

Telephone companies routinely capture details of customer calling – who you call and who calls you. This is familiar to anybody who’s seen a TV crime show, since one of the first things detectives do is ask to see the telephone calling records of a suspect. Telephone companies can’t release this information without a warrant. CPNI rules also require phone companies to keep other customer data secure, such as billing records and credit card numbers. Telephone companies are even prohibited from marketing their own products to customers if a customer opts out of such marketing.

The 2016 privacy rules that were in place for only a short time implemented the same sort of privacy protections as voice, but customers were also given the choice to allow or deny access to their records. ISPs gather a lot more data about customers than telephone companies. For example, an ISP can know every website you visit if you use its DNS servers, since DNS is what translates a web name into the address that connects you to a website. There are numerous other things an ISP can know about a customer if it chooses to look deeper into the packets between users and websites.

ISPs I know aren’t worried about these issues because they don’t share customer information. They don’t record details of customer broadband transactions, and they try hard to keep information like credit card numbers safe from hackers. But I don’t think anybody believes the largest ISPs when they say that they don’t monetize information from customer data, particularly since, with current rules, there is no restriction against them doing so. The big ISPs don’t want any restrictions on what they do with customer data and any revenue streams that might come from selling data, and in today’s regulatory world, they are largely getting what they want.

The Future of Broadband Maps

I read that an AI expert at a workshop hosted by the FCC and the U.S. National Science Foundation suggested that AI could be used to produce better broadband maps. I had to chuckle at that idea.

The primary reason for my amusement is that FCC maps are created from self-reported broadband coverage and speeds by the many ISPs in the country. ISPs have a variety of motivations for how and why they report data to the FCC. Some ISPs try to report accurate speeds and coverage. People may be surprised by this, but some of the biggest telcos, like CenturyLink and Frontier, seem to have gotten better at reporting DSL speeds – in some markets, you can find DSL capability being reported at a dozen different speeds to reflect that DSL speeds vary by the distance from the central office.

Other ISPs take the exact opposite approach and report marketing speeds that are far in excess of the capability of the technology being deployed. It’s not hard to find WISPs claiming 100 Mbps to 300 Mbps download capability when they are delivering speeds in the 10 Mbps to 30 Mbps range. My guess is that some of these ISPs are using the FCC maps as an advertisement to get customers to call them after looking at the FCC map. Some ISPs have already been accused of over-reporting speeds to try to block grant money from overbuilding them.

There are also endless examples of ISPs reporting coverage that doesn’t exist. The FCC mapping rules say that only locations that can be served within ten business days should be included in broadband coverage areas, and many ISPs are claiming much larger areas than they can serve quickly. Even worse, some ISPs claim coverage in areas that they can’t serve, such as when WISPs claim coverage of homes that are blocked from line-of-sight by hills or other impediments.

The only way that AI could be used to improve the maps is if the FCC gets serious about mapping and changes some rules, and enforces others. The FCC would have to eliminate the ability of ISPs to claim marketing speeds, which provides easy cover for overstating capabilities. The FCC would also have to get serious about enforcing coverage to meet the 10-day installation rule. If those two changes were made and enforced, the FCC might be able to use AI to improve the maps. AI could match claimed ISP coverage to speed test data and also reference and compare coverage to complaints and challenges from consumers. I don’t see the FCC ever being willing to get that aggressive with ISPs – because this process would be extremely contentious.
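As a sketch of what such a cross-check might look like (all of the data below is invented for illustration), even simple analytics could flag areas where the claimed speed far exceeds what consumers actually measure:

```python
from statistics import median

# Invented sample data: ISP-claimed download speed vs. consumer speed
# tests (Mbps) for a few hypothetical census block groups.
claims = {"blockA": 300, "blockB": 100, "blockC": 25}
speed_tests = {
    "blockA": [12, 18, 25, 15],   # claims 300 Mbps, delivers ~17
    "blockB": [85, 95, 110, 90],  # roughly as claimed
    "blockC": [22, 24, 26],       # roughly as claimed
}

def flag_overstated(claims, tests, ratio=2.0):
    """Flag blocks where the claimed speed is more than `ratio` times
    the median of consumer speed tests."""
    return [b for b, claimed in claims.items()
            if claimed > ratio * median(tests[b])]

print(flag_overstated(claims, speed_tests))  # ['blockA']
```

An AI model could layer consumer complaints and challenge data on top of a comparison like this, but the underlying claimed-versus-measured check is the heart of it.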

I don’t believe any of this will ever happen because after the wave of BEAD funding is finally spent, the FCC and everybody else is going to lose interest in the broadband maps. Nobody will care if some ISP overstates capabilities in an area as long as the BEAD winner is going to bring faster broadband.

There are already a number of State Broadband offices that are saying that the BEAD allocations are not going to be enough to fix broadband everywhere. My prediction is that states that care about fixing the remaining places will create their own broadband maps and will go back to ignoring the FCC maps.

The FCC won’t care. At the point where it can say with a straight face that 95% of homes will be able to buy broadband that meets the FCC’s definition of broadband, the FCC is going to declare the job done. For the last decade, the FCC has issued annual broadband reports to Congress saying that the state of broadband is good and improving – all based upon maps that everybody knew grossly overstated both broadband speeds and coverage. I can’t see the FCC putting extra effort into proving that there are still homes left without good broadband.

New FCC Role – Device Security

Depending upon the survey you believe, U.S. homes have an average of thirteen to twenty-two connected devices. That can range from computers, TVs, security cameras, game boxes, and baby monitors – it’s a huge list these days. A concern for anybody with connected devices is that somebody will hack them and cause problems in the home. I’ve seen many articles that describe how people have hacked home cameras to watch families or hijacked computers for various nefarious reasons.

The White House announced a new initiative in July that would create a certification for connected devices that meet cyber safety standards. The authority to handle this program was given to the FCC. Under the label of the U.S. Cyber Trust Mark, device makers can send devices to the FCC to be certified as meeting basic security standards. This is similar to the Energy Star efficiency sticker that comes with home appliances.

This is a voluntary program for device makers, but the hope is that companies will seek the approval label to be able to more easily market their products.

The next step for the FCC will be to open a rulemaking to determine the devices that are eligible for the certification and the standards that must be met. During the announcement of the initiative, FCC Chairwoman Jessica Rosenworcel mentioned devices that might apply, like smart refrigerators, microwaves, thermostats, fitness trackers, and baby monitors. It’s likely that many other kinds of devices will be added to the list. The FCC says it will work closely with the National Institute of Standards and Technology (NIST) to create the cyber standards.

NIST has developed a Profile of the IoT Core Baseline for Consumer IoT Products. That NIST document says that connected devices should have features like the following:

  • A clear way to identify the specific device, such as a device serial number.
  • The ability to change the configuration of a device and to be able to reset it to the default security settings.
  • Devices should protect stored data and encrypt or otherwise secure transmitted data.
  • A device should give access to settings only to authorized users.
  • A device should have the ability to receive, verify, and apply software updates.
  • A device should be cybersecurity aware and have the ability to detect and capture evidence of any changes to software or security settings.
  • Manufacturers of connected devices should have full documentation of the security measures present.
  • The product developer should be able to receive and respond to queries about cybersecurity from device users.
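One of the baseline items – receiving and verifying software updates before applying them – can be sketched in a few lines. This example uses a shared-secret HMAC for brevity; a real device would more likely verify a vendor’s public-key signature, and the key and firmware bytes here are purely illustrative:

```python
import hashlib
import hmac

# Illustrative shared secret provisioned at manufacture (not a real scheme).
DEVICE_KEY = b"example-provisioning-key"

def verify_update(firmware: bytes, tag: bytes) -> bool:
    """Accept a firmware image only if its HMAC-SHA256 tag checks out."""
    expected = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()
    # compare_digest avoids leaking information through timing differences
    return hmac.compare_digest(expected, tag)

firmware = b"firmware-v2.1"
good_tag = hmac.new(DEVICE_KEY, firmware, hashlib.sha256).digest()

print(verify_update(firmware, good_tag))                # True
print(verify_update(b"tampered-" + firmware, good_tag)) # False
```

The point of the NIST requirement is exactly this: a device should refuse any update whose integrity it cannot verify.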

Security experts have been making similar recommendations for many years and have requested that the government create and enforce standards. Since Congress has never passed a law about device security, a voluntary process sounds like a good first step to get this started.

Chairwoman Rosenworcel said she hoped the agency could develop standards by the end of 2024. The proceeding to determine how this should work ought to be interesting reading.

Like with everything at the FCC, I have to wonder how this gets funded. I would expect that the fees charged to those seeking the certification would cover the cost.

Who’s In Charge of Broadband?

On July 24, the FCC authorized a new subsidy program, Enhanced A-CAM (Alternate Connect America Cost Model). This program will extend subsidies to small, regulated telephone companies at a cost of about $1.27 billion per year for ten years. The subsidy will be paid from the FCC’s Universal Service Fund.

The funding requires recipients to deploy voice plus broadband with speeds of at least 100/20 Mbps to 100% of the areas covered by the subsidy within four years. The order is technology neutral, so telcos could elect to meet this requirement with fiber or with licensed fixed wireless technology.

According to Mike Conlow, this order will bring broadband to almost 583,000 unserved or underserved locations that are already covered by the NTIA’s BEAD grant footprint. Today’s blog talks about the absurdity of the FCC making this announcement only weeks after the NTIA announced the distribution of the $42.5 billion in BEAD funds to states. This means that two federal agencies announced funding to cover the identical half-million locations within a month of each other.

Think about what this means. A state that has some of these A-CAM locations was allocated BEAD grant money to bring broadband to these areas. The FCC order then directly funds broadband construction to the same passings. This means that a state with a lot of unserved and underserved A-CAM passings is getting a funding windfall. Conlow estimated that this double funding brings a windfall of $180 million to Nebraska – the state with the most unserved and underserved A-CAM locations. The downside is that if Nebraska and other states are getting a windfall from the FCC decision, then other states are receiving less BEAD funding than they would have if these locations had been excluded from BEAD before the NTIA allocated the $42.5 billion.

The FCC’s A-CAM order was released only three weeks after the NTIA announced the BEAD allocations to states. There is no way that the FCC didn’t do this deliberately. The FCC could have asked the NTIA to take these locations out of the BEAD process so that the $42.5 billion would have been allocated fairly.

Two years ago, the Biden administration directed the FCC, the NTIA, and the USDA to coordinate everything associated with federal funding for broadband. The FCC’s actions with this decision are the exact opposite of coordination.

I speculate that the FCC did this to reclaim relevance in the discussion of who is helping America solve the rural broadband gap. The FCC has taken a lot of criticism in recent years for botching the RDOF funding process and handing out wasted billions to the big telcos in the CAF II subsidies. The FCC was also largely cut out of BEAD, the biggest effort ever to solve the rural broadband gap, and that had to sting. The FCC can now say to the folks living in the A-CAM areas that it, rather than the NTIA, provided the funding to bring better broadband. I’m picturing FCC ribbon cuttings for projects that launch fiber in these areas. I can’t think of any other reason that this order would have been released so soon after the NTIA announcements of BEAD funding for each state.

The NTIA should react to this announcement by reallocating the BEAD funding to states because for every state that got a windfall like Nebraska from the FCC’s A-CAM order, other states received less BEAD funding. Unfortunately, reopening the allocation process could open a can of worms, so that likely won’t happen.

In my mind, the FCC has become a loose cannon due to its control of the Universal Service Fund. The USF for all practical purposes is a big slush fund that gives the FCC the ability to tackle anything it wants, outside of any control by Congress or the White House. After this announcement, it wouldn’t shock me to see the FCC announce another round of RDOF funding in the middle of the BEAD grant process next year.

Cybersecurity for Schools

FCC Chairwoman Jessica Rosenworcel recently asked the other FCC Commissioners to support a proposal to spend $200 million over three years to bolster school cybersecurity. Rosenworcel plans to issue a Notice of Proposed Rulemaking (NPRM) soon for her proposal. The NPRM will set off a round of public comments and then a ruling if a majority of the Commissioners agree with the final set of rule changes.

There seems to be some need for better school security systems. According to Emsisoft, a New Zealand-based antivirus and anti-malware company, there were ransomware attacks on 44 U.S. universities and colleges and 45 school districts in 2022. That was up slightly from the 88 attacks in 2021. According to Emsisoft, school IT networks are a popular target since they have less security and less-trained staff than corporations.

This announcement immediately raised the question for me of why the FCC is considering this. The U.S. Department of Education has a 2023 budget of $79.6 billion. I can’t help but wonder why school and university cybersecurity is not the responsibility of the USDE, state governments, or local school systems rather than the FCC.

Rosenworcel is proposing that this effort be funded from the Universal Service Fund, specifically through the recently launched Learn Without Limits initiative, part of the E-Rate program that subsidizes broadband connections for schools with a high percentage of low-income students. According to Rosenworcel’s press release, this could be done without undermining E-Rate’s primary mission of promoting digital equity for schools.

The E-Rate program is perhaps the most popular program at the FCC since it helps poor school districts afford gigabit broadband connections. I can see why the FCC wants to ride that wave of popularity. Rosenworcel has made other interesting proposals recently that would also come from the E-Rate program.

For example, Rosenworcel recommended that E-Rate be used to provide mobile hotspots on school buses. That seems to be an extension of bringing broadband to schools, to bring broadband to students who have long bus rides. She’s also recommended that E-Rate be used to provide Wi-Fi hotspots for students and library patrons. This also extends broadband to students but seems to be in competition with the funding from the Infrastructure Investment and Jobs Act, which is providing billions of dollars for digital equity that would also provide money for hotspots.

The main reason this raises an issue for me is that the Universal Service Fund is funded with an ever-increasing fee burden on voice lines and interstate broadband services. There has been widespread unhappiness with the FCC USF fees. There doesn’t seem to be any appetite at the FCC to let the size of the Universal Service Fund shrink when it makes sense. Instead, the FCC keeps finding new ways to spend the pot of money.

While cybersecurity for schools seems like an important function, cybersecurity is not broadband. If the FCC can sink money into cybersecurity in this manner, then what’s next – money for training for school system IT employees? I’m sure I’ll get some negative comments about my position, but I am not against somebody helping schools with cybersecurity issues. I just can’t see why this is the responsibility of the FCC.

The Gigi Sohn FCC Nomination

Gigi Sohn was recently interviewed by The Verge and discussed her nomination process to become the fifth FCC Commissioner. It’s a fascinating read about the process of being nominated for a position that must be confirmed by the Senate. From the day she was nominated, Sohn was not allowed to talk in public about issues before the FCC during the 500 long days she was under consideration. She was first nominated for the position back in October 2021 and only recently withdrew as a candidate. As she describes it, it’s a dreadful process for anybody to go through.

Today’s blog isn’t specifically about Sohn but rather about the FCC and who controls it. In my opinion, Sohn was ultimately not considered because she has spent her career as an advocate for the public and for taking positions that favored small corporations and startups over giant corporations. It’s disheartening that somebody who has sided with the small guy over giant corporations in the past has little or no chance of becoming an FCC Commissioner.

By definition, regulators are supposed to be neutral arbiters that represent both the public and the industries they regulate. An FCC Commissioner (or a regulator at any federal agency) is supposed to equally look out for citizens as well as industries. It’s often been said that perfect regulation is one that leaves both sides a little unhappy because it considers what is best for both the public and industries in every decision.

I’ve written before about regulatory capture. Regulatory capture is an economic principle that describes a situation where regulatory agencies are dominated by the industries they are supposed to be regulating. Economic theory says that an industry has succumbed to regulatory capture when regulators predominantly side with the industries being regulated instead of the general public.

The concept of regulatory capture was proposed in the 1970s by George Stigler, a Nobel prize-winning economist. He described the characteristics of regulatory capture as follows. This list describes current broadband regulation to a tee.

  • Regulated industries devote a large budget to influence regulators at the federal, state, and local levels. The general public does not have the resources to effectively lobby the public’s side of issues.
  • Regulators tend to come from the regulated industry, and they tend to take advantage of the revolving door to return to industry at the end of their stint as a regulator.
  • In what Stigler labeled as corruption, regulators tend to automatically side with the industries they are supposed to be regulating.
  • In the extreme case of regulatory capture, the incumbents are deregulated from onerous regulations, while new market entrants have hoops to jump through.

The two best current examples of regulatory capture are the broadband and banking industries. There is no question that the power of the broadband industry is concentrated among only a few firms. Comcast, Charter, AT&T, and Verizon together serve 75% of all broadband customers in the country.

The FCC is a textbook example of a captured regulator. The FCC under Ajit Pai went so far as to deregulate broadband and to wash the FCC’s hands of broadband as much as possible by theoretically passing the little remaining regulation to the FTC. It’s hard to imagine an FCC more under the sway of the broadband industry than the last one.

There was no chance for the Sohn nomination to succeed because the big ISPs and the big broadcasters began attacking her as soon as she was nominated. Everybody who knows Sohn says that she is two things – extremely knowledgeable about the industry and fair-minded. That sounds like exactly who ought to be a federal regulator.

Perhaps the most telling part of the story is that this was not political. Sohn was nominated by a Democratic president and was to be confirmed by a Democratic Senate. She was supported by a wide range of folks, including the ultra-conservative One America News (OAN). But the large ISPs and large broadcasters brought enough pressure that the nomination never even made it to the floor for a vote. Perhaps the ultimate measure of regulatory capture is when industries get to accept or reject their regulators.

RDOF in Trouble?

In June, three Senators – Roger Wicker, Cindy Hyde-Smith, and J.D. Vance – sent a letter to FCC Chairwoman Jessica Rosenworcel asking for relief for RDOF award winners. The Senators said that the RDOF subsidies are no longer adequate due to massive increases in the costs to build and operate the promised RDOF networks. The Senators asked that the FCC particularly consider relief for RDOF winners that have fewer than 250,000 broadband customers.

More recently, a coalition of RDOF winners sent a similar letter pleading for relief from the same issues. The RDOF winners cited much higher costs due both to the pandemic and to actions taken by the federal government with funding programs that were not in place at the time of the RDOF awards. The coalition offered various possible solutions the FCC might consider to help the winners meet their RDOF obligations:

  • Extra funding to RDOF winners that have “affirmatively requested such funding.”
  • A short amnesty window to let RDOF winners withdraw from RDOF if the FCC is not going to provide supplemental funding.
  • Earlier payments for the RDOF funds due from years 7-10.
  • An additional eleventh year of RDOF subsidy.
  • Relief from some or all of the letter of credit requirements.

As a reminder to readers, the RDOF program provided a 10-year subsidy to cover some of the costs of deploying broadband in unserved areas. The money was awarded in a reverse auction that ended in November 2020, where the lowest bidder for the subsidy in any given Census block won the subsidy. The FCC originally considered awarding as much as $16 billion in the auction. However, many ISPs bid the prices lower than expected, and only $9 billion was claimed at the close of the auction. Some ISPs withdrew after the auction, and the FCC disqualified some large bidders like Starlink and LTD Broadband. Ultimately, only $6 billion of subsidies are now in play.

Chairwoman Rosenworcel quickly responded to the three Senators and largely closed the door on the Senators’ requests. She pointed out that the FCC reserves the funding through the Universal Service Fund each quarter to pay the $6 billion subsidy and that there is no additional funding to increase RDOF payments. She also reminded the Senators that the rules and penalties for withdrawing from RDOF were clearly known by bidders before they bid in the auction and that the penalties are in place to ensure that RDOF winners fulfill their obligations.

The entire RDOF process has been badly flawed since the beginning. Some auction winners bid prices down a lot lower than expected. The areas that were available in many places are scattered and don’t create a reasonable footprint for building a broadband solution. There clearly has been an unprecedented amount of inflation since the awards were made. But to be fair, the RDOF awards were made after the pandemic was in full force, so winners could reasonably have anticipated that there would be economic consequences from a major pandemic. Even without the last year of high inflation, it would be hard not to expect some kind of economic turmoil during a 10-year subsidy plan.

I have no doubt that many RDOF winners are now looking at a broken financial model for fulfilling their promise. They are stuck with a terrible dilemma – build the promised networks and have a losing business or pay a substantial penalty to withdraw from RDOF.

It’s disturbing that both the Senators and some RDOF winners are asking for a soft landing for anybody that wants to change their mind and withdraw from RDOF. The RDOF footprints have already been off-limits for other federal grant programs that could have brought faster broadband to these areas. It’s fully expected that the BEAD grants will start being awarded next year, and it would be a true disaster if ISPs default on RDOF after those grants have been awarded. That could strand large numbers of folks with no broadband solution.

This is a dilemma for the FCC. No matter what the agency does, there will likely be additional negative outcomes if RDOF winners are unable to fulfill their pledge to build and operate the promised networks. I’ve always expected the program to have eventual troubles since many of the winning auction bids were lower than what seemed to be needed to create a sustainable business. But I never thought that we’d be seeing requests for a major rework of the program less than three years after the end of the auction.

Too Little Too Late

On July 25, Chairwoman Jessica Rosenworcel shared with the other FCC Commissioners a draft Notice of Inquiry that would begin the process of raising the federal definition of broadband from 25/3 Mbps to 100/20 Mbps. In order for that to become the new definition, the FCC must work through the NOI process and eventually vote to adopt the higher speed definition.

This raises the question of the purpose of having a definition of broadband. That requirement comes from Section 706 of the Telecommunications Act of 1996, which requires the FCC to make sure that broadband is deployed on a reasonable and timely basis to everybody in the country. The FCC interpreted that requirement to mean that it couldn’t measure broadband deployment unless it created a definition of broadband. The FCC uses its definition of broadband to count the number of homes that do or don’t have broadband.

The FCC is required by the Act to report the status of broadband deployment to Congress every year. During the last week of Ajit Pai’s time as FCC Chairman, he issued both the 2020 and 2021 broadband reports to Congress. Those reports painted a rosy picture of U.S. broadband, partially because progress was measured using the 25/3 Mbps definition of broadband and partially because the FCC broadband maps were rife with overstated speeds. The FCC has not issued a report since then, and I can only suppose there aren’t the votes in an evenly split FCC to approve a new report.

To give credit, Chairwoman Rosenworcel tried to get the FCC to increase the definition of broadband to 100/20 Mbps four years ago, but the idea went nowhere in the Ajit Pai FCC. At that time, 100/20 Mbps seemed like a reasonable increase in the definition of broadband. Most cable companies were delivering 100 Mbps download as the basic product, and a definition set at 100/20 Mbps would have amounted to a federal statement that the speed most folks buy in cities is a reasonable definition of broadband for everybody else.

Chairwoman Rosenworcel is now ready to try again to raise the definition. Perhaps the possible addition of a fifth Commissioner means this has a chance of passing.

But this is now too little too late. 100/20 Mbps is no longer a reasonable definition of broadband. In the four years since Chairwoman Rosenworcel introduced that idea, the big cable companies have almost universally increased the starting speed for broadband to 300 Mbps download. According to OpenVault, almost 90% of all broadband customers now subscribe to broadband packages of 100 Mbps or faster. 75% of all broadband customers subscribe to speeds of at least 200 Mbps. 38% of households now subscribe to speeds of 500 Mbps or faster.

I have to think that the definition of broadband needs to reflect the broadband that most people in the country are really using. One of the secondary uses of the FCC broadband definition is that it establishes a goal for bringing rural areas into parity with urban broadband. If 75% of all broadband subscribers in the country have already moved to speeds of at least 200 Mbps, then 100 Mbps feels like a speed that is already in the rearview mirror and rapidly receding.

When the 25/3 definition of broadband was adopted in 2015, I thought it was a reasonable definition at the time. Interestingly, when I first read that FCC order, I happened to be sitting in a restaurant that was lucky enough to be able to buy gigabit speeds and was sharing it with customers. I knew from that experience that the 25/3 Mbps definition was going to become quickly obsolete because it was obvious that we were on the verge of technology upgrades that would bring much faster speeds.

I think the FCC should issue two broadband definitions – one for measuring broadband adoption today and a second definition as a target speed for a decade from now. That future broadband target speed should be the minimum speed required for projects funded by federal grants. It seems incredibly shortsighted to be funding any technology that only meets today’s speed definition instead of the speeds that will be needed when the new network is fully subscribed. Otherwise, we are building networks that are too slow before construction is even finished.

Another idea for the FCC to consider could take politics out of the speed definition. Let’s index the definition of broadband using something like the OpenVault speed statistics, or perhaps the composite statistics of several firms that gather such data. Indexing would mean automatic periodic increases in the definition of broadband. If we stick to the current way of defining broadband, we might see the federal definition increase to 100/20 Mbps at the end of this year and then not see another increase for eight more years.
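To make the indexing idea concrete, here is a toy sketch of how such a rule might work. Everything in it – the numbers, the 50% threshold, and the function itself – is purely illustrative and is not an actual FCC or OpenVault methodology; the idea is simply to peg the definition to a speed tier that a target share of households already subscribes to.

```python
# Hypothetical sketch of indexing the broadband definition to subscription
# statistics. The shares below are loosely modeled on the OpenVault figures
# cited above and are illustrative only.

# Share of households subscribing at or above each download tier (Mbps).
share_at_or_above = {
    100: 0.90,
    200: 0.75,
    500: 0.38,
    1000: 0.15,
}

def indexed_definition(shares, target_share=0.5):
    """Return the fastest tier that at least `target_share` of households
    already subscribe to -- i.e., the median-or-better speed tier."""
    qualifying = [tier for tier, share in shares.items() if share >= target_share]
    # Fall back to the slowest tracked tier if nothing qualifies.
    return max(qualifying) if qualifying else min(shares)

print(indexed_definition(share_at_or_above))  # 200 with these sample numbers
```

As subscription data shifts over time, re-running the same rule against fresh statistics would raise the definition automatically, with no new rulemaking needed for each increase.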
