Categories
Technology, The Industry

Self-driving Cars and Broadband Networks

There are two different visions of the future of self-driving cars. Both visions agree that a smart car needs to process a massive amount of information in order to make real-time decisions.

One vision is that smart cars will be really smart and will include a lot of on-board edge computing power and AI that will enable a car to make local decisions as the car navigates through traffic. Cars will likely be able to communicate with neighboring cars to coordinate vehicle spacing and stopping during emergencies. This vision places only minimal demands on external broadband, except perhaps to periodically update maps and to communicate with things like smart traffic lights.

The other vision of the future is that smart cars will beam massive amounts of data to and from the cloud that includes LiDAR imagery and GPS location information. Big data centers will then coordinate between vehicles. This second vision would require a massively robust broadband network everywhere.

I am surprised by the number of people who foresee the second vision, with massive amounts of data transferred to and from the cloud. Here are just some of the reasons why this scenario is hard to imagine coming to fruition:

  • Volume of Data. The amount of data that would need to be transferred to the cloud is massive. It’s not hard to foresee a car needing to transmit terabytes of data during a trip if all of the decisions are made in a data center (see the back-of-envelope sketch after this list). Most prognosticators predict 5G as the technology that would support this network. One thing that seems to be ignored in these predictions is that almost no part of our current broadband infrastructure is able to handle this kind of data flow. We wouldn’t only need a massive 5G deployment, but almost every part of the existing fiber backbone network, down to the local level, would also need to be upgraded. It’s easy to fall into the trap of assuming that fiber can handle massive amounts of data, but the current electronics are not sized for this volume of data.
  • Latency. Self-driving cars need to make instantaneous decisions, and every round trip of data to and from the cloud adds delay. It’s hard to imagine any external network that can be as fast as a smart car making its own local driving decisions.
  • Migration Path. Even if the cloud is the ultimate network design, how do you get from here to there? We already have smart cars and they make decisions on-board. As that technology improves it doesn’t make sense that we would still pursue a cloud-based solution unless that solution is superior enough to justify the cost of migrating to the cloud.
  • Who will Build? Who is going to build the needed infrastructure? This means a 5G network built along every road. It means fiber built everywhere to support that network, including a massive beefing up of bandwidth on all existing fiber networks. Even the biggest ISPs don’t have both the financial wherewithal and the desire to tackle this kind of investment.
  • Who will Pay? And how is this going to get paid for? It’s easy to understand why cellular companies tout this vision as the future since they would be the obvious beneficiary of the revenues from such a network. But is the average family going to be willing to tack on an expensive broadband subscription for every car in the family? And does this mean that those who can’t afford a smart-car broadband connection won’t be able to drive? That’s a whole new definition of a digital divide.
  • Outages. We are never going to have a network that is redundant down to the street level. So what happens to traffic during inevitable fiber cuts or electronics failures?
  • Security. Sending live traffic data to and from the cloud seems to create the greatest opportunity for hackers to create chaos. The difficulty of hacking a self-contained smart car makes on-board computing sound far safer.
  • Who Runs the Smart-car Function? What companies actually manage this monstrous network? I’m not very enthused about the idea of having car companies operate the IT functions in a smart-car network, but this sounds like such a lucrative function that I can’t foresee them handing it off to somebody else. There are also likely to be many network players involved, and getting them all to perfectly coordinate sounds like a massively complex task.
  • What About Rural America? Already today we can’t figure out how to finance broadband in rural America. Getting broadband along every rural road is going to be every bit as expensive as getting it to rural homes. Does this imply a smart-car network that only works in urban areas?
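To put rough numbers on the data volume and latency concerns above: widely cited industry estimates put the raw sensor output of an autonomous car in the gigabits-per-second range. The specific figures in this sketch are illustrative assumptions, not measurements from any particular vehicle or network.

```python
# Back-of-envelope math for the cloud-centric vision. The sensor rates and
# the round-trip latency below are illustrative assumptions only.

lidar_gbps = 0.6      # assumed raw LiDAR output
cameras_gbps = 3.0    # assumed combined raw camera output
other_gbps = 0.4      # radar, GPS, telemetry (assumed)
total_gbps = lidar_gbps + cameras_gbps + other_gbps

trip_hours = 1
terabytes = total_gbps * 1e9 / 8 * trip_hours * 3600 / 1e12
print(f"Data sent to the cloud in a {trip_hours}-hour trip: ~{terabytes:.1f} TB")

# Latency: how far does a car travel while waiting on a cloud round trip?
speed_mph = 70
round_trip_ms = 100   # assumed network plus data-center processing time
meters_per_second = speed_mph * 1609.34 / 3600
print(f"At {speed_mph} mph, a {round_trip_ms} ms round trip = "
      f"{meters_per_second * round_trip_ms / 1000:.1f} m traveled blind")
```

Even with aggressive compression, numbers of this magnitude dwarf what today’s last-mile and backbone electronics are engineered to carry per subscriber.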

I fully understand why some in the industry are pushing this vision. It makes a lot of money for the wireless carriers and the vendors who support them. But the above list of concerns makes it hard for me to picture the cloud vision. Doing this with on-board computers would cost only a fraction of the big-network solution, and my gut says that dollars will drive the decision.

It’s also worth noting that we already have a similar example of this same kind of decision. The whole smart-city effort is now migrating to smart edge devices rather than exchanging massive data with the cloud. As an example, the latest technology for smart traffic control places smart processors at each intersection rather than sending full-time video to the cloud for processing. The electronics at a smart intersection will only communicate with the hub when it has something to report, like an accident or a car that has run a red light. That requires far less data, meaning far less demand for broadband than sending everything to the cloud. It’s hard to think that smart cars – which will be the biggest source of raw data yet imagined – would not follow this same trend toward smart edge devices.
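The smart-intersection pattern described above, where everything is processed locally and only events are sent upstream, is easy to sketch. The event names and the frame-analysis step below are hypothetical placeholders, meant only to show how little data needs to leave an edge device compared to streaming full-time video.

```python
# A minimal sketch of edge-device reporting: video is analyzed locally and
# only noteworthy events are sent to the hub. Event names and the frame
# analysis are hypothetical placeholders, not any vendor's actual API.

import json
import time

REPORTABLE_EVENTS = {"accident", "red_light_violation", "sensor_failure"}

def analyze_frame(frame):
    """Placeholder for on-device AI inference; returns an event name or None."""
    return None

def send_to_hub(message):
    """Placeholder for the uplink; in practice a small MQTT or HTTPS message."""
    print("uplink:", json.dumps(message))

def run_intersection(camera_frames, intersection_id):
    for frame in camera_frames:              # full-rate video never leaves the device
        event = analyze_frame(frame)
        if event in REPORTABLE_EVENTS:       # only exceptions are reported
            send_to_hub({
                "intersection": intersection_id,
                "event": event,
                "timestamp": time.time(),
            })
```

A few hundred bytes per reported event, rather than megabits per second of video, is the difference between needing a modest connection and needing a fiber-class connection at every intersection.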

Categories
Regulation - What is it Good For?

The FCC Process

I recently wrote a blog that discussed the possibility that the FCC would change the definition of the speed that constitutes broadband. I got a number of inquiries from readers asking how this could happen outside of the scope of the formal rulemaking process. Specifically, I had reported on the rumor that the FCC was likely to make this decision by February 3, which is not one of the dates when the FCC formally holds open meetings and votes on changes to FCC rules. Today I’m going to try to shed some light on how the FCC makes decisions, which will hopefully clarify the issue.

The FCC has several paths to make decisions. The one that the industry is most familiar with is the rulemaking process. The basic process for rulemaking for all administrative government agencies was created with the Administrative Procedure Act in 1946. This Act defined a process of changing federal rules that mandates getting feedback from the public.

The FCC might consider changing rules for several reasons. Some rule changes are mandated by Congress, with one of the more recent such FCC actions being in response to changes in consumer privacy rules. The FCC can also start a rulemaking in response to a petition asking for a clarification of the rules. In the past such petitions often came from the large carriers or else from state regulators. Finally, the FCC can simply identify an industry problem on their own and begin the rulemaking process to seek possible solutions.

The FCC then has several tools available to facilitate the rulemaking process:

  • One tool available to the FCC is the NOI (Notice of Inquiry). This can be done when the FCC is trying to understand an issue and the possible solutions.
  • But the NOI process is not mandatory and the agency can move directly to an NPRM (Notice of Proposed Rulemaking). This is a formal document that proposes specific rule changes. There is a defined minimum timeline for this process that includes time for the public to comment and for a second round of reply comments, if needed. During this process the FCC might allow ex parte meetings with interested parties, hold public meetings to solicit feedback, or engage with industry experts to get feedback on their proposals.
  • Finally, some dockets proceed to an FNPRM (Further Notice of Proposed Rulemaking). This tool is used when the comments on an NPRM cause the FCC to consider a different solution than what they originally proposed. This also then goes through the public comment process.

But not everything done at the FCC goes through the rulemaking process. For example, one of the mandated functions of the FCC is acting to adjudicate industry disputes. Industry parties that disagree on the interpretation of existing FCC rules can ask the agency to clarify – and in that case the agency takes on a nearly judicial role in looking at the facts of a specific case.

Finally, the FCC has a major administrative function. The agency has to make numerous policy decisions in order to meet its mandates from Congress. A simple way to think about this is that the rulemaking process creates formal rule changes. But then the agency must develop the processes and policies to make the new rules function. The FCC spends a lot of time on these administrative functions. For example, holding auctions for spectrum is an administrative function. Deciding how to fund and administer the Universal Service Fund is an administrative function. Approving new telecom and wireless devices is an administrative function.

The decision in the past to define the speed of broadband was an administrative decision. The agency has wide discretion to arbitrarily define administrative rules, but they often ask for public feedback.

The speed of broadband has been discussed at the FCC in several different contexts. First, the FCC has administered several grant programs and they decided that it was in the public good to set minimum broadband speeds for various grant programs. For example, the CAF II program requires the large telcos to deploy technology that delivers at least 10/1 Mbps. But there have been other speed requirements for other grant programs and the ‘experimental grants’ of a few years ago looked to fund technologies that delivered at least 100 Mbps download.

But the primary reason that the FCC decided they needed to define broadband using speeds was a mandate from Congress that the FCC report once per year on the state of broadband in the country. Congress wants to know how many people have, or do not have, broadband. Past FCCs decided that a definition of broadband was needed in order to create a meaningful report to Congress. They initially set the definition of broadband at 10/1 Mbps and later raised it to 25/3 Mbps. And they purposefully have excluded cellular broadband as not being broadband.

In anticipation of each annual broadband report the FCC sometimes asks questions of the public. They did so last year in an NOI where they asked if the 25/3 Mbps definition of broadband is too high. And they asked if cellular broadband ought to now be counted as broadband. This NOI was issued only for fact-finding and to solicit public opinion on the topic.

But the speed of broadband is an administrative decision of the agency, meaning that there are no formal rules associated with setting or changing the definition of broadband. The agency is free to make changes at any time to these kinds of administrative definitions. In the past the definition of broadband speeds was included with the annual broadband reports issued to Congress, and the anticipation is that the agency will use this same mechanism this year. There is no formal docket open on the topic and thus no formal and public vote is required. The FCC might or might not change the definition of broadband, but as my blog conjectured, the consensus of industry experts is that they are likely to do so. But we’ll have to wait for the annual broadband report to see if they actually lower the definition of broadband speeds or add cellular data to the definition.

Categories
Technology

Facebook’s Gigabit WiFi Experiment

Facebook and the city of San Jose, California have been trying for several years to launch a gigabit wireless WiFi network in the downtown area of the city. Branded as Terragraph, the Facebook technology is a deployment of 60 GHz WiFi hotspots that promises data speeds as fast as a gigabit. The delays in the project are a good example of the challenges of launching a new technology and a warning to anybody working on the cutting edge.

The network was first slated to launch by the end of 2016, but is now over a year late. Neither the City nor Facebook will commit to when the network will be launched, and they are also no longer making any guarantees of the speeds that will be achieved.

This delayed launch highlights many of the problems faced by a first-generation technology. Facebook first tested an early version of the technology on their Menlo Park campus, but has been having problems making it work in a real-life deployment. The deployment on light and traffic poles has gone much slower than anticipated, and Facebook is having to spend time after each deployment to make sure that traffic lights still work properly.

There are also business factors affecting the launch. Facebook has had turnover on the Terragraph team. The company has also gotten into a dispute over payments with an installation vendor. It’s not unusual to have business-related delays on a first-generation technology launch since the development team is generally tiny and subject to disruption and the distribution and vendor chains are usually not solidified. There is also some disagreement between the City and Facebook on who pays for the core electronics supporting the network.

Facebook had touted that the network would be significantly less expensive than deploying fiber. But the 60 GHz spectrum gets absorbed by oxygen and water vapor, so Facebook is having to deploy transmitters no more than 820 feet apart – a dense network deployment. Without fiber feeding each transmitter the backhaul is being done using wireless spectrum, which is likely to be contributing to the complication of the deployment as well as the lower expected data speeds.
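A rough grid calculation shows why that spacing implies a dense deployment. The square-grid assumption and the approximate San Jose land area are illustrative, not engineering figures.

```python
# Rough node density implied by ~820-foot transmitter spacing, assuming a
# simple square grid. Real deployments follow streets and line-of-sight, so
# treat this as an order-of-magnitude estimate.

spacing_ft = 820
feet_per_mile = 5280

nodes_per_mile = feet_per_mile / spacing_ft       # ~6.4 nodes along a mile
nodes_per_sq_mile = nodes_per_mile ** 2           # ~41 nodes per square mile
print(f"~{nodes_per_sq_mile:.0f} nodes per square mile")

# San Jose covers roughly 180 square miles, so a citywide build at this
# density would need on the order of several thousand nodes.
print(f"Citywide estimate: ~{nodes_per_sq_mile * 180:,.0f} nodes")
```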

For now, this deployment is in the downtown area and involves 250 pole-mounted nodes to serve a heavy-traffic business district which also sees numerous tourists. The City hopes to eventually find a way to deploy the technology citywide since 12% of the households in the City don’t currently have broadband access – mostly attributed to affordability. The City was hoping to get Google Fiber, but Google canceled plans last year to build in the City.

Facebook says they are still hopeful that they can make the technology work as planned, but that there is still more testing and research needed. At this point there is no specific planned launch date.

This experiment reminds me of other first-generation technology trials in the past. I recall several cities, including Manassas, Virginia, that deployed broadband over powerline. The technology never delivered speeds much greater than a few Mbps and was never commercially viable. I had several clients that nearly went bankrupt when trying to deploy point-to-point broadband using the LMDS spectrum. And I remember a number of failed trials to deploy citywide municipal WiFi, such as a disastrous trial in Philadelphia, and trials that fizzled in places like Annapolis, Maryland.

I’ve always cautioned my smaller clients to never be guinea pigs for a first-generation technology deployment. I can’t recall a time when a first-generation deployment did not come with scads of problems. I’ve seen clients suffer through first-generation deployments of all of the technologies that are now common – PON fiber, voice softswitches, IPTV, you name it. Vendors are always in a hurry to get a new technology to market and the first few ISPs that deploy a new technology have to suffer through all of the problems that crop up between a laboratory and a real-life deployment. The real victims of a first-generation deployment are often the customers using the network.

The San Jose trial won’t have all of the issues experienced by commercial ISPs since the service will be free to the public. But the City is not immune from the public spurning the technology if it doesn’t work as promised.

The problems experienced by this launch also provide a cautionary tale for the many 5G technology launches promised in 2018 and 2019. Every new launch is going to experience significant problems, which is to be expected when a wireless technology bumps up against the myriad of issues experienced in a real-life deployment. If we have learned anything from the past, we can expect a few of the new launches to fizzle and die while a few of the new technologies and vendors will plow through the problems until the technology works as promised. But we’ve also learned that it’s not going to go smoothly, and customers connected to an early 5G network can expect problems.

Categories
Current News, Regulation - What is it Good For?

The White House Broadband Plan

The White House used a forum at the American Farm Bureau Federation to announce new policies affecting rural broadband. Unfortunately, similar to the policies of the last administration, the announced plans seem to offer no useful remedies for the lack of rural broadband infrastructure.

The President’s new recommendations were captured in two executive orders:

  • The biggest thrust of the new policies is to make it easier to place cell towers on federal lands. The President said, “Those towers are gonna go up and you’re gonna have great broadband.” But finding places to site rural cell towers has never been a real problem. There is not much cost difference between putting a tower for free on federal land versus finding a site on private land in rural America. The biggest issue with placing new rural cell towers is getting broadband backhaul to the tower. It’s hard to think that there will be more than a handful of instances where this new policy will make a difference.
  • The second executive order was aimed at streamlining and expediting requests for placement of broadband facilities on federal lands. Except for finding better routes for long-haul fiber this new policy also doesn’t seem to have much real-life market value, particularly for the needed last mile connections.

These new policies add to a few policies issued in October by the administration’s Task Force on Agriculture and Rural Prosperity. That report made a few recommendations that included having multiple government agencies concentrate on expanding e-connectivity (a new phrase used to describe higher bandwidth), attracting private capital investment through “free-market policies, laws and structures”, and reducing barriers to rural infrastructure deployment (which the new executive orders apparently address).

To be clear, I am not particularly criticizing this administration for these announcements because they are similar to the proposals of the past administrations. President Obama had announced rural broadband policies that included:

  • A dig once policy for any construction done on federal highways. The goal was to get conduit into the ground over time along Interstate highways. But the directive came with no additional funding and to the best of my knowledge has never been implemented;
  • The last administration also announced its intention to make it easier to place broadband infrastructure on federal lands in nearly the same language as the current executive orders. But one of the biggest characteristics of federal land is that it’s extremely rural and for the most part is not close to a lot of rural homes. The big issue with building rural broadband infrastructure is the cost of construction, and making it slightly easier to site facilities barely makes a dent in the total cost of building rural infrastructure.

What was not put on the table by this or the last administration is any meaningful funding for rural broadband – the one thing the federal government could do that might make a real difference. There was talk at the beginning of this administration of creating some sort of grant program aimed at paying for part of the cost of rural broadband. From the beginning all of the administration’s infrastructure plans involved using seed money from federal grants to attract significant commercial investment. The President’s speech at the AFBF mentioned hopes for the administration to still find infrastructure funding for “roadways, railways and waterways”, but there was no longer any mention of broadband.

Presidential policies aimed at dig once rules or easier siting for rural cell towers aren’t going to have any practical impact on new rural broadband deployment. I’ve never really understood politics, and I guess the temptation to look like you are doing something to solve an issue is hard to resist. But today’s announcements bring nothing new to the table. And in fact, by making it sound like the government is doing something about rural broadband, they probably do more harm than good by holding out hope to those with no broadband while offering no solutions.

Categories
The Industry

The Crowded MVPD Market

The virtual MVPD (Multichannel Video Programming Distributor) market is already full of providers and is going to become even more crowded this year. Already today there is a marketing war developing between DirecTV Now, Playstation Vue, Sling TV, Hulu Live, YouTube TV, CBS All Access, fuboTV and Layer3 TV. There are also now a lot of ad-supported networks offering free movies and programming such as Crackle and TubiTV. All of these services tout themselves as an alternative to traditional cable TV.

This year will see some new competitors in the market. ESPN is getting ready to launch its sports-oriented MVPD offering. The network has been steadily losing subscribers from cord cutting and cord shaving. While the company is gaining some customers from other MVPD platforms, they believe they have a strong enough brand name to go it alone.

The ESPN offering is likely to eventually be augmented by the announcement that Disney, the ESPN parent company, is buying 21st Century Fox programming assets, including 22 regional sports networks. But this purchase won’t be implemented in time to influence the initial ESPN launch.

Another big player entering the game this year is Verizon which is going to launch a service to compete with the offerings of competitors like DirecTV Now and Sling TV. This product launch has been rumored since 2015 but the company now seems poised to finally launch. Speculation is the company will use the platform much like AT&T uses DirecTV Now – as an alternative to customers who want to cut the cord as well as a way to add new customers outside the traditional footprint.

There was also an announcement last quarter by T-Mobile CEO John Legere that the company will be launching an MVPD product in early 2018. While aimed at video customers, the product will also be marketed to cord cutters. The T-Mobile announcement has puzzled many industry analysts who are wondering if there is any room for a new provider in the now-crowded MVPD market. The MVPD market as a whole added almost a million customers in the third quarter of 2017. But the majority of those new customers went to a few of the largest providers, and the big question now is if this market is already oversaturated.

On top of the proliferation of MVPD providers there are the other big players in the online industry to consider. Netflix has announced it is spending an astronomical $8 billion on new programming during the next year. While Amazon doesn’t announce their specific plans they are also spending a few billion dollars per year. Netflix alone now has more customers than the entire traditional US cable industry.

I would imagine that we haven’t seen the end of new entrants. Now that the programmers have accepted the idea of streaming their content online, anybody with deep enough pockets to work through the launch can become an MVPD. There have already been a few early failures in the field and we’ve seen Seeso and Fullscreen bow out of the market. The big question now is if all of the players in the crowded field can survive the competition. Everything I’ve read suggests that margins are tight for this sector as the providers hold down prices to build market share.

I have already tried a number of the services including Sling TV, fuboTV, DirecTV Now and Playstation Vue. There honestly is not that much noticeable difference between the platforms. None of them have yet developed an easy-to-use channel guide, and they feel like the way cable felt a decade ago. But each keeps adding features that make it easier to use over time. While each has a slightly different channel line-up, there are many common networks carried on most of the platforms. I’m likely to try the other platforms during the coming year and it will be interesting to see if one of them finds a way to distinguish itself from the pack.

This proliferation of online options spells increased pressure for traditional cable providers. With the normal January price increases now hitting there will be millions of homes considering the shift to online.


Categories
Regulation - What is it Good For?

FCC BDAC on Competitive Access

Today I discuss the draft proposal from the FCC’s Broadband Deployment Advisory Committee (BDAC) sub-committee that is examining competitive access. This draft report to the FCC is not yet final, but it details the issues and discussions of the group and is likely close to the finished work product.

This sub-committee is tackling some of the hardest issues in the industry. The pole attachment process has been a costly roadblock to the implementation of new networks ever since the Telecommunications Act of 1996 gave competitors access to poles, ducts and conduits. The report considers a number of different issues:

The FCC Complaint Timeline. The FCC currently has no rules that require the agency to respond to a complaint from a carrier having problems connecting to poles. This deters attachers from making complaints since there is no guarantee that the FCC will ever resolve a given problem. The subcommittee recommends that the FCC adopt a 180-day ‘shot-clock’ to require rulings on attachment issues. The sub-committee is also recommending that the FCC react within 180 days to complaints about attachment rates and fees. The group wants to stop pole owners from capturing some capital costs twice. They claim some pole owners capitalize the cost for pole make-ready, which is paid by new attachers, and then build these costs again into the base pole attachment fees.

One Touch Make Ready. The sub-committee looked in depth at make-ready costs – the costs of a new attacher to get onto a pole. They are making numerous recommendations:

  • They want a simplified one-touch pole attachment process that streamlines the application, permitting and make-ready process. They would like to see all attachers agree to use only one contractor to speed up the make-ready process. They are also asking that the various parties agree to one contractor that is allowed to work in the power space, which is needed for some wireless attachments. They want make-ready rules to be uniform across all jurisdictions.
  • They want to require that the pole owner and all existing attachers be present during the feasibility survey, rather than having to coordinate visits with each existing attacher.
  • They want to speed up the timelines for reviewing and amending attachment requests.
  • They want to strengthen the FCC’s rules for ‘self-help’ which allow work to proceed when existing attachers don’t respond to attachment requests.

Fees and Rates. The sub-committee does not want the FCC to create a new pole attachment rate for a broadband connection – something they fear might be considered due to removing Title II regulation of broadband. They want ‘broadband’ attachments to be the same rate as telecom or cable attachments.

Recommendations for Other Infrastructure. The sub-committee would like to see an infrastructure database that identifies the owners of common telecom infrastructure like poles, ducts, trenches, street lights, traffic lights, towers, water towers, bridges, etc. This should include public buildings that might be useful for placement of 5G infrastructure. Knowing that such a database will be expensive to create, they have suggested ways to fund the effort.

Jurisdictional Issues. They want to see processes that streamline the jurisdictional differences for projects that cross multiple local jurisdictions.

Use of Subsidized Infrastructure. Currently infrastructure built to serve schools or rural health care facilities is restricted to those specific uses if subsidized by the E-rate or Healthcare Connect Fund. The sub-committee wants such facilities to also be usable for other commercial purposes.

It’s hard to guess how much traction some of these recommendations might get at the FCC. Some of the jurisdictional issues, as well as the creation of an attachment database, probably require Congressional action to solve. And some of the biggest ISPs like AT&T are both pole owners and fiber builders, and it’s hard to know whether they will support changes that help them but which also make it easier for their competitors. It’s also worth noting that the FCC is under no obligation to respond to the BDAC process. However, this particular sub-committee has taken a logical approach to some of the biggest problems with attachments, and these proposals deserve a hearing.

Categories
The Industry

Big ISPs Raise Broadband Prices

As the new year dawns we are starting to see big ISPs raise broadband prices. One of the more interesting increases is by Comcast. They increased two rates – the price of standalone broadband and the price of renting a cable modem.

The company now charges $75 per month for a standalone broadband connection that meets the FCC’s definition of broadband of being at least 25/3 Mbps. In many of their markets the minimum speed offered to new customers is faster than this, making $75 the entry price for standalone broadband.

For now it doesn’t look like Comcast increased the cost of bundled broadband, although they just announced that all bundled packages are increasing by $5 per month. But that increase can largely be attributed to increased programming costs. The price for standalone broadband was $65 a year ago, was raised by $5 during 2017 and just went up by $5 again.

The standalone price increase is aimed squarely at cord cutters. This price punishes customers who don’t want to pay for the other services in the various Comcast bundles. This is their way to still extract a lot of margin from somebody who elects to watch video online. I wrote a blog a few months ago that cited a Wall Street analyst that suggested that the company ought to charge $90 for standalone broadband, and it looks like the company is heeding that advice.

To put that price into perspective, Google Fiber and a few others are charging $70 for a standalone symmetrical gigabit connection – 20 times the speed for a lower price. But to really make a fair comparison you also have to consider the Comcast cable modem. They just raised that rate from $10 to $11 per month. The company makes it a challenge for customers who don’t want to use the Comcast modem, and so the real standalone price for the minimal Comcast broadband product is $86 per month. It’s not hard to understand why households are beginning to find broadband unaffordable.

The $11 fee for a cable modem is outrageous. Comcast has these devices manufactured directly and I doubt they are spending more than $100 per device, and probably less. The $1 price increase adds roughly $300 million to Comcast’s bottom line. In total, the company is billing roughly $3.3 billion per year for all customers for an inventory of modems that probably cost them less than $2.5 billion. And since people tend to keep the modems for a number of years, this rate is mostly margin. Even for a new customer Comcast recovers the cost of the modem within 9 months.
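The arithmetic behind those figures is straightforward to reproduce. The modem count below is not a number Comcast publishes; it is inferred from the revenue figures cited above.

```python
# Reproducing the modem math above. The number of rented modems is inferred
# from the ~$3.3B annual billing figure, not a published Comcast statistic.

monthly_fee = 11            # new rental rate, dollars per month
fee_increase = 1            # the recent $1 increase
assumed_device_cost = 100   # assumed cost to Comcast per modem

implied_renters = 3.3e9 / (monthly_fee * 12)             # ~25 million modems
added_revenue = fee_increase * 12 * implied_renters       # ~$300 million per year
payback_months = assumed_device_cost / monthly_fee        # ~9 months

print(f"Implied modems in service: ~{implied_renters/1e6:.0f} million")
print(f"Added by the $1 increase: ~${added_revenue/1e6:.0f} million per year")
print(f"Months to recover a ${assumed_device_cost} modem: ~{payback_months:.0f}")
```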

Frontier also has introduced a troubling new price increase for broadband. Rather than increase the advertised price of the product they are adding a $1.99 per month ‘Internet Infrastructure Surcharge.’ This is strictly an increase in broadband rates, and the company is clearly hoping that most people don’t notice or don’t understand this new charge on their bill. For the last few years we have seen cable companies sneak in rates that look like taxes or external fees but which are just a piece of the cable TV bill. It’s disturbing to see this happening with broadband and I suspect other ISPs will begin copying this concept over the next few years.

Cox has also increased data prices, and unlike the above two companies, which are trying to mask their broadband price increases, Cox raised all packages that include broadband by $2 to $4 per month.

Broadband prices have never been regulated. There was a minimal threat of price regulation under Title II authority at the FCC, but that’s now gone. I’ve seen a few articles blaming these latest price increases on the end of Title II regulation, but there has never been anything stopping an ISP from raising rates other than market forces. In fact, the FCC has never threatened to regulate broadband rates.

There are two real drivers of these and future broadband price increases. First, broadband is no longer growing explosively since most homes now have a broadband connection. And the publicly traded ISPs are feeling earnings pressure while the loss of cable TV and telephone customers leaves broadband as the only place to increase bottom line margins.

The second major factor is the absence of real broadband competition. In markets where a real competitor like Google shows up the big ISPs come close to matching the lower prices of the competitor. But as houses need faster broadband, the residual competitive pressure from DSL is waning, meaning that in most cities the cable companies are becoming a virtual monopoly. Big ISPs like Comcast will lower rates where they have a good competitor, but they are more than making up for it in markets where they have the only fast broadband.

One consequence of the kind of prices that Comcast is now charging is that, over time, they will induce more competitors to enter the market. But the only real threat on the horizon for the big cable companies is point-to-multipoint 5G. It will be interesting to see if that technology can really work as touted. If 5G is successful it will be interesting to see the pricing philosophy of the ISPs offering the service. They could price low like Google Fiber or else ride the coattails of the cable companies with higher prices.

Categories
Regulation - What is it Good For?

Challenging the Net Neutrality Order

It looks like there are going to be a number of challenges to the FCC’s recent repeal of Title II regulation and net neutrality. Appealing FCC decisions is normal for controversial rulings and the big telcos and cable companies have routinely challenged almost every FCC decision they haven’t liked.

The FCC voted to repeal Title II regulation on December 14th, but just released the order on Friday. As expected, there were some wording changes made that the FCC hopes will help during the expected legal challenges. The clock for any challenges will start when the order is published in the Federal Register. The FCC order goes into effect 60 days later and any court challenges must be filed within that two-month window.

When FCC rules are challenged, it’s not unusual for a court to put a stay on some parts of a new ruling, or even the entire ruling, until the legal issues are sorted out. This happened a few years back when Verizon challenged the FCC’s first net neutrality order and the courts stayed all of the important parts of that ruling before eventually ruling that the FCC didn’t have the authority to make the rules as they did.

It appears that challenges are going to come from a number of different directions. First, there are states that have said they will challenge on procedural issues. This is a tactic often taken by the big ISPs, and generally if the courts agree that the FCC didn’t follow the right procedures in this docket they will then rule that the agency has to start the whole process over again. That alone would not change the outcome of the proceeding, but it could add another year until the FCC’s order goes into effect. I wonder if this kind of delay is meaningful because it’s likely that this FCC won’t enforce any net neutrality ‘violations’ during a reboot of the rules process.

The Attorney General of New York has an interesting appeal tactic. He is claiming that the FCC ignored the fact that there were millions of fake comments made in the docket – some for and others against the proposed rules. New York is suing the FCC over the issue and expects some other states to join in the lawsuit. This would be a unique procedural challenge and would be another way to have to reset and start the whole process over again.

Legislators in California, New York and Washington are planning to tackle the issue in a different way. Legislators are proposing to create a set of state net neutrality laws that basically mimic what was just repealed by the FCC. These states would not be directly challenging the FCC order and it would require some third party like a big ISP to challenge the state laws through the court system. Such a process might take a long time since it might have to go through several layers of courts, and might even end up at the US Supreme Court. States’ rights have been a common way to challenge FCC rulings and there have been numerous fights between states and the FCC any time that Congress has created ambiguity in telecom laws.

The hope of these state legislators is that the state rules will be allowed to stand. They know that if ISPs and other tech companies have to follow net neutrality laws in large states like California that they are more likely to follow them in the whole country. A similar State / Federal battle is also underway on a different issue and twenty states are considering enactment of state privacy laws to replace ones preempted by Congress.

Another challenge to the FCC’s decision will come from democrats in Congress who are trying to use the Congressional Review Act (CRA) rules to challenge the FCC’s ruling. This is a set of rules that allow Congress to reverse rulings from administrative agencies like the FCC with a simple majority and has been used effectively recently by republicans in a number of ways. With a 51-49 Republican majority it would only take a few republican defections to maintain at least some aspects of net neutrality. The make-up of the Congress might also change with the elections later this year – meaning that Congress might change the rules in the middle of all of the various appeals.

One thing is for certain – this FCC ruling is not going to be easily implemented and I’m guessing that during the next sixty days we will see a number of creative challenges used to appeal the FCC’s ruling. It could easily be a few years before these issues are resolved through the courts.

Categories
Regulation - What is it Good For?

The New European Privacy Standards

It’s worth keeping an eye on the new European privacy standards that go into effect in May. Titled the General Data Protection Regulation (GDPR), the new rules provide significant privacy protection for European Union citizens. The new rules are required for all companies doing business in the EU, so that means they apply to the majority of web companies operating in the US. The GDPR rules also apply to brick and mortar companies that collect customer data like banks and doctors. The privacy rules apply to companies that collect data directly from customers (data controllers) as well as any secondary companies that process that data (data processors). Interestingly, under the new rules a data controller is responsible for knowing what data processors do with the data provided to them.

The major basis for the new rules is that consumers own and have control of their own data, and companies can only use that data if there is at least one lawful basis for doing so. This includes:

  • A consumer gives specific permission to use personal data for one or more specific purposes;
  • Processing the data is necessary to meet a contractual arrangement with a consumer;
  • Processing the data is necessary to meet a legal obligation that applies to the data controller;
  • Processing is necessary to protect the vital interests of the consumer or some other natural person;
  • Processing is allowed for the performance of a task carried out in the public interest, such as by the government;
  • Processing is necessary to pursue legitimate interests of the data controller or a third party.

For the most part the new laws require consumers to give explicit consent to use their data, including the specific purpose for the use. Just like in the US, there are provisions for law enforcement to gain access to customer data through subpoena or court order.

Larger companies are expected to create the position of Data Protection Officer, who is tasked with making sure that all parts of a company are compliant with the law. As you might expect, meeting these requirements is a major change for many companies and there has been a two-year transition period leading up to the May implementation.

The new law also changes the way that companies store customer data to minimize the impact of data breaches. For example, companies are encouraged to store data in such a way that the stored data cannot be attributed to a specific person without the use of additional data. The law calls this pseudonymisation, which means encrypting stored data and storing it in a manner that makes it hard for an outsider to use. For example, a company would not store things like a social security number, date of birth, address and email address all in the same record.
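As a concrete illustration of what pseudonymised storage can look like, here is a minimal sketch in which directly identifying fields are split into a separately secured vault and replaced by a random token. The field names and the single-vault design are hypothetical simplifications, not a statement of what the GDPR specifically requires.

```python
# A minimal illustration of pseudonymisation: identifying fields are moved to
# a separate, access-controlled store and replaced with a random token, so an
# operational record alone cannot be tied to a person. Field names and the
# storage layout are hypothetical.

import secrets

identity_vault = {}   # token -> identifying data, stored and secured separately
usage_records = []    # operational data keyed only by token

def pseudonymise(record):
    """Strip direct identifiers from a record and file them under a token."""
    token = secrets.token_hex(16)
    identity_vault[token] = {
        "name": record.pop("name"),
        "email": record.pop("email"),
        "date_of_birth": record.pop("date_of_birth"),
    }
    record["customer_token"] = token
    usage_records.append(record)
    return token

# Usage: the operational record no longer identifies the person by itself.
pseudonymise({
    "name": "Jane Doe",
    "email": "jane@example.com",
    "date_of_birth": "1980-01-01",
    "plan": "25/3 Mbps",
    "monthly_bill": 75,
})
```

A breach of the usage records alone would then yield far less useful information than a breach of a single table holding everything about each customer.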

The law has teeth and allows for fines of up to 4% of the worldwide revenues of a business for massive violations of the rules. The expectation is that a few serious fines will probably have to be levied before most companies get serious about following the new rules.

Overall this law creates a drastic change in the handling of customer data. Companies will not be allowed to mine and sell customer data without specific customer approval. It seems to particularly discourage the practice of selling data to brokers who can then use the data in any manner they choose. In this country companies like Google and Facebook make huge revenues from data mining and the big ISPs are now leaping into this same business line. In Europe this is going to greatly restrict the value of selling customer data.

This new law is worth following since the big web companies that are so predominant in this country are going to be complying with the new rules. This means it would be relatively easy at some point to require similar rules here concerning customer data.

The GDPR data storage rules also have the purpose of limiting the value of data breaches. If we see a great reduction in damaging hacking in the EU because of this law, then companies here might begin following the EU recommended data storage methods even if the privacy rules are never implemented here. Some of the most damaging hacks we’ve seen here are when a hacker gets records that provide multiple data points for a given customer. If a hacker can’t use the data to put together a coherent picture of a given customer then the value of a breach is greatly reduced.

Categories
Regulation - What is it Good For?

FCC BDAC Model State Code

This is the first in a series of blogs on the progress being made by the FCC’s Broadband Deployment Advisory Committee. The FCC created five working groups to make recommendations on rule changes needed to better promote broadband deployment. Today’s blog discusses the subcommittee considering model codes for states.

This draft report is roughly in the form of legislation recommended for adoption by states. It largely reads like a wish list of regulations wanted by the big ISPs. There are a few ideas in here that have been widely discussed for years along with some new ideas. I could write ten pages talking about the nuances of this draft report, but here are some of the highlights:

  • State Goals. The goals are innocuous and have a state pledging to promote broadband everywhere including in rural areas. But there is one interesting twist over the existing goals that a number of states have adopted: broadband is defined as bandwidth adequate to meet the personal, business, educational and economic needs of the state. This differs from current goals that often set a specific download speed as the goal.
  • Statewide Franchising. The proposed regulations would do away with all local franchising and establish one statewide franchise authority. This is something that a number of states have already adopted. The proposed regulations have more teeth than most existing such rules and would prevent a locality from imposing any kind of restrictions on a broadband service provider.
  • Access to Government Assets. The rules would create a centralized Network Support Infrastructure Register in which local governments would have to list every asset that might be of use for broadband providers. This would include rights-of-ways, towers, buildings, etc. Governments would then have to provide access to these assets to any communication provider at affordable rates set by the state.
  • One Touch Make Ready. The rules contain one of the many variations on one-touch make ready for attaching to poles. These rules allow for short time frames for existing wire owners to comply with an attachment request before allowing an attacher to connect to poles using pre-approved contractors.
  • New Hoops for Municipal Broadband Infrastructure. Cities and counties must jump through a lot of hoops before building any broadband infrastructure. For example, before building fiber to connect government buildings they would have to seek permission of the State through a process called a Minimum Network Specification Notice. Commercial providers would be able to intervene in this process and offer to build some or all of the desired infrastructure. This would largely stop municipalities from building private networks to serve their own needs and would let ISPs instead build the facilities and bill the municipalities for the use.

Municipalities would also have to jump through a series of hoops before being able to build a broadband network to serve customers. For example, a city would have to prove that what they propose could not be done better through some kind of public-private partnership or by a commercial provider. These kinds of restrictions have been pushed for years by ALEC, and where they are enacted they effectively stop municipalities from creating a broadband business.

Any broadband facilities built by a municipality would have to be made available on a cost-plus lease basis to a service provider. This would include dark fiber, towers, and space inside of government buildings.

  • Preempt Building Owner Rights. The rules require that building owners must provide access for communications providers to create a ‘network access point’ inside or outside of a building.
  • Priority for Wireless Infrastructure. The proposed rules would prohibit localities from restricting in any way the deployment of wireless towers or small cell sites.
  • Paying for Rural Broadband. The report supports the idea of State Universal Service Funds and a new Rural Broadband Deployment and Maintenance Fund that would be used to support rural broadband service providers.

In summary, this represents the same wish list we’ve seen from the big ISPs and from their lobbying arms like ALEC. While many states have adopted some portion of these rules, nobody has adopted them all. It’s fairly obvious that the recommendations from this sub-committee are being driven by the big ISPs.

It’s worth noting that these sub-committees are advisory and the FCC doesn’t have to do anything with their recommendations. In this particular case, since these are proposed state rules the FCC would not have the authority to implement most of these recommendations, so these are really a ‘model’ set of regulations that the big ISPs would love to see enacted at the state level. However, by generating this through the FCC process these recommendations will be touted as being blessed by the FCC.
