Matter – The New IoT Standard

Anybody who uses more than one brand of Internet of Things (IoT) device in the home understands that there is no standard way to connect to these devices. Each manufacturer chooses from a range of different protocols to communicate with and control its devices, such as BLE, LoRa, LTE-M, NB-IoT, SigFox, ZigBee, and others. Every family of devices, and typically every brand, requires a separate app on your smartphone, which means managing a pile of different apps, passwords, and log-ins to control your devices.

The situation is tougher on businesses. Consider a farmer who might need a dozen sets of software to control the different smart devices and systems installed in a modern dairy or chicken farm. Farmers have complained to me that managing the electronics in their operations has grown increasingly complex. Not only must they master a different system to control each set of devices, but the outputs of the various systems are not in formats compatible with other systems. A farmer must manually intervene if an alarm from one set of devices needs a response from other devices.

This is also a big problem for larger businesses that deploy IoT devices. It’s not uncommon for the makers of smart devices to retool their products, and a large business might find over time that it has multiple generations of smoke alarms, security cameras, smart door locks, or other devices from the same manufacturer, each requiring a different set of software to control. Companies have sometimes resorted to ripping and replacing older but still functional devices that are incompatible with the newest generation of devices.

Big companies also share the farmers’ problem that there is no easy way to tie devices together onto one platform. Smart sensors lose a lot of their appeal if people are needed to manually interpret outputs and intervene to coordinate alarms or other events. Some companies have spent a lot of money developing custom software to make sense of the outputs of different kinds of smart sensors – but that software has to be constantly tweaked for new devices.

The manufacturers of smart devices have recognized that the chaos in the industry is holding down sales. Amazon, Apple, Google, and more than 200 other makers of home electronics and smart devices got together to develop a common IoT platform. These manufacturers agreed that it is important to work together, even though they are market rivals, because the confusion created by multiple communications platforms for IoT devices is hurting sales for the industry as a whole.

The new IoT platform that addresses these problems has been named Matter. There were hundreds of new Matter devices from a variety of vendors at this year’s CES. Matter creates a standard language for interpreting the outputs from IoT devices. This means that the commands to operate a smart door lock will be identical from every manufacturer of smart door locks that joins the Matter consortium. The consortium also tests and certifies that devices adhere to the new standard.

This has huge potential for users of IoT. It will be possible to have one app on a smartphone that can communicate with all Matter-enabled devices in the home. This will make it easy and almost instantaneous to connect a new Matter device to your home network of devices. It also will make it easier to coordinate interactions between devices. For example, let’s say that you want your smart blinds to be lowered any time the inside temperature rises to some predetermined level. That can be made to work even if your smart thermostat and smart blinds come from different vendors – commands will be unified across Matter devices, regardless of who made them. The implications for farmers and businesses are even more profound. They might finally be able to have a suite of interactive smart devices instead of disparate devices that can’t communicate with each other.
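
To make that concrete, here is a minimal sketch in Python of what such a cross-vendor rule could look like. Everything in it is a hypothetical illustration – the device class, attribute names, and the set_position command are stand-ins, not actual Matter SDK calls – but it captures the key idea that devices from different makers answer to one shared vocabulary.

```python
# Hypothetical sketch of a cross-vendor automation rule. The names
# below (read_attribute, send_command, set_position) are illustrative
# placeholders, not actual Matter SDK calls.

TEMP_THRESHOLD_C = 24.0  # lower the blinds above this temperature

class MatterDevice:
    """Stand-in for any certified device that speaks the shared command set."""
    def __init__(self, vendor, device_type):
        self.vendor = vendor
        self.device_type = device_type
        self.attributes = {}

    def read_attribute(self, name):
        return self.attributes.get(name)

    def send_command(self, command, **params):
        print(f"[{self.vendor} {self.device_type}] {command} {params}")

# Devices from two different vendors, one shared vocabulary.
thermostat = MatterDevice("VendorA", "thermostat")
blinds = MatterDevice("VendorB", "window-covering")
thermostat.attributes["local_temperature"] = 25.5  # simulated reading

temp = thermostat.read_attribute("local_temperature")
if temp is not None and temp >= TEMP_THRESHOLD_C:
    blinds.send_command("set_position", percent_closed=100)
```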

Interestingly, there were folks calling for this from the beginning of the IoT industry. But vendors all took separate paths, and some deliberately chose proprietary routes so they wouldn’t be compatible with anything else. In the early days, manufacturers had a vision that people would buy a whole integrated suite of products from them – but the industry didn’t go in that direction. If Matter catches on, vendors that use it ought to have a major advantage within a few years over anybody who refuses to use the new standard.

The Increasing Cost of Building Fiber

Diana Goovaerts recently cited Pascal Desroches, the CFO of AT&T, as saying that the cost of building fiber has increased. He said that rising costs are pushing the company close to the ceiling of its goal of spending no more than $900 to $1,000 per new fiber passing.

Any time I see an ISP talking about fiber costs, my first question is what is included in the costs. Does AT&T’s number cover only the fiber on the street? Does it also include a fiber drop, customer electronics including WiFi, and installation labor? AT&T operates a PON fiber network – does the cost include field splitters, cabinets, and other such components? We don’t have any context to judge AT&T’s number, and that makes it impossible to compare to costs claimed by other ISPs.

To put the AT&T numbers into perspective, I work with ISPs building aerial fiber in county seats that hope to hold all-in costs to $2,000 per passing when building to everybody – and they often go higher. That number includes all of the costs I listed above. But it also differs from AT&T’s because it covers the cost of building to everybody in a community. We know that AT&T only builds to small pockets of customers, and it probably rarely builds to any parts of a city that are challenging or expensive.

The other big difference is that AT&T is mostly overlashing fiber onto its existing copper. That is a construction method that is not available to other overbuilders, who have to pay for make-ready on poles. The only time costs are low for other ISPs is when the poles are in great shape and minimal make-ready work is needed. AT&T’s low target number highlights two things – its advantage from being able to overlash, and a willingness to skip neighborhoods with higher costs.

AT&T’s low target price also highlights that the company is shooting for a higher margin goal than most overbuilders. There is a big difference in the short-term return between an ISP paying $1,000 and one paying $2,000 per passing. AT&T is clearly under pressure to make fiber profitable as quickly as possible. Interestingly, when looking out at a ten-year horizon, there is very little difference in the cash flow generated by the low-cost or higher-cost build. Most ISPs that overbuild fiber recognize that the business has relatively low returns in the short run but eventually cranks out a lot of cash flow.
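
A back-of-the-envelope sketch shows why. Every input below is a hypothetical assumption chosen for illustration – none of it comes from AT&T – but it shows how the payback period differs sharply while the ongoing cash flow is identical for either build cost.

```python
# Per-passing economics of a fiber build. All inputs are hypothetical
# assumptions for illustration, not AT&T figures.

PENETRATION = 0.40      # share of passings that become customers
MONTHLY_MARGIN = 50.0   # margin per customer per month, dollars

# Expected cash generated per passing, per year.
annual_cash = PENETRATION * MONTHLY_MARGIN * 12  # = $240

for cost_per_passing in (1000, 2000):
    payback_years = cost_per_passing / annual_cash
    ten_year_net = annual_cash * 10 - cost_per_passing
    print(f"${cost_per_passing} build: payback in {payback_years:.1f} years, "
          f"10-year net ${ten_year_net:,.0f}, then ${annual_cash:,.0f}/year after that")

# $1000 build: payback in 4.2 years, 10-year net $1,400, then $240/year after that
# $2000 build: payback in 8.3 years, 10-year net $400, then $240/year after that
```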

The $1,000 cost ceiling also tells us a lot about AT&T’s market plan. Staying under that number means being very careful about where the company builds. This explains why AT&T is building to small pockets of customers in its markets and not building to everybody. The low target cost also tells us that there is very little buried fiber in AT&T’s plans.

To some degree, AT&T is following the model established fifteen years ago by Verizon FiOS. Local communities were incensed when Verizon built some streets but not the ones a block away, or when Verizon built fiber in one subdivision but not the one immediately next door. I don’t recall Verizon in those days ever mentioning a target price for construction, but it was clear that it had a cost metric that was driving where the company decided to build.

Desroches also said that AT&T is only forging ahead because the company is seeing higher than expected customer penetration rates on fiber. That fact must be creating a chill in cable company board rooms. It explains why cable companies are moving as quickly as possible to boost broadband speeds through upgrades. Cable companies are hoping that matching the speeds on fiber will fend off fiber overbuilders. That’s going to be an interesting marketing challenge because it seems to me that a lot of the public now believes that fiber is superior to other broadband technologies.

Desroches said that AT&T is still holding to its goal to pass 30 million homes by the end of 2025. The company closed 2022 with 24 million passings and will need to pass 2 million new homes per year to meet that target.

It seems likely to me that inflation isn’t the only reason that AT&T’s costs are rising. I would guess that the company has already built to the locations with the lowest cost per passing and that the remaining 6 million passings likely have higher costs than the places already built.

It’s going to be interesting to see what AT&T does when it hits 30 million passings. The company could do what Verizon did with FiOS and sit on the fiber portfolio and generate a lot of cash. It’s anybody’s guess if the company will roll any of those profits back into building more fiber.

AT&T announced recently that it is interested in pursuing some of the $42.5 billion BEAD grant funding to build in rural markets. I don’t foresee the company finding any grant opportunities where its cost for matching funds will be under its $1,000 target per passing. But I think all the big telcos are considering that a higher out-of-pocket cost for grant areas will be offset by the benefits of creating a virtual monopoly in those places.

AI and Telecom Jobs

I’ve seen a lot of articles recently predicting that artificial intelligence will bring about a massive upheaval in the U.S. job market. Such predictions are not new, but the recent introduction of ChatGPT and other language models has elicited a new round of them. We already know that software can displace people. In 2019, Wells Fargo predicted that efficient software would replace 200,000 jobs in the banking industry. Much of this has already come to pass as software has replaced a lot of bond traders and behind-the-scenes analysts at banks. The question I’ve been pondering today is how artificial intelligence will impact the telecom industry.

This industry has seen major retooling over the years. My first industry job was as an RF technician, and almost every function I tackled in the early 70s has been replaced by software. There is great software today that can pop out a propagation study or quickly estimate the link budget for a wireless connection. Similar changes have happened across most jobs in the industry. Folks proficient in copper technologies have been nearly phased out. There is no longer an army of certified Cisco techs working in every network engineering office. Rooms full of draftspeople have been replaced by fiber network design software.
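
To give a flavor of what that software automates, here is the kind of arithmetic an RF tech once worked out by hand – a simple free-space link budget. The path loss formula is textbook physics, but the example numbers are arbitrary, and a real propagation study also models terrain, foliage, and fading.

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    """Textbook free-space path loss: 20*log10(d_km) + 20*log10(f_MHz) + 32.44."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def received_power_dbm(tx_dbm, tx_gain_dbi, rx_gain_dbi, distance_km, freq_mhz):
    """Link budget: transmit power plus antenna gains minus path loss."""
    return tx_dbm + tx_gain_dbi + rx_gain_dbi - free_space_path_loss_db(distance_km, freq_mhz)

# Example: a 5 km link at 3.65 GHz, 30 dBm transmit power, 15 dBi antennas.
rx = received_power_dbm(tx_dbm=30, tx_gain_dbi=15, rx_gain_dbi=15,
                        distance_km=5, freq_mhz=3650)
print(f"Received signal: {rx:.1f} dBm")  # about -57.7 dBm before real-world losses
```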

Many of the past changes to industry jobs are solely due to the introduction of new technologies, such as copper jobs being replaced by fiber jobs. But a lot of the changes to jobs are due to productivity software, where computers can figure things out faster and more accurately than people.

The web is currently full of predictions that the next wave of innovations will impact office workers much more than craft jobs. Outside of the telecom industry, there are some drastic predictions of big changes in the next five years. Among the first jobs under fire will be coders. There will always be a place for the smart innovators who come up with unique software ideas, but folks who write fill-in code or who debug software are likely to be replaced by AI software that can do the same functions faster and more accurately.

There are predictions that call centers will be emptied out over the next decade when voice software becomes as good at answering customer questions as a live person. The same is true for jobs that deal with a lot of paperwork. Paralegals, insurance claims specialists, and anybody else who processes repetitive information could be replaced by AI software.

One of the direst predictions is that AI can replace a lot of the work done by high-proficiency experts. For example, the prediction is that medical diagnosis software will be faster and far more accurate than doctors at diagnosing and recommending treatment for diseases. In the telecom world, this might mean replacing jobs like network engineers since software can monitor and react to network issues in real-time. A lot of this has already happened, and it’s amazing how few people it takes today to operate a NOC or data center.

Not all of the predictions are dour. I read one prediction that AI would eliminate 12 million U.S. jobs over the next decade. But these predictions don’t talk about the new jobs that will be created in a world with prevalent AI. I don’t know what those jobs will be, but they are bound to materialize.

Innovation from AI is likely to impact large corporations far sooner than small ones. It’s not hard to envision some of the giant ISPs fully automating back-office functions to eliminate many customer service, accounting, and other office workers. Little companies are not going to easily duplicate this transition. Employees in smaller ISPs tend to wear many hats and usually don’t perform just one function. The cost for a small company to implement an AI solution might be a lot higher than the savings.

One consequence of improved efficiency for big ISPs might be that it will become easier to justify buying small ISPs and eliminating everybody except the field technicians.

Interestingly, there is one area where most of the predictions agree – that AI will not replace innovators and experts who see the big picture. Nobody believes that software is going to have any creative spark in the coming decades, and maybe ever. But that raises an interesting question. How do we grow the next generation of experienced veterans in an industry where a lot of the functions are done by AI? All of the smartest people I know in the broadband industry have worn many different hats during their careers. It is the accumulated experience of working in many parts of the business that makes them experts.

One thing is sure. ChatGPT and similar software is new, and we’re at the very beginning of the AI revolution. But if this new software meets only a fraction of the early claimed benefits, we’re going to see huge changes across the economy. Whatever is coming is going to be massively disruptive, and working in telecom or any other industry will never be the same.

The Next Big Thing

I’ve always been somewhat amused to read about the colossally important technology trends that are right around the corner. These trends are mostly driven by the wishful thinking of vendors, and even when the next big thing does come to pass, it almost never arrives at the predicted magnitude. There has been at least one of these big trends announced every year, and here are a few of the more interesting ones.

I can remember when it was announced that we would be living in an Internet of Things world. Not only would our houses be stuffed full of labor-saving IoT devices, but our fields, forests, and even the air around us would be full of small sensors giving us feedback on the world around us. The reality was not the revolution predicted by the industry press, but over the past decade most of us have accumulated smart devices in our homes. The fields, forests, and surrounding environment – not so much.

The IoT trend was followed by big pronouncements that we’d all be adopting wearables. This meant not only devices like Google Glass – we’d supposedly have computers and sensors built into our everyday clothes so that we could carry them everywhere effortlessly. This prediction was about as big a flop as imaginable. Google Glass crashed and burned when the public made it clear that nobody wanted everyday events to be live streamed. Other than gimmicks at CES, there was no real attempt at smart clothes.

But wearables weren’t the biggest flop of all – that title is reserved in my mind for 5G. The hype for 5G swamps the hype for all of the other big trends combined. 5G was going to transform the world. We’d have near-gigabit speeds everywhere, and wireless was going to negate the need for investing in fiber broadband networks. 5G was going to enable fleets of driverless cars. 5G would drive latency so low that it was going to be the preferred method of connection for gamers and stock traders. There were going to be 5G small cell sites on every corner, and fast wireless broadband would be everywhere. Instead of 5G, we got a watered-down version of 4G LTE labeled as 5G. Admittedly, cellular broadband speeds are way faster, but none of the predicted revolution came to pass.

A few predictions came to pass largely as touted – although at a much slower pace. Five years ago, we were told that everything was going to migrate to the cloud. Big corporations were going to quickly ditch internal computing, and within a short time, the cloud would transform computing. It didn’t happen as quickly as predicted, but we have moved a huge amount of our computing lives into the cloud. Tasks like gaming, banking, and most of the apps we’ve come to rely on are in the cloud today. The average person doesn’t realize the extent to which they rely on the cloud until they lose broadband and discover how little of what they do is stored on the computers in their homes and offices.

This blog was prompted by the latest big trend. The press is full of stories about how computing is moving back to the edge. In case the irony of that escapes you, this largely means undoing a lot of the big benefits of going to the cloud. There are some good reasons for this shift. For example, the daily news about hacking has corporations wondering if data will be safer locally than in the cloud. But the most important reason cited for the movement to edge computing is that the world is looking for extremely low latency – and this can only come when computer processing is done locally. The trouble with this prediction is that it’s hard to find applications that absolutely must have a latency of less than 10 milliseconds. I’m sure there are some, but not enough to make this into the next big trend. I could be wrong, but history would predict that this will happen to a much smaller degree than being touted by vendors.
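
The latency claim is easy to sanity-check with arithmetic. Light in fiber travels at roughly two-thirds the speed of light in a vacuum – about 200 kilometers per millisecond – so distance alone sets a hard floor under latency before any switching or processing delay is counted:

```python
# Propagation-delay floor for fiber. The ~200 km/ms figure reflects
# light traveling at roughly two-thirds of c inside glass.

FIBER_KM_PER_MS = 200.0

def round_trip_ms(distance_km):
    """Minimum round-trip time from propagation alone (no switching delay)."""
    return 2 * distance_km / FIBER_KM_PER_MS

for km in (50, 500, 1500):
    print(f"Data center {km:>4} km away: at least {round_trip_ms(km):.1f} ms round trip")

# 50 km -> 0.5 ms, 500 km -> 5.0 ms, 1500 km -> 15.0 ms. An application
# that truly needs sub-10 ms latency can't use a distant data center,
# which is the whole argument for edge computing.
```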

All big technology trends have one big weakness in common – the world naturally resists change. Even when the next big thing has clear advantages, there must be an overwhelming reason for companies and people to drop everything and immediately adopt something new and usually untested in the market. Most businesses have learned that being an early adopter is risky – a new technology can bring a market edge, but it can also result in having egg on one’s face.

Should DSL Cost Less Than Fiber?

As I was going through my pile of unread articles, I found an article from the Associated Press that asked how big ISPs can get away with charging the same prices in urban areas for both slow and fast broadband. The article was about Shirley Neville, in New Orleans, who found that she was paying the same price for 1 Mbps DSL from AT&T as other city residents are paying for a fiber connection.

It’s a great question, and I was surprised that I hadn’t thought to write about it before. I investigate broadband prices around the country, and it’s not unusual to find the price for fiber broadband in a city set close to the price charged for DSL.

It would be easy to justify charging the same price for both technologies if AT&T was in the process of converting everybody in New Orleans to fiber. In fact, if that was the reason, I’d be impressed that AT&T wasn’t charging more for the technology upgrade. But this is not the situation. It’s clear that the AT&T fiber business plan is to build fiber to small pockets of cities, but not everywhere. The chances are high that Shirley Neville’s neighborhood and many others will not be getting fiber soon from AT&T, if ever. For every neighborhood that gets fiber, there will be many that will never see AT&T fiber.

Another possibility is that AT&T’s low price for a fiber connection is an introductory price to lure people to switch from Cox, the cable company. Perhaps when the introductory price expires the fiber price will be higher than DSL. This still doesn’t feel like a great answer to Shirley’s question, since it means AT&T is willing to give fiber customers a big break while DSL customers pay full price.

The most likely answer to the question is the ugliest. AT&T doesn’t feel like it needs to reduce the price of DSL in the city because DSL customers are a captive audience. Cox has some of the highest broadband prices in the country, and that gives cover for AT&T to charge whatever it wants for DSL as long as the price is lower than Cox’s.

Another reason that AT&T can charge the same for DSL and fiber is that there isn’t anybody to tell the company that it shouldn’t do so. The FCC eliminated broadband regulation and the Louisiana Public Service Commission doesn’t assert any authority over broadband prices. Folks like Shirley Neville don’t have anybody looking out for them, and the big ISPs can overcharge customers with impunity.

As the article points out, Shirley’s question is germane today because of the FCC’s investigation of digital discrimination. The article cites an investigation by The Markup, which analyzed over 800,000 broadband offerings from AT&T, Verizon, Earthlink, and CenturyLink in 38 cities across America and found that the four ISPs regularly offer broadband speeds at 200 Mbps or faster at the same price as broadband with speeds under 25 Mbps.

The Markup analysis shows that the neighborhoods with the worst speed options have lower median household incomes in 90% of the cities studied. Where The Markup could gather the data, it also looks like the big ISPs offered the worst deals to the least-white neighborhoods.

USTelecom responded to the issue by stating that the high cost of maintaining old copper networks justifies high prices for DSL. The article cites Marie Johnson of USTelecom writing that “Fiber can be hundreds of times faster than legacy broadband—but that doesn’t mean that legacy networks cost hundreds of times less. Operating and maintaining legacy technologies can be more expensive, especially as legacy network components are discontinued by equipment manufacturers.”

That’s exactly the response I would expect to defend monopoly pricing. Nobody expects the price of DSL to be hundreds of times less than fiber – but DSL should cost less. The big telcos have argued for decades that it costs too much to maintain copper networks. But they never finish that statement by telling us how much money they have collected over the years from a customer like Shirley Neville – possibly hundreds of times more than the cost of her share of the network.

Amazon’s Huge IoT Network

In a recent blog post, Amazon invited developers to test drive its gigantic IoT network. The network is labeled Sidewalk and was created by tying together all of Amazon’s wireless devices, like Amazon Echos and Ring cameras.

Amazon claims this huge wireless network now covers 90% of U.S. households. Amazon created the network by transmitting Bluetooth and 900 MHz LoRa signals from its various devices. This network provides a benefit to Amazon because it can detect and track its own devices separate from anything a homeowner might do with WiFi.

But Amazon has intended for years to monetize this network, and this announcement begins that process. The network has flown under the radar until now, and most homeowners have no idea that their Amazon devices can connect and communicate with other devices outside the home. Amazon swears that the IoT connection between devices is fully separate from anything happening inside the house over WiFi.

Anyplace where there are more than a few Amazon devices, the network should be robust. The 900 MHz spectrum adds a lot of distance to the signals, and it’s a frequency that does a good job of penetrating obstacles like homes and trees.

Amazon believes that this network can be used by IoT device makers to improve the performance of IoT devices in a neighborhood – things like smart thermostats, appliance sensors, and smart door locks. Such devices use only a small amount of bandwidth but normally rely on the home broadband network being operational. Amazon’s vision is that your smart door lock will still work even when your home WiFi isn’t working.
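
A conceptual sketch of that dual-path design is below. The function names are hypothetical placeholders rather than calls from any actual Sidewalk SDK – the point is simply that a device prefers the home network and falls back to the low-bandwidth neighborhood link when WiFi is down.

```python
# Conceptual sketch of WiFi-first with a Sidewalk-style fallback.
# All function names here are hypothetical placeholders, not calls
# from any actual Sidewalk SDK.

import json

def wifi_available() -> bool:
    return False  # pretend the home network is down

def send_via_wifi(payload: bytes):
    print("sent over WiFi:", payload)

def send_via_fallback(payload: bytes):
    # Links like this carry very little data, so keep messages tiny
    # (the 20-byte budget here is an illustrative assumption).
    assert len(payload) <= 20, "fallback messages must stay tiny"
    print("sent over low-bandwidth fallback:", payload)

def report_lock_state(locked: bool):
    payload = json.dumps({"locked": locked}).encode()
    if wifi_available():
        send_via_wifi(payload)
    else:
        send_via_fallback(payload)

report_lock_state(True)  # -> sent over low-bandwidth fallback: b'{"locked": true}'
```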

By making the network available to others, Amazon can unleash developers to create new types of wireless devices. For example, it has always been a challenge to use outdoor sensors since WiFi signals outside of homes are weak and inconsistent. It’s not hard to imagine a whole new array of sensors enabled by the Sidewalk network. Picture a motion detector on a shed door or a leak detector on outdoor faucets. With this network, vendors can manufacture such devices knowing that most homes will be able to make the needed wireless connection.

This also holds a lot of promise for municipal and business uses. Sidewalk offers a low-cost way to communicate with smart city sensors and would enable, for the first time, the deployment of environmental sensors anywhere within range of the network.

This is another interesting venture by Amazon. At least in the U.S., this is a lower-cost solution than trying to connect to IoT devices by satellite. The only cost of building this network for Amazon was adding the wireless capability to its devices – mere pennies when deployed across millions of devices. But interestingly, Amazon will also have a satellite network starting in 2025 that can fill in the gaps where the Sidewalk network can’t reach.

Amazon says that it has already made deals to test the network with companies like Netvox, OnAsset, and Primax. Now that manufacturers know this network exists and is available, this ought to open up a wide range of new IoT devices that are not reliant only on WiFi. This might finally be the network that enables the original promise of IoT of a world with sensors everywhere, keeping tabs on the environment around us.

Some Musings on Telecom Valuations

One of the most interesting things I’ve witnessed in the industry over my career is how the valuations of telecom companies have risen and fallen over time. Telecom companies are generally valued and sold based on a multiple of earnings. Companies with a higher margin per customer are worth more than companies with lower margins. This method of valuation applies to telephone companies, cable companies, and fiber overbuilders.

For more than a decade, the valuation of small telephone companies has hovered around five times EBITDA (earnings before interest, taxes, depreciation, and amortization). While the price somebody is willing to pay for a company is more complex than simple multiplication, this metric has provided a good starting point for guessing the relative value of a telco.

A given company might sell for something other than this average valuation. For example, there might be a motivated buyer willing to pay more, such as a neighboring company that understands the boost to combined margins from economy of scale. Properties sometimes sell for less than the expected valuation if the owners have decided it’s time to exit the business and don’t want to wait for a higher offer.

If you look back twenty years, telcos and small cable companies sold for ten to twelve times EBITDA. Twenty years ago was the beginning of the transition of small telcos and cable companies into ISPs. Buyers recognized that broadband sales would increase over time and priced that potential into valuations. Buyers were willing to pay more to gain the upside from future broadband sales.

After the peak valuations of twenty years ago, valuations dropped over time. Rural telephone companies started to lose the historic subsidies that had bolstered earnings. Small cable companies started to see a serious erosion of cable TV margins as the price of programming skyrocketed. Buyers were less willing to buy into a company with lowered future expectations, and values dropped accordingly. I recall talking to telcos that got offers to sell at multiples of only three or four times earnings.

Over the last few years, valuations have climbed again – at least for some companies. Telcos that invested in fiber and cable companies that upgraded to gigabit capabilities have become worth more to buyers. Companies that didn’t make these upgrades are worth a lot less.

One of the interesting changes in the industry is that outside venture capital has become interested in buying telecom properties. When industry valuations hit their lowest point, most sales of telecom companies were made to other telecom companies. It seems that external interest in the industry has ratcheted up valuations. I always have to wonder if outsiders understand the industry well enough to justify paying more for these businesses than folks who have been in the industry forever.

A few factors have led to increased valuations in recent years. One is historically low interest rates, which made it easier and more affordable for buyers to finance the purchase of companies. I also think valuations went up as some ISPs demonstrated the ability to gain near-monopolies in markets. I guess this emboldens buyers to believe they can duplicate that success with the companies they purchase.

I’m suddenly talking to companies that are being offered multiples of as much as ten times earnings. That puts these companies back to the heady valuations of 2000. It’s going to be interesting to see how many small telcos and cable companies sell when valuations are high – it has to be tempting.

I’m frankly perplexed by valuations in the ten times range. If a buyer pays ten times earnings and doesn’t improve the business, it will take ten years just to get back the investment – without considering the cost of the debt used to finance the purchase. A buyer has to make huge improvements to an acquisition to get the investment back in a reasonable time. The upside can come from increased revenues, reduced expenses, or a combination of the two. It’s not easy to squeeze that much improvement out of a telecom business without alienating customers.
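
Running the numbers shows how heavy that lift is. In the hedged sketch below, the purchase is fully financed at an assumed 6% interest rate and the business is not improved at all – both inputs are illustrative, not drawn from any actual deal:

```python
# Years until cumulative EBITDA repays the purchase price plus accrued
# interest. All inputs are illustrative assumptions.

def payback_years(multiple, interest_rate, max_years=50):
    """Years to pay off a purchase at `multiple` times EBITDA."""
    ebitda = 1.0                 # normalize to one unit of annual earnings
    balance = multiple * ebitda  # purchase price, fully financed up front
    for year in range(1, max_years + 1):
        balance = balance * (1 + interest_rate) - ebitda
        if balance <= 0:
            return year
    return None  # never pays back within max_years

for mult in (5, 10):
    print(f"{mult}x EBITDA at 6% interest: paid back in {payback_years(mult, 0.06)} years")

# 5x EBITDA at 6% interest: paid back in 7 years
# 10x EBITDA at 6% interest: paid back in 16 years
```

Under those assumptions, a purchase at five times earnings pays back in about seven years, while a purchase at ten times takes around sixteen – which is why a buyer at the higher multiple has to wring major improvements out of the acquisition.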

The Trade War for Undersea Fiber

A recent article by Joe Brock for Reuters describes a new geopolitical battle over undersea fibers. There are about 400 undersea fiber routes crossing the oceans to connect the world. This is a huge business – about 95% of all international broadband traffic passes through undersea fibers.

There has always been some concern about undersea fibers. Countries fear that sabotage of the fibers connected to their shores could result in being isolated from the Internet. For example, there were several undersea fiber cuts in recent years that isolated Taiwan. These cuts were blamed on fishing boats and not on China, but the cuts highlight a vulnerability in the networks that drive international commerce.

I’ve also read a few other articles that claim that undersea fibers are vulnerable to eavesdropping and spying and that countries with sophisticated technology could be listening in on the traffic that crosses the seas.

The article focuses on a recent trade battle between China and the West over a new fiber route, planned for construction in 2025, that would run from France to Singapore and connect twenty countries along the way. The cable route is known as the South East Asia–Middle East–Western Europe 6, or SeaMeWe-6 for short.

The article describes the complicated consortiums that fund undersea fiber routes. This particular route included more than a dozen investors, mostly large companies that have to transport huge amounts of international data traffic. The partners on this project included companies like Microsoft, France’s Orange, and India’s Bharti Airtel, along with China Telecom, China Mobile, and China Unicom.

It initially looked like the award for the electronics and construction was going to go to HMN Technologies Co Ltd. for around $500 million. This is a Chinese company that was originally created by Huawei but was spun off as a standalone company. The primary competitor bidding for the route was SubCom LLC, an American company.

Things quickly got complicated since the US and China are now embroiled in a trade war that covers a huge range of industries, including undersea fibers. After the deal was awarded to the Chinese firm, the US began warning the investors about the espionage risk of dealing with Chinese electronics vendors. The US went so far as to threaten a boycott against HMN Technologies. The various investors were split on the choice of technology vendor, but eventually agreed to spend $100 million more to use the American company.

It’s almost impossible to overstate how much of the world economy relies on these undersea connections. I find it dismaying to see basic infrastructure becoming enmeshed in international politics. The wrangling over this one fiber route is not the end game but more likely the beginning of a trade war that will add cost to international communications.

This is a new escalation in the trade war that has already seen the US government ban Huawei and other Chinese telecom electronics from the country. I haven’t the slightest idea about the real risk of international spying through these fibers, and I suspect there are not a lot of folks who truly understand it. I might be cynical, but it stands to reason that if the Chinese are spying on this kind of traffic, the West likely is as well. Microsoft and Orange argued that the data security threat was not big enough to justify spending more to switch to the American vendor. But in the end, the pressure from the American government won, and the more expensive vendor was chosen.

Filling a Regulatory Void

Earlier this year, the Ninth Circuit Court of Appeals upheld the net neutrality regulations enacted by California. The appeal was filed on behalf of big ISPs by ACA Connects, CTIA, NCTA, and USTelecom.

The case stems from the California net neutrality legislation passed in 2018. The California law was a direct reaction to the Ajit Pai FCC, which not only killed federal net neutrality rules but also wiped out most federal regulation of broadband. The California legislation made it clear that the State doesn’t want ISPs to have an unfettered ability to behave badly.

The California net neutrality rules are straightforward. The law applies to both landline and mobile broadband. Specifically, the California net neutrality law:

  • Prohibits ISPs from blocking lawful content.
  • Prohibits ISPs from impairing or degrading lawful Internet traffic except as is necessary for reasonable network management.
  • Prohibits ISPs from requiring compensation, monetary or otherwise, from edge providers (companies like Netflix or Google) for delivering Internet traffic or content.
  • Prohibits paid prioritization.
  • Prohibits zero-rating.
  • Prohibits interference with an end user’s ability to select content, applications, services, or devices.
  • Requires the full and accurate public disclosure of network management practices, performance, and clearly worded terms of service.
  • Prohibits ISPs from offering any product that evades any of the above prohibitions.

This is an interesting step in the battle to regulate ISPs. The big ISPs put a huge amount of money and effort into getting the FCC under Ajit Pai to kill federal broadband regulation. There is a long-standing tradition in the telecom world that the FCC has the power to make federal rules, but states have always been free to regulate issues not mandated by the FCC. There have been some tussles over the years between states and the FCC, but courts have consistently sided with the FCC’s authority to make national rules. When the FCC walked away from most broadband regulation, it created a regulatory void that tradition implies states are allowed to fill.

Losing this court case creates a huge dilemma for big ISPs. California is such a large part of the economy that it would be hard for ISPs to follow this law in California and not follow it elsewhere. It also seems likely that other states will now pass similar laws over the next few years, and that will create the worst possible nightmare for big ISPs – different regulations in different states.

I’ve always adhered to the belief that there is a regulatory pendulum. When regulations get too tough for a regulated industry, there is usually a big push to lighten the regulatory burden. But when the pendulum swings the other way and regulation gets too slack, there is inevitably a big push to put more restrictions on the industry being regulated. In this case, the ISPs and Ajit Pai went too far by eliminating most meaningful federal broadband regulation. There is nothing surprising about California and other states reacting to the lack of federal regulation.

With this court decision, there is nothing to stop a dozen states from creating net neutrality rules or tackling the other regulations that got voided by the Ajit Pai FCC. It’s also not hard to predict that the big ISPs will now push to create a watered-down federal version of net neutrality as a way to override a plethora of state rules.

I said earlier that this is a dilemma for large ISPs because small ISPs rarely have the means or the incentive to violate net neutrality principles. The California rules will require ISPs to create plain-English terms of service, but otherwise, small ISPs in California will not likely be bothered by any of these rules.

For the big ISPs, this is a harsh reminder that the regulatory pendulum always swings back. It’s not hard to envision celebration behind the scenes at the big ISPs when they convinced the FCC to give them everything on their wish list. But when regulations get out of balance, there is inevitably pushback in the other direction.

There is one piece of unfinished business in this case: the court is still examining whether the California law impinges on interstate commerce. But the Ninth Circuit’s ruling made it clear that California is free to enforce its version of net neutrality within the state.

Businesses Rely on Broadband

I don’t think most folks understand the extent to which businesses are adapting to broadband. My firm interviews businesses all over the country, and there is a drastic difference between the ways that businesses with and without good broadband operate today.

One of the best examples I can give is to talk about a specific business. It’s a casual bar/restaurant that attracts customers by offering good food and arcade games. The business is not part of a big chain and was created and is operated by the owner. A customer might spend an evening at the business and not have any clue about the extent to which it uses broadband. But consider the following ways this one local business uses broadband:

  • Customers make reservations using a service that is hosted in the cloud. The business does not keep a local reservation book and is completely reliant on the reservation service to know who will be showing up for the evening. The reservation service alerts the owner to heavily booked days so that he can make sure there are enough employees on hand.
  • Most of the food and drinks to supply the kitchen and the bar are ordered using online vendor portals. The owner rarely has to talk to vendor salespeople and rarely has to go shopping for supplies.
  • The software running the games is located in the cloud. If the broadband connection dies, the games instantly go dead. The owner says one of the coolest features of the cloud software is that customers can see how they scored on a given game in past visits – and people will try to beat their own best scores.
  • The merchant services software that accepts and processes credit cards is hosted in the cloud. The business uses touchscreen terminals for customers to pay their bills and enter tips.
  • Payroll is totally in the cloud. Employees log in when they come and go for the day, and payroll is calculated automatically. The merchant services software also processes tips directly to each waitperson.
  • Accounting for sales is in the cloud. All food, bar, and game sales are automatically added to the accounting books.
  • The background music in the restaurant comes from a cloud service.
  • The business has a voice over IP telephone that only works when the broadband is functioning.
  • There are security cameras inside and outside the business to keep a record of who comes and goes. The cameras are tied into a burglar alarm service hosted in the cloud.
  • The restaurant is active on social media and posts comments and pictures throughout the day.
  • The owner keeps a backup copy of all accounting and other key records in the cloud.
  • One of the biggest uses of bandwidth comes from providing free WiFi for patrons. At busy times that can add up to a lot of bandwidth.

The owner of the business fully understands the degree to which the business is reliant on broadband. To protect against outages, the owner bought broadband connections from two different ISPs. Unfortunately, when there was storm damage, it turned out that both ISPs were on the same physical route, and the business had to shut down for a day. The owner has since switched to an ISP that uses a different physical path to reach the business.

I’m not highlighting this business because it is extraordinary – just the opposite. This is a business that is using the tools that are available to any business with broadband. There are now millions of businesses that are fully reliant on broadband to function, and that’s something we don’t talk about enough.

One interesting thing I’ve found in talking to businesses that don’t have good broadband is that they usually have only a short list of functions that could be done better if they could buy faster broadband. I’m not surprised about that because such businesses can’t imagine the changes to their daily work life that would come from fully integrating broadband into their business.