FCC Wants to Change 3.5 GHz Spectrum Rules

The FCC voted last week to re-examine the rules for the deployment of 3.5 GHz spectrum for wireless broadband. This is the spectrum that has generally been referred to as the Citizens Broadband Radio Service (CBRS). This change clearly favors large carriers over the small carriers that were the targeted users of the existing rules.

The specific rule changes proposed include:

  • Lengthen the license term from 1 year to 10 years.
  • Eliminate the rule that the exclusivity of a license expires at the end of the first license term. Exclusivity could now extend into a license renewal.
  • Increase the size of the geographic footprint of a license. The license area before was a census tract, which is generally an area encompassing 2,500 to 8,000 people; the Census Bureau views a tract as the equivalent of a ‘neighborhood.’ The new license areas are proposed to be something larger, such as entire counties or Partial Economic Areas (PEAs). PEAs were defined in the recent incentive auction and subdivide the country into 416 regions.
  • Allow license holders to partition and disaggregate licenses between adjacent geographic areas.
  • Eliminate the rules that limit the number of licenses that can be held by one entity in an area. This would also allow license holders to bid on the use of individual channels.

What does all of this mean? This is largely a shift to allow big wireless carriers to obtain and use the spectrum for cellular service. Before, the spectrum rules were aimed at benefiting small rural broadband providers. They would have been able to get a license for a small geographic area and then got a 1-year head start to deploy the spectrum before anybody else. The first licensee then had an advantage because future deployments had to be coordinated to avoid interfering with theirs.

The old rules made it difficult, but not impossible, for the bigger companies to use the spectrum. A cellular provider was not likely to invest in small license footprints and only be protected for a year from competition and interference. But the new rules allow for a much bigger footprint, similar to that used for other cellular spectrum. And the ten-year license provides a long-term opportunity for no competition, as well as a chance to renew the original license.

Basically, this is a spectrum grab by the cellular providers to use for LTE or 5G cellular. Two of the big proponents of these changes are Comcast and Charter, which want their own spectrum to support their new cellular businesses.

This change will make rural deployments much harder for WISPs and other ISPs willing to serve customers with wireless connections. The original rules also envisioned that this spectrum would enable smaller carriers to deploy various small-cell technologies, not just point-to-multipoint radios.

This is another proposed ruling that shows that the current FCC is clearly pro-big business. Almost every ruling they’ve made so far benefits big companies – the big ISPs, the big TV station owners, and the big wireless carriers. This particular ruling is a big give-away to the cellular companies and to Comcast and Charter. Under the rules the spectrum can be licensed inexpensively compared to spectrum that is auctioned. The new rules allowing large coverage areas will greatly disadvantage small carriers that only want to license a small service area – which was the entire purpose of the original rules for the spectrum.

The FCC voted 4-1 to consider the new rules, which is a likely indication that they will be adopted after the deliberation period required by FCC rules.

Broadband Speeds are a Local Issue

You might think that the big ISPs deliver the same broadband products everywhere. But I’ve been seeing evidence that broadband speeds are definitely a local issue. One of the tools we’ve been using to help clients assess a new market is getting a lot of people in the potential market to take a speed test. We’ve mostly been using the Ookla speed test, but probably any speed test is sufficient as long as everybody in a market takes the same one.

The results of these speed tests surprised me a bit because they showed a wide variance in the products of the major ISPs. For example, I’ve seen markets where Comcast is delivering a little more download speed than they are advertising. But I also saw test results from a Comcast market where the speeds were about 20% less than advertised. I’ve seen the same thing with AT&T, where there are markets that get only half of the advertised speeds and other markets where they were mostly delivering what they promised. I’m not sure there is any better demonstration that speeds are a local issue than seeing that the big ISPs don’t deliver the same speeds in every market.
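As a rough illustration of this kind of market analysis, a batch of speed tests can be reduced to a single ratio of measured to advertised speed. This is a minimal sketch with invented market names and numbers, not our actual tooling:

```python
# Compare crowd-sourced speed-test results to an ISP's advertised speed.
# All market names and readings below are hypothetical.
from statistics import median

def speed_ratio(measured_mbps, advertised_mbps):
    """Median measured download speed as a fraction of the advertised speed."""
    return median(measured_mbps) / advertised_mbps

# Hypothetical tests from two markets served by the same ISP at an
# advertised 100 Mbps download speed.
market_a = [104, 98, 110, 101, 107]   # delivering a bit more than advertised
market_b = [81, 76, 85, 79, 83]       # roughly 20% less than advertised

print(f"Market A: {speed_ratio(market_a, 100):.0%} of advertised")
print(f"Market B: {speed_ratio(market_b, 100):.0%} of advertised")
```

The median is used rather than the mean so that a few tests taken over bad in-home WiFi don’t drag down the whole market’s number.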

There is a long list of reasons that can account for the differences in speeds. A big one is the age and quality of the network cables. Older telco copper and older coaxial cables can cause a lot of problems with quality. The size of customer nodes is always an issue. If everything else is equal, a cable company node serving 100 customers is going to have better broadband speeds than one serving 200 customers.

The other big issue that affects customer performance is what I call network chokepoints. A chokepoint is any place in a broadband network that restricts the flow of data to and from customers. There can be a chokepoint directly within a neighborhood if the nodes are too large. There can be a chokepoint between a node and the core network if the electronics for the connection are undersized. There can be a chokepoint on local network rings if they don’t provide enough bandwidth. There can be an electronics chokepoint at a headend if a router or other major piece of electronics is overwhelmed. And finally, there can be an overall chokepoint in a network if the data pipe going to the Internet is too small.
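The end-to-end speed a customer sees is capped by the most restrictive link in that chain, so finding a chokepoint amounts to finding the minimum capacity along the path. A minimal sketch, with hypothetical segment names and capacities:

```python
# Identify the tightest point in a chain of network segments.
# Segment names and Mbps capacities are invented for illustration.
def find_chokepoint(segments):
    """Return the (name, capacity) of the most restrictive network segment."""
    return min(segments.items(), key=lambda item: item[1])

path = {
    "neighborhood node": 2000,
    "node-to-core link": 1000,
    "local ring": 10000,
    "headend router": 8000,
    "Internet transit pipe": 5000,
}

name, capacity = find_chokepoint(path)
print(f"Chokepoint: {name} at {capacity} Mbps")  # → node-to-core link
```

In real networks the useful numbers are peak-hour utilization rather than raw capacity, but the principle is the same: the smallest number in the chain wins.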

Chokepoints aren’t always a problem. Many chokepoints only appear during the busiest hours of usage on the network, but don’t impede data speeds when data traffic volumes are smaller. And this means that chokepoints are often hyper-local. They might affect one neighborhood but not the one next door, and only at some times of the day. I’m guessing that the slowest results I saw in the big ISP speed tests were during the peak evening hours.

These chokepoints obviously don’t only affect the large ISPs, and plenty of smaller ISP networks have chokepoints. I’ve seen numerous network chokepoints appear in recent years due to the explosive growth of broadband usage. A network that may have been functioning perfectly a few years ago will develop chokepoints as the total amount of bandwidth overwhelms some portion of the network.

ISPs are often challenged to keep up with the upgrades needed to avoid chokepoints, because generally the only ways to relieve a chokepoint are to replace cables or to upgrade electronics, which can be expensive. Smaller ISPs often don’t have the immediate capital available to fix chokepoints as they appear. The big ISPs tend to ignore chokepoints as they appear and instead make large forklift upgrades periodically, rather than the constant small upgrades needed to keep the network working perfectly.

I always advise my clients to keep a running list of all of their chokepoints. With good network engineering and monitoring practices a company can see chokepoints coming long before they materialize and hopefully can plan to make the needed upgrades before they degrade the customer experience.
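One simple way to see a chokepoint coming is to trend peak-hour utilization and estimate when it will cross a planning threshold. A rough sketch, assuming approximately linear growth; the utilization figures and the 80% threshold are invented:

```python
# Estimate how long until a link's peak utilization crosses a planning
# threshold, assuming roughly linear growth. All numbers are hypothetical.
def months_until_threshold(history, threshold=0.8):
    """Estimate months until peak utilization (0..1) crosses the threshold,
    using the average growth between the first and last samples."""
    growth_per_month = (history[-1] - history[0]) / (len(history) - 1)
    if growth_per_month <= 0:
        return None  # flat or shrinking; no upgrade pressure
    return (threshold - history[-1]) / growth_per_month

# Six months of peak-hour utilization on a hypothetical uplink.
peaks = [0.52, 0.55, 0.59, 0.62, 0.66, 0.70]
print(f"~{months_until_threshold(peaks):.1f} months to 80% utilization")
```

A crude linear trend like this is no substitute for real capacity planning, but even this level of tracking beats discovering a chokepoint from customer complaints.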


FCC’s Recommendations to Avoid Network Outages

The FCC’s Public Safety and Homeland Security Bureau just released a list of recommended network practices. These recommendations are not a comprehensive list of good network practices, but rather were compiled by analyzing the actual network outages reported to the FCC over the last five years. Telcos are required to notify the FCC of significant network outages, and every item on this list represents multiple actual outages. It’s easy to look at some of the items on the list and think they are common sense, but obviously there are regulated telcos that had outages due to ignoring each of these network practices.

Following are some of the more interesting recommendations on the list:

Network Operators, Service Providers and Property Managers together with the Power Company and other tenants in the location, should verify that aerial power lines are not in conflict with hazards that could produce a loss of service during high winds or icy conditions. This speaks to having a regular inspection and tree trimming process to minimize damage from bad storms.

Network Operators and Property Managers should consider pre-arranging contact information and access to restoral information with local power companies. This seems like common sense, but I’ve been involved in outages where the technicians did not know how to immediately contact other utilities.

Network Operators, Service Providers and Public Safety should establish a routing plan so that in the case of lost connectivity or disaster impact affecting a Public Safety Answering Point (PSAP), 9-1-1 calls are routed to an alternate PSAP answering point. A lot of the recommendations on the FCC’s list involve 9-1-1 and involve having contingency plans in place to keep 9-1-1 working in the case of network failures.

Network Operators, Public Safety, and Property Managers should consider conducting physical site audits after a major event (e.g., weather, earthquake, auto wreck) to ensure the physical integrity and orientation of hardware has not been compromised. It’s easy to assume that sites that look undamaged after big storms are okay. But damage often doesn’t manifest as outages until days, weeks or months later.

Network Operators and Service Providers should verify both local and remote alarms and remote network element maintenance access on all new critical equipment installed in the network, before it is placed into service. I’ve seen outages where equipment was installed but the alarms were not tested. You don’t want to find out that an alarm isn’t working when it’s needed.

Network Operators, Service Providers, Public Safety and Property Managers should engage in preventative maintenance programs for network site support systems including emergency power generators, UPS, DC plant (including batteries), HVAC units, and fire suppression systems. This might easily be the biggest cause of network outages. ISPs get busy and don’t test all of the components critical to maintaining systems. A lot of outages I’ve been involved with were due to failures of minor components like fans or air conditioning compressors.

Network Operators, Service Providers, Public Safety, and Equipment Suppliers should consider the development of a vital records program to protect vital records that may be critical to restoration efforts. Today there is often software, databases and other vital records that must be restored first in order to get equipment up and functioning. Electronic records of this type need to be kept in a secure system that is separate, doesn’t rely on the network to be functioning, and can still be accessed easily when needed.

Network Operators, Service Providers, Public Safety and Property Managers should take appropriate precautions to ensure that fuel supplies and alternate sources of power are available for critical installations in the event of major disruptions in a geographic area (e.g., hurricane, earthquake, pipeline disruption). Consider contingency contracts in advance with clear terms and conditions (e.g., delivery time commitments). This is a lesson learned after the recent hurricanes, when local gasoline supplies dried up and several utilities without their own private fuel supply were stranded along with the rest of the public.

This FCC list is a great reminder that it’s always a good idea to periodically assess your disaster and outage readiness. You don’t want to discover gaps in your processes during the middle of an outage.

Title II Regulation and Investment

As the FCC continues its effort to reverse Title II regulation, I’ve seen the carriers renewing their argument that Title II regulation has reduced their willingness to invest in infrastructure. However, their numbers and other actions tell a different story.

The FCC put broadband under Title II regulation in February of 2015 and revised the net neutrality rules a few months later in April. So we’ve now had nearly three years to see the impact on the industry – and that impact is not what the carriers are saying it is.

First, we can look at annual infrastructure spending for the big ISPs. Comcast spent $7.6 billion upgrading its cable plant in 2016, its highest expenditure ever. Charter spent 15% more in 2016 than it and the cable companies it purchased had spent the year before. Even Verizon’s spending was up 3% in 2016 over 2015, even though the company had spun off large fiber properties in Florida, Texas, California and other states. AT&T spent virtually the same amount of capital in 2015 and 2016 as it had in 2013 and 2014.

I’ve seen a number of articles that focus on the overall drop in investment from the cellular industry in 2015. But that drop is nearly 100% attributable to Sprint, which pulled back on new capital spending due to a lack of cash. All of the big cellular companies are now crowing about how much they are going to spend in the next few years to roll out 5G.

It’s important to remember that what the big ISPs tell their investors is often quite different than what they say when lobbying. As publicly traded companies the ISPs are required by law to provide accurate financial data including a requirement to warn stockholders about known risk factors that might impact stock prices. I’m one of those guys that actually reads financial statements and I’ve not seen a single warning about the impact of Title II regulation in the financial reporting or investor press releases of any of the big ISPs.

But the lobbying side of these businesses is a different story. The big ISPs started complaining about the risks of Title II regulations as far back as 2013 when it was first suggested. The big companies and their trade associations have written blogs warning about Title II regulation and predicted that it would stifle innovation and force them to invest less. And they’ve paid to have ‘scholarly’ articles written that come to the same conclusion. But these lobbying efforts are aimed mostly at the FCC and at legislators, not at stockholders.

The fact that big corporations can get away with having different public stories has always amazed me. One would think that something published on the AT&T or Comcast blog would be under the same rules as documents formally given to investors – but it’s obviously not. AT&T in particular tells multiple stories because the company wears so many different hats. In the last year the company has taken one position as an owner of poles that is diametrically opposed to the position it takes as a cellular company that wants to get onto somebody else’s poles. Working in policy for the big ISPs has to be a somewhat schizophrenic situation.

It seems almost certain that this FCC is going to reverse Title II regulation. The latest rumor floating around is that it will be on their agenda on the day before Thanksgiving. That may lead you to ask why the ISPs are still bothering to crank out lobbying arguments against Title II if they have already won. I think they are still working hard to get a legislative solution through Congress to kill Title II regulation and net neutrality, even if the FCC kills it for now. They well understand that a future FCC under a different administration could easily reinstate Title II regulation – particularly now that it has passed muster through several court challenges. The ISPs understand that it would be a lot harder to get a future Congress to reverse course than it would be for the FCC to do so once Democrats are back in charge.

Until recently I always wondered why the ISPs are fighting so hard against Title II regulation. All of the big companies like Comcast, AT&T and Verizon have told stockholders that their initial concerns about Title II regulation did not materialize. And it’s obvious that Title II hasn’t changed the way they invest in their own companies.

But recently I saw an article and wrote a blog about an analyst who thinks that the ISPs are going to drastically increase broadband prices once Title II regulation is gone. Title II is the only tool the government can use to investigate and possibly act against the ISPs over rate increases and other practices like data caps. If true, and his arguments for this are good ones, then there is a huge motivation for the big ISPs to shed the only existing regulation of broadband.

Smart Cities and Fiber

I’ve noticed that a lot more cities are talking about becoming ‘smart cities.’ Only a few years ago this was something that only NFL cities talked about, but now I see it as a goal for much smaller cities. ‘Smart city’ is an interesting concept. If you listen to the various vendors pushing the idea this means investing in massive amounts of sensors and the computing power to make sense of them. But there are also a lot of lower-tech ideas that fit under this same umbrella.

I’ve had discussions with folks at cities who think they need fiber in order to have a smart city. Nobody is a bigger proponent of fiber than I am, but fiber is not necessarily needed for many of the concepts that are part of this high-tech vision.

Having smarter traffic flow is generally at the top of everybody’s list. It’s common sense that having vehicles needlessly waiting for lights wastes fuel and wastes time. Smarter traffic lights in cities would improve the quality of life and the economy. A decade ago a lot of cities built fiber networks just to provide a real-time connection to each traffic signal. Those fiber networks allowed the city to change signal timing in reaction to emergencies and similar events, but the whole effort is largely still manual.

But with AI starting to become a realistic technology, it looks like truly smart traffic lights are a possibility in the near future. A smart traffic system could change lights on the fly in response to real-life traffic to reduce the average time that vehicles wait for a green light. But the question that must be asked is whether this really requires fiber. A decade ago it did. Fiber was needed just to support the traffic cameras that allowed somebody at traffic headquarters to eyeball the situation at a given intersection.

But we are now seeing a revolution in sensing devices. We are not too many years removed from the big push to do all heavy computing in the cloud. A decade ago the vision was that a smart traffic light system would rely on cloud computing power. But faster computers have now reversed that trend, and today it makes more sense to put smart computers at the edge of the network. In the case of traffic lights, smart computers at the edge reduce the need for bandwidth. Sensors at an intersection no longer need to broadcast non-stop and only need to relay information back to the central core when there is some reason to do so.

For example, one of the uses of a smart traffic system is to identify problem intersections. Sensors can be programmed to record every instance when somebody runs a red light or even a late yellow light and this can alert authorities to problems long before a tragic accident. But these sensors only need to send data when there is an actionable event, and even that doesn’t require a gigantic burst of data.
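The event-filtering logic described above can be sketched in a few lines. The event names, the yellow-light timing threshold, and the report format here are all invented for illustration:

```python
# Edge-device sketch: watch every vehicle, but only produce a report when
# something actionable happens. Everything else stays at the intersection.
def detect_violation(light_state, seconds_into_phase, vehicle_entered):
    """Return a small event dict for a red-light or late-yellow run, else None.
    A None result means nothing is transmitted upstream."""
    if not vehicle_entered:
        return None
    if light_state == "red":
        return {"event": "red_light_run", "t": seconds_into_phase}
    if light_state == "yellow" and seconds_into_phase > 3.0:
        return {"event": "late_yellow_run", "t": seconds_into_phase}
    return None  # normal traffic: nothing is sent upstream
```

The bandwidth saving comes from the None path: the overwhelming majority of vehicles generate no upstream traffic at all, so a low-bandwidth wireless backhaul is enough.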

The same goes for smart traffic control. The brains in the device at an intersection can decide to allow for a longer green for a turn lane if there are more cars than normal waiting to turn. That doesn’t need a big continuous bandwidth connection. The city will want to gather data from intersections to know what the devices are doing, but with smart edge devices a wireless connection provides adequate broadband and a lower cost solution for data gathering.
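A queue-based green extension of the sort described above can be sketched simply. The base timing, per-car increment, and cap below are invented numbers for illustration, not real signal engineering:

```python
# Sketch of adaptive green timing at the edge: lengthen the turn-lane green
# when more cars than normal are waiting, within a hard cap.
def green_time(queue_length, base=15, per_car=2, max_green=45):
    """Seconds of green for a turn lane, based on the sensed queue length."""
    return min(base + per_car * queue_length, max_green)

print(green_time(3))    # light queue → 21 seconds
print(green_time(20))   # heavy queue hits the 45-second cap
```

The decision is made entirely by the device at the intersection; the city only needs to pull back occasional summaries of what the device did, which is exactly why a wireless connection suffices.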

This same trend is happening with other kinds of sensors. Sensors that listen for gunshots, smart grid sensors used to monitor water and electric networks, and smart sensors used to provide smarter lighting all can be done wirelessly and do not need a fiber connection.

The real purpose behind the concept of a smart city is to provide better government service to constituents. Many of the best ideas out there don’t involve much bandwidth at all. For example, I recently watched a demo of a system in a mid-western city that allows citizens to see, in real time, the location on a map of all of the snow plows and trash trucks operating in the city – much like seeing a Lyft ride coming to pick you up. This will drastically cut down on calls during snowstorms since citizens can see a plow making its way toward their street. (And watching the plow inch towards you on a snowy day is good entertainment!)

Cities are undertaking all sorts of other initiatives to improve quality of life. I see cities working on computer systems that put all government forms and processes online, making it easier to get a permit or to report a problem to the city. Cities are reducing pollution by passing ordinances that promote roof-top gardens, require new high-rises to be energy self-sufficient, and promote safe bicycling.

There are still big corporations out pitching the expensive smart city vision. But there are now smaller boutique smart city vendors working toward more affordable sensors to spread around a city.

Like anyone who lives in a city I would love to see my city implement smart city ideas that improve the quality of life. But as much as I am a fiber-proponent, I am finding it hard to make a case that a lot of urban fiber is needed to implement the best smart-city ideas.

The Battle Over Small Cell Deployment

Governor Jerry Brown of California recently vetoed a bill, SB 649, that would have given wireless carriers cheap and easy access to poles. He said the bill was too much in the favor of the wireless companies and that a more balanced solution is needed.

This bill highlights the legislative efforts of the cellular industry and the big telcos working to deploy 5G networks, who want cheap and fast access to poles. There were similar pushes in many state legislatures this past year, including in Texas, Florida and Washington, and I think we can expect this to appear in many more states next year. This is obviously a big priority for the carriers, who reportedly spent tens of millions of dollars lobbying for this in the recent legislative sessions.

It’s not hard to understand why the carriers want a legislative solution, because the alternative is the regulatory path. This is a complicated issue, and the carriers know that if they try to get this through state regulatory commissions it will take a long time, and that regulators are likely to produce the kind of balanced solution the carriers don’t want.

There is one regulatory push on the issue, at the FCC, which voted in May to begin an investigation of the issues involved. One of the things being examined is the regulatory impediments at the state and local levels. But the carriers know that the FCC path is a slow one. First, any FCC decision is likely to be challenged in court, a tactic that the carriers themselves often use to slow down the regulatory process. But there is also a big jurisdictional question, because today the states have the authority to override FCC rules concerning pole issues.

The issue is important because it’s at the heart of the hottest area of telecom growth: the deployment of small-cell sites and the upcoming deployment of the various kinds of 5G. Not only do the carriers need to deploy millions of such connections to implement the networks they are promising stockholders, but they will also have to build a lot of new fiber to support the new wireless deployments.

It’s easy to sympathize with the carriers. I’ve heard the horror stories of it taking two years to get a wireless attachment approved in some cities, which is an obvious impediment to any sensible business plan. But as is typical with these carriers, rather than asking for sensible rule changes that everybody can agree on, they are promoting plans that are heavily lopsided in their favor. They want to deploy wireless devices using a method they call one-touch – which they interpret to mean installing devices on poles and telling the pole owner after it’s done. They also want these connections dirt cheap. And they don’t want to have to be concerned with the safety issues involved in adding boxes and live electric connections into the mix of wires on existing poles.

The issue is interesting from the perspective of small CLECs and fiber overbuilders because small carriers have been yelling for years about the problems associated with getting access to poles – and nobody has been listening. In fact, one of the big proponents of the legislative process is AT&T, which is still fighting Google and others about getting access to AT&T poles. It’s not surprising to see that the proposed new laws favor wireless deployments without necessarily making it any easier for fiber overbuilders.

Since the carriers are throwing a lot of money at this it certainly seems likely that they will win this issue in some states. There are a number of states where the lobbying money of the big carriers has always gotten the carriers what they wanted. But there are plenty of states where this won’t pass, and so we are likely going to end up with a hodgepodge of rules, state by state, on the issue.

I’m not even sure where I stand on the issue. As a consumer I want to see advanced wireless technologies deployed. But as a homeowner I don’t necessarily want to see an ugly proliferation of big boxes on poles everywhere. And I certainly don’t want to see 120-foot poles deployed in my neighborhood and the trees decimated to accommodate line-of-sight wireless connections to homes. And as somebody who mostly works for smaller carriers, I’m naturally biased against anything that benefits the big carriers over everybody else. I don’t know if there is a better indication of how complicated this is than the fact that somebody with my knowledge has mixed feelings about the issue.

Cellular WiFi Handoffs

If you use anybody except Verizon you may have noticed that your cellphone has become adept at handing your cellular connection off to a local WiFi network. Like most people I keep my smartphone connected to WiFi when I’m at home to avoid exhausting my cellular data cap. I have AT&T cellular service, and I’ve noticed over the last year that when I’m out of the house my phone often logs onto other WiFi networks. I can understand AT&T sending me to its own hotspots, but often I’m logged onto networks I can’t identify.

When I lived in Florida I was a Comcast customer, and so when I was out of the house my phone logged onto Comcast hotspots. Even today my phone still does this, even though I’m no longer a Comcast customer; I assume there is a cookie on the phone that identifies me as one. I understand these logins, because after the first time I logged onto a Comcast hotspot my phone assumed that any other Comcast hotspot was an acceptable network. This is something I voluntarily signed up for.

But today I find my phone automatically logged onto a number of hotspots in airports and hotels which I definitely have not authorized. I contrast this with using my laptop in an airport or hotel. With the laptop I always have to go through some sort of greeting screen, and even if it’s a free connection I usually have to sign on to some terms of service. But my phone just automatically grabs WiFi in many airports, even those I haven’t visited in many years. I have to assume that AT&T has some sort of arrangement with these WiFi networks.

I usually notice that I’m on WiFi when my phone gets so sluggish it barely works. WiFi is still notoriously slow in crowded public places. Once I realize I’m on a WiFi network I didn’t authorize I turn the WiFi off on my phone and revert to cellular data. Every security article I’ve ever read says to be cautious when using public WiFi, so I’d prefer not to use these connections unless I have no other option.

There was a major effort made a few years back to create a seamless WiFi network for just this purpose. The WiFi Alliance created a protocol called Hotspot 2.0 that is being marketed under the name Passpoint. The purpose of this effort was to allow cellular users to automatically connect and roam between a wide variety of hotspots without ever having to log in. The ultimate goal was to enable WiFi calling that could hand off between hotspots in the same way that cellphones hand off between cell sites.

It’s obvious that AT&T and other cellular carriers have implemented at least some aspects of Hotspot 2.0. In the original vision of Hotspot 2.0 customers were to be given the option of authorizing their participation in the Passpoint network. But AT&T has never asked my permission to log me onto WiFi hotspots (unless it was buried in my terms of service). AT&T has clearly decided that they want to use these WiFi handoffs in a busy environment like an airport to protect their cellular networks from being swamped.

It’s interesting that Verizon is not doing this. I think one reason for this is that they don’t want to give up control of their customers. Verizon foresees a huge future revenue stream from mining customer data and I’m guessing they don’t want their customer to be shuttled to a WiFi network controlled by somebody else, where they can’t track customer behavior. Verizon is instead pushing forward with the implementation of LTE-U where they can direct some data traffic into the WiFi bands, but all under their own control. While LTE-U uses WiFi frequency, it is not a hotspot technology and is as hard to intercept or hack as any other cellular traffic.

Most new cellphones now come with the Passpoint technology baked into the chipset. I think we can expect that more and more of our cellular data connections will be shuttled to hotspots without notifying us. Most people are not going to be bothered by this because it will reduce usage on their cellular data plans. I’m just not nuts about being handed off to networks without some sort of notification so that I can change my settings if I don’t want to use the selected network. I guess this is just another example of how cellular companies do what they want and don’t generally ask for customer permission.

Death of the Smartphone?

The smartphone has possibly been the most transformative technology of the past hundred years. It’s unleashed the power of the computer in a portable always-with-us way that has changed the way that most of us interface with the world. But as unlikely as it might seem, it also might be one of the shortest-lived major technologies in history.

When looking forward it seems inevitable that smartphones will largely be replaced by voicebot technology. Voicebots are already intertwining into our lives in major ways. Apple’s Siri, Amazon’s Echo and Google Assistant are already replacing a lot of other technologies.

Voicebots have already entered my life in several key ways. As a music lover I’ve gone through every technology upgrade since vinyl. I had a huge CD collection and burned tons of custom CDs of my favorite songs. I used an iPod heavily for a few years. I downloaded music and built custom playlists of my music. And I used streaming radio services. But this has all now been replaced by my Amazon Echo. It’s integrated into Amazon music, Sirius XM Radio, and Pandora, and I can just ask aloud to hear the music I want.

I also now use voicebots for simple web searches and I no longer have to use my phone or PC to find out when a local store or restaurant is open. I use my Echo to take notes to remember later, something that is important to me since I wake with ideas at 2:00 in the morning!  In the past I would scramble for something to write on, which inevitably woke me up – but no longer.

Voicebots are also supplanting a lot of apps I used to use. It’s a lot easier to just ask about the weather rather than look it up. I can ask for sports scores before my feet hit the floor out of bed. Voicebots are starting to displace other smartphone functions. I can now make and receive texts by voice – this isn’t quite fully integrated into Echo, but I expect it soon will be. Voicebots integrated into the car give us driving directions and can lead us to the nearest gas station, all directed by voice.

Voicebots are growing steadily better at voice recognition. I’ve had the Amazon Echo for about 18 months and it gets a little better month by month. Voicebots are also getting better at responding to requests. All of the major voicebots are using primitive artificial intelligence to learn from their mistakes and to get better at responding to user requests. Questions that puzzled my Echo months ago are now sailing through.

Some voicebot functions are still nearly unusable. I have Microsoft’s Cortana on my PC and it’s not really helpful in the way I would like to use it. Ideally it could replace most of my keyboard functions. But it’s not hard to forecast that within a few years voice commands will finally make it easier to use a PC.

If voicebots are going to grow to the next level it’s going to take improvements in AI. But everything is pointing in that direction. Just a few weeks ago a new AI from Google learned the game of Go from scratch in just three days with nothing more than being given the rules of the game. The new AI won 100 games straight against the older Google AI that beat the best human player earlier this year.

As AI gets better the voicebots are going to get better. There will come a time soon where it’s easier to use a voicebot for most of the apps on a smartphone, and that’s when voicebots will start to eat away at smartphone penetration rates.

I for one would love to ditch my smartphone. Even after all of these years I’ve never been comfortable having to remember to carry it and I walk away and leave it all of the time. And somehow we’ve gotten roped into spending $600 or more every two years for a new device. I would be much happier wearing tiny earbuds that let me talk to a voicebot that has been able to learn my habits.

Most of the developers in the AI world think that voicebots will enable real digital assistants that step in many times a day to make our lives easier. This trend seems inevitable, and one has to wonder how the emergence of voicebots will affect the huge push for 5G wireless. Most of the things that voicebots do for us are low bandwidth and could easily be done using a fully implemented LTE network. It’s hard to picture where this all might lead, but one thing seems certain – the death of the smartphone will probably be just as disruptive as its birth.

Why the Big Programming Cost Increases?


I recently talked to several clients who are expecting an increase in cable TV programming costs of between 8.5% and 9% for next year. They are able to forecast this because most of the contracts for programming cover at least three years of baked-in rate increases.
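Those baked-in increases compound year over year, which is why a seemingly modest annual percentage adds up quickly over a contract term. Here’s a minimal sketch of that compounding; the starting per-subscriber cost is a hypothetical illustration, not a figure from any actual contract:

```python
# Minimal sketch of how baked-in annual programming-cost increases
# compound over a contract term. The $50/month starting cost is a
# hypothetical illustration, not a figure from any real contract.

def project_cost(monthly_cost, annual_increase, years):
    """Return the per-subscriber monthly cost at the end of each year."""
    costs = []
    cost = monthly_cost
    for _ in range(years):
        cost *= 1 + annual_increase
        costs.append(round(cost, 2))
    return costs

# Three years of 8.5% increases turns $50 into roughly $63.86:
print(project_cost(50.00, 0.085, 3))
```

At 8.5% per year, three years of increases raise the cost by nearly 28% in total, which helps explain why operators feel squeezed even when any single year’s increase sounds survivable.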

Every one of these clients is bleeding cable customers. We hear a lot about how the big cable companies are being hurt by cord cutting. Last year the big companies collectively lost about 1.7 million customers, a little less than 2% of their customer base, and I’ve seen recent estimates that they are expected to lose around 1.9 million customers this year. Cord cutting is obviously a real phenomenon. But while the big companies are losing customers at a steady pace, my smaller clients seem to be losing them at a much faster rate.

I think there are a number of reasons that small cable providers are suffering more.

  • Most of my small clients don’t play the same billing games as the big cable companies. The big companies have created a number of ‘fees’ such as a local programming fee or a sports fee to disguise the real cost of cable. Many customers think these fees are taxes of some sort and they believe that the base price of cable shown on their bill is the actual price they are paying. That lower number is the one that they use when comparing to other alternatives.
  • The big companies are also far more aggressive with their bundling. They work hard to force customers into bundles and they penalize customers for leaving a bundle. Customers often don’t know what they pay for any specific product in a bundle, and when they try to drop one product the full bundle discount is applied against that product, so dropping it saves far less than they expect. Even when small companies have bundles they don’t create a huge financial disincentive to leave the bundle.
  • Big companies are willing to give ‘special’ pricing to keep customers. They extend the pricing discounts aimed at new customers to anybody willing to wade through the customer service minefield and ask for them. Since smaller companies often don’t advertise ‘special’ prices, I think they are far less likely to even be asked to reduce rates.
  • My smaller clients are generally more rural than the big companies, and as such they face far stiffer competition from the satellite companies. Both of the satellite providers now have a ‘skinny’ bundle that a lot of customers are finding attractive.

Why are the programmers raising rates so aggressively when it’s clear that the price of cable service is the number one driver of cord cutting? I have several ideas why they might be doing this:

  • These are all publicly traded companies and to some degree they don’t have a choice. Over 90% of cable channels are bleeding customers much faster than the rate of cord cutting. This shows that many customers are cord shaving and downgrading to smaller, less expensive packages. The programmers are compelled to increase profits, and with declining sales they can only compensate by raising programming rates. That sounds insane because it sounds like the beginning of a classic death spiral. But you must remember that any large publicly traded company that performs poorly is subject to being purchased by somebody else who will then force profits back up again. Our dreadful quarterly-profit-driven economy is forcing the programmers down a path that is not in anybody’s best interest.
  • They are all chasing hit shows. There are now a lot more companies like Netflix and Amazon creating unique programming, which adds to the pressure on the programmers. The financial rewards from producing even one hit show are gigantic, so they all keep spending money trying to find that next big hit, and raising rates to cover the cost of producing content.
  • Another theory is that the current rate increases are their last hurrah. They can see where the industry is headed. I saw an interview with the head of programming for FOX and he said that he expects that the company is going to have to ultimately collapse most of its many channels as they keep losing customers. And so perhaps these rate increases are the chance for making big profits for a few more years before the wheels come off. It seems that end is coming anyway, so maybe raising rates now is a way to milk every last penny out of a fading industry.

Programming content is certainly never going to go away. But companies like Netflix and Amazon are showing that there are reasonable alternatives to the huge TV bundles. I just wish I knew what to tell my clients. The most common question I seem to be getting these days is, “Should I even be in the cable business any longer?” I’m starting to think that the answer for many of these businesses is no – or it will be no within a few short years.

When a Consultant Says ‘No’

Doug Dawson, 2017

One of my competitors recently held a webinar where they told a group of municipalities that they should never accept ‘no’ from a consultant who is evaluating fiber business plans. This is about the worst advice I think I have ever heard for many reasons. I think perhaps this consultant meant that one shouldn’t be afraid to be creative and to look at alternative ideas if your first ideas don’t pan out. But that’s not what they said.

Building and operating a fiber network is like any other new business venture and sometimes a new business venture is just not a good idea. This is why anybody launching a new business of any type does their homework and kicks the tires on their ideas to quantify the opportunity. A feasibility study means going through the process of gathering as many facts as possible in order to make an informed decision about a new opportunity.

The advice in this webinar was given to municipalities. Somebody giving this same advice to for-profit ISPs would be laughed out of the room. Established commercial ISPs all understand that they have natural limitations. They are limited in the amount of money they can borrow. They understand that there are natural limits on how far they can stretch existing staff without harming their business. They understand that if they expand into a new market and fail that they might jeopardize their existing company. My experience in building business plans for existing ISPs is that they are as skeptical of a good answer as a bad one and they dig and dig until they understand the nuances of a business plan before ever giving it any real consideration.

But municipalities build fiber networks for different reasons than for-profit ISPs. Existing ISPs want to make money. They also undertake expansion to gain economy of scale, because in the ISP world being larger generally means better margins. But cities have a whole other list of motivations for building fiber. They might want to solve the digital divide. They might want to lower prices in their market and foster competition. They might want to promote economic development by opening their communities to the opportunities created by good broadband.

These are all great goals, but I have rarely talked with a municipality that also doesn’t want a broadband business to at least break even. I say rarely, because there are small communities with zero broadband that are willing to spend tax dollars to subsidize getting broadband. But most communities only want a fiber business if the revenues from the venture will cover the cost of operations.

Sometimes a strong ‘no’ is the best and only answer to give to a client. Clients often come to me determined to make one specific business plan idea work. For example, many communities don’t just want a fiber network, but they want a fiber network operating under a specific business model like open access. That’s a business model where multiple ISPs use the network to compete for customers. Open access is an extremely hard business plan to make work. I’ve often had to show municipalities that this specific idea won’t work for them.

Or a commercial ISP might want to enter a new market and want to make it work without having to hire new employees. My advice to them might be that such an expectation is unrealistic and that over time they will have to hire the extra people.

My advice to clients is that they should be just as leery of a ‘yes’ answer as a ‘no’ answer. For example, every one of the big open access networks has an original business plan on the shelf that shows that they were going to make a lot of money – and those business plans were obviously flawed. If they had challenged some of the flawed assumptions in those business plans they probably would not have entered the business in the way they did. It’s a shame their original consultant didn’t say ‘no’.

I’ve always said that ‘dollars speak’ and any new business has to make financial sense before you can think about meeting other goals. Every business plan contains hundreds of assumptions and it’s always possible to ‘cook’ the assumptions to find a scenario that looks positive. I have created business plans many times for commercial and municipal clients where an honest look at the numbers just doesn’t add up. I’ve had a few clients ask me to create a more rosy forecast and I’ve always refused to do this.

I personally would be leery of a consultant that doesn’t think that ‘no’ can be the right answer for doing something as expensive as launching a fiber venture. Sometimes ‘no’ is the right answer, and if somebody tells you ‘no’ you ought to listen hard to them. It makes sense to kick the tires on all of the assumptions when you hear ‘no’ and to get a second opinion, if needed. But it’s important to kick the tires just as hard when you get ‘yes’ for an answer.