FCC Loosens Regulation of Special Access

The FCC voted last week to eliminate any price controls on special access in about 90% of the markets in the US. The vote to implement this was 2-1 with Mignon Clyburn, a Democrat, issuing a detailed dissent that is worth the read. I can’t lay out the arguments against this deregulation in a short blog as well as she has done.

The order largely refers to what has historically been called special access as BDS – Business Data Services. This refers to broadband connections sold by incumbent telephone companies using TDM (time division multiplexing) technology – technology based upon T1s (1.544 Mbps) or multiples of T1s. Techies I know who live in urban areas are surprised to find out how much of this TDM technology is still left in the world.

But a lot of businesses in the country still rely on this technology. There are still a surprising number of businesses that are not connected to fiber. And away from urban areas there are still a lot of business districts that have never been connected to a cable company network. This means there are a lot of businesses that have special access as their only real broadband option. Further, the Telecommunications Act of 1996 effectively forces competitors of the big telcos to use special access for interconnection unless they own fiber directly into the telcos’ large tandem hubs.

The deregulation rules largely look at competition by county and are as follows:

  • Price caps will be eliminated in a county if 50% of the potential customers are within a half mile of a location served by a competitive provider.
  • A county is considered competitive if 75% of its Census blocks have a cable provider.

That’s one of the more bizarre definitions of competition I can recall. Consider the typical county seat in most rural counties in the US. That is typically the town where most of the businesses in the county are located. If a handful of the businesses in the town get broadband from the cable company then the whole county is likely to be considered competitive, removing any price caps on BDS services.

But I know from working in rural America that many of these county seat towns are anything but competitive. Often the cable systems in these towns are older and outdated and don’t offer fast cable modem service. And it’s not unusual for the cable networks to have been built years ago to serve only residential neighborhoods, before cable systems were capable of delivering data – and since then the cable companies likely have not invested in expanding the network to business parks.

The prices charged by the telcos in these situations are already gigantic. I was working with a rural county in Minnesota last year where a company that makes upscale kitchen cabinets was paying well over $150,000 per year for a less-than-adequate broadband connection. Had that town had real competition, it’s likely the company could have bought the speeds it needs for a tiny fraction of that price. This business is considering relocating just to avoid the draconian broadband costs – and that would pull good-paying jobs out of a rural county.

The half-mile rule is also an odd measure of competitiveness. I have heard from hundreds of businesses over the years that have been quoted astronomical prices by the incumbent telco for a fiber connection. I’ve seen quotes approaching $100,000 for a mile of fiber construction. A business that is a half mile, or even a quarter mile, from fiber might as well be 100 miles away.

The opponents of this FCC order all believe that AT&T, Verizon and CenturyLink will use this as an excuse to raise rates on special access. And that is the heart of this order – there is no way that such rate increases can be justified. The incumbent telco networks are old and their costs have been recovered many times over. But as the big telcos have lost residential DSL customers to competition they have leaned more heavily on businesses buying special access. This is still a gigantic money-maker for the telcos.

I think Commissioner Clyburn summarized this order well. She said that she was not surprised by the order because this is industry consolidation month at the FCC. By that she means that recent FCC actions have all been aimed at helping the biggest companies in the industry rather than consumers. In this case it’s the schools, libraries, small businesses and governments in rural America that will pay the price so that the large incumbent telcos can maintain their high profits.

Keeping Up with Programming Costs

I saw a presentation recently that compared skinny bundles with traditional cable TV. One of the things mentioned in the presentation was how much the cost of programming and average cable rates have increased over time. I was asked recently if a cable provider should always pass increases in programming costs through as rate increases. I know my clients have different views on the issue.

First a few numbers. The presenter said that programming costs have grown on average from $26.65 per customer per month in 2010 to $43.20 in 2016. That’s a $16.55 increase and a compounded growth rate of about 8.4% per year, and that comports with what I’ve seen at my clients. But the overall numbers seem low and I’m guessing these numbers represent just the typical expanded basic package. Cable companies in general have three tiers – basic, expanded basic and premium. A lot of my clients today have programming costs that are well over $50 rather than the $43 cited.

This same presentation also showed that the average cable revenue per customer climbed from $65.90 in 2009 to $83.60 in 2016. That’s an annual rate increase of about 3.5%, which also works out to roughly a $16 increase in revenue from 2010 to 2016. I know most of my clients have had larger rate increases than this. I’m guessing the cited figures don’t reflect that the larger cable companies have significantly increased other rates, such as settop box fees, during this same time period. But generally the numbers cited show an industry that on average has raised rates to match the increases in programming costs. And if rates are only increased to match programming, they don’t cover increases in the other costs of operating a cable business, such as keeping a headend up to date, or the general inflation of operating a company.
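
For anybody who wants to check the compounding behind those averages, here’s a quick back-of-the-envelope in Python. The dollar figures come from the presentation; the code itself is just an illustration:

```python
# Compound annual growth implied by the presenter's averages.
def cagr(start: float, end: float, years: int) -> float:
    """Compound annual growth rate over the given number of years."""
    return (end / start) ** (1 / years) - 1

print(f"programming: {cagr(26.65, 43.20, 6):.1%} per year (2010-2016)")  # ~8.4%
print(f"revenue:     {cagr(65.90, 83.60, 7):.1%} per year (2009-2016)")  # ~3.5%
```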

This is an issue that my smaller clients wrestle with every year. Just two years ago I had a number of clients that saw an overall programming cost increase of more than 15% in a single year. A lot of them have seen costs go up even faster than the roughly 8.4% average shown above. Programming costs are driving cable rate increases far in excess of inflation, while average household wages over this same time frame have stagnated and grown only a tiny amount.

Small cable operators now face the dilemma that if they pass on a large programming cost increase they know they will lose customers. A lot of my clients operate robust broadband networks, making it a lot easier for households to elect to cut the cord. If they raise rates they are guaranteed to lose customers, and if they don’t raise rates they directly eat into operating margins.

A company can get into real trouble by not raising rates. I had one client that made only small rate increases over a number of years and even skipped a few years without any rate increase. They compared their rates to surrounding communities and were surprised to find that their rates were nearly 40% lower than in nearby towns. I’ve seen a lot of similar situations, and there are a number of small cable providers with rates 20% to 30% lower than surrounding communities.

Municipal operators and cooperatives have a particularly hard time with this issue because decisions are not made strictly based on the numbers. Many municipal cable companies require City Council approval of rate increases – and it’s not hard to picture politicians that want to vote against rate increases. But cooperative boards can act similarly if they think there are enough profits from other parts of the company to cover the cable rate increases. This is never an easy decision and I know a number of commercial cable providers that sometimes decide to eat some of the programming cost increases.

There is no easy answer to this question these days because nobody knows the elasticity of cable demand – meaning the degree to which customers will react negatively to a rate increase. For many years demand elasticity was low and a company could raise rates with a pretty good assurance that it would lose only a few customers. There would be a spate of complaint calls when rates went up, but almost everybody paid the increases.
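
To make the elasticity idea concrete, here’s a tiny worked example with made-up numbers. It shows why, when demand is inelastic, a rate increase still nets more revenue despite the churn:

```python
# Price elasticity of demand = % change in subscribers / % change in price.
# All inputs are hypothetical, chosen only to illustrate the tradeoff.
price_old, price_new = 80.00, 84.00    # a 5% rate increase
subs_old, subs_new = 10_000, 9_800     # a 2% subscriber loss

pct_subs = (subs_new - subs_old) / subs_old
pct_price = (price_new - price_old) / price_old
print(f"elasticity: {pct_subs / pct_price:.2f}")   # -0.40 (inelastic)

# Monthly revenue before and after: the increase wins despite lost customers.
print(f"${subs_old * price_old:,.0f} vs ${subs_new * price_new:,.0f} per month")
# $800,000 vs $823,200
```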

But that’s no longer true. I think most small cable companies are afraid of the day when a rate increase drives a lot of their customers to find alternatives. There is a general wisdom in the industry that nobody makes money on cable, and on a fully-allocated cost basis that is almost always the case. But almost every small cable operator still has a positive margin on cable, and that means a company suffers a real loss every time it loses a customer. The bottom line is that it’s a crap shoot these days. We all know that the day is going to come when most customers will refuse to pay the higher cable rates. But it’s anybody’s guess when that day will come.

Comcast as a Competitor

Somebody recently asked me about Comcast as a competitor. They have been a formidable competitor for many years, but I think they are pulling ahead of other cable companies in many ways. I’m sure that over time some of the other cable companies will try to emulate them. Consider the following:

  • They’ve created Comcast Labs (similar to Bell Labs). This group of scientists and engineers concentrates largely on developing products that improve the customer experience. Nobody else in the cable industry has a research arm of this size and focus.
  • One of the first things out of Comcast Labs has been the proprietary X1 settop box, which has gotten rave reviews and is head and shoulders above any other box. It has easy-to-use menus and is voice activated. It integrates the Internet into every TV. And it includes a growing list of unique features that customers really like.
  • Comcast has also now integrated Netflix and Sling TV into their settop box to keep customers on their box and platform. I suspect that Comcast takes a little slice of revenue for this integration. And it looks like they have a goal of becoming what the industry is starting to call a superbundler. There are around 100 OTT offerings on the market today and my guess is that over time they are going to integrate more of them into their ecosystem.
  • Comcast is working on skinny bundle packages that will let people buy smaller and more focused TV packages to keep them from leaving. Comcast is highly motivated to keep customers on the system since they own a lot of programming.
  • Comcast has found great success with their smart home product. This is probably the most robust such product on the market and includes such things as security and burglar alarms, smart thermostat, watering systems, smart blinds for energy control, security cameras, smart lights, smart door locks, etc. And this can all be easily monitored from the settop box or from a smartphone app. They don’t report numbers, but I’ve seen estimates that they now have a 7% to 8% customer penetration. Those customers are totally sticky and won’t easily drop Comcast.
  • Comcast has been an industry leader in the race to unilaterally increase customer data speeds. They moved my 50 Mbps product to 75 Mbps with plans to raise it again to 100 Mbps after the DOCSIS 3.1 upgrade. I think they have figured out that faster speeds mean a lot fewer customer complaints.
  • They will soon be offering cellphone service and will integrate it into the bundle. They just announced tentative pricing that looks to be lower than Verizon and AT&T in two-thirds of the markets in the US. Analysts say that over five years they could capture as much as 30% of the cellphone business in their markets. We’ll have to wait and see if that happens – because the cellular companies have better customer service than Comcast. But there is no doubt that they will get a lot of customers, and that those customers will also be sticky. They also just bought a pile of spectrum that will help them carry some traffic directly and improve their margins.
  • One big advantage Comcast has over wireless competitors is that they own a lot of programming content. The industry expects them to use zero-rating, meaning that they will give their cellular customers access to all of their programming without having it count against cellular data caps.
  • As the biggest ISP Comcast probably has the most to gain from the reversal of customer privacy rules and net neutrality. Comcast already does well selling advertising but could become one of the major players online using customer data to target marketing.
  • Comcast is putting a lot of money into making their customer service better. They are quickly moving away from making everybody call their customer service centers. They now have a decent text-based customer service process, and they allow people to ask and resolve questions by chat from their website. Each of these improvements satisfies a niche of their customers and relieves the long wait times for a customer service rep.

They are also moving a lot of customer service back to the US, finally understanding that the cost savings from using foreign reps are not worth the customer dissatisfaction. But what they (and all of the other big companies) are banking on is the general belief that within five years there will be a decent artificial intelligence system for handling customer service. This will not be like the dreadful systems used today by airlines and banks. The expectation is that an AI will be able to handle the majority of customer service calls satisfactorily without needing a human service rep. Comcast will have these systems long before smaller competitors, giving them a big cost advantage.

I have probably written a dozen blogs over the last few years blasting Comcast for their various practices and policies. But it’s not hard to see that they are possibly the most formidable competitor in the country. When you consider all of these positives and also understand that on a local basis Comcast will match competitors’ prices – they are hard to beat. As with any large ISP, there are probably 20% of their customers that will choose somebody else out of reflex. But after that it’s a real challenge to pry customers away from them and keep them away.

The Customer WiFi Experience

Every broadband provider is familiar with customer complaints about the quality of broadband connections. A lot of these complaints are due to poorly performing WiFi, but I think that a lot of ISPs are providing broadband connections that are inadequate for customer needs. Making customers happy means solving both of these issues.

It’s the rare customer these days that still has only a wired connection to a computer, and almost the whole residential market has shifted to WiFi. As I have covered in a number of blogs, there are numerous reasons why WiFi is not the greatest distribution mechanism in many homes. I could probably write three or four pages of ways that WiFi can be a problem, but here are a few examples of WiFi issues:

  • Customers (and even some ISPs) don’t appreciate how quickly a WiFi signal loses strength with distance – see the rough path-loss sketch after this list. And the losses are dramatically increased when the signal has to pass through walls or other impediments.
  • Many homes have barriers that can completely block WiFi. For instance, older homes with plaster walls that contain metal lath can destroy a WiFi signal. Large heating ducts can kill the signal.
  • Most ISPs place the WiFi router at the most convenient place near where their wire enters the home. Most homes would benefit greatly by instead placing the router somewhere near the center of the house (or wherever makes the most sense with more complicated floor plans). Customers can make things worse by placing the WiFi router in a closet or cupboard (which happens far too often).
  • There are a lot of devices today, like cellphones, that are preset to specific WiFi channels. Too many devices trying to use the same channels can cause havoc even if there is enough overall WiFi bandwidth.
  • A WiFi network can experience the equivalent of a death spiral when multiple devices keep asking to connect at the same time. The WiFi standard causes transmission to pause when receiving new requests for connection, and with enough devices this causes frequent stops and starts of the signal, which significantly reduces effective bandwidth. Homes are starting to have a lot of WiFi-capable devices (and your neighbor’s devices just add to the problem).
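
To put a rough number on that first bullet, here is a sketch using the standard free-space path loss formula. The distances are arbitrary examples, and a real house (walls, ducts, interference) will do considerably worse than free space:

```python
import math

def fspl_db(distance_m: float, freq_mhz: float) -> float:
    """Free-space path loss in dB - the best case, with no walls at all."""
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_mhz) - 27.55

for d in (5, 10, 20, 40):
    print(f"{d:>3} m: {fspl_db(d, 2437):.0f} dB at 2.4 GHz, "
          f"{fspl_db(d, 5180):.0f} dB at 5 GHz")
# Each doubling of distance costs another 6 dB (3/4 of the remaining power),
# and every wall the signal crosses adds several more dB on top of that.
```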

A number of ISPs have begun to sell a managed WiFi product that can solve a lot of these WiFi woes. The product often begins with a wireless survey of the home to understand the delivery barriers and the best placement of a router. Sometimes just putting a WiFi router in a better place can fix problems. But there are also new tools available to ISPs to allow the placement of multiple networked WiFi routers around the home, each acting as a fresh and separate hotspot. I live in an old home built in 1923 and I bought networked hotspots from Eero, which solved all of my WiFi issues. And there is more help coming, with the next generation of home WiFi routers offering dynamic routing between the 2.4 GHz and 5 GHz WiFi bands to better make sure that devices are spread around the usable spectrum.

But managed WiFi alone will not fix all of the customer bandwidth issues. A surprising number of ISPs are not properly sizing bandwidth to meet customers’ needs. Just recently I met with a client who still has over half of their customers on connection speeds of 10 Mbps or slower, even though their network is capable of gigabit speeds. It is a rare home these days that will find 10 Mbps to always be adequate. One of my other clients uses a simple formula to determine the right amount of customer bandwidth. They allow 4 Mbps of download for every major connected device (smart TV, laptop, heavily used cellphone, gaming device, etc.). And then they add another 25% to the needed speed to account for interference among devices and for the many smaller-use WiFi devices we now have, like smart thermostats or smart appliances. Even this formula sometimes underestimates the needed bandwidth. But one thing is obvious: there are very few homes today that don’t need more than 10 Mbps under that kind of bandwidth allowance.
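
Here’s a minimal sketch of that client’s sizing rule in Python. The 4 Mbps per device and the 25% cushion come straight from the formula described above; the example household is made up:

```python
def recommended_speed_mbps(major_devices: int) -> float:
    """4 Mbps per major connected device, plus 25% headroom for device
    interference and the many small WiFi gadgets (thermostats, etc.)."""
    per_device_mbps = 4.0
    overhead = 0.25
    return major_devices * per_device_mbps * (1 + overhead)

# A hypothetical home: 2 smart TVs, 2 laptops, 3 phones, 1 gaming console.
print(recommended_speed_mbps(8))   # 40.0 Mbps - already 4x a 10 Mbps product
```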

It’s easy to fault the big cable companies for having lousy customer service – because they largely do. But one thing they seem to have figured out is that giving customers faster speeds eliminates a lot of customer complaints. The big cable companies like Comcast, Charter and Cox have unilaterally increased customer data speeds over the past few years. These companies now have base products in most markets of at least 50 Mbps, and that has greatly improved customer performance. Even customers with a lousy WiFi configuration might be happy if a 50 Mbps connection provides enough raw bandwidth to push a usable signal into the remote corners of a home.

So my advice to ISPs is to stop being stingy with speeds. An ISP that still keeps the majority of customers on slow data products is its own worst enemy. Slow speeds make it almost impossible to design an adequate WiFi network, and customers will resent an ISP that delivers poor performance. I know that many ISPs are worried that increasing speeds will mean a decrease in revenue – but I find that many of those who think this way are selling six or more speed tiers. I’ve been recommending to ISPs for years to follow the big cable companies and set the base speed high enough to satisfy the average home. A few years ago I thought that base speed was at least 25 Mbps, but I’m growing convinced that it’s now more like 50 Mbps. It seems like the big cable companies got this one thing right – while many other ISPs have not.

Looking at Generation Z

We’ve already seen a lot of analysis about the viewing habits of millennials. We know that as a group they watch less traditional linear TV than older generations. We know that over 30% of millennial households are already cord cutters and get all of their entertainment from some source other than traditional TV.

But now we are starting to get a glimpse of Generation Z, the next wave of our kids – the generation following the millennials. A new survey firm, Wildness, is concentrating on this generation to study trends for companies that want to market to this segment. The firm is a spin-off of AwesomenessTV (and in case you don’t know what that is, it’s a leading source of programming for kids on YouTube).

Wildness just did their first survey of Generation Z viewing habits. These kids are the first to have grown up in a connected world since birth. The survey looked at 3,000 kids aged 12 to 24 and found the following:

  • Nine out of ten watch YouTube daily.
  • For 31% of them their favorite programming is on YouTube.
  • 30% of them follow their favorite brands on social media and post about them.
  • When asked if they could keep only one viewing screen, only 4% said they would keep a television. Their screen of choice is a cellphone.

This does not bode well for traditional linear television. For a long time industry pundits assumed that millennials would ‘come back’ to traditional TV as they got older and started their own households. But they have not done so and now it’s largely accepted that the way you learn to view content as a kid will heavily influence you throughout your life. And Generation Z kids are not watching linear TV.

Another interesting aspect of Generation Z is that they are not just content consumers, they are also content generators. More than half of them routinely generate content of their own (short videos, pictures, etc.) and share it with their friends. And a significant amount of their viewing is of content generated by other kids. This has to scare traditional content producers a bit, as these kids are not consuming traditional media to the extent of older generations. This generation has blurred and blended their social life with their online life to a much greater degree than older generations. This is the first generation that freely admits to being connected 24/7.

And it’s not just prime time TV shows that are being ignored by this generation. They are also not following sports, traditional news or any of the other staples of programming. At a young age they are discovering that interacting with each other is far more satisfying than watching content ‘crafted’ for them by older generations. Most of the programming they follow on YouTube is generated by contemporaries (millennials or younger) rather than by traditional media companies.

Anybody that offers traditional cable TV has to look at these statistics and know that the clock is already ticking towards a day when cable TV becomes obsolete. Already today the average age of viewers of prime time shows keeps climbing as younger viewers eschew linear programming.

Last year about 1.7% of all households became new cord cutters. That may not sound like a lot, but it’s over 2.1 million households. And it seems that cord cutters rarely come back to traditional TV. A lot more older households are also favoring Netflix and other OTT content. These households still maintain cable TV subscriptions, but you have to wonder for how long.

I would not be surprised to see cord cutting accelerate rapidly within a few years. It’s getting hard to find households that are satisfied with what they are paying for cable TV. Even those who love traditional cable think it costs too much. And this could lead at some point to a rapid abandonment of traditional cable. But one thing the industry must accept is that when Generation Z grows up they are not going to be buying cable TV.

Two Visions for Self-Driving Cars

I was at a conference last week and I talked to three different people who believe that driverless cars are going to need extremely fast broadband connections. They cite industry experts who say that the average car is going to require terabytes per day of downloaded data to be functional and that only extremely fast 5G networks are going to be able to satisfy that need. These folks talk about needing high-bandwidth and very low latency wireless networks that can tell a car when to stop when encountering an obstacle. This vision sees cars as somewhat dumb appliances with a lot of the brains in the cloud. I would guess that wireless companies are hoping for this future.

But I have also been reading about experts who instead think that cars will become rolling data centers with a huge amount of computing capacity on board. Certainly vehicles will need to communicate with the outside world, but in this vision a self-driving car only needs updates on things like current location, road conditions and traffic problems ahead – not the masses of data anticipated by the first vision cited above.

For a number of reasons I think the second vision is a lot more likely.

  • Self-driving cars are almost here now, and that means any network needed to support them would have to be in place in the near future. That’s not realistically going to happen. Most projections say that a robust 5G technology is at least a decade away. There are a dozen companies investing huge sums in self-driving car technologies and they are not going to wait that long to even investigate whether controlling cars from external sources makes sense. Every company looking into self-driving technology is operating under the assumption that the brains and sensing must be in the cars – and they are the ones that will drive the development and implementation of the new car technology. It’s not practical to think that the car industry will wait for networks that are not under their control and not yet reasonably available.
  • Who’s going to make the huge investments needed to build the network necessary to support self-driving cars? The ability to deliver terabytes of data to each car would require much faster data connections than can be delivered using the normal cellular frequencies. Consider how many fast simultaneous data connections would be needed to support all of the cars on a busy multilane highway in a major city. It’s an engineering challenge that would probably require using high frequencies. And that means putting lots of cell sites close to roads – and those cell sites would have to be largely fed by fiber to keep the latency low (wireless backhaul would add significant latency). Such a nationwide network would have to cost hundreds of billions of dollars between the widespread fiber and the huge number of mini-cell sites. I can’t picture who would agree to build such a network. The total annual capital budget for all of the wireless companies combined today is only in the low tens of billions.
  • Even if somebody were to build the expensive networks, who is going to pay for them? It seems to me like every car would need an expensive monthly broadband subscription, adding significantly to the cost of owning and driving a car. Most households are not going to want a car that comes with the need for an additional $100 – $200 monthly broadband subscription. But my back-of-the-envelope math (a rough version follows this list) tells me that the fees would have to be that large to compensate for such an extensive network built mostly to support self-driving cars.
  • The requirement for huge numbers of cars to download terabytes of data per day is a daunting challenge. The vast majority of the country today doesn’t even have a landline-based broadband connection capable of doing that.
  • There are also practical reasons not to put the brains of a car in the cloud. What happens when there are power outages or cellular outages? I don’t care how well we plan – outages happen. I’d be worried about driving in a car if there was even just a temporary glitch in the network.
  • There are also issues of physics if this network requires connections to be made with millimeter wave spectrum, or even spectrum just a little lower on the frequency scale. There is a huge engineering challenge in getting such signals to track a moving vehicle reliably in real time. Higher frequencies start having Doppler shifts even at walking speeds. Compound this with the requirement to always have true line-of-sight and the issue of connecting with many cars at the same time on crowded roads. I have learned to never say that something isn’t possible, but this presents major engineering challenges that are going to take a long time to work out – maybe decades, and maybe never.
  • Finally, there are all of the issues having to do with security. I’m personally more worried about cars being hacked if they are getting most of their communications from the cloud. If cars are instead only getting location and other basic information from the outside, it would be a lot easier to wall off the communications stream from the car’s operating computer and reduce the chances of hacking. It also seems like a risk, if cars get most of their brains from the cloud, that a terrorist or mischief-maker could disrupt traffic by taking out small cell sites. There would be no way to ever make such devices physically secure.
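
Here’s one version of that back-of-the-envelope math promised in the list above. Every input is a hypothetical assumption, not a sourced figure – the point is only that plausible network costs, spread over a modest base of subscribing cars, land in the $100+ per month range:

```python
# Rough monthly fee needed to recover a nationwide small-cell network.
# Every number below is an illustrative assumption.
capex = 300e9                        # fiber + small-cell build cost ($)
annual_capital_cost = 0.15 * capex   # depreciation plus return on capital
annual_opex = 20e9                   # power, maintenance, backhaul, spectrum
subscribed_cars = 50e6               # early-adopter vehicles sharing the cost

monthly_fee = (annual_capital_cost + annual_opex) / subscribed_cars / 12
print(f"${monthly_fee:,.0f} per car per month")   # roughly $108
```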

I certainly can’t say that we’ll never have a time when self-driving cars are directed from the cloud, as often envisioned in science fiction movies. But for now the industry is developing cars that are largely self-contained data centers, and that fact alone may dictate the future path of the industry. The wireless carriers see a lot of potential revenue from self-driving cars, but I can’t imagine that the car industry is going to wait for them to develop the needed infrastructure.

Developing Customer Broadband Profiles

Last week I was the moderator of an IoT panel at NTCA’s IP Vision 2017 conference. The panel discussion took an interesting turn when the conversation turned to how small ISPs can monetize the IoT.

Customer demand for connecting devices is contributing to the need for bigger broadband pipes. Today there are about 6.6 billion IoT devices connected in the world. This is expected to grow to 22.5 billion by 2021. Obviously not all of these devices will be going into homes since there is a big growth also with industrial and agricultural IoT. But households will be steadily adding more connected devices.

One of the panelists works at a western telco, and his company recently started considering the idea of profiling data customers to help them right-size their broadband. The company first profiled employees to see how the idea would work. When the panelist was profiled he guessed that his household had 15 connected devices. But then he went home, did an inventory, and was surprised to find that he actually had 55 devices. His household is probably a little unusual in that he has five kids and loves technology, but he said that every telco employee had the same experience – they all underestimated how many connected devices they had at home. It turns out that for most households the Internet of Things is already here to some degree.

His company has gone on to monetize this idea. They offer customers the chance to sit with a technician and create a profile of how they use broadband. The goal is to determine whether the customer has enough broadband to do everything they want to do. They immediately found the same thing I hear everywhere – most customers have no idea how much broadband they really need. It turns out that most customers almost reflexively buy the lowest-cost, lowest-bandwidth data product and are then unhappy with some aspect of its performance.

Telcos everywhere are telling me that customer complaints about poor performance of broadband are becoming commonplace. It’s been easy to assume that problems are mostly due to issues associated with WiFi. But the experience of this particular telco shows that the problem is often that a customer has not purchased enough broadband to satisfy their needs. After the consultation, if they need a faster connection this telco gives the customer the larger data pipe free for a month – and so far not one customer has reverted to their old slower connection.

The telco also offers a second related product that is getting good traction. They sell what they call managed WiFi. The product starts with making sure that customers have placed WiFi routers in the most effective places. But the real benefit to customers is that they can call the company when they are trying to connect a new IoT device to their network. This is something that often frustrates customers. When customers find out that the telco can easily connect new devices and can help them manage their devices a large percentage of customers are buying this new product.

Within the industry we all understand that customer demand for broadband continues to grow at the torrid rate of doubling every three or four years. This kind of exponential growth surprises almost everybody. Customers that have been happy with a 10 Mbps broadband product invariably are going to need to move to something faster within only a few years. But customers are slow to realize that degraded service is due to their own increased usage, and they often blame the ISP for broadband issues.
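
As a quick illustration of what doubling every three or four years implies (using 3.5 years as the midpoint):

```python
# Doubling every ~3.5 years is about 22% compounded growth per year.
doubling_years = 3.5
annual_growth = 2 ** (1 / doubling_years) - 1
print(f"{annual_growth:.1%} per year")            # ~21.9%

# A household that needs 10 Mbps today, projected forward:
for year in (3, 5, 7):
    need = 10 * 2 ** (year / doubling_years)
    print(f"year {year}: {need:.0f} Mbps")        # 18, 27, 40 Mbps
```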

The broadband profiling has shown this telco that the customer experience varies widely. For example, not everybody needs faster download. They have a number of seasonal homes that are starting to install remote cameras that exceed the upload capacity of the broadband products, and the company can make sure there is enough broadband to satisfy the upload needs. The telco says their customers really appreciate this custom approach.

How Do VPNs Work?

After Congress clarified last month that ISPs have the right to monitor and use customer data, I have read dozens of articles recommending that people start using VPNs (Virtual Private Networks) to limit ISP access to their data. I’ve received several emails asking how VPNs work, so I’ll discuss the technology today.

Definition. A VPN is a virtualized extension of a private network across a public network, like the open Internet. What that means in plain English is that VPN technology tries to mimic the same kind of secure connection that you would have in an office environment where your computer is directly connected to a corporate server. In a hard-wired environment everything is secure between the server and the users and all data is safe from anybody that does not have access to the private network. If the private network is not connected to the outside world, then somebody would have to have a physical connection to the network in order to read data on the private network.

Aspects of a VPN Connection. There are several different aspects that are used to create the virtualized connection. A VPN connection today likely includes all of the following:

  • Authentication. A VPN connection always starts with authentication to verify the identity of the remote party that wants to make the VPN connection. This could use typical techniques such as passwords, biometrics or two-factor authentication.
  • Encryption. Most VPN connections then use encryption for the transmission of all data once the user has been authenticated. This is generally done by placing software on the user’s computer that scrambles the data so that it can only be unscrambled at the VPN server using the same software (a tiny illustration follows this list). Encryption is not a foolproof technique – the Edward Snowden documents showed that the NSA can defeat some widely-used kinds of encryption – but it’s still a highly effective technique for the general transmission of data.
  • IP Address Substitution. This is the technique that stops ISPs from seeing a customer’s Internet searches. When you use your ISP without a VPN, your ISP assigns you an IP address to identify you. This ISP-assigned IP address can then be used by anybody on the Internet to identify you and to track your location. Further, once connected your ISP makes all connections for you on the Internet using DNS (the Domain Name System). For instance, if you want to visit this blog, your ISP is the one that finds PotsandPansbyCCG and makes the connection using the DNS system, which is basically a huge roadmap of the public Internet. Since they are doing the routing, your ISP has complete knowledge of every website you visit (your browsing history). But when you use a VPN, the VPN provider provides you with a new IP address, one that is not specifically identified as you. When you visit a website for the first time using the new VPN-provided IP address, that website does not know your real location, but rather the location of the VPN provider. And since the VPN provider also does the DNS function for you (routes you to web pages), your ISP no longer knows your browsing history. Of course, this means that the VPN provider now knows your browsing history, so it’s vital to pick a VPN that guarantees not to use that information.
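
Here’s a tiny Python illustration of the encryption leg described above, using the third-party cryptography package. Real VPNs negotiate keys automatically and encrypt entire packets, so treat this only as a sketch of the idea:

```python
# The point of the encrypted tunnel: the ISP forwards bytes it cannot read.
# Requires: pip install cryptography. Key exchange is hand-waved here.
from cryptography.fernet import Fernet

key = Fernet.generate_key()        # shared by your machine and the VPN server
tunnel = Fernet(key)

request = b"GET https://potsandpansbyccg.com/ HTTP/1.1"
ciphertext = tunnel.encrypt(request)    # all your ISP ever sees is this blob
print(ciphertext[:40])

# Only the VPN endpoint, which holds the key, recovers the request and then
# does the DNS lookup and web fetch on your behalf.
assert tunnel.decrypt(ciphertext) == request
```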

Different VPN Protocols and Techniques. This blog is too short to explore the various software techniques used to make VPN connections. For example, early VPNs were created with PPTP (the Point-to-Point Tunneling Protocol). This early technique would encapsulate your data into larger packets but didn’t encrypt it. It’s still used today and is still more secure than a direct connection on the open Internet. There are other VPN techniques such as IPSec (IP Security), L2TP (Layer 2 Tunneling Protocol), SSL and TLS (Secure Sockets Layer and Transport Layer Security), and SSH (Secure Shell). Each of these techniques handles authentication and encryption in different ways.

How Safe is a VPN? A VPN is a way to do things on the web in such a manner that your ISP no longer knows what you are doing. A VPN also establishes an encrypted and secure connection that makes it far harder for somebody to intercept your web traffic (such as when you make a connection through a hotel or coffee shop WiFi network). In general practice a VPN is extremely safe because somebody would need to expend a huge amount of effort to intercept and decrypt everything you are doing. Unless somebody like the NSA was watching you, it’s incredibly unlikely that anybody else would ever expend the effort to try to figure out what you are doing on the Internet.

But a VPN does not mean that everything you do on the Internet is now safe from monitoring by others. Any time you connect to a web service, that site will know everything you do while connected there. The giant web services like Google and Facebook derive most of their revenues by monitoring what you do while using one of their services and then use that information to create a profile about you.  Using a VPN does not stop this, because once you use the Google search engine or log onto Facebook they record your actions.

Users who want to be protective of their identities are starting to avoid these big public services. There are search engines other than Google that don’t track you. You can use a VPN to mask your real identity on social media sites. For example, there are millions of Twitter accounts that are not specifically linked back to the actual user. But a VPN or a fake identity can’t help you if you use a social media site like Facebook where you make connections to real-life friends. I recall an article a few years back from a data scientist who said that he only needed to know three facts about you to figure out who you are online. Companies like Facebook will quickly figure out your identity regardless of how you got to their site.

But a VPN will completely mask your web usage from your ISP. The VPN process bypasses the ISP and instead makes a direct, encrypted connection to the VPN provider. A VPN can be used on any kind of data connection, for home computers and also for cellphones. So if you don’t want Comcast or AT&T to monitor, use and sell your browsing history, a VPN service will cut your ISP out of the loop.

The FCC’s Plan for Net Neutrality

This is already stacking up to be the most disruptive year for telecom regulation that I can remember in my career. While the Telecom Act of 1996 brought a lot of changes, it looks possible that many of the regulations that have been the core of our industry for a long time might be overturned, re-examined or scrapped. That’s not necessarily a bad thing – for example, I think a lot of the blame for the condition of the cable TV market for small providers can be laid on the FCC sticking with programming rules that are clearly obsolete.

We now know for sure that one of our newest regulations, net neutrality, is going to largely be done away with at the FCC. FCC Chairman Ajit Pai has now told us about his plans for undoing net neutrality. His plan has several components. First, he proposes to undo Title II regulation of ISPs. Without that form of regulation, net neutrality naturally dies. It took nearly a decade for the FCC to find a path for net neutrality, and Title II was the only solution that the courts would support to give the FCC any authority over broadband.

However, Pai says that he still supports the general concepts of net neutrality such as no blocking of content and no paid priority for Internet traffic. Pai proposes that those concepts be maintained by having the ISPs put them into the ISP’s terms of service. Pai also doesn’t think the FCC should be the one enforcing net neutrality and wants to pass this responsibility to the Federal Trade Commission.

It’s hard to know where to start with that suggested solution. Consider the following:

  • I’m concerned as a customer of one of the big cable companies that removing Title II regulation is going to mean ever-increasing broadband rates, in the same way we’ve seen with cable rates. While the FCC said they didn’t plan to directly regulate data rates, they’ve already put pressure on the big ISPs over the last few years to ease up on data caps. Since the big ISPs have tremendous pressure from Wall Street to always make more, they have little option other than increasing data rates as a way to increase the bottom line.
  • Unless some federal agency prescribes specific and unalterable net neutrality language, every ISP is going to come up with a different way to describe this in their terms of service. This means that the topic can never really be regulated. For example, if somebody were to sue an ISP over net neutrality, any court ruling would be specific to only that ISP since everybody else would be using different language. Regulation requires some level of consistency, and if every ISP tackles this in a different way then we have a free-for-all.
  • Probably the most contentious issue that brought about net neutrality was the big fights between ISPs and companies like Netflix over the interconnection of networks. I recall the FCC saying during some of those cases that they were among the most challenging technical issues it had ever tackled. It’s hard to think that the FTC is going to have the ability to intercede in disputes of this complexity.
  • The proposed solution presupposes that the FTC will have the budget and the staff to take on something as complex as net neutrality. From what I can see it’s more likely that most federal agencies are going to have to deal with smaller budgets in coming years. And we know from long experience that regulations that are not enforced might as well not exist.

Interestingly, the big ISPs all say that they are not against the general principles of no paid priority and no blocking of content. Of course, they have a different interpretation of what both of those things mean. For example, now that a lot of the big ISPs are also content providers they think they should be able to offer their own content on a zero-rating basis. But overall I believe that they were okay with the net neutrality rules. They don’t like the Title II regulation because they fear rate regulation, but I think they mostly see that an open Internet benefits everybody, including them.

The one thing that big ISPs have always said is that what they want most from regulation is consistency and predictability. All of the changes that the FCC is making now are largely due to a change in administration – and in the long run the ISPs know this is not to their benefit. Of course, they have always complained about whatever rules are in place, and frankly that’s part of the industry game that has been around forever. But the last thing the big ISPs want is for the rules to swing wildly back the other way in a future administration. That creates uncertainty. It’s hard to design products or devise a 5-year business plan if you don’t know the rules that govern the industry.

The Valuation of a Cable Customer

Craig Moffett of MoffettNathanson recently set a valuation of an OTT customer from Sling TV at a quarter of the level of a normal Dish Networks customer. Since almost every small cable provider in the industry is interested in their valuation, I thought I’d talk today about Moffett’s numbers and how they might relate to cable valuation for small cable operators.

First the numbers. Moffett said that a normal Dish Networks cable customer is worth $1,100. That valuation reflects both the operating margin on Dish’s cable business and the average expected time that a cable customer stays with the company. Valuation in the industry is generally based on a multiple of operating margin – revenues less operating expenses. I don’t know what multiple Moffett used in this case since the valuation of Dish is muddled by the fact that they also own a mountain of spectrum.

Moffett set the value of a Sling TV customer (Sling TV is also operated by Dish Networks) at only $274. This low valuation tells us several things. First, the margins on Sling TV have to be significantly lower. The company is obviously setting a low price to attract customers. And while Sling TV has a much smaller channel line-up than the big bundles at Dish Networks, Sling TV includes a lot of the most popular (and expensive) channels such as ESPN and Disney. I would also think that the valuation reflects a much higher churn for Sling TV. Customers are free to come and go easily and can buy service one month at a time. This contrasts with many Dish customers who get low prices by signing up for one-year or longer contracts.
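
A minimal sketch of that margin-times-lifetime arithmetic is below. The margin and churn inputs are made up, chosen only so the outputs land near Moffett’s two figures:

```python
# Customer value ~= monthly margin x expected lifetime (1 / monthly churn).
# Margin and churn below are hypothetical; only $1,100 and $274 are Moffett's.
def customer_value(monthly_margin: float, monthly_churn: float) -> float:
    expected_lifetime_months = 1 / monthly_churn
    return monthly_margin * expected_lifetime_months

print(customer_value(33.0, 0.03))    # ~$1,100: fat margin, low churn (Dish-like)
print(customer_value(15.0, 0.055))   # ~$273: thin margin, high churn (Sling-like)
```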

There are also other cost characteristics that differ between a satellite customer and an online customer. For instance, for a satellite customer Dish has to cover the cost of the satellite network and the cost of the receivers used by customers. Sling TV instead just has to pay for transport of programming over the Internet. Both parts of the business have to cover advertising and the cost of billing and back office. But it seems like Sling TV would have lower costs since customers must prepay by credit card. It’s hard to know which has a cost advantage, but I would guess it’s Sling TV. Then again, Dish has millions of customers and enjoys some significant economy of scale.

How do these valuations compare to the valuations of small cable providers? The big difference between terrestrial cable providers and Dish is having to field a fleet of technicians in trucks and maintain a landline network of some sort. Small cable operators also have to operate a headend and always face upgrades to keep up with the latest innovations in the industry. These costs are far higher per customer for a small cable operator than what Dish is paying. I would think that due to economies of scale Dish also has an advantage on costs like customer service, billing, etc. The equipment costs for customers are probably similar for Dish and terrestrial cable operators.

I have analyzed the books of a number of small triple play providers in recent years, and if costs are allocated properly to products I haven’t seen one that has a positive margin on the cable TV product. While small cable systems generally charge more than Dish Networks, they also pay more for programming. But the main reason that small terrestrial cable operators lose money is the workload associated with supporting cable TV. I’ve done detailed time studies at clients and have seen that in a triple play company way more than half of the calls to customer service and the truck rolls are due to cable issues. If a small company allocates expenses properly between products, then cable is almost guaranteed to be a loser.

What does that mean for valuation? It’s probably obvious that if one of the major product lines of a company is losing money, the negative earnings pull down the overall valuation of the business. Said more plainly, if the cable business at a small company is losing money, then that part of the business has no value, or even a negative value. This is a conversation I have with clients all of the time, and most small cable providers have at least thought about the ramifications of dropping their cable product.

It’s not quite as easy as it sounds, because if somebody drops cable then they also need to pare the expenses that were used to support cable. For a small company that means cutting back on customer service and field technician positions – something that small companies are loath to do. Small carriers also worry that cutting cable will cost them customers overall, particularly if they are competing against somebody else that offers the triple play. It’s definitely a tough decision, but I’ve heard that as many as fifty small telcos have already ditched traditional cable.

I’m also seeing for the first time that many new network operators are launching new markets without cable TV. Or they are instead looking at models where some external vendor like Skitter TV sells cable to customers.

Unfortunately, the cost of programming is still climbing fast and the margins on cable keep worsening for small cable operators. I expect that some time within the next five years or so we will reach a flash point where the collective wisdom of the industry will say that it’s time to ditch cable – and at that point we might see a flood of small companies exiting the business. But I don’t know of a harder decision to make for a small triple play provider.