The New FCC Broadband Privacy Rules

The FCC passed new privacy rules last week, and the new rules are largely aimed at Comcast, AT&T, Verizon and other large ISPs. Most small ISPs do not engage today in the practices that the new rules are aimed at stopping. For the most part the rules won’t affect smaller companies much, other than adding more annual paperwork to file at the FCC certifying that you follow the rules – which you probably already do.

The rules are aimed at protecting customers from abuse by ISPs, who by definition have the most access to a customer’s data. An ISP knows every web site visited, every web purchase made, every email and every instant message sent.

This is probably the FCC’s biggest use so far of its new Title II authority over broadband. The FCC knows this is going to be challenged in court, so the new rules don’t go into effect for a year, giving the lawsuits a chance to be resolved.

I’m not going to repeat all of the specifics of how this works, but rather concentrate on what it means to the industry as a whole:

Customers have a right of privacy. The new rules establish that a customer’s data – where they go on the web, what they say in emails and texts – belongs to the customer. Each customer now has the right to decide if the ISP can use it. Today an ISP knows everything a customer does on the web that is not encrypted, and even with encryption it knows the web sites visited. But the FCC now makes it clear that customers can keep this personal information private if they so desire.

ISPs need to ask for permission to use customer data. The new rules compel ISPs to explicitly ask for permission to use customer data. I suspect ISPs are not going to be allowed to bury this choice inside their terms of service.

I would expect the big ISPs to try to entice people into letting them use their data. They might offer lower prices, or forward coupons from around the web for things customers are interested in. But at the end of the day it’s the customer’s choice whether to allow their ISP to use the data. And there might be nuances – an ISP might ask to track where customers go on the web but not to read their emails. The rules would allow the ISP to offer such options.

ISPs must say what they do with customer data. If somebody gives an ISP permission to use their data the ISP must disclose how they are going to use it. Are they using it only for their own marketing efforts or are they going to sell it to others? Right now, consumers don’t know what information is being collected by their ISPs, nor what’s being done with it.

ISPs will have to protect customer data. The new rules also place more responsibility on ISPs to protect customer data from hackers. This is perhaps the one area of the new rules that will have the most impact on smaller ISPs. ISPs must use best industry practices and also notify customers when there has been a data breach. And they must notify the FBI if a breach involves more than 5,000 customers.

This does not affect edge providers. The new rules only apply to ISPs. They do not apply to ‘edge providers’ like social media sites or search engines – Facebook and Google are still free to use customer data in any manner they want, since customers come to those sites voluntarily. This is the killer for the giant ISPs, because they see how much money the edge providers make from mining customer data for advertising and other uses. But it’s not clear that the FCC has any authority over edge providers.

Another big gap is the Internet of Things. As we saw in the recent giant denial of service attack, the devices used in the Internet of Things – thermostats, cameras, smart appliances, etc. – are not well protected. IoT companies are also capable of gathering a lot of information about customers. This will become a much bigger issue as people start using devices that include artificial intelligence, like the Amazon Echo. It would be natural for the FCC to declare that IoT providers are ISPs of a sort and regulate them that way. But I expect nothing will be done with IoT until this set of rules makes it through the court challenges.

Control of the Internet

If you follow presidential politics you may have heard a few candidates claim that the US government is giving away control of the Internet. This one puzzled me, and it turns out they are talking about the transition of control of the DNS function from US oversight to a non-profit international body – something that has been in the works for decades.

The issue involves DNS, or the Domain Name System. This is the system that matches the name of a web site with an IP address. DNS is what allows you to reach the amazon.com website by typing the name “amazon.com” into your browser instead of having to know Amazon’s numerical IP address.

DNS is essential to ISPs because it tells them where to send a given web request. There is one master file of all worldwide web names and their associated IP addresses, and obviously somebody has to be in charge of that directory to add, delete and change web names and IP addresses.
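
To make this concrete, here is a minimal Python sketch of the lookup a browser performs behind the scenes – it simply asks the local DNS resolver to translate names into addresses (the hostnames here are just examples):

```python
# Ask the system's DNS resolver to translate a name into IP addresses,
# the same lookup a browser does before it can send a single packet.
import socket

for name in ["amazon.com", "example.com"]:
    # getaddrinfo consults DNS and returns the addresses registered
    # for this name; a single name can map to several IPs.
    addresses = {info[4][0] for info in socket.getaddrinfo(name, 80)}
    print(name, "->", ", ".join(sorted(addresses)))
```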

After the early days of the Internet this function went to a group called IANA, the Internet Assigned Numbers Authority. The group was largely run by a few staffers and academics, with help from some of the early web companies – all techies who only wanted to make sure that the burgeoning web worked well. And although it rarely exerted any control, the group operated loosely under the auspices of the NTIA (National Telecommunications and Information Administration), a part of the Department of Commerce that had veto power over anything done by IANA.

This power was rarely exercised, but there were many around the world who were uncomfortable with the US government being in charge of a vital web function. There was a push for an international group to take over the DNS function, and in 1998 it was transferred to ICANN, the Internet Corporation for Assigned Names and Numbers. ICANN brought in Board members from around the world and has effectively operated by international consensus ever since. But the NTIA still retained veto power over anything done by the group.

But since its founding there has been a planned transition to a fully international ICANN with no ties to the US government. On October 1 that transition finally took place, and ICANN is now run solely by an international Board without oversight from the US government.

Just a few weeks before the planned transfer four states sued to stop the transfer in the US District Court in Texas. Their argument was that the directory of IP names and addresses belonged to the US and could not be given away without approval from Congress.

The opponents of this suit argued that not turning over control of ICANN was a much bigger threat, because it might lead other countries to develop their own DNS databases – and the ability of anybody in the world to reach any web address using the same nomenclature is vital to the concept of an open and free Internet. Interestingly, it was this same concept a century ago – that anybody with a telephone ought to be able to call any other telephone number in the world – that was a driving principle in creating an efficient worldwide telephone network.

The suit was processed quickly and the judge came down on the side of the open Internet and the transition to ICANN. In the end this fight was more about politics than anything substantial. At the end of the day the DNS database is nothing more than the equivalent of a gigantic white pages listing of every address on the Internet. All that really matters is that this database be kept up to date and be available to the whole world. ICANN has had the same international board of techies since 1998 and this transition was planned for a long time. So there is no threat of the US losing control of the Internet, and folks who saw the headlines can sleep well knowing that this issue was about politics and not about a real threat.

It’s Okay to Fire a Customer

As a telecom consulting firm we’ve had a lot of clients over the years – the last time I counted we had worked with over 800 companies. I was just thinking the other day about something that happened after we had been in business for a few years.

I had a client who had always been a pain to work with personally. He was irascible and constantly argued with me over work product. But that never bothered me too much because sometimes in his grumpy way he made very good points – and he always paid his bills on time.

But one day I walked into the office while he was on the phone with my office manager, yelling at her in a very abusive way. She put the call on the speaker and I heard him cursing and ranting and screaming at her. I had the call transferred into my office and I told him he was fired. This stunned him, and he asked what this meant for the work we were doing for him. My response was that we wouldn’t bill him for anything we had done, but we were also not going to finish what we were working on.

I think I leapt up four notches that day on the boss scale, because nobody in the office liked working with this particular client and they were thrilled to find out that I had their backs. But that day taught me a valuable lesson – that sometimes it’s okay to fire a customer. Sometimes the money they pay you is just not worth the aggravation. Over the years I’ve fired a few more clients, but luckily most of my clients are a pleasure to work with.

I’ve carried that same message to my clients. When I ask, almost every one of my clients has a few customers that nobody at the company likes to work with. These customers may be abusive, or impossible to please, or the ones that always want adjustments to their billing for some perceived wrong.

It’s generally a novel concept to my clients when I tell them that it’s okay to fire such customers. After I put the idea in their heads, a few of them have gone to their staff to ask which customers they find hard to work with. It’s generally a small number, but universally every customer service rep and field technician will name the same short list of problem customers.

Some of my clients have then fired these customers. Others have taken the approach of calling these customers and warning them that their behavior will no longer be tolerated. In both cases I’ve been told that this has resulted in a huge morale booster at the company. Contrary to the popular maxim, the customer is not always right. Your employees should not have to take abuse as part of their job and they will greatly appreciate you making their life easier.

This is not an easy decision because small companies often emphasize the fact that they need every possible customer in order to thrive and survive. So it’s a question of weighing the revenue from a handful of problem customers against company harmony and a good workplace environment.

You also have to be careful not to take this to the opposite extreme. Your employees cannot come to believe that you will fire anybody who disagrees with them, because that can foster bad behavior on the part of your staff. But I don’t think it’s hard to identify the really bad apples, and if you do this the right way it’s another way to make your company a better place to work.

The Urban Broadband Gap

It’s natural to think that all city-dwellers have great broadband options. But when you look closer you find that’s often not so. For various reasons there are sizable pockets of urban folks with gaping broadband needs.

Sometimes the broadband gap is just partial. I was talking yesterday to a guy from Connecticut who lives in a neighborhood where most people commute to New York City for work. These are rich neighborhoods of investment bankers, stockbrokers and other white collar households. They have cable modem service from Comcast and can get home broadband, but he tells me that cellphone coverage is largely non-existent. He can’t even use his cellphone outside of his house. There is a lot of talk about broadband migrating to wireless, but 5G broadband isn’t going to benefit people who can’t even get low-bandwidth cellular voice service.

I also have a good friend who lives in a multi-million dollar home in Potomac, Maryland – the wealthiest town in one of the wealthiest counties in the country. He has no landline broadband – no cable company, no Verizon FiOS, and not even any usable DSL. His part of town has winding roads and sprawling lots and was built over time. I’m sure it never met the cable company’s franchise density requirement of at least 15 or 20 homes per street mile – so it never got built. I’m sure that most of the city has broadband, but even within the richest communities there are homes without.

You often see this problem just outside of city boundaries. Cities generally have franchise agreements that require the cable company to serve everybody, or almost everybody. But since counties rarely have these agreements, the cable and phone companies are free to pick and choose whom to serve outside of town. You will see one neighborhood outside of a city with a cable network while another similar neighborhood nearby goes without. It’s easy to find these pockets by looking for satellite TV dishes. The difference between the two neighborhoods is often due to nothing more than the whim of the telco and cable companies at the time of original construction.

The fault for not having broadband can’t always be laid on the cable company. Apartment owners and real estate developers of new neighborhoods are often at fault. For example, there are many apartments where the owner made a deal years ago with a satellite TV provider for bulk cable TV service on a revenue-sharing basis. In electing satellite TV the apartment owner excluded the cable company, and today the building has no broadband.

Real estate developers often make the same bad choices. For instance, some of them hoped to provide broadband themselves, but it never came to fruition. I’ve even seen developments that waited too long to invite in the cable company or telco, and the service providers declined to build after the streets were paved. The National Broadband Map is a great resource for understanding local broadband coverage. In my own area there are two neighborhoods on the map that show no broadband. When I first saw the map I assumed these were parks, but there are homes in both of these areas. I don’t know why these areas are sitting without broadband, but it’s as likely to be a developer issue as a cable company issue.

There have also been several articles written recently that accuse the large cable companies and telcos of economic redlining. These companies may use some of the above excuses for not building to the poorer parts of an urban area, but overlaying broadband coverage and incomes often paints a startling picture. Since deciding where a cable company expands is often at the discretion of local and regional staff it’s not hard to imagine bias entering the process.

I’ve seen estimates that between 6 and 8 million urban people don’t have broadband available. These have to be a mixture of the above situations – neighborhoods outside a franchise area, developers or apartment owners who didn’t let ISPs in, or ISPs engaging in economic redlining. Whatever the reasons, this is a lot of people, especially when added to the 14 million rural citizens without broadband.

I spend a lot of my time working on the rural broadband gap, but I don’t see much concentrated effort looking at the urban gap. That’s probably because the urban gap occurs one subdivision, one apartment building, or one street at a time, with the surrounding households having broadband. It’s hard to cobble together a constituency of these folks, and even harder to find an economic solution to fix the problem.

Amazon as an ISP?

There is an article on The Information that says that Amazon is considering becoming an ISP. It cites an unattributed insider at Amazon who says that the company has been discussing this. Officially the company denies the rumor, which is consistent with the way Amazon has always operated.

It’s an interesting concept, but I honestly have a hard time seeing it. Amazon has been growing in Europe, and the idea could make a little sense there. There are a number of cities on the continent, as well as a few national networks, that allow open access to any ISP. On those networks Amazon could easily develop an ISP product. They already have massive data centers and it wouldn’t cost all that much to add the ISP functions.

But I just don’t see any big benefit to Amazon in the open access model. Due to price competition there is not a lot of profit for ISPs on open access networks. Maybe Amazon could gain some edge by bundling ISP access with its Amazon Prime video and music. But every ISP already carries Amazon’s content today, and unless bundling somehow sells a lot more Prime subscriptions it’s hard to see this as a big win.

I also can’t see Amazon making any sense as an ISP in the US. There are no open access networks to speak of outside a tiny handful of small municipal networks. One only has to look at Google’s foray into broadband in the US to see that it’s really hard to make money by building broadband infrastructure – at least the kind of money that excites stockholders. There are decent long-term infrastructure returns from building and operating a fiber network well, but those returns are minuscule compared to the returns on tech ventures.

I still don’t fully understand why Google got into the broadband business. In the fiber business they are investing a lot of money that is going to make relatively small returns compared to the rest of their core business. Google’s stock value comes from the company making high technology returns, and infrastructure returns can’t do anything but pull down their overall return. I can’t imagine it would be any different for Amazon.

Perhaps Amazon is intrigued by the idea of gigabit wireless connections.  But I think everybody looking at this new technology is going to figure out that millimeter wave spectrum technology is still going to require a lot of fiber in the urban network.

And even if Amazon is comfortable with the lower returns, they still have to deal with network neutrality. It would seem that the best advantage to Amazon from being an ISP would be to somehow bundle their content and broadband connections together – something that is not allowed in the US, and only barely allowed in Europe.

The biggest problem we have with getting real broadband in this country is that big money chases big returns. There was a time when a lot of conservative investors were very happy having part of their portfolio in safe and steady telephone, electric and water companies, because they knew they would receive secure dividends forever from these safe investments.

But it seems today that investors look at all of the instant tech billionaires and don’t want to pour money into the basics any more. To compound the problem, the big telcos and cable companies invest no more capital than absolutely necessary to meet basic customer expectations, and their networks are not nearly as good as they should be. You can’t watch a quarterly presentation from one of these big companies without hearing about plans to curtail capital spending.

So is Amazon really going to become an ISP? They certainly have access to the cash if they really want to. But it’s hard to believe they want to shift toward being more of a bricks-and-mortar company when they have fought hard not to be that. I just can’t see enough benefit to a publicly traded tech company in being an ISP.

An Upgrade to G.fast

Nokia has announced a lab trial of the next generation of G.fast, the technology that can pump more bandwidth through telephone copper. They are calling the technology XG.fast.

In a recent trial the equipment was able to send a 5 Gbps signal over copper for 100 meters and 8 Gbps for 30 meters. This is much faster than the G.fast top speed in trials of about 700 Mbps. In real life situations using older copper the speeds will not be nearly this fast. G.fast in real life trials has achieved about half of the speeds seen in labs, and it would be impressive if XG.fast can do the same.

The technology works by utilizing higher-band frequencies on the copper. Traditional VDSL uses frequencies up to about 17 MHz. G.fast uses frequencies between 106 MHz and 212 MHz. XG.fast climbs the spectrum even further and adds on spectrum between 350 MHz and 500 MHz.

There are a lot of issues involved in using all of this frequency on small-gauge copper. The main problem is crosstalk interference – adjoining copper wires interfering with each other – which degrades the signal and drastically cuts down the distance the signal can be transmitted.

Nokia mitigates the crosstalk using vectoring, the same technique used with VDSL and other DSL technologies. Vectoring generates an out-of-phase signal that cancels out some of the interference. But there is so much interference at these frequencies that vectoring can only keep the signal coherent for the short distances seen in the trial.
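
As a rough intuition for what vectoring does – and this is only a toy model, not how a DSLAM actually implements it – the sketch below adds a known interfering tone to a data signal and then removes it by injecting an equal-but-opposite copy:

```python
# Toy model of crosstalk cancellation: if you know the interfering
# signal, adding an out-of-phase (negated) copy removes it exactly.
import math

N = 16
data = [math.sin(2 * math.pi * t / N) for t in range(N)]                  # the signal we want
crosstalk = [0.4 * math.sin(2 * math.pi * 3 * t / N) for t in range(N)]   # a coupled tone

received = [d + x for d, x in zip(data, crosstalk)]      # what the wire pair picks up
vectored = [r - x for r, x in zip(received, crosstalk)]  # inject the out-of-phase copy

worst = max(abs(v - d) for v, d in zip(vectored, data))
print(f"worst-case residual after cancellation: {worst:.1e}")
```

In a real system the crosstalk isn’t known in advance – the vectoring engine has to estimate the coupling between every pair of wires and track it continuously, which is part of why the job gets so much harder at higher frequencies.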

To date there has not been a lot of interest in G.fast. Adtran, the other competitor in the G.fast space, claims to have now conducted ninety field trials of the technology worldwide. That’s an extraordinarily low number for a technology that can add speed to existing copper. It looks like most phone companies are not interested in the technology, and they have some good reasons.

The short distances make G.fast and its new successor impractically expensive in the copper plant. In order to use the technology the telco would have to mount an XG.fast transmitter on the pole outside each home – or, in dense neighborhoods, one to serve perhaps a few homes. And if the telco wants to take advantage of the faster speeds that XG.fast can deliver into the home, it would also need to string fiber to feed the XG.fast transmitters.

XG.fast is largely a fiber-to-the-curb technology, and the cost of building fiber up and down streets is the big hurdle to using it. Any company willing to spend the money to build that much fiber probably isn’t willing to trust copper for the last 100 feet.

There is one application where XG.fast makes good economic sense. It can be extremely costly to rewire older apartment buildings with fiber. But every apartment building has existing telephone wiring, and XG.fast can be used to move data from a telephone closet to the apartment units. This sounds far less costly than trying to snake fiber through older buildings. Since a lot of companies have avoided older apartment buildings, this might offer a relatively inexpensive way to bring them broadband.

You can’t fault Nokia for continuing to pursue the technology. There is a huge amount of copper still hanging on poles and the world keeps shouting for more broadband. But I get nervous about recommending any technology that isn’t widely accepted. I can picture a telco deploying this technology and then seeing support dropped for the product line.

But I can’t see this ever being much more than a niche technology. Telcos in the US seem to be looking for reasons to tear down copper and don’t seem willing to take one more shot at a copper technology. There might be a good business case for using the technology to extend broadband inside older buildings. But US telcos seem completely uninterested in using this in older copper networks.

Thinking about Electronics Obsolescence

We are currently in the process of helping a number of clients make major upgrades to networks, something we’ve done many times over the years. And this got me thinking about obsolescence and when and why we replace major electronics.

There are a couple of different kinds of obsolescence. First is physical obsolescence, which is when we replace things because they simply wear out. We do this all of the time with vehicles and hard assets, but it’s rare with electronics. I can only think of a few times over the years that we’ve helped people replace electronics that were failing due to age. A few that come to mind are some T-carrier systems in the customer network that lasted far more years than anybody expected.

A more common phenomenon is functional obsolescence where the electronics are not up to the task of handling newer needs. While this can happen with all kinds of electronics, the most common such upgrade has been replacing the electronics on fiber backbone or long-haul networks. There has been such a prolonged explosion in the amount of data our networks carry that it’s been common to overwhelm transport electronics.

In these cases we yank out fully functional electronics and replace them with something that can handle a lot more data. I would hope that in the future we will see a little less of this. One of the reasons we’ve needed these kinds of upgrades is that network engineers would not factor exponential bandwidth growth into their projections. The naturally conservative nature of engineers didn’t let them believe how much traffic would grow in just a few years after they built a network. But I finally see a lot of them getting this.

We also see technologies that are much more easily expandable. For instance, a lot of fiber electronics are now equipped with DWDM and other tools that allow for an upgrade without a forklift. The network operator can light a few more lambdas and get a boost in throughput.
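
The arithmetic behind that is simple – total throughput scales linearly with the number of lit wavelengths. A back-of-the-envelope sketch (the 10 Gbps per-wavelength rate is just an assumed figure for illustration):

```python
# Rough DWDM capacity arithmetic: total throughput scales with the
# number of lit wavelengths (lambdas), with no new fiber required.
rate_per_lambda_gbps = 10  # assumed per-wavelength transceiver rate
for lit_lambdas in (4, 8, 40):
    total = lit_lambdas * rate_per_lambda_gbps
    print(f"{lit_lambdas} lambdas -> {total} Gbps on the same fiber pair")
```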

My least favorite form of obsolescence is vendor obsolescence where functional equipment is made obsolete when a vendor introduces a new generation of electronics and stops supporting the old generation. Far too many times this feels like nothing more than the vendors trying to force more sales onto their customers rather than looking out for the customer’s best interest.

This is not a new phenomenon and there was nobody better at this in the past than companies like Nortel and Lucent. They constantly pushed their customers to upgrade and were famous for cutting off support to older equipment while it was still functional. But the practice is still very much alive today.

Losing vendor support for electronics is a big deal to a network owner. It means you will no longer be able to buy a replacement for a card that goes bad unless you can find one on eBay. It means that the vendor won’t talk to you about any problems that crop up in your network.

The industry is now entering the second round of vendor obsolescence with FTTH electronics. Vendors cut off BPON and other first-generation FTTH gear almost a decade ago and are now planning to do the same to GPON. I remember when BPON stopped being supported, every vendor of the next generation of equipment promised that the newer electronics would be forward compatible – meaning that the ONTs and field electronics would work with future generations of core electronics. But as I always suspected, this isn’t going to be the case, and there is going to be another forklift upgrade from GPON to the next generation of PON electronics.

The shame of this is that the older PON equipment still works great. I have a few clients who have kept BPON working for a decade after it was supposedly obsolete by buying spares on eBay. Those networks are now finally becoming functionally obsolete as customers use more data than the network can handle – but that happened ten years after the vendors declared the equipment obsolete. Most BPON electronics were well made, and the ONTs and other field electronics have been chugging along a lot longer than the vendors wanted.

It’s not always easy to decide to keep operating equipment that the vendor stops supporting. But I’ve seen it done many times over the years and I can think of very few examples where it caused a major problem. It takes a little bravery to keep operating equipment without full vendor support, but management often chooses this option from the pragmatic perspective of economic reality. Most networks don’t make enough money to fund replacing all of the electronics every seven or ten years, and perhaps it is lack of money as much as anything that gives network owners courage.

The IP Address Crunch

Sometimes it feels like small ISPs just move from one crisis to another. The latest problem I am hearing about is that ISPs are having a hard time getting new IP addresses – which is something they need in order to connect new customers to their network. I have clients who have been trying for months to find new addresses, and if they don’t find any they will soon have to turn away new customers.

We’ve known for decades that we would exhaust the supply of current IP addresses. The IP world introduced IPV6 addresses back in 2011, and that was supposed to be enough new addresses to last the whole world long into the future. Historically the Internet used IPV4 addresses, of which there are about 4.3 billion. The new addresses have many more digits – there are about 7.9 x 10^28 (79 followed by 27 zeroes) times more IPV6 addresses than IPV4 addresses. Even the tens of billions of expected IoT devices won’t make a dent in the new inventory of IP addresses.
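
A quick back-of-the-envelope check of those numbers, straight from the address sizes (32 bits for IPV4, 128 bits for IPV6):

```python
# Compare the IPV4 and IPV6 address spaces directly from their bit sizes.
ipv4_total = 2 ** 32    # 32-bit addresses: about 4.3 billion
ipv6_total = 2 ** 128   # 128-bit addresses: about 3.4 x 10**38

print(f"IPV4 addresses: {ipv4_total:.2e}")                  # 4.29e+09
print(f"IPV6 addresses: {ipv6_total:.2e}")                  # 3.40e+38
print(f"IPV6/IPV4 ratio: {ipv6_total // ipv4_total:.2e}")   # 7.92e+28
```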

So how can there be a shortfall of IP addresses with so many new ones available? The problem is the speed at which the world is implementing the new IPV6 addresses. Some of the large companies like Comcast, Verizon Wireless and T-Mobile have swapped all of their customers to IPV6 addresses, but overall the implementation has been slow. Google probably has the best measure of IPV6 implementation since they see a large chunk of the world’s traffic. By 2014 they reported that only 2% of the traffic reaching them used IPV6. At the end of last month that had finally climbed to 14%.

But so far the conversions have been done by the largest ISPs. It is exceedingly hard for small ISPs to make this transition. They are more or less locked into the IP practices of the large carriers that sell them Internet bandwidth. It’s been estimated that the small companies might not be offered IPV6 until perhaps 50% to 60% of the Internet traffic is using the new addressing standard. By the looks of the growth curve that is still at least a few years away.
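
For anybody wondering where they stand, one quick way to test whether a machine has working IPV6 connectivity is to attempt an IPV6 connection to a well-known dual-stacked host. A minimal sketch (the hostname is just an example of a site known to support IPV6):

```python
# Test IPV6 connectivity by opening an IPV6 socket to a dual-stacked host.
import socket

def ipv6_available(host="google.com", port=80, timeout=3):
    """Return True if this machine can reach the host over IPV6."""
    if not socket.has_ipv6:
        return False  # the local stack has no IPV6 support at all
    try:
        # Ask DNS specifically for the host's IPV6 (AAAA) addresses.
        info = socket.getaddrinfo(host, port, socket.AF_INET6, socket.SOCK_STREAM)
        with socket.socket(socket.AF_INET6, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            s.connect(info[0][4])  # fails if the upstream path has no IPV6
            return True
    except OSError:
        return False

print("IPV6 reachable:", ipv6_available())
```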

The bodies that assign IP addresses have all run out of new ones. The Internet Assigned Numbers Authority (IANA) free pool of numbers ran dry in February 2011. There are five Regional Internet Registries (RIRs) around the world, and the last of them ran out of IP addresses last year. Since then ISPs can’t get IP addresses through the normal channels.

So small ISPs are stuck in limbo. If they want to grow they need new IP addresses, but there are none available through the traditional channels. As happens with any scarce resource, a new market of brokers has stepped in to meet the demand for IP addresses. There are several of these brokers worldwide. They have gone to large companies like GE, Halliburton and Ford and bought their inventories of unused IP addresses, and this process created a market.

Back in 2012 these brokers established market prices for IP addresses. The prices started at about $5 per IP address. But as these brokers have found fewer unused blocks, and as there are more ISPs looking for numbers, the prices have risen and IP addresses today sell for between $11 and $15 per IP address.

So small ISPs should just be able to buy what they need from these brokers, right? Unfortunately it’s not that easy. The addresses are sold through a periodic online auction process, and as happens with any scarce resource, there are now speculators buying IP addresses in the hope of selling them later at a higher price. The competition in the auctions has become fierce. To some extent this is like trading bitcoins – those with the fastest and most powerful computers can win the auctions. The small ISPs I know tell me they are not getting any addresses. I know one ISP that has been failing at the process for over six months.

So we now have a situation where small ISPs are nearly locked out of the process of buying new IP addresses (and even when they can buy them, they are expensive). This shortfall and the auction arbitrage are likely to last for a few more years. The economics of the market tell us that at some point the arbitrage price for IP addresses will drop. When that happens the speculators will ditch their inventory and there should be IP addresses available more easily and at lower prices than today. But that’s not expected until there are a lot more IPV6 users, so ISPs might be facing this problem for the next two years. I feel certain that we are going to see small ISPs that find themselves unable to add new customers to their networks – and in a world where we want broadband everywhere, that is a disaster.

The Death of the Big Cable Bundles

There is a ton of evidence that customers no longer want the traditional 200 – 300 channel cable packages. For example, we’ve seen the number of ESPN customers plunge by millions over the last year, to a far greater extent than the overall erosion of the cable industry. The ESPN phenomenon can only be caused by cord shaving – customers downsizing to smaller packages.

We got more evidence of this last week when Verizon CEO Lowell McAdam said that 40% of the cable packages sold by Verizon are now skinny bundles. He said that if he had his preference, Verizon would only offer skinny bundles, because he doesn’t believe there is customer demand for the larger packages.

This makes sense, and we have had statistics telling us this for years. A Nielsen study earlier this year showed that the average person watches around 17 channels to the exclusion of others. That means the average household is wasting a lot of money paying for channels they don’t want.

Other studies tell us the same thing. A Gallup poll earlier this year said that 37% of households don’t watch any sports. And yet sports programming has become the most expensive component of the big cable bundle. It’s only common sense that a lot of the 63% who do watch sports must be casual fans, or fans of only one or two sports.

And the trend has to be downward for the channels on traditional cable. In May of this year Nielsen reported that almost 53 million US homes watch Netflix. Another 25 million watch Amazon Prime. Another 13 million watch Hulu, and since they beefed up their lineup and slashed their price the number of viewers is bound to climb.

Unfortunately, skinny bundles are not available everywhere. Only the largest cable companies have been able to negotiate the right to sell smaller bundles so far. And among the large cable providers, only Verizon and Dish Network are really pushing skinny bundles. There are also a few skinny bundles on the web, like Sling TV, but every time I look their packages are getting fatter.

I can’t help but speculate what would happen if every household was given the choice tomorrow to downsize their cable bundle and monthly cable bill. Leichtman Research Group announced a few months ago that the average cable bill in this country is now $103.10. That’s an astronomical number, and if that is the average, a lot of homes are paying a lot more. Contrast this with the new Dish Network skinny bundle that offers 50 channels for $39.99 per month.

The skinny bundle that is doing so well at Verizon isn’t even cheap and starts at $55 per month – but it’s a lot less expensive than the big traditional bundles. And the Verizon price is reduced significantly for customers buying a triple-play bundle.

I just wrote a blog last week about how Wall Street is becoming unhappy with cable programmers – at least one analyst has downgraded Discovery Networks and Scripps. What we might finally be seeing is a whole host of issues coming to a head in the industry at the same time. Cable bills are finally getting too expensive for a lot of homes. People are becoming more interested in content that is not on traditional cable. And the programmers are losing a little bit of the total lock they have had on the industry.

It’s hard to say when, or even if, the industry is going to break in any significant way. There are still just under 100 million homes paying for some version of cable TV, and the overall effect of cordcutting has only been shaving that by a little over 1% per year. But if the Verizon trend becomes the norm and most customers start preferring skinny bundles, the industry will still be transformed. ESPN has lost 10 million customers since 2013, but over half of those losses have come in the last year. The same thing has to be happening to many of the other less-popular cable channels, and at some point the math just isn’t going to work for the programmers.

We’ve seen a similar phenomenon once before. We saw a gradual erosion of home landline telephones after the advent of the cellphone. But after a few years of gradual declines we saw a deluge of people dropping home telephones. You could barely turn on a TV without hearing about how a home telephone was a waste of money, and so it became the popular wisdom that home phones weren’t needed. The same thing could happen with skinny bundles, and the industry could be transformed in a short period of time if tens of millions of homes downsize their cable bundle. It is going to happen; we’ll just have to wait and see how fast and to what degree it occurs.

A New Telecom Act?

There has been a lot of talk during the last year about putting together a new Telecom Act. It’s been twenty years since the Telecom Act of 1996, which created CLECs. A lot has changed in twenty years and that Act is largely obsolete. Unfortunately, with political gridlock it’s unlikely that we’ll get a new Act that fixes our real problems. But I asked myself what I would include in a new Telecom Act if I were allowed to write it. Here are some of the top changes I would make:

Fund Fiber Everywhere. There was recently a bill introduced in Congress to add $50M to the RUS for rural broadband grants. That makes such a tiny dent in the problem as to be embarrassing. If we believe as a country that broadband is essential for our economic future, then let’s do what other countries have done and start a federal program to build fiber everywhere, from rural America to inner cities. I could write a week’s worth of blogs about how this could be done, but it needs to be done.

Make Broadband Affordable to All. The Lifeline program that subsidizes $9.25 per month for broadband for low-income households has the right intentions. But the amount of subsidy is ridiculously low. If we believe that schoolkids ought to have broadband to succeed then let’s do this right and pony up and find a way to pay for it.

Tax Broadband. The continuing ban against taxing the Internet is stupid. It was put in place years ago to protect a fledgling new Internet industry. Let’s put a tax on landline and cellular broadband to pay for getting fiber everywhere and broadband to everybody.

Stop Subsidizing Non-Broadband. It should be impossible for the FCC to provide any funding or subsidies to broadband connections that don’t meet their own definition of what constitutes broadband speeds.

Fix Pole Issues. Pole issues have been a bane to competitors since the last Telecom Act required pole owners to allow access. Let’s create common-sense rules that don’t allow pole owners to hold new competitors hostage.

Break the Power of the Programmers. Most of what has been broken in the cable TV industry has been due to the immense power and greed of the programmers to set the price and conditions for their content. It’s time to put a halt to contracts for content that force cable providers to buy programming they don’t want. And it’s also time to consider requiring programmers to offer each network a la carte and not in big bundles.

Unleash Skinny Bundles. Existing cable rules put handcuffs on cable providers. Rules that require specific kinds of bundles, such as basic and expanded basic, mean that a cable provider has a nearly impossible task of putting together offerings that customers really want to buy. Let’s scrap those rules and start fresh with customer choice as the driver behind the new rules.

Make Cable Rules Apply to Everybody. Any new cable rules need to apply to everybody that provides content – over wirelines or over the Internet. Anything less than this gives massive advantages to one side or the other. I would be fine if the best way to do this is to have almost no rules!

Reinstitute Limitations on Ownership of Media. Allowing a handful of companies to own all of the television and radio stations has put a huge dent in our free press and in local control of news stations and reporting. Let’s break up these conglomerates and start over.

I could easily add forty more items to this list, but these were the ones that first came to mind as I was writing. What would you add to a new Telecom Act?