The Downside to Bundling

It’s so common for triple-play providers to bundle their services that bundles have become the standard product of the industry. There aren’t any great statistics on the percentage of bundles sold at the big cable companies, but I’ve seen speculation that it’s north of 70% of all residential customers. With that kind of success it’s not hard to see why triple-play providers like bundles.

Back when bundles were first created, all of the talk in the industry was that bundles would make customers ‘stickier’ – meaning that customers with a bundle were less likely to churn to another provider. The original lure of bundles for customers was that they saved money compared to buying each product a la carte – and in those days customers really did see savings.

But in much of the country the cable companies have won the competition battle. They now offer data speeds in most markets that are faster than the competition’s, so customers no longer have an equal choice between two providers. We can see this in the huge ongoing shift of DSL customers to cable modems in the cities – just last year Comcast added almost 2 million new customers, most of them former DSL customers.

I suspect that customers don’t look at the bundle the same way they did years ago. If customers don’t have a real competitive alternative then the bundle is no longer saving them anything. When you consider the impact of a decade of high rate increases, one has to think that most homes find the bundle more of a burden than a boon. I just saw a statistic yesterday showing that the average price of just the cable TV portion of the bundle has increased from $70 to $103 since 2011.

My guess is that bundles have lost their appeal for most customers. The stickiness that the cable companies crowed about can feel like a trap to somebody who wants to downsize. The industry has been abuzz for several years about the big movement towards cord cutting. But none of the articles I have read on the issue mention how hard it is for somebody to drop cable TV while keeping a data connection.

I’ve written in this blog a number of times about how Comcast forces me into buying basic cable TV in order to get a fast broadband product. I didn’t want that cable product on day one and I have tried over the years to ditch it. But I’ve always been told that I would have to drop my data speeds to a really slow product in order to buy standalone data. So what I have is a forced bundle – one with no options for breaking the bundle into the components and only buying what I want.

There are bundles in the industry that are not as rigid as mine, but which instead impose a harsh monetary penalty for breaking the bundle. Customers don’t know what they pay for any given piece of the bundle. But if you try to break a bundle, you find out that very little value was assigned to whatever product you want to ditch – and thus the savings from breaking a bundle are never as large as expected.
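A simple illustration of why the savings disappoint (all prices here are made up for the example – they are not any provider’s actual rates):

```python
# Hypothetical rates, for illustration only
bundle_price = 150.00      # triple-play bundle: TV + data + phone
standalone_data = 85.00    # the advertised a-la-carte data price

# A customer might assume dropping TV and phone saves the difference...
expected_savings = bundle_price - standalone_data
print(expected_savings)  # → 65.0

# ...but the provider may assign little value to the dropped products
actual_credit = 30.00
new_bill = bundle_price - actual_credit
print(new_bill)  # → 120.0 for data alone, not the expected 85.00
```

The gap between the expected $65 savings and the $30 actually credited is the bundle penalty customers only discover when they try to leave.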

This phenomenon certainly has to be a contributing factor when homes decide not to buy data from Google Fiber. It’s been widely reported that many people really like the cable products the big companies offer these days. The products are much improved over past years, with cloud DVRs, slick remotes and the ability to watch programming from anywhere on any device. But breaking a bundle to keep cable TV while buying data from somebody else comes with such a significant financial penalty that it’s hard to justify.

Where bundles were once used to attract customers in a somewhat competitive environment, they have turned into anchors around customers’ necks. I have to think that a lot of the low rankings the cable companies get on customer satisfaction surveys come from resentment over bundles. Nobody likes the feeling of being forced to buy something they don’t want. From time to time I see articles ruing that we don’t have a la carte cable programming so that we can buy only the channels we want. But the fact is that most of us no longer have many options for buying products a la carte, outside of a bundle.

Cable TV Rates

Trying to get your arms around industry trends for cable TV isn’t easy. There are a number of different entities that track various cable statistics and they are often not in sync. This week I saw a new press release from Leichtman Research that said that the average increase in cable rates this year has been around 4%.

I keep an eye on these kinds of statistics because most of my clients compete against the bigger cable companies. Leichtman says that the average monthly spending on pay-TV is now $103.10, which is 4% higher than in 2015. This is an eye-opener, because household spending on cable was $73.63 in 2011 – an average increase of 7.7% per year through 2015. For the increases to finally drop to 4% is big news.
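That 7.7% figure can be sanity-checked with a quick compound-growth calculation. This is just a sketch of the arithmetic – the `cagr` helper and the back-calculated 2015 average are mine, derived from the reported numbers:

```python
def cagr(start, end, years):
    """Compound annual growth rate between two dollar amounts."""
    return (end / start) ** (1 / years) - 1

avg_2016 = 103.10            # reported average monthly pay-TV bill
avg_2015 = avg_2016 / 1.04   # back out the reported 4% increase
avg_2011 = 73.63

# Average annual increase from 2011 through 2015
print(round(cagr(avg_2011, avg_2015, 4) * 100, 1))  # → 7.7
```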

But like anything in the cable industry, there are a lot of moving parts in trying to see the future trend of cable rates. Consider all of the following, which have some bearing on current average nationwide cable spending:

  • There was a press release in January where Comcast said that their average cable bills would go up by 3.9% this year, right in line with this latest report. But in addition to raising cable rates the company also imposed a $2 increase in its ‘broadcast TV fee’, which affected every cable customer. All of the big cable companies now have these fees, which are just another piece of the cable rate but are not often counted as such. These fees let companies like Comcast hold down their advertised rates while still raising overall cable bills.
  • Charter and Time Warner seem to have had a much lower annual increase than average due to the merger that was pending during the normal January rate-increase period. But one would have to think that now that the merger is complete these companies will make up lost ground. I’ve seen predictions that Time Warner customers could see a jump in their 2017 bills as large as $10.
  • Both satellite companies had one of the largest rate increases we’ve seen from them in years. DirecTV raised package rates by $1 to $9 and DISH Network raised rates by $2 to $8.
  • Cablevision didn’t raise their rates at all at the beginning of the year due to their expected merger with Altice.
  • We know that there is a lot of cord-shaving going on, which puts downward pressure on average cable bills. The large cable companies don’t report customers by size of package, but we have a lot of evidence of cord-shaving from networks like ESPN losing millions of customers since 2015. If the industry as a whole is not losing as many customers as ESPN, then only cord-shaving – people moving to a smaller package – can explain ESPN’s losses. And if lots of people buy smaller cable packages, the average bill will drop.
  • Finally, with the big cable companies it’s getting really hard to distinguish cable increases from other price increases. I’ve seen estimates that most of the large cable companies have around 70% of customers in some kind of a bundle. Most people with bundles don’t know what they pay for any specific component of the bundle. But this also means that the cable companies can be arbitrary when separating the bundles into the component cable, data and telephone revenues. This means the reported ‘cable’ revenues from the big cable companies can be fudged to meet reporting goals or any other purpose.
  • In this last year we are starting to see increases in broadband rates from many of the cable companies. For example, Cox just recently increased various data rates by $2 to $7 per month. But for customers in a bundle these revenues fall into the same muddy bundled price along with the cable rates. Do customers in a bundle really care which piece of their bundle increased?

One thing I see outside these big industry statistics is that my smaller clients are not seeing any drop-off in the growth of programming and other cable expenses. If anything, because of the continuing big increases in retransmission costs, they are seeing increases in underlying cable costs that are as large as ever, or larger. Smaller cable providers will really feel the squeeze if they compete with somebody like Time Warner that barely raised rates this year.

While it’s not really good news, it appears likely that this year’s ‘smaller’ rate increases from the bigger cable companies are an anomaly and that these companies will be back to larger increases next year. But it’s anybody’s guess whether future annual increases will come in cable rates, broadband rates or something else. Like everything in our industry, it’s getting a little harder to predict.

The Plight of Pinetops, North Carolina

Most of my readers are probably aware that last year the FCC voted to overturn the restrictions on municipal competition in Tennessee and North Carolina. Specifically, the FCC gave permission to the Electric Power Board of Chattanooga and to the City of Wilson, North Carolina to extend their fiber networks to provide broadband to nearby communities.

But both states appealed the FCC decision and this past August the courts overturned the FCC order in favor of the states. The FCC has decided to not appeal that court decision.

During the time when the FCC order was in effect, the City of Wilson extended their Greenlight fiber network and brought fiber to the tiny town of Pinetops, NC – a town with a population of 1,374 in the 2010 US Census. When the court overturned the FCC rules, Wilson’s city attorney interpreted the reversal to mean that Wilson had no authority to serve broadband in Pinetops.

The local governments of both Wilson and Pinetops have appealed to Governor Pat McCrory to allow the broadband service to continue. The mayor of Pinetops reports that 31% of the households in his community are below the poverty line and that the network had brought the opportunity for the town to do better economically. The town has been hoping to grow by attracting new residents.

In the telecom world we are often faced with similar situations, where the industry will react to a regulatory ruling that might eventually be overturned. We just saw this recently as the FCC took actions related to net neutrality in 2016 at the same time that the net neutrality rules were under appeal. The large incumbent telcos and cable companies routinely appeal decisions they don’t like from the FCC, and it has become somewhat common practice for parties to act as if the new rules are in effect, even during the appeal process.

It seems that Chattanooga took the conservative approach and did not expand their network while waiting for a resolution of the court appeal. But Wilson expanded their network when the FCC said they had the right to do so, with the uncomfortable result that we now have a small town that has lost access to fast broadband. Customers have been disconnected as Wilson turned off the network.

One would hope that the powers that be can find a way to keep the broadband going in Pinetops. It’s very easy for lawmakers and regulators who live in urban areas with good broadband to fail to understand how hard it is for rural households to live without broadband. It’s particularly cruel to provide broadband to a small town like Pinetops and then withdraw it.

Wilson constructed the network under the regulations that were in place at the time of construction. It’s also true that today, after the appeal, the same fiber construction would no longer be allowed. But common sense would say to grandfather the broadband in Pinetops while restricting Wilson from constructing fiber to any additional communities.

Unfortunately, common sense often doesn’t prevail in these situations. I’m sure that AT&T and Comcast have put pressure on the state to roll back the broadband, even though those companies are not providing a decent alternative in Pinetops. But these big companies have taken the position that all competition is bad, and they take extraordinary measures to stop competition when they can. I just hope that somebody in North Carolina uses some common sense and compassion to let the folks in Pinetops keep the broadband they were recently given. To not do so would be inhumane. It would be surreal if the people in Pinetops are denied broadband when the fiber is already on their streets or connected to their homes.

Technology Shorts – September 2016

Here are some new technology developments that are likely to someday improve telecommunications applications.

Single Molecule Switch. Researchers at Peking University in Beijing have created a switch that can be turned on and off by a single photon. This opens up the possibility of developing light-based computers and electronics. To make this work the researchers needed to create a switch using just one large molecule. The new switch begins with a carbon nanotube into which three methylene groups are inserted, creating a switch that can be turned on and then off again.

Until now researchers had not found a molecule that was stable and predictable. In earlier attempts at the technology a switch would turn ‘on’ but would not always turn off. Researchers also needed to create a switch that lasted, since the switches created in earlier attempts quickly began to break down with use. The new switches function as desired and look to be good for at least a year, a big improvement.

Chips that Mimic the Brain. There are now two different chips that have hit the market that are introducing neural computing in a way that mimics the way the brain computes.

One chip comes from KnuEdge, founded by a former head of NASA. Their first chip (called “Knupath”) has 256 cores, or neuron-like brain cells, on each chip, connected by a fabric that lets the chips communicate with each other rapidly. This chip is built using older 32 nanometer technology, but a newer and smaller chip is already under development. Even at the larger size, the new chip is outperforming traditional chips by a factor of two to six.

IBM also has released a neural chip it’s calling TrueNorth. The current chip contains 4,096 cores, each one representing 256 programmable ‘neurons’. In traditional terms that gives the chip the equivalent of 5.4 billion transistors.
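The neuron count implied by those numbers is simple to work out (the variable names here are mine):

```python
cores = 4_096
neurons_per_core = 256

total_neurons = cores * neurons_per_core
print(total_neurons)  # → 1048576, i.e. roughly a million programmable neurons
```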

Both chips take a different approach than traditional chips, which use a von Neumann architecture where the core processor and memory are separated by a bus. In most chips this architecture slows down performance when the bus gets overloaded with traffic. The neural chips instead can simultaneously run a different algorithm in each core, rather than processing each algorithm in sequential order.

Both chips also use a fraction of the power required by traditional chips since they only power the parts of the chips that are being used at any one time. The chips seem to be best suited to an environment where the chips can learn from their experience. The ability of the chips to run simultaneous algorithms means that they can provide real-time feedback within the chip to the various processors. It’s not hard to imagine these chips being used to learn and control fiber networks and be able to tailor customer demand on the fly.

Improvements in WiFi. Researchers at MIT’s Computer Science and Artificial Intelligence Lab have developed a way to improve WiFi capabilities by a factor of three in crowded environments like convention centers or stadiums. They are calling the technology MegaMIMO 2.0.

The breakthrough comes from finding a way to coordinate the signals to users through multiple routers. WiFi signals in a real-world environment bounce off of objects and scatter easily, reducing efficiency. But by coordinating the signals to a given device like a cellphone through multiple routers the system can compensate for the interference and scattering by recreating a coherent understanding of the user signal.

While this has interesting application in crowded public environments, the real potential will be realized as we try to coordinate with multiple IoT sensors in an environment.

The Broadband Battle in Nashville

There is a regulatory battle going on in Nashville that is the poster child for the difficulty of building new fiber networks in urban areas. The battle involves Google Fiber, AT&T, Comcast, and the Metro Council and Mayor of Nashville, all fighting over access to poles.

Google Fiber wants to come to Nashville and needs access to existing poles. About 80% of the current poles are owned by the city-owned Nashville Electric Service with the other 20% belonging to AT&T.

The Metro Council recently enacted a new ordinance called the One Touch Make Ready (OTMR) law. This law would speed up the process called make-ready, which is the process for making room for a new wire to be hung on poles. Under the new rules, Google Fiber or other new pole attachers would be free to move wires belonging to another utility to make room for their new wires. And the new attacher must pay for the needed changes, at whatever rate the other wire owners bill them.

The FCC took a stab at this problem a few years ago and they allow a new attacher to add their cables to a pole without approval if the paperwork process takes too long. But those rules only apply to poles that don’t need any make-ready work – and in an urban area most poles need some amount of make-ready work to make room for a new wire.

Current make-ready rules require that the owner of each existing wire be notified so that they can move their own wire as needed. As you might imagine, this means an overbuilder must issue a separate request to multiple wire owners for each individual pole that needs to be modified, including detailed instructions on the changes that must be made. The other wire owners are given an opportunity to disagree with the recommended changes. And this whole paperwork process can’t even begin until the pole owner has first inspected each pole and decided on a make-ready solution.

As you can easily imagine, since many of the other companies with wires on poles don’t want competition from Google Fiber or any other new competitor, they do everything legally possible to delay this process.

What I find ironic about this process is that the current wire owners can drag their feet even if their own existing wires are in violation of code. The various industry codes dictate a specified distance between different kinds of wires in order to make it safe for a technician to work on the wires, particularly during bad weather. I’ve found that most poles in an urban area have at least one existing code violation.

It’s also ironic that the cable company can drag their feet in this process. I’ve heard numerous stories about how the installers for the original cable networks often went rogue and installed their wires without getting formal permission from the pole owners. At that time the telcos and cable companies were not competitors and so nobody made a big fuss about this.

It’s been reported that one City Council member tried to stop the new law from going into effect by introducing an alternate proposal – one supposedly written by AT&T. That alternative law gave the incumbents 45 days to make changes, but also limited the fast pole response to 125 poles per week. In a city the size of Nashville there are tens of thousands, and possibly even more than 100,000, poles that might need to be changed – so that limit basically means it would take many years, possibly decades, for a new fiber provider to build a city-wide network.
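The arithmetic behind that conclusion is easy to sketch. Assuming the 125-pole-per-week cap from the alternate proposal, and a range of illustrative pole counts:

```python
POLES_PER_WEEK = 125  # cap in the alternate (AT&T-backed) proposal

for total_poles in (10_000, 50_000, 100_000):
    weeks = total_poles / POLES_PER_WEEK
    years = weeks / 52
    print(f"{total_poles:>7,} poles: {weeks:,.0f} weeks (~{years:.1f} years)")
```

At 100,000 poles that works out to 800 weeks – more than 15 years for a full build, before counting any other delays.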

The new One Touch rule would allow Google Fiber or others to make the necessary changes to poles if the incumbent wire owners don’t act quickly enough to move their wires. AT&T has already sued the City to block the new ordinance. They argue that the City has no authority to order this for the AT&T-owned poles. They also argue that this change will disrupt their service and put their customers out of business. The lawsuit is, of course, another delaying tactic, even should the City prevail.

There is no good way to predict how the courts will decide this. It’s a messy topic involving a complex tangle of established industry practices and technical rules. Both sides have valid concerns and good arguments to make to a court. Both sides also have access to the best lawyers, and it will be an interesting court fight. But perhaps the most important thing to consider is that the existing rules can make it economically infeasible to build a new fiber network in a city – and if so, then something needs to change.

Fighting Over Wireless Pole Attachments

One of the next big industry battles is going to be between pole owners and the cellular and other wireless providers that want to use poles for wireless transmitters or mini-cell sites. All five FCC Commissioners have said they are in favor of streamlining the process for wireless providers to get onto poles and to locate new towers, and this is not going to sit well with pole owners or with cities.

We will see two different types of wireless companies wanting to use poles. First are the fiber-based ISPs like Google Fiber that want to deploy wireless local loops. These companies are looking at using the millimeter wave spectrum recently released by the FCC to get broadband into homes. This spectrum won’t carry big bandwidth very far, so there is a general assumption that these providers will want to mount transmitters on poles in neighborhoods.

The other providers are the big cellular companies. They will also want to use the millimeter wave spectrum using 5G protocols to provide fast local loops and to support big data. They also will want to support 5G cellular, which will use the standard cellular spectrum. The cellular providers will want both transmitters on poles plus they are likely to want to build new urban cell towers.

There are a number of issues with pole mounted antennas that will need to be addressed. Urban poles are often already densely packed with wires and it’s not going to be easy to somehow make space for a new device among the many wires. The biggest concern of other wire owners is that these transmitters might create a safety hazard for linemen who have to make repairs on poles. For the most part nobody yet knows much about the actual size or power requirements for these devices, so it’s premature to speculate. But it’s certainly possible that adding new boxes on already crowded poles will add complications, particularly for line work done in bad weather.

We don’t know yet what the FCC specifically has in mind, but the only real way to ‘streamline’ the process would be to force pole owners to accept wireless transmitters without objection. Current pole attachment rules don’t have any specific provisions for wireless transmitters. Most current rules create distinct zones on the pole for the wires of telcos, cable companies, electric companies, municipalities and competitive overbuilders. So I have to assume that the FCC will develop a specific process for dealing with requests from wireless providers.

Anything the FCC does is going to be complicated by the fact that about half of the states have their own pole attachment rules. The FCC allowed states to do this, and states without their own rules use the FCC rules. The various states have come up with significantly differing pole attachment rules and processes and it would go against tradition if the FCC was to preempt any of the state-specific rules.

The FCC discussion also makes it sound like they are going to want to require some sort of expedited process to enable wireless providers to deploy their facilities quickly. In many cases the wireless companies are going to want these transmitters to be fiber-fed and also tie into power. I’m sure the FCC has been watching the huge fights between Google Fiber and pole owners and I’m sure they want to avoid these same fights for the wireless providers. But it would be ironic if the FCC makes these exceptions for the wireless companies while they did nothing to aid the fiber overbuilders that have gotten bogged down in pole disputes.

The cell companies are also going to want to build new traditional towers or get permission to hang devices on buildings and places other than poles. Some cities have taken a hard line over the years on how and where cellular companies can place new towers and so we can expect a jurisdictional fight if the FCC tries to overturn state and city rights on tower placement.

The FCC is correct that pole issues and tower placement can be barriers to efficiently deploying the next generation of wireless transmitters. But it somehow doesn’t feel right if the FCC bends all of the rules for the wireless companies while they have allowed fiber overbuilders to be delayed for years over disputes with pole owners.



Broadband and the Elderly

Almost every list of potential Internet benefits I have ever seen includes the goal of using broadband to allow people to remain in their homes as they age. It’s one of those uses of broadband that has always been right around the corner. And yet, there is still no suite of products that can deliver on this goal.

This is a bit surprising because America is aging, and surveys show that a large majority of aging people want to stay in their homes as long as possible. Nursing homes and other kinds of care are expensive, and people are willing to spend money on home care if that is possible.

But I think there is some hope on the horizon. AARP has been holding annual expos that allow vendors to display new products for the elderly. In the broadband / technology area the number of vendors at these expos has grown from 80 in 2012 to 228 in 2015. So there are companies working on the needed technologies and products.

It’s not hard to picture what such a suite of products would look like. It certainly would contain the following:

  • A health monitoring system that would check vital statistics such as heart rate, blood pressure, blood sugar and whatever other factors are most important for a particular person.
  • A monitoring system that can track the movements of an elderly person and report when they have fallen or not moved for a while.
  • A system that prompts people to take pills or other needed treatments on time.
  • A two-way communications system that allows the elderly to stay socially connected to the outside world, to have virtual visits with a doctor, etc.
  • A smart bot of some sort (like the Apple Siri or the Amazon Echo) that can help the elderly get things done like make appointments or call for groceries.
  • Eventually, a robot or robots to make life easier – performing everyday chores like taking out the trash, washing dishes and other tasks needed by the stay-at-home person.
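As a small illustration of the movement-monitoring idea above, an inactivity check might look something like this. Everything here is hypothetical – the threshold, the names and the way motion timestamps arrive are all assumptions, not any real product’s API:

```python
import time

INACTIVITY_LIMIT = 30 * 60  # seconds without movement before alerting (illustrative)

def check_inactivity(last_motion_time, now, limit=INACTIVITY_LIMIT):
    """Return True if no movement has been seen within the limit."""
    return (now - last_motion_time) > limit

# Hypothetical usage: the last motion-sensor event was 45 minutes ago
now = time.time()
print(check_inactivity(now - 45 * 60, now))  # → True
```

A real system would of course combine several sensors and escalate alerts to caregivers, but the core logic is this kind of threshold check.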

We are just now starting to see the first generation of useful personal bots, and it should not be too many more years before a smart bot like Siri can become the interface between an elderly person and the world. We still need bots to get better at understanding natural language, but that seems to be improving by leaps and bounds.

We are probably a decade away from the first truly useful house robots that can tackle basic household chores. But once they become common it won’t take long for them to improve to the point where they could serve as nurse, housekeeper and cook for an elderly person.

According to AARP, the biggest hurdle to developing the needed suite of products is a lack of investors willing to fund the needed technologies. For whatever reason, investors are not readily backing companies that want to develop products in this space. This is not unusual for a complex technology like this one. Since the solution is complex, investments in any one part of the product suite are risky. Even if a new product works well, there is no guarantee that it will be included in the eventual bundles of home care products. This makes investors leery of backing any one solution at this early stage of the industry.

But the pressure will remain to develop these products. The US (and much of the rest of the world) is aging. I just read yesterday that there are over 50,000 people in Japan over 100 years old, up from only a thousand or so a few decades ago. Health breakthroughs are letting people live longer and more productive lives. As a society we need to find a solution for our aging population (since we are all going to get there soon enough).

One thing is for sure – good broadband is a key component of this suite of products. If we don’t find a way to get broadband to everybody by the time these products hit the market, then we will be punishing the elderly that live where there is poor broadband. If you think there is a loud public outcry today from folks without broadband, wait until people’s lives depend upon it.

The Real Value of Bundling

Most telecom providers these days offer some sort of bundled product. Bundles have become such an automatic pricing tool that I think many providers don’t consider the value proposition behind bundles. Originally bundles were created to try to make customers want to stay with a provider – to make them sticky, in industry jargon. If your bundles don’t do that you are probably missing something.

Consider two different companies I know that have the best bundles. The first is not even a telecom company. I have a friend who owns and operates a CPA firm that has been in business for fifty years. His firm always did traditional bookkeeping and tax preparation work. A decade ago he started getting a lot of competition from other tax-preparation alternatives – big companies like H&R Block as well as software packages like TurboTax. He realized that his primary product didn’t offer any significant advantages over his competitors, and that if he didn’t change something he was going to see a lot of customer churn and would always have to spend a lot on marketing just to retain his customer base.

So he decided to create bundles by offering other services that his customers already used and bought elsewhere. He first added a payroll service and made it easy for his customers to pay their employees. This was a product that was available from many other places, but he found that his customers preferred to buy the service from somebody they already trusted. He then added credit card processing since almost all of his customers accepted credit cards. Again, this is a widely available service, but many of his customers over time moved their business to him.

In recent years he has become even more creative. He’s become an insurance broker and can offer policies from a wide array of different insurance companies. Probably the most creative product he’s developed is a point of sale system that he developed himself. His customers are small retail stores like restaurants, nail salons, grocers – and he has a system on an iPad that can take credit card payments and that automatically logs each sale into the accounting system.

The bottom line is that he has created a suite of products that make his customers very sticky. He has priced each of these products competitively because his profit comes from selling the whole suite of products, not any one product. He has found that offering the bundle of services has greatly reduced churn, and he rarely loses a customer. Customers have a hard time leaving him since they would need to find multiple vendors to replace him.

I have a telecom client who has done something similar. While they are a rural telco, they decided twenty years ago to expand into the business market in some nearby cities. They did okay selling telephone lines at first, but they saw churn and found that customers had no reason to be loyal to them as a provider.

So they decided to offer a bundle of the wide variety of technical services needed by small businesses. Of course, that meant providing broadband as soon as it became a common need for businesses. But over time they have done a lot more.

They first tackled being the IT shop for small businesses. When they started this it meant installing and maintaining a server at a customer location. Over time that function has moved back to their own data center, but they still provide this service for most of their customers. They also created their own version of the Geek Squad, before there was such a thing at Best Buy. They will purchase, program, maintain and repair customer computers and associated electronics. They also have gotten into other lines of business – they resell, install and train on various major cloud software packages. For a while they offered video conferencing (before Skype made it free and easy). They even offer copiers, postage machines and other major office equipment.

Their goal was to make themselves indispensable to a small business by becoming a one-stop shop to buy everything electronic. And it has worked. They won over a significant portion of the businesses in their markets as customers and those customers are remaining loyal to them. A customer doesn’t have an easy time leaving them since that means replacing them with at least three or four other vendors.

These are two examples of bundles done right. If your bundles are only used as a pricing tool then you are missing the biggest benefit of the bundle – which is to create loyal customers who won’t leave you. Worse yet, I see mandatory bundles that trap customers into buying services they don’t want – and when these customers finally get fed up and find an alternative you’ll never see them again.

The FCC and Data Caps

I’ve railed against low data caps in this blog a number of times over the last few years. Low data caps stop some households from partaking in the basic web services that most of us take for granted. The FCC is now being prodded to confront this issue since earlier this month Netflix filed at the FCC asking it to eliminate data caps.

In that filing Netflix argued that web-based video is now an expected service for households. They threw out a statistic I’ve never seen before: the average household now uses 300 gigabytes per month of download capacity just to satisfy its TV viewing habits. And they warn that that level of bandwidth demand is growing rapidly, particularly with the growing popularity of 4K video.

They argue quite correctly that households with low data caps can’t afford to watch video like everybody else. Our firm works a lot in rural America and I have talked to numerous households who tell me the same thing. Households with low data caps (like those that come with satellite broadband) closely monitor and ration their broadband usage, and they say that lack of access to the web is one of the major points of contention in their household. There are many horror stories where the kids watch a lot of video or do online gaming and the parents then get a gigantic monthly bill for the usage.

It’s hard to know where the FCC stands on data caps. Last year when they were getting flooded with complaints about Comcast’s data cap trials, the staff there made numerous statements that suggested data caps were under investigation. But then Comcast raised the data cap to a terabyte and the issue faded away. More recently it seems that the FCC sees data caps as a pricing issue – something they told ISPs they would never get involved with.

But there are still numerous ISPs that enforce data caps and the issue is still very much alive. Certainly the most abusive form of data caps is with cellphone data, and wireless data prices in this country are among the most expensive in the world.

The data cap issue is going to get new legs as the big telcos build rural broadband using CAF II funds. The FCC in that docket said that networks built with CAF II funding could not have data caps any smaller than 150 GB, and so we expect most of the CAF companies to use the 150 GB cap. There are going to be millions of rural homes that get their first broadband only to find out that they can’t use it the way they expected to watch video. I am sure a lot of them are going to get a shock when they see their first bill with huge overage charges for usage above the 150 GB cap.
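Some quick back-of-the-envelope math shows how easily a video-watching household blows through a 150 GB cap. The streaming rate and overage price below are illustrative assumptions for the sake of the arithmetic, not figures from any actual carrier:

```python
# Rough illustration of how fast video viewing exceeds a 150 GB data cap.
# The streaming rate and overage price are assumptions, not carrier figures.

HD_GB_PER_HOUR = 3.0   # assumed data consumed per hour of HD streaming
CAP_GB = 150           # minimum cap allowed under the CAF II docket
OVERAGE_PER_GB = 0.50  # hypothetical per-gigabyte overage charge

def monthly_usage_gb(hours_per_day, gb_per_hour=HD_GB_PER_HOUR, days=30):
    """Estimate a household's monthly streaming volume in gigabytes."""
    return hours_per_day * gb_per_hour * days

def overage_cost(usage_gb, cap_gb=CAP_GB, price_per_gb=OVERAGE_PER_GB):
    """Cost of the usage that lands above the cap, at a flat per-GB price."""
    return max(0.0, usage_gb - cap_gb) * price_per_gb

usage = monthly_usage_gb(hours_per_day=4)  # a household watching 4 hours/day
print(usage)                # 360.0 GB -- well past a 150 GB cap
print(overage_cost(usage))  # 105.0 in overage charges at the assumed rate
```

At these assumed rates, even a modest four hours of daily HD viewing more than doubles the cap, which is why the first post-CAF II bills are likely to come as a shock.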

The FCC is under no obligation to respond to the Netflix complaint. The FCC has always had the freedom to choose the issues it wants to investigate, and so they could file this complaint away and do nothing. They also have the ability to open a rulemaking to gather more facts on the issue, but would still have no obligation to act. There are numerous rulemakings and dockets at the agency that have been open for years and that may never be resolved.

But data caps are discriminatory to rural and poor customers. The big ISPs have placed severe data caps on Lifeline data connections, and through CAF II rules will do the same for rural customers. Since most of the country still has no choice among ISPs it can be devastating if the only ISP available imposes draconian data caps.

I certainly hope the FCC takes up the issue. They now have the authority to do so under Title II regulation. We’ve known for years that data caps are not a network necessity for most ISPs. And that means that ISPs with data caps view them as a backdoor way to increase rates. They want to advertise cheap starter rates but then use data caps to get a lot of money out of customers at the end of the month. I think the FCC needs to talk to rural families that spend over $500 per month on cellular data just so that their kids can do homework.

Section 230 and the Internet

One of the most important laws affecting the Internet that you’ve probably never heard of is Section 230 of the Communications Decency Act of 1996. The law provides immunity from liability to anybody that publishes information generated by others.

It is this law that holds Facebook harmless for content posted by its users, or protects a newspaper that allows comments on its articles. The law shields web companies from liability for things posted by their users. Without this law social media couldn’t exist, since somebody would sue a company like Facebook over anything they didn’t like that was posted by one of its billion-plus users.

This law has already been tested a number of times by various lawsuits and the law has always prevailed. For example, AOL was sued a number of times in the early days of the web for carrying defamatory statements or false customer profiles – all posted by its users. There are similar laws in Europe and Australia.

But there is a current lawsuit attacking Section 230 that is getting traction in the courts – and this has web companies worried. The case is Hassel vs. Bird and is being adjudicated in San Francisco. In the case, Ava Bird hired Dawn Hassel as an attorney to represent her in a slip-and-fall case. But then Bird basically disappeared and so Hassel dropped the case.

But Bird subsequently posted a defamatory and erroneous review about the lawyer on Yelp, and Hassel sued her over it. The court agreed with the lawyer and awarded her damages, and Hassel then asked the court to have the defamatory review removed from Yelp. And that’s where Section 230 came into play.

Yelp was not listed as a party to the case and is refusing to remove the bad review, claiming that doing so would create a bad precedent and is a violation of Section 230. They argue that to remove the review would be to admit wrongdoing and would open them to liability to Hassel. Yelp appealed the ruling, but the California appeals court sided with the first ruling and ordered Yelp to remove the review.

Yelp’s legal arguments center around the fact that they were not named as a defendant in the original suit. If they had been, they could have been heard in court before being ordered to take down the defamatory posting. They argue that they have been blindsided and never got their day in court.

Yelp is appealing the case to the California Supreme Court and has been supported in amicus briefs from across the web industry – Google, Facebook, Twitter, Microsoft, and many smaller web services, along with numerous newspapers.

These companies all argue that user-generated content and social media are how Americans communicate today. By definition there are people who don’t like what other people have to say, as this political season has certainly demonstrated. And without the protection of Section 230, companies that allow user commentary and content would eventually be driven out of business by becoming embroiled in countless lawsuits.

From a practical standpoint you can understand the lawyer’s concern. She successfully won a case against somebody who defamed her, and yet the defamatory review is still available to all on the Yelp site. But the flip side of that is that if Yelp agrees to take down the defamatory posting then they are open to suit by Hassel by having admitted some responsibility for the process.

It’s not hard to picture what happens if Yelp loses this case. Web companies will not be able to take a chance on negative information and a site like Yelp would probably delete all negative reviews – which would invalidate what they do for a living. And social media sites like Facebook probably couldn’t function at all, because almost everything posted there – from skinhead websites through pictures of puppies – offends somebody.

I’m sure that the average person doesn’t appreciate the underlying laws and precedents that allow the web to function the way it does today. If even one of these basic linchpins is removed then the whole thing could come tumbling down, or at best could morph into something we wouldn’t recognize or like. I don’t think any of us want a web where the corporate lawyers at each web company decide what content is or is not safe for them to carry.