Web TV Not Hitting the Mark

I am sure the day will come when there will be OTT web programming packages that are legitimate competitors to cable. But that day is not here yet. We are seeing the beginnings of web TV, but nothing out there is yet a game changer.

And that is not surprising. We still live in a world where content is under the very tight grasp of the programmers and they are not about to release products that cannibalize the cash cow they have from the cable providers. The early web products are being touted as attempts to lure in the cord cutters and cord nevers who no longer buy traditional cable.

Here is what we’ve seen so far:

  • Sling TV is certainly priced right, starting at only $20 per month. That price includes ESPN as well as a few other popular channels like the Food Network and the Travel Channel. They have a growing list of add-on bundles priced at $5 each. And they are just now launching HBO. But there are problems with the service. As I covered in a blog a few weeks ago, watching some NCAA first round basketball games on Sling TV was the most painful sports viewing experience I’ve ever had. And it’s been widely reported that they botched the NCAA finals. But there are drawbacks other than the quality. For example, you can only watch it on one device at a time, making it family unfriendly.
  • Sony Vue has two major limitations. First, right now it is only available through a Sony PlayStation, which costs between $200 and $400. Second, the service is not cheap. They have three packages priced at $49.99, $59.99, and $69.99 per month. Without even considering cable bundle discounts, these can cost as much as or more than normal cable.
  • Apple’s TV product is not even on the market yet. Their biggest limiting factor is that it’s going to require the use of a $99 Apple TV box. That unit has been far less popular than the Roku. Apple says they will have ‘skinny’ pricing similar to Sling TV.

There are several major factors that will work against web TV for the foreseeable future:

  • Incumbent Bundle Discounts. All of the major incumbent providers sell bundles of products and they charge a premium price to drop the bundle and go to standalone broadband. That is, if they will sell naked broadband at all. For instance, Comcast has no option for standalone broadband faster than 25 Mbps. When people do the math for canceling traditional cable many of them are going to see very little net savings from the change.
  • Issues with Live Streaming. People have become used to a certain quality of web viewing thanks to Netflix and Amazon Prime. But those services buffer their content ahead of the viewer, meaning that when you first start watching they send a burst of data and then stay about five minutes ahead of where you are viewing. This absorbs any variance in the Internet connection, making the viewing experience smooth and predictable. But live content, meaning shows that are broadcast at set times, is a far different challenge. Live shows can only be buffered a few seconds ahead, and thus are vulnerable to every little hiccup in a viewer’s local network (of which there are many, as becomes apparent when watching live sports on the web). The sketch after this list illustrates the difference.
  • Programmer Bundles. Programmers make a ton of money by bundling their content to the ISPs. Comcast, Verizon, and everybody else are not able to pick and choose the content they want. There are seven major program owners that control a big majority of cable channels, and when you want any of their content they generally insist that you take almost all of it. This lets the programmers force ISPs to take programs that they would likely never otherwise buy. Web TV is trying to differentiate itself by offering smaller bundles. But I am sure that programmers are making the web providers pay a premium price for choosing to take only a subset of their channels.
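
To see why buffering matters so much, here is a toy playback model in Python. It is only a sketch (the buffer depths, the outage length, and the delivery rates are made-up illustrative numbers, not measurements of any actual service), but it captures the basic difference between a cached on-demand stream and a live one.

```python
def stall_seconds(buffer_ahead_s, outage_start=100, outage_len=20, duration=600):
    """Toy model: the service keeps `buffer_ahead_s` seconds of video cached
    ahead of the viewer, then delivers in real time, except for one
    multi-second network outage. Returns seconds of frozen playback."""
    buffered = float(buffer_ahead_s)       # burst of data sent at startup
    stalls = 0
    for t in range(duration):
        in_outage = outage_start <= t < outage_start + outage_len
        delivered = 0.0 if in_outage else 1.0   # real-time delivery otherwise
        buffered = min(buffered + delivered, buffer_ahead_s)
        if buffered >= 1.0:
            buffered -= 1.0                # play one second of video
        else:
            stalls += 1                    # buffer is dry: the picture freezes
    return stalls

# A VOD service cached ~5 minutes ahead shrugs off a 20-second outage;
# a live stream buffered only a few seconds ahead freezes for most of it.
print("VOD, 300 s ahead:", stall_seconds(300), "seconds of stalls")
print("Live,  5 s ahead:", stall_seconds(5), "seconds of stalls")
```

In this toy run the cached stream never misses a frame, while the live stream freezes for roughly fifteen of the twenty outage seconds.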

The FCC is currently looking at the issue of web TV and they might make it easier for web companies to obtain content. If they do so, one would hope that they also make it easier for wireline cable providers to do the same. Nielsen released statistics late last year showing that the average household watches only around eleven channels out of the hundreds that are sent to them. Consumers and cable providers would all benefit greatly if the programming that is being forced upon us better matched what we actually want to buy.

The web TV companies are trying to do just that and put together packages of just the most popular content. But I laugh every time I see them talking about going after the cord cutters, who at this point are largely younger households, because the content they are choosing for the web so far is popular with people fifty and older (sometimes much older). I can’t see too many younger households being attracted to these first web TV packages. If the rules can be changed so that different providers can try different packages, then we might someday soon see a few killer web packages that can give traditional cable a run for its money. And perhaps what we are already seeing will be the wave of the future. Perhaps there will be numerous web TV offerings, each attracting its own group of followers, meaning no one killer package but dozens of small packages, each with its own loyal fans.

FCC Looking at Backup Power for CPE

The FCC is currently deliberating whether they should require battery or other backup power for all voice providers. They asked this question late last year in a Notice of Proposed Rulemaking (NPRM) in Docket 14-185, and recently numerous comments have been filed, mostly against the idea.

This would affect both cable TV companies and fiber providers since those technologies don’t provide power to telephone sets during a power outage. Customers still on copper have their phones powered from the copper (assuming they have a handset that can work that way), but the FCC sees the trend towards phasing out copper and so they ask the question: should all voice providers be required to provide up to eight hours of backup so that customers can call 911 or call for repairs?

The FCC also asks dozens of other questions. For instance, they ask if there should be an option for customers to replace batteries or other back-up power. They ask if something universal like 9 volt batteries might be made the default backup standard.

One can tell by the questions asked in the NPRM that the FCC really likes the idea of requiring battery backup. I put this idea into the category of ‘regulators love to regulate’, and one can see the FCC wanting to take a bow for providing a ‘needed’ service to millions of people.

But one has to ask: how valuable would this really be for the general public? As you might expect, both cable companies and fiber providers responded negatively to the idea. They made several valid points against it:

  • Most Handsets Don’t Use Network Power. We all remember the days of the wonderful Bell telephones that were powered from the copper network. If you had a problem with your phone, one of the first things you tried was to carry the phone outside and plug it into the NID to see if the problem was inside or outside of the house. I remember once, when I had an inside wiring issue, spending several days squatting on my carport steps to carry on with my work. And those phones were indestructible; my mother still has her original black Bell telephone and it works great. But today you have to go out of your way to buy a plain phone that is network powered. If you get a phone with a portable handset or with any built-in features, it’s going to need home power to work. So the question becomes: how many homes actually have phones that would work even if there were some sort of backup during an outage?
  • Cell Phone Usage. Landline penetration has fallen significantly in the country. At peak it was at 98% yet today the nationwide penetration is under 60%, with the penetration rate in some major cities far below that. But as landlines have dropped, cellphone usage has exploded and there are now more cellphones in the US than there are adults. As many filers pointed out, when power is out to a home people will make emergency calls from their cellphones. And for the 40% or so of homes that only use cellphones, it’s their only way to make such calls anyway.
  • High Cost of Maintaining Batteries. I have clients that operate FTTP networks and who originally supplied batteries for all of their customers. This turned into a very expensive maintenance nightmare. In an FTTP system these batteries sit inside the ONT (the electronics box on the side of the home). This means the ONT had to be opened by a company technician to replace the batteries, meaning a truck roll, and meaning a customer can’t replace their own batteries. When batteries go bad they must be replaced or they leak and damage the electronics, and these companies found themselves committing major resources to replacing batteries while also realizing that, due to the issues above, most of their customers didn’t care about having the backup.
  • What Do You Back Up? There are numerous different ways these days to provision broadband to people (and consequently voice). Some of these options don’t have a practical battery backup available. For example, a cable modem costs a lot more if it includes a power backup, particularly one that is supposed to last for eight hours. For ISPs who deliver broadband with unlicensed wireless networks, I can’t imagine any practical way to provide backup power other than supplying an off-the-shelf UPS. And today even the FTTP business is changing, and ONTs are becoming tiny devices that plug into an inside-the-house outlet. Also, who is responsible for providing the backup when a customer buys third party voice from somebody like Vonage that is provisioned over their broadband product?
  • This Adds to the Cost of Deploying Fiber. Building new fiber to premises is already expensive and such a requirement would probably add another $100 per household to the cost of deploying fiber, without even considering the ongoing maintenance costs.
  • Today Most of the Alternatives Proposed by the FCC Don’t Exist. Nobody has ever bothered to create standard battery backup units for a number of the network components in coaxial networks. Cable companies have been delivering voice for many years and have seen very little demand for backup. There certainly are not any backup products that rely on something standard like 9 volt batteries. And in many networks such a product would not be able to provide 8 hours of backup; for example, a cable modem will drain even a commercial UPS in a few hours (I know, I have mine set up that way). Some rough numbers follow after this list.
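
To put some very rough numbers on the backup question, here is a back-of-envelope calculation. The wattage and battery capacities below are assumptions I picked for illustration (actual equipment varies widely), but they show why an 8-hour standard is a stretch for small batteries.

```python
# Rough back-of-envelope runtime math; the wattages and battery capacities
# below are assumed ballpark figures for illustration, not measurements.

def runtime_hours(usable_watt_hours, load_watts):
    return usable_watt_hours / load_watts

modem_plus_gateway_w = 15.0   # assumed draw of a cable modem / voice gateway
small_ups_wh = 60.0           # assumed usable energy in a small consumer UPS
nine_volt_wh = 4.5            # assumed energy in a single 9 volt alkaline battery

print(f"Small UPS:       {runtime_hours(small_ups_wh, modem_plus_gateway_w):.1f} hours")
print(f"One 9 V battery: {runtime_hours(nine_volt_wh, modem_plus_gateway_w):.2f} hours")
# -> roughly 4 hours and 0.3 hours: nowhere near an 8-hour requirement.
```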

I am certainly hopeful that the FCC heeds the many negative comments about the idea and doesn’t create a new requirement for which I think there is very little public demand. Sometimes the best regulation is doing nothing, and this is clearly one such case.

How Vulnerable is Our Web?

We all live under the assumption that the web is unbreakable. After all, it has thousands of different nodes and is so decentralized that there isn’t even a handful of places that control the Internet. But does that mean that nothing could do enough harm to cripple it or bring it down?

Before I look at disaster scenarios, which certainly exist, there is one other thing to consider. The big global Internet as we think about it has probably already died. The Internet security firm Kaspersky reports that by the end of 2014 there were dozens of countries that had effectively walled themselves off from the global Internet. A few examples like China are well known, but numerous other countries, including some in Europe, have walled off their Internet to some degree in response to spying being done by the NSA and other governments.

So the question that is probably more germane is whether there is anything that could bring down the US Internet for any substantial amount of time. In the US there are a handful of major hubs in places like Atlanta, Dallas, San Francisco, Northern Virginia, and Chicago. A large percentage of Internet traffic passes through these major portals. But there are also secondary hubs in almost every major city that act as regional Internet switching hubs, so even if a major hub is somehow disrupted, these regional hubs can pick up a lot of the slack. Additionally, there is a lot of direct peering between Internet companies, and companies like Google and Netflix have direct connections to numerous ISPs using routes that often don’t go through the major hubs.

But still, it certainly could be disastrous for our economy if more than one of the major hubs went down at the same time. Many people do not appreciate the extent to which we have moved a large chunk of our economy to the Internet as part of the migration to the cloud. A large portion of the daily work of most companies would come to a screeching halt without the Internet, and many employees would be unable to function during an outage.

There have been numerous security and networking experts who have looked at threats to the Internet and they have identified a few:

  • Electromagnetic Pulse. A large EMP could knock out Internet hubs in ways that make them difficult to restart immediately. While we probably don’t need to worry much about nuclear war (and if we do, the Internet is one of my smaller worries), there is always the possibility of a huge and prolonged solar flare. We have been tracking solar flares for less than a century, and we don’t really know that the sun doesn’t occasionally pump out flares much larger than the ones we expect.
  • Introducing Noise. It is possible for saboteurs to introduce noise into the Internet such that it would accumulate to make it hard to communicate. This could be done by putting black boxes into numerous remote fiber switching points that would inject enough noise into the system to garble the signals. If enough of these were initiated at the same time the Internet wouldn’t stop, but most of what is being sent would have enough errors to make it unusable.
  • Border Gateway Hijacking. The Border Gateway Protocol is the system routers use to exchange the routes that tell Internet traffic where to go. If the BGP routers at major Internet hubs were infected or hacked at the same time, the Internet could lose the ability to route traffic correctly (see the sketch after this list for how a bogus route announcement diverts traffic).
  • Denial of Service Attacks. DDoS attacks have become common and for the most part these are more of a nuisance than a threat. But network experts say that prolonged DDoS attacks from numerous locations directed against the Internet hubs might be able to largely halt other web traffic. Certainly nothing of that magnitude has ever been undertaken.
  • Cyberwarfare. Perhaps the biggest worry in coming years will be cyberattacks that are aimed at taking down the US Internet. Certainly we have enough enemies in the world who might try such a thing. While the US government has recently beefed up funding and emphasis on defending against cyberattacks, many experts don’t think this effort will make much improvement in our security.
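
As a rough illustration of the BGP hijacking point above, here is a small Python sketch of longest-prefix routing, which is how routers pick among competing route announcements. The prefixes and network names are invented for the example, and real BGP route selection weighs many more attributes than prefix length, but the sketch shows why a bogus, more specific announcement can divert traffic.

```python
import ipaddress

# Toy routing table: prefix -> where traffic for that prefix gets sent.
routes = {
    ipaddress.ip_network("198.51.100.0/22"): "legitimate origin (AS 64500)",
}

def best_route(dest, table):
    """Pick the most specific (longest) matching prefix, as routers do."""
    matches = [net for net in table if ipaddress.ip_address(dest) in net]
    return table[max(matches, key=lambda net: net.prefixlen)] if matches else None

print(best_route("198.51.100.25", routes))
# -> legitimate origin (AS 64500)

# A hijacker announces a more specific slice of the same address space.
routes[ipaddress.ip_network("198.51.100.0/24")] = "hijacker (AS 64666)"
print(best_route("198.51.100.25", routes))
# -> hijacker (AS 64666): the longer prefix wins and traffic is diverted.
```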

Perhaps one of the biggest issues we have in protecting against these various kinds of attacks is that there is no ‘Internet’ infrastructure that is under the control of any one company or entity. There are numerous firms that own internet electronics and the fibers that feed the Internet; most of these companies don’t seem to be making cybersecurity a high priority. I’m not even sure that most of them know what they ought to do. How do you really defend against an all-out cyberattack when you can’t know ahead of time what it might look like?

This isn’t the kind of thing that should keep us up all night worrying, but the threats are there, and there are people in the world who would love to see the US economy take a huge hit. It will not be surprising to see a few such attempts over the coming decades – let’s just hope we are ready for them.

Europe Attacking Our Tech Companies

It’s clear that the European Union is attacking American technology companies. Evidence is everywhere. Consider the following examples of recent crackdowns against US technology in Europe:

  • Last year stringent rules were imposed on Google and other search engines to allow people to remove negative things from searches – these rules are being called the “right to be forgotten”.
  • The European Union is getting ready to file a massive anti-trust case against Google for the way that it favors its own search engine over others. The estimates are that the fines they are seeking could be as high as $6 billion.
  • Last year the European Parliament voted in favor of a non-binding resolution calling for Google to be broken into multiple companies.
  • Numerous countries in Europe have blocked services from Uber.
  • The EU is going after Apple’s fledgling music business, saying that Apple has the market power to persuade labels to abandon ad-supported services like Spotify.
  • A decade ago there were several major antitrust cases filed against Microsoft.

There are numerous reasons for the antipathy that Europe seems to have towards American companies. President Obama said in an interview last month that the negativity was largely driven by economic competition and that Europe wants to find a way to support its own burgeoning tech companies over the behemoth tech companies like Google, Facebook, and Microsoft. He thinks a lot of the complaints by the EU are due to lobbying by European tech companies. He said that “oftentimes what is portrayed as high-minded positions on issues sometimes is designed to carve out their (European) commercial interests.”

But the president also admitted that some of the reaction to American tech companies stems from Europe’s history of suppression of freedom by dictators. For example, Germany has spent the last few decades reintegrating East Germany and dealing with its history of oppression under the Stasi, the secret police. This makes some of these countries very sensitive to the recent revelations about the extent of the spying by the NSA. That one revelation might eventually be the beginning of the end of the open Internet, as numerous countries are now building countrywide firewalls to shield themselves from such spying. It’s natural that this mistrust carries over to companies like Google and Facebook, which clearly have a business model based upon profiling people.

Another reason for going after American companies is tax revenue. The American tech companies have become adroit at claiming revenues in jurisdictions where they pay little or no taxes. Of course, this means they avoid claiming profits in European countries that have fairly high tax rates. (It also means they avoid paying taxes in the US.)

Finally, there might be an even more fundamental reason for the apparent European distrust and dislike of American technology. An article published by Business Insider UK looks at the fundamental differences in the way Europeans and Americans view entrepreneurship, technology, and uncertainty avoidance. The article shows the results of a survey and study done by the European Commission looking at how citizens in various countries view certain issues. There has been a natural assumption that, since both places are democratic and share a lot of first world values, we naturally think the same way about technology. But the study shows some major differences between Europe as a whole and the US. Interestingly, England is very similar to the US in attitudes, and perhaps our Yankee ingenuity and willingness to take risks is really part of our British heritage.

Here are some of the findings of that study:

  • Over 90% of Americans think that individualism is more important than compliance with expected social values. In Europe only a little less than 60% of people value individuality first. And in some places like Russia and Denmark less than 30% valued individualism more than compliance with social expectations.
  • When asked to agree or disagree with the statement, “entrepreneurs exploit other people’s work”, only 28% of Americans agreed with that statement (and the American dream is largely to own your own business), while the results in Europe spanned from only 40% agreeing in France, to 50% in the Netherlands, and over 70% in parts of southern and eastern Europe.
  • The US has a much lower level of uncertainty avoidance (unwillingness to take a chance on new ideas and new technologies). In the US only a little over 40% of people view themselves as risk averse, while in Europe it’s over 70%.

This means that to some extent the European Union is representing the will of its people when it cracks down on US technology firms, which are viewed negatively as entrepreneurial and high risk. These kinds of cultural gaps are very hard to bridge, and US companies might have problems in Europe for decades – if the differences are even resolvable at all.

The Coming Wave of Disruption

In a recent keynote address at South by Southwest, Steve Case, the founder of AOL, predicted that we are on the brink of what he calls the third wave of changes that have come as a result of the Internet. The first wave took place from 1985 to 2000 and consisted of the rollout of ISPs like AOL, which convinced people to use the Internet. The second wave consisted of entrepreneurs like Google, Amazon, Facebook, Twitter and many others using the Internet as a platform to create new businesses and to bring new features and online products to customers.

The first two waves have been disruptive and we have seen major changes in industries like communications, media, and commerce. We’ve seen whole industries transform, such as the music industry, which first shifted from physical media to downloads and then morphed again with the advent of streaming. We’ve seen the newspaper business change drastically due to online news and information that no longer requires a printed page and is not tied to a fixed publication schedule. And we have seen every major corporation change the way it operates due to the Internet and cellphones.

Case thinks we are now poised for the third wave of disruption. And he thinks this will be the biggest wave of all, bringing with it major disruptions to almost every segment of our economy including healthcare, education, transportation, energy, food – you name it.

He noted that the pace of change is accelerating. For example, it took AOL a decade to reach 10 million users. It took Facebook only ten months to do the same thing. And now new apps sometimes reach their first million users within a few weeks after launch.

Case sees four trends that he thinks will shape the next wave of innovation: crowdsourcing, strategic partnerships, impact investing (online companies that focus on a purpose and not just on profits), and a globalization of startups.

You can look at any major field and easily foresee the coming changes. Take higher education. In the US, traditional college education is quickly becoming too expensive for the average student. But we see an increasing number of courses online and it’s not hard to imagine people getting their entire education online. There are already numerous master’s degree programs that are entirely online and it probably won’t be long until most higher education takes that path.

I’ve taken several electrical engineering courses from MIT online and these offer the same course materials that are taught on campus. As the Internet reaches worldwide we are also going to see advanced education reach billions of people who never would have had the opportunity before for advanced training. This change will likely transform campuses into enclaves for the rich who can afford to pay for the college ‘experience’. But most people, including many from faraway places, are going to learn the same information as those on a campus, cheaper and on their own schedule. And this will lead to Case’s expectation that startups will come from anywhere in the world.

Look at almost any other industry and you can see the same kinds of disruption coming. Robots are soon going to take over huge numbers of mundane jobs since they can do them faster, cheaper, and more accurately. But robots are also going to take over a lot of white collar jobs, including any that involve repetitive paperwork tasks. Companies that take advantage of robots will have a huge cost advantage over those that do not. In the most recent wave of innovation we saw jobs flee to places where the work could be done more cheaply. But those jobs are now going to leave those places and be replaced by steel collar workers. This might bring manufacturing back to the US, but not a ton of jobs with it.

We are spending a lot of effort in this country right now trying to figure out how to pay for our medical needs. But one can look out twenty years and foresee computers and robots transforming this industry as well. People will largely be able to get properly diagnosed by computers without even needing a live doctor as part of the process. In the next wave we are not going to see computers doing open heart surgery, but that is something that might be following in the fourth wave of innovation.

Interestingly, all of these changes are due to the improvement in computer chips and in the communication offered by the Internet. Those two things, and all of the innovations that follow from taking full advantage of those two things, are the drivers that are going to transform our world, transform our businesses and economies, and transform how we live our lives.

The Race to Zero


This is my 500th blog entry, which means I have written several books’ worth of words. When I started the blog I feared I might run out of ideas in a few months, but our industry has become so dynamic that I am regularly handed more topics than there are days in the week.

The cloud industry is often characterized by what is being called the race to zero. This is the phenomenon of ever-dropping prices for data storage. The race has largely been driven by Amazon, which has repeatedly reduced its cloud storage prices. Every time Amazon reduces the price of its AWS storage services, the other big cloud companies like Microsoft and Google go along.

There are numerous reasons for the price drops, all having to do with improved computer technology. Memory and storage devices have dropped in price regularly, while at the same time a number of new storage technologies have come into use. The large cloud companies have moved to more efficient large data centers to gain economies of scale. And lately the large companies have all been designing their own servers to be faster and more energy efficient, since the energy used for cooling is one of the largest costs of running a data center.

I remember in the late 90s looking to back up my company LAN offsite for the peace of mind of having a copy of our company records. At that time our data consisted of Word, Excel, PowerPoint, and Outlook files for around twenty people, and I’m sure that wasn’t more than a dozen gigabytes of total data. I got a quote for $2,000 per month, which consisted of setting up a shadow server that would mimic everything done on my server, backed up once per day. At the time I found that too expensive, so we stuck with daily cassette backups.

Let’s compare that number to today. There are now numerous web services that give away a free terabyte of storage. I can get a free terabyte from Flickr to back up photos. I can get the same thing from Oosah that allows me to store any kind of media and not just pictures. I can get a huge amount of free storage from companies like Dropbox to store and transmit almost anything. And the Chinese company Tencent is offering up to 10 terabytes of storage for free.

It’s hard for somebody who doesn’t work in a data center to understand all of the cost components of storage. I’ve seen estimates on tech sites saying that the cost of storing a gigabyte of data dropped from $9,000 in 1993 to around 3 cents in 2013. Regardless of how accurate those specific numbers are, they demonstrate the huge decrease in storage costs over the last few decades.
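
For what it’s worth, those two data points imply a remarkably steady compound decline. A quick calculation using only the figures quoted above:

```python
# Implied compound annual price decline from the figures quoted above
# ($9,000 per GB in 1993 to about $0.03 per GB in 2013).
start_price, end_price, years = 9000.0, 0.03, 20

annual_ratio = (end_price / start_price) ** (1 / years)
print(f"Each year's price is about {annual_ratio:.0%} of the prior year's,")
print(f"i.e. roughly a {1 - annual_ratio:.0%} drop per year.")
# -> about 53% of the prior year, or a ~47% annual decline: storage prices
#    have been close to halving every year for two decades.
```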

But consumers and businesses don’t necessarily see all of these savings, because the industry has gotten smarter and now mostly charges for value-added services rather than the actual storage. Take the backup service Carbonite as an example. Their service will give you unlimited cloud storage for whatever data you have on your computer. Their software then activates each night and backs up whatever changes you made on your computer during the day. This is all done by software and there are no people involved in the process.

Carbonite charges $59.99 per year to back up any one computer. For $99.99 per year you can add in one external hard drive to any one computer. And for $149.99 per year you can back up videos (not included in the other packages) plus they will courier you a copy of your data if you have a crash.

The value of Carbonite is that their software automatically backs you up once a day (and we all know we forget to do that). But that is not a complicated process and there have been external hard drives available for years with the same feature. But Carbonite is selling the peace-of-mind of not losing your data by putting it in the cloud. It must be a very profitable business since the cost of the actual data storage is incredibly cheap. Consider how much extra profit they make when somebody pays them $40 extra to back up an external hard drive.
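
As a purely illustrative back-of-envelope exercise, using the roughly 3 cents per gigabyte hardware figure cited above and some assumed numbers for backup size, redundancy, and hardware life, the raw storage behind a typical account looks like a small fraction of the subscription price:

```python
# Illustrative only: every number below except the 3 cents per GB cited
# earlier in this post is an assumption picked for the example.
cost_per_gb = 0.03          # hardware cost per GB cited earlier
backup_size_gb = 500        # assumed size of a typical customer's backup
drive_lifetime_years = 4    # assumed useful life of the hardware
redundancy = 3              # assumed extra copies kept for safety

hardware_per_year = cost_per_gb * backup_size_gb * redundancy / drive_lifetime_years
print(f"Raw storage hardware: about ${hardware_per_year:.2f} per customer per year")
# -> roughly $11 a year against a $59.99 subscription; the rest covers
#    bandwidth, facilities, software, support, and margin.
```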

In the business world, the fees paid for the cloud are all about software and storage cost isn’t an issue other than for someone who wants to store massive amounts of data. One might think the companies in the cloud business are selling offsite storage, but their real revenue comes from selling value-added software that helps you operate your business and manage your data. The storage costs are almost an afterthought these days.

The race to zero is not even close to over. In one of my blogs last week I talked about how magnetized graphene might increase the storage capacity of devices a million-fold. That technology is still in the labs, but it demonstrates that the march to ever-cheaper storage is far from done. We’ve come a long way from the 720 kb I used to squeeze onto a floppy disk!

Comcast and Gigabit Fiber

Comcast announced last week that they are going to start offering symmetrical 2 gigabit data speeds in Atlanta and that over the next year they will offer this to as many as 18 million subscribers. The announcement also said that Comcast would have a 1 gigabit product rolled out to most of its markets sometime in 2016.

The announcement says that customers have to be “within close proximity to Comcast’s fiber network” to get the product. And by that they mean you basically must live directly next to an existing fiber line. It’s hard to foresee Comcast building a lot of fiber to serve residences, even at the expected high price of this product. For them to build fiber to those 18 million homes would cost a lot more than what Verizon spent to build FiOS, and they are extremely unlikely ever to do that.

I subscribe to Comcast’s Blast service and get 50 Mbps download speed for a listed price of $50 per month. They also offer a 105 Mbps product for $80 per month in my market. In some markets there is a Blast product that can deliver 505 Mbps and which costs $400 per month. But Comcast doesn’t sell naked cable modems above 25 Mbps and the Blast products all require a bundle with a cable TV product. The smallest cable package is about $15 per month and includes HBO and local channels. There are extra fees for the cable modem and the settop box. Plus there are a few of the mystery fees I’ve discussed in this blog like a ‘local programming charge’. I don’t own a TV and so the cable I buy is a throwaway just so I can get the faster data speed. My $50 cable broadband actually costs me nearly $80 per month.
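
Here is the rough math on my own bill. Only the $50 Blast price and the roughly $15 minimum cable package come from above; the equipment and fee amounts are assumptions for illustration, since the exact line items vary by market.

```python
# Illustrative breakdown of how a "$50" broadband tier ends up near $80
# when a cable bundle is required. The last three amounts are assumptions.
charges = {
    "Blast broadband (listed)":  50.00,
    "Smallest cable TV package": 15.00,
    "Cable modem rental":         8.00,   # assumed
    "Settop box rental":          3.00,   # assumed
    "'Local programming' fee":    3.00,   # assumed
}
total = sum(charges.values())
print(f"Effective monthly cost: ${total:.2f}")   # -> about $79
```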

They aren’t going to announce the gigabit pricing until May, but with their existing half gigabit product costing $400 per month this is not likely to be cheap, except perhaps in neighborhoods where they are going to compete directly with Google or somebody else with a very fast product.

The announcement says that a customer must allow the installation of professional grade equipment. This means it is likely that a customer on this service is going to get the same termination router that is given to business customers who already subscribe to Comcast’s ‘Gigabit Pro’ product.

I am going to guess that initially this is only going to be available to a relatively tiny number of customers. Comcast has been decreasing the size of their fiber nodes over the last decade, so they probably have fiber within a few miles of most homes. But those networks have not been built in most cases with enough fiber pairs to be able to support widespread FTTP. I live in an upscale neighborhood but I am over a mile from the closest fiber I can find. I don’t know if it belongs to Comcast or CenturyLink, but I find it totally unlikely that Comcast would build that last mile to give me this product, particularly in a residential neighborhood.

And even though the cable network in my town is relatively new and was largely rebuilt a decade ago after Hurricane Charley leveled the area, we still don’t have access to the 505 Mbps product (nor would many here likely pony up $400 per month for it).

I am the last one to be negative about anybody who brings fast data speeds to customers, and I am sure that there will be some households that will buy the new product. But unless the pricing is made cheap enough to compete with Google, it’s going to be extremely unlikely that anybody who is not running a somewhat significant business out of their home is going to pay the likely high price of the product.

I’ve heard that CenturyLink’s residential gigabit product is priced at about $150, and I would probably pay that much for it if it were available (are you listening, CenturyLink?). After all, I have been a broadband advocate for fifteen years or so and I doubt I could say no to a gigabit. But if Comcast’s product is priced in line with their half gigabit product, then even I would have to pass. A 2 gigabit product in my home would give me great bragging rights, but unless I hit the lottery it’s likely going to be out of my price range.

Unless Comcast really fools me, this announcement is more fluff than reality. For Comcast to really get fiber past every home would cost them many billions of dollars, and I don’t know why they would do that when they already have a migration path on their coax to get to gigabit speeds. It’s just unfathomable that they would invest in an expensive new network to compete with their own existing expensive network.

How Did We Do with the National Broadband Plan?

Five years ago the FCC published the National Broadband Plan. This was a monstrous 400-page document that laid out a set of national broadband goals for the first time. Within it was a discussion of numerous goals the country should consider, and the document remains an interesting list today – sort of a wish list of broadband policies and achievements.

The country has come a long way since 2010 in terms of broadband. We’ve seen numerous neighborhood fiber networks built. We’ve seen cable modem technology get better, and the speeds of those products are greatly improved, at least in major metropolitan areas. We’ve seen an explosion in smartphone usage and seen our cellular networks largely upgraded to 4G LTE.

The FCC has led a few attempts to improve broadband. They have redirected the Universal Service Fund to bring broadband to rural areas and to schools and libraries. They have approved the use of more spectrum for cellular data. They have even updated the definition of broadband to a minimum of 25 Mbps download as a way to goad providers into increasing speeds.

Consider the six major goals adopted by the plan. Let’s see how we are doing on these:

Goal #1: At least 100 million U.S. homes should have affordable access to actual download speeds of at least 100 megabits per second and actual upload speeds of at least 50 megabits per second. This goal has mostly been met for download speeds, and most urban areas now have cable modem products that can deliver at least 100 Mbps download. We universally missed the 50 Mbps upload goal, though, and I’m not entirely sure why that was set as a goal. And there are very few places in the country where the 100 Mbps product is affordable, so most households still buy something much slower.

Goal #2: The United States should lead the world in mobile innovation, with the fastest and most extensive wireless networks of any nation. While we have upgraded our mobile networks, a number of other countries have done it sooner and offer faster speeds. But I think this is eventually going to be taken care of as cellular network owners migrate to software defined networks where they can upgrade huge parts of the network at once.

Goal #3: Every American should have affordable access to robust broadband service, and the means and skills to subscribe if they so choose. The key word here is affordable, and the US still has nearly the most expensive broadband among first world countries. While we have fast speeds available in many markets, they are often not affordable, and the vast majority of people subscribe to something slower due to the economics. As the FCC recently pointed out, we don’t have much competition in the country and far too many people have only one or two options for buying broadband. And we still very much have a digital divide, be it a physical lack of broadband in rural areas or an economic barrier in poorer urban areas.

Goal #4: Every American community should have affordable access to at least 1 gigabit per second broadband service to anchor institutions such as schools, hospitals and government buildings. We have made some progress in this area, and through the Universal Service Fund we ought to be getting gigabit fiber to a lot more schools over the next few years. The big challenge for this goal is getting broadband to rural schools since there are numerous counties in the country that have barely any fiber.

Goal #5: To ensure the safety of the American people, every first responder should have access to a nationwide, wireless, interoperable broadband public safety network. We are slogging forward on this issue through the FirstNet program, which intends to integrate all of the first responder networks into a single set of standards to ensure interoperability. This is going to remain a challenge in rural areas where wireless coverage is poor.

Goal #6: To ensure that America leads in the clean energy economy, every American should be able to use broadband to track and manage their real-time energy consumption. This really seems like an energy goal rather than a broadband goal. But smart thermostats that operate from home WiFi and can be accessed with a smartphone are now available at every hardware store. So, except in those areas with no broadband or cellular coverage, we have the technology to meet this goal. The percentage of homes with these devices is still relatively small, though, which makes you wonder why this was a major broadband goal.

I can’t put a percentage on how we have done. Certainly people in urban areas have better broadband than they did five years ago, but affordability is still a major issue. The rural copper networks continue to age and deteriorate, and while there is some construction of rural fiber, overall the rural areas are further behind the urban areas than they were five years ago. We are now seeing gigabit-capable fiber networks start to be made available to residents, but so far this reaches maybe one percent of homes in the country. And there is still a surprisingly large number of people suffering with dial-up or satellite data who are being left behind. It will be interesting to see how much closer we are to these goals in five more years.

Regulatory Alert – One Many Seemed to Have Missed

The original net neutrality ruling went into effect in October 2011. This was an order from the FCC titled In the Matter of Preserving the Open Internet, GN Docket No. 09-191, Report and Order, FCC 10-201, known at the time as the Open Internet Order. Of course, the heart of that order was challenged in court by Verizon which led to the recent net neutrality order earlier this month.

However, there were parts of that original order that were not challenged in court and that are still in effect. There is one important requirement, having to do with disclosure for Internet data products, that everybody should take note of. The disclosure requirements apply to all ISPs, both wireline and wireless. The gist of the requirements is that ISPs should “disclose the network management practices, performance characteristics, and terms and conditions of their broadband service.”

In the Order, the FCC included a long list of the types of information that would satisfy the disclosure requirement. ISPs should be reporting the following facts to their customers:

Network Practices

  • Congestion Management. Descriptions of congestion management practices; types of traffic subject to practices; purposes served by practices; practices’ effects on end users’ experience; criteria used in practices, such as indicators of congestion that trigger a practice, and the typical frequency of congestion; usage limits and the consequences of exceeding them; and references to engineering standards, where appropriate.
  • Application-Specific Behavior. Whether and why the provider blocks or rate-controls specific protocols or protocol ports, modifies protocol fields in ways not prescribed by the protocol standard, or otherwise inhibits or favors certain applications or classes of applications.
  • Device Attachment Rules. Any restrictions on the types of devices and any approval procedures for devices to connect to the network.
  • Security. Practices used to ensure end-user security or security of the network.

Performance Characteristics

  • Service Description. A general description of the service, including the service technology, expected and actual access speed and latency, and the suitability of the service for real-time applications. (Emphasis mine.)
  • Impact of Specialized Services. What specialized services, if any, are offered to end users, and whether and how any specialized services may affect the last-mile capacity available for, and the performance of, broadband Internet access service.

Commercial Terms

  • Pricing. Monthly prices, usage-based fees, and fees for early termination or additional network services.
  • Privacy Policies. Whether network management practices entail inspection of network traffic, and whether traffic information is stored, provided to third parties, or used by the carrier for non-network management purposes.
  • Redress Options. Practices for resolving end-user and edge provider complaints and questions.

I know that many ISPs took note of this requirement at the time of the original order. But most assumed that when the courts vacated the net neutrality provisions, the entire order was vacated.

If you have a good network, these are things that you want to be telling your customers. And if you think you have a better network than your competitors then you also want to make sure that your competitors are disclosing this same kind of information. The most interesting thing on the list of requirements is a disclosure of actual speeds, as opposed to advertised speeds. I know that this is a really big deal in rural markets where the large companies often advertise their urban products that are not actually available in smaller markets with older technology.

If you have not put together this sort of disclosure, you really need to do so. It’s somewhat surprising that no customer has ever complained to the FCC about ISPs not making these disclosures. I would guess that everybody got so confused by the court cases that the requirement got lost in the shuffle. I recall years ago that the same sort of thing happened with the original access charge order in 1984, where some sections were challenged and overturned while others went into immediate effect. In any event, if you haven’t made these disclosures you should do so, and you also ought to look to make sure that your competitors have done the same.

The Deep Web

Anybody reading this blog probably uses the Internet regularly, and we all have a general understanding of what the web is. It’s Google search, Facebook, Amazon.com, blogs, news sites, and a wide range of other material that somebody has taken the time to put onto the Internet. But what surprises many people is that everything we can see on the web is just a minuscule fraction of the actual web, and that most of what is out there is unavailable to us.

Mike Bergman, the founder of BrightPlanet, coined the phrase ‘deep web’ to describe all of the things that traverse the Internet that we can’t see. The part of the web we can see is called the surface web and it’s been estimated that the deep web is at least 500 times larger than the surface web.

The Google search engine probably looks at more of the surface web than any other crawler, and it’s been estimated that Google indexes perhaps 15% of the surface web and none of the deep web. This means your Google searches are drawing on only about 0.03% of what is actually on the web. And even that estimate may be generous, since the deep web seems to be growing exponentially.
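
The 0.03% figure falls straight out of those two estimates:

```python
# If the deep web is ~500x the surface web and Google crawls ~15% of the
# surface web, the indexed share of the whole web is tiny.
surface = 1.0
deep = 500 * surface
google_coverage = 0.15 * surface

share_of_total = google_coverage / (surface + deep)
print(f"Google sees about {share_of_total:.2%} of the whole web")
# -> about 0.03%, leaving roughly 99.97% unindexed.
```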

So what are all of these things we can’t see on the web? They fall into a number of different categories:

  • Private web content that requires a password. This includes huge databases like Lexis-Nexis (which contains court records and legal documents), most scientific papers, trade group papers that are available to members only, corporate information that is meant only for employees, and anything for which somebody wants to control (or charge for) access.
  • Unlinked web pages. Many web sites include pages that cannot be reached through links from the main page. Crawlers can’t generally find such pages.
  • Content that lies behind a form. In this industry I am often asked for my name and company before being given access to whitepapers and other content.
  • Web sites that are hidden on purpose. It’s possible to have a web page that fends off crawlers through techniques such as the Robots Exclusion Standard or CAPTCHAs. These kinds of web sites are often part of the darknet, which consists of sites used for nefarious purposes such as pornography, selling drugs, trading hacker information, sharing copyrighted material, and numerous other things the content owners want to keep under the radar. But the darknet isn’t always nefarious and might be used by political dissidents and others trying to hide their activities from authorities.
  • Cached content that is stored as pictures rather than as a PDF or other readable format.
  • Content in the form of a video. A search engine might note that a video exists, but cannot know the content of the video.
  • Scripted content. This means a page that is only enabled by JavaScript or Flash, such as online greeting cards.

There are techniques for finding things on the deep web, and one can imagine that governments around the world constantly search for things on the darknet. Normal web crawlers explore the web by following hyperlinks (the links that connect web pages). But that technique cannot uncover the deep web, since its content is not reachable through hyperlinks.
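
For anyone curious what ‘following hyperlinks’ amounts to, here is a minimal crawler sketch in Python. It is nothing like a production crawler, but it shows why anything that no crawled page links to, or that sits behind a login, a form, or a CAPTCHA, simply never enters the queue. The starting URL is hypothetical.

```python
# A minimal link-following crawler: a page is only discovered if some page
# already crawled links to it, so unlinked, password-protected, or
# form-gated content never gets visited. (Only a sketch, not Googlebot.)
import re
import urllib.request
from collections import deque

def crawl(seed_url, max_pages=50):
    seen, queue = {seed_url}, deque([seed_url])
    while queue and len(seen) <= max_pages:
        url = queue.popleft()
        try:
            html = urllib.request.urlopen(url, timeout=5).read().decode("utf-8", "ignore")
        except Exception:
            continue          # login walls, errors, non-HTML: nothing to follow
        for link in re.findall(r'href="(https?://[^"]+)"', html):
            if link not in seen:
                seen.add(link)
                queue.append(link)
    return seen

# pages = crawl("https://example.com")   # hypothetical starting point
```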

In 2005 Google introduced the Sitemap Protocol, which lets website owners give crawlers an explicit list of their pages, including pages that cannot be discovered by following links. Over time this kind of process could allow a company like Google to map a much larger portion of the deep web by identifying sites and pages that crawlers would otherwise miss. But that is only half of the battle, since much of the deep web sits behind passwords or anonymizing tools like Tor, leaving the contents out of reach of normal web crawlers. So the challenge remains to find a way to map and uncover data on the deep web and share it in a format that can be easily read and understood.

One imagines the NSA spends a lot of time crawling around the deep web, particularly the darknet, but for the rest of us it is largely going to remain hidden and out of sight. I’ve always wondered why some topics I look for don’t seem to turn up on the web – now I know they are probably there somewhere in the 99.97% of the web that the Google search engine doesn’t see.