Using Gigabit Broadband

Mozilla recently awarded $280,000 in grants from its Gigabit Communities Fund to projects that are finding beneficial uses of gigabit broadband. This is the latest set of grants, and the organization has awarded more than $1.2 million to over 90 projects in the last six years. For any of you not aware of Mozilla, they offer a range of open-standard software that promotes privacy. I’ve been using their Firefox web browser and other software for years, and as an avid reader of web articles I use their Pocket app daily for tracking the things I’ve read online.

The grants this year went to projects in five cities: Lafayette, LA; Eugene, OR; Chattanooga, TN; Austin, TX; and Kansas City. Grants ranged from $10,000 to $30,000. At least four of those cities are familiar names. Lafayette and Chattanooga operate two of the largest municipally owned fiber networks. Austin and Kansas City have fiber provided by Google Fiber. Eugene is a newer name among fiber communities and is in the process of constructing an open-access wholesale network, starting in the downtown area.

I’m not going to recite the full list of projects; a synopsis of them is on the Mozilla blog. The awards this year have a common theme of promoting the use of broadband for education. The awards went mostly to school districts and non-profits, although for-profit companies are also eligible for the grants.

The other thing these projects have in common is that they are developing real-world applications that require robust broadband. For example, several of the projects involve using virtual reality. There is a project that brings virtual reality to several museums and another that shows how soil erosion from rising waters and sediment mismanagement has driven the Biloxi-Chitimacha-Choctaw band of Indians from the Isle de Jean Charles in Louisiana.

I clearly remember getting my first DSL connection at my house after spending a decade on dial-up. I got a self-installed DSL kit from Verizon and it was an amazing feeling when I connected it. That DSL connection provided roughly 1 Mbps, which was 20 to 30 times faster than dial-up. That speed increase freed me up to finally use the Internet to read articles, view pictures and shop without waiting forever for each web site to load. I no longer had to download software updates at bedtime and hope that the dial-up connection didn’t crap out.
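
Those speed jumps are easy to put in perspective with some back-of-the-envelope arithmetic. The short Python sketch below compares download times at the connection speeds mentioned above; the speeds and the 5 MB file size are illustrative round numbers, not measurements:

```python
# Rough comparison of download times at nominal line rates.
# Real-world throughput would be somewhat lower than these figures.

SPEEDS_MBPS = {
    "dial-up (56 kbps)": 0.056,
    "early DSL (1 Mbps)": 1.0,
    "gigabit fiber (1000 Mbps)": 1000.0,
}

def download_seconds(size_megabytes: float, speed_mbps: float) -> float:
    """Time to move a file of size_megabytes at speed_mbps (8 bits per byte)."""
    return size_megabytes * 8 / speed_mbps

# Time to fetch a 5 MB software update at each speed.
for name, mbps in SPEEDS_MBPS.items():
    print(f"{name}: {download_seconds(5, mbps):.1f} seconds")
```

A 5 MB update that took roughly twelve minutes on dial-up drops to under a minute on 1 Mbps DSL, which is the difference the paragraph above describes.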

Gigabit broadband brings that same experience. I remember when Google Fiber first announced they were going to build gigabit networks for households. At the time, most cable networks had maximum speeds of perhaps 30 Mbps – and Google was bringing more than a 30-times increase in speed.

Almost immediately we heard from the big ISPs, who denigrated the idea, saying that nobody needs gigabit bandwidth and that this was a gimmick. Remember that at that time the CEO of almost every major ISP was on the record saying that they provided more than enough broadband to households – when it was clear to users that they didn’t.

Interestingly, since the Google Fiber announcement the big cable companies have decided to upgrade their own networks to gigabit speeds and ISPs like AT&T and Verizon rarely talk about broadband without mentioning gigabit. Google Fiber reset the conversation about broadband and the rest of the industry has been forced to pay heed.

The projects being funded by Mozilla are just a few of the many ways that we are finding applications that need bigger broadband. I travel to communities all over the country and in the last year I have noticed a big shift in the way that people talk about their home broadband. In the past people would always comment that they seemed to have (or not have) enough broadband speed to stream video. But now, most conversations about broadband hit on the topic of using multiple broadband applications at the same time. That’s because this is the new norm. People want broadband connections that can connect to multiple video streams simultaneously while also supporting VoIP, online schoolwork, gaming and other bandwidth-hungry applications. I now routinely hear people talking about how their 25 Mbps connection is no longer adequate to support their household – a conversation I rarely heard as recently as a few years ago.
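
That 25 Mbps complaint is easy to sanity-check with simple arithmetic. The sketch below sums per-application bandwidth figures for a busy household evening; the numbers are rough, commonly cited estimates of my own choosing, not measurements:

```python
# Sum the concurrent demands of a typical evening in a multi-user
# household and compare the total against a 25 Mbps connection.
# Per-application figures are illustrative estimates.

HOUSEHOLD_MBPS = {
    "4K video stream": 15.0,
    "HD video stream": 5.0,
    "video call / VoIP": 2.0,
    "online gaming": 3.0,
    "schoolwork / web browsing": 2.0,
}

total = sum(HOUSEHOLD_MBPS.values())
print(f"Concurrent demand: {total} Mbps")  # Concurrent demand: 27.0 Mbps
print(f"Fits in 25 Mbps? {total <= 25}")   # Fits in 25 Mbps? False
```

Even with conservative estimates, one evening of simultaneous use overruns a 25 Mbps connection, which matches what I now hear from households.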

We are not all going to need gigabit speeds for a while. But the same was true of my first DSL connection. I had that connection for over a decade, and during that time my DSL got upgraded once, to 6 Mbps. But even that eventually felt slow, and a few years later I was the first one in my area using the new Verizon FiOS with a 100 Mbps connection on fiber. ISPs are finally facing up to the fact that households expect a lot of broadband speed. The best ISPs are responding to this demand, while some bury their heads in the sand and try to convince people that their slower broadband speeds are still all that anybody needs.

New Video Format

Six major tech companies have joined together to create a new video format. Google, Amazon, Cisco, Microsoft, Netflix, and Mozilla have combined to create a new group called the Alliance for Open Media.

The goal of this group is to create a video format that is optimized for the web. Current video formats were created before there was widespread video viewing in web browsers on a host of different devices.

The Alliance has listed several goals for the new format:

Open Source: Current video codecs are proprietary, making it impossible to tweak them for a given application.

Optimized for the Web: One of the most important features of the web is that there is no guarantee that all of the bits of a given transmission will arrive at the same time. This is the cause of many of the glitches one gets when trying to watch live video on the web. A web-optimized video codec will be allowed to plow forward with less than complete data. In most cases a small amount of missing bits won’t be noticeable to the eye, unlike the fits and starts that often come today when video playback is delayed waiting for packets.

Scalable to any Device and any Bandwidth: One of the problems with existing codecs is that they are not flexible. For example, consider a time when you wanted to watch something in HD but didn’t have enough bandwidth. The only option today is to fall all the way back to an SD transmission, at a far lower quality. But between these two standards is a wide range of possible options where a smart codec could analyze the available bandwidth and then maximize the transmission by choosing among the many variables within the codec. This means you could produce ‘almost HD’ rather than defaulting to something of much poorer quality.

Optimized for Computational Footprint and Hardware: This means that the manufacturers of devices would be able to optimize the codec specifically for their devices. Not all smartphones, tablets, or other devices are the same, and manufacturers would be able to choose a video format that maximizes the video display for each of their devices.

Capable of Consistent, High-quality, Real-time Video: Real-time video is a far greater challenge than streaming video. Video content is not uniform in quality and characteristics, so there can be a major difference in quality between watching two different video streams on the same device. A flexible video codec could standardize quality much the same way that a sound system can level out volume differences between audio streams.

Flexible for Both Commercial and Non-commercial Content: A significant percentage of videos watched today are user-generated and not from commercial sources. It’s just as important to maximize the quality of Vine videos as it is for showing commercial shows from Netflix.
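
The ‘scalable to any bandwidth’ goal above can be illustrated with a small sketch. The bitrate ladder below is hypothetical – real codecs and players define their own encoding ladders – but it shows the idea of picking the best quality tier that fits the measured bandwidth rather than falling all the way back to SD:

```python
# A toy quality selector: pick the highest rung of a bitrate ladder
# that fits the available bandwidth. The (bitrate, label) pairs are
# illustrative values, listed from best to worst quality.

LADDER = [
    (8.0, "1080p HD"),
    (5.0, "720p"),
    (3.0, "'almost HD' 576p"),
    (1.5, "480p SD"),
    (0.7, "360p"),
]

def pick_quality(available_mbps: float) -> str:
    """Return the best quality tier that fits the available bandwidth."""
    for bitrate, label in LADDER:
        if bitrate <= available_mbps:
            return label
    return LADDER[-1][1]  # below the lowest rung: fall back to it anyway

print(pick_quality(10.0))  # 1080p HD
print(pick_quality(4.0))   # 'almost HD' 576p
print(pick_quality(1.0))   # 360p
```

The point is the middle case: with 4 Mbps available, the selector lands on an intermediate tier instead of jumping straight from HD down to SD.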

There is no guarantee that this group can achieve all of these goals immediately, because that’s a pretty tall order. But the combined power of these firms is certainly promising, and the potential for a new video codec that meets all of these goals is enormous. It would improve the quality of web videos on all devices. Personally, quality matters to me, which is why I tend to watch videos from sources like Netflix and Amazon Prime; by definition, streamed video can be of much higher and more consistent quality than real-time video. But I’ve noticed that my daughter has a far lower standard of quality than I do and watches videos from a wide variety of sources. Improving web video, regardless of the source, will be a major breakthrough and will make watching video on the web enjoyable to a far larger percentage of users.

What is WebRTC?

There is yet another new threat/opportunity for the telecom industry in WebRTC. That stands for Web Real Time Communication and is a project to create an open standard for delivering high-quality voice and data applications for a wide variety of platforms including browsers and mobile phones, all using the same set of protocols.

The most immediate use for the new standard is building direct voice and video communication applications from every major web browser. The project is being funded and developed by Google, Mozilla, and Opera. Microsoft has said that they are working towards developing a real-time WebRTC app for Internet Explorer.

From a user perspective, WebRTC will enable anybody to initiate voice and/or video communication with anybody else using a browser or using a WebRTC-enabled device. What is unique about this effort is that the brains of the communication platform will be built into the browser, meaning that an external communications program will not be required to make such a connection. This creates browser-to-browser communication and cuts out a host of existing software platforms used today to perform this function.
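
One detail worth noting is that WebRTC standardizes the media connection but deliberately leaves ‘signaling’ – how two browsers find each other and exchange session descriptions – to the application. The toy Python relay below sketches that offer/answer exchange; the class and message fields are hypothetical placeholders, and a real application would use a server (WebSockets, for example) and actual SDP session descriptions:

```python
# A toy in-memory signaling relay illustrating the offer/answer
# exchange that precedes a WebRTC connection. Real signaling runs
# over a network channel and carries SDP blobs, not these dicts.

class SignalingChannel:
    """Relays session-description messages between peers by name."""
    def __init__(self):
        self.mailboxes = {}

    def send(self, to_peer: str, message: dict):
        self.mailboxes.setdefault(to_peer, []).append(message)

    def receive(self, peer: str) -> dict:
        return self.mailboxes[peer].pop(0)

channel = SignalingChannel()

# The caller creates an offer describing the session it wants.
offer = {"type": "offer", "from": "alice", "media": ["audio", "video"]}
channel.send("bob", offer)

# The callee receives the offer and responds with an answer.
received = channel.receive("bob")
answer = {"type": "answer", "from": "bob", "accepts": received["media"]}
channel.send("alice", answer)

final = channel.receive("alice")
print(final["type"])  # answer
```

Once the offer and answer have been exchanged, the browsers negotiate the media path directly; that media handling is the part WebRTC builds into the browser itself.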

This means that the big browser companies are making a big play for a piece of the communications market. The WebRTC platform will put a lot of pressure on other existing applications. For example, WebRTC could become the de facto standard for unified communications. This would let the browser companies tackle this business, which is today controlled by softswitch, router, or software vendors.

WebRTC is also going to directly compete with the various communication platforms like GoToMeeting and Skype. I maintain half a dozen such platforms on my computer that I’ve needed in order to view slide shows from different clients and vendors. WebRTC would do away with these intermediate platforms and let anybody on a WebRTC browser communicate with anybody else on one. You should be able to have a web meeting where participants on Google Chrome, Mozilla Firefox, or Internet Explorer all view and discuss a slide show together from their different platforms.

In the next generation of the standard the group will be developing what they call Object-RTC, which will be a platform that will integrate the Internet of Things into the same communications platform. This will enable anybody from any browser to easily communicate with devices that are on the Object-RTC platform, making it far easier for the normal person to integrate the IoT into their daily lives. This could become the standard platform that will allow you to communicate with your IoT devices equally easily from your PC, tablet, or smartphone. This is presumably a market grab by the browser companies to make sure that the smartphone doesn’t become the only interface to the IoT.

While the WebRTC development effort is largely being funded by Google and the other browser companies, numerous other companies have been developing WebRTC applications in an effort to keep themselves relevant in the future communications market.

Since the WebRTC platform is browser-based, it’s estimated that it will be available to 6 billion devices by the end of 2019. One would think that browser-based communications will grow to be a major means of communicating by then, putting additional pressure on companies today that make a living from providing voice.

Because it’s browser-based, WebRTC is likely to have more of an initial impact on the residential market. Larger businesses today communicate using custom software packages, and as WebRTC becomes the standard those platforms will likely all incorporate it. To that end, we have already seen some large companies snag some of the early WebRTC developers. For example, Telefónica acquired start-up TokBox in 2012. More recently, the education software company Blackboard bought Requestec, and Snapchat paid $30 million to buy WebRTC startup AddLive.

One can expect a mature WebRTC platform to transform online communications. If people widely accept WebRTC (or one of the many different programs that will use the standard), then it could quickly become the standard way of communicating. What is clear is that with companies like Google, Microsoft, and Mozilla behind the effort, this new communications standard is going to become a major player in the communications business. This is going to mean fewer minutes on the POTS telephone network. It will also put huge pressure on intermediate communications platforms like GoToMeeting, and those kinds of services might eventually disappear. I remember hearing somebody say a decade ago that voice would eventually be a commodity, and this is yet another step towards making voice free.

Compromise on Net Neutrality?

The FCC is considering a compromise solution to net neutrality that is already satisfying almost nobody. Both through speeches given by FCC Chairman Wheeler and through some leaked memos, it’s clear that the FCC is strongly considering a solution that falls halfway between allowing fast-lane deals and regulating ISPs as common carriers.

While nobody has seen any actual proposed rules, the industry has already reacted strongly to the proposed solution. Apparently, what the FCC has in mind is to reclassify the interactions of the ‘back-end’ Internet as common carrier business while leaving the interactions between ISPs and end users as they are today, which is largely unregulated.

This means that the FCC would have the ability to oversee deals between companies like Level3 and Comcast – that is, between the companies that transport and switch Internet traffic. One would have to assume that if this were treated as a regulated common carrier business, rules similar to the way large carriers interact today would apply. For example, there are rules in place today for agreements between telecom carriers that dictate defined timelines for the completion of a negotiated arrangement and that define some broad parameters such agreements must follow. The current interconnection rules have stopped a lot of the abusive practices that arise from the natural advantage large carriers hold over small carriers, and they tend to make negotiations open and fair to both parties.

The FCC’s compromise is said to have come from a proposal submitted by Mozilla, although to me it seems to differ a lot from that proposal. Mozilla had suggested two forms of regulation. First, they recommended common carrier regulation between the companies that own the networks that physically carry Internet traffic, about the same as what the FCC is now considering. But Mozilla went on to say that the FCC should also regulate arrangements between ISPs and content creators like Netflix, which Mozilla called “remote delivery services”. Mozilla thought this created a back-door way for the FCC to still have some say over deals that affect the last mile between consumers and the ISPs. That part of the Mozilla proposal seems to have been left on the floor.

I’ve given this some thought all weekend, and there seem to be two things this proposal gives the FCC. First, by not reclassifying the whole Internet business as Title II, the FCC is probably trying to create a solution that has a chance of withstanding legal challenge. This proposal does not change the industry so drastically as to create a fatal flaw that would inevitably get the decision reversed.

But unfortunately this is still very much a proposal that favors the large ISPs. They will rant and rave and say they hate it, because that is the public relations game they must play, but they will all be pleased and will chalk this up as a victory. I find it unlikely that they will challenge this, because if they do the FCC will be left with little option but to try for total Title II regulation of the industry under the common carrier rules.

What I dislike about this, and what the public is going to dislike once they understand it, is that it still allows ISPs to do almost anything they want to consumers. They can cook up plans that give people special pricing if they limit their content to certain providers like Facebook. The ISPs are going to be free to implement more stringent data caps or to introduce plans that charge more for certain types of content. This ruling will make it clear that the FCC has given the ISPs free rein to do whatever they want at the customer level. The ISPs have been held in check for the last few years from doing anything too crazy with customers due to this impending ruling. But once it is resolved, the ISPs will be free to impose almost anything on customers they want to try.

When I first saw the headlines that there was a compromise solution I had a moment of hope where I thought the FCC would declare the connection to customers as common carrier business but would leave the network connections unregulated. Such a regime would be effective because the ISPs would be free to do whatever they want, up to the point of harming the customer product.

Regulating the customer connection is the only way to protect customers. The Mozilla proposal did this by regulating what it called the ‘remote delivery service’ aspects of the customer experience, meaning that ISPs could not undertake policies that would slow down Netflix at the customer level. To me that was a compromise because it still did not necessarily regulate everything about the customer interaction. For example, DSL services offered by the phone companies have always been regulated, and years ago the FCC said that telcos must provide ‘naked DSL’, meaning they must sell DSL as a standalone service without requiring that it be bundled with something else. The FCC has no such authority over cable modems or fiber networks because it never regulated the customer side of the Internet.

This proposal is neither a safe nor a wise solution, and it is not splitting the baby in the manner of wise King Solomon. This is being made to look like a compromise, but it gives the ISPs what they have always wanted, which is free rein to offer any plans they want to consumers. One only has to look at our Internet to know that the ISPs are about nothing but their own bottom line. I saw several articles last week reminding us that the US Internet product is among the slowest and most expensive in the western world, and this ruling is not going to change that.

A Solution for Net Neutrality?

Today Mozilla filed comments with the FCC with a clever solution that would fix the net neutrality fiasco. Attached is the Mozilla filing. I call the solution clever, because if the FCC wants to solve net neutrality Mozilla has shown them a path to do so.

Mozilla has asked to split Internet traffic into two parts. First is the traffic between ISPs and end-user customers. Mozilla is suggesting that this part of the business can remain under the current regulatory rules. The second portion is the traffic between ISPs like Comcast and AT&T and content providers like Facebook, Netflix, etc. Mozilla recommends that the FCC reclassify this as transport under Title II of the Communications Act of 1934, as amended by the Telecommunications Act of 1996.

The current dilemma we are facing with net neutrality is that the FCC lacked the courage to classify the Internet network as common carrier business. Instead, in 2002, when broadband was growing explosively, the FCC classified all Internet traffic as an information service. That decision is why we are even having the debate today about net neutrality. If the FCC had originally decided to regulate the Internet, it would have full authority to enforce the net neutrality rules it passed a few years ago.

But even in 2002 the FCC was a bit cowed by the political pressure put on them by lobbyists. The argument at the time was that the FCC needed to keep hands off the burgeoning Internet so as to not restrict its growth. It’s hard for me to see how classifying the Internet business as common carrier business would have changed the growth of the Internet and I believe it all boiled down to the fact that the cable companies did not want to be further regulated by the FCC.

The net neutrality rules written a few years ago by the FCC basically say that ISPs have an obligation to deliver all packets on the Internet without discrimination. Mozilla is suggesting that there is an additional legal obligation between ISPs and content providers to deliver their traffic without discrimination.

This argument might seem a bit obscure to somebody not in the industry, but it removes the dilemma of not being able to regulate the traffic between ISPs and content providers. The suggested change is to not classify data packets at the carrier level as an information service, but to recognize the traffic by its normal network function – that is, the transporting of data from one place to another. Today transport is regulated in the sense that if a carrier sells a data pipe of a certain bandwidth to another carrier, it is obligated to deliver the bandwidth it has charged for. Putting the gigantic data pipes that extend between companies like Netflix and Comcast under the transport regime would treat Internet traffic like any other data pipe.

This change makes a lot of sense from a network perspective. After all, it’s hard to think of the transaction where Netflix hands a huge data pipe to Comcast or AT&T as an information service. Comcast is doing no more than taking the data on that pipe and moving it where it is supposed to go. That is the pure definition of transport. It only becomes an information service on the last mile of the network where the data traffic is handed off to end-user customers. There are already millions of other data circuits today that are regulated under the transport rules. It makes logical sense to say that a 10 gigabit Internet circuit is basically the same, at the carrier level, as a 10 gigabit circuit carrying voice or corporate data. Data pipes are data pipes. We don’t peer into other data pipes to see what kind of traffic they are carrying, but by classifying the Internet as an information service that is exactly what we do with these circuits.

This idea gives the FCC an out if they really want net neutrality to work. I personally think that Chairman Wheeler is thrilled to death to see net neutrality being picked apart since he spent years lobbying against it before taking the job. So I am going to guess that the Mozilla suggestion will be ignored and ISPs will be allowed to discriminate among carriers, for pay. I hope he proves me wrong, but if he ignores this suggestion then we know he was only paying lip service to net neutrality.