The Urban Broadband Gap

It’s natural to think that all city-dwellers have great broadband options, but a closer look shows that’s often not the case. For various reasons there are sizable pockets of urban residents with gaping broadband needs.

Sometimes the broadband gap is only partial. Just yesterday I was talking to a man from Connecticut who lives in a neighborhood where most people commute to New York City for work. These are wealthy neighborhoods of investment bankers, stockbrokers and other white-collar households. They have cable modem service from Comcast and can get home broadband, but he tells me that cell phone coverage is largely non-existent – he can’t even use his cellphone outside of his house. There is a lot of talk about broadband migrating to wireless, but 5G broadband isn’t going to benefit people who can’t even get low-bandwidth cellular voice service.

I also have a good friend who lives in a multi-million dollar home in Potomac, Maryland – the wealthiest town in one of the wealthiest counties in the country. He has no landline broadband – no cable company, no Verizon FiOS, and not even usable DSL. His part of town has winding roads and sprawling lots and was built up over time. I’m sure it never met the cable company’s franchise density requirement of at least 15 or 20 homes per street mile – so the network never got built. Most of the town has broadband, but even the richest communities contain homes without it.

You often see this problem just outside of city boundaries. Cities generally have franchise agreements that require the cable company to serve everybody, or almost everybody. But since counties rarely have such agreements, the cable and phone companies are free to pick and choose who to serve outside of town. You will see one neighborhood outside of a city with a cable network while a similar neighborhood nearby goes without – it’s easy to spot these pockets by looking for satellite TV dishes. The difference between the two neighborhoods is often due to nothing more than the whim of the telco and cable companies at the time of original construction.

The fault for not having broadband can’t always be laid on the cable company. Apartment owners and real estate developers are often at fault. For example, there are many apartment complexes where the owner made a deal years ago with a satellite TV provider for bulk TV service on a revenue-sharing basis. In electing satellite TV the apartment owner excluded the cable company, and today the building has no broadband.

Real estate developers often make the same bad choices. For instance, some of them hoped to provide broadband themselves but it never came to fruition. I’ve even seen developments that waited too long to invite in the cable company or telco, and the service providers declined to build after the streets were paved. The National Broadband Map is a great resource for understanding local broadband coverage. In my own area there are two neighborhoods on the map that show no broadband. When I first saw the map I assumed these were parks, but there are homes in both areas. I don’t know why they are sitting without broadband, but it’s as likely to be a developer issue as a cable company issue.

There have also been several articles written recently that accuse the large cable companies and telcos of economic redlining. These companies may use some of the above excuses for not building to the poorer parts of an urban area, but overlaying broadband coverage and incomes often paints a startling picture. Since deciding where a cable company expands is often at the discretion of local and regional staff it’s not hard to imagine bias entering the process.

I’ve seen estimates that between 6 and 8 million urban households don’t have broadband available. These have to be a mixture of the above situations – the neighborhoods are outside of a franchise area, the developers or apartment owners didn’t let ISPs in, or the ISPs are engaging in economic redlining. Whatever the reasons, this is a lot of households, especially when added to the 14 million rural homes without broadband.

I spend a lot of my time working on the rural broadband gap, but I don’t see much concentrated effort looking at the urban gap. That’s probably because this gap comes one subdivision, one apartment building or one street at a time, with the surrounding households having broadband. It’s hard to cobble together a constituency of these folks, and even harder to find an economic solution to fix the problem.

Amazon as an ISP?

There is an article on The Information saying that Amazon is considering becoming an ISP. It cites an unattributed insider at Amazon who says the company has been discussing the idea. Officially the company denies the rumor, which is consistent with the way Amazon has always operated.

It’s an interesting concept, but I honestly have a hard time seeing it. Amazon has been growing in Europe, and the idea could make a little sense there. There are a number of city networks on the continent, as well as a few national networks, that allow open access to any ISP. On those networks Amazon could easily develop an ISP product. It already has massive data centers and it wouldn’t cost all that much to add the ISP functions.

But I just don’t see any big benefit to Amazon in the open access model. Due to price competition there is not a lot of profit for ISPs on open access networks. Maybe Amazon could gain some edge by bundling ISP access with its Amazon Prime video and music. But every ISP already carries Amazon’s content today, and unless bundling somehow sells a lot more Prime subscriptions it’s hard to see this as a big win.

I also can’t see any sense in Amazon being an ISP in the US. There are no open access networks to speak of outside a tiny handful of small municipal networks. One only has to look at Google’s foray into broadband in the US to see that it’s really hard to make money by building broadband infrastructure – at least the kind of money that excites stockholders. There are decent long-term infrastructure returns from building and operating a fiber network well, but those returns are minuscule compared to the returns on tech ventures.

I still don’t fully understand why Google got into the broadband business. In the fiber business they are investing a lot of money that will make relatively small returns compared to the rest of their core business. Google’s stock value comes from making high technology returns, and infrastructure returns can’t do anything but pull down their overall return. I can’t imagine it would be any different for Amazon.

Perhaps Amazon is intrigued by the idea of gigabit wireless connections. But I think everybody looking at this new technology is going to figure out that millimeter wave spectrum technology is still going to require a lot of fiber in the urban network.

And even if Amazon is comfortable with the lower returns, they still have to deal with network neutrality. It would seem that the best advantage to Amazon from being an ISP would be to somehow bundle their content and broadband connections together – something that is not allowed in the US, and only barely allowed in Europe.

The biggest problem we have with getting real broadband in this country is that big money chases big returns. There was a time when a lot of conservative investors were happy to have part of their portfolio in steady telephone, electric and water companies, because they knew those safe investments would pay secure dividends for decades.

But today investors look at all of the instant tech billionaires and don’t want to pour money into the basics any more. To compound the problem, the big telcos and cable companies invest no more capital than absolutely necessary to meet basic customer expectations, and their networks are not nearly as good as they should be. You can’t watch a quarterly presentation from one of these big companies without hearing about plans to curtail capital spending.

So is Amazon really going to become an ISP? They certainly have access to the cash if they really want to. But it’s hard to believe they want to shift toward being more of a bricks-and-mortar company when they have fought hard not to be that. I just can’t see enough benefit for a publicly traded tech company in being an ISP.

An Upgrade to G.fast

Nokia has announced a lab trial of the next generation of G.fast, the technology that can pump more bandwidth through telephone copper. They are calling the new technology XG.fast.

In a recent trial the equipment was able to send a 5 Gbps signal over copper for 100 meters and 8 Gbps for 30 meters. This is much faster than the roughly 700 Mbps top speed seen in G.fast trials. In real-life situations using older copper the speeds will not be nearly this fast. G.fast has achieved about half of its lab speeds in field trials, and it would be impressive if XG.fast can do the same.

The technology works by utilizing higher frequencies on the copper. Traditional VDSL uses frequencies up to about 17 MHz. G.fast uses frequencies between 106 MHz and 212 MHz. XG.fast climbs the spectrum even further and adds the band between 350 MHz and 500 MHz.
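
To make the spectrum comparison concrete, here’s a quick back-of-the-envelope sketch using only the band edges cited above (these are the figures from this article, not the formal ITU profile definitions):

```python
# Rough comparison of the spectrum each copper technology can use,
# based on the band edges cited above (all figures in MHz).
bands = {
    "VDSL":    [(0, 17)],                  # up to roughly 17 MHz
    "G.fast":  [(106, 212)],               # the 106-212 MHz band
    "XG.fast": [(106, 212), (350, 500)],   # adds the 350-500 MHz band
}

for tech, ranges in bands.items():
    total = sum(high - low for low, high in ranges)
    print(f"{tech:8s} usable spectrum: ~{total} MHz")
```

The extra usable spectrum is what makes the multi-gigabit lab results possible – though as noted above, field speeds on old copper tend to come in at roughly half the lab numbers.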

There are a lot of issues involved in using all of this frequency on small-gauge copper. The main problem is crosstalk – interference between adjoining copper wires that degrades the signal and drastically cuts down the distance it can be transmitted.

Nokia mitigates the crosstalk using vectoring, the same technique used with VDSL and other DSL technologies. Vectoring generates an out-of-phase signal that cancels out some of the interference. But there is so much interference at these frequencies that vectoring can only keep the signal coherent for the short distances seen in the trial.
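
The cancellation idea is easy to show numerically. Here’s a toy sketch – illustrative only, since real vectoring has to estimate a full crosstalk coupling matrix across every pair in a binder and precode accordingly; this assumes a perfect estimate of a single interferer:

```python
import numpy as np

# Toy model: a victim pair receives its own signal plus crosstalk
# coupled in from a neighboring pair.
t = np.linspace(0, 1e-6, 1000)              # 1 microsecond window
signal = np.sin(2 * np.pi * 150e6 * t)      # desired tone at 150 MHz
crosstalk = 0.4 * np.sin(2 * np.pi * 150e6 * t + 0.3)

received = signal + crosstalk               # what the line actually carries

# Vectoring, conceptually: inject an out-of-phase copy of the
# estimated crosstalk so the interference cancels at the receiver.
compensated = received - crosstalk          # ideal case: perfect estimate

print("interference power before:", np.mean((received - signal) ** 2))
print("interference power after: ", np.mean((compensated - signal) ** 2))
```

In practice the crosstalk estimate is imperfect and grows harder to maintain as frequency rises, which is why the coherent distances stay so short.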

To date there has not been a lot of interest in G.fast. Adtran, the other competitor in the space, claims to have now conducted ninety field trials of the technology worldwide. That’s an extraordinarily low number for a technology that can add speed to existing copper. It looks like most phone companies are not interested in the technology, and they have some good reasons.

The short distances make G.fast and its new successor impractically expensive in the copper plant. In order to use the technology a telco would have to mount an XG.fast transmitter on the pole outside each home – or perhaps serve a few homes from one unit in dense neighborhoods. And if the telco wants to take advantage of the faster speeds that XG.fast can deliver into the home, it also needs to string fiber to feed the XG.fast transmitters.

XG.fast is largely a fiber-to-the-curb technology, and the cost of building fiber up and down streets is the big hurdle to using it. Any company willing to spend the money to build that much fiber probably isn’t willing to trust copper for the last 100 feet.

There is one application where the technology makes good economic sense. It can be extremely costly to rewire older apartment buildings with fiber, but every apartment building has existing telephone wiring, and XG.fast can be used to move data from a telephone closet to the apartment units. This is far less costly than trying to snake fiber through older buildings. Since a lot of companies have avoided older apartment buildings, this might offer a relatively inexpensive way to bring them broadband.

You can’t fault Nokia for continuing to pursue the technology. There is a huge amount of copper still hanging on poles and the world keeps shouting for more broadband. But I get nervous about recommending any technology that isn’t widely accepted – I can picture a telco deploying this technology and then seeing vendor support for the product line dropped.

But I can’t see this ever being much more than a niche technology. Telcos in the US seem to be looking for reasons to tear down copper and don’t seem willing to take one more shot at a copper technology. There might be a good business case for using the technology to extend broadband inside older buildings. But US telcos seem completely uninterested in using this in older copper networks.

Thinking about Electronics Obsolescence

We are currently helping a number of clients make major upgrades to their networks, something we’ve done many times over the years. This got me thinking about obsolescence and about when and why we replace major electronics.

There are a couple of different kinds of obsolescence. First is physical obsolescence, when we replace things because they simply wear out. We do this all of the time with vehicles and hard assets, but it’s rare with electronics. I can only think of a few times over the years when we’ve helped people replace electronics that were failing due to age – a few that come to mind are some T-carrier systems in the customer network that lasted far more years than anybody expected.

A more common phenomenon is functional obsolescence where the electronics are not up to the task of handling newer needs. While this can happen with all kinds of electronics, the most common such upgrade has been replacing the electronics on fiber backbone or long-haul networks. There has been such a prolonged explosion in the amount of data our networks carry that it’s been common to overwhelm transport electronics.

In these cases we yank out fully functional electronics and replace them with something that can handle a lot more data. I hope we will see a little less of this in the future. One of the reasons we’ve needed these kinds of upgrades is that network engineers did not factor exponential bandwidth growth into their projections. The naturally conservative nature of engineers didn’t let them believe how much traffic would grow within just a few years of building a network. But I finally see a lot of them getting this.

We also see technologies that are much more easily expandable. For instance, a lot of fiber electronics are now equipped with DWDM and other tools that allow for an upgrade without a forklift replacement. The network operator can light a few more lambdas and get a boost in throughput.
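
As a rough illustration of why that matters, consider how capacity scales with the number of lit wavelengths. The per-wavelength rate below is an assumed figure for illustration, not any particular vendor’s spec:

```python
# Illustrative arithmetic only: added DWDM wavelengths multiply capacity
# without touching the glass or the outside plant.
per_lambda_gbps = 10                # an assumed per-wavelength rate
for lambdas in (4, 8, 40, 80):
    total = lambdas * per_lambda_gbps
    print(f"{lambdas:2d} lambdas x {per_lambda_gbps} Gbps = {total:4d} Gbps")
```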

My least favorite form of obsolescence is vendor obsolescence where functional equipment is made obsolete when a vendor introduces a new generation of electronics and stops supporting the old generation. Far too many times this feels like nothing more than the vendors trying to force more sales onto their customers rather than looking out for the customer’s best interest.

This is not a new phenomenon and there was nobody better at this in the past than companies like Nortel and Lucent. They constantly pushed their customers to upgrade and were famous for cutting off support to older equipment while it was still functional. But the practice is still very much alive today.

Losing vendor support for electronics is a big deal to a network owner. It means you will no longer be able to buy a replacement for a card that goes bad unless you can find one on eBay. It means that the vendor won’t talk to you about any problems that crop up in your network.

The industry is now entering the second round of vendor obsolescence with FTTH electronics. Vendors cut off BPON and other first-generation FTTH gear almost a decade ago and are now planning to do the same to GPON. I remember when BPON stopped being supported, every vendor of the next generation of equipment promised that the newer electronics would be forward compatible – meaning that the ONTs and field electronics would work with future generations of core electronics. But as I always suspected, this isn’t going to be the case, and there is going to be another forklift upgrade from GPON to the next generation of PON electronics.

The shame of this is that the older PON equipment still works great. I have a few clients who have kept BPON working for a decade after it was supposedly obsolete by buying spares on eBay. Those networks are now finally becoming functionally obsolete as customers use more data than the network can handle – but that happened ten years after the equipment was declared vendor obsolete. Most BPON electronics were well made, and the ONTs and other field electronics have been chugging along a lot longer than the vendors wanted.

It’s not always easy to decide to keep operating equipment that the vendor stops supporting. But I’ve seen this done many times over the years and can think of very few examples where it caused a major problem. It takes a little bravery to keep operating equipment without full vendor support, but management often chooses this option from the pragmatic perspective of economic reality. Most networks don’t make enough money to fund replacing all of the electronics every seven or ten years, and perhaps it is lack of money as much as anything that gives network owners their courage.

The IP Address Crunch

Sometimes it feels like small ISPs just move from one crisis to another. The latest problem I am hearing about is that ISPs are having a hard time getting new IP addresses – which is something they need in order to connect new customers to their network. I have clients who have been trying for months to find new addresses, and if they don’t find any they are soon going to have to turn away new customers.

We’ve known for decades that we would exhaust the supply of IP addresses. The IP world began rolling out IPv6 addresses in earnest around 2011, and that was supposed to be enough new addresses to last the whole world long into the future. The original Internet used IPv4 addresses, of which there are about 4.3 billion. The new addresses have far more digits – there are roughly 79 octillion (a 79 followed by 27 zeros) times more IPv6 addresses than IPv4 addresses. Even the tens of billions of expected IoT devices won’t make a dent in the new inventory.
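
The arithmetic is easy to verify – a quick sketch of the address-space math described above:

```python
# The address-space arithmetic behind the paragraph above.
ipv4_total = 2 ** 32       # about 4.3 billion addresses
ipv6_total = 2 ** 128      # about 3.4 x 10^38 addresses

print(f"IPv4 addresses: {ipv4_total:,}")             # 4,294,967,296
print(f"IPv6 addresses: {ipv6_total:.2e}")           # 3.40e+38
print(f"ratio: {ipv6_total // ipv4_total:.2e}")      # 2^96, about 7.9e+28
```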

So how can there be a shortfall of IP addresses with so many new ones available? The problem is the slow speed at which the world is implementing IPv6. Some of the large companies like Comcast, Verizon Wireless and T-Mobile have swapped all of their customers to IPv6 addresses, but overall the implementation has been sluggish. Google probably has the best measure of IPv6 adoption since they see a large chunk of the world’s traffic. In 2014 they reported that only 2% of the traffic reaching them used IPv6. At the end of last month that had finally climbed to 14%.
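
For the curious, it’s easy to check whether your own connection is part of that 14%. A minimal sketch using Python’s standard socket library – the hostname is just an example of a site known to publish IPv6 records, and this only tests your own stack and resolver, not global adoption:

```python
import socket

# Does this host's stack support IPv6, and can we resolve an
# IPv6 (AAAA) address for a well-known site?
print("IPv6 support compiled in:", socket.has_ipv6)

try:
    info = socket.getaddrinfo("google.com", 443, socket.AF_INET6)
    print("AAAA record resolved:", info[0][4][0])
except socket.gaierror:
    print("No IPv6 address found - likely an IPv4-only path")
```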

But so far the conversions have been done by the largest ISPs. It is exceedingly hard for small ISPs to make this transition, since they are more or less locked into the IP practices of the large carriers that sell them Internet bandwidth. It’s been estimated that small companies might not be offered IPv6 until perhaps 50% to 60% of Internet traffic is using the new addressing standard. By the looks of the growth curve that is still at least a few years away.

The bodies that assign IP addresses have all run out of new ones. The Internet Assigned Numbers Authority (IANA) free pool ran dry in February 2011. There are five Regional Internet Registries (RIRs) around the world, and the last of them ran out of IP addresses last year. Since then ISPs haven’t been able to get IP addresses through the normal channels.

So small ISPs are stuck in limbo. If they want to grow they need new IP addresses, but there are none available in the traditional channels. As happens with any scarce resource, a market of brokers has stepped in to meet the demand. There are several of these brokers worldwide. They have gone to large companies like GE, Halliburton and Ford and bought their inventories of unused IP addresses – and this process created a market.

Back in 2012 these brokers established market prices for IP addresses, starting at about $5 per address. But as the brokers have found fewer unused blocks, and as more ISPs have come looking for numbers, prices have risen, and IP addresses today sell for between $11 and $15 each.
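
To put those prices in perspective, here’s what common block sizes would cost at the quoted range (standard CIDR block math; the prices are simply the $11–$15 range mentioned above, not a current market quote):

```python
# Dollar cost of common IPv4 block sizes at the quoted broker prices.
low_price, high_price = 11, 15      # dollars per address, range cited above
for prefix in (24, 22, 20):
    addresses = 2 ** (32 - prefix)  # standard CIDR math
    print(f"/{prefix}: {addresses:5,} addresses -> "
          f"${addresses * low_price:,} to ${addresses * high_price:,}")
```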

So small ISPs should just be able to buy what they need from these brokers, right? Unfortunately it’s not that easy. The addresses are sold through a periodic online auction process, and as happens with any scarce resource there are now speculators buying IP addresses in the hope of selling them later at a higher price. The competition in these auctions has become fierce – to some extent it’s like trading bitcoins, where those with the fastest and most powerful computers win. The small ISPs I know tell me they are not getting any addresses. I know one ISP that has been failing at the process for over six months.

So we now have a situation where small ISPs are nearly locked out of the process of buying new IP addresses (and even when they win them, the addresses are expensive). This shortfall and the auction arbitrage are likely to last for a few more years. The economics of the market tell us that at some point the arbitrage price for IP addresses will drop; when that happens the speculators will ditch their inventory and IP addresses should become cheaper and more easily available. But that’s not expected until there are a lot more IPv6 users, so ISPs might be facing this problem for the next two years. I feel certain that we are going to see small ISPs that find themselves unable to add new customers to their networks – and in a world where we want broadband everywhere that is a disaster.

The Death of the Big Cable Bundles

There is a ton of evidence that customers no longer want the traditional 200 – 300 channel cable packages. For example, the number of ESPN customers has plunged by millions over the last year, to a far greater extent than the overall erosion of the cable industry. The ESPN phenomenon can only be explained by cord shaving – customers downsizing to smaller packages.

We got more evidence of this last week when Verizon CEO Lowell McAdam said that 40% of the cable packages sold by Verizon are now skinny bundles. He said that if he had his preference Verizon would offer only skinny bundles, because he doesn’t believe there is customer demand for the larger packages.

This makes sense, and we have had the statistics for years to tell us so. A Nielsen study earlier this year showed that the average person watches around 17 channels to the exclusion of all others. That means the average household is wasting a lot of money paying for channels they don’t want.

Other studies tell us the same thing. A Gallup poll earlier this year said that 37% of households don’t watch any sports – and yet sports programming has become the most expensive component of the big cable bundle. It’s only common sense that many of the 63% who do watch sports are casual fans, or fans of only one or two sports.

And the trend has to be downward for the channels on traditional cable. In May of this year Nielsen reported that almost 53 million US homes watch Netflix. Another 25 million watch Amazon Prime. Another 13 million watch Hulu, and since they beefed up their lineup and slashed their price the number of viewers is bound to climb.

Unfortunately skinny bundles are not available everywhere. Only the largest cable companies have so far been able to negotiate the right to sell smaller bundles, and among the large providers only Verizon and Dish Network are really pushing them. There are also a few skinny bundles on the web, like Sling TV, but every time I look their packages are getting fatter.

I can’t help but speculate about what would happen if every household was given the choice tomorrow to downsize their cable bundle and monthly cable bill. Leichtman Research Group announced a few months ago that the average cable bill in this country is now $103.10. That’s an astronomical number, and if that is the average then a lot of homes are paying much more. Contrast this with the new Dish Network skinny bundle that offers 50 channels for $39.99 per month.

The skinny bundle that is doing so well at Verizon isn’t even cheap and starts at $55 per month – but it’s a lot less expensive than the big traditional bundles. And the Verizon price is reduced significantly for customers buying a triple-play bundle.

I wrote a blog last week about how Wall Street is becoming unhappy with cable programmers, and at least one analyst has downgraded Discovery Networks and Scripps. What we might finally be seeing is a whole host of issues coming to bear on the industry at the same time. Cable bills are getting too expensive for a lot of homes. People are becoming more interested in content that is not on traditional cable. And the programmers are losing a little bit of the total lock they have had on the industry.

It’s hard to say when, or even if, the industry is going to break in any significant way. There are still just under 100 million homes paying for some version of cable TV, and the overall effect of cord cutting has only been shaving that by a little over 1% per year. But if the Verizon trend becomes the norm and most customers start preferring skinny bundles, the industry will be transformed anyway. ESPN has lost 10 million customers since 2013, and over half of those losses have come in the last year. The same thing has to be happening to many of the less-popular cable channels, and at some point the math just isn’t going to work for the programmers.

We’ve seen a similar phenomenon once before. There was a gradual erosion of home landline telephones after the advent of the cellphone, but after a few years of gradual declines we saw a deluge of people dropping home telephones. You could barely turn on a TV without hearing that a home telephone was a waste of money, and so it became the popular wisdom that home phones weren’t needed. The same thing could happen with skinny bundles, and the industry could be transformed in a short period of time if tens of millions of homes downsize their cable bundles. It is going to happen; we’ll just have to wait and see how fast and to what degree.

A New Telecom Act?

There has been a lot of talk during the last year about putting together a new Telecom Act. It’s been twenty years since the Telecom Act of 1996, which created CLECs, and a lot has changed in that time – the Act is largely obsolete. Unfortunately, given political gridlock it’s unlikely we’ll get a new Act that fixes our real problems. But I asked myself what I would include in a new Telecom Act if I were allowed to write it. Here are some of the top changes I would make:

Fund Fiber Everywhere. There was recently a bill introduced in Congress to add $50M to the RUS for rural broadband grants. That makes such a tiny dent in the problem as to be embarrassing. If we believe as a country that broadband is essential for our economic future, then let’s do what other countries have done and start a federal program to build fiber everywhere, from rural America to inner cities. I could write a week’s worth of blogs about how this could be done, but it needs to be done.

Make Broadband Affordable to All. The Lifeline program that subsidizes $9.25 per month for broadband for low-income households has the right intentions. But the amount of subsidy is ridiculously low. If we believe that schoolkids ought to have broadband to succeed then let’s do this right and pony up and find a way to pay for it.

Tax Broadband. The continuing ban against taxing the Internet is stupid. It was put in place years ago to protect a fledgling new Internet industry. Let’s put a tax on landline and cellular broadband to pay for getting fiber everywhere and broadband to everybody.

Stop Subsidizing Non-Broadband. It should be impossible for the FCC to provide any funding or subsidies to broadband connections that don’t meet their own definition of what constitutes broadband speeds.

Fix Pole Issues. Pole issues have been a bane to competitors since the last Telecom Act required pole owners to allow access. Let’s create common-sense rules that don’t allow pole owners to hold new competitors hostage.

Break the Power of the Programmers. Most of what has been broken in the cable TV industry has been due to the immense power and greed of the programmers to set the price and conditions for their content. It’s time to put a halt to contracts for content that force cable providers to buy programming they don’t want. And it’s also time to consider requiring programmers to offer each network a la carte and not in big bundles.

Unleash Skinny Bundles. Existing cable rules put handcuffs on cable providers. Rules that require specific kinds of bundles, such as basic and expanded basic, mean that a cable provider has a nearly impossible task putting together offerings that customers really want to buy. Let’s scrap those rules and start fresh with customer choice as the driver behind the new rules.

Make Cable Rules Apply to Everybody. Any new cable rules need to apply to everybody that provides content – over wirelines or over the Internet. Anything less than this gives massive advantages to one side or the other. I would be fine if the best way to do this is to have almost no rules!

Reinstitute Limitations on Ownership of Media. Allowing a handful of companies to own all of the television and radio stations has put a huge dent in our free press and in local control of news stations and reporting. Let’s break up these conglomerates and start over.

I could easily add forty more items to this list, but these were the ones that first came to mind as I was writing. What would you add to a new Telecom Act?

Free Broadband from Facebook

Facebook is talking to the FCC about launching a free Internet service in the US. This would provide a subset of the Internet for free to anybody with a smartphone and would provide such things as news, health information, job sites, and of course Facebook.

This would obviously benefit many people who can’t afford access to the web. Today around 82% of households nationally have some kind of access to the web. Some of those without broadband live in rural places with no access, some don’t want Internet access, and the rest would like web access but can’t afford it.

Facebook has launched a similar product around the world in 53 emerging markets in the Middle East, Asia Pacific and Latin America. This is offered under the name Free Basics.

But the free product ran into problems and has been banned in India because it violates net neutrality. The Indian net neutrality rules aren’t too different from our own, and the service is what is called zero-rated, meaning that usage of the plan is not counted against a data plan from a participating ISP.

In India the biggest complaint about the product was that it was restricted only to those things that Facebook wanted customers to see and not to the wider Internet. But in Facebook’s favor, it was free.

For this to work in the US, Facebook will need to find a US cellular partner which would not count usage of the app against a data plan. I recall that Facebook was close to this a few years ago in a partnership with T-Mobile that would have provided free access to a suite of products called GoSmart.

But more importantly, Facebook needs to convince the FCC that this is not a violation of net neutrality. The FCC has not formally made any pronouncements about zero-rating of wireless content, but it has talked to the major wireless carriers about the zero-rating they are already doing today.

This is the kind of situation that is really tough for regulators. With this kind of product Facebook could be providing some sort of free access to the web for millions of people in the country that might otherwise not have it. Even if it’s a scrubbed and sanitized piece of the web, it’s hard to find anything wrong with the results of that. People could buy a smartphone with no data plan and have access to parts of the web.

But the downside to the FCC is the same one faced by the Indian regulators. Once you let Facebook do this then the genie is out of the bottle and there doesn’t seem to be any way that the FCC could stop other kinds of zero-rating.

The dilemma is that Facebook is not quite like other companies. I am sure that somehow this isn’t costing Facebook too much and they might even make a little money from the idea. But Mark Zuckerberg seems to be on an altruistic mission to bring broadband access to the whole world. He has already used this idea to bring free broadband to many millions, and his goal is to bring it to billions.

But even with the altruism, this has certainly been good for Facebook – they had 1 billion users in 2015 and are now reported to have over 1.7 billion. That’s a lot of people to advertise to and to gather data from, which is how Facebook makes its money.

And of course, no matter how altruistic Facebook might be, nobody would expect the same motives from other large companies like Comcast, AT&T or Verizon. One of the main fears that drove the creation of net neutrality is that we could end up with a web that is filtered by the biggest ISPs and that the openness of the web would be killed by deals like the one Facebook wants to do. The web brought to you by Comcast is not the same web that we know today – and I think it’s a web that we don’t want as a society. But if we take the first step and let a big company like Facebook filter the web, we could be headed down the path where almost all future web access is filtered.

How to Collect Broadband Lifeline

The Wireline Bureau of the FCC released clarification rules last week in Docket DA 16-1118 that describe how companies can participate in the broadband Lifeline program. This is the program where the Universal Service Fund will compensate ISPs $9.25 per month for broadband customers that qualify for the Lifeline program.

The program requires landline speeds of 10/1 Mbps with a data cap of no less than 150 GB per month. Mobile speeds can be slower and there is also a much lower data cap starting at 500 MB and increasing to 2 GB by the end of 2018. The FCC has established a registry listing eligible participants called the National Eligibility Verifier. Only households in that registry can receive the Lifeline subsidy and only one subsidy is allowed per household.
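
For illustration, the minimum plan standards can be boiled down to a simple check. This is a hedged sketch – the function and parameter names are mine, and the intermediate mobile cap step for 2017 is an assumption, since the order as summarized above only specifies the 500 MB starting point and the 2 GB floor by the end of 2018:

```python
# Sketch of the minimum plan standards described above (not FCC code).
def meets_lifeline_minimums(mobile, down_mbps, up_mbps, cap_gb, year=2016):
    if mobile:
        # Mobile data-cap floor: 0.5 GB at the start, rising to 2 GB
        # by the end of 2018 (the 2017 step is an assumed midpoint).
        floor_gb = {2016: 0.5, 2017: 1.0, 2018: 2.0}[min(max(year, 2016), 2018)]
        return cap_gb >= floor_gb
    # Landline: 10/1 Mbps and a cap of no less than 150 GB per month.
    return down_mbps >= 10 and up_mbps >= 1 and cap_gb >= 150

print(meets_lifeline_minimums(False, 10, 1, 150))         # True
print(meets_lifeline_minimums(False, 25, 3, 100))         # False - cap too low
print(meets_lifeline_minimums(True, 4, 1, 2, year=2018))  # True
```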

The new clarification in the docket describes the process for ISPs to participate in the Lifeline Fund. The FCC will require ISPs to register as a Lifeline Broadband Provider (LBP). The FCC is developing an application process for ISPs that want to gain this designation.

The original order said that the FCC had up to six months to act on LBP applications, but there is now the ability to request a streamlined process where the FCC will approve requests within 60 days. Basically an ISP must complete the application, and if they don’t hear back from the FCC then they automatically have the designation on the 60th day after submission of the request. If the FCC asks questions or asks for changes to the submitted information then the request will be approved 60 days after the request filing has been amended and corrected.

In order to qualify for the streamlined and expedited review process an applicant must 1) serve at least 1,000 non-Lifeline voice customers and/or 1,000 Lifeline-eligible broadband Internet access service (BIAS) customers, measured as a snapshot at the time of the application; and 2) have offered broadband service to the public for at least two years without interruption. So the expedited process is for established ISPs, not new ones.

Any ISP that doesn’t meet the streamlined review process will still have their application reviewed within six months.

Carriers that are already certified as Eligible Telecommunications Carriers (ETCs) or as Lifeline-only ETCs do not need to seek the LBP status unless they are seeking to ask for Lifeline subsidies in new geographic areas where they were not previously certified.

In a petition to seek LBP status a carrier must:

  • Certify that they will meet all of the service requirements of the Lifeline program.
  • Demonstrate the ability to remain functional during emergency situations, including precautions such as having back-up power.
  • Demonstrate that they will satisfy all applicable consumer protection service quality standards.
  • Demonstrate that they are financially and technically capable of meeting all of the FCC rules needed to provide Lifeline. The FCC will look to see that the company can be viable without receiving the subsidies.
  • Provide the terms and conditions that the ISP will offer to Lifeline subscribers.

The FCC is clearly trying to help as many ISPs as possible participate in the Lifeline program. If your company is interested in taking part, feel free to contact CCG Consulting and we can help you through the application process.

Do We Need 10 Gbps?

We are just now starting to see a few homes nationwide being served by a 1 Gbps data connection. But the introduction of DOCSIS 3.1 cable modems and a slow but steady increase in fiber networks will soon make these speeds available to millions of homes.

Historically we saw home Internet speeds double about every three years, dating back to the 1980s. But Google Fiber and others leapfrogged that steady technology progression with the introduction of 1 Gbps for the home.

There are not a whole lot of home uses today that require a full gigabit of speed – but there will be. Home usage of broadband is still doubling about every three years and homes will catch up to that speed easily within a few years. Cisco recently said that the average home today needs 24 Mbps speeds but by 2019 will need over 50 Mbps. It won’t take a whole lot of doublings of those numbers to mean that homes will expect a lot more speed than we are seeing today.
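
It’s easy to see where that doubling leads. A quick projection – assuming the “today” in the Cisco figure is roughly 2016 and that the three-year doubling holds, which is an extrapolation, not a forecast:

```python
# Compounding the "doubles every three years" observation from the
# Cisco figure cited above (24 Mbps per household as the baseline).
base_year, base_mbps = 2016, 24
for year in (2019, 2022, 2025, 2028):
    need = base_mbps * 2 ** ((year - base_year) / 3)
    print(f"{year}: ~{need:.0f} Mbps per household")
```

That simple curve lands close to Cisco’s “over 50 Mbps by 2019” figure and reaches several hundred Mbps per household within a decade.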

There is a decent chance that the need for speed is going to accelerate. Phil McKinney of CableLabs created this video that shows what a connected home might look like in the near future. The home owns a self-driving car. The video shows a mother working at home with others using a collaboration wall, with documents suspended in the air. It shows one daughter getting a holographic lecture from Albert Einstein while another daughter is talking with her distant grandmother, seemingly in a meadow somewhere. And it shows the whole family using virtual / enhanced reality goggles to engage in a delightful high-tech game.

This may seem like science fiction, but all of these technologies are already being developed. I’ve written before about how we are at the start of a perfect storm of technology innovation. The past century was dominated by a few major new technologies, and the last forty years by the computer chip. But there are now literally dozens of potentially transformational technologies being developed at the same time. It’s impossible to predict which ones will have the biggest influence on daily life – but many of them will.

Most of these new technologies are going to require a lot of bandwidth. Whether it’s enhanced reality, video collaboration, robots, medical monitoring, self-driving cars or the Internet of Things, we are going to see needs for bandwidth far beyond today’s video-driven demand. The impact of video, while huge today, will pale against the bandwidth needs of these new technologies – particularly when they are used together, as implied in the video.

So it’s not far-fetched to think that we are going to need homes with bandwidth needs beyond the 1 Gbps data speeds we are just now starting to see. I’m always disappointed when I see ISP executives talking about how their latest technology upgrades are making them future proof. There are only two technologies that can meet the kinds of speeds envisioned in McKinney’s video – fiber and cable networks. These speeds are not going to be delivered by telephone copper or wirelessly, and to think so is to ignore the basic physics underlying each technology.

Some of the technologies shown in McKinney’s video are going to start becoming popular within five years, and within twenty years they will all be mature technologies that are part of everyday life. We need to have policies and plans that look towards building the networks we are going to need to achieve that future. We have to stop having stupid government programs that throw away money on expanding DSL, and we need to build networks that have use beyond just a few years.

McKinney’s video is more than just an entertaining glimpse into the near-future; it’s also meant to prod us into making sure that we are ready for that future. There are many companies today investing in technologies that can’t deliver gigabit speeds – and such companies will grow obsolete and disappear within a decade or two. And policies that do anything other than promote gigabit networks are a waste of time and resources.