Means Testing for FCC Funding – Part II

Yesterday I wrote about the recent blog by FCC Commissioners Michael O’Rielly and Mignon Clyburn that suggests there ought to be a means test for anybody accepting Universal Service Funds. In that post I looked at the idea of using reverse auctions for allocating funds – an idea that I think would only serve to shift broadband funds to slower technologies, most likely rural cellular service. Today I want to look at two other ideas suggested by the blog.

The blog suggests that rural customers ought to pay more for broadband since it costs more to provide broadband in sparsely populated areas. I think the FCC might want to do a little research and look at the actual prices charged today for broadband where commercial companies have built rural broadband networks. It’s something I look at all of the time and all over the country, and from what I can see the small telcos, cooperatives, WISPs and others that serve rural America today already charge more than what households pay for broadband in urban areas – sometimes considerably more. I am sure there are exceptions to this and perhaps the Commissioners have seen some low rural pricing from some providers. But I’ve looked at the prices of hundreds of rural ISPs and have never seen prices below urban rates.

The small rural ISPs have to make a commercial go of their broadband networks and they’ve realized for years that the only way to do that is to charge more. In most urban areas there is a decent broadband option starting around $40 per month and you rarely see a price close to that in rural America. If you see a low price in rural America it probably offers a very slow speed of perhaps a few Mbps, which certainly doesn’t compare to the 60 Mbps I get from Charter for $44.95 per month.

The issue of rural pricing does raise one policy issue. Historically the Universal Service Fund was used for precisely what this blog seems not to like – to hold telephone rates down in rural America so that everybody in the country could afford to be connected. That policy led to the country having telephone penetration rates for decades north of 98%. I’m not advocating that USF funds ought to be used to directly hold down rural broadband rates, but it’s worth a pause to remember that was the original reason that the Universal Service Fund was started and it worked incredibly well.

The second idea raised by the blog is that Universal Service Funds ought not be used to build broadband to wealthy customers. They suggest that perhaps federal funding ought not to be used to bring broadband to “very rich people who happen to live in the more rural portions of our nation.” The blog worries that poor urban people will be subsidizing “some of the wealthiest communities in America.” I am sure the Commissioners had a few real-life communities in mind when making that statement. But I work all over the country and there are not very many pockets of millionaires in rural America, except perhaps for farmers.

Farmers are an interesting case when it comes to broadband. By definition farmers are rural. But US agriculture is the largest industry in the country and the modern farmer needs broadband to be effective. We are soon headed toward a time when farm yields can increase dramatically through the use of IoT sensors, farm robots and other high technology that is going to require broadband. I know that a lot of the rural communities that are clamoring for broadband are farming communities – because those farms are the economic engine that drives numerous counties and regions of the country. If we are going to rethink policy, I don’t think it’s unreasonable to talk about bringing broadband to our largest industry.

The FCC blog suggests that perhaps wealthier individuals ought to pay for the cost of getting connected to a broadband network. It’s certainly an interesting idea, and there is precedent. Rural electric companies have always charged the cost of construction to connect customers that live too far from their grid. But with that said we also have to remember that rural electric grids were purposefully built to reach as many people as possible, often with the help of federal funding.

This idea isn’t practical for two reasons. First, it’s already incredibly hard today to finance a fiber network. Picture the practical problem of somehow trying to get commitments from farmers or other wealthy individuals as part of the process of funding and building a broadband network. As somebody who focuses mostly on financing fiber networks, I think this would largely kill funding for new networks. Getting the primary borrower and all of the ‘rich’ people coordinated in order to close a major financing would drive most lenders away – it’s too complicated to be practically applied. The FCC might want to consult with a few bankers before pushing this idea too far.

But there is a more fundamental issue, and the FCC blog touches upon it. I’m trying to imagine the FCC adopting a rule that would require people to disclose their income to some commercial company that wants to build a fiber network. I’m not a lawyer, but that sounds like it would bump against all sorts of constitutional issues, let alone practical ones. For example, can you really picture having to report your income to AT&T? And then I go back to the farmers again. Farmers don’t make a steady income – they have boom years and bust years. Would we put them on or off the hook for contributing toward a fiber network based upon their most recent year of income?

I certainly applaud the Commissioners for thinking outside the box, and that is a good thing when it leads to discussions of ways to improve the funding process. I will be the first to tell you that the current USF distributions are not always sensible or equitable and there is always room for improvement. Some of the ideas suggested by the blog have been discussed in the past and it never hurts to revisit ideas. But what most amazes me about the suggestions made by this blog is that the proposed solutions would require a heavy regulatory hand – and this FCC, or at least its new Chairman, has the goal of reducing regulation. To impose a means test or income test would go in the opposite direction and would require a new layer of intrusive regulations.

Means Testing for FCC Funding – Part I

A recent blog by FCC Commissioners Michael O’Rielly and Mignon Clyburn asks if there should be a means test in the federal high-cost programs. This blog is something every telco, school, library or health care provider that gets any form of Universal Service funding needs to read.

There is already some means testing in the Universal Service Fund. For instance, the Lifeline program brings subsidized voice and broadband only to households that meet certain poverty tests. And the Schools and Libraries program uses a means test to make certain that subsidies go to schools with the most low-income students. The FCC blog talks about now applying a means test to the Universal Service Funds that are used to promote rural broadband. There are several of these programs, with the biggest dollar ones being the CAF II funding for large telcos and the ACAM program for small telcos to expand rural broadband networks.

The blog brings up the latest buzzword at the FCC, the reverse auction. The FCC embraces the concept that there should be a competition for federal money to expand broadband networks, with the money going to the carrier that is willing to accept the lowest amount of funding to expand broadband into an area. On the surface that sounds like a reasonable suggestion in that it would give money to the company that is the most efficient.

But in real-life practice reverse auctions don’t work, at least for building rural broadband networks. Today these FCC infrastructure programs are aimed at bringing broadband to places that don’t have it. And the reason they don’t have it is because the areas are largely rural and sparsely populated, which makes broadband infrastructure costly to build. In most of these places nobody is willing to build without significant government subsidy because there is no reasonable business plan using commercial financing.

If there was a reverse auction between two companies willing to bring fiber to a given rural area, then in my experience there wouldn’t be much difference between them in terms of the cost to build the network. They have to deploy the same technology over the same roads to reach the same customers. One might be slightly lower in cost, but not enough to justify going through the reverse auction process.

And that is the big gotcha with the preference for reverse auctions. A reverse auction will always favor somebody using a cheaper technology. And in rural broadband, a cheaper technology means an inferior technology. It means using federal funding to expand DSL or cellular wireless, as is being done with the big telco CAF II money, instead of building fiber, as is being done by the small telcos accepting ACAM money.

Whether intentional or not, the FCC’s penchant for favoring reverse auctions would shift money from fiber projects – mostly being done by small telcos – to the wireless carriers. It’s clear that building cellular technology in rural areas is far cheaper than building fiber. But to use federal money to build inferior technology means relegating rural areas to dreadfully inadequate broadband for decades to come.

Forget all of the hype about how 5G cellular is going to bring amazing broadband speeds – and I hope the FCC Commissioners have not bought into the cellular companies’ press releases. That’s because in rural areas fast 5G requires bringing fiber very close to customers – which means constructing nearly the same fiber networks needed to bring fiber into homes. The big cellular companies are not going to invest in rural 5G any more than the big telcos have ever invested in rural fiber. So a reverse auction would divert federal funds to Verizon and AT&T to extend traditional cellular networks, not to build super-fast wireless networks.

We already know what it looks like to expand rural cellular broadband. It means building networks that deliver perhaps 20 Mbps to those living close to cell towers and something slower as you move away from the towers. That is exactly what AT&T is building with their CAF II funding today. AT&T is taking $426 million per year for six years, or roughly $2.5 billion in total, to expand cellular broadband in rural areas. As I’ve said many times in the past, this is perhaps the worst use of federal telecom funding I have ever seen. Customers on these cellular networks are getting broadband on day one that is too slow and that doesn’t even meet the FCC’s current definition of broadband. And in the future these customers and rural communities are going to be light-years behind the rest of the country as household demand for broadband continues to grow at a torrid pace while these customers are stuck with an inadequate technology.

The FCC blog also mentions the concept of possibly re-directing future USF payments, and if I were a small telco that would scare me to death. This sounds like the FCC may consider redirecting already-committed ACAM funding. Numerous small telcos just accepted a 10-year commitment to receive ACAM funding from the Universal Service Fund to expand broadband in rural areas, and many are already borrowing matching funds from banks based upon that commitment. Should that funding be redirected into a reverse auction, these small companies will not be able to complete their planned expansion, and if they already borrowed money based upon the promise of that ACAM funding they could find themselves in deep financial trouble.

New Technologies, June 2017

Following are some interesting new technologies I’ve run across recently.

WiFi Imaging. Cognitive Systems has a product they call Aura that can detect motion inside a home using WiFi. The underlying technique, called Radio Frequency (RF) Capture, was developed a few years ago at MIT. The device can sense subtle changes in wireless signals to determine if something is moving in the home. It can be set to different sensitivities so that it detects people but not animals. It can also be set to track specific cellphones so that you’ll know when a known person has entered or left the home. For now the device does not connect to external security services but instead sends a message to a smartphone.

Some German researchers at the University of Munich have already taken this same idea a lot farther. In a paper published in Physical Review Letters they describe a technique that uses WiFi to create 3D holographic images through walls. The lab unit they have built can detect objects down to about 4 centimeters in size. It scans ten times per second and can see the outlines of people or pets moving inside another room. The technology is eerily reminiscent of the surveillance machine in The Dark Knight that Bruce Wayne destroys at the end of the movie because it was such a scary invasion of privacy.

Eliminating IoT Batteries. One of the scariest things about the exploding number of IoT devices is the need to power them, and the potentially huge waste, cost and hassle of needing batteries for tons of devices. Tryst Energy from the Netherlands has developed an extremely efficient solar device that only needs 200 lux of light for four hours per day – about the amount of light normally found under a desk – to operate a small sensor that communicates over Bluetooth or WiFi. The device also ought to last for 75 – 100 years, opening the ability to place small IoT sensors in all sorts of places to monitor things. When you consider the billions of devices expected over the next decade, this could provide a huge boost to the IoT industry and also provide a green solution for powering tiny devices. The device is just starting to go into production.

Bots Have Created Their Own Language. A team at OpenAI, the artificial intelligence lab founded by Elon Musk and Sam Altman, has published a paper describing how bots have created their own language to communicate with each other. They accomplished this by presenting bots – computer programs that are taught to accomplish tasks – with simple challenges that require collaboration. Bots are mostly being used these days to learn to communicate with people. But the OpenAI team instead challenged the bots to solve spatial problems, such as devising a way to move together to a specific location inside a simulated world. Rather than tell the bots how to accomplish this, they simply required that the bots collaborate with other bots to accomplish the assigned tasks. What they found was that the bots created their own ‘language’ to communicate with each other and that the language got more efficient over time. This starts to sound a bit like a bad sci-fi world where computers can talk to each other in languages we can’t decipher.

Recycling CO2. Liang-shi Li at Indiana University has found a way to recycle CO2 for the production of power. He has created a molecule that, with the addition of sunlight, can turn CO2 from the atmosphere into carbon monoxide. The carbon monoxide can then be burned to create power, with the byproduct being CO2. If scaled up, this would provide a method of producing power that adds no net CO2 to the atmosphere (since it recycles the CO2). Li uses a nanographene molecule that has a dark color and absorbs large amounts of sunlight. The molecule also includes rhenium, which acts as a catalyst to turn nearby CO2 into carbon monoxide. He’s hoping to eventually accomplish this with more easily obtained magnesium instead.

Liquid Light. It’s common knowledge that light usually acts like a wave, expanding outward until it’s reflected or absorbed by an object. But in recent years scientists have also discovered that under extreme conditions near absolute zero, light can act like a liquid and flow around objects and join back together on the other side. The materials and processes used to produce this liquid light are referred to as Bose-Einstein condensates.

Scientists from CNR Nanotec in Italy, Ecole Polytechnique de Montreal in Canada, and Aalto University in Finland just published an article in Nature Physics that shows that light can also exist in a ‘superfluid’ state where it flows around objects with no friction. Of most interest is that this phenomenon can be produced at normal room temperature and air pressure. The scientists created the effect by sandwiching organic molecules between two highly reflective mirrors. They believe that the interaction of light with the molecules induces the photons in the light to take on characteristics of electrons in the molecules.

The potential uses for this technique, if perfected, are huge. It would mean that light could be made to pass through computer chips with no friction, meaning no creation of the heat that is the bane of data centers.

Latest Industry Statistics

The statistics are out for the biggest cable TV and data providers for the first quarter of the year and they show an industry that is still undergoing big changes. Broadband keeps growing and cable TV is starting to take some serious hits.

Perhaps the most relevant statistic of all is that there are now more broadband customers in the country than cable TV customers. The crossover happened sometime during the last quarter, a little sooner than predicted due to plunging cable subscriber numbers.

For the quarter the cable companies continued to clobber the telcos in terms of broadband customers. Led by big gains at Comcast and Charter, the cable companies collectively added a little over 1 million new broadband customers for the quarter. Charter led the growth with 458,000 new broadband subscribers, with Comcast a close second at 430,000.

Led by Frontier’s loss of 107,000 broadband customers, the telcos collectively lost 45,000 net customers for the quarter. Most of Frontier’s losses stem from its botched acquisition of Verizon FiOS properties. Verizon lost 27,000 customers, while AT&T U-verse was the only success among the telcos, adding 90,000 new customers.

Looking back over the last year the telcos together lost 727,000 broadband customers while the cable companies together gained 3.11 million customers during the same period. The cable companies now control 63.2% of the broadband market, up from 61.5% of the market a year ago.

Overall the broadband market grew by 2.38 million subscribers over the year ending March 31. It’s a market controlled largely by the giant ISPs – the largest cable companies and telcos together account for 93.9 million broadband subscribers.
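For anybody who wants to check the arithmetic, here’s a quick sketch in Python using only the figures cited above (the year-ago totals are derived from them rather than separately reported, so expect small rounding differences):

```python
# A quick check of the market-share arithmetic cited above. The inputs are
# the reported figures; the year-ago numbers are derived, not reported.

total_now = 93.9            # million broadband subscribers (largest cable companies + telcos)
cable_share_now = 0.632     # cable companies' current share of that market

cable_now = total_now * cable_share_now
telco_now = total_now - cable_now

cable_gain = 3.11           # million cable net adds over the past year
telco_loss = 0.727          # million telco net losses over the past year

cable_then = cable_now - cable_gain
telco_then = telco_now + telco_loss
total_then = cable_then + telco_then

print(f"Net market growth: {total_now - total_then:.2f} million")   # ~2.38 million
print(f"Cable share a year ago: {cable_then / total_then:.1%}")     # roughly the reported 61.5%
```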

Cable TV shows a very different picture. The seven largest traditional video providers collectively lost 487,000 video subscribers for the quarter. That includes AT&T losing 233,000, Charter losing 100,000, Dish Network losing 143,000, Verizon losing 13,000, Cox losing 4,000 and Altice losing 35,000. The only company to gain video subscribers was Comcast, which added 41,000.

Total industry cable subscriber losses were 762,000 for the quarter, as smaller cable companies and telcos are also losing customers. That is more than five times the industry loss of 141,000 in the first quarter of last year. The industry is now losing 2.4% of the market per year, up from 1.8% a year ago, and that rate of decline is clearly accelerating and will probably grow larger.

At this point it’s clear that cord cutting is picking up steam and this was the worst performance ever by the industry.

The biggest losers all have stories to explain their poor performance. Charter says it is doing better among its own historic customers but is losing a lot of customers from the Time Warner acquisition as it raises rates and does away with Time Warner promotional discounts. AT&T has been phasing out cable TV over its U-verse network – a DSL-based service with speeds as high as 45 Mbps that is proving to be inadequate to carry both cable TV and broadband together. Dish Network has been bogged down in numerous carriage and retransmission fights with programmers and has had a number of channels taken off the air.

But even considering all of these stories it’s clear that customers are leaving the big companies. Surveys of cord cutters show that very few of them return to traditional cable once they get used to getting programming in a different way.

What is probably most striking about the numbers is that the first quarter has historically been the best one for the cable industry – in recent years it still produced customer gains even while other quarters were trending downward. We’ll have to see what this terrible first quarter means for the rest of 2017.

Comparing Streaming and Broadcast Video

One thing that doesn’t get talked about a lot in the battle between broadcast TV and on-line video is video quality. For the most part today broadcast TV still holds the edge over on-line video.

When I think of broadcast TV over a cable system I can’t help but remember back twenty years ago when the majority of the channels on a cable system were analog. I remember when certain channels were snowy, when images were doubled with ghosts, and when the first couple of channels in the lineup were nearly unwatchable. Today the vast majority of channels on most cable systems are digital, but there are still exceptions. The conversion to digital resulted in a big improvement in transmission quality.

When cable systems introduced HDTV the quality got even better. I can remember flipping back and forth between the HD and SD versions of the same channel on my Comcast system just to see the huge difference.

This is not to say that cable systems have eliminated quality issues. It’s still common on many cable systems to see pixelation, especially during high-action scenes where the background is constantly changing. All cable systems are not the same, so there are differences in quality from one city to the next. All digital video on cable systems is compressed at the headend and decompressed at the settop box. That process robs a significant amount of quality from a transmission, and one only has to compare any cable movie to the Blu-ray version to realize how much is lost in the translation.

In the on-line world buffered video can be as good as cable system video. But on-line video distributors tend to compress video even more than cable systems – something they largely can get away with since a lot of on-line video is watched on smaller screens. This means that a side-by-side comparison of SD or HD movies would usually favor the cable system. But Netflix, Amazon and a few others have one advantage today with the spectacular quality of their 4K video – there is nothing comparable on cable networks.

But on-line live-streamed video still has significant issues. I watch sports on-line and the quality is often poor. The major problems with live-streamed video are mostly due to delays in the signal getting to the user. Some of that delay is due to latency – either latency in the backbone network between the video creator and the ISP or latency in the connection between the ISP and the end user. Unlike downloading a data file, where your computer will wait until it has collected all of the needed packets, live-streamed video is displayed with whatever pixels have arrived by the needed time. This creates all sorts of interesting issues when watching live sports. There is pixelation, but it doesn’t look like the pixelation you see on cable networks. Instead, parts of the screen often get fuzzy when they aren’t receiving all of the pixels. There are other video artifacts as well. And it’s still not uncommon for the entire picture to freeze for a while, which can cause an agonizing gap when you are watching sports since it always seems to happen at a critical moment.
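To illustrate the difference, here is a small conceptual sketch – not any player’s actual code, and the delay numbers are made up – contrasting a file download, which waits for every packet, with a live stream, which has to display each frame at its deadline with whatever has arrived:

```python
import random

random.seed(1)

FRAMES = 8                 # a handful of video frames to simulate
PACKETS_PER_FRAME = 20
FRAME_DEADLINE_MS = 100    # when a live player must display each frame

def packet_delay_ms():
    # Stand-in for network jitter: most packets arrive quickly, a few arrive very late.
    return random.choice([20, 25, 30, 40, 60, 250])

for frame in range(FRAMES):
    delays = [packet_delay_ms() for _ in range(PACKETS_PER_FRAME)]

    # A file download simply waits for the slowest packet, however late it is.
    download_complete_at = max(delays)

    # A live stream displays the frame at the deadline with whatever has arrived.
    arrived = sum(1 for d in delays if d <= FRAME_DEADLINE_MS)
    completeness = arrived / PACKETS_PER_FRAME

    status = "clean" if completeness == 1 else f"fuzzy ({completeness:.0%} of packets arrived)"
    print(f"frame {frame + 1}: live picture {status}; "
          f"a download would have everything after {download_complete_at} ms")
```

The point isn’t the specific numbers – it’s that a live stream has no option to wait, so late packets show up as fuzzy spots or freezes instead of just a slightly longer download.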

Netflix and Amazon have been working with the Internet backbone providers and the ISPs to fix some of these issues. Latency delays in getting to the ISPs are shrinking and, at least for the major ISPs, will probably not be an issue. But the one issue that still needs to be resolved is the crashes that happen when the Internet gets overloaded because demand is too high. We see ISPs bogging down when showing a popular stream like the NBA finals, compared to a normal NBA game that might only be watched by a hundred thousand viewers nationwide.

One thing in the cable systems’ favor is that their quality ought to be improving a lot over the next few years. The big cable providers will be implementing the new ATSC 3.0 video standard that is going to result in a significant improvement in picture quality on HD video streams. The FCC approved the new standard earlier this year and we ought to see it implemented in systems starting in 2018. This new standard will allow cable operators to improve the color clarity and contrast of existing HD video. I’ve seen a demo of a lab version of the standard and the difference is pretty dramatic.

One thing we don’t know, of course, is how much picture quality means to the average video user. I know my teenage daughter seems quite happy watching low-quality video made by other teens on Snapchat, YouTube or Facebook Live. Many people, particularly teens, don’t seem to mind watching video on a smartphone. Video quality makes a difference to many people, but time will tell if improved video quality will stem the tide of cord cutting. It seems that most cord cutters are leaving due to the cost of traditional TV and the hassle of working with the cable companies, so better video might not be a big enough draw to keep them paying the monthly cable bill.

The End of the MP3?

Last month the Fraunhofer Institute for Integrated Circuits ended its licensing program for the MP3 digital file format. This probably means that the MP3 format will begin fading away, to be replaced over time by newer file formats. MP3 stands for MPEG Audio Layer III and was the first standard that allowed audio files to be compressed with little perceptible loss of sound quality. The US patent for MP3 was issued to Fraunhofer in 1996, and since then the institute has collected royalties on devices able to create or play files in that format.

While it might seem a bit odd to be reading a blog about the end of a file format, MP3 files have had such a huge impact on the tech and music industries that they are partly responsible for the early success of the Internet.

The MP3 file revolutionized the way that people listened to music. In the decade before that there had been a proliferation of portable devices that would play cassette tapes or CDs. But those devices did not really bring freedom to listen to music easily everywhere. I can remember the days when I’d have a pile of tapes or CDs in the car so that I could listen to my favorite music while I drove. But the MP3 file format meant that I could rip all of my music into digital files and could carry my whole music collection along with me.

And the MP3 digital files were small enough that people could easily share files with friends and could send music as attachments to emails. But file-sharing of MP3 files really took off in 1999 when Shawn Fanning, John Fanning, and Sean Parker launched the peer-to-peer network Napster. This service gave people access to the entire music collections of huge numbers of others. Napster was so popular that the traffic generated by the platform crashed broadband networks at colleges and caused havoc with many ISP networks.

In 2001 Apple launched iTunes, and in 2003 the iTunes Music Store, where people could legally download music. iTunes used the MP3 format initially, but in 2003 Apple changed to the AAC format, probably mostly to avoid paying the MP3 licensing fees. Internet traffic to iTunes grew to be gigantic. It’s hard to remember when the Internet was so much smaller, but the transfer of MP3 files was as significant to Internet traffic in the early 2000s as Netflix is today.

Napster, along with Apple iTunes, revolutionized the music industry and the two are together credited with ending the age of albums. People started listening to their favorite songs and not to entire albums – and this was a huge change for the music industry. Album sales dropped precipitously and numerous music labels went out of business. I remember the day I cancelled my subscription to Columbia House because I no longer felt the need to buy CDs.

Of course, Napster quickly ran into trouble for helping people violate music copyrights and was driven out of business. But the genie was out of the bottle and the allure of sharing MP3 files was too tempting for music lovers. I remember musician friends who always had several large-capacity external hard drives in their car and would regularly swap music collections with others.

One of the consequences of ending the licensing of the MP3 format is that over time it’s likely that computers and other devices won’t be able to read the MP3 format any longer. MP3s are still popular enough that the music players on computers and smartphones all still recognize and play MP3 files. But the history of the Internet has shown us that unsupported formats eventually fizzle into obscurity. For example, much of the programming behind the first web sites is no longer supported and many of today’s devices can no longer view old web sites without downloading software capable of opening the old files.

It’s interesting that most people assume that once something has been digitized it will last forever. That might be true for important data if somebody makes a special effort to save the digitized files in a place that will keep them safe for a long time. But we’ve learned that digital storage media are not permanent. Old CDs become unreadable. Hard drives eventually stop working. And even when files are somehow kept, the software needed to read the files can fall into obscurity.

A huge amount of music created since 2000 exists only in digital format. Music by famous musicians will likely be maintained and replayed as long as people have an interest in those musicians. But music by lesser-known artists will probably fade away and much of it will disappear. It’s easy to envision that in a century or two most of the music we listen to today might have disappeared.

Of course there are the online music streaming services like Spotify that are maintaining huge libraries of music. But if we’ve learned anything in the digital age it’s that companies that make a living peddling digital content don’t themselves have a long shelf life. So we have to wonder what happens to these large libraries when Spotify and similar companies fade away or are replaced by something else.

Regional Differences in Broadband Adoption

The latest Akamai report State of the Internet Q1 2017 contains a lot of interesting facts about broadband adoption and usage in the US and around the world. One of the things that they track is the percentage of broadband users at various data speeds. I think their tracking is the most accurate in the industry because they measure the actual speeds of connectivity, not the subscribed rate that users think they are getting. Most of the largest Internet hubs use Akamai and so they get to see huge volumes of web connections.

Probably the most interesting statistic in the report from a US perspective is that the average broadband connection speed for the whole US has grown to 18.7 Mbps. This is up 8.8% from the last quarter of 2016 and up 22% from a year ago. This increase was enough to move the US up to tenth place in the world in terms of average connectivity speed. The worldwide average connectivity speed is 7.2 Mbps, but that comes with the caveat that it doesn’t include some parts of the world and also doesn’t include the billions who don’t yet have any broadband available.
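As a quick sanity check, the earlier averages implied by those growth rates can be backed out from the 18.7 Mbps figure (approximate, since the published numbers are rounded):

```python
# Back out the implied earlier US average connection speeds from the current
# average and the reported growth rates. Approximate, because the published
# figures are rounded.

avg_q1_2017 = 18.7          # Mbps
qoq_growth = 0.088          # up 8.8% from Q4 2016
yoy_growth = 0.22           # up 22% from Q1 2016

print(f"Implied Q4 2016 average: {avg_q1_2017 / (1 + qoq_growth):.1f} Mbps")   # ~17.2 Mbps
print(f"Implied Q1 2016 average: {avg_q1_2017 / (1 + yoy_growth):.1f} Mbps")   # ~15.3 Mbps
```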

What I find most interesting in the connectivity data is how disparate broadband is in different parts of the US. For the first time there are places in the US with average connectivity speeds greater than the FCC definition of broadband – the District of Columbia at 28.1 Mbps and Delaware at 25.2 Mbps. Contrast this with Idaho, with an average connectivity speed of 12 Mbps, less than half the speed of the fastest states. Perhaps the most useful statistic in the report is the percentage of connections in each state that meet various speed thresholds:

4 Mbps Adoption Rate. Akamai says that Delaware leads in this category with 98% of connections exceeding a speed of 4 Mbps, with Rhode Island close behind at 97%. Contrast this to the bottom of the list, where West Virginia has only 77% of connections exceeding 4 Mbps and Arkansas is next lowest at 81%.

10 Mbps Adoption Rate. Delaware also leads this category with 86% of the broadband connections from the state exceeding 10 Mbps, again just ahead of Rhode Island with 85%. But at the bottom of this list are Idaho at 45%, and Arkansas and New Mexico at 47%.

15 Mbps Adoption Rate. Rhode Island leads this category with 66% of broadband connections exceeding 15 Mbps. At the bottom of this list was Idaho with only 23% of connections exceeding 15 Mbps.

25 Mbps Adoption Rate. The District of Columbia tops this list with 38% of connections exceeding 25 Mbps, with Delaware second at 33%. At the bottom of the list is Idaho where only 7.5% of connections exceeded 25 Mbps, with New Mexico the next lowest at 7.9%.

Since these are the actual speeds of Internet connections, one can conjecture a number of reasons for the differences across states, such as:

  • Availability of fast broadband. The states with the fastest broadband rates happen to be those where a significant percentage of the population has both fiber (Verizon FiOS) and cable modem broadband available. By contrast the states near the bottom of the list tend to have far fewer communities with fiber, and even many communities without cable systems.
  • Affordability. Numerous surveys have shown that price is still a major factor keeping homes from buying the broadband connection they want.
  • Choice. Even in places where there is fast broadband available, many households choose slower broadband speeds due to lack of perceived need.
  • Geography. Terrain plays a role as well. In working with rural communities across the country I see that in the Plains states, with their wide-open expanses of land, there has been a proliferation of rural homes served by point-to-multipoint wireless networks delivering speeds of 10 – 50 Mbps. But this technology is of far less value in places like West Virginia with hilly and wooded terrain.

One thing this report shows is that the disparity between the top and the bottom states on these various lists is widening. In places where fast broadband is available, the statistics show that a lot of people are upgrading to faster speeds. But in states near the bottom of the list, where the broadband networks are the least robust, the same upward migration to faster speeds is not possible due to the lack of options. One would think that most of the country would look like Delaware in terms of broadband adoption rates if broadband were available to everybody. But differences in technologies and infrastructure keep households from buying the broadband speeds they want.

The other thing to remember about these statistics is that they are only measuring the speeds for actual broadband connections, and so obviously exclude the millions of households in the country that still don’t have a reasonable broadband alternative. If those households were weighted into these statistics then states with large rural areas with no broadband would sink down the list.

The WISP Dilemma

For the last decade I have been working with many rural communities seeking better broadband. For the most part these are places that the large telcos have neglected and never provided with any functional DSL. Rural America has largely rejected the current versions of satellite broadband because of the low data caps and because the latency won’t support streaming video or other real-time activities. I’ve found that lack of broadband is at or near the top of the list of concerns in communities without it.

But a significant percentage of rural communities have access today to WISPs (wireless ISPs) that use unlicensed frequency and point-to-multipoint radios to bring a broadband connection to customers. The performance of WISPs varies widely. There are places where WISPs are delivering solid and reliable connections that average between 20 – 40 Mbps download. But unfortunately there are many other WISPs that are delivering slow broadband in the 1 – 3 Mbps range.

The WISPs that have fast data speeds share two characteristics. They have a fiber connection directly to each wireless transmitter, meaning that there are no bandwidth constraints. And they don’t oversubscribe customers. Anybody who was on a cable modem five or ten years ago understands oversubscription. When there are too many people on a network node at the same time the performance degrades for everybody. A well-designed broadband network of any technology works best when there are not more customers than the technology can optimally serve.

But a lot of rural WISPs are operating in places where there is no easy or affordable access to a fiber backbone. That leaves them with no alternative but to use wireless backhaul. This means using point-to-point microwave radios to get bandwidth to and from a tower.

Wireless backhaul is not in itself a negative issue. If an ISP can use microwave to deliver enough bandwidth to a wireless node to satisfy the demand there, then they’ll have a robust product and happy customers. But the problems start happening when networks include multiple ‘hops’ between wireless towers. I often see WISP networks where the bandwidth goes from tower to tower to tower. In that kind of configuration all of the towers and all of the customers on those towers are sharing whatever bandwidth is sent to the first tower in the chain.

Adding hops to a wireless network also adds latency, since each hop means it takes longer for traffic to get to and from customers at the outer edges of one of these wireless chains. Latency, or time lag, is an important factor in being able to perform real-time functions like streaming video, voice over IP, gaming, or maintaining a connection to an on-line class or a distant corporate WAN.

Depending upon the brand of the radios and the quality of the internet backbone connection, a wireless transmitter that is connected directly to fiber can have a latency similar to that of a cable or DSL network. But when chaining multiple towers together the latency can rise significantly, and real-time applications start to suffer at latencies of 100 milliseconds or greater.
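To make the chained-backhaul problem concrete, here is a simple illustrative model – the feed size, subscriber counts, per-hop latency and demand figures below are assumptions for the sake of the example, not measurements from any actual WISP:

```python
# Illustrative model of a daisy-chained WISP backhaul. Only the first tower
# has the microwave feed from the fiber POP; every tower behind it shares
# that same capacity. All of the numbers are assumptions for illustration.

FEED_CAPACITY_MBPS = 300      # microwave backhaul delivered to the first tower
BASE_LATENCY_MS = 25          # latency from the fiber POP through the first tower
PER_HOP_LATENCY_MS = 10       # additional latency added by each extra microwave hop
SUBSCRIBERS_PER_TOWER = 60
BUSY_HOUR_MBPS_PER_SUB = 3    # average busy-hour demand per subscriber

for towers in (1, 2, 4, 6):
    subs = towers * SUBSCRIBERS_PER_TOWER
    demand = subs * BUSY_HOUR_MBPS_PER_SUB

    # Every subscriber on every tower in the chain shares the single feed.
    oversubscription = demand / FEED_CAPACITY_MBPS

    # Customers on the last tower see the latency of every hop in the chain.
    last_tower_latency = BASE_LATENCY_MS + (towers - 1) * PER_HOP_LATENCY_MS

    print(f"{towers} towers: {subs} subscribers share {FEED_CAPACITY_MBPS} Mbps, "
          f"busy-hour demand {demand} Mbps ({oversubscription:.1f}x the feed), "
          f"latency at the last tower ~{last_tower_latency} ms")
```

The exact numbers vary with the radios and spectrum involved, but the pattern is what matters: the farther down the chain a customer sits, the smaller their share of the feed and the higher their latency.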

WISPs also face other issues. One is the age of the wireless equipment. There is no part of our industry that has made bigger strides over the past ten years than the manufacturing of subscriber microwave radios. The newest radios have significantly better operating characteristics than radios made just a few years ago. WISPs are for the most part relatively small companies and have a hard time justifying upgrading equipment until it has reached the end of its useful life. And unfortunately there is not much opportunity for small incremental upgrades of equipment. The changes in the technology have been significant enough that upgrading a node often means replacing the transmitters on towers as well as the subscriber radios.

The final dilemma faced by WISPs is that they often are trying to serve customers in locations that are not well situated to receive a wireless signal. The unlicensed frequencies require good line-of-sight and also suffer degraded signals from foliage, rain and other impediments, so it’s hard to reliably serve customers who are surrounded by trees or who live in places that are blocked by the terrain.

All of the various issues mean that reviews of WISPs vary as widely as you can imagine. I was served by a WISP for nearly a decade and since I lived a few hundred feet from the tower and had a clear line-of-sight I was always happy with the performance I received. I’ve talked to a few people recently who have WISP speeds as fast as 50 Mbps. But I have also talked to a lot of rural people who have WISP connections that are slow and have high latency that provides a miserable broadband experience.

It’s going to be interesting to see what happens to some of these WISPs as rural telcos deploy CAF II money and provide a faster broadband alternative that will supposedly deliver at least 10 Mbps download. WISPs who can beat those speeds will likely continue to thrive while the ones delivering only a few Mbps will have to find a way to upgrade or will lose most of their customers.

Can the States Regulate Internet Privacy?

Since Congress and the FCC have taken steps to remove restrictions on ISPs using customer data, a number of states and even some cities have taken legislative steps to reintroduce some sort of privacy restrictions on ISPs. This is bound to end up in the courts at some point to determine where the authority lies to regulate ISPs.

Congress just voted in March to end restrictions on the ways that ISPs can use customer data, leading to a widespread fear that ISPs could profit from selling customer browsing history. Since then all of the large telcos and cable companies have made public statements that they would not sell customer information in this way, but many of these companies have histories that would indicate otherwise.

Interestingly, a new bill has been introduced in Congress called the BROWSER Act of 2017 that would add back some of the restrictions imposed on ISPs and would also make those restrictions apply to edge providers like Google and Facebook. The bill would give the authority to enforce the privacy rules to the Federal Trade Commission rather than the FCC. The bill was introduced by Rep. Marsha Blackburn who was also one of the architects of the earlier removal of ISP restrictions. This bill doesn’t seem to be getting much traction and there is a lot of speculation that the bill was mostly offered to save face for Congress for taking away ISP privacy restrictions.

Now states have jumped in to fill the void. Interestingly, the states looking into this come from both sides of the political spectrum, which makes it clear that privacy is an issue that worries everybody. Here is a summary of a few of the state legislative efforts:

Connecticut. The proposed law would require consumer buy-in before any “telecommunication company, certified telecommunications provider, certified competitive video service provider or Internet service provider” could profit from selling such data.

Illinois. The privacy measures proposed would allow consumers to be able to ask what information about them is being shared. The bills would also require customer approval before apps can track and record location information on cellphones.

Massachusetts. The proposed legislation would require customer buy-in for sharing private information. It would also prohibit ISPs from charging more to customers who don’t want to share their personal information (something AT&T has done with their fiber product).

Minnesota. The proposed law would stop ISPs from even recording and saving customer information without their approval.

Montana. The proposed law there would prohibit any ISPs that share customer data from getting any state contracts.

New York. The proposed law would prohibit ISPs from sharing customer information without customer buy-in.

Washington. One proposed bill would require written permission from customers to share their data. The bill would also prohibit ISPs from denying service to customers that don’t want to share their private information.

Wisconsin. The proposed bill essentially requires the same restrictions on privacy that were included in the repealed FCC rules.

This has even made it down to the city level. For example, Seattle just issued new rules for the three cable providers with a city franchise telling them not to collect or sell customer data without explicit customer permission or else face losing their franchise.

A lot of these laws will not pass this year since they were introduced late in most states’ legislative sessions. But it’s clear from the laws that have been proposed that this is a topic with significant bipartisan support. One would expect a lot of laws to be introduced and enacted in legislative sessions later this year and early next year.

There is no doubt that at some point this is going to result in lawsuits to resolve the conflict between federal and state rules. An issue of this magnitude will almost certainly end up at the Supreme Court at some point. But as we have seen in the past, during these kinds of legislative and legal fights the status of any rules is muddy. And that generally means that ISPs are likely to continue with the status quo until the laws become clear. That likely means that ISPs won’t openly be selling customer data for a few years, although one would think that the large ones have already been collecting data for future use.

Are You Ready for 4K Video?

The newest worry for ISPs is the expansion of 4K video. Already today Netflix and Amazon are offering on-line 4K video to customers. Almost all of the new programming being created by both companies is being shot in 4K.

Why is this a concern for ISPs? Netflix says that in order to enjoy a streaming 4K signal a user ought to have a spare 15 – 20 Mbps of bandwidth available if the stream is buffered. The key word is spare, meaning that any other household activity ought to be using other bandwidth. Netflix says that without buffering a user ought to have a spare 25 Mbps.

When we start seeing a significant number of users streaming video at those speeds, even fiber networks might begin experiencing problems. I’ve never seen a network that doesn’t have at least a few bottlenecks, which often are not apparent until traffic volumes are high. Already today busy-hour video is causing stress to a lot of networks. I think about millions of homes trying to watch the Super Bowl in 4K and shudder to think what that will mean for most networks.
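Here’s a rough sketch of that math using the Netflix figures above – the node size and take rates are assumptions picked just to show the shape of the problem:

```python
# Rough estimate of what a popular live 4K event could do to a single
# access-network node. The per-stream number comes from the Netflix guidance
# cited above; the node size and capacity are illustrative assumptions.

MBPS_PER_4K_STREAM = 20       # spare bandwidth needed per 4K stream (15 - 25 Mbps)
HOMES_ON_NODE = 400           # homes served by one node or cabinet (assumed)
NODE_CAPACITY_MBPS = 2000     # usable downstream capacity on that node (assumed)

for share_watching in (0.10, 0.25, 0.50):
    streams = int(HOMES_ON_NODE * share_watching)
    demand = streams * MBPS_PER_4K_STREAM
    utilization = demand / NODE_CAPACITY_MBPS
    print(f"{share_watching:.0%} of homes streaming 4K: {streams} streams, "
          f"{demand} Mbps, {utilization:.0%} of node capacity")
```

Even with a generously sized node in this sketch, an event where a quarter or more of homes stream in 4K at the same time swamps the shared capacity.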

While 4K video is already on-line it is not yet being offered by cable companies. The problem for most of the industry is that there is no clear migration path between today’s video and tomorrow’s best video signal. There are alternatives to 4K being explored by the industry that muddy the picture. Probably the most significant new technology is HDR (high-dynamic range) video. HDR has been around for a few years, but the newest version, which captures video in 10-bit samples, adds both contrast and color accuracy to TVs. There are other video improvements being explored as well, such as 10-bit HEVC (high-efficiency video coding), which is expected to replace today’s H.264 standard.

The uncertainty over the best migration path has stopped cable companies from making upgrades to HDR or 4K. They are rightfully afraid of investing too much in an early implementation of the technology only to face more upgrades in just a few years. But as the popularity of 4K video increases, the pressure is growing for cable companies to introduce something soon. It’s been reported that Comcast’s latest settop box is 4K capable, although the company is not making any public noise about it.

But as we’ve seen in the past, once customers start buying 4K-capable TVs they are going to want to use them. It’s expected that by 2020 almost every new TV will include some version of HDR technology, which means that the quality of today’s 1080p video streams will improve. And by then a significant number of TVs will come standard with 4K capabilities as well.

I remember back when HD television was introduced. I have one friend who is a TV buff and once he was able to get HD channels from Comcast he found that he was unable to watch anything that was broadcast in standard definition. He stopped watching any channel that did not broadcast HD and ignored a huge chunk of his Comcast line-up.

The improvements from going to 4K and/or true HDR will be equally dramatic. The improvement in clarity and color is astonishing as long as you have a TV screen large enough to see the difference. And this means that as people grow to like 4K quality they will migrate toward 4K content.

One thing that is clear is that 4K video will force cable companies to deliver video over the IP stream. A single 4K signal eats up an entire 6 MHz channel on a cable system, making it impossible for any cable system to broadcast more than a tiny number of 4K channels in the traditional way. And, as Comcast is obviously preparing to do, it also means all new settop boxes and a slew of new electronics at the cable headend to broadcast IPTV.

Of course, like any technology improvement we’ve seen lately, the improvements in video quality don’t stop with 4K. The Japanese plan to broadcast the 2020 Olympics in 8K video. That requires four times as much bandwidth as 4K video – meaning an 80 – 100 Mbps spare IP path. I’m sure that ways will be found to compress the transmission, but it’s still going to require a larger broadband pipe than what most homes buy today. It’s expected that by 2020 there will only be a handful of viewers in Japan and South Korea ready to view 8K video, but like anything dramatically new, the demand is sure to increase in the following decade.