Improving Rural Wireless Broadband

Microsoft has been implementing rural wireless broadband using white space spectrum – the slices of spectrum that sit between traditional TV channels. The company announced a partnership with ARK Multicasting to introduce a technology that will boost the efficiency of fixed wireless networks.

ARK Multicasting does just what its name implies. Today about 80% of home broadband usage is for video, and ISPs unicast video, meaning that they send a separate stream of a given video to each customer who wants to watch it. If ten customers in a wireless node are watching the same new Netflix show, the ISP sends out ten copies of the program. Today, even in a small wireless node of a few hundred customers, an ISP might be transmitting dozens of simultaneous copies of the most popular content in an evening. The ARK Multicasting technology will send out just one copy of the most popular content on the various OTT services like Netflix, Amazon Prime, and Apple TV. This one copy will be cached in an end-user storage device, and if a customer elects to watch the new content, they view it from the local cache.
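To make the savings concrete, here is a minimal sketch of the backhaul math; the stream bitrate and viewer count are assumed illustrations, not figures from ARK or Microsoft:

```python
# Rough sketch of the bandwidth math behind multicasting.
# Both numbers below are assumptions for illustration only.
STREAM_MBPS = 5   # assumed bitrate of one HD video stream
VIEWERS = 10      # customers in a node watching the same show

unicast_load = STREAM_MBPS * VIEWERS   # a separate stream per viewer
multicast_load = STREAM_MBPS           # one shared copy, cached locally

print(f"Unicast:   {unicast_load} Mbps")    # 50 Mbps
print(f"Multicast: {multicast_load} Mbps")  # 5 Mbps
print(f"Savings:   {100 * (1 - multicast_load / unicast_load):.0f}%")
```

The savings grow linearly with the number of simultaneous viewers, which is why the biggest benefit comes during peak evening hours.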

The net impact of multicasting should be a huge decrease in demand for video content during peak network hours. It would be interesting to know what percentage of video viewing in a given week comes from watching newly released content. I’m sure all of the OTT providers know that number, but I’ve never seen anybody talk about it. If anybody knows that statistic, please post in reply comments to this blog. Anecdotal evidence suggests the percentage is significant because people widely discuss new content on social media soon after it’s released.

The first trial of the technology is being done in conjunction with a Microsoft partner wireless network in Crockett, Texas. ARK Multicasting says that it is capable of transmitting 7-10 terabytes of content per month, which equates to 2,300 – 3,300 hours of HD video. We’ll have to wait to see the details of the deployment, but I assume that Microsoft will provide the hefty CPE capable of multi-terabyte storage – there are no current consumer set-top boxes with that much capacity. I also assume that cellphones and tablets will grab content using WiFi from the in-home storage device, since there are no tablets or cellphones with terabyte storage capacity.
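That hours figure is easy to sanity-check, assuming roughly 3 GB per hour of HD video (my assumption; actual bitrates vary by service and encoding):

```python
# Back-of-the-envelope check of ARK's 7-10 TB / 2,300-3,300 hour figures.
GB_PER_HOUR_HD = 3  # assumed average for HD video; actual bitrates vary

def hours_of_hd(terabytes, gb_per_hour=GB_PER_HOUR_HD):
    """Convert terabytes of stored video into hours of HD content."""
    return terabytes * 1000 / gb_per_hour

for tb in (7, 10):
    print(f"{tb} TB ≈ {hours_of_hd(tb):,.0f} hours of HD video")
# 7 TB ≈ 2,333 hours; 10 TB ≈ 3,333 hours - consistent with ARK's claim
```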

To be effective, ARK must delete older programming to make room for new content, meaning that the available local cache will always contain the latest and most popular content on the various OTT platforms.

There is an interesting side benefit of the technology. Viewers should be able to watch cached content even if they lose the connection to the ISP. Even after a big network outage due to a storm, ISP customers should still be able to watch many hours of popular content.

This is a smart idea. The weakest part of the network for many fixed wireless systems is the backhaul connection. When a backhaul connection gets stressed during the busiest hours of network usage all customers on a wireless node suffer from dropped packets, pixelization, and overall degraded service. Smart caching will remove huge amounts of repetitive video signals from the backhaul routes.

Layering this caching system onto any wireless system should free up peak evening network resources for other purposes. Fixed wireless systems are like most other broadband technologies where the bandwidth is shared between users of a given node. Anything that removes a lot of video downloading at peak times will benefit all users of a node.

The big OTT providers already do edge-caching of content. Providers like Netflix, Google, and Amazon park servers at or near ISPs to send local copies of the latest content. That caching saves a lot of bandwidth on the internet transport network. The ARK Multicasting technology will carry caching down to the customer level and bring the benefits of caching to the last-mile network.

A lot of questions come to mind about the nuances of the technology. Hopefully the downloads are done in the slow hours of the network so as not to add to network congestion. Will all popular content be sent to all customers – or just content from the services they subscribe to? The technology isn’t going to work for an ISP with data caps, because the caching means customers might be downloading multiple terabytes of data that may never be viewed.

I assume that if this technology works well, ISPs of all kinds will consider it. One interesting aspect of the concept is that this means getting ISPs back into the business of supplying boxes to customers – something that many ISPs avoid as much as possible. However, if it works as described, this caching could create a huge boost to last-mile networks by relieving a lot of repetitive traffic, particularly at peak evening hours. I remember local caching being tried a decade or more ago, but it never worked as promised. It will be interesting to see if Microsoft and ARK can pull this off.

Continued Lobbying for White Space Spectrum

In May, Microsoft submitted a petition to the FCC calling for some specific changes that will improve the performance of white space spectrum used to provide rural broadband. Microsoft has now taken part in eleven white space trials and makes these recommendations based upon the real-life performance of the white space spectrum. Not included in this filing is Microsoft’s long-standing request for the FCC to allocate three channels of unlicensed white space spectrum in every rural market. The FCC has long favored creating just one channel of unlicensed white space spectrum per market – depending on what’s available.

A number of other parties have subsequently filed comments in support of the Microsoft proposals, including the Wireless Internet Service Providers Association (WISPA), Next Century Cities, New America’s Open Technology Institute, Tribal Digital Village, and the Gigabit Libraries Network. One of the primary entities opposed to earlier Microsoft proposals is the National Association of Broadcasters (NAB), which worries about interference with TV stations from white space broadband. However, the group now says that it can support some of the new Microsoft proposals.

As a reminder, white space spectrum consists of the unused blocks of spectrum that are located between the frequencies assigned to television stations. Years ago, at the advent of broadcast television, the FCC provided wide buffers between channels to reflect the capability of the transmission technology at the time. Folks my age might remember back to the 1950s when neighboring TV stations would bleed into each other as ghost signals. As radio technology has improved, these buffers are now larger than needed and are larger than the buffers between other blocks of spectrum. White space spectrum uses those wide buffers.

Microsoft has proposed the following:

  • They are asking for higher power limits for transmissions in cases where the spectrum sits two or more channels away from a TV station signal. Higher power means greater transmission distances from a given transmitter.
  • They are asking for a small power increase for white space channels that sit next to an existing TV signal.
  • They are asking for white space transmitters to be placed as high as 500 meters above ground (1,640 feet). In the US there are only 71 existing towers taller than 1,000 feet.
  • Microsoft has shown that white space spectrum has a lot of promise for supporting agricultural IoT sensors. They are asking the FCC to change the white space rules to allow for narrowband transmission for this purpose.
  • Microsoft is asking that the spectrum be allowed to support portable broadband devices used for applications like school buses, agricultural equipment and IoT for tracking livestock.

The last two requests highlight the complexity of FCC spectrum rules. Most people would probably assume that spectrum licenses allow for any possible use of the spectrum. Instead, the FCC specifically defines how spectrum can be used, and the rural white space spectrum is currently only allowed for use as a hot spot or for fixed point-to-point data using receiving antennas at a home or business. The FCC has to modify the rules to allow use for IoT for farm sensors, tractors, and cows.

The various parties are asking the FCC to issue a Notice of Proposed Rulemaking to get comments on the Microsoft proposal. That’s when we’ll learn if any other major parties disagree with the Microsoft proposals. We already know that the cellular companies oppose providing multiple white space bands for anything other than cellular data, but these particular proposals are to allow the existing white space spectrum to operate more efficiently.

Is the FCC Really Solving the Digital Divide?

The FCC recently released the 2019 Broadband Deployment Report, with the subtitle: Digital Divide Narrowing Substantially. Chairman Pai is highlighting several facts that he says demonstrate that more households now have access to fast broadband. The report highlights rural fiber projects and other efforts that are closing the digital divide. The FCC concludes that broadband is being deployed on a reasonable and timely basis – a determination they are required to make every year by Congressional mandate. If the FCC ever concludes that broadband is not being deployed fast enough, they are required by law to rectify the situation.

To give the FCC some credit, there is a substantial amount of rural fiber being constructed – mostly from the ACAM funds being provided to small telephone companies, with some other fiber being deployed via rural broadband grants. Just to provide an example, two years ago Otter Tail County, Minnesota had no fiber-to-the-premises. Since then, the northern half of the county has seen fiber deployed by several telephone companies. This kind of fiber expansion is great news for rural counties, but counties like Otter Tail are now wondering how to upgrade the rest of the county.

Unfortunately, this FCC has zero credibility on the issue. The 2018 Broadband Deployment Report reached the same conclusion, but it turns out that there was a huge reporting error in the data supporting that report: the ISP Barrier Free had erroneously reported that it had deployed fiber to 62 million residents in New York. Even after the FCC recently corrected for that huge error, they still kept the original conclusion. This raises a question about what defines ‘reasonable and timely deployment of broadband’ if having fiber to 62 million fewer people doesn’t change the answer.

Anybody who works with rural broadband knows that the FCC databases are full of holes. The FCC statistics come from the data that ISPs report to the FCC each year about their broadband deployment. In many cases, ISPs exaggerate broadband speeds and report marketing speeds instead of actual speeds. The reporting system also contains a huge logical flaw in that if a census block has only one customer with fast broadband, the whole census block is assumed to have that speed.

I work with numerous rural counties where broadband is still largely non-existent outside of the county seat, and yet the FCC maps routinely show swaths of broadband availability in many rural counties where it doesn’t exist.

Researchers at Penn State recently looked at broadband coverage across rural Pennsylvania and found that the FCC maps grossly overstate the availability of broadband for huge parts of the state. Anybody who has followed the history of broadband in Pennsylvania already understands this. Years ago, Verizon reneged on a deal to introduce DSL everywhere – a promise made in exchange for becoming deregulated. Verizon ended up ignoring most of the rural parts of the state.

Microsoft has blown an even bigger hole in the FCC claims. Microsoft is in an interesting position in that customers in every corner of the country ask for online upgrades for Windows and Microsoft Office. Microsoft is able to measure the actual speed of customer downloads for tens of millions of upgrades every quarter. Microsoft reports that almost half of all downloads of their software are done at speeds slower than the FCC’s definition of broadband of 25/3 Mbps. Measuring a big download is the ultimate test of broadband speeds, since ISPs often boost download speeds for the first minute or two to give the impression they have fast broadband (and to fool speed tests). Longer downloads show the real speeds. Admittedly, some of Microsoft’s findings are due to households that subscribe to slower broadband to save money, but the Microsoft data still shows that a huge number of ISP connections underperform. The Microsoft figures are also understated, since they don’t include the many millions of households that can’t download software because they have no access to home broadband.

The FCC is voting this week to undertake a new mapping program to better define real broadband speeds. I’m guessing that effort will take at least a few years, giving the FCC more time to hide behind bad data. Even with a new mapping process, the data is still going to have many problems if it’s self-reported by the ISPs. I’m sure any new mapping effort will be an improvement, but I don’t hold out any hopes that the FCC will interpret better data to mean that broadband deployment is lagging.

How Bad is the Digital Divide?

The FCC says that approximately 25 million Americans living in rural areas don’t have access to an ISP product that would be considered broadband – currently defined as 25/3 Mbps. That number comes out of the FCC’s mapping efforts using data supplied by ISPs.

Microsoft tells a different story. They say that as many as 163 million Americans do not use the Internet at speeds that the FCC considers broadband. Microsoft might be in the best position of anybody in the industry to understand actual broadband performance, because the company can see data speeds for every customer that updates Windows or Microsoft Office – that’s a huge percentage of all computer users in the country and covers every inch of the country.

Downloading a big software update is probably one of the best ways possible to measure actual broadband performance. Software updates tend to be large files, and the Microsoft servers will transmit the files at the fastest speed a customer can accept. Since the software updates are large files, Microsoft gets to see the real ISP performance – not just the performance for the first minute of a download. Many ISPs use a burst technology that downloads relatively fast for the first minute or so, but then slows for the rest of a download – a customer’s true broadband speed is the one that kicks in after the burst is finished. The burst technology has a side benefit to ISPs in that it inflates performance on standard speed tests – but Microsoft gets to see the real story.
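A minimal sketch of why burst technology fools a short speed test but not a long download; the throughput samples below are invented for illustration:

```python
# Hypothetical per-minute throughput samples (Mbps) for a large download
# on a connection using burst technology. Numbers are invented examples.
samples = [60, 55, 12, 11, 12, 11, 12]

burst_speed = samples[0]                     # what a one-minute speed test sees
after_burst = samples[2:]                    # assume the burst ends after ~2 minutes
sustained_speed = sum(after_burst) / len(after_burst)

print(f"Speed-test impression: {burst_speed} Mbps")
print(f"Real sustained speed:  {sustained_speed:.1f} Mbps")
# Judged on sustained speed, this connection misses the FCC's 25 Mbps threshold.
print("Meets 25 Mbps download threshold:", sustained_speed >= 25)
```

A long software download averages over the whole transfer, so the burst contributes almost nothing to the measured speed.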

I’ve ranted about the FCC’s broadband statistics many times. There are numerous reasons why the FCC data is bad in rural America. Foremost, the data is self-reported by the big ISPs who have no incentive to tell the FCC or the public how poorly they are doing. It’s also virtually impossible to accurately report DSL speeds that vary from customer to customer according to the condition of specific copper wires and according to distance from the DSL core router. We also know that much of the reporting to the FCC represents marketing speeds or ‘up-to’ speeds that don’t reflect what customers really receive. Even the manner of reporting to the FCC, by Census block, distorts the results because when a few customers in a block get fast speeds the FCC assumes that everyone does.
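The census-block distortion is easy to see with a toy example; the block names and counts below are hypothetical:

```python
# Sketch of the census-block reporting flaw: if any household in a block
# is reported at 25/3 Mbps, the whole block counts as served.
# The data below is hypothetical.
blocks = {
    "block_A": {"households": 120, "households_with_25_3": 1},
    "block_B": {"households": 80,  "households_with_25_3": 0},
}

fcc_served = sum(b["households"] for b in blocks.values()
                 if b["households_with_25_3"] > 0)
actually_served = sum(b["households_with_25_3"] for b in blocks.values())

print(f"Counted as served under the FCC method: {fcc_served}")
print(f"Households actually served:             {actually_served}")
```

One fast customer turns 120 households into "served" in the official count, even though 119 of them may have nothing.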

To be fair, the Microsoft statistics measure the speeds customers are actually achieving, while the FCC is trying to measure broadband availability. The Microsoft data includes any households that elect to buy slower broadband products to save money. However, there are not 140 million Americans who purposefully buy slow broadband (roughly the difference between 163 million and 25 million). The Microsoft numbers tell us that actual speeds in the country are far worse than described by the FCC – and that for half of us they are slower than 25/3 Mbps. That is a sobering statistic, and it doesn’t just reflect that rural America is getting poor broadband, but also that many urban and suburban households aren’t achieving 25/3 Mbps.

I’ve seen many real-life examples of what Microsoft is telling us. At CCG Consulting we do community surveys for broadband, and we sometimes see whole communities where the speeds customers achieve are lower than the speeds advertised by the ISPs. We often see a lot more households claim to have no broadband or poor broadband than would be expected using the FCC mapping data. We constantly see residents in urban areas complain that broadband with a relatively fast advertised speed seems slow and sluggish.

Microsoft reported their findings to the FCC, but I expect the FCC to ignore them. The findings are a drastic departure from the narrative that the FCC is telling Congress and the public. I wrote a blog just a few weeks ago describing how the FCC is claiming that big ISPs are delivering the speeds that they market. Deep inside the recent reports the FCC admitted that DSL often wasn’t up to snuff – but the Microsoft statistics mean that a lot of cable companies and other ISPs are also under-delivering.

In my mind the Microsoft numbers invalidate almost everything that we think we know about broadband in the country. We are setting national broadband policy and goals based upon false numbers – and not numbers that are a little off, but numbers that are largely a fabrication. We have an FCC that is walking away from broadband regulation because it has painted a false narrative that most households in the country have good broadband. It would be a lot harder for politicians to allow broadband deregulation if the FCC admitted that over half of the homes in the country aren’t achieving the FCC definition of broadband.

The FCC has been tasked by Congress to find ways to improve broadband in areas that are unserved or underserved – with those categories being defined by the FCC maps. The Microsoft statistics tell us that there are huge numbers of underserved households, far higher than the FCC is recognizing. If the FCC was to acknowledge the Microsoft numbers, they’d have to declare a state of emergency for broadband. Sadly, the FCC has instead doomed millions of homes from getting better broadband by declaring these homes as already served with adequate broadband – something the Microsoft numbers say is not true.

The current FCC seems hellbent on washing its hands of broadband regulation, and the statistics it uses to describe the industry provide the needed cover to do so. To be fair, this current FCC didn’t invent the false narrative – it’s been in place since the creation of the national broadband maps in 2009. I, and many others, predicted back then that allowing the ISPs to self-report performance would put us right where we seem to be today – with statistics that aren’t telling the true story. Microsoft has now pulled back the curtain – but is there anybody in a position of authority willing to listen to the facts?

White Space Spectrum for Rural Broadband – Part II

Word travels fast in this industry, and in the last few days I’ve already heard from a few local initiatives that have been working to get rural broadband. They’re telling me that the naysayers in their communities are now pushing them to stop working on a broadband solution since Microsoft is going to bring broadband to rural America using white space spectrum. Microsoft is not going to be doing that, but some of the headlines could make you think they are.

Yesterday I talked about some of the issues that must be overcome in order to make white space spectrum viable. It certainly is no slam dunk that the spectrum is going to be viable for unlicensed use under the FCC spectrum plan. And as we’ve seen in the past, it doesn’t take a lot of uncertainty for a spectrum launch to fall flat on its face, something I’ve seen a few times just in recent decades.

With that in mind, let me discuss what Microsoft actually said in both their blog and whitepaper:

  • Microsoft will partner with telecom companies to bring broadband by 2022 to 2 million of the 23.4 million rural people that don’t have broadband today. I have to assume that these ‘partners’ are picking up a significant portion of the cost.
  • Microsoft hopes their effort will act as a catalyst for this to happen in the rest of the country. Microsoft is not themselves planning to fund or build to the remaining rural locations. They say that it’s going to take some combination of public grants and private money to make the numbers work. I just published a blog last Friday talking about the uncertainty of having a federal broadband grant program. Such funding may or may not ever materialize. I have to wonder where the commercial partners are going to be found who are willing to invest the $8 billion to $12 billion that Microsoft estimates this will cost.
  • Microsoft only thinks this is viable if the FCC follows their recommendation to allocate three channels of unlicensed white space spectrum in every rural market. The FCC has been favoring creating just one channel of unlicensed spectrum per market. The cellular companies that just bought this spectrum are screaming loudly to keep this at one channel per market. The skeptic in me says that Microsoft’s white paper and announcement is a clever way for Microsoft to put pressure on the FCC to free up more spectrum. I wonder if Microsoft will do anything if the FCC sticks with one channel per market.
  • Microsoft admits that for this idea to work that manufacturers must mass produce the needed components. This is the classic chicken-and-egg dilemma that has killed other deployments of new spectrum. Manufacturers won’t commit to mass producing the needed gear until they know there is a market, and carriers are going to be leery about using the technology until there are standardized mass market products available. This alone could kill this idea just as the FCC’s plans for the LMDS and MMDS spectrum died in the late 1990s.

I think it’s also important to discuss a few important points that this whitepaper doesn’t talk about:

  • Microsoft never mentions the broadband data speeds that can be delivered with this technology. The whitepaper does talk about being able to deliver broadband to about 10 miles from a given tower. One channel of white space spectrum can deliver about 30 Mbps up to 19 miles in a point-to-point radio shot. From what I know of the existing trials, these radios can deliver speeds of around 40 Mbps at six miles in a point-to-multipoint network, with less speed as the distance increases. Microsoft wants multiple channels in a market because bonding multiple channels could greatly increase speeds, perhaps to 100 Mbps. Even with one channel this is great broadband for a rural home that’s never had broadband. But the laws of physics mean these radios will never get faster, and those will still be the speeds offered a decade or two from now, when those speeds are going to feel like slow DSL does today. Too many broadband technology plans fail to recognize that our demand for broadband has been doubling every three years since 1980. What’s a pretty good speed today can become inadequate in a surprisingly short period of time.
  • Microsoft wants to be the company to operate the wireless databases behind this and other spectrum. That gives them a profit motive to spur the wireless spectrums to be used. There is nothing wrong with wanting to make money, but this is not a 100% altruistic offer on their part.
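The demand-doubling point in the first bullet is easy to quantify. The sketch below assumes a fixed-capacity link (the 40 Mbps baseline is an illustration drawn from the trial speeds mentioned above) and the three-year doubling rate:

```python
# If broadband demand doubles every three years, a fixed-speed link is
# outgrown quickly. The 40 Mbps baseline is an illustrative assumption.
def demand_multiple(years, doubling_period=3):
    """How many times today's demand we can expect after `years` years."""
    return 2 ** (years / doubling_period)

link_mbps = 40  # assumed fixed capacity of a white space link
for years_out in (3, 6, 9, 12):
    needed = link_mbps * demand_multiple(years_out)
    print(f"In {years_out:2d} years, equivalent demand: {needed:,.0f} Mbps")
# 80, 160, 320, and 640 Mbps - far beyond what the radios can ever deliver
```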

It’s hard to know what to conclude about this. Certainly Microsoft is not bringing broadband to all of rural America, but it sounds like they are willing to work towards making this happen. Still, we can’t ignore the huge hurdles that must be overcome to realize the vision painted by Microsoft in the white paper.

  • First, the technology has to work, and the interference issues I discussed in yesterday’s blog need to be solved before anybody will trust using this spectrum on an unlicensed basis. Nobody will use this spectrum if unlicensed users constantly get bumped off by licensed ones. The trials done for this spectrum to date were not done in a busy spectrum environment.
  • Second, somebody has to be willing to fund the $8B to $12B Microsoft estimates this will cost. There may or may not be any federal grants ever available for this technology, and there may never be commercial investors willing to spend that much on a new technology in rural America. The fact that Microsoft thinks this needs grant funding tells me that a business plan based upon this technology might not stand on its own.
  • Third, the chicken-and-egg issue of getting over the hurdle to have mass-produced gear for the spectrum must be overcome.
  • Finally, the FCC needs to adopt Microsoft’s view that there should be 3 unlicensed channels available everywhere – something that the license holders are strongly resisting. And from what I see from the current FCC, there is a good chance that they are going to side with the big cellular companies.

White Space Spectrum for Rural Broadband – Part I

Microsoft has announced that they want to use white space spectrum to bring broadband to rural America. In today’s and tomorrow’s blogs I’m going to discuss the latest thoughts on white space spectrum. Today I’ll discuss the hurdles that must be overcome to use the spectrum, and tomorrow I will discuss in more detail what I think Microsoft is really proposing.

The spectrum called white space has historically been used for the transmission of television through the air. In the recent FCC incentive auction, the FCC got a lot of TV stations to migrate their signals elsewhere to free up this spectrum for broadband uses. And in very rural America, much of this spectrum has been unused for decades.

Before Microsoft or anybody can use this spectrum on a widespread basis the FCC needs to determine how much of the spectrum will be available for unlicensed use. The FCC has said for several years that they want to allocate at least one channel of the spectrum for unlicensed usage in every market. But Microsoft and others have been pushing the FCC to allocate at least three channels per market and argue that the white space spectrum, if used correctly, could become as valuable as WiFi. It’s certainly possible that the Microsoft announcement was aimed at putting pressure on the FCC to provide more than one channel of spectrum per market.

The biggest issue that the FCC is wrestling with is interference. One of the best characteristics of white space spectrum is that it can travel great distances. The spectrum passes easily through things that kill higher frequencies. I remember as a kid being able to watch UHF TV stations in our basement that were broadcast from 90 miles away from a tall tower in Baltimore. It is the ability to travel significant distances that makes the spectrum promising for rural broadband. Yet these great distances also exacerbate the interference issues.

Today the spectrum has numerous users. There are still some TV stations that did not abandon the spectrum. There are two bands used for wireless microphones. There was a huge swath of this spectrum just sold to various carriers in the incentive auction that will probably be used to provide cellular data. And the FCC wants to create the unlicensed bands. To confound things, the mix between the various users varies widely by market.

Perhaps the best way to understand white space interference issues is to compare them to WiFi. One of the best characteristics (and many would also say the worst characteristic) of WiFi is that it allows multiple users to share the bandwidth at the same time. These multiple uses cause interference, so no user gets full use of the spectrum, but this sharing philosophy is what made WiFi so popular – except in the most crowded environments, anybody can create an application using WiFi and know that in most cases the bandwidth will be adequate.

But licensed spectrum doesn’t work that way and the FCC is obligated to protect all spectrum license holders. The FCC has proposed to solve the interference issues by requiring that radios be equipped so that unlicensed users will first dynamically check to make sure there are no licensed uses of the spectrum in the area. If they sense interference they cannot broadcast, or, once broadcasting, if they sense a licensed use they must abandon the signal.

This would all be done by using a database that identifies the licensed users in any given area, along with radios that can search for licensed usage before making a connection. This sort of frequency scheme has never been tried before. Rather than sharing spectrum like WiFi, the unlicensed user will only be allowed to use the spectrum when there is no interference. As you can imagine, the licensed cellular companies, which just spent billions for this spectrum, are worried about interference. But there are also concerns from churches, city halls, and musicians who use wireless microphones.
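In outline, the database check works like a lookup before every transmission. The sketch below is a deliberately simplified illustration with hypothetical markets and channels; the real scheme would also involve geolocation and sensing rules:

```python
# Minimal sketch of database-driven spectrum access: an unlicensed radio
# consults a registry of licensed users before transmitting and backs off
# if the channel is occupied. All entries are hypothetical examples.
licensed_use = {
    # (market, channel) -> licensed user occupying that channel
    ("Crockett_TX", 21): "TV station",
    ("Crockett_TX", 24): "wireless microphones",
}

def can_transmit(market, channel):
    """An unlicensed device may transmit only on channels with no licensed user."""
    return (market, channel) not in licensed_use

for ch in (21, 22, 24):
    status = "clear to use" if can_transmit("Crockett_TX", ch) else "blocked"
    print(f"Channel {ch}: {status}")
```

The catch the blog describes follows directly: as licensed entries fill the database in a market, fewer channels ever come back "clear to use" for unlicensed devices.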

It seems unlikely to me that unlicensed white space spectrum is going to be very attractive in an urban area with a lot of usage on the spectrum. If it’s hard to make or maintain an unlicensed connection, then nobody is going to try to use the spectrum in a crowded-spectrum environment.

The question that has yet to be answered is whether this kind of frequency plan will work in rural environments. There have been a few trials of this spectrum over the past five years, but those tests really proved the viability of the spectrum for providing broadband and did not test the databases or the interference issues in a busy spectrum environment. We’ll have to see what happens in rural America once the cellular companies start using the spectrum they just purchased. Because of the great distances at which the spectrum is viable, I can imagine a scenario where the use of licensed white space in a county seat might make it hard to use the spectrum in adjoining rural areas.

And like any new spectrum, there is a chicken-and-egg situation with the wireless equipment manufacturers. They are not likely to commit to making huge amounts of equipment, which would make this affordable, until they know that this is really going to work in rural areas. And we might not know if this is going to work in rural areas until there have been mass deployments. This same dilemma largely sank the LMDS and MMDS spectrums fifteen years ago.

The white space spectrum has huge potential. One channel can deliver 30 Mbps to the horizon on a point-to-point basis. But there is no guarantee that the unlicensed use of the spectrum is going to work well under the frequency plan the FCC is proposing.

New Video Format

Six major tech companies have joined together to create a new video format. Google, Amazon, Cisco, Microsoft, Netflix, and Mozilla have combined to create a new group called the Alliance for Open Media.

The goal of this group is to create a video format that is optimized for the web. Current video formats were created before there was widespread video using web browsers on a host of different devices.

The Alliance has listed several goals for the new format:

Open Source – Current video codecs are proprietary, making it impossible to tweak them for a given application.

Optimized for the Web – One of the most important features of the web is that there is no guarantee that all of the bits of a given transmission will arrive at the same time. This is the cause of many of the glitches one gets when trying to watch live video on the web. A web-optimized video codec will be allowed to plow forward with less than complete data. In most cases a small amount of missing bits won’t be noticeable to the eye, unlike the fits and starts that often come today when the video playback is delayed waiting for packets.

Scalable to Any Device and Any Bandwidth: One of the problems with existing codecs is that they are not flexible. For example, consider a time when you wanted to watch something in HD but didn’t have enough bandwidth. The only option today is to fall all the way back to an SD transmission at far lower quality. But between these two standards is a wide range of possible options, where a smart codec could analyze the available bandwidth and then maximize the transmission by choosing among the many variables within the codec. This means the codec could produce ‘almost HD’ rather than defaulting to something of much poorer quality.

Optimized for Computational Footprint and Hardware: This means that the manufacturers of devices would be able to tune the codec specifically for their devices. Not all smartphones, tablets, or other devices are the same, and manufacturers would be able to choose settings that maximize the video display for each of their devices.

Capable of Consistent, High-quality, Real-time Video: Real-time video is a far greater challenge than streaming video. Video content is not uniform in quality and characteristics, so there can be a major difference in quality between two different video streams watched on the same device. A flexible video codec could standardize quality, much in the same way that a sound system can level out volume differences between different audio streams.

Flexible for Both Commercial and Non-commercial Content: A significant percentage of videos watched today are user-generated rather than from commercial sources. It’s just as important to maximize the quality of user-generated videos like Vines as it is for commercial shows from Netflix.
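The ‘scalable to any bandwidth’ goal above is easiest to picture as a selection problem. Here is a minimal sketch of that idea; the rendition ladder, bitrates, and headroom factor are invented for illustration and don’t come from any real codec:

```python
RENDITIONS = [
    # (label, resolution, frames_per_second, required_mbps) -- illustrative values
    ("1080p60", (1920, 1080), 60, 8.0),
    ("1080p30", (1920, 1080), 30, 5.0),
    ("900p30",  (1600, 900),  30, 3.5),
    ("720p30",  (1280, 720),  30, 2.5),
    ("480p30",  (854, 480),   30, 1.2),  # the "all the way back to SD" fallback
]

def pick_rendition(measured_mbps, headroom=0.8):
    """Pick the richest rendition that fits the measured bandwidth,
    keeping some headroom for jitter."""
    budget = measured_mbps * headroom
    for rendition in RENDITIONS:  # ordered best-first
        if rendition[3] <= budget:
            return rendition
    return RENDITIONS[-1]  # nothing fits: take the bottom rung anyway

print(pick_rendition(7.0)[0])   # 5.6 Mbps budget -> "1080p30", not a fall to SD
print(pick_rendition(1.0)[0])   # nothing fits -> "480p30"
```

With a two-rung HD/SD choice, 7 Mbps of bandwidth would force a drop to SD; a ladder of intermediate options lets the viewer get ‘almost HD’ instead.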

There is no guarantee that this group can achieve all of these goals immediately, because that’s a tall task. But the combined power of these firms is certainly promising, and the potential for a new video codec that meets all of these goals is enormous: it would improve the quality of web video on all devices. I know that personally quality matters, which is why I tend to watch video from sources like Netflix and Amazon Prime; by definition streamed video can be of much higher and more consistent quality than real-time video. But I’ve noticed that my daughter has a far lower quality standard than I do and watches videos from a wide variety of sources. Improving web video, regardless of the source, would be a major breakthrough and would make watching video on the web enjoyable for a far larger percentage of users.

Universal Internet Access

While many of us are spending a lot of time trying to find a broadband solution for the unserved and underserved homes in the US, companies like Facebook, Google, and Microsoft are looking at ways of bringing some sort of broadband to everybody in the world.

Mark Zuckerberg of Facebook spoke to the United Nations this past week about the need to bring Internet access to the five billion people on the planet who do not have it. He says that bringing people Internet access is the most immediate way to help lift them out of abject poverty.

And one has to think he is right. Even very basic Internet access, which is what he and those other companies are trying to supply, will bring those billions into contact with the rest of the world. It’s hard to imagine how much untapped human talent resides in those many billions and access to the Internet can let the brightest of them contribute to the betterment of their communities and of mankind.

But on a more basic level, Internet access brings basic needs to poor communities. It opens up ecommerce and ebanking and other fundamental ways for people to become engaged in ways of making a living beyond a scratch existence. It opens up communities to educational opportunities, often for the first time. There are numerous stories already of rural communities around the world that have been transformed by access to the Internet.

One has to remember that the kind of access Zuckerberg is talking about is not the same as what we have in the developed countries. Here we are racing towards gigabit networks on fiber, while in these new places the connections are likely to be slow and made almost entirely via cheap smartphones. But you have to start somewhere.

Of course, there is also a bit of entrepreneurial competition going on here since each of these large corporations wants to be the face of the Internet for all of these new billions of potential customers. And so we see each of them taking different tactics and using different technologies to bring broadband to remote places.

Ultimately, the early broadband solutions brought to these new places will have to be replaced with some real infrastructure. As any population accepts Internet access they will quickly exhaust any limited broadband connection from a balloon, airplane, or satellite. And so there will come a clamor over time for the governments around the world to start building backbone fiber networks to get real broadband into the country and the region. I’ve talked to consultants who work with African nations and it is the lack of this basic fiber infrastructure that is one of the biggest limitations on getting adequate broadband to remote parts of the world.

And so hopefully this early work to bring some connectivity to remote places will be followed by a program to bring more permanent broadband infrastructure to the places that need it. It’s possible that broadband will soon be ranked right after food, water, and shelter as a necessity for a community. I expect people around the world to want broadband and to push their governments into making it a priority. I don’t even know how well we’ll do getting fiber to every region of our own country, so the poorer parts of the world face a monumental task over the coming decades to satisfy the desire for connectivity. But when people want something badly enough they generally find a way to get it, and so I think we are only a few years away from a time when most of the people on the planet will be clamoring for good Internet access.

 

Should the FCC Regulate OTT Video?

A funny thing happened on the way to making it easier for OTT video providers to get content. Some of the biggest potential providers of online content, like Amazon, Apple, and Microsoft, have told the FCC that they don’t think online video companies ought to be regulated as cable companies.

Of course, these few large companies don’t represent everybody who is interested in providing online video, so they are just one more faction in the debate. For example, FilmOn X recently got a court order allowing it to buy video as a regulated video provider, and in the past Aereo had asked for the same thing.

A lot of the issue boils down to companies that want to put local networks online or deliver them in some non-traditional way, as was being done by FilmOn X and Aereo. These kinds of providers are seeking the ability to force the local network stations to negotiate local retransmission agreements with them. Under current law the stations are not required to do so and are, in fact, refusing to do so.

The FCC is in a tough spot here because they don’t have a full quiver of tools at their disposal. The FCC’s hands are very much tied by the various sets of cable laws that have been passed by Congress over the years – the rules that define who is and is not a cable company, and more importantly, the rules and obligations that come with being a cable company. It will be interesting to see how far the FCC thinks it can stretch those rules to fit online programming, which was never anticipated when the rules were written.

I can certainly understand why the large companies mentioned above don’t want to be cable companies, because there are pages and pages of rules about what that means; the FCC is unlikely to be able to hold a company to just a few of those rules without also imposing ones that these companies don’t want.

For example, current cable law defines required tiers of service. Cable companies must have at least a basic and an expanded basic tier, and those are very narrowly defined. A basic tier includes all of the ‘must-carry’ local networks, and the expanded basic tier carries the things we think of as cable channels.

I think what the FCC has in mind is a set of rules that requires programmers to negotiate in good faith with online companies that want to buy their content. Certainly any company that wants to put content online today is completely at the mercy of programmers saying yes or no to giving them the content they want to carry. And there is nothing stopping the programmers from changing their minds if they see an OTT company becoming more successful than they would like.

So I would think that even Amazon, Apple, and Microsoft would like the ability to force the programmers to negotiate with them, but they obviously don’t want the other FCC rules they think would come along with that ability. Of course, these are very large companies with deep pockets, and one has to imagine they get a fairly decent hearing when they talk to programmers. The FCC’s real concern is not these giant companies but smaller companies that don’t have any ability to force the programmers to even talk to them. I think the FCC believes that if online content is to be successful there ought to be widespread competition and innovation online, not just content provided by a few giant tech companies along with other huge companies like Verizon.

Today the programmers hold most of the power in the industry. They are making a huge amount of money from the mega-subscription model where all of their content is forced upon US cable companies. And they have no reason to become more reasonable, because most of them are seeing gigantic growth in selling content overseas, so they have no real incentive to upset the apple cart in the US market.

If online content is to become a vibrant alternative and not just be expensive packages foisted on the public by a small group of huge corporations, then something has to change. I just don’t know how much the FCC can do realistically considering how they are hamstrung by the current cable laws.

Is the Universal Translator Right Around the Corner?

We all love a race. There is something about seeing somebody strive to win that gets our blood stirring. But there is one big race going on now that you’ve likely never heard of: the race to develop deep learning.

Deep learning is a specialized field of Artificial Intelligence research that looks to teach computers to learn by structuring them to mimic the neurons in the neocortex, that portion of our brain that does all of the thinking. The field has been around for decades, with limited success, and has needed faster computers to make any real headway.

The race is between a few firms that are working to be the best in the field. Microsoft and Google have gone back and forth with public announcements of breakthroughs, while other companies like Facebook and China’s Baidu are keeping their results quieter. It’s definitely a race, because breakthroughs are always compared to the other competitors.

The current public race deals with pattern recognition. The various teams are trying to get a computer to identify objects in a defined data set of millions of pictures. In September Google announced that it had the best results on this test, and just this month Microsoft said its computers not only beat Google’s but also did better than people do on the test.

All of the companies involved readily admit that their results are still far below what a human can do naturally in the real world, but they have made huge strides. One of the best-known demonstrations was done last summer by Google, which had its computer look at over 10 million YouTube videos and asked it to identify cats. The computer did twice as well as any previous test, which was particularly impressive since the Google team had not pre-defined what a cat was to the computer ahead of time.

There are some deep learning techniques in IBM’s Watson, the computer that beat the best champions on Jeopardy!. Watson is currently being groomed to help doctors make diagnoses, particularly in the third world where there is a huge shortage of doctors. IBM has also started selling time on the machine to anybody, and there is no telling all of the ways it is now being used.

Probably the most interesting current research is in teaching computers to learn on their own. This is done today by enabling multiple levels of ‘neurons’. The first layer learns the basic concept, like recognizing somebody speaking the letter S. Several first-layer inputs are fed to the second layer of neurons which can then recognize more complex patterns. This process is repeated until the computer is able to recognize complex sounds.
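The layering described above can be shown with a toy example. This is not a real speech model; the ‘neurons’, weights, and thresholds below are invented purely to illustrate how a second layer recognizes a pattern built from the outputs of the first layer:

```python
def neuron(inputs, weights, threshold):
    """Fire (return 1) when the weighted sum of inputs reaches the threshold."""
    return 1 if sum(i * w for i, w in zip(inputs, weights)) >= threshold else 0

def recognize_sustained_sound(frame):
    # Layer 1: two simple detectors over a 4-sample energy frame.
    onset = neuron(frame, [1, 1, 0, 0], 2)  # energy early in the frame
    tail  = neuron(frame, [0, 0, 1, 1], 2)  # energy late in the frame
    # Layer 2: a neuron fed the first layer's outputs recognizes the more
    # complex pattern "energy throughout the frame" -- a sustained sound.
    return neuron([onset, tail], [1, 1], 2)

print(recognize_sustained_sound([1, 1, 1, 1]))  # both layer-1 detectors fire -> 1
print(recognize_sustained_sound([1, 1, 0, 0]))  # only the onset detector fires -> 0
```

In real systems the weights are learned rather than hand-set, and many such layers are stacked, but the principle is the same: each layer recognizes combinations of the patterns found by the layer below it.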

The computers being used for this research are already impressive. The Google computer that did well learning to recognize cats had a billion connections and was 70% better at recognizing objects than any prior computer. For now, the breakthroughs in the field are being accomplished by applying brute computing force; the cat-test computer used over 16,000 computer processors, something that only a company like Google or Microsoft has available.

Computer scientists all agree that we are probably still a few decades away from a time when computers can actually learn and think on their own. We need a few more turns of Moore’s Law for the speed of computers to increase and the size of the processors to decrease. But that does not mean that there are not a lot of current real life applications that can benefit from the current generation of deep learning computers.

There are real-world benefits of the research today. For instance, Google has used this research to improve the speech recognition in Android smartphones. But what is even more exciting is where this research is headed for the future. Sergey Brin says that his ultimate goal is to build a benign version of HAL from 2001: A Space Odyssey. It’s likely to take multiple approaches in addition to deep learning to get to such a computer.

But long before a HAL-like computer we could have some very useful real-world applications from deep learning. For instance, computers could monitor complex machines like electric generators and predict problems before they occur. They could monitor traffic patterns and change traffic lights in real time to eliminate traffic jams. They could enable self-driving cars. And they could produce a universal translator that lets people speaking different languages converse in real time. In fact, in October 2012, Microsoft researcher Rick Rashid gave a lecture in China. A deep learning computer transcribed his spoken lecture into written text with a 7% error rate, then translated the text into Chinese and spoke it to the crowd while simulating his voice. It seems that with deep learning we are not far away from the universal translator promised to us by science fiction.
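The lecture demo amounts to chaining three systems together. Here is a minimal sketch of that pipeline; the stub functions and the tiny two-word phrase table are invented stand-ins for the real speech-recognition, translation, and speech-synthesis models:

```python
def speech_to_text(audio):
    """Stand-in for a speech recognizer (the stage with the ~7% error rate)."""
    return audio["spoken_words"]  # pretend we transcribed perfectly

TINY_PHRASE_TABLE = {"hello": "ni hao", "world": "shijie"}  # illustrative only

def translate(text, table=TINY_PHRASE_TABLE):
    """Stand-in for machine translation: naive word-by-word lookup."""
    return " ".join(table.get(word, word) for word in text.split())

def text_to_speech(text, voice):
    """Stand-in for synthesis that simulates the original speaker's voice."""
    return f"[{voice} voice] {text}"

def universal_translator(audio, voice):
    # The three stages run in sequence, just as in the lecture demo:
    # transcribe, then translate, then speak in the speaker's own voice.
    return text_to_speech(translate(speech_to_text(audio)), voice)

print(universal_translator({"spoken_words": "hello world"}, "Rashid"))
# -> [Rashid voice] ni hao shijie
```

The point of the sketch is the architecture: each stage is an independent model, so every improvement deep learning brings to recognition, translation, or synthesis improves the translator as a whole.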