Is the FCC Really Solving the Digital Divide?

The FCC recently released the 2019 Broadband Deployment Report, with the subtitle: Digital Divide Narrowing Substantially. Chairman Pai is highlighting several facts that he says demonstrate that more households now have access to fast broadband. The report highlights rural fiber projects and other efforts that are closing the digital divide. The FCC concludes that broadband is being deployed on a reasonable and timely basis – a determination they are required to make every year by Congressional mandate. If the FCC ever concludes that broadband is not being deployed fast enough, they are required by law to rectify the situation.

To give the FCC some credit, there is a substantial amount of rural fiber being constructed – mostly from the ACAM funds being provided to small telephone companies, with some other fiber being deployed via rural broadband grants. Just to provide an example, two years ago Otter Tail County, Minnesota had no fiber-to-the-premises. Since then, the northern half of the county has seen fiber deployed by several telephone companies. This kind of fiber expansion is great news for rural counties, but counties like Otter Tail are now wondering how to upgrade the rest of their county.

Unfortunately, this FCC has zero credibility on the issue. The 2018 Broadband Deployment Report reached the same conclusion, but it turns out that there was a huge reporting error in the data supporting that report: the ISP Barrier Free had erroneously reported that it had deployed fiber to 62 million residents in New York. Even after the FCC recently corrected for that huge error, it kept the original conclusion. This raises a question about what defines ‘reasonable and timely deployment of broadband’ if having fiber reach 62 million fewer people doesn’t change the answer.

Anybody who works with rural broadband knows that the FCC databases are full of holes. The FCC statistics come from the data that ISPs report to the FCC each year about their broadband deployment. In many cases, ISPs exaggerate broadband speeds and report marketing speeds instead of actual speeds. The reporting system also contains a huge logical flaw in that if a census block has only one customer with fast broadband, the whole census block is assumed to have that speed.
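
To illustrate that flaw, here is a minimal sketch in Python, with made-up block IDs and speeds, showing how crediting a whole census block with its single fastest connection overstates coverage:

```python
# Hypothetical example: one fast subscriber makes an entire census block
# look 'served' even though most locations in the block are not.
BROADBAND_THRESHOLD_MBPS = 25  # the FCC's 25/3 Mbps download benchmark

reported_speeds_mbps = {
    "block_A": [100, 3, 3, 1, 1],  # one fiber customer, four slow DSL lines
    "block_B": [2, 2, 1],          # nobody here meets the threshold
}

for block, speeds in reported_speeds_mbps.items():
    # The reporting flaw: the whole block is credited with its fastest connection.
    block_counts_as_served = max(speeds) >= BROADBAND_THRESHOLD_MBPS
    locations_actually_served = sum(s >= BROADBAND_THRESHOLD_MBPS for s in speeds)
    print(f"{block}: reported as served = {block_counts_as_served}, "
          f"but only {locations_actually_served} of {len(speeds)} locations have 25 Mbps")
```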

I work with numerous rural counties where broadband is still largely non-existent outside of the county seat, and yet the FCC maps routinely show swaths of broadband availability in many rural counties where it doesn’t exist.

Researchers at Penn State recently looked at broadband coverage across rural Pennsylvania and found that the FCC maps grossly overstate the availability of broadband for huge parts of the state. Anybody who has followed the history of broadband in Pennsylvania already understands this. Years ago, Verizon reneged on a deal to introduce DSL everywhere – a promise made in exchange for becoming deregulated. Verizon ended up ignoring most of the rural parts of the state.

Microsoft has blown an even bigger hole in the FCC claims. Microsoft is in an interesting position in that customers in every corner of the country ask for online upgrades for Windows and Microsoft Office. Microsoft is able to measure the actual speed of customer downloads for tens of millions of upgrades every quarter. Microsoft reports that almost half of all downloads of their software are done at speeds slower than the FCC’s definition of broadband of 25/3 Mbps. Measuring a big download is the ultimate test of broadband speeds since ISPs often boost download speeds for the first minute or two to give the impression they have fast broadband (and to fool speed tests). Longer downloads show the real speeds. Admittedly some of Microsoft’s findings are due to households that subscribe to slower broadband to save money, but the Microsoft data still shows that a huge number of ISP connections underperform. The Microsoft figures are also understated since they don’t include the many millions of households that can’t download software because they have no access to home broadband.

The FCC is voting this week to undertake a new mapping program to better define real broadband speeds. I’m guessing that effort will take at least a few years, giving the FCC more time to hide behind bad data. Even with a new mapping process, the data is still going to have many problems if it’s self-reported by the ISPs. I’m sure any new mapping effort will be an improvement, but I don’t hold out any hopes that the FCC will interpret better data to mean that broadband deployment is lagging.

How Bad is the Digital Divide?

The FCC says that approximately 25 million Americans living in rural areas don’t have access to an ISP product that would be considered as broadband – currently defined as 25/3 Mbps. That number comes out of the FCC’s mapping efforts using data supplied by ISPs.

Microsoft tells a different story. They say that as many as 163 million Americans do not use the Internet at speeds that the FCC considers as broadband. Microsoft might be in the best position of anybody in the industry to understand actual broadband performance because the company can see data speeds for every customer that updates Windows or Microsoft Office – that’s a huge percentage of all computer users in the country and covers every inch of the country.

Downloading a big software update is probably one of the best ways possible to measure actual broadband performance. Software updates tend to be large files, and the Microsoft servers will transmit the files at the fastest speed a customer can accept. Since the software updates are large files, Microsoft gets to see the real ISP performance – not just the performance for the first minute of a download. Many ISPs use a burst technology that downloads relatively fast for the first minute or so, but then slows for the rest of a download – a customer’s true broadband speed is the one that kicks in after the burst is finished. The burst technology has a side benefit to ISPs in that it inflates performance on standard speed tests – but Microsoft gets to see the real story.
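
As a rough illustration of the difference between a burst rate and a sustained rate, here is a minimal sketch of that kind of measurement. It is not a real test tool: the URL is a placeholder, and the 60-second window is just an assumption about how long an ISP’s burst lasts.

```python
import time
import urllib.request

URL = "https://example.com/large-update.bin"  # placeholder for a large software update
BURST_WINDOW_SECONDS = 60                     # assumed length of the ISP's speed burst
CHUNK_SIZE = 64 * 1024

start = time.monotonic()
bytes_after_burst = 0
sustained_start = None

with urllib.request.urlopen(URL) as response:
    while True:
        chunk = response.read(CHUNK_SIZE)
        if not chunk:
            break
        if time.monotonic() - start < BURST_WINDOW_SECONDS:
            continue  # still inside the burst window; don't count these bytes
        if sustained_start is None:
            sustained_start = time.monotonic()
        bytes_after_burst += len(chunk)

if sustained_start is None:
    print("The download finished inside the burst window; use a bigger file.")
else:
    seconds = time.monotonic() - sustained_start
    mbps = (bytes_after_burst * 8) / (seconds * 1_000_000)
    print(f"Sustained speed after the burst: {mbps:.1f} Mbps")
```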

I’ve ranted about the FCC’s broadband statistics many times. There are numerous reasons why the FCC data is bad in rural America. Foremost, the data is self-reported by the big ISPs who have no incentive to tell the FCC or the public how poorly they are doing. It’s also virtually impossible to accurately report DSL speeds that vary from customer to customer according to the condition of specific copper wires and according to distance from the DSL core router. We also know that much of the reporting to the FCC represents marketing speeds or ‘up-to’ speeds that don’t reflect what customers really receive. Even the manner of reporting to the FCC, by Census block, distorts the results because when a few customers in a block get fast speeds the FCC assumes that everyone does.

To be fair, the Microsoft statistics measure the speeds customers are actually achieving, while the FCC is trying to measure broadband availability. The Microsoft data includes any households that elect to buy slower broadband products to save money. However, there are not roughly 140 million people (the difference between the 163 million and 25 million figures) who purposely buy slow broadband. The Microsoft numbers tell us that actual speeds in the country are far worse than described by the FCC – for roughly half of us, slower than 25/3 Mbps. That is a sobering statistic and doesn’t just reflect that rural America is getting poor broadband, but also that many urban and suburban households aren’t achieving 25/3 Mbps.

I’ve seen many real-life examples of what Microsoft is telling us. At CCG Consulting we do community surveys for broadband and we sometimes see whole communities where the speeds customers actually achieve are lower than the speeds advertised by the ISPs. We often see a lot more households claim to have no broadband or poor broadband than would be expected using the FCC mapping data. We constantly see residents in urban areas complain that broadband with a relatively fast advertised speed seems slow and sluggish.

Microsoft reported their findings to the FCC, but I expect the FCC to ignore them, because they are a drastic departure from the narrative that the FCC is telling Congress and the public. I wrote a blog just a few weeks ago describing how the FCC is claiming that big ISPs are delivering the speeds that they market. Deep inside the recent reports the FCC admitted that DSL often wasn’t up to snuff – but the Microsoft statistics mean that a lot of cable companies and other ISPs are also under-delivering.

In my mind the Microsoft numbers invalidate almost everything that we think we know about broadband in the country. We are setting national broadband policy and goals based upon false numbers – and not numbers that are a little off, but numbers that are largely a fabrication. We have an FCC that is walking away from broadband regulation because it has painted a false narrative that most households in the country have good broadband. It would be a lot harder for politicians to allow broadband deregulation if the FCC admitted that over half of the homes in the country aren’t achieving the FCC definition of broadband.

The FCC has been tasked by Congress to find ways to improve broadband in areas that are unserved or underserved – with those categories being defined by the FCC maps. The Microsoft statistics tell us that there are huge numbers of underserved households, far higher than the FCC is recognizing. If the FCC was to acknowledge the Microsoft numbers, they’d have to declare a state of emergency for broadband. Sadly, the FCC has instead doomed millions of homes from getting better broadband by declaring these homes as already served with adequate broadband – something the Microsoft numbers say is not true.

The current FCC seems hellbent on washing their hands of broadband regulation, and the statistics they use to describe the industry provide the needed cover for them to do so. To be fair, this current FCC didn’t invent the false narrative – it’s been in place since the creation of the national broadband maps in 2009. I, and many others, predicted back then that allowing the ISPs to self-report performance would put us right where we seem to be today – with statistics that aren’t telling the true story. Microsoft has now pulled back the curtain – but is there anybody in a position of authority willing to listen to the facts?

White Space Spectrum for Rural Broadband – Part II

Word travels fast in this industry, and in the last few days I’ve already heard from a few local initiatives that have been working to get rural broadband. They’re telling me that the naysayers in their communities are now pushing them to stop working on a broadband solution since Microsoft is going to bring broadband to rural America using white space spectrum. Microsoft is not going to be doing that, but some of the headlines could make you think they are.

Yesterday I talked about some of the issues that must be overcome in order to make white space spectrum viable. It certainly is no slam dunk that the spectrum is going to be viable for unlicensed use under the FCC spectrum plan. And it doesn’t take a lot of uncertainty for a new spectrum launch to fall flat on its face – something I’ve seen a few times in recent decades.

With that in mind, let me discuss what Microsoft actually said in both their blog and whitepaper:

  • Microsoft will partner with telecom companies to bring broadband by 2022 to 2 million of the 23.4 million rural people that don’t have broadband today. I have to assume that these ‘partners’ are picking up a significant portion of the cost.
  • Microsoft hopes their effort will act as a catalyst for this to happen in the rest of the country. Microsoft is not themselves planning to fund or build to the remaining rural locations. They say that it’s going to take some combination of public grants and private money to make the numbers work. I just published a blog last Friday talking about the uncertainty of having a federal broadband grant program. Such funding may or may not ever materialize. I have to wonder where the commercial partners are going to be found who are willing to invest the $8 billion to $12 billion that Microsoft estimates this will cost.
  • Microsoft only thinks this is viable if the FCC follows their recommendation to allocate three channels of unlicensed white space spectrum in every rural market. The FCC has been favoring creating just one channel of unlicensed spectrum per market. The cellular companies that just bought this spectrum are screaming loudly to keep this at one channel per market. The skeptic in me says that Microsoft’s white paper and announcement are a clever way for Microsoft to put pressure on the FCC to free up more spectrum. I wonder if Microsoft will do anything if the FCC sticks with one channel per market.
  • Microsoft admits that for this idea to work, manufacturers must mass-produce the needed components. This is the classic chicken-and-egg dilemma that has killed other deployments of new spectrum. Manufacturers won’t commit to mass producing the needed gear until they know there is a market, and carriers are going to be leery about using the technology until there are standardized mass-market products available. This alone could kill the idea, just as the FCC’s plans for the LMDS and MMDS spectrum died in the late 1990s.

I think it’s also important to discuss a few important points that this whitepaper doesn’t talk about:

  • Microsoft never mentions the broadband data speeds that can be delivered with this technology. The whitepaper does talk about being able to deliver broadband to about 10 miles from a given tower. One channel of white space spectrum can deliver about 30 Mbps up to 19 miles in a point-to-point radio shot. From what I know of the existing trials, these radios can deliver speeds of around 40 Mbps at six miles in a point-to-multipoint network, with less speed as the distance increases. Microsoft wants multiple channels in a market because bonding multiple channels could greatly increase speeds, to perhaps 100 Mbps. Even with one channel this is great broadband for a rural home that’s never had broadband. But the laws of physics mean these radios will never get faster, and those will still be the speeds offered a decade or two from now, when they are going to feel like slow DSL does today. Too many broadband technology plans fail to recognize that our demand for broadband has been doubling every three years since 1980 (see the back-of-the-envelope sketch after this list). Speeds that are pretty good today can become inadequate in a surprisingly short period of time.
  • Microsoft wants to be the company that operates the wireless databases behind this and other spectrum. That gives them a profit motive to spur the use of these spectrum bands. There is nothing wrong with wanting to make money, but this is not a 100% altruistic offer on their part.
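
To put that doubling in perspective, here is the back-of-the-envelope sketch referenced above. The starting points are assumptions: household demand begins at today’s 25 Mbps definition of broadband and doubles every three years, while the fixed wireless product stays at roughly the 40 Mbps seen in the trials.

```python
# Back-of-the-envelope: a fixed-speed radio versus demand that doubles every 3 years.
fixed_speed_mbps = 40      # assumed point-to-multipoint speed from the trials
demand_mbps = 25           # assume today's demand equals the FCC's 25 Mbps definition
doubling_period_years = 3

for year in range(0, 16, doubling_period_years):
    status = "keeps up" if fixed_speed_mbps >= demand_mbps else "falls short"
    print(f"Year {year:2d}: demand ~{demand_mbps:,.0f} Mbps, radio at {fixed_speed_mbps} Mbps ({status})")
    demand_mbps *= 2
```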

It’s hard to know what to conclude about this. Certainly Microsoft is not bringing broadband to all of rural America, but it sounds like they are willing to help push this forward. Still, we can’t ignore the huge hurdles that must be overcome to realize the vision painted by Microsoft in the white paper:

  • First, the technology has to work, and the interference issues I discussed in yesterday’s blog need to be solved before anybody will trust using this spectrum on an unlicensed basis. Nobody will use this spectrum if unlicensed users constantly get bumped off by licensed ones. The trials done for this spectrum to date were not done in a busy spectrum environment.
  • Second, somebody has to be willing to fund the $8B to $12B Microsoft estimates this will cost. There may or may not be any federal grants ever available for this technology, and there may never be commercial investors willing to spend that much on a new technology in rural America. The fact that Microsoft thinks this needs grant funding tells me that a business plan based upon this technology might not stand on its own.
  • Third, the chicken-and-egg issue of getting over the hurdle to have mass-produced gear for the spectrum must be overcome.
  • Finally, the FCC needs to adopt Microsoft’s view that there should be three unlicensed channels available everywhere – something that the license holders are strongly resisting. And from what I see of the current FCC, there is a good chance that they are going to side with the big cellular companies.

White Space Spectrum for Rural Broadband – Part I

Microsoft has announced that they want to use white space spectrum to bring broadband to rural America. In today’s and tomorrow’s blogs I’m going to discuss the latest thoughts on white space spectrum. Today I’ll discuss the hurdles that must be overcome to use the spectrum, and tomorrow I will discuss in more detail what I think Microsoft is really proposing.

The spectrum being called white space has historically been used to transmit television through the air. In the recent incentive auction the FCC got a lot of TV stations to migrate their signals elsewhere to free up this spectrum for broadband uses. And in very rural America much of this spectrum has sat unused for decades.

Before Microsoft or anybody can use this spectrum on a widespread basis the FCC needs to determine how much of the spectrum will be available for unlicensed use. The FCC has said for several years that they want to allocate at least one channel of the spectrum for unlicensed usage in every market. But Microsoft and others have been pushing the FCC to allocate at least three channels per market and argue that the white space spectrum, if used correctly, could become as valuable as WiFi. It’s certainly possible that the Microsoft announcement was aimed at putting pressure on the FCC to provide more than one channel of spectrum per market.

The biggest issue that the FCC is wrestling with is interference. One of the best characteristics of white space spectrum is that it can travel great distances. The spectrum passes easily through things that kill higher frequencies. I remember as a kid being able to watch UHF TV stations in our basement that were broadcast from a tall tower in Baltimore, 90 miles away. It is this ability to travel significant distances that makes the spectrum promising for rural broadband. Yet those great distances also exacerbate the interference issues.

Today the spectrum has numerous users. There are still some TV stations that did not abandon the spectrum. There are two bands used for wireless microphones. There was a huge swath of this spectrum just sold to various carriers in the incentive auction that will probably be used to provide cellular data. And the FCC wants to create the unlicensed bands. To confound things, the mix between the various users varies widely by market.

Perhaps the best way to understand white space interference issues is to compare it to WiFi. One of the best characteristics (and many would also say one of the worst characteristics) of WiFi is that it allows multiple users to share the bandwidth at the same time. These multiple uses cause interference, so no user gets full use of the spectrum, but this sharing philosophy is what made WiFi so popular – except in the most crowded environments, anybody can create an application using WiFi knowing that in most cases the bandwidth will be adequate.

But licensed spectrum doesn’t work that way, and the FCC is obligated to protect all spectrum license holders. The FCC has proposed to solve the interference issues by requiring radios that first dynamically check to make sure there are no licensed uses of the spectrum in the area. If unlicensed users sense a licensed use they cannot broadcast, and once broadcasting, if they sense a licensed use they must abandon the channel.

This would all be done by using a database that identifies the licensed users in any given area, along with radios that can search for licensed usage before making a connection. This sort of frequency scheme has never been tried before. Rather than sharing spectrum like WiFi, the unlicensed user will only be allowed to use the spectrum when there is no interference. As you can imagine, the licensed cellular companies, which just spent billions for this spectrum, are worried about interference. But there are also concerns from churches, city halls, and musicians who use wireless microphones.
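
To make the proposed behavior concrete, here is a minimal sketch of that control logic. Every name in it is a hypothetical stand-in – the actual database interface, radio commands, and re-check interval have not been settled – but it shows the basic rule: consult the database, transmit only on clear channels, and vacate a channel the moment a licensed user appears.

```python
import time

def licensed_user_present(database, channel, location):
    """Hypothetical lookup against the white space database for this area."""
    return (channel, location) in database

def run_unlicensed_radio(database, channels, location, recheck_seconds=60):
    """Transmit only on channels the database shows as clear, and vacate a
    channel as soon as a licensed user appears in the area."""
    active = set()
    while True:
        for channel in channels:
            if licensed_user_present(database, channel, location):
                if channel in active:
                    active.discard(channel)
                    print(f"Licensed use detected - vacating channel {channel}")
            elif channel not in active:
                active.add(channel)
                print(f"Channel {channel} is clear - transmitting")
        time.sleep(recheck_seconds)  # re-check the database periodically

# Example: one licensed user (a TV station on channel 31) near this location.
# run_unlicensed_radio({(31, "county_seat")}, channels=[29, 31, 36], location="county_seat")
```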

It seems unlikely to me that unlicensed white space spectrum is going to be very attractive in urban areas where there is already a lot of usage on the spectrum. If it’s hard to make or maintain an unlicensed connection, then nobody is going to try to use the spectrum in a crowded-spectrum environment.

The question that has yet to be answered is whether this kind of frequency plan will work in rural environments. There have been a few trials of this spectrum over the past five years, but those tests really proved the viability of the spectrum for providing broadband and did not test the databases or the interference issue in a busy spectrum environment. We’ll have to see what happens in rural America once the cellular companies start using the spectrum they just purchased. Because of the great distances at which the spectrum is viable, I can imagine a scenario where the use of licensed white space in a county seat might make it hard to use the spectrum in adjoining rural areas.

And like any new spectrum, there is a chicken-and-egg situation with the wireless equipment manufacturers. They are not likely to commit to making huge amounts of equipment, which would make this affordable, until they know that this is really going to work in rural areas. And we might not know if this is going to work in rural areas until there have been mass deployments. This same dilemma largely sank the LMDS and MMDS spectrum bands fifteen years ago.

The white space spectrum has huge potential. One channel can deliver 30 Mbps to the horizon on a point-to-point basis. But there is no guarantee that the unlicensed use of the spectrum is going to work well under the frequency plan the FCC is proposing.

New Video Format

Six major tech companies have joined together to create a new video format. Google, Amazon, Cisco, Microsoft, Netflix, and Mozilla have combined to create a new group called the Alliance for Open Media.

The goal of this group is to create a video format that is optimized for the web. Current video formats were created before there was widespread video viewing through web browsers on a host of different devices.

The Alliance has listed several goals for the new format:

Open Source: Current video codecs are proprietary, making it impossible to tweak them for a given application.

Optimized for the Web: One of the most important features of the web is that there is no guarantee that all of the bits of a given transmission will arrive at the same time. This is the cause of many of the glitches one gets when trying to watch live video on the web. A web-optimized video codec would be allowed to plow forward with less than complete data. In most cases a small number of missing bits won’t be noticeable to the eye, unlike the fits and starts that often come today when the video playback is delayed waiting for packets.

Scalable to any Device and any Bandwidth: One of the problems with existing codecs is that they are not flexible. For example, consider a time when you wanted to watch something in HD but didn’t have enough bandwidth. The only option today is to fall all the way back to an SD transmission, at a far lower quality. But in between these two standards is a wide range of possible options where a smart codec could analyze the bandwidth available and then maximize the transmission by choosing different options among the many variables within a codec (a simple sketch of this idea follows this list of goals). This means you could produce ‘almost HD’ rather than defaulting to something of much poorer quality.

Optimized for Computational Footprint and Hardware: This means that the manufacturers of devices would be able to optimize the codec specifically for their devices. Not all smartphones, tablets, or other devices are the same, and manufacturers would be able to choose settings that maximize the video display for each of their devices.

Capable of Consistent, High-quality, Real-time Video: Real-time video is a far greater challenge than streamed video. Video content is not uniform in quality and characteristics, and there is thus a major difference in quality between watching two different video streams on the same device. A flexible video codec could standardize quality much in the same way that a sound system can level out volume differences between different audio streams.

Flexible for Both Commercial and Non-commercial Content: A significant percentage of videos watched today are user-generated and not from commercial sources. It’s just as important to maximize the quality of Vine videos as it is for commercial shows from Netflix.
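
Here is the simple sketch promised above for the ‘scalable to any device and any bandwidth’ goal: a player picking the best rendition the measured bandwidth can support instead of dropping all the way to SD. The ladder of renditions and bitrates is invented for illustration and is not part of the Alliance’s specification.

```python
# Hypothetical encoding ladder: (label, bandwidth needed in Mbps).
RENDITIONS = [
    ("1080p HD", 8.0),
    ("900p almost-HD", 6.0),
    ("720p", 4.5),
    ("540p", 2.5),
    ("480p SD", 1.5),
]

def pick_rendition(measured_mbps):
    """Return the highest-quality rendition the current bandwidth can sustain."""
    for label, required_mbps in RENDITIONS:
        if measured_mbps >= required_mbps:
            return label
    return RENDITIONS[-1][0]  # fall back to the lowest rung

for bandwidth in (9.0, 6.3, 1.0):
    print(f"{bandwidth} Mbps available -> {pick_rendition(bandwidth)}")
```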

There is no guarantee that this group can achieve all of these goals immediately, because that’s a pretty tall task. But the power of these various firms combined certainly is promising. The potential for a new video codec that meets all of these goals is enormous. It would improve the quality of web videos on all devices. I know that personally, quality matters and this is why I tend to watch videos from sources like Netflix and Amazon Prime. By definition streamed video can be of much higher and more consistent quality than real-time video. But I’ve noticed that my daughter has a far lower standard of quality than I do and watches videos from a wide variety of sources. Improving web video, regardless of the source, will be a major breakthrough and will make watching video on the web enjoyable to a far larger percentage of users.

Universal Internet Access

While many of us are spending a lot of time trying to find a broadband solution for the unserved and underserved homes in the US, companies like Facebook, Google, and Microsoft are looking at ways of bringing some sort of broadband to everybody in the world.

Mark Zuckerberg of Facebook spoke to the United Nations this past week and talked about the need to bring Internet access to the five billion people on the planet that do not have it. He says that bringing Internet access to people is the most immediate way to help lift people out of abject poverty.

And one has to think he is right. Even very basic Internet access, which is what he and those other companies are trying to supply, will bring those billions into contact with the rest of the world. It’s hard to imagine how much untapped human talent resides in those many billions and access to the Internet can let the brightest of them contribute to the betterment of their communities and of mankind.

But on a more basic level, Internet access brings basic needs to poor communities. It opens up ecommerce and ebanking and other fundamental ways for people to become engaged in ways of making a living beyond a scratch existence. It opens up communities to educational opportunities, often for the first time. There are numerous stories already of rural communities around the world that have been transformed by access to the Internet.

One has to remember that the kind of access Zuckerberg is talking about is not the same as what we have in the developed countries. Here we are racing towards gigabit networks on fiber, while in these new places the connections are likely to be slow connections almost entirely via cheap smartphones. But you have to start somewhere.

Of course, there is also a bit of entrepreneurial competition going on here since each of these large corporations wants to be the face of the Internet for all of these new billions of potential customers. And so we see each of them taking different tactics and using different technologies to bring broadband to remote places.

Ultimately, the early broadband solutions brought to these new places will have to be replaced with some real infrastructure. As any population accepts Internet access they will quickly exhaust any limited broadband connection from a balloon, airplane, or satellite. And so there will come a clamor over time for the governments around the world to start building backbone fiber networks to get real broadband into the country and the region. I’ve talked to consultants who work with African nations and it is the lack of this basic fiber infrastructure that is one of the biggest limitations on getting adequate broadband to remote parts of the world.

And so hopefully this early work to bring some connectivity to remote places will be followed up with a program to bring more permanent broadband infrastructure to the places that need it. It’s possible that the need for broadband will soon be ranked right after food, water, and shelter as a necessity for a community. I would expect people around the world to want broadband and to then push their governments into making it a priority. I don’t even know how well we’ll do at getting fiber to every region of our own country, and the poorer parts of the world face a monumental task over the coming decades to satisfy the desire for connectivity. But when people want something badly enough they generally find a way to get it, and so I think we are only a few years away from a time when most of the people on the planet will be clamoring for good Internet access.

 

Should the FCC Regulate OTT Video?

A funny thing happened on the way to making it easier for OTT video providers to get content. Some of the biggest potential providers of online content, like Amazon, Apple, and Microsoft, have told the FCC that they don’t think that online video companies ought to be regulated as cable companies.

Of course, these few large companies don’t represent everybody who is interested in providing online video, and so they are just one faction among several in this proceeding. For example, FilmOn X recently got a court order allowing it to buy video as a regulated video provider, and in the past Aereo had asked for the same thing.

A lot of the issue boils down to companies that want to put local networks online or else deliver them in some non-traditional way, as was being done by FilmOn X and Aereo. These kinds of providers are seeking the ability to force the local network stations to negotiate local retransmission agreements with them. Under current law the stations are not required to do so and are, in fact, refusing to do so.

The FCC is in a tough spot here because they don’t have a full quiver of tools at their disposal. The FCC’s hands are very much tied by the various sets of cable laws that have been passed by Congress over the years – the rules that define who is and is not a cable company, and more importantly, the rules and obligations of being a cable company. It will be interesting to see how much the FCC thinks it can stretch those rules to fit the situation of online programming, which was never anticipated in the rules.

I can certainly understand why the large companies mentioned above don’t want to be cable companies, because there are pages and pages of rules about what that means; the FCC is unlikely to be able to apply just a few of those rules to a company without also imposing the ones that these companies don’t want.

For example, the current cable law defines required tiers of service. Cable companies must have at least a basic and an expanded basic tier, and those are very narrowly defined. A basic tier includes all of the ‘must-carry’ local networks and the expanded basic carries all of the things we think of as cable channels.

I think what the FCC has in mind is a set of rules that require programmers to negotiate in good faith with online companies that want to buy their content. Certainly any company that wants to put content online today is completely at the mercy of programmers saying yes or no to giving them the content they want to carry. And there is nothing stopping the programmers from changing their minds if they see an OTT company becoming more successful than they like.

So I would think that even Amazon, Apple, and Microsoft would like the ability to force the programmers to negotiate with them, but they obviously don’t want other FCC rules that they think will come along with that ability. Of course, these are very large companies with deep pockets and one has to imagine that they get a fairly decent hearing when they talk to programmers. The FCC’s real concern is not these giant companies, but companies smaller than them who don’t have any ability to force the programmers to even talk to them. I think the FCC believes that if online content is to be successful that there ought to be widespread competition and innovation online, not just content provided by a few giant tech companies along with other huge companies like Verizon.

Today the programmers have most of the power in the industry. They are making a huge amount of money from the mega-subscription models where all of their content is forced upon US cable companies. And they have no reason to become more flexible, because most of them are seeing gigantic growth in selling content overseas, so they have no real incentive to upset the apple cart in the US market.

If online content is to become a vibrant alternative and not just be expensive packages foisted on the public by a small group of huge corporations, then something has to change. I just don’t know how much the FCC can do realistically considering how they are hamstrung by the current cable laws.

Is the Universal Translator Right Around the Corner?

We all love a race. There is something about seeing somebody strive to win that gets our blood stirring. But there is one big race going on now that it’s likely you’ve never heard of, which is the race to develop deep learning.

Deep learning is a specialized field of Artificial Intelligence research that looks to teach computers to learn by structuring them to mimic the neurons in the neocortex, that portion of our brain that does all of the thinking. The field has been around for decades, with limited success, and has needed faster computers to make any real headway.

The race is between a few firms that are working to be the best in the field. Microsoft and Google have gone back and forth with public announcements of breakthroughs, while other companies like Facebook and China’s Baidu are keeping their results quieter. It’s definitely a race, because breakthroughs are always compared to the other competitors.

The current public race deals with pattern recognition. The various teams are trying to get a computer to identify various objects in a defined data set of millions of pictures. In September Google announced that it had the best results on this test, and just this month Microsoft said their computer not only beat Google’s result but did better than people do on the test.

All of the companies involved readily admit that their results are still far below what a human can do naturally in the real world, but they have made huge strides. One of the best-known demonstrations was done last summer by Google, which had its computer look at over 10 million YouTube videos and asked it to identify cats. The computer did twice as well as any previous attempt, which was particularly impressive since the Google team had not defined for the computer ahead of time what a cat was.

There are some deep learning techniques in IBM’s Watson computer that beat the best champs in Jeopardy. Watson is currently being groomed to help doctors make diagnoses, particularly in the third world where there is a huge lack of doctors. IBM has also started selling time on the machine to anybody and there is no telling all of the ways it is now being used.

Probably the most interesting current research is in teaching computers to learn on their own. This is done today by enabling multiple layers of ‘neurons’. The first layer learns a basic concept, like recognizing somebody speaking the letter S. The outputs of several first-layer neurons are fed to the second layer of neurons, which can then recognize more complex patterns. This process is repeated until the computer is able to recognize complex sounds.
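
Here is a minimal sketch of that layering idea – a toy two-layer network with random, made-up weights rather than anything the research teams actually trained. The point is only that each layer transforms the outputs of the layer below it, so later layers can respond to more complex patterns.

```python
import numpy as np

def layer(inputs, weights, biases):
    """One layer of 'neurons': a weighted sum of the inputs followed by a simple
    nonlinearity (ReLU), so that stacked layers can build up complexity."""
    return np.maximum(0, inputs @ weights + biases)

rng = np.random.default_rng(0)

# Toy input: 8 numbers standing in for a tiny slice of audio or pixels.
x = rng.normal(size=8)

# Made-up weights; in real deep learning these are learned from data.
w1, b1 = rng.normal(size=(8, 16)), np.zeros(16)
w2, b2 = rng.normal(size=(16, 4)), np.zeros(4)

hidden = layer(x, w1, b1)       # first layer: simple patterns (like the letter S sound)
output = layer(hidden, w2, b2)  # second layer: combinations of those patterns

print("hidden layer activations:", hidden.round(2))
print("output layer activations:", output.round(2))
```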

The computers being used for this research are already getting impressive. The Google computer that did well learning to recognize cats had a billion connections. That computer was 70% better at recognizing objects than any prior computer. For now, the breakthroughs in the field are being accomplished by applying brute computing force, and the cat-test computer used over 16,000 computer processors – something that only a company like Google or Microsoft has available.

Computer scientists all agree that we are probably still a few decades away from a time when computers can actually learn and think on their own. We need a few more turns of Moore’s Law for the speed of computers to increase and the size of the processors to decrease. But that does not mean that there are not a lot of current real life applications that can benefit from the current generation of deep learning computers.

There are real-world benefits of the research today. For instance, Google has used this research to improve the speech recognition in Android smartphones. But what is even more exciting is where this research is headed for the future. Sergey Brin says that his ultimate goal is to build a benign version of HAL from 2001: A Space Odyssey. It’s likely to take multiple approaches in addition to deep learning to get to such a computer.

But long before a HAL-like computer we could have some very useful real-world applications from deep learning. For instance, computers could monitor complex machines like electric generators and predict problems before they occur. They could be used to monitor traffic patterns and change traffic lights in real time to eliminate traffic jams. They could be used to enable self-driving cars. They could produce a universal translator that will let people who speak different languages converse in real time. In fact, in October 2014, Microsoft researcher Rick Rashid gave a lecture in China. The deep learning computer transcribed his spoken lecture into written text with a 7% error rate. It then translated the text into Chinese and spoke it to the crowd while simulating his voice. It seems like with deep learning we are not far away from having that universal translator promised to us by science fiction.
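
The lecture demonstration is essentially a three-stage pipeline. Here is a minimal sketch of that flow; all three functions are hypothetical placeholders for what are in reality large deep learning models, so this shows only how the stages chain together.

```python
def speech_to_text(audio):
    """Hypothetical speech recognizer: audio in, transcript out."""
    raise NotImplementedError("stand-in for a deep learning speech recognizer")

def translate(text, target_language):
    """Hypothetical machine translation step."""
    raise NotImplementedError("stand-in for a translation model")

def text_to_speech(text, voice_profile):
    """Hypothetical synthesizer that can mimic the original speaker's voice."""
    raise NotImplementedError("stand-in for a voice-simulating synthesizer")

def universal_translator(audio, target_language, voice_profile):
    # The same three steps as the lecture demo: transcribe, translate, speak.
    transcript = speech_to_text(audio)
    translated = translate(transcript, target_language)
    return text_to_speech(translated, voice_profile)
```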

Who Will Own the Internet of Things?

Yesterday’s blog talked about the current Internet that is falling under the control of a handful of large corporations – Apple, Amazon, Facebook, Google, and Microsoft. This leads me to ask if the upcoming Internet of Things is also going to be owned by a handful of companies.

This is not an idle question because it has become clear lately that you don’t necessarily own a connected device even though you might pay for it. As an example, there was recently an article in the New York Times reporting that a car company was able to disable cars for which the owners were late in making payments. The idea of Ford or General Motors still having access to the brains of your vehicle even after you buy it is unsettling. It’s even more unsettling to think that access is in the hands of somebody at your local car dealer. Imagine them turning off your car when you are far away from home or when you have a car full of kids. But even worse to me is that if somebody can turn off your car, then somebody else can hack it.

The car companies are able to do this because they maintain access to the root directory of your car’s computer system. Whether you financed the car with them or paid cash, they still maintain a backdoor that lets them get into your car’s computer remotely. They might use this backdoor to disable the vehicle, as in this example, or to download software upgrades. But the fact is, as long as they have that ability, they still have some degree of control over your car and you. You have to ask if you truly own your own car. As an aside, most people don’t realize that almost all cars today also contain a black box, much like the recorders in airplanes, that records a lot of data about your car and your specific driving habits. It records data on how fast you drive and whether you are wearing your seatbelt – and this data is available to the car companies.

Perhaps the car is an extreme example because a car is probably the most complicated device that you own. But it’s likely that every IoT device is going to have the same backdoor access to its root directory, meaning that the company that made the device is going to have a way to get in. This means every smartphone, appliance, thermostat, door lock, burglar alarm, and security camera can be controlled to some degree by somebody else. It makes you seriously ask the question of whether you entirely own any smart device.

Over time it is likely that the IoT industry will consolidate and that there will be a handful of companies that control the vast majority of IoT devices, just like the big five companies control a lot of the Internet. And it might even be the same companies. Certainly Apple, Google, and Microsoft are all making a big play for the IoT.

I’ve written before about the lack of security in most IoT devices. My prediction is that it’s going to take a few spectacular failures and security breaches of IoT devices before the companies that make them pay real attention to security. But even if they tighten up every security hole, if Google or Apple maintains backdoor access to your devices, then they are not truly secure.

I think that eventually there will be a market for devices that a buyer can control and that don’t keep backdoor access. It certainly would be possible to set up an IoT network that doesn’t communicate outside the home, but where devices all report to a master controller within the home. But it’s going to take people asking for such devices to create the market for them.
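
As a rough sketch of what such a home-only setup could look like, here is a hypothetical controller built on plain sockets rather than any particular smart-home product. Devices on the home network send their status to a controller inside the house, and nothing is ever sent to an outside server.

```python
import json
import socketserver

# A minimal local-only IoT controller: it listens on the home LAN, records
# device reports, and makes no outbound connections - so there is no vendor backdoor.
DEVICE_STATE = {}

class DeviceReportHandler(socketserver.StreamRequestHandler):
    def handle(self):
        line = self.rfile.readline().decode("utf-8").strip()
        report = json.loads(line)  # e.g. {"device": "thermostat", "temp_f": 68}
        DEVICE_STATE[report["device"]] = report
        self.wfile.write(b"ok\n")
        print("Current home state:", DEVICE_STATE)

if __name__ == "__main__":
    # Replace with the controller's own LAN address; 8900 is an arbitrary port.
    with socketserver.TCPServer(("192.168.1.10", 8900), DeviceReportHandler) as server:
        server.serve_forever()
```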

If people are happy to have Apple or Google spy on them in their homes then those companies will be glad to do it. One of the first things that crossed my mind when Google bought Nest was that Google was going to be able to start tracking a lot of behavior about people inside their homes. They will know when you wake and sleep and how you move around the home. That may not sound important to you, but every smart device you add to your house will report something else about you. With the way that the big companies mine big data, the more they know about you the better they can profile you and the easier it is for them to sell to you. I don’t really want Google to know my sleep habits and when I go to the bathroom. To be truthful, it sounds creepy.

Do the Cloud Guys Get It?

I just read an article this week that cites five reasons why cloud computing isn’t taking off as fast as the companies selling the solution were hoping for. The reasons unfortunately make me feel like the cloud industry folks are out of touch with the real world. This is not an uncommon phenomenon in that high-tech industries are run by innovators. Innovators often don’t understand why the rest of the world doesn’t see things with the same clarity as they do.

Following are the five reasons cited in the article about why cloud computing is not selling as fast as hoped, with my observations after each point.

The Organization. Organizations are often structured in a way that does not make the shift to the cloud easy. For instance, IT shops are often organized into separate groups for compute, network, and storage.

Changes that affect people are never easy for companies. Going to the cloud is supposed to save a lot of labor costs for larger companies, but that is not necessarily the case for smaller companies. But even larger companies are going to take a while to make sure they are not walking off a cliff. Every old-timer like me remembers a few examples where major technology conversions went poorly, and nobody wants to be the one blamed if a big conversion goes wrong.

Security. Companies are afraid that the cloud is not going to be as safe as keeping all of their data in-house.

Everything I have read says that, if done right, the cloud can be very secure. However, the fear is that not every conversion is going to be done right. You can place your bets with me now, but sometime in the next year or two there is going to be a major ugly headline about a company that converted to the cloud poorly, which led to a major breach of customer records. The problem is that everybody is human and not every cloud company is going to do every conversion perfectly.

Legacy Applications. Cloud companies want you to get rid of legacy systems and upgrade to applications made for the cloud.

This is where cloud companies just don’t get it. First, almost every company uses a few legacy systems that are not upgradable and for which there is no cloud equivalent. Every industry has some quirky homegrown programs and applications that are important for its core business. When you tell a company to kill every legacy application, most of them are going to rightfully be scared that this will create more problems than it solves.

Second, nobody wants to be automatically upgraded to the latest and greatest software. It’s a company nightmare to come in on a Monday and find out that the cloud provider has upgraded everybody to some new version of Microsoft Office that is full of bugs, that everybody hates, and that brings productivity to a halt. Companies keep legacy systems because they work. I recently wrote about the huge number of computers still running Windows XP. That is how the real world works.

Legacy Processes. In addition to legacy software, companies have many legacy processes that they don’t want to change.

Honestly, this is arrogant. Companies buy software to make what they do easier. To think that you need to change all of your processes to match the software is amazingly out of touch with what most companies are looking for. Where a cloud salesman sees a ‘legacy system’, most companies see something that works well and that took them years to get the way they want it.

Regulatory Compliance. Companies are worried that the cloud is going to violate regulatory requirements. This is especially true for industries such as the financial, health, and power industries.

This is obviously a case-by-case issue, but if you are in one of the heavily regulated industries then this has to be a significant concern.

I hope this doesn’t make me sound anti-cloud, because I am not. But I completely understand why many companies are going to take their time considering this kind of huge change. No product should ever be sold without taking the customers into consideration. When I see articles like this I feel annoyed, because the gist of the article is, “Why won’t these dumb customers see that what I have is good for them?” That is never a good way to get people to buy what you are selling.