Government’s Role in Broadband

In a discussion in one of the many industry forums where people chat and exchange ideas, I received some pushback the other day after I said, “It’s government’s role to bring broadband to everybody.” The primary pushback came from commercial ISPs who said it should be the private sector’s role to bring broadband and that government should not be competing with private companies.

I’d like to expand on what I meant by my statement. Let me start by discussing how the private sector has done with bringing broadband. Most cities and suburbs these days have broadband of at least 25 Mbps download, and in many places faster broadband is available. And there is now a long list of rural places with fiber networks built by telcos and cooperatives that offer speeds faster than in most cities.

But even as speeds are increasing, most of us yearn for more competition and better prices. Yet only 4% of census blocks in the country have at least 3 ISPs offering 25 Mbps or greater. However, competition aside, a lot of people in the US have good broadband available if they are willing to pay the price.

But the latest FCC statistics still show that 29% of all census blocks in the US don’t have an ISP that offers 25 Mbps or faster broadband, and 53% of census blocks don’t have an ISP that offers 100 Mbps. The FCC still estimates that there are about 20 million people in the country with no landline broadband option at all. And while there continues to be construction of rural fiber in some parts of rural America, the number of homes with no landline connection will grow quickly if AT&T and Verizon abandon rural copper.

To be fair, WISPs have stepped into a lot of the areas with no landline broadband and are offering an alternative with wireless broadband. And some WISPs do a great job and I know folks who have wireless connections of 30 – 50 Mbps. In today’s world that’s great broadband for the average household. But there are also a whole lot of WISPs that don’t deliver decent broadband. I hear stories every day of people with WISP connections under 5 Mbps, often with high latency. Many WISPs are either unable or unwilling to provide the backhaul needed to deploy the wireless technology to its full capability.

My overall conclusion is that commercial providers have failed a lot of Americans. Many of these customers are rural where it’s hard to serve. Or some just happen to live where there is no competitive ISP in the market. But we clearly are now a nation of broadband haves and have-nots. And because the demand for both speeds and total download capability continues to grow at a rapid pace, I believe the areas with poor broadband are far worse off compared to their urban neighbors than they were five years ago. Where a 3 Mbps connection might have satisfied a household five years ago, today it is inadequate and within a few years it will feel as slow as dial-up to somebody trying to take part in the modern digital world.

And so I still stand by my statement that it’s the government’s role to get broadband to those parts of the country that don’t have it. That doesn’t necessarily mean that government needs to build and own broadband networks, although local communities willing to spend their tax money to get broadband ought to be allowed to make that choice.

But government needs to play the same role with broadband that it played a century ago with electricity. When electricity exploded onto the scene, commercial companies sprang up all over the country to build electric networks to fill the demand. But it soon became obvious that huge geographic swaths of the country were not going to get electricity. And so government stepped in to fill the void. Numerous municipal electric companies were started in small towns where no commercial company filled the gap. At the federal level the US government funded big projects like the Tennessee Valley Authority to generate power for rural areas. And the feds developed low-interest loan programs that helped cooperatives and small rural electric companies afford to build the needed infrastructure. Without these programs there would probably still be significant parts of the country without electric grids (and such places would be abandoned ghost towns).

I see government needing to fill the same role with broadband. I think it’s becoming clear that communities with no broadband are going to wither and fade away over time. These areas won’t have many jobs and kids will go elsewhere when they finish high school. Home prices will tumble and areas without broadband will begin a long slow decay.

I don’t have any particular preference as to how the government helps to fill the broadband gap as long as whatever they do works. There could be more significant grant/loan programs. There could be tax incentives or other ways to promote private money to build broadband (more things like New Market Tax Credits). And perhaps cities can demand that ISPs serve every home in a city in the same way that used to be done with cable TV franchises. But I am convinced that if government doesn’t step into this void that nobody will. We already have all the market evidence we need to understand that there are a whole lot of places in the country where no commercial entity is willing or able to serve today – and that gap is widening, not shrinking.

FCC Takes Shot at Zero-rating

In perhaps the most futile government decision I’ve ever seen from the FCC, the agency ruled last week that AT&T was in violation of net neutrality rules with its zero-rated Sponsored Data plans. AT&T allows customers who buy DirecTV Now to stream the service over cellphones without counting the data against wireless data caps. The agency didn’t take any action against AT&T as a result of the decision, and probably will not.

I call the gesture futile since it’s clear that the new Republican-led FCC is going to either gut or weaken the net neutrality rules. There are even those in Congress talking about disbanding the FCC and spreading its responsibilities elsewhere – something that would require a new Telecommunications Act. So it’s obvious that this decision doesn’t have any teeth.

I guess it’s not hard to understand that the current FCC staff wants to make one last stand for its signature policy. I don’t think anything in the history of the agency has gotten as much positive public feedback. It’s still hard to imagine that over a million people filed formal comments in the FCC net neutrality docket.

And yet, as popular as the concept of net neutrality is – the concept of keeping an open internet – there probably is not a worse place to take a stand than zero-rating. This is a practice that the public is going to love. For the first time people will have the ability to watch video on cellphones without worrying about the stingy cellular data caps. I’m probably showing my age, and my eyes have a hard time enjoying video on a small cellphone screen. But after seeing my daughter watching video on her Apple smartwatch I am positive that this is going to be popular.

But zero-rating is eventually going to lead to exactly what net neutrality was designed to protect against. In this case AT&T is promoting its own programming with DirecTV Now, and perhaps there is nothing wrong with that. But it won’t be long until other content providers are going to be willing to pay AT&T to also carry their video on cellphones outside the data caps. And that will eventually create an environment where only the content of the biggest and richest companies will be sponsored.

The only video that will be available on cellphones will be from companies with the ability to pay AT&T to carry it. And that eventually means the end of innovation and of new start-ups. It means that Google and Facebook and Netflix will be available because they can afford to pay to sponsor their content, but that the next generations of companies that would naturally have supplanted them, as is inevitable in the tech world, will never get started. You can’t become popular if nobody watches you.

On the flip side, zero-rating is going to point out the hypocrisy of the current cellular data prices. A customer will be able to watch 100 gigabytes of DirecTV Now with no extra fees, but will quickly figure out that watching other video would have cost them $1,000 at the current price of $10 for each gigabyte of extra download. The supposed reason for the high data prices is to protect the cellular network – but it will quickly become clear that the high prices are only about profits. So perhaps this will begin the process of lowering the outrageous cost of cellular data – which is clearly the most expensive data in the world today.
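To put a rough number on that comparison, here is a back-of-the-envelope sketch in Python. The $10-per-gigabyte overage price is the figure cited above; the 100 GB of monthly viewing is just an illustrative assumption.

```python
# Back-of-the-envelope comparison of zero-rated vs. metered video viewing.
# The $10-per-extra-gigabyte overage price is the figure cited above; the
# 100 GB monthly viewing total is an illustrative assumption.

OVERAGE_PRICE_PER_GB = 10.00   # dollars per gigabyte over the cap (cited above)
MONTHLY_VIDEO_GB = 100         # gigabytes of video watched in a month (assumed)

zero_rated_cost = 0.00                                  # sponsored data doesn't count
metered_cost = MONTHLY_VIDEO_GB * OVERAGE_PRICE_PER_GB  # same viewing, non-sponsored

print(f"Zero-rated DirecTV Now video: ${zero_rated_cost:,.2f}")
print(f"Same viewing from another provider: ${metered_cost:,.2f}")
```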

The Battle for IoT Connectivity

There is a major battle brewing for control of the connections that tie the Internet of Things together. Today, in the early stage of home IoT, most devices are being connected using WiFi. But there is going to be a huge push to make those connections through 5G cellular instead.

I saw an article this week where Qualcomm said that they were excited about 5G and that it would be a world-changing technology. The part of 5G that they are most excited about is the possibility of using 5G to connect IoT devices together. Qualcomm’s CEO Stephen Mollenkopf talked about 5G at the recent CES show and described a future where 5G is used for live-streaming virtual reality, autonomous cars and connected cities where street lamps are networked together.

Of course, Qualcomm and the cellular vendors are most interested in the potential for making money using 5G technology. Qualcomm wants to make the hundreds of millions of chips they envision in a 5G connected world. And Verizon and AT&T want to sell data connections to all of the 5G connected devices. It’s an interesting vision of the world. Some of that vision makes sense and 5G is the obvious way to connect outdoors for things like street lights.

But it’s not clear to me at this early stage of IoT that either 5G or WiFi is the obvious winner of the battle for IoT connectivity in the home. There are pros and cons for each technology.

WiFi has an upper hand today because it’s already in almost every home. People are comfortable using WiFi because it doesn’t cost anything extra to connect an IoT device. But WiFi has some natural limitations that might make it a harder choice in the future if our homes get filled with IoT devices. As I’ve discussed in some recent blogs, the way that WiFi shares data can be a big problem when there is a lot of steady and continuous demand for the bandwidth. WiFi is probably a great choice for IoT devices that only occasionally need to make a connection or that need short-burst connections to share information.

But the WiFi standard doesn’t include quality of service or any prioritization of which connections are the most important. WiFi instead always does its best to share bandwidth, regardless of the number of devices that are asking to connect to it. When a WiFi router gets multiple demands it shuts down for a short period and then tries to reinitiate connections again. If too many devices are demanding connections, a WiFi system goes into a mode of continuously stopping and restarting and none of the devices get a satisfactory connection. Even if there is enough bandwidth in the network to handle most of the requests, too many simultaneous requests simply blows the brains out of WiFi. The consequence of this is that having a lot of small and inconsequential connections can ruin the important connections like video streaming or gaming.
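Here is a deliberately simplified sketch of that dynamic in Python. It is not a model of real 802.11 behavior (the per-turn overhead figure and the device counts are assumptions), but it shows how sharing airtime equally, with no prioritization, lets a crowd of minor devices starve the one connection that matters.

```python
# A deliberately simplified airtime model, NOT real 802.11 behavior: the
# router gives every device that wants to talk an equal turn, and a chunk
# of every turn is lost to contention/management overhead (an assumed 20%).
# With enough chatty devices, the connection that actually matters is starved.

def video_share_of_airtime(num_other_devices: int,
                           overhead_per_turn: float = 0.2) -> float:
    """Fraction of useful airtime left for a single video stream."""
    total_devices = num_other_devices + 1          # the video stream plus the rest
    useful_fraction = 1.0 - overhead_per_turn      # airtime not burned on overhead
    return useful_fraction / total_devices

ROUTER_CAPACITY_MBPS = 100.0                       # assumed router capacity

for others in (0, 5, 20, 60):
    share = video_share_of_airtime(others)
    print(f"{others:2d} other devices -> video stream gets about "
          f"{share * ROUTER_CAPACITY_MBPS:5.1f} Mbps of a "
          f"{ROUTER_CAPACITY_MBPS:.0f} Mbps router")
```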

But cellular data is also not an automatic answer. Certainly today there is no way to cope with IoT using 4G cellular networks. Each cell site has a limited number of connections. A great example of this is that I often talk to a buddy of mine in DC while he commutes, and he usually loses his cellular signal when crossing between Maryland and Virginia. This is due to there not being enough cellular connections available in the limited area of the American Legion Bridge. 5G will supposedly solve this problem and promises to expand the number of connections from a cell site by a factor of 50 or so – meaning that there will be a lot more possible connections. But you still have to wonder if that will be sufficient in a world where every IoT device wants a connection. LG just announced that every appliance it sells will now come with an IoT connection, and I imagine this will soon be true of all appliances, toys and almost anything else you buy in the future that has any electronics.

Of bigger concern to me is that 5G connections are not going to be free. With WiFi, once I’ve bought my home broadband connection I can add devices at will (until I overload my router). But I think Verizon and AT&T are excited about IoT because they want to charge a small monthly fee for every device you connect through them. It may not be a lot – perhaps a dollar per device per month – but the next thing you know every home will be sending them an additional $50 or more per month to keep IoT devices connected. It’s no wonder they are salivating at the possibility. And it’s no wonder that the big cable companies are talking about buying T-Mobile.
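To see how quickly that could add up, here is a small sketch. The dollar-per-device-per-month fee is the guess from the paragraph above, and the device counts for a connected household are purely illustrative assumptions.

```python
# How quickly a per-device cellular IoT fee could add up. The $1 per device
# per month figure is the guess from the paragraph above; the device counts
# for a hypothetical connected household are illustrative assumptions.

FEE_PER_DEVICE_PER_MONTH = 1.00  # dollars, assumed

assumed_devices = {
    "appliances": 8,
    "thermostats and sensors": 15,
    "cameras and doorbells": 4,
    "lights and switches": 20,
    "toys, wearables and misc": 5,
}

total_devices = sum(assumed_devices.values())
monthly_fee = total_devices * FEE_PER_DEVICE_PER_MONTH

print(f"{total_devices} connected devices -> ${monthly_fee:.2f} per month, "
      f"or ${monthly_fee * 12:.2f} per year on top of the regular cellular bill")
```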

I’m also concerned from a security perspective about sending the data from all of my IoT devices to the same core routers at Verizon or AT&T. Since it’s likely that the recent privacy rules for broadband will be overturned or weakened, I am concerned about having one company know so much about me. If I use a WiFi network my feeds will still go out through my data ISP, but if I’m concerned about security I can encrypt my network and make it harder for them to know what I’m doing. That is going to be impossible to do with a cellular connection.

But one thing is for sure: this is going to be a huge battle. And it’s likely to be fought behind the scenes as the cellular companies try to make deals with device manufacturers to use 5G instead of WiFi. WiFi has the early lead today and it’s still going to be a while until there are functional 5G cellular networks. But once those are in place it’s going to be a war worth watching.

Advertising and Technology

Most industry folks know the name Tim Wu. He’s the Columbia professor who coined the phrase ‘net neutrality’ and who has been an advisor to the FCC on telecom issues. He’s written a new book, The Attention Merchants, about the history of advertising that culminates with the advertising we see today on the Internet.

Wu specifically looks at what he calls the attention industry – that part of advertising that works hard to get people’s attention, as opposed to the part of the industry that produces advertising copy and materials. Wu pegs the start of the attention industry to the New York Sun, a scandal sheet started in 1833 that built up circulation by selling papers at a low price and filling them with sensational (and untrue) content. The Sun was the forerunner of publications like today’s National Enquirer and of the many websites that peddle fake news. But the model worked, and Benjamin Day of the Sun created an industry and made a lot of money selling advertisements.

Wu paints a picture of advertising in terms of its place in the larger society. He observes that advertising has always come in cycles. At times advertisers grow too pervasive and annoying, and society then reacts, either by tuning the ads out or by forcing an end to the industry’s largest abuses.

Wu traces the history of the attention industry through the years. He looks at the development of billboards and at state-sponsored propaganda machines like the British in WW1 and the Germans in WW2. He ends by looking at Google, Facebook, Instagram and others as the latest manifestation of industries built on the concept of gaining people’s attention.

The attention industry has changed along with technology, and so Wu’s story is as much about technology as advertising. From the early days of sensational newspapers the attention industry morphed over the years to adapt to the new technologies of radio, television and now the Internet.

Probably the heyday of advertising was during the 1950s in the US when as many as two-thirds of the nation tuned in to watch the same shows like I Love Lucy or the Ed Sullivan Show. Advertisers for those shows caught the attention of the whole nation at the same time. But that uniformity of a huge market fragmented over time with the advent of cable TV and multiple channels for people to watch.

Today we are in the process of carrying advertising to the ultimate degree where ads are being aimed at specific people. The attention industry is spending a lot of money today on big data and on building profiles for each of us that are then sold to specific advertisers.

But we are already seeing the pushback from this effort. At the end of 2016 it was reported that over 70 million Americans were using ad blockers. These ad blockers don’t stop all ads and the advertising industry is working hard to do an end run around them. But it’s clear that, as at times in the past, advertisers have gone too far for many people. In the early days of the tabloids there was a lot of advertising for fake health products and other dangerous items, and the government stepped in and stopped the worst of the practices. When TV ads became too pervasive and repetitive people invested in TiVo and DVRs in order to be able to skip the ads.

And the same is happening with online advertising. I am probably a good example and I rarely notice online advertising any more. I use an ad blocker to block a lot of it. I refuse to use web sites that are too annoying with pop-ups or other ads. And over time I’ve trained my eyes to just not notice online ads on web pages and on social media streams. And so advertisers are wasting their money on me, as they are on many people who have grown immune to the new forms of online ads.

But advertisers wouldn’t be going through the effort if it didn’t work. Obviously online advertising is bringing tangible results or companies wouldn’t be moving the majority of their ad dollars from other media to the web. Wu’s book is a fascinating read that puts today’s advertising into perspective – it’s mostly the attention industry doing the same things it has always done, wrapped in a new medium. The technology may be new, but this is still the same attention industry that was trying to gain eyeballs in the 1800s. If nothing else, the book reminds us that the goal of the industry is to get your attention – and that you have a choice to participate or not.

Is the Lifeline Program in Danger?

One has to ask if the FCC’s Lifeline program is in trouble. First, within the last month 80 carriers have asked to be relieved from participating in the program. The list includes many of the largest ISPs and telcos, among them AT&T, Verizon, CenturyLink, Charter, Cox, Frontier, Fairpoint, Windstream and Cincinnati Bell. There are a lot of wireless companies on the list, and it’s easier to understand why they might not want to participate. The rest of the list is filled out with smaller telcos and some fiber overbuilders.

These companies easily represent more than half of all the telephone customers and a significant percentage of data customers in the country. If these companies don’t participate in the Lifeline program then it’s not going to be available to a large portion of the country. The purpose of the Lifeline program is to provide assistance to low-income households to buy telecom services. It’s hard to see how the program can be sustained with such a reduced participation.

Originally the program was used only to subsidize landline telephone service. For the last few years it has also been available to cover cellphone service as an alternative to a landline. The most recent changes expand the definition to also allow the plan to cover broadband connections, with the caveat that only one service can be subsidized per household. While it’s not yet official, one can foresee that ultimately the program will be used to subsidize only broadband and that coverage of telephones will eventually disappear.

The coverage that the new Lifeline provides for cellular data is a mystery. The plan covers 3G data connections and allows providers to cap such service at a measly 500 megabytes of total data per month. This seems to be in direct opposition to the stated goal of the Lifeline program of providing support to close the ‘homework gap’.

I also foresee larger problems looming for the entire Universal Service Fund program, of which Lifeline is one component. It’s already clear that the new administration is going to remake the FCC to be a weaker regulatory body. At a minimum the new FCC will reverse many of the regulations affecting the large telcos and cable companies.

But there is a bigger threat in that many in Congress have been calling for years for the abolishment of the FCC and for scattering its responsibilities to other parts of the government. This could be done during budget appropriations or by including it in a new Telecom Act.

The opponents of the FCC in Congress have also specifically railed against the Lifeline program for years. There was a huge furor a few years ago about the so-called Obamaphones, where carriers were supposedly giving smartphones to customers, all paid for by the government. It turns out those claims were false. The only plan that was anywhere close to this was a plan from SafeLink Wireless, which used the Lifeline subsidy to provide eligible low-income households with a cheap flip-phone that came with one hour of free calling plus voice mail. This very minimalist telephone connection gave people a way to have a phone number to use while hunting for a job and to connect with social services. But there were no Lifeline plans that provided smartphones to low-income households as portrayed by many opponents of the Lifeline program.

But rightly or wrongly, there are now a number of opponents of the Lifeline program, and that means the plan could be a target for those trying to trim back the FCC. It’s going to be a lot harder to defend the Lifeline program if none of the major carriers are participating in it. There certainly will be a lot of changes made in the coming year at the FCC, and my gut tells me that programs like Lifeline could be on the chopping block if the big players in the industry don’t support them. If nothing else, the big ISPs would prefer that the funds allocated to Lifeline today be repurposed for something that benefits them more directly.

Note:  In an interesting development the FCC just rejected a petition from the NTCA and the WTA that asked that small companies be excused from some provisions of the Lifeline order. The FCC ruling basically says that any small company that is receiving high cost support and that offers a standalone data product must accept requests from customers who want to participate in the Lifeline Program. I am sure that this is not the end of the story and there will be more back and forth on the issue.

International Regulatory Trends to Watch

It’s always interesting to watch regulatory decisions around the world. The same basic telecom issues face every country, and over time, decisions on how to solve common problems tend to spread around the world. A good example is net neutrality, which is currently being debated in a number of countries. I found the following three recent rulings to be of particular interest:

U.K.’s Snooper Charter. The British parliament approved a law that gives police and intelligence agencies bulk surveillance powers that are more far-reaching than anything else in the West. Titled the Investigatory Powers Bill (but commonly referred to as the Snooper Charter), the new law grants sweeping new powers to law enforcement agencies.

For example, it provides a legal basis to hack computers and mobile phones from afar, without a warrant, to see what they contain. It gives law enforcement broad powers to access and retain emails, telephone calls, texts and web browsing activity. The law also requires telecom companies to decrypt messages and devices where ‘practicable’, a fight that we are also seeing in the US.

Critics say the law doesn’t have any checks on government power and are challenging it in the European Court of Justice. They argue that the rights afforded to governments – and the demands made on large ISPs to cooperate – go too far and violate a number of existing privacy laws.

New European Privacy Laws. At the other end of the scale, the EU passed new privacy rules referred to as the EU Data Protection Regulation. The purpose of the regulation is to harmonize the various data protection rules already in place in the EU’s member countries. As a regulation, the new rules supersede any country-specific privacy rules. Specific rules are being created to implement the regulation, and they are expected to take effect in May 2018.

The underlying principles of the new rules are that customers have a fundamental right to privacy and will retain effective control over their personal data. This means that anybody wanting to use your data must explain in a clear manner how that data is to be used, and must obtain permission from each person to use it.

The new rules also create something novel – the ability of citizens to access and review the data that companies hold about them. The vision is that this will create one set of data about each person and will allow for ‘data portability’, such that the same data about somebody can be used everywhere they give permission. The new rules also strengthen the right to be forgotten, something that was created a few years ago as the result of a lawsuit.

The EU believes that these laws will foster public trust in online activities and will ease the growing concern that companies are using information about people in ways they don’t approve of or understand. The rules are clearly going to cause a lot of changes for ISPs and for online providers like social networks. It will be interesting to see how they cope with the changes.

Australian Piracy Ruling. A court in Australia is being asked by content owners to develop specific procedures to require that ISPs and search engines block piracy sites on the web that are being used to illegally distribute music, video and other content.

The specific case involves an attempt by several content providers, including Universal Music, Sony, and Warner Music, to stop ISPs and search engines from providing access to Kickass Torrents. That particular service subsequently went out of business when its owner was arrested at the request of US authorities and its various torrent sites were shut down. But the case continued, since numerous other piracy sites spring up all the time.

The interesting point in the case is the presumption by the state and the content owners that it is a willful violation of copyright laws for a search engine or ISP to allow access to piracy sites. A ruling siding with that idea would effectively mean that ISPs and search engines are in criminal violation of the law every time they allow a customer to gain access to a piracy site.

In the US we handle this issue by allowing content providers to ask that illegal content be ‘taken down’ from the web. The only time an ISP can have any liability in the US is if it ignores takedown requests – something that Cox was accused of in 2016.

Replacing Legacy Telephony

I remember sitting in on an industry panel sometime in the mid-2000s and hearing a discussion about how VoIP was going to sweep the business world and how the PBX would be obsolete within just a few years. I took this with a grain of salt since those on the panel were mostly VoIP vendors or sellers. But still, the general consensus in the industry was that the new would quickly replace the old.

And yet here we are more than ten years later and there are still thriving PBX providers serving businesses. I have a client who sells PBXs and resells PRIs to serve them, and he has grown his business steadily every year for the last decade. He still made a significant number of new PRI sales in 2016, many of them on two- and three-year contracts going forward.

There are several reasons the PBX industry is still going so strong. The first is that a few years after I saw that panel, SIP came along as a big improvement to PBXs. SIP allows a PBX to mimic some of the best features of VoIP, which erased much of the sharp contrast between old PBX phones and phones with newer features.

But SIP alone doesn’t account for the continuing popularity of PBXs. As I mentioned earlier, there is still a thriving PBX industry that uses traditional PRIs rather than SIP trunks and that still supports the same old telephones businesses have been using for decades.

There are a number of reasons why PBXs are still being used by businesses. Probably the first among them is captured by the old adage, “if it ain’t broke, don’t fix it.” Offices full of information workers have probably upgraded phones during the last decade. But a lot of businesses operate in a different environment. There is no particular urgency to change a phone system that’s operating in a warehouse or a lumber yard or a milking barn. As long as such phones work well, the easiest path for the business operator is to keep renewing the phone system rather than make a change.

I remember, back when CCG still operated several offices, that we were constantly bombarded by vendors wanting to upgrade our key systems. But it’s easy in a business office to defer such upgrades because they are disruptive and time consuming. My employees universally told me that they didn’t want to learn a new phone system – and so we never made an upgrade.

It seems like a lot of businesses also don’t want to make the capital spending decision to change technologies. Tearing out a PBX and installing new phones can mean a big one-time fee. Even if this is financed over time, businesses seem to put off making that decision until their old system stops meeting their needs.

And while most businesses still have office phones, you can’t discount the influence of cellphones on the workplace. It is a daily occurrence for me to be talking to somebody who is on a cellphone while they are sitting at their desk next to their office phone. Businesses are often not ready to get rid of office phones, but a lot of their business is handled with cellphones. This is only going to be bolstered by the widespread introduction of HD voice where the quality of cellphone calls promises to meet or exceed the quality of landline calls. Perhaps the real transition we will see in a few years will be businesses finally walking away from office phones altogether.

This all has a material impact upon those who sell phone service to businesses. I know a number of ISPs, for example, that only offer VoIP and they are often flummoxed by the number of businesses that are not interested in what they have to sell.

A lot of ISPs don’t want to hear the market’s message – that a lot of businesses are still happy with legacy voice products. My clients that do the best in selling voice to businesses still operate their own voice switches and offer a variety of products, including IP Centrex, PRIs, SIP trunks and traditional POTS lines. A seller who offers both the old and the new technologies is always offering something that people want to buy. I think a lot of us get wrapped up in the idea that newer is always better, and it often takes customers to tell us that isn’t always true.

Is 2017 the Year of AI?

Artificial Intelligence is making enormous leaps and in 2016 produced several results that were unimaginable a few years ago. The year started with Google’s AlphaGo beating the world champion at Go. The year ended with an announcement by Google that its translation software using artificial intelligence had achieved the same level of competency as human translators.

This has all come about through applying the new techniques of machine learning. The computers are not yet intelligent in any sense of being able to pass the Turing test (a computer being able to simulate human conversation), but the new learning software builds up competency in specific fields of endeavor using trial and error, in much the same manner as people learn something new.

It is this persistent trial and error that lets software like that used at Facebook get eerily good at identifying people and places in photographs. The software can examine every photograph posted to Facebook or the open internet. It then tries to guess what it is seeing, and its guess is compared to what the photograph really shows. Over time, the computer makes more and more refined guesses and its success rate climbs. It ‘learns’, and in a relatively short period of time it can pick up a very specific competence.
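As a rough illustration of that guess-compare-adjust loop, here is a toy sketch in Python. It is nothing like Facebook’s or Google’s actual systems (the data is made up and the model is a bare-bones perceptron), but it shows the basic shape of learning by trial and error.

```python
# A toy guess-compare-adjust loop: a bare-bones perceptron on made-up data.
# This is NOT how Facebook or Google actually build their systems; it only
# illustrates the shape of learning by repeated trial and error.

import random

random.seed(0)

def make_point():
    """Made-up example: two features, labeled 1 when their sum exceeds 1."""
    f1, f2 = random.random(), random.random()
    return (f1, f2), int(f1 + f2 > 1.0)

data = [make_point() for _ in range(200)]

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

for epoch in range(10):
    wrong = 0
    for (f1, f2), label in data:
        guess = int(weights[0] * f1 + weights[1] * f2 + bias > 0)  # guess
        error = label - guess                                      # compare to the answer
        if error:                                                  # adjust and try again
            weights[0] += learning_rate * error * f1
            weights[1] += learning_rate * error * f2
            bias += learning_rate * error
            wrong += 1
    print(f"pass {epoch + 1:2d}: {wrong:3d} wrong guesses out of {len(data)}")
```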

2017 might be the year when we finally start seeing real changes in the world due to this machine learning. Up until now, each of the amazing things that AI has been able to do (such as beating the Go champion) was the result of an effort by a team aimed at a specific goal. But the main purpose of those various feats was to see just how far AI could be pushed in terms of competency.

But this might be the year when AI computing power goes commercial. Google has developed a cloud product they are calling the Google Brain Team that is going to make Google’s AI software available to others. Companies of all sorts are going to be able, for the first time, to apply AI techniques to what they do for a living.

And it’s hard to even imagine what this is going to mean. You can look at the example of Google Translate to see what is possible. That service has been around for a decade and was more of an amusement than a real tool. It was great for translating individual words or short phrases but could not handle the complicated nuances of whole sentences. But within a short time after the Google Brain Team software was applied to the existing product, it leaped forward in translation competence. The software can now accurately translate sentences between eight languages and is working to extend that to over one hundred languages. Language experts have already predicted that this is likely to put a lot of human translators out of business. But it will also make it easier to converse and do business across languages. We are on the cusp of having a universal human translator through the application of machine learning.

Now companies in many industries will unleash AI on their processes. If AI can figure out how to play Go at a championship level then it can learn a whole lot of other things that could be of great commercial importance. Perhaps it can be used to figure out the fastest way to create vaccines for new viruses. There are firms on Wall Street that have the goal of using AI to completely replace human analysts. It could be used to streamline manufacturing processes to make it cheaper to make almost anything.

The scientists and engineers working on Google Translate said that AI improved their product more in a few months than they had been able to improve it in over a decade. Picture that same kind of improvement popping up in every industry and within just a few years we could be looking at a different world. A lot of companies have already figured out that they need to deploy AI techniques or fall behind competitors that use them. We will be seeing a gold rush in AI and I can’t wait to see what it means for our daily lives.

Is 25 Mbps Still Broadband?

Yesterday’s blog noted that the CRTC in Canada (their version of the FCC) adopted a new definition of broadband at 50 Mbps download and 10 Mbps upload. They also said that broadband is now a ‘basic telecommunications service’, meaning that everybody in Canada ought to have access to broadband.

It’s not unusual for a government to define broadband. Two years ago at the end of January 2015 the FCC defined US broadband to be connections that are at least 25 Mbps down and 3 Mbps up. That was a huge increase over the older US standard of 4 Mbps down and 1 Mbps up. The Canadian action raises several questions for me. First, what does it mean when a government defines broadband? Second, once broadband has been defined, how often should the definition be reexamined to see if it’s still adequate?

There is no easy answer to the second question. There is almost nothing in our lives that is growing as rapidly as the demand for broadband. Since the early 80s the demand for speeds and total downloads has doubled approximately every three years. According to Cisco that growth curve might be slowing a tad and perhaps will now double every four years. But this means that any definition of broadband is going to become quickly obsolete. I am not surprised to see somebody talking about twice the speed of the US broadband definition even though it’s only been two years since it was set. And this means that if a government is going to define broadband at a specific speed, then they are almost committed to reexamining that speed on a regular basis.
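As a quick sketch of what those doubling rates do to a fixed definition: the three- and four-year doubling periods come from the paragraph above, the 25 Mbps starting point is the 2015 FCC definition, and the projection is purely illustrative.

```python
# What "demand doubles every three to four years" does to a fixed broadband
# definition. The doubling periods come from the paragraph above; the 25 Mbps
# starting point is the 2015 FCC definition. Purely illustrative.

def projected_demand(start_mbps: float, years: float, doubling_years: float) -> float:
    """Compound the starting speed by one doubling per `doubling_years`."""
    return start_mbps * 2 ** (years / doubling_years)

START_MBPS = 25.0  # the 2015 FCC definition

for years in (2, 5, 10):
    low = projected_demand(START_MBPS, years, doubling_years=4)   # slower (Cisco) curve
    high = projected_demand(START_MBPS, years, doubling_years=3)  # historical curve
    print(f"{years:2d} years on: an equivalent definition would be "
          f"roughly {low:.0f}-{high:.0f} Mbps")
```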

The policy question of why a government should define broadband is a harder question. Certainly there was a wide range of positions on the topic among the five FCC Commissioners at the time the new definition was set. Commissioner Jessica Rosenworcel thought the definition ought to be 100 Mbps download. Her reasoning was that what the FCC was setting was a goal and that striving high might prompt providers to meet the higher standard. At the other end of the spectrum, Commissioner Michael O’Rielly hated the 25/3 Mbps speed. He said that most cable companies already offered faster speeds and he saw no social benefit from defining broadband to be faster than what people without access to cable networks can get.

It’s clear that even after the FCC set the 25/3 definition of broadband they weren’t quite sure what it meant. Soon after they approved the 25/3 standard they went on to approve the CAF II plan that is handing out $19 billion to large telcos to improve rural broadband to speeds of at least 10/1 Mbps. The FCC did not feel that their own definition of broadband constrained them from funding something slower.

The main way that the FCC uses their definition of broadband is to count the number of homes that are above or below the broadband threshold. To the FCC this is the litmus test by which they measure the state of broadband in the country. Interestingly, there isn’t a lot of difference for this accounting if the speed is set at 25 Mbps or 50 Mbps. Generally the technologies that can offer 25 Mbps can offer even faster speeds. If the official broadband speed is only for this litmus test then there wouldn’t be much difference between using 25 Mbps and 50 Mbps for that test.

The US has not undertaken any material efforts in the last few years to achieve faster broadband speeds. In fact, it can easily be argued that the CAF II program is doing the opposite and is making it harder to justify building fiber in rural areas. So, at least in the US, the broadband speed definition is not much more than a number. It certainly presents a target to shoot at for those parts of the country that don’t have broadband, but the government has done almost nothing with that definition to promote faster broadband.

Governments of all sizes have programs to build fiber. Portugal is doing this with tax incentives. The State of Minnesota is doing this with matching grants. And numerous cities have put bond money behind local fiber networks. We’ll have to watch to see if the Canadian government puts any more teeth into their attempt to define broadband. The fact that they also deemed broadband to be a basic service that should be available to all might mean that the government will take steps to build more broadband networks. But setting a broadband definition is a far cry from building fiber infrastructure and it will be interesting to see if setting the 50/10 Mbps goal equates to government involvement in building fiber.

Is there a Right to Broadband?

The CRTC in Canada (their version of the FCC) just took a step that is bound to reopen the discussion of the best definition of broadband – they redefined broadband as 50 Mbps down and 10 Mbps up. But they went even further and said that broadband is now a ‘basic telecommunications service’, meaning that everybody in the country ought to have access to broadband. In today’s and tomorrow’s blogs I will look at the two issues raised by the CRTC: whether there should be a right to broadband, and the role of governments in defining broadband.

Has broadband grown to become a ‘right’? I put the word in quotes because even I don’t think that is what the CRTC did. What they did was declare that the government of Canada officially blesses the idea that their citizens ought to have access to broadband. Over time that decree should prompt other parts of the Canadian government to help make that happen.

But even the CRTC does not think that every home in the country should be wired with fiber. I’ve traveled north of the Arctic Circle and there are plenty of remote places there that are not connected to the electric grid. And there are remote homes on top of mountains and deep in the woods where homeowners have purposefully withdrawn from civilization. The CRTC is not guaranteeing broadband to such places.

But the CRTC has made a strong statement recognizing the importance of broadband. This is not without precedent. During the last century the US government made similar statements about the right of Americans to electricity. The government then went on to create programs to realize that right. This meant the formation of the Rural Utilities Service to provide funding to create rural electric grids, and it meant the creation of government-sponsored electric generation such as the Tennessee Valley Authority.

These government programs worked well and the vast majority of US homes were connected to the electric grid within a few decades. The investments made in these programs paid the US government back many times over by bringing numerous communities into the modern world. The electrification of America was probably the most profitable undertaking the US government has ever made.

The action taken by the CRTC will be an empty gesture unless it pushes the Canadian government to take the steps needed to get broadband everywhere. The latest statistics show that nearly 20% of homes there, mostly rural, don’t have access to landline broadband. That’s an even larger percentage of homes than in the US and probably reflects the vast rural stretches in central and northern Canada.

The US government has not made the same kind of firm statement as the one just issued by the CRTC, but we’ve clearly taken official steps to promote broadband. There were billions poured into building middle-mile fiber in rural America with the stimulus grants. And the $19 billion CAF II fund is promoting broadband for areas that have none – although it’s still puzzling to understand the band-aid approach of a program that is pouring money into infrastructure that doesn’t even meet the FCC’s definition of broadband. But the official premise of the CAF II program is that US homes deserve broadband.

The CRTC statement is more pointed because it was paired with a new and higher definition of broadband at 50/10 Mbps. The only technologies that can meet those speeds are cable company HFC networks and fiber – and nobody is building new cable networks. The CRTC has really taken a position that rural Canada ought to have fiber.

It will be interesting to see over the next few years how the rest of the Canadian government responds to this gesture. Without funding this could be nothing more than a lofty goal. But it could also be treated as a government imperative, much like what happened in the US with electricity. And that could drive funding and initiatives that would bring broadband to all of Canada – something we here in the US ought to be watching and emulating.