FCC to Fund Experimental Broadband Projects

Last Friday the FCC voted to establish a $100 million fund that will provide one-time grants for what they are calling experimental rural broadband projects. The announcement was at a high level and we’ll have to wait a bit to see the specifics. Grant filings will be due within 90 days of the release of the final rules. The FCC hopes to award all of the grants by the end of this year.

Here are a few things that can be gleaned from the high-level release:

  • $75 million of the funds will be awarded based upon the ability of projects to deliver at least 25 Mbps download and 5 Mbps upload. $15 million will be awarded to projects in high cost areas that must deliver at least 10 Mbps download and 1 Mbps upload. And $10 million will be awarded for extremely high cost areas that also must deliver at least 10 Mbps download and 1 Mbps upload.
  • The awards will be made based upon a comparison of the requested amount per passing to the cost calculated for that area by the CAF cost model. Those willing to take the least amount of money compared to those costs should win the grants (see the sketch after this list).
  • Those willing to serve Indian tribal areas will get a 25% bidding credit.
  • There will be some sort of cap set on the amount of any given award.
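
The detailed rules aren't out yet, but the mechanics described above suggest a simple ranking: compare each applicant's requested amount per passing to the CAF model cost for the same area, give tribal-area projects a 25% bidding credit, and fund the lowest ratios first until the money or the per-award cap runs out. Here is a minimal sketch of that logic; the field names and the exact way the credit and cap are applied are my assumptions, not anything the FCC has published.

```python
# Hypothetical sketch of how the experimental broadband grants might be ranked.
# Field names, the treatment of the 25% tribal bidding credit, and the per-award
# cap are assumptions for illustration only -- the FCC has not released the rules.

def rank_applications(applications, total_fund, per_award_cap):
    """Fund the applicants asking for the least money relative to the CAF model cost."""
    scored = []
    for app in applications:
        # Ratio of requested support to the CAF cost model estimate for the area.
        ratio = app["requested_per_passing"] / app["caf_model_cost_per_passing"]
        # Assumed: the 25% bidding credit makes a tribal-area bid look 25% cheaper.
        if app.get("tribal_area"):
            ratio *= 0.75
        scored.append((ratio, app))

    awards = []
    remaining = total_fund
    for ratio, app in sorted(scored, key=lambda pair: pair[0]):
        award = min(app["requested_total"], per_award_cap, remaining)
        if award <= 0:
            break
        awards.append({"applicant": app["name"], "award": award, "score": round(ratio, 3)})
        remaining -= award
    return awards

example = [
    {"name": "Co-op A", "requested_per_passing": 400, "caf_model_cost_per_passing": 2000,
     "requested_total": 800_000, "tribal_area": False},
    {"name": "Tribal WISP", "requested_per_passing": 600, "caf_model_cost_per_passing": 2400,
     "requested_total": 1_200_000, "tribal_area": True},
]
print(rank_applications(example, total_fund=1_500_000, per_award_cap=1_000_000))
```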

Here are a few of the things I can glean from these rules so far:

  • Nobody should expect these grants to pay for most or all of a broadband project. This is not going to be anything like the Stimulus grants. Some of those grants were for large amounts and paid for a substantial amount of construction. While $100 million may sound like a lot of money, expect the FCC to spread this money to a lot of projects in a lot of states to cover a wide range of technologies.
  • You are not going to get a lot per customer. These grants are going to reward those who can pay for most of the cost of a project on their own. So think of these grants as providing a little bit of assistance to construct a broadband project.
  • You better already have a project in mind, because 90 days is not a lot of time to understand the filing rules and to complete a grant application.
  • Most of this money, except for the portion set aside for very rural places, requires delivering at least 25 Mbps download to all of the customers in the proposed service areas. That is going to eliminate a lot of potential projects like point-to-point WiFi networks or even cellular 4G projects that might deliver that much bandwidth to a few customers close to a tower but a lot less bandwidth to those further away. This makes the grant a real technological challenge since there aren’t a lot of technologies other than fiber or a coaxial cable network that can deliver that much bandwidth to everybody. But the awards are not going to be nearly big enough to fund building fiber. The FCC is walking a tightrope between wanting high bandwidth and also expecting filers to pay for most of the project. This combination is going to be hard for a lot of filers to meet.
  • Like any federal monies, these grants will come with a lot of paperwork. It’s one thing to have accepted the paperwork burden for taking tens of millions of dollars in stimulus grants, but you need to consider whether that paperwork burden will be worth it for getting a $1 million or smaller grant.
  • Because of the quick nature of the process and because the awards will only fund a portion of construction costs, these grants are going to favor incumbent providers who can submit projects that are already in their capital budgets. Since incumbents were already planning on paying the full cost of a project it will be easy for them to just take a little assistance.

The rules should be issued soon, and once we see the detailed rules we will understand more about who should and should not bother with this process. There were over 1,000 entities that expressed an interest in these grants at the beginning of the year. I am going to guess that a significant percentage of them will find that either they or their projects won’t qualify for the grants.

I hope my caution about the grant process doesn’t come across as too negative, but I have learned from experience that free money is not really free and usually comes with a lot of strings. So before spending money to file these grant requests make sure that you qualify and that you are requesting substantially less than the CAF cost model projects for your study area. There are plenty of folks out there who will be glad to charge you for filing a grant request even if you have little or no chance of winning one.

It’s Not Your Father’s 911

Since its inception in the late 1960s and widespread deployment in the 1980s we have all come to take 911 for granted. No matter where you are in the US, if you dial 911 you expect to be connected to an emergency center for police and fire service.

All telephone providers in the US are required by FCC rules to connect a caller to the proper 911 center based upon their location. These 911 centers are referred to as Public Safety Answering Points (PSAPs). The PSAPs are operated by counties, cities or regional authorities. PSAPs vary in sophistication from large 911 centers in major cities that have hundreds of operators, to small rural 911 systems where the calls get routed to the local sheriff’s office and involve almost no technology.

I have recently seen two different sets of headlines that put 911 back in the news. The first was about the emergence of text-to-911, where texting to 911 will connect you to the closest PSAP. This grew out of the movement to create next generation 911, which has the goal of allowing voice, text or video emergency calls from any communications device using IP. Historically 911 has been limited to voice calls made from landlines or cellphones, except for calls made by deaf and hearing-impaired people using teletypes and other similar devices.

In 2013 the largest wireless carriers began testing text-to-911 with some large urban PSAPs. People can text 911 and be connected to their PSAP, which will then respond to them via text. The genesis of this upgrade is to provide 911 access from anywhere for the hearing-impaired, who today can only do this using special devices. But texting to 911 would be available to anybody.

The FCC issued a policy statement in January of this year that said that every wireless carrier should provide text-to-911 service, although it is not yet mandatory. The FCC also mandated that the wireless carriers send back a ‘bounce-back’ message to the sender if they are unable to complete the call to a PSAP. Without that return message a person would assume that the text message successfully got to 911. Both the FCC and the PSAPs encourage people to make the call by voice whenever possible and only use text when there is no other alternative.
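
The bounce-back requirement is easy to picture as a simple fallback rule: if the carrier cannot hand the text off to a text-capable PSAP for the sender's location, it must send an automated reply telling the sender to place a voice call instead. Here is a rough sketch of that logic; the function and field names are hypothetical, not any carrier's actual implementation.

```python
# Illustrative sketch of the text-to-911 bounce-back rule. The function and
# field names are hypothetical; this is not any carrier's actual implementation.

BOUNCE_BACK = ("Text-to-911 is not available in your area. "
               "Please make a voice call to 911.")

def route_text_to_911(message, psap_directory, deliver_to_psap, send_sms):
    """Try to deliver a 911 text; send a bounce-back if no text-capable PSAP serves the area."""
    psap = psap_directory.get(message["approx_location"])
    if psap is not None and psap.get("accepts_text"):
        deliver_to_psap(psap, message)
        return True
    # Mandated bounce-back so the sender knows the text never reached 911.
    send_sms(message["sender"], BOUNCE_BACK)
    return False

if __name__ == "__main__":
    directory = {"dc-ward-3": {"name": "DC PSAP", "accepts_text": True}}
    route_text_to_911(
        {"sender": "+12025550100", "approx_location": "dc-ward-3", "body": "help"},
        directory,
        deliver_to_psap=lambda psap, msg: print("delivered to", psap["name"]),
        send_sms=lambda number, text: print("bounce-back to", number),
    )
```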

There was also some more disturbing recent news about 911. The FCC recently released data showing that in 2013, 90% of the 911 calls in Washington DC that originated from wireless devices did not deliver precise location data for the caller. This is a bit chilling for several reasons. First, a large percentage of the population now only uses cell phones, and so this is their only way to call 911. And secondly, not everybody knows their address when they call. If the caller is a child or a tourist they might not have any idea of their location. And sometimes callers who are in danger call 911 and can’t speak, relying on 911 knowing where they are.

Mobile 911 makes a determination of a caller’s location using triangulation. This means that the 911 PSAP is able to ping back to the cell phone and see the location of several nearby cell towers. By looking at the relative strengths of those ping-backs they were historically able to pinpoint a caller within 50 to 100 feet, often closer.
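
As a rough illustration of the idea, a location estimate can be made by weighting the known positions of the responding towers by their relative signal strengths, so the strongest tower pulls the estimate closest to itself. This is only a simplified weighted-centroid sketch of the concept with made-up numbers, not the actual E911 location algorithm.

```python
# Simplified weighted-centroid sketch of locating a caller from cell-tower
# signal strengths. Real E911 location uses far more sophisticated methods;
# the coordinates and strengths below are made up for illustration.

def estimate_location(tower_pings):
    """tower_pings: list of (latitude, longitude, signal_strength) tuples."""
    total_weight = sum(strength for _, _, strength in tower_pings)
    lat = sum(latitude * strength for latitude, _, strength in tower_pings) / total_weight
    lon = sum(longitude * strength for _, longitude, strength in tower_pings) / total_weight
    return lat, lon

# Three towers hear the phone; the strongest one dominates the estimate.
pings = [(38.900, -77.030, 0.7), (38.905, -77.020, 0.2), (38.895, -77.025, 0.1)]
print(estimate_location(pings))
```

The problem described in the next paragraph is visible even in this toy example: if one of those "towers" is actually a booster or mini-cell sitting far from the caller, its coordinates drag the estimate in the wrong direction.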

But this system was established when there was only a handful of cell towers in the world, and so it became fairly easy to locate a caller. But today there is a proliferation of cellular transmitting devices in the network, particularly in urban areas. The cell phone companies are reported to be installing millions of mini-cell sites this year – sites which act as cell towers, but for a much smaller area like part of a stadium, a busy street or on a commuter bridge. Additionally, anybody is able to buy a cell phone booster. These are essentially cellular repeaters with a short range and are used to bring strong outside signals to the inside of a building.

But to a PSAP all of these devices look enough like cell towers to cause confusion in the triangulation algorithms. And so, where mobile 911 was once fairly accurate, it is now a jumbled mess in urban areas where there is a proliferation of transmitting devices. I am sure there is a technological solution to this, but it is going to require the cell phone carriers to start over and find a way to locate a cell phone in an urban environment.

While the headline of 9 out of 10 calls being inaccurate sounds scary, the reality is that the lack of precise data didn’t affect most of these calls. Otherwise we’d be seeing a lot of shocking headlines. Remember that in most cases the 911 PSAP speaks to the caller, who can verify their location. And even when the mobile 911 system is not entirely accurate it probably gets close enough to be effective most of the time. But I remember the headlines in the early 1980s when several people having heart attacks died because they called 911 from a payphone and didn’t know their location. I hope this latest report prompts the FCC and the cell companies to find a solution before we go back to those headlines again.

Securing the IoT

I read this week that a security company was able to hack into somebody’s WiFi network through a smart LED light bulb. This obviously points out a flaw in that particular brand of lights, but it highlights a much larger issue. How are we going to secure the Internet of Things?

Estimates vary widely, but by 2020 there are expected to be many billions of internet-connected devices. Many of these devices will have been designed for a given purpose, but many will just be things to which we have added a cheap sensor. The vast majority of IoT devices will have little or no protection against online attacks. So the IoT is going to create billions of insecure endpoints in all of our networks.

Many of these devices will have very tiny and primitive processors incapable of any of the kinds of security protection we use today, such as anti-virus and anti-malware software. The devices are going to be built by a multitude of different companies and have a wide array of capabilities and vulnerabilities. And unless some standard is developed, the devices will use a multitude of different protocols such as Zigbee, WebHooks and IoT6. And perhaps we don’t even want one standard because that could make the whole world susceptible to a single effective virus.

Unlike today’s viruses, which can cause computer and network problems, an IoT attack will be able to inflict real world damage. The obvious examples always used include attacks against insulin pumps or pacemakers. But damage can come from anywhere when hackers can address cars, heating and air conditioning systems, water systems and door locks.

There haven’t been many publicized hacks against IoT devices to date, mostly because hackers have so many other lucrative places to attack us. But I just read this month how hackers gained access to some electric company grids through their smart metering systems. It won’t take a lot of playing inside an electric network to cause real harm to generators, substations or transformers.

There are some proposed solutions to some of these problems. For example, smartphones and tablets today have elements like a SIM or a Trusted Execution Environment (TEE) that are secure cores out of the reach of hackers. In those devices we can load credentials into those safe environments, which allows us to create a true identity for the device that can be validated by the rest of the network. The more sophisticated IoT devices could deploy the same sort of technology.

We can do something similar for ‘dumber’ devices using something akin to the chip and pin systems that are used in Europe to protect credit cards. Those technologies allow banks to establish the identity of the person trying to complete a transaction.
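
A minimal sketch of the kind of identity check this enables: a secret provisioned into the device's secure element (or a chip-and-PIN style smart card) answers a random challenge from the network with a keyed hash, and the network verifies the answer against its copy of the credential. This is a generic challenge-response illustration, not any specific IoT standard.

```python
import hashlib
import hmac
import os

# Generic challenge-response identity check, illustrating how a credential held
# in a secure element could be validated. Not a specific IoT or banking standard.

def device_response(secure_element_key: bytes, challenge: bytes) -> bytes:
    """Computed inside the device's protected environment."""
    return hmac.new(secure_element_key, challenge, hashlib.sha256).digest()

def network_validates(registered_key: bytes, challenge: bytes, response: bytes) -> bool:
    """The network checks the response against its registered copy of the credential."""
    expected = hmac.new(registered_key, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key = os.urandom(32)        # credential provisioned at manufacture
challenge = os.urandom(16)  # fresh random challenge from the network
print(network_validates(key, challenge, device_response(key, challenge)))  # True
```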

But getting protection into the IoT is going to require both standards and compliance by manufacturers. Consider the American banking system, which is not implementing the same safety standards as Europe, even while tens of millions of credit card numbers and PINs have been stolen multiple times. Having security standards is only going to work if the people making IoT devices spend the money to implement the technology. There will be plenty of manufacturers who will cut corners on security to save money.

Further, many of the IoT technologies being contemplated involve swarms of very small sensors connected in clouds and used to monitor our environment. Whether these are deployed in our blood stream to look for signs of illness, or deployed in nature to watch endangered species, these devices will be so tiny that it will be impossible to add sophisticated software security.

Obviously solutions will be developed because the public will demand it. But before that happens I envision some dramatic and very public cases where hacking kills people or causes other real damage. This doesn’t have to be anything sophisticated. Turning toasters on to full heat overnight might burn down houses. Locking everybody in a town out of their houses by hacking into smart door locks would wake up the public to the dangers of the IoT. I fear we are in for some bumpy roads before we figure out how to do this right.

Funding Broadband to Schools

FCC Chairman Tom Wheeler recently announced that he was going to try to funnel $5 billion over the next five years to upgrade the bandwidth inside schools. He is proposing to do this as part of the E-Rate program by changing the things that fund will pay for. This raises the question of how and why the FCC has the funds available to pay for this sort of expenditure. So following is a bit of a primer on the E-Rate program.

The E-Rate program is part of the federal Universal Service Fund (USF). Since the 1960s there has been a universal service fund, administered back then by AT&T, that provided funds to support the expansion of telephony into rural places. This was funded by a small surcharge on interstate long distance calls.

But when AT&T was broken up the funding for USF got murky, and so Congress changed the administration of this funding as part of the Telecommunications Act of 1996. In the Act, Congress directed the FCC to create the Universal Service Administrative Company (USAC), which collects and disburses funding for universal service. USAC is still funded based upon interstate telecommunications. Most telcos pass these fees along to customers on their bills, although this is not mandatory and companies could absorb the cost themselves.

The Universal Service Fund has four major components. The High Cost Fund pays to support providing telephony in rural places where the costs per customer are much higher than average. The Low Income Fund pays for some of the installation fees and also some of the monthly fees for telephone lines for low income subscribers. The Rural Health Care Fund provides subsidies for rural tele-health and tele-medicine. And the Schools and Libraries Fund provides subsidies for Internet access and telecommunications services and infrastructure for schools and libraries.

The USF is undergoing major change due to changes ordered by the FCC in Docket 11-171 in 2011. The fund is being renamed the Connect America Fund, and the High Cost Fund is being redirected to support rural broadband instead of rural telephony.

The Schools and Libraries Fund is commonly referred to as the E-Rate program. This program was started in 1997 when the FCC determined that, “telecommunications services, Internet access, and internal connections, including installation and maintenance,” for schools and libraries were eligible for discounted rates. The E-Rate program will pay from 20% to 90% of the cost of eligible services based upon the poverty and the urban/rural nature of the population served by a given school. The Fund pays the neediest schools first and works its way through the list of applicants each year until it runs out of funding.
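
The way the fund works through its applicant list can be sketched as a simple loop: each school gets a discount rate based on its poverty level and rural status, and the fund pays claims starting with the highest discount rate until the annual budget is gone. The discount bands below are loosely patterned on the 20% to 90% range mentioned above, but the exact matrix and the field names here are illustrative stand-ins, not the official E-Rate tables.

```python
# Illustrative sketch of E-Rate style allocation. The discount bands and field
# names are simplified stand-ins, not the official FCC discount matrix.

def discount_rate(poverty_pct, rural):
    """Map a school's poverty level and rural status to a discount rate (assumed bands)."""
    if poverty_pct >= 75:
        return 0.90
    if poverty_pct >= 50:
        return 0.80 if rural else 0.70
    if poverty_pct >= 20:
        return 0.60 if rural else 0.50
    return 0.25 if rural else 0.20

def allocate(applicants, annual_budget):
    """Pay the neediest schools first until the fund runs out."""
    funded = []
    remaining = annual_budget
    # Highest discount rate == neediest school, so fund those first.
    for app in sorted(applicants,
                      key=lambda a: discount_rate(a["poverty_pct"], a["rural"]),
                      reverse=True):
        payment = min(app["eligible_costs"] * discount_rate(app["poverty_pct"], app["rural"]),
                      remaining)
        if payment <= 0:
            break
        funded.append((app["school"], round(payment, 2)))
        remaining -= payment
    return funded

schools = [
    {"school": "Rural Elementary", "poverty_pct": 80, "rural": True,  "eligible_costs": 50_000},
    {"school": "Suburban Middle",  "poverty_pct": 30, "rural": False, "eligible_costs": 120_000},
]
print(allocate(schools, annual_budget=75_000))
```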

What the FCC is doing as part of Chairman Wheeler’s announcement is looking at the definition of what is eligible for reimbursement from the E-Rate program. These definitions haven’t been updated for a long time and the fund still pays for things like pagers and telephone lines (although one has to imagine the payments for pagers must be very tiny).

The FCC now wants to divert some of the fund to help pay for the basic infrastructure at schools to deliver broadband to the classrooms. President Obama has announced a policy goal, referred to as ConnectED, of bringing faster broadband to all of the schools in the country. The plan sets a near-term goal of 100 Mbps connections to schools and a longer-term goal of gigabit speeds.

The FCC is responding to those policy goals with this change in the E-Rate funding. In Docket WC 13-184 the FCC had looked at some of these issues and had noted that there was a challenge in getting bandwidth from the wiring closet of schools into the classrooms. The FCC now wants some of the E-Rate funds to be used to rewire schools or to deploy other technologies that can bring bandwidth to where students can use it. It certainly is important for this fund to keep up with the times. It makes a lot more sense to use these funds to improve bandwidth at schools rather than to continue to pay for telephone service and T1 lines.

CenturyLink and the Cloud

I don’t write many positive articles about the largest US telcos. This is mostly because these are the competitors for most of my ISP clients, but also because the big companies are on the wrong side of issues like net neutrality and privacy. It’s generally pretty easy to find things to dislike about any one of the big carriers.

But I have to say that I am impressed with CenturyLink’s foray into cloud computing. They got into it early and they have carved out a decent market niche. Cloud services are already a huge business and will grow much bigger. I read a recent statistic that says that only about 13% of US corporate data today is stored in the cloud. That leaves a lot of room for industry growth.

The two big giants of the cloud storage industry are Amazon and Google. In fact, Amazon is so large that I have read it has five times more data center capacity today than the next 14 competitors combined, including Google. But I have also read reviews that describe Amazon as the ‘Walmart of cloud storage’. They are cheap – they have lowered cloud data storage prices 42 times since they started. But they are also somewhat generic, which comes from having a suite of products that tries to satisfy everybody.

But companies like CenturyLink and Peak 10 have created a niche in the cloud computing market by offering customized services. For example, Peak 10 has concentrated on the medical and the gaming industries. CenturyLink cut its teeth on providing services to governments and other large businesses.

There are several components to cloud computing – data storage, transport, computing power and software centralization. Amazon has clearly moved ahead of everybody else in storage capacity, but one has to wonder if this is a long-term advantage. It appears that data storage is moving towards being free, or nearly free. Obviously with the upcoming Internet of Things there is going to be more pressure put onto storage capacity, but the dropping prices for data storage are what have led to the repeated Amazon price cuts.

CenturyLink competes much better in the transport arena. Much of their network came through the acquisition of Qwest (itself the product of a merger with US West), which had significant fiber assets throughout the US and the hemisphere. They have continued to expand their fiber since the merger and most of the US is close to a CenturyLink fiber. While transport prices have dropped, particularly on the major intercity routes, transport to smaller markets is still a very lucrative business, and having fiber in those markets gives CenturyLink an advantage in many regions.

Amazon also has the edge today in computing power by virtue of owning so many data center assets. Amazon is not ahead only by virtue of the sheer number of data center computing assets; they have also been working feverishly on building faster and more energy efficient servers and switches. This gives them a temporary market advantage, but these kinds of advantages usually don’t last too long. There is a major industry shift towards software defined networking and this is going to result in cheap data center routers and switches for everybody.

I wrote this blog as an example that it’s possible for a company to reinvent itself. I don’t think anybody has been thought to be stodgier than CenturyLink over the last decade. While Verizon and AT&T have been adding data customers, CenturyLink struggled with old copper. But CenturyLink is now a player in cloud products and they have recently launched credible new business lines, building fiber-to-the-home networks and rolling out their Prism product, which is similar to AT&T’s U-verse. I hold CenturyLink out as an example to my clients. If they are able to take the steps needed to make sure that they will be relevant decades from now, then so can any other ISP.

Threats to the Internet

The Pew Research Center recently issued a report that looks at the various threats faced by the Internet going into the not-too-distant future of 2025. The report was prepared by inviting 12,000 industry experts to opine on the various issues and problems they see with the Internet going forward. While Pew received a wide array of responses they were able to boil the responses into four trends that might threaten the Internet as we know it:

Actions by nation-states to maintain security and political control will lead to more filtering, segmentation and balkanization of the Internet.

We are already starting to see this today. Governments now routinely disrupt the web at times of crisis, as has happened recently in the Middle East. Numerous governments censor the web to some degree, with the most blatant example being China. But many other countries do this to a lesser extent. For example, the major ISPs in Great Britain routinely block content related to pornography, suicide, gambling, violence, weapons and even dating. People can go through a process to opt out of the blocking, but even with the opt-out there is content that is not available in the country.

Trust will evaporate in the wake of revelations about government and corporate surveillance and likely greater surveillance in the future.

This is somewhat related to the first item above, because the revelations related to the NSA spying have led many countries to begin the process of establishing firewalls around the data in their country. This is going to greatly hinder any world-wide cloud products and may even go a long way towards isolating a lot of materials from search engines.

But lack of trust also affects the people using the Internet and many are starting to look at web products and software that will disguise or hide their identity. Further, there is a segment of the population that refuses to use the web due to fear of surveillance. While NSA and government spying grab the headlines, many of the experts are more worried about the data-gathering efforts of large companies like Google and Facebook and the chilling impact that might eventually have on using the Internet.

Commercial pressures affecting everything from Internet architecture to the flow of information will endanger the open structure of online life.

This goes to the heart of the network neutrality battle going on in the US. If the major ISPs here begin giving preferential treatment to some content providers then the open web as we know it starts deteriorating. But the fear of the experts goes much farther than net neutrality. Many of them worry that the web is being consolidated into the control of a small handful of network and content companies in the same manner that has happened to cable companies and the media in the US. They fear that control of the Internet by a handful of corporations will lead to decisions about the web based upon short-term quarterly profits rather than doing what is best for the whole web.

Efforts to fix the TMI (too much information) problem might over-compensate and actually thwart content sharing.

There is already a lot of information on the web and it is already getting harder and harder to find exactly what you are looking for. And the amount of information available is climbing at a dizzying rate. The fear of many experts is that there will be software and companies that filter the web for people, and that these filters are liable to have as much of a negative influence on the web as censorship or other overt blocking of data. Any editing of data starts to look like censorship as soon as it introduces bias.

Making Privacy Work for You

Numerous surveys have shown that the vast majority of people are leery about ISPs recording and sharing data about them. As a small ISP you can take advantage of this sentiment to get a competitive advantage. Most small ISPs compete against the larger ISPs, and we all know that those companies have willingly and directly handed customer data to the NSA.

Probably the biggest step you can take in this area is to look at your data retention practices. I’ve visited clients who retain records of everything their customers have done on-line for the last year or even longer. They generally say this is done for the purposes of being able to trace network problems and also because data storage is cheap.

There are FCC requirements to retain originating telephone calling records for 18 months. That requirement was recently supplemented by a request to keep even more data about calls placed to rural locations for six months. But there are no legal requirements that you track the web sites people visit or retain emails or other web-related records for any length of time.

Think about the message that you can send to your customers if you make it a point not to store customer data for any longer than necessary. You probably want to keep things for a little while for the purpose of investigating network problems and answering customer inquiries. You should make an assessment of how often you need to look backwards at customer data to help determine the right length of time to keep records. You might even be able to justify not recording some things in the first place.

You can’t always erase customer data. Obviously you have to comply with law enforcement subpoenas, but those can only require you to record and turn over data going forward and you are not required to have kept older data. And most ISPs will voluntarily save and track data for customers who request it, such as somebody who is being cyber-harassed. The hardest data to have a policy for are emails. What you might want to consider is that when a customer deletes an email that you also permanently delete it. This puts emails into the control of your customers instead of you. But if they choose to retain emails you obviously must keep them. Otherwise there is very little reason for you to be keeping customer data. If you don’t have a very specific network security reason to store something, then don’t do it.
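
A data retention policy along these lines boils down to a small set of rules: keep call detail records for the 18 months the FCC requires, keep network logs only long enough to troubleshoot problems, and keep nothing else by default. In the sketch below, only the 18-month call record period reflects the regulatory requirement discussed above; the other windows and the record category names are examples an ISP would choose for itself.

```python
from datetime import datetime, timedelta, timezone

# Sketch of a minimal retention policy. Only the ~18-month call record window is
# a regulatory requirement; the other windows and category names are examples.

RETENTION = {
    "call_detail_record": timedelta(days=548),  # roughly 18 months, FCC requirement
    "network_log":        timedelta(days=30),   # long enough to troubleshoot problems
    "web_history":        timedelta(days=0),    # don't keep it at all
}

def expired(record, now=None):
    """Return True if a record is past its retention window and should be purged."""
    now = now or datetime.now(timezone.utc)
    window = RETENTION.get(record["category"], timedelta(days=0))
    return now - record["created"] > window

record = {"category": "network_log",
          "created": datetime.now(timezone.utc) - timedelta(days=45)}
print(expired(record))  # True -- older than the 30-day troubleshooting window
```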

If you are proactive on customer privacy and data retention you should make sure that your customers know about your policies. You can openly pledge to customers that you are taking concrete steps to protect their privacy. Such pledges can be a good marketing tool for you, particularly if your competition can’t or won’t make such promises. So make sure you don’t miss an opportunity to talk about your practices.

You can even go further. There are a number of network tools today that promote privacy and you can educate your customers about them. For example, the web search engine DuckDuckGo is different from other search engines in that it doesn’t save the IP address of somebody doing a search there and it never uses cookies. Contrast this with Google, which uses its search engine to learn as much as it can about people. If somebody routinely searches about diabetes, for example, Google will infer that someone in that household is a diabetic. Over time Google can put together a really detailed profile of your household just based on the things you search for and purchase.

You can go even further and educate people in general about practices which will help them preserve their privacy. Changing to a no-tracking search engine is a good start, but other practices like clearing their cache of cookies or turning off location identifiers are other ways for people to have a lower web profile. So you can create a few educational pages informing people about good web practices. Again, the point is to show that you care about their desire for privacy.

You obviously can’t help your customers if the NSA is somehow still snagging a lot of the data at some later point in the network. But very few people are as worried about the NSA as they are about the big web companies like Google that are gathering a detailed profile about them and then selling this to marketers of all types. So take whatever steps you can to assure your customer that you understand their concerns about privacy and that you are on their side.

More New Technologies

I subscribe to a lot of science and technology feeds and as seems normal these days I saw a number of new technologies in the last few weeks that excite me. As you probably can tell by reading my blog, I am an optimist and I have high hopes that technology can help us overcome some of the challenges facing our planet.

Probably the most promising new technology of the bunch is water-based batteries. Scientists at the University of Southern California have developed a battery that is all-organic, water-based, contains no metals or toxic chemicals and can be entirely synthesized (not using materials mined from the earth).

These are large batteries, the kind that can be used to store power generated by wind or solar energy. The batteries work by using a class of chemicals called quinones that can be dissolved in water. These batteries should be cheaper than current batteries. Right now estimates are that these will cost around $100 per kilowatt hour but will last fifteen years. That’s the same cost as for lead-acid batteries that last for a much shorter time. And these batteries eliminate the toxic waste problem we face with current battery technology.
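
The cost claim is easier to see with some arithmetic: at roughly the same purchase price per kilowatt-hour, a battery that lasts fifteen years costs far less per year of service than one that has to be replaced every few years. The lead-acid lifetime below is an assumption for illustration; the only thing stated above is that it is much shorter.

```python
# Back-of-the-envelope cost-per-year comparison. The $100/kWh figure comes from
# the article; the 5-year lead-acid lifetime is an assumed number for illustration.

def cost_per_kwh_year(price_per_kwh, lifetime_years):
    return price_per_kwh / lifetime_years

organic   = cost_per_kwh_year(100, 15)  # about $6.70 per kWh of capacity per year
lead_acid = cost_per_kwh_year(100, 5)   # about $20 per kWh per year (assumed lifetime)
print(round(organic, 2), round(lead_acid, 2))
```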

Next, scientists at the University of Utah have developed an improvement in solar cell technology that can greatly increase the efficiency for cheap solar technology. Today, the affordable solar cells generally rely on one material to hold down manufacturing costs. For example, the common silicon-based solar cells are cheap to produce but are only about 22% efficient in the conversion of sunlight into energy.

There are more efficient solar cells that combine multiple layers of different materials, each of which captures a different part of the light spectrum. But such cells are expensive to make and are generally only used in critical applications such as military equipment. These scientists have developed a polychromat layer that adds efficiency to existing solar cells; in various applications it has improved cell efficiency by about 16%. Anything that can get more electricity out of low-cost solar cells is an important discovery as we work towards a future where solar production is more cost effective than using fossil fuels.

Next, scientists at NASA’s Ames Research Center have taken a look backwards and developed a vacuum tube that can be used for faster computing. One of the problems with standard CMOS transistors is that they have just about reached their smallest size limit, and they also have an inherent speed limit of a few gigahertz. The NASA scientists have created a nanometer-sized vacuum-channel transistor with speeds up to 460 GHz. That is the same sort of speed that has been postulated for graphene transistors, which have not yet been developed.

At this small size a large number of these tubes could be packed together to create a significantly faster computer core than anything possible today. There are, of course, a lot of obstacles to go from lab to production, but this joins the list of technologies that are being considered as the next-generation replacement for the transistor in computers.

Finally, scientists at the University of Cambridge have devised a way to make a much stronger superconducting magnet. They broke a decades-old record by creating a golf-ball sized crystal that can generate a 17.6 Tesla magnetic field. And this can be done at higher temperatures than previously needed, requiring only minus 320 degrees F compared to the minus 460 degrees F needed by older technologies. These magnets could lead to breakthroughs in electric power generation or could even be the basis for levitating monorails.

We live in a pretty amazing time. It seems like almost every field of research is finding breakthroughs. These four inventions alone could provide for a better world by providing safer and cheaper batteries, better solar power generation, faster computers and better transportation.

How Many Internets Are There?

One of the premises of net neutrality is that we treat all bits the same. But this supposes that the Internet is a ubiquitous network, the same everywhere. It doesn’t take much digging to see that the Internet is not the same everywhere and that there are already some big differences all throughout the on-line world in how and what people use on the Internet.

At the highest level there are three distinct Internets today – the Internet of people, the Internet of servers and the Internet of things. We are all familiar with the Internet of people, but we forget that a huge portion of web traffic is machine-to-machine traffic. And soon, this is going to be swamped by the traffic from the Internet of things. Each of these three Internets coincides with the others only at the most basic level of sharing the same system of IP addresses and similar web protocols. But other than that the three Internets are very different in how they function.

And of course, there is huge diversity within the Internet of people. For example, my Internet and that of my teenage daughter are very different. My Internet is PC-based and largely browser-based and that drives my web experience. Her Internet is smartphone and tablet based and is largely applications-based and she has a drastically different web experience than mine. She doesn’t even use email except very begrudgingly. We do cross paths on Facebook a little where she reluctantly posts a few pictures occasionally to keep the old folks satisfied.

But there are much larger degrees of separation within the Internet of people. For instance, people in the US tend to think that our version of the Internet is the same as what others around the world use. But it’s not. First, there are a handful of countries that have firewalled themselves off from the rest of the world, with the most extreme example being China. There are hordes of Chinese censors who scrub the web there constantly to keep out influences that the government doesn’t approve of.

There are also somewhat different Internets for any country that has a unique language such as Japanese, Norwegian or Bantu. The Internet for each foreign language is unique in that the speakers of that language will have migrated towards a set of websites and services that cater to their language. Over time, the web experience in each country becomes unique and very much separate from the rest of the world.

And then one must consider the new worldwide initiative to store data locally within each country. This is fallout from the NSA spying, where other countries are upset that they are being spied upon by the NSA and other spy agencies. Many countries are considering laws that will require local data to be stored on servers physically located in their own country as a way to be able to build a firewall around such data. This means that the cloud is not going to be a worldwide phenomenon, but rather will be handled differently in each part of the world.

There is also an international push by different governments to change some of the basic rules for the Internet locally, and if this occurs then the communications between different global parts of the Internet will not be as seamless as they are today. There is even a movement among some countries that want to have their own naming conventions, leading to the possibility that there could be more than one site in the world with the same name (although with different IP addresses).

Until now the Internet has been a fairly coherent network with the US clearly driving the direction of the web, both by being the maker of the rules and by virtue of having spawned Facebook, YouTube, Google, Yahoo and a host of other world-wide companies. But as the Internet becomes more regional and data is kept locally, the influence of these firms will diminish in various parts of the world. Which is just what much of the rest of the world is hoping for.

Musings on the Aereo Shutdown

I was traveling last week when the Supreme Court ruled on the Aereo case, and that gave me a chance to read a lot of reactions to the ruling before I wrote about it. The Court said that the Aereo business plan was basically a gimmick and that they had to shut down. I will admit that I always thought the same thing and that it’s hard to build a business based upon a loophole. It’s too easy for your competition to attack the loophole and such arbitrage opportunities are rarely permanent.

But in reading articles about the industry reaction to the ruling I noticed that the executives from the major networks reacted in absolute glee over the shutdown and hailed it as a major victory. And frankly, I think they made some huge mistakes with the way they handled Aereo from the beginning. I think what they failed to realize is that Aereo was serving the niche of the market that gets their entertainment using laptops, tablets and cellphones. You know the industry doesn’t get this because they refer to these devices as the ‘second screen’, failing to recognize that for millions of viewers this is the primary screen.

The industry doesn’t seem to understand the demographics that Aereo was serving. It’s easy to think that this is just the cord cutters, but it’s also most of the kids in the country. Let me talk about the cord cutters first since I am one of them. I don’t think I have watched a network TV show live, other than football games, in over a decade. Perhaps that makes me unusual, but my life no longer includes network TV shows, commercials or watching TV at a fixed time. They lost me many years ago. I know that the industry keeps their eyes on cord cutters like me, but we are still a small enough segment of the market that they are not yet truly worried about us.

But the big mistake they are making is with the kids. The majority of kids do not like and will not tolerate the linear nature of broadcast TV. I’ve written many times how kids prefer YouTube, Vine and numerous other sources of entertainment and that they are largely walking away from traditional programming. The various industry surveys may show that kids still watch traditional TV, but only a few of the more detailed surveys show the truth, which is that even while they might be watching the big box TV, they are also watching something else on their ‘second screen’.

TV executives are looking at the Aereo decision as an indicator that they are free to keep on doing business as usual. And that is a huge mistake that is going to bite them hard one of these days. Instead of suing Aereo they should have purchased them a few years back and then embraced this as the way to get to the younger demographic.

Aereo didn’t just bring network TV to the second screens. It also came with a built-in DVR letting people with other screens watch what they want where they want and when they want. Aereo was the industry’s best chance at staying relevant with the kids under 16. True, Aereo wasn’t paying retransmission fees, but it was doing the networks a favor by bringing their content to people who obviously don’t want to watch TV on the big box. I guess the TV executives assume that these people will now flood back to their TVs, but I am going to guess that most of them are going to be more like me and they will decide that if they can’t have Aereo they will just forego network TV. There are enough alternatives to keep us second screen people happy whether we are old cord cutters or teenagers.

I really find it hard to believe that the industry and Aereo couldn’t have worked out a compromise solution. It certainly would have been relatively inexpensive for the networks to buy Aereo, and if that wasn’t possible then they certainly could have worked out some kind of reduced retransmission fees that could have kept the company functioning. Instead, the networks gleefully poked a lot of cord cutters and second screen viewers in the eye and I really don’t think most of them are coming back.

I don’t think that the networks understand how easy it is for them to lose a whole generation of TV viewers. Kids have already decided for the most part that they don’t like the constraints of traditional TV and it would be very easy for this whole generation to just walk away from the networks. To some degree they already have. The average age of network viewers keeps climbing. Surveys show that the networks barely register with this generation in their list of favorite sources of programming.

It’s too late to keep Aereo going, but it’s not too late for the TV networks to find some way to remain relevant. But I don’t see them doing anything that is going to endear them to the younger generation and I think the clock is ticking on a whole generation of viewers. At some point the advertisers that support TV are going to realize that they aren’t getting to the right demographic, and that could bring about the demise of the networks in a big hurry.