Why We Need Network Neutrality

While the FCC has been making noise about finding a way to beef up net neutrality, the fact is that the courts have gutted it and ISPs are more or less free today to do whatever they want. In March, Barbara van Schewick, a Stanford professor, had several ex parte meetings with the FCC and left behind a great memo describing the current dilemma in trying to rein in network neutrality violations.

In this memo she describes some examples of bad behavior by US and British ISPs. While she highlights some well-known cases of overt discrimination by ISPs, she believes the fact that the FCC has actively intervened in such cases over the last decade has held the ISPs at bay. But now, unless the FCC can find some way to put the genie back into the bottle, there are likely to be many more examples of ISPs discriminating against some portions of web traffic.

Certainly ISPs have gotten a lot bolder lately. Comcast essentially held Level3 and Netflix hostage by degrading their product to the point of barely working in order to extract payments from them. And one can now imagine AT&T and Verizon doing the same thing to Netflix, and all of the ISPs then turning to other big content providers like Amazon and Facebook and demanding the same kind of payments. Since the FCC did nothing about the issue, it seems we have now entered a pay-for-play era on the network.

The US is not the only place in the world that has this issue. We don’t have to look at the more extreme places like China to see how this might work here. Net neutrality violations are pretty common in Europe today. A report in 2012 estimated that one in five users there was affected by ISP blocking. The things that have been blocked in Europe are across the board and include not only streaming services, but voice services like Skype, peer-to-peer networks, Amazon cloud services, gaming, alternate email services and instant messaging.

If we don’t find a way to get net neutrality under control the Internet is going to become like the Wild West. ISPs will slow down large bandwidth users that won’t pay them. They will block anybody who is doing too good of a job of competing against them. The public will be the ones who suffer from this, but a lot of the time they won’t even know it’s being done to them.

I don’t know anybody who thinks the FCC has the courage to take the bold steps needed to fix this. The new Chairman talks all the right talk, but there has been zero action against Comcast for what they did to Netflix. I imagine that the ISPs are still taking it a little easy because they don’t want to force the FCC to act. But the FCC’s threats of coming down on violators are going to sound hollow as each day passes and nothing happens.

Professor van Schewick points out that, absent strong rules from the FCC, there is no other way to police network neutrality. Some have argued that antitrust laws can be used against violators. But in the memo she demonstrates that this is not the case and that antitrust law is virtually worthless as a tool to curb ISP abuses.

It’s not just the big ISPs we have to worry about. There are a lot of smaller ISPs in the country in the form of telcos, cable companies, municipalities and WISPs. It’s not hard to picture some of the more zealous of these companies blocking things for political or religious reasons. One might assume that the market would act to stop such behavior, but in rural America there are a whole lot of people who only have one choice of ISP.

I hope that things don’t get as bad as I fear they might and that mostly common sense will rule. But as ISPs violate the no-longer-functional net neutrality rules and nothing happens, they are going to get bolder and bolder over time.

Statistics on How We Watch Video

Experian Marketing has published the results of yet another detailed marketing survey that looks at how adults watch video. This is perhaps the largest survey I’ve seen and they talked to over 24,000 adults about their viewing habits. This one has a bit of a different twist in that it correlates TV viewing with the use of various devices. The conclusion of the survey is that people who use certain devices are much more likely to be cord cutters.

Probably the most compelling statistic from the survey is their estimate that as of October 2013 the number of cord cutters has grown to 7.5 million households, or 6.5% of all households. This is several million higher than previously published estimates. This survey shows that age is an important factor in cord cutting and that 12.4% of households that have at least one family member who is a millennial between the ages of 18 and 34 are cord cutters. And something that makes sense is that over 18% of those with a Netflix or Hulu account have become cord cutters.

The survey also shows that the number of people who watch streaming video continues to grow and that 48% of all adults and 67% of those under age 35 watch streaming or downloaded video from the Internet each week. And this is growing rapidly: both of those numbers increased by 3 percentage points over just the prior six months.

The main purpose of this survey was to look at viewing habits by type of device. One of the surprising findings to me is that smartphones are now the primary device used to watch streaming video. I guess it surprised me because this is not one of the ways we watch video in our household other than videos that pop up from Facebook. But during a typical week 24% of all adults, or 42% of smartphone users, watch video on their phones.

The television set is still the obvious device of choice for viewing content and 94% of adults watch something on their television each week. Only 84% of adults now use the television to watch live programming and the rest are watching in some different manner. For instance, 40% of television watchers still view content from DVDs, 32% get content from a DVR, 13% watch pay-per-view and 9% watch streaming video. As of February 2014, 34% of television sets are now connected to the Internet. Of those, 41% use Apple TV, 35% use Roku and the rest have Internet-enabled TVs.

Adults are watching content on a lot of different devices now. Something that might be surprising to bosses around the country is that 16% of adults with a PC at work use it to watch streaming video. One fourth of adults who own game consoles watch streaming video, 26% of adults who own a home PC use it for videos, and 42% of adults who have either a smartphone or tablet use them to watch video.

The survey also looked at what people watch and the time spent with specific programming on each kind of device. For example, YouTube is the source for 59% of the video watched on PCs and the average adult spends over 21 minutes per week watching it. Only 7% of content viewed on PCs is Netflix, but the average time spent is over 23 minutes per week. And over 10 minutes per week is spent on PCs watching Hulu, Bing Videos and Fox News.

The survey also asked how adults feel about advertising that comes with the video on each kind of device. Not surprising to me, only 9% of those over 50 found the advertising on their smartphone to be useful and 14% found advertising on the TV to be useful. But younger viewers are not quite as jaded as we baby boomers: 36% of millennials find advertising on their smartphone to be useful and 39% find TV advertising to be useful.

AT&T’s Vision of the Future PSTN

AT&T recently met with the FCC staff and issued a memo after the meeting outlining their vision for the future PSTN (public switched telephone network). It’s routine for any ex parte meetings with the FCC to be documented in this kind of memo so that everybody knows what was discussed. There is nothing shocking in what AT&T said, because they have been saying the same things for several years. But I’d like to discuss AT&T’s vision and talk about what such changes mean to the current carrier world.

The memo outlines what AT&T wants the national telephone network to look like after a transition from today’s TDM-based telephone network to an all-IP network. Here are a few things that AT&T sees in the IP telephone world:
• There will only be a few nationwide points of interconnection. This memo doesn’t say how many, but it probably means just at the major Internet hubs that exist today like Dallas, Chicago, Los Angeles, New York City, Washington DC, etc.
• They don’t think there should be any more distinction between long distance and local calls. Basically, in their new world a minute is a minute.
• Each carrier would be responsible for getting their traffic to and from these few points of interconnection.
• They see the interconnection process handled largely by what they call ‘indirect interconnection’, meaning that smaller carriers will contract with larger carriers to connect to others.
• They liken their vision of the future network to today’s peering and transit arrangements.

So what does this all mean to smaller carriers? I’m afraid this is very much in AT&T’s favor and bad for everyone else:
• Today a CLEC can meet a carrier at any technically feasible location, and then each side is responsible for the cost of their own network. AT&T’s proposal would put 100% of the cost of the network and transport needed to meet AT&T on the LECs and CLECs.
• By getting rid of the distinction between toll and local minutes this proposal changes the way that smaller telephone companies get compensated. This would eliminate cost separations for the ILECs. It would eliminate NECA. It would remove long distance as a source for funding the Universal Service Fund. It would essentially remove the FCC from telephone regulation.
• When they say they want the world to look like today’s transit arrangements, AT&T is making a huge money grab. Today local traffic is exchanged for free, or virtually free between carriers. In a transit world (with AT&T presumably being the transit provider) AT&T would charge every carrier for every minute they pass to another carrier. What AT&T likes about transit charges is that they are not regulated in the same manner as access charges, and AT&T will get to set that transit rate.
• This would also make it hard for telcos to continue to give free local calling. Where local calls are swapped for free today, in a world where a carrier has to pay to send and receive each call it’s hard to see that cost not being passed on to customers.
• Also, getting rid of the distinction between local and long distance traffic is AT&T’s way of asking for the end of all access charges. The FCC has already begun a multi-year transition to get rid of terminating access charges at the end-office level. But AT&T envisions the end of both originating access and of transport and tandem access charges. This would destroy some smaller local telcos who rely on those revenues. Getting rid of transport charges is particularly egregious since those costs compensate somebody for building rural fiber. Who will ever build new long-haul rural fiber if they can’t get reimbursed for it?
• These changes would also undo all of the changes made by Judge Greene when he divested Ma Bell in 1984. It gets rid of LATAs and everything associated with that change and basically puts AT&T back where they were before divestiture. Only richer.

One has to admit that AT&T’s version of the future world might well be the most efficient way to do this. But it also very conveniently shifts all costs away from them and gives them a huge windfall in transit charges. This is a massive power play from AT&T that is disguised as an engineering discussion about efficiency.

One would hope that the FCC can see through this because AT&T’s proposal undoes both the divestiture changes from 1984 and the Telecom Act of 1996. And in doing so it makes it far more expensive for any telco other than AT&T to do business by both adding costs and eliminating revenues.

Scratching My Head Over Gigabit Wireless

Over the last few weeks I have seen numerous announcements of companies that plan to deliver gigabit wireless speeds using unlicensed spectrum. For example, RST announced plans to deliver gigabit wireless all over the state of North Carolina. Vivant announced plans to do the same in Utah. And I just scratch my head at these claims.

These networks plan to use the 5 GHz portion of the unlicensed spectrum that we have all come to collectively call WiFi. And these firms will be using equipment that meets the new WiFi standard of 802.11ac. That technology has the very unfortunate common name of gigabit WiFi, surely coined by some marketing guru. I say unfortunate, because in real life it isn’t going to deliver speeds anywhere near a gigabit. There are two ways to deploy this technology to multiple customers, either through hotspots like they have at Starbucks or on a point-to-multipoint basis. Let’s look at the actual performance of 802.11ac in these two cases.

There is no doubt that an 802.11ac WiFi hotspot is going to perform better than the current hotspots that use 802.11n. But how much better in reality? A number of manufacturers have tested the new technology in a busy environment, and with multiple users the new 802.11ac looks to be between 50% and 100% better than the older 802.11n standard. That is impressive, but it is nowhere near gigabit speeds.

But let’s look deeper at the technology. One of the biggest improvements is that the transmitters can bond multiple WiFi channels into a single data path of up to 160 MHz. The downside is that there are only five such channels in the 5 GHz range, and so only a tiny handful of devices can use that much spectrum at the same time. When there are multiple users the channel size automatically steps down until it ends up at the same 40 MHz channels as 802.11n.
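To see how quickly that step-down happens, here is a minimal illustrative sketch in Python. It is a toy model, not how 802.11ac gear actually negotiates channel width; it simply divides the five 160 MHz channels mentioned above among contending devices:

```python
# Toy model only: shows how quickly the five 160 MHz channels in the 5 GHz
# band get used up as more devices contend, forcing narrower channels.
# Real 802.11ac channel selection is considerably more complex.

TOTAL_160MHZ_CHANNELS = 5           # per the paragraph above
CHANNEL_WIDTHS_MHZ = [160, 80, 40]  # widest first; 40 MHz matches 802.11n

def widest_channel_per_device(active_devices: int) -> int:
    """Widest channel (MHz) every active device could use at the same time."""
    total_spectrum_mhz = TOTAL_160MHZ_CHANNELS * 160
    for width in CHANNEL_WIDTHS_MHZ:
        if active_devices * width <= total_spectrum_mhz:
            return width
    return CHANNEL_WIDTHS_MHZ[-1]   # everyone falls back to 40 MHz

for n in (1, 5, 10, 20):
    print(f"{n:>2} devices -> {widest_channel_per_device(n)} MHz channels")
```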

The most important characteristic of 5 GHz in this application is how fast the spectrum dies with distance. In a recent test with a Galaxy S4 smartphone, the phone could get 238 Mbps at 15 feet, 193 Mbps at 75 feet, 154 Mbps at 150 feet and very little at 300 feet. This makes the spectrum ideal for inside applications, but an outdoor hotspot isn’t going to carry very far.

So why do they call this gigabit WiFi if the speeds above are all that you can get? The answer is that the hotspot technology can include something called beamforming and can combine multiple data paths to a device (assuming that the device has multiple receiving antennas). In theory one 160 MHz channel can deliver 433 Mbps. However, in the real world there are overheads in the data path and about the fastest speed that has been achieved in a lab is about 310 Mbps. Combine three of those (the most that can be combined), and a device that is right next to the hotspot could get 900 Mbps. But again, the speeds listed above for the Galaxy S4 test are more representative of the speeds that can be obtained in a relatively empty environment. Put a bunch of users in the room and the speeds drop from there.
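The arithmetic behind the marketing label is easy to sketch with the numbers above: 433 Mbps theoretical per 160 MHz stream, roughly 310 Mbps achieved in a lab, and at most three streams combined.

```python
# Rough arithmetic behind the "gigabit WiFi" label, using the figures above.

THEORETICAL_PER_STREAM_MBPS = 433   # one 160 MHz channel, in theory
LAB_PER_STREAM_MBPS = 310           # best achieved in a lab after overhead
MAX_COMBINED_STREAMS = 3            # the most that can be combined

theoretical_peak = THEORETICAL_PER_STREAM_MBPS * MAX_COMBINED_STREAMS  # ~1,300 Mbps
lab_peak = LAB_PER_STREAM_MBPS * MAX_COMBINED_STREAMS                  # ~930 Mbps

print(f"Theoretical peak: {theoretical_peak} Mbps")
print(f"Lab-tested peak:  {lab_peak} Mbps (device sitting next to the hotspot)")
print("Galaxy S4 at 150 feet: 154 Mbps (measured, per the test above)")
```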

But when companies talk about delivering rural wireless they are not talking about hotspots, but about point-to-multipoint networks. How does this spectrum do on those networks? When designing a point-to-multipoint network the engineer has two choices. They can open up the spectrum to deliver the most bandwidth possible. But if you do that, then the point-to-multipoint network won’t do any better than the hotspot. Or, through techniques known as wave shaping, they can design the whole system to maximize the bandwidth at the furthest point in the network. In the case of 5 GHz, about the best that can be achieved is to deliver just under 40 Mbps out to 3 miles. You can get a larger throughput if you shorten that to one or two miles, but anybody who builds a tower wants to go as far as they can reach, and so 3-mile networks are what will most likely get built.

However, once you engineer for the furthest point, that is then the same amount of bandwidth that can be delivered anywhere, even right next to the transmitter. Further, that 40 Mbps is total bandwidth and that has to be divided into an upload and download path. This makes a product like 35 Mbps download and 5 Mbps upload a possibility for rural areas.
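Expressed as a quick worked example, using the roughly 40 Mbps of engineered capacity described above and the 35/5 split as one possible way to divide it:

```python
# Worked example: splitting the ~40 Mbps engineered for the farthest customer
# (about 3 miles out) into an asymmetric rural product. The 35/5 split is one
# possibility, not a fixed rule.

TOTAL_CAPACITY_MBPS = 40

download_mbps = 35
upload_mbps = TOTAL_CAPACITY_MBPS - download_mbps   # 5 Mbps

print(f"Rural product: {download_mbps} Mbps down / {upload_mbps} Mbps up")
print(f"That download is about 1/{int(1000 / download_mbps)} of a gigabit")
```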

If this is brought to an area that has no broadband it is a pretty awesome product. But this is nowhere near the bandwidth that can be delivered with fiber, or even with cable modems. It’s a nice rural solution, but one that is going to feel really tiny five years from now when homes are looking for 100 Mbps speeds at a minimum.

So it’s unfortunate that these companies are touting gigabit wireless. This technology only has this name because it’s theoretically possible in a lab environment to get that much output to one device. But it creates a really terrible public expectation to talk about selling gigabit wireless and then delivering 35 Mbps, or 1/28th of a gigabit.

People are Part of the Equation

The Pew Research Group today announced survey results that summarize their findings about how Americans feel about our scientific future. The survey asked questions in several areas such as how people feel about technology changes and which changes people believe will be coming.

Anyone who follows my blog knows that I am a bit of a futurist in that I think technology is going to be a very positive force in human life during the rest of this century. There are amazing technologies under development that will transform our lives. Technological upgrades are so common anymore that I don’t know that the average citizen stops and thinks about how technology has already changed our lives. I saw guys watching NCAA basketball games on their cellphones a few weeks ago and I’m sure they didn’t appreciate how many kinds of technology had to come together to make that happen and also how many billions of dollars of investments had to be made in cellular networks. In my experience, Americans have already been trained to take new technology for granted.

Which made me a little surprised by some of the responses in the survey. For example, only 59% of the people surveyed thought that the technology changes we are going to see over the next decades will make life better. A surprising 30% felt they would make people worse off than they are today. I find this interesting in that far more than 59% of us now own and use a smartphone, which is one of the more recent manifestations of new technology, and yet many people still fundamentally fear new technology.

And this is why I say you can never take the human equation out of planning. There are going to be technological breakthroughs that the public will reject, no matter how good they are for the majority of mankind. Let’s look at a few of the technologies that people are most skeptical about:

  • 66% of respondents say it would be a change for the worse if parents were able to manipulate the genes of their children to make them smarter, healthier or more athletic.
  • 65% think it is a bad idea to have robots become the primary caregiver for the elderly or people in poor health.
  • 63% think it is a bad idea to allow personal and commercial drones into US airspace.
  • 53% think it’s a bad idea if people wear implants that show them information about the world around them.

The survey also asked if people would be willing to try some new inventions that are now on the horizon.

  • 50% of people say they are not interested in trying a driverless car.
  • Only 20% of people are willing to try meat grown in a lab.
  • 26% of Americans say they would get a brain implant if it would improve their memory or mental capacity.

The survey also asked what future invention people would most like to own. Over 31% of young people were interested in a wide variety of ways to make transportation easier such as a flying car, a self-driving car or a personal spacecraft. But middle-aged people were more pragmatic and a number of them wanted a robot that could help with housework. There were a few questions on the survey that everybody agreed with. For example, over 80% believe that within a few years doctors will be able to grow organs for people who need organ transplants.

This kind of survey tells us a lot more about people’s hopes and fears than it does about the various technologies. People are very distrustful of many new technologies and it has always been that way. Generally there are some early adopters that try the newest stuff and take some of the mystery out of it for everybody else. Not every new technology becomes popular, but the kinds of major technologies covered by this survey are likely to become widespread once they become affordable. But the survey reminds us that we can’t assume that any technology will be automatically accepted because people are part of that equation.

Making Money With Home Automation

At CCG we are always telling carriers that they need to find products to replace cable TV and voice, both of which are slowly losing customers. One of the products worth considering for carriers that have a sizable residential base is home automation.

What we hear is that once homeowners learn what home automation can do for them they want this product. But there are a lot of moving parts to the product. There are hardware costs to cover along with numerous home visits needed, so it’s not a product that is automatically going to make money unless you do it right. Here are things to think about when considering the product:

  • You must be willing to cross the threshold. This product requires you to routinely go into customer homes. As an industry we have spent a decade looking for ways to reduce truck rolls and this product increases truck rolls as a routine part of the product. Your pricing must embrace that concept so that you are recovering a lot of your technician time.
  • One way to look at this product is that it gives you the opportunity to cross-sell other telecom products. We are told by some clients that the cross-sales are worth far more than the margin on home automation.
  • There are not likely to be any industry standards for a long time, if ever. This means that you need to decide what devices you will and will not support. We think the right strategy is to define the list of things that you will automate, and even then that you only deal with monitoring units that are part of your suite of products. Otherwise customers will always be buying crazy off-the-shelf things (like an egg tray that tells you how many eggs are left) and expect you to somehow tie them into your system.
  • You must recover equipment costs. You need to have an initial installation fee plus some portion of the monthly fee on a term contract that is aimed at recovering the cost of the equipment (see the cost-recovery sketch after this list). Base hub units for home automation are going to cost you from $150 to $200 and there is a wide array of monitors that can be added to the system to automate things like watering systems, thermostats, fire detectors, music systems, lighting, etc.
  • This industry and the product are always going to be changing. Home automation is the first small step into the Internet of Things and by becoming the trusted vendor today you have a leg up on that market when it gets more mature. The downside to this is that the technology will be changing quickly and so you are going to have to always be looking at different and newer monitors and devices to support with the product. But your customers will want many of the new things that will be coming along, and so you will have a continuous opportunity to upgrade and add on to customer systems.
  • You must manage customer expectations. There are three components to pricing the product – installation, equipment and ongoing maintenance. We think customers are going to get excited about this product. Once it’s installed and you have automated their sprinkler system and window shades they are going to want you to keep coming back to update more things over time. So your pricing needs to make it very clear about what is included with your base fee, and what costs extra. We suggest that you offer pricing plans that include some set number of visits. For instance, a base plan might mean that all future visits to the home are for a fee. But you might then also sell plans that include two, four or six visits a year where the customers pay for these visits as part of their monthly fees. That kind of pricing will stop customers from calling you to visit every time they think of a new home automation device to add or a refinement to make with the existing equipment. Without managing expectations in this manner you will find yourself making a lot of unpaid trips to customers.
  • Bundle the product. It’s a natural to bundle home automation with home security, but you could bundle it with anything else your customers want. The whole point of this product is to use it as a platform to get your customers to buy multiple products from you.
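As a back-of-the-envelope sketch of the cost-recovery math described above: the hub cost comes from the $150 to $200 range mentioned in the list, while the monitor prices, installation labor, installation fee, monthly equipment fee and 24-month term are hypothetical placeholders rather than recommended prices.

```python
# Back-of-the-envelope cost recovery for a home automation installation.
# Only the hub cost range comes from the discussion above; every other
# number is a hypothetical placeholder, not a recommended price.

hub_cost = 175.00                        # midpoint of the $150-$200 hub range
monitor_costs = [45.00, 60.00, 30.00]    # hypothetical add-on devices
install_labor = 120.00                   # hypothetical technician time

total_cost_to_provision = hub_cost + sum(monitor_costs) + install_labor

install_fee = 99.00            # hypothetical one-time charge to the customer
monthly_equipment_fee = 15.00  # hypothetical slice of the monthly fee
term_months = 24               # hypothetical contract term

recovered_over_term = install_fee + monthly_equipment_fee * term_months

print(f"Cost to provision:   ${total_cost_to_provision:.2f}")
print(f"Recovered over term: ${recovered_over_term:.2f}")
print("Costs covered" if recovered_over_term >= total_cost_to_provision
      else "Shortfall - reprice the product")
```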

If They Can Do It

I think everybody agrees that we don’t have enough last mile fiber infrastructure in this country to bring high-speed internet to the homes and businesses that want it. For various reasons the large telecom companies have decided to not invest in rural America, or even in any town or neighborhood where it costs more than average to build fiber.

Meanwhile we have a lot of municipalities that are interested in getting fiber to their communities but don’t know how to get it financed. Lastly, from what I read, we have tens and maybe hundreds of billions of dollars of potential investment money on the sidelines looking for solid investment opportunities. It seems like there ought to be an easier way to pull these things together because all of the components are there to make this work – the customer demand, the community desire and the money needed to get it done.

One has to look north to Canada to see an economic model that offers an answer – public private partnerships (P3s). For an example of a working investment model, consider Partnerships British Columbia. The venture was launched in 2002 and is wholly owned by the province. It’s operated by a board made up of both government and private sector members.

The primary function of Partnerships BC is to analyze potential infrastructure projects and then attract private capital to get them constructed. The benefits to the province are tremendous. First, Partnerships BC stops the government from undertaking projects that don’t make financial sense. After all, governments often like to build ‘bridges to nowhere’. Partnerships BC will analyze a project and make certain that it will cash flow. They then attract private bidders to build and operate the most financially attractive projects.

They bring private money in to take the place of tax money and in British Columbia this is getting things like hospitals and roads built without putting all of the burden on the taxpayer. Additionally, projects built with private money cost less to build. We know all of the horror stories of cost overruns on government-built projects and that doesn’t happen when there are financial incentives for the private entities to build things efficiently. In fact, a few hospitals were built so far ahead of schedule in BC that the province was not ready for them.

One of the biggest complaints against government fiber projects in the US is that government has no business operating a competitive business. The P3 model eliminates that complaint as well since it attracts qualified operators to build and run projects. The telephone companies in the US should all be in favor of having a P3 structure here since it would help them to finance new fiber projects. Smaller and mid-sized telecoms have always had trouble finding capital and a P3 fund would bring money that might not be found elsewhere.

And of course, while I have a bias towards funding and building fiber projects, a state P3 fund would fund almost any infrastructure project that has a demonstrable cash flow such as hospitals, water systems, roads, railroads and bridges. I keep reading that we have a multi-trillion dollar infrastructure need in the country which is far greater than the combined borrowing capacity of governments. So we need to wake up and look north and use the P3 idea along with any other idea that will let us dig ourselves out of our infrastructure hole. America is crumbling and P3s are one part of the solution that could be implemented immediately.

The Gigabit Divide

We all know what the digital divide is – it’s when one place or demographic has broadband when those nearby do not. The term was originally coined after DSL and cable modems came to urban areas while rural America was left with dial-up access.

Over the years the definition has stayed the same but the circumstances have changed. For example, there are still some millions of households in the country stuck with dial-up or satellite broadband. But most of the digital divide today is an urban / rural divide where the telecom companies have invested in much newer and faster technology in urban areas and have ignored rural areas. Metropolitan areas all over the country now have at least some 100 Mbps cable modems while surrounding smaller towns often still get maybe 3 Mbps. And there is an economic digital divide within cities where some neighborhoods, particularly richer ones, get better infrastructure than poorer ones.

But we are about to embark on the most dramatic divide of all, the gigabit divide. I spent last week in Austin and they are a good example of what I fear will be happening all over the country. There are three companies building gigabit fiber in Austin – Google, AT&T and Grande. None of them are going to build everywhere. For instance, Google will only build to a ‘fiberhood’ where enough people in an area pre-sign with Google. And the other two carriers are going to do something similar and carve out their parts of the market.

This is great for those who get fiber. They will end up with the fastest fiber connections in the world, and hopefully over time that will make a big difference in their lives. But my concern is that not everybody in Austin is going to get fiber. To see how this works we only have to look at Verizon FiOS. For years Verizon built to the neighborhoods with the lowest construction costs. That meant, for example, that they would favor an older community with aerial cable that could be over-lashed over a newer community where everything was buried and construction costs were high.

You find a real hodge-podge when you look closely at FiOS – it will be on one street and not the next, in one neighborhood and not the adjoining one. And Austin is going to be the same way. These three carriers are not going to all overbuild the same neighborhoods because in a competitive 3-way overbuild none of them will make money. Instead it is likely that Austin will get balkanized and chopped up into little fiberhoods for each of the three carriers.

But what about those that don’t get any fiber? There will likely be significant parts of the City where nobody builds. Those houses are going to be on the wrong side of the gigabit divide. Since most of the world is on the wrong side of the gigabit divide that doesn’t sound so bad. But think what it means. Who is going to buy a house in the future Austin that doesn’t have gigabit fiber? This is going to create a permanent and very tangible division of fiber haves and have-nots.

Cities used to protect their citizens against this sort of thing and that is why cable franchises were awarded locally so that a City could make sure that everybody got served. But cities are embarrassingly falling over themselves for Google to the detriment of many of their own citizens. They are going to take care of the richer neighborhoods at the expense of the poorer ones. This is not what cities are supposed to do since they represent all of their citizens. We have had processes in place for years to make sure that telecom companies don’t bully and divide our communities, and now City Hall is in front of the line inviting them to do so.

I say shame on Austin if they wake up five years from now and find that 20% or 30% of their City doesn’t have fiber and is being left far behind. The houses and businesses in those neighborhoods will have lost value and will probably be the seeds of the slums of the future. When we look back twenty years from now I think we’ll see that this short-sighted policy to bow to Google cost the City more money than it gained.

The Battle for the OTT Box

Amazon this week finally announced the fireTV, an OTT set-top box. It’s been rumored for years that they would launch one, and considering how popular Amazon Prime is it’s surprising how long it took for them to do this. But this announcement highlights the giant battle going on in households for control of the OTT market.

And of course, boxes aren’t the only way for homes to get OTT content. In my house we don’t have a television and we watch our content on PCs, laptops, tablets and smartphones. This works for us. And then there are smart TVs. There are decent smart TVs from LG, Samsung, Sony, Toshiba, Philips and Panasonic. Each of these comes with a different system for giving access to web channels. The best of them offer a lot of customization to make your line-up what you want to watch. All come with some modest amount of web gaming.

But the big battle today is with the boxes, primarily between the new Amazon fireTV, Apple, Roku and Google’s Chromecast dongle. This wide array of options must have the average household scratching their head. Every box is different in look and feel, price and features. They vary widely in what you can watch and in the ease of using their interface. And they are all hoping to control a large chunk of the market.

The Amazon fireTV is an interesting platform. They have built in 2 GB of RAM and a dedicated graphics processor. With an add-on $40 game controller this is going to give them the ability for higher quality gaming than the other boxes, although not near the capability of the dedicated game platforms. For non-hardcore gamers who just want to play games on their big screen it should be a good alternative to buying an expensive gaming box.

Many of the boxes now have voice activation. With smart TVs you normally have to shout across the room to the TV and this is widely reported to be clunky. The fireTV puts the voice control in the remote. Roku 3 has taken the path of making their remote motion controlled.

The real competition between boxes comes with the programming choices they have built in to the channel line-up. For example, the Roku 3 line-up has grown to over 1,000 channels and apps. The Amazon fireTV is launching with only 165 and has some clear major omissions such as HBO Go. But one has to suspect those deals will all be made and that they will quickly catch up.

And perhaps the real winner will be the box and company that finally makes a deal for some regular programming to go along with the OTT content. The first one that can bring in the network channels, HBO without a landline subscription and popular programming like ESPN and Disney could be a major competitor to cable companies. A recently surfaced email that Steve Jobs wrote shortly before he died showed that Apple was hoping to add this kind of content when they release the next generation of Apple TV, and it might be the lack of such deals that has held off that release.

The Amazon fireTV was announced at a price of $99, the same as the Roku and the Apple TV, although both of those are widely available today for around $95. The Google Chromecast is available today for only $35. I have to be honest and say that if I buy a TV, which I am considering, I will have a hard time making a choice between these options. I read a lot more about this stuff than the average household and it makes me wonder how people make such a choice. They probably just go with the brand that they feel the most comfortable with rather than making the hard side-by-side comparisons.

More on White Space Wireless

Last July I wrote about the Google database that shows the availability of white space radio spectrum in the US. This is spectrum that has been used for years by UHF television stations. In some rural places it was never used and in others it has been freed up as stations have moved elsewhere.

I’ve been hearing about this spectrum a lot lately so I thought I’d talk a little more about it. There are now several trials of the spectrum going on in the US. The first test market was the City of Wilmington, NC, which implemented this in its municipal network in 2010. They use it to control traffic lights, for public surveillance cameras and other municipal uses. Probably the biggest US test so far is a campus-wide deployment at West Virginia University in Morgantown that launched in July 2013. There look to be only a few dozen of these trials going on worldwide.

So what are the pros and cons of this technology and why isn’t it being deployed more? Consider some of the following:

  • It’s not available everywhere. That’s why Google and others have put together the maps. Where there are still TV stations using some of the bandwidth, only the unused portion of spectrum is available. There are still large areas around most major metros where some of the spectrum is in use.
  • This spectrum is still provisional and the FCC has to approve each trial use. I’m not sure why this is taking so long, because the Wilmington test has been going on since 2010 and supposedly has no interference issues. But I guess the FCC is being very cautious about letting WISPs interfere with television signals.
  • We are at that awkward point that happens with all new uses of spectrum, where there is equipment that will work with the spectrum, but that equipment won’t get really cheap until there is a lot of demand for it. But until manufacturers believe that demand is real, not much happens. It was this equipment cost barrier that killed the use of LMDS and MMDS spectrum in the 90s. There is no equipment on the market yet that would let white space be used by laptops, cell phones or tablets. Instead it must feed a traditional WiFi router.
  • One use of the spectrum is that it can make a better hotspot. I don’t think most people understand the short distances that can be achieved with hotspots today. A 2.4 GHz WiFi signal can deliver just under 100 Mbps out to about 300 feet. But it dies quickly after that and there may be 30 Mbps left at 600 feet and nothing much after that. If they put white space receivers into laptops this spectrum could deliver just under 50 Mbps out to 600 feet and 25 Mbps out to 1,200 feet. And there is an additional advantage to white space in that it travels fairly freely through walls and other barriers.
  • The real potential for the spectrum is to extend point-to-multipoint radio systems. With white space you can deliver a little less than 50 Mbps up to about 6 miles from the transmitter. That’s easily twice as far as the distances that can be achieved today using unlicensed spectrum, and a 12-mile circle around a transmitter can make for viable economic returns on an investment (see the coverage sketch after this list). Physics limits this to about 45 Mbps of total bandwidth, meaning that a product of 40 Mbps download and 5 Mbps upload is possible. That is certainly not fiber speeds, but it would be a great rural product. The problem comes in the many places where part of the spectrum is still in use, and in those places the radios would have to work around the used spectrum and the speeds would be correspondingly slower.
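To put the reach advantage in perspective, here is a small sketch comparing the coverage area of a white space transmitter (a little under 50 Mbps out to about 6 miles, per the bullet above) with a 5 GHz unlicensed transmitter engineered for roughly 3 miles:

```python
import math

# Coverage comparison using the distances cited above: white space reaches
# roughly 6 miles from a tower versus about 3 miles for 5 GHz unlicensed.

def coverage_sq_miles(radius_miles: float) -> float:
    return math.pi * radius_miles ** 2

whitespace_area = coverage_sq_miles(6.0)    # ~113 square miles
unlicensed_area = coverage_sq_miles(3.0)    # ~28 square miles

print(f"White space coverage: {whitespace_area:.0f} sq miles per tower")
print(f"5 GHz unlicensed:     {unlicensed_area:.0f} sq miles per tower")
print(f"Ratio:                {whitespace_area / unlicensed_area:.0f}x the area")

# The ~45 Mbps of total white space capacity still has to be split,
# for example into 40 Mbps download and 5 Mbps upload.
```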

It seems like this is a spectrum with a lot of potential, especially in rural places where there are no existing uses of the spectrum. This could be used for new deployments or for supplementing existing WiFi deployments for WISPS. There is equipment that works on the spectrum today and I guess we are now waiting for the FCC here and regulatory bodies around the world to open this up to more use. The US isn’t the only place that used this spectrum for TV and much of the rest of the world shares the same interference concerns. But if this is ever released from the regulatory holds I think we would quickly hear a lot more about it.