Should All Bits be Treated the Same?

Network neutrality asks whether traffic from different providers should be treated the same. For example, is it okay to give priority to Netflix over another movie provider like Amazon Prime? The concept has people up in arms because they understand that deals made between content providers and network owners will end up restricting their choices. And people want choice.

But many of the articles I have seen on net neutrality confuse the issue between ISPs and content providers to somehow mean that there can’t be any discrimination among bits. And so I ask: should all bits be treated the same? From a network engineering perspective the obvious answer is no, of course not. We already discriminate among some bits today, and in the future it’s going to be desirable to discriminate a lot more.

Today any ISP delivering its own VoIP product already discriminates in favor of voice. Customers don’t want their phone call disconnected when another family member starts watching a movie or downloading a large data file. And so we give voice packets first priority, using techniques collectively called Quality of Service (QoS).

QoS is a combination of techniques that give some packets better treatment than they would get with best-effort delivery alone. For example, QoS uses traffic shaping techniques like packet prioritization, application classification and queuing at congestion points to give priority to preferred bits. QoS can also use the Resource Reservation Protocol (RSVP) at gateways to fine-tune the level of packet prioritization.
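
To make the idea concrete, here is a minimal Python sketch of the kind of priority queuing a congestion point might apply. The traffic classes, priority values and packets are all made up for illustration; real QoS implementations live in router hardware and firmware.

```python
import heapq
from itertools import count

# Priority classes a QoS policy might assign (values are illustrative)
PRIORITY = {"voice": 0, "video": 1, "best_effort": 2}

class CongestionPointQueue:
    """Drains higher-priority packets first; FIFO within a class."""
    def __init__(self):
        self._heap = []
        self._seq = count()  # tie-breaker preserves arrival order within a class

    def enqueue(self, traffic_class, packet):
        heapq.heappush(self._heap, (PRIORITY[traffic_class], next(self._seq), packet))

    def dequeue(self):
        if self._heap:
            return heapq.heappop(self._heap)[2]
        return None

q = CongestionPointQueue()
q.enqueue("best_effort", "large file chunk")
q.enqueue("voice", "VoIP frame")
print(q.dequeue())  # "VoIP frame" leaves first, despite arriving second
```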

In the future there are going to be other kinds of packets that we will want to give top priority. Some things that come to mind are signals from burglar alarms, fire alarms and health monitors, which we will always want delivered as quickly as possible in case of the emergencies they are designed to detect.

But we are also nearing a time when we are going to generate a lot of bits that we will want to give the lowest priority. We are going to have numerous monitors and sensors as part of the Internet of Things delivering data that we will not want interfering with voice calls, or even with video viewing or web browsing. It’s hard to imagine that we will insist on high-priority treatment for packets from monitors that track things like the humidity of a flower bed or the number of eggs left in the refrigerator.

And so I think it is likely that we are headed for a time when we will have three types of traffic in our homes. There will be the high priority packets for things like telephone calls and medical monitors. We will have regular priority for things like watching movies or browsing the Internet. And we will want the lowest priority for some of the background sensors that will keep an eye on our world.

And perhaps we will also someday get the flexibility for each household to choose which bits they want to give the highest and lowest priorities. It certainly is going to be a bit of a challenge for network operators, because the easiest thing to do is to treat all bits the same. But if the world demands different priorities for bits, then network operators will find a way to deliver.

What nobody wants is for our ISPs to dictate what we can watch by picking winners and losers among content providers. We want the option to watch movies from some start-up content provider and not be forced to watch Netflix because they are the only ones with deep enough pockets to buy faster connections. If network providers take the path of picking Internet winners and losers, they cannot be surprised when customers flock to alternative providers as those show up in their markets.

More New Technologies

I periodically report on new technologies that I find interesting. This past week I ran across several new technologies that seem pretty revolutionary and that could all result in significant improvements in our lives.

First is a new green technology. Scientists at the University of Toronto have developed something they are calling colloidal quantum dots, a new material with the potential to revolutionize solar cell technology. Today’s solar cells all work by juxtaposing two types of materials: an n-type material that is rich in electrons and a p-type material that is poor in electrons. The cell creates a current when energy from sunlight moves electrons from the electron-rich material to the electron-poor one. Until now, however, all n-type materials have lost potency when exposed to air and thus have had to be sealed inside the solar cells we are familiar with. But with colloidal quantum dots we might be able to have cheap solar cells everywhere. Picture having them embedded in outdoor paint so that every roof, home, bridge or cell tower could be generating electricity.

Next is Spansion, which has developed and is manufacturing energy harvesting chips that can generate enough electricity to power themselves. They generate small amounts of electricity through techniques such as taking advantage of vibrations, sunlight or differences in temperature. This is one of the breakthroughs that has been needed to unleash the Internet of Things. Without it, every IoT sensor would need its own battery, and replacing those batteries was a cost barrier to realistic deployment of sensor networks. But self-powering chips make it possible to deploy sensor networks that can monitor crops, herds, pollution or just about anything else.

Another big breakthrough comes from HP, which is calling it “The Machine”. This brings together a number of different technologies that together could revolutionize the computers we use to process large amounts of data. HP has developed a new computer from scratch. It uses specialized core processors rather than a series of generic processors. It will use photonics rather than electronics, eliminating copper wiring. It will use memristors for a unified memory that is as fast as RAM but that can also store data like a flash drive. And it has a 3D architecture that packs components closer together than can be done with traditional flat chipsets.

All of these changes should result in servers that are about six times faster than today’s best server while using only one-eightieth of the power and requiring significantly less space. Probably the most significant aspect is the reduced need for power. Today’s data centers have often been built where power is cheapest, but by cutting power consumption by a factor of 80, almost any closet could become a small data center. It’s been reported that both Google and Amazon are working on their own versions of new servers and may very well be doing something similar, but HP is the first to announce specifics. HP hopes to be able to ship these by 2018.

Finally, math gets a headline because a company called Code On Technologies promises to use math to speed up existing data transmissions. Most people probably don’t realize how much time and energy is spent during a data transmission today to reassemble the data stream at the receiving end. Packets essentially get numbered, and the receiving end of each transmission waits until it has found all of the needed packets before passing the information along. The process is quite inefficient due to searches for missing packets, and this reassembly is done over and over as a piece of data moves from device to device across the Internet.

Code On has developed a technique that instead codes data into a mathematical equation. Rather than ‘numbering’ the packets, it assigns each packet an identity in terms of the solution to an equation. On the receiving end this means there no longer has to be a constant search for missing packets, since the receiver can reconstruct what was in the missing packets by solving the equation. This sounds esoteric, but it could improve transmission speeds on current networks by as much as twenty times by vastly improving the process of reconstructing the data at the receiving end of each transaction. This could make for much faster satellite or WiFi networks without having to change those networks. This makes a math nerd smile!
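
Code On has not published the details of its algorithm, but the description resembles random linear network coding. Here is a minimal, illustrative Python sketch (over GF(2), using XOR) of the basic idea: the receiver can rebuild the source packets from any sufficient set of coded combinations, rather than waiting for specific numbered packets.

```python
import random

def encode(source_packets, n_coded):
    """Each coded packet is a random GF(2) combination (XOR) of the sources."""
    k = len(source_packets)
    coded = []
    for _ in range(n_coded):
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):               # avoid the useless all-zero combination
            coeffs[random.randrange(k)] = 1
        payload = 0
        for c, p in zip(coeffs, source_packets):
            if c:
                payload ^= p
        coded.append((coeffs, payload))
    return coded

def decode(coded, k):
    """Gauss-Jordan elimination over GF(2). Any k independent combinations
    recover the sources; no specific numbered packet is ever required."""
    rows = [coeffs[:] + [payload] for coeffs, payload in coded]
    for col in range(k):
        pivot = next((r for r in range(col, len(rows)) if rows[r][col]), None)
        if pivot is None:
            return None                   # not enough independent packets yet
        rows[col], rows[pivot] = rows[pivot], rows[col]
        for r in range(len(rows)):
            if r != col and rows[r][col]:
                rows[r] = [a ^ b for a, b in zip(rows[r], rows[col])]
    return [rows[i][k] for i in range(k)]

random.seed(7)
sources = [0x48, 0x65, 0x6C, 0x70]        # four example packets (bytes of "Help")
coded = encode(sources, 6)                # send a couple of extra combinations
random.shuffle(coded)                     # arrival order does not matter
print(decode(coded, len(sources)))        # recovers the sources once rank suffices
```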

Cisco’s View of 2018

For many years Cisco has been providing forward-looking forecasts of the trends in data and networking to help prepare their clients for the future. Like anybody who predicts the future they aren’t always right, but they are right as often as not, and they do a good job of spotting trends and estimating where those trends will take us. They just released a new Visual Networking Index that makes dozens of predictions. Let me highlight a few of them along with my comments on what I think they mean for my readers.

Global IP traffic will triple by 2018. Some of this growth will come from new people joining the web through ventures like Google and their data satellites. But every ISP ought to expect at least a doubling in traffic volumes over this period, much as has been happening historically. Be prepared to buy more backbone bandwidth and to beef up the electronics that serve neighborhoods.

Busy hour Internet traffic is growing faster than overall traffic. For example, in 2013 the traffic experienced by ISPs in the busiest hour of the day increased 32% while total traffic increased 25%. Since ISPs have to buy backbone services to support the busiest times on the network, expect those costs to climb.

IP Video will be 79% of all traffic by 2018, up from 66% in 2013. Not only are Netflix and Amazon Prime here to stay, the OTT providers will continue to sell more video programming. This is going to continue to put great pressure on cable TV and should result in a lot of people becoming cord cutters (something Cisco did not predict, since their numbers only look at network traffic). UltraHD video will account for 11% of all video traffic by 2018, up from 0.1% today. HD video will account for 52% of all video traffic, up from 36% today. The desire for UltraHD is going to put a lot of pressure on ISPs, since a single transmission will require at least 15 Mbps.

There will be as many machine-to-machine connections as there are people on earth by 2018. The Internet of Things will have made big strides by 2018 but will still be in its infancy. What this means for network providers is that there will be a steady volume of data traffic generated by M2M traffic that will not vary by time of day like residential or business data usage. Cisco predicts nearly 21 billion global network connections by 2018, up from 12.4 billion in 2013.

Speeds will continue to increase. Cisco predicts that the average global broadband speed will be 42 Mbps by 2018, up from 16 Mbps at the end of 2013. Within that average speed they predict that 55% of worldwide broadband connections will be faster than 10 Mbps. This has a lot of policy implications since the FCC is looking at changing the definition of broadband. If they only increase the definition modestly to 10 Mbps it will be obsolete almost before it is adopted. If the FCC wants the US to keep up with the rest of the world they need to push the industry rather than trail it.

The US will still generate the most data. Cisco predicts that in 2018 the US will still be the largest data producer at 37 exabytes, with China second at 18 exabytes. The fastest growing continent will be Africa, with the fastest growing individual countries being India and Indonesia.

WiFi traffic will exceed wireline traffic by 2018. WiFi and mobile devices will generate 61% of global IP traffic by 2018 (WiFi 49% and cellular 12%), up from 41% and 3% today. This is a bit misleading since most WiFi traffic ultimately feeds into a landline network; it really measures the first link used to reach the Internet.

Another Hassle for ISPs – Policing Pirated Music

You probably remember the attempts by the Recording Industry Association of America (RIAA) last decade to stop file sharing of music by randomly suing those who shared music files online. They would go after college students and others, sue them for $750 to $12,000 per song shared, and make the cases public to scare other people away from sharing music. They stopped this practice in 2008 and instead went after ISPs, asking them to deny service to people who violated copyrights more than three times.

But now the issue is back in play, and ISPs are going to find themselves routinely asked to chase file sharers. Part of the music industry has made a deal with a new company called Rightscorp, which is now chasing file sharers instead of the RIAA. Rightscorp asks file sharers to settle for $20 per song violation instead of being sued, and any collected proceeds are shared 50/50 with recording labels like BMG and Warner Brothers.

The company started in 2012. In 2013 they collected around $750,000 in settlements, but they have a technology that could let them pursue these violations by the millions. And that is where the new hassle for ISPs will come in.

Rightscorp monitors file uploads and downloads on file-sharing networks like BitTorrent. They capture the IP addresses of people sharing songs illegally. While they don’t know the identity of the violator, they know the ISP involved, and they are asking ISPs to forward their demands for settlement on to the violators.

Rightscorp is relying on the Digital Millennium Copyright Act (DMCA), which they believe requires ISPs to forward their notices. They claim to be working now with 70 ISPs, but there are many ISPs who either do not think they are required to pass on settlement offers or who pass on only an abbreviated version of the Rightscorp demand for payment. But given the technology they are using, one would expect them to eventually ask every ISP for help.

There are existing alternatives to what Rightscorp is doing. There is already a process under development among ISPs to create a ‘six strike’ system that would deny Internet access to people who violate copyrights multiple times. But Rightscorp and others believe that this system will not have teeth, since ISPs are not heavily invested in kicking out paying customers.

Rightscorp has developed a technology that lets them track file sharing across multiple IP addresses. This is needed since ISPs issue a new IP address to a user any time they initiate a new connection. Rightscorp believes that an audit trail showing multiple violations gives them the leverage to get ISPs to help them. Certainly that is the kind of evidence that could be used in court against an ISP that refuses to help. They have not sued an ISP yet, but the threat is there. And obviously some ISPs are helping them, since they have collected so far from over 70,000 violators.

As an ISP you need to decide what to do when you get one of these demands from Rightscorp. Do you do nothing, do you pass on the full demand to your customers, or do you somehow edit the demand before forwarding it? Do you share your customers’ identities with Rightscorp? These are not easy questions to answer. But one thing is for sure: this is just one more of the little hassles that keep getting loaded onto being an ISP today.

Making VoIP Work Better

The Broadband Internet Technical Advisory Group (BITAG) recently issued guidelines suggesting ways carriers can improve the delivery of VoIP. BITAG is a technical advisory group that makes recommendations about ways to make the Internet function better. They say that a substantial portion of global voice traffic now uses VoIP, but that numerous problems stop it from working as well as it might.

The report looks in detail at how VoIP works and at the ways it can be impaired or restricted. It goes on to make specific recommendations on methods for mitigating restrictions and then describes steps that ISPs, software developers and equipment vendors can take to make VoIP work better.

The primary impairment to VoIP is port blocking, where the sending and receiving ends of a call are unable to make the desired connection. A VoIP call requires both ends to agree on five parameters: the source IP address, the destination IP address, the transport protocol being used, the source port and the destination port. If the two ends of the call fail to agree on these parameters, the call will be impaired or blocked.
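
For reference, here is a minimal Python sketch of that five-tuple. The addresses, ports and blocking policy shown are illustrative, not taken from the BITAG report, but they show how blocking any single element is enough to break a call:

```python
from typing import NamedTuple

class FiveTuple(NamedTuple):
    """The five parameters that identify a VoIP flow."""
    src_ip: str
    dst_ip: str
    protocol: str   # VoIP media typically rides on UDP
    src_port: int
    dst_port: int

call = FiveTuple("203.0.113.10", "198.51.100.7", "UDP", 49170, 5060)

# A port-blocking middlebox only needs to match one element to kill the flow.
blocked_dst_ports = {5060}   # hypothetical ISP policy blocking SIP signaling
print(call.dst_port in blocked_dst_ports)  # True: this call never sets up
```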

Another way that VoIP calls fail is when a connection is made into or out of a network that sits behind a Network Address Translation (NAT) device, which allows multiple devices in the network to share the same public IP address. These devices often don’t give any priority to VoIP, making it hard to make and keep a connection. Often there is an Application-Level Gateway (ALG) working with the NAT, which tries to find a path for each kind of traffic hitting the NAT. But ALGs often do a poor job of identifying and allowing for VoIP, particularly VoIP that was not provided by the ISP deploying the ALG.

The other major cause of VoIP failure is compatibility problems between programs or applications that end up restricting some portion of the VoIP functionality.

BITAG has made specific recommendations that they hope will ease these problems:

  • They recommend that ISPs should avoid impairing or restricting VoIP applications unless there is no technical alternative. ISPs often take steps to block certain ports on their network in an attempt to kill spam or other unwanted traffic, and some of these network management techniques also end up blocking valid VoIP applications.
  • If ISPs use techniques or have policies that might impair VoIP, they should fully disclose those policies on their website. They should also provide a way for customers to communicate with them when such policies result in blocked ports or otherwise restricted VoIP. Interestingly, the original Net Neutrality rules issued by the FCC contained similar disclosure requirements for ISPs, but I think those rules stopped being effective when the courts overturned the Net Neutrality order.
  • BITAG recommends that consumer equipment allow user configuration of ports. Many devices and firewalls make it impossible for a consumer to modify the specific port assignment needed for their VoIP application.
  • They also recommend that VoIP-related ALGs minimize their impact on VoIP services other than the service provider’s own. For example, a cable modem might be set to allow the cable company’s VoIP but cause problems with VoIP from other sources.
  • They recommend that VoIP applications be designed to be port-agile, meaning that the application does not require a specific port but can fall back to a port that works at both the sending and receiving ends of a call (a minimal sketch of this idea follows the list).
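
Here is what port-agility might look like in practice, as a minimal Python sketch. The preferred-port list is illustrative, not a standard; the point is simply that the application keeps trying rather than failing when one port is blocked:

```python
import socket

def bind_port_agile(preferred_ports=(5060, 5062, 5080)):
    """Try a list of preferred VoIP ports, then fall back to any free
    ephemeral port. The port list here is illustrative only."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    for port in preferred_ports:
        try:
            sock.bind(("0.0.0.0", port))
            return sock, port
        except OSError:
            continue  # port blocked or in use; stay agile and keep trying
    sock.bind(("0.0.0.0", 0))  # 0 = let the OS pick any open ephemeral port
    return sock, sock.getsockname()[1]

sock, port = bind_port_agile()
print(f"listening for VoIP signaling on UDP port {port}")
```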

AT&T Wants to Sell its Abandoned Copper

On May 28th AT&T made an ex parte presentation to the FCC concerning some more of its ideas about the upcoming conversion of the current PSTN to an all-IP network. Most of the presentation is a straightforward primer on how the copper network is structured today and on AT&T’s obligations to offer various parts of the network to competitors as unbundled network elements.

But the kicker comes on the very last page of the presentation, titled “AT&T Proposes to Sell its Retired Copper to CLECs”, where they describe how they would offer abandoned copper to other carriers. You can see the PowerPoint here.

Let me put this into context. In Docket FCC-11-161A1 the FCC kicked off the process of transitioning the PSTN from today’s TDM-based technology to an all-IP network. The FCC is proposing that carriers get together to replace the network that is used today to exchange voice traffic between them.

But AT&T has taken the opportunity to open the discussion of how they might be able to walk away from their old copper. In network terms, AT&T’s copper is their distribution network, meaning the network they use to get from their central offices to customers. The FCC has never expressed any interest in requiring that distribution networks become all-IP. There are still millions of miles of very serviceable copper, and with newer technologies like G.fast, any copper in good condition can deliver pretty decent broadband speeds.

But AT&T keeps using this regulatory process to lobby the FCC to let them start walking away from customers on copper. Last year they said that they wanted to walk away from millions of copper lines and that in rural areas they planned to serve people with cellphones instead. Anybody who has tried to find bars of data on their cellphone in a rural area can tell you that there are huge parts of the geography in this country with no cellular data service. So what AT&T is really telling the FCC is that they want to abandon rural America.

In this presentation AT&T comes along with another idea to try to soften the FCC on this topic. The slide shows that they plan to sell abandoned copper to CLECs, which is their way of assuring the FCC that these customers won’t really be abandoned when AT&T walks away. But this is incredibly cynical and supposes that any CLEC would actually want the old copper. There are a lot of practical issues in what AT&T is proposing that would make this an unattractive business plan for a CLEC:

  • They want to sell the copper but not the customers and not the central office. There are plenty of companies who would be interested in buying the whole shebang, but AT&T wants to keep everything in the office that is served with fiber and just sell the old wires. This means the new CLEC would start on day one with no customers or revenue.
  • AT&T would require a CLEC to operate this copper from a collocation inside their office. This is difficult and costly. Collocation rules require CLECs to file detailed paperwork to make any change inside an AT&T central office. Doing something as simple as changing a power supply requires a detailed application and waiting for AT&T’s approval. If a CLEC bought all of the copper in an office, this paperwork would make it impractical to act competitively.
  • I have no idea how this would be practical for hybrid loops, meaning customers who are served partly on fiber and partly on copper. A good example would be a subdivision where AT&T has built fiber to the front of the subdivision and then jumps onto copper to reach the homes. That kind of copper cannot be accessed from a collocation in a central office; it requires an even more costly arrangement where a CLEC builds an access cabinet next to the existing AT&T cabinet in the field. Because of this, almost nobody competes on hybrid loops today.
  • AT&T wants CLECs to take over the full cost and maintenance of the old copper. That is mind-boggling, particularly in rural areas where the copper has been ignored and is in bad shape. AT&T and Verizon basically walked away from rural areas decades ago: they shut down business offices, got rid of most technicians and stopped making new investments or even spending on normal maintenance. Look at the mess Frontier inherited when they bought Verizon’s West Virginia lines to understand the condition of rural copper.

AT&T is trying every tactic they can think of to make the FCC think it’s a good idea to let them walk away from rural copper. I can promise that if they do, a whole lot of customers are going to go dead and find themselves with no telephone or data service. If AT&T really wants out of the rural business they should sell whole exchanges, as Verizon has done. The idea of keeping the best parts of their offices and shedding only the old copper is one of the more hare-brained ideas I’ve ever heard from AT&T. I can’t imagine any CLEC signing up for this. I really don’t think AT&T expects it to work; they are just hoping that the FCC staff is naïve enough to feel good about abandonment if they think the abandoned customers will have an alternative.

It’s a Gigabit Thing

You hear so much about Google’s gigabit fiber product in Kansas City that it’s easy not to notice that gigabit fiber is popping up all over the place. And when I say gigabit, I mean something that people and small businesses can buy, because there are already thousands of towns and cities that have brought gigabit speeds to schools or to very large businesses. The new trend is to offer it to everybody.

There seems to be a race to call your town a gigabit community, meaning that gigabit speeds are available to all. But you have to be careful in looking at these claims. There are places that have embraced the concept and made a gigabit connection as inexpensive as a former DSL connection. Next is Google, which set a premium price of $70 for a gigabit, and there are now a number of communities matching that price. But there are also communities that have gigabit speeds at pretty high prices of $200 to $400. Finally, there are communities that are bringing a gigabit to a business park or their schools and still claiming the gigabit community designation. So not all claims about being a gigabit community are the same, but at least they all have created some gigabit connections.

What is really going to matter in the long run is whether people actually buy the gigabit. Let’s take Google as an example. They offer a gigabit product for a flat $70 per month and they are still waiving the $300 installation fee. But they also offer an option to buy 5 Mbps for up to seven years for a one-time payment of the $300 installation fee, or $25 per month for a year. I’ve seen news reports where Google has neighborhoods with over 70% penetration, but that doesn’t tell us how many people have paid the premium price and how many jumped on the really cheap deal.

So it’s going to be really interesting to see if companies report gigabit penetration rates. Almost every one of my clients offers a range of data products, and almost universally they find that 70% to 90% of their customers buy the ‘okay’ product rather than the premium product. This is largely a matter of economics; for many households there is a big difference between paying $40 and paying $70. But it also means that a significant number of households will pay the premium price, which I assume means they feel they need the faster speed.

One of the interesting things you see when looking at the list of gigabit communities is that they are mostly small towns. Google is building in large cities and reluctantly dragging along AT&T and the large cable companies. And there are a few middle-sized markets with gigabit service, like Lafayette, LA; Chattanooga, TN; and Omaha, NE. But the vast majority of gigabit communities are smaller places.

We see small municipalities, independent telephone companies, cooperatives and Indian tribes investing in gigabit fiber in some really small towns and remote places. These providers and these communities are generally looking at gigabit fiber as a way to distinguish themselves from surrounding towns to attract or keep jobs. One of the biggest worries in most rural communities today is that their kids are all fleeing the communities to find jobs. Such communities look at this trend and worry that they will dry up and blow away over the next century if they can’t find a way to keep their talented kids home and keep their communities growing and vibrant. And so for many communities, building fiber feels like an investment in their own future.

And I am guessing such communities are right. If there is only one town in a region with affordable gigabit fiber, one has to imagine that over time businesses will migrate to that town, bringing jobs and prosperity. This is very much akin to what happened in the past with other innovations. There were cities and towns that prospered because they were close to a railroad or an interstate highway, or because they were the first in a region to get electricity.

There is literally not a day that goes by anymore without another town announcing that it will soon be a gigabit community. I almost started listing them in this blog and finally realized there are enough of them now that this would become just a list of small towns. But as many as there are, these towns are only a tiny fraction of the rural towns in America, and we have a long, long way to go until we are a gigabit nation.

A New Form of Arbitrage

Ironically, the motivation behind the FCC’s access reform order of a few years ago, FCC 11-161A1, was to end arbitrage. The FCC defined arbitrage to mean that IXCs were trying to change the characteristics of minutes to save money. The classic example would be to define minutes as interstate when interstate rates were cheaper than intrastate rates, and vice versa.

And largely the FCC has succeeded. They required carriers to bring intrastate and interstate rates into parity to eliminate jurisdictional arbitrage. They outlawed phantom traffic, where carriers removed details from call records, making it impossible to bill for the calls. And they outlawed traffic pumping schemes of various types that charged carriers very high rates for some very dubious minutes.

And this was all driven by the fact that the FCC and state commissions were getting inundated with complaints from carriers seeking redress from parties on both sides of these transactions. So one of the motivating factors behind the new rules was to reduce the number of complaints, and it seems to have worked. There are certainly still things that carriers argue about, but to a large degree the arguments over rate arbitrage are over.

All except in one instance. One part of the ruling is actually leading to a whole new rash of disagreements between carriers that will end up at commissions in the form of disputes. In the same docket the FCC reminded us that cellular calls that stay within the same MTA are local calls. MTAs are Major Trading Areas, defined by Rand McNally, that generally outline large circles of common economic interest. Some wireless licenses, such as PCS, were granted using MTAs as the boundaries.

So the FCC reminded us that calls that stay within an MTA are local (and always have been). This means that a call from a south suburb of Chicago to a west suburb will be free for a wireless carrier but may be long distance for a landline carrier.

But the real world treatment of these calls has always been somewhat complicated. If a wireless carrier negotiated an interconnection agreement with a telco or CLEC, then reciprocal compensation was used to pay each other for handing off calls between the two networks. But if no such agreement was ever negotiated, then the telco billed these calls with access charges as if they were long distance. The cellular companies had interconnection agreements with all of the large telcos, but they often did not bother to negotiate them with smaller companies because of the much smaller volumes of traffic.

The FCC access reform order says that cellular intraMTA calls are now to be settled only using reciprocal compensation instead of access. But the order then went on to say that the reciprocal compensation rate for terminating cellular calls is now zero. So cellular companies now get free termination of their intraMTA calls at telcos and CLECs.

This sounds pretty straightforward until you think about the nature of cellular traffic. It is nearly impossible for a telco to know which cellular calls are intraMTA because you can’t tell by the phone numbers. Cellular phones can roam anywhere and further, there is number portability between landline telephone numbers and cellular numbers.

Take the example of where I live in southwest Florida. In my town in the winter the population more than doubles and the area is flooded by cell phones that come from somewhere up north. When those people call a local business, those calls are going to be intraMTA, even though by looking at the two numbers one would think they are interstate.

If the cellular companies were honest about reporting the jurisdiction of these calls there would not be an issue. But the arbitrage comes in when the wireless companies hand these calls off to intermediate long distance companies, who then try to claim that almost all of the calls they send should be terminated for free. That is a classic arbitrage situation: telcos are being asked to charge nothing to terminate any cellular calls, and they don’t have the facts to fight it. They can’t tell if a call from a New York number is intraMTA without knowing the cell site where the call originated, which is something that is not included in the call detail record.
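
To illustrate the problem, here is a minimal Python sketch. The CDR fields, MTA lookup table and per-minute rates are all hypothetical, not an industry schema, but it shows why the terminating telco cannot determine jurisdiction from the phone numbers alone:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class CallDetailRecord:
    """Simplified CDR; field names are illustrative only."""
    calling_number: str
    called_number: str
    originating_cell_mta: Optional[str] = None  # never populated in real CDRs

def npa_mta(number: str) -> str:
    """Hypothetical lookup of the MTA a number's rate center sits in."""
    table = {"212": "MTA-New York", "239": "MTA-Southwest Florida"}
    return table.get(number[:3], "unknown")

def termination_rate(cdr: CallDetailRecord) -> Optional[float]:
    """IntraMTA wireless calls now terminate at $0.00; everything else earns
    access. The catch: without the originating cell site, intraMTA is unknowable."""
    if cdr.originating_cell_mta is None:
        return None  # jurisdiction cannot be determined from the numbers alone
    if cdr.originating_cell_mta == npa_mta(cdr.called_number):
        return 0.0     # intraMTA: reciprocal compensation, now set to zero
    return 0.0007      # illustrative per-minute access rate

# A snowbird's New York cell phone calling a Florida business from Florida:
cdr = CallDetailRecord(calling_number="2125551234", called_number="2395555678")
print(termination_rate(cdr))  # None: the CDR can't prove or disprove intraMTA
```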

And so telcos and CLECs are losing a lot of legitimate access charges for cellular traffic, because the carriers bringing them these calls are asking to terminate them for free and are disputing any charges. This is one area where there were not a lot of fights in the past, but the FCC has created a new arbitrage situation by not thinking through how difficult it is to know the jurisdiction of cellular traffic.

What is Fast?

The FCC is reported to be looking at increasing the definition of broadband. Today broadband is defined as 4 Mbps download and 1 Mbps upload. Those speeds were set just a few years ago, but the way the public uses the Internet has made them obsolete and inadequate as a definition of broadband.

And it shall always be so. I know that the FCC has to establish a definition of broadband to use in some of its programs, but the speeds required by households have been climbing since the introduction of the Internet and are expected to continue climbing into the future. Even if the FCC adopts a faster definition of broadband today, in five years we are likely to be back again discussing how that new speed is too slow to be considered broadband.

The Washington Post reported that there is an internal debate at the FCC over whether the new standard ought to be 10 Mbps or 25 Mbps. There is a very big difference between those two numbers in terms of what they mean for the nation. Why does it matter what speeds the FCC defines as broadband? There are several major reasons:

Who has Broadband? The FCC currently reports that 94% of the US has access to broadband when it’s defined as 4 Mbps download. There are plenty who dispute even that number, since the carriers self-report speeds, and geographic areas where only a few people can get a broadband product are deemed to have it everywhere. But if the speed is increased, especially to something as high as 25 Mbps, then large swaths of the US will no longer be considered to have broadband. The government is going to find this embarrassing, when in fact it would just be recognizing the reality of the marketplace.

You can’t operate a modern family on 4 Mbps. One video streaming session would use every bit of that bandwidth, leaving nothing for anything else. The fact is that the modern family wants multiple simultaneous video streams while also using the Internet for other purposes. And with the upcoming Internet of Things, the demands from multiple devices in the home are going to require significantly more bandwidth.
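
To make that concrete, here is a quick back-of-the-envelope tally in Python. The per-application bitrates are illustrative, not measured, but even modest assumptions blow past a 4 Mbps definition:

```python
# Illustrative, back-of-the-envelope bitrates (Mbps); actual rates vary widely.
household = {
    "HD video stream #1": 5.0,
    "HD video stream #2": 5.0,
    "web browsing": 1.0,
    "VoIP call": 0.1,
    "IoT sensors (aggregate)": 0.2,
}
total = sum(household.values())
print(f"concurrent demand: {total:.1f} Mbps vs. the 4 Mbps definition")
# concurrent demand: 11.3 Mbps vs. the 4 Mbps definition
```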

Federal Grants and Loans. Federal grants and loans for broadband are only given to areas deemed unserved or underserved. Underserved means areas without broadband that meets the federal definition.

Large amounts of federal money have been given out in recent years to build rural broadband facilities that barely met the federal definition of broadband at the time of the grants. For example, a significant number of stimulus grants were awarded to last mile projects that built wireless networks that can’t come close to passing a 25 Mbps download test. Further, money was awarded from the Universal Service Fund to telcos like Frontier to expand DSL that can’t come close to the speeds the government is now considering.

So all of those federal monies will have been spent on broadband upgrades that were barely adequate even as they were being made. Those upgraded or new networks are already obsolete before the ink has dried on the grant paperwork.

Let’s Be Forward Looking. Perhaps the government needs to use a more forward-looking test instead of funding broadband projects that barely meet the minimum definition of broadband. Because every five years we are going to be back in this same place. Let’s say that they raise the standards now to only 10 Mbps. It would be a joke today to spend federal USF or RUS money to build a network that barely meets that new standard.

The major problem is that once the government subsidizes a rural network, it becomes exceedingly unlikely that anybody else will spend more money to compete against that network, even if the first network isn’t very good. The first network will have gotten all of the customers in an area, which is a major disincentive for anybody else to spend money there.

I know the feds think they are helping by handing out billions to build rural ‘broadband’ networks. But if those networks are built at slow speeds that quickly become obsolete, then they will have relegated the people in those areas to the wrong side of the digital divide forever.

If the government raises the new minimum broadband definition to 10 Mbps or 25 Mbps, then they need to set a much higher forward-looking speed standard for networks that get federal funding assistance. If they don’t do that, then every one of those newly constructed networks will fall below the federal definition of broadband in five years when we go through this exercise again.

Beam Me Up Scotty?

I ran across the following technology breakthroughs that could impact our lives in a few years. It seems like there is always something being found that can make things faster, smaller or more efficient.

Nanowires. Probably the most interesting breakthrough is that Vanderbilt University student Junhao Lin has created nanowires that measure only three atoms wide. These wires are made from the same semiconducting materials that are used today to make chips and circuits. The nanowires are strong yet completely flexible, and could be used to build electronic devices that are flexible as well. Imagine a television that you could jam into your pocket and then unroll and watch later.

But the real potential for these nanowires is that they could be used to create 3D circuits. The semiconductor world has been following Moore’s Law, constantly making circuits smaller, but is now bumping into the limits of physics as components approach the size of molecules. Through all of these improvements we have kept to the idea of laying out electronics on a flat surface, the same way we made the first transistor boards. Going to 3D circuits would allow the creation of smaller and more complex circuits in a much smaller space. Think of a complex circuit board the size of the head of a pin. Think of circuits that can finally mimic the structure of the human brain.

Swarming Robots. Researchers at Harvard have been developing swarms of tiny robots that mimic the behavior of termites. They see these robots as a way to build structures in the future. Termites are interesting in that they work collectively to build complex yet strong structures, and some African termites build mounds that are strong and lasting.

The beauty of termite-like robots is that they could build structures without human intervention. Set them loose with a pile of building materials and they could construct buildings or other needed structures anywhere. Think of this as a low-cost way to provide cheap shelter and housing, to build rural cellular repeaters, or to erect structures on Mars that would be ready for a future colony.

Free Cellphone Charging. Engineers are working on a number of different ways to keep our cellphones perpetually charged without having to rely on plugging them into the wall. These same technologies could be used to power the small devices that will make up the Internet of Things. There are several technologies being tried, and it might take a combination of them to make a phone or device that always stays charged.

One field of research has to do with thermoelectricity, which takes advantage of temperature differences. Electrons flow from hot to cold, and it takes only a small temperature difference to drive this, such as the difference between the human body and a smartphone. Electricity can also be generated by piezoelectric means, using materials that generate electricity when compressed or shaken; this might make it possible to charge your phone using the vibrations of a moving vehicle. Finally, electricity can be generated by biomechanical means; it’s possible that the movement of a person walking could be harvested to charge their phone.

Beam Me Up Scotty. A team at Delft University in the Netherlands has recently been able to transmit 100% accurate information about subatomic particles over a distance of three meters. This involves quantum entanglement, meaning that particles that are far apart can be brought into perfect alignment. Their next experiment will be to try this between locations many miles apart.

While there is a long way to go from these early experiments proving that entanglement can be achieved, the technology could eventually make possible the real-time transmission of information over large distances. This could be used to create faster-than-light radios, or to ‘beam’ the specifications needed to construct an exact duplicate of an object.