A Look Back

We take our communications networks for granted today and it’s easy to forget the history that brought us here. I thought today I would highlight a few of the key dates in the history of our industry, and in future blogs I will write more about a few of these important events. I am amazed at how much of this happened during my lifetime. It’s very easy to forget how recently cell phones appeared, for example.

1915 – First Transcontinental Phone Call. Alexander Graham Bell placed that call from New York to Thomas Watson in San Francisco.

1919 – Telephone Switches and Rotary Dial. Rotary dial phones and switches took the operators out of the business of completing local calls. This took many years to implement in some rural areas.

1920 – Frequency Multiplexing. Frequency multiplexing allowed different calls to be carried at different frequencies, meaning that telephone lines could now carry more than one call at the same time.

1947 – North American Numbering Plan. AT&T and Bell Labs came up with the 10-digit numbering plan that we still use today in the US, Canada and much of the Caribbean.

1948 – ‘A Mathematical Theory of Communication’. Claude Shannon of Bell Labs published a paper by this name that founded information theory and outlined how the copper telephone network could be used to transmit data as well as voice.

1951 – Direct Long Distance. Customers could now dial 1+ and make long distance calls without an operator.

1956 – First Transatlantic Cable Call. A call was placed over the first transatlantic undersea telephone cable, which ran from Nova Scotia to Scotland.

1962 – First Digital Transmission. The first call was transmitted from a customer over a T1.

1963 – First Touch-tone Telephone. The familiar keypad began replacing the rotary dial phone.

1968 – First 911 Call. The first 911 call was placed in Haleyville, Alabama.

1973 – First Portable Cell Phone Call. I think this date will surprise younger people.

1975 – First Use of Fiber Optics. The US Navy installed the first fiber optic link aboard the USS Little Rock.

1978 – First Public Test of Cell Phones. 2,000 customers in Chicago got the first trial cell phones. This was followed by another trial in Baltimore in 1980, with the first commercial service launching in 1983.

Mid-1990s – Voice over IP. Commercial providers began offering telephone calls that could be completed over Internet connections.

2000 – 100 Million cell phone subscribers in the US, up from 25,000 in 1984.

How Do You Hire?

CBS News did an interview with Warren Buffett a few years ago in which he talked about how he hires new employees. He said he finds ways to check on their intelligence, energy and integrity. He said that when he is looking for somebody who can help him grow his business he wants a problem solver, and that he wouldn’t hire somebody who lacks any of these three traits.

Buffett tests intelligence by asking applicants to solve tests or puzzles of various types. For energy he finds out about the candidate’s personal habits around eating, exercise, meditation, etc. He also gives them an interesting test: he asks candidates to prepare a ten-minute presentation on a business topic they should be familiar with. He then gives them two minutes to cut it down to a five-minute presentation, and after that two more minutes to cut it down again to a one-minute presentation. Buffett says that integrity is impossible to assess in an interview, so for any finalist candidate for an important position in his company he does a full background check.

The whole premise of this hiring process, according to Buffett, is that you can’t believe resumes. They are obviously only going to pick out the highlights of a career and will not tell you about the negatives. Numerous studies have shown that a significant percentage of resumes include half-truths or outright lies. And he thinks asking questions about resumes is a waste of time because that focuses on what people did in the past instead of on what they might be able to do for you in the future.

All businesses rely on good people to make them operate, and it can be a huge setback to your business if you hire the wrong person for a key role. Most companies have made a bad hire at some point and know how traumatic that can be. So it is vital that you find the right people during the interview process. I think almost anybody will agree that the normal way we hire often doesn’t uncover everything you want to know about a person. We typically sift through resumes and then interview the top few candidates for an hour or two. We don’t often dig very deep past the resume.

I’m not saying that we should all change to Buffett’s method, because you can find many other non-traditional hiring methods that other people swear work just as well. But you really should consider changing your hiring process if it is not finding you the people you need. Finding something that works for you will take some effort on your part. The traits Buffett lists as most important for his company might not be the traits you think are most important. And certainly you have different needs to meet if you are hiring a new CFO, a help desk technician or an installer. You must determine for each job what you most want out of that position and then find a way to test for those traits.

For example, if you are hiring somebody who says they are an expert in something you need, then grill them hard about what they know. If somebody is supposed to have physical or technical skills, then get out of the interview room and into the central office or the field and have them demonstrate what they know. If you need a good writer, have them write something on the spot. One of my own favorite tools is to ask candidates to solve a real-life problem. Every company has recent examples of real problems it has encountered, and asking candidates how they would have solved one will tell you a lot about how they think.

There are a number of companies that offer tools for non-traditional hiring. There are online tools that offer the kinds of tests and puzzles that Buffett administers and many other kinds of assessments. I have one client who makes everybody take a test that provides a detailed profile of their personality traits. They think it’s important to know whether somebody is an introvert or an extrovert, is likely to work better alone or in teams, and similar traits.

But I would caution against administering any test if you don’t feel qualified to interpret the results. I know I would not feel comfortable trying to understand a personality profile since I don’t know how different personality traits affect job performance. As an example, I recently read a university study that found that high-energy introverts often make better salespeople than extroverts. The authors conjectured that it’s because introverts have to try harder to communicate, and they tend to stick to the basics instead of filling quiet time with a lot of empty talk. That sounds reasonable but is counterintuitive to the way most people hire salespeople. If I were hiring a salesperson I would have a hard time doing so using a personality profile, and I think I would quickly find myself second-guessing my own judgment.

To some degree, identifying and hiring the right person is itself a talent; some people are good at it and others are not. I have one friend in the industry who has made numerous poor hires, and my advice to him was to find somebody else to do his hiring for him. So perhaps the first place to look at hiring better is to look at yourself. I suspect that many people are uncomfortable being the sole decision maker in the hiring process, which is why many companies use teams to interview people.

I don’t have any generic advice because this is one area where everybody has different ideas, and I have seen many different approaches be effective. But I also know that just reading resumes and judging people by what they tell you about their resumes is often ineffective and can lead to some terrible hires. So I strongly recommend that you find ways to test people on the traits you think are most important for the job you want to fill. If you take some time to think about that before you leap into the hiring process, you will probably do a better job of finding the right fit for your company.

Should an ISP Offer Fast Upload Speeds?

One question I am often asked is whether clients should offer symmetrical data speeds to residential customers. I’ve noticed lately a number of fiber networks advertising symmetrical speeds, so this option is gaining some market traction. This is not an easy decision to make and there are a lot of different factors to consider:

The Competition. Most fiber networks are competing against cable networks, and the HFC technology on those networks does not allow for very fast uploading. The number one complaint that cable companies get about upload speeds is from gamers who want fast low-latency upload paths. But they say that they get very few other complaints from residential customers about this issue.

So this leads me to ask whether residential customers care as much about upload speeds as they do about download speeds. I know that today households use the bulk of their download capability to view video, and there are very few households that want to upload video in the same manner or volume. One of the questions I ask clients is whether they are just trying to prove that their network is faster, because heavily promoting something that most customers don’t care about feels somewhat gimmicky.

Practical. At the residential level there are not many users who have enough legal content to justify a fast upload. There are legitimate uses for uploading, but not nearly as many as there are for downloading. The normal uses include gaming, sending large files, sharing videos and pictures with friends and family, and backing up data to the cloud. But these uses normally do not generate nearly as much traffic as the download bandwidth most households use to watch video. And so one must ask the practical question of whether offering symmetrical bandwidth is just a marketing ploy, since customers are not expected to upload nearly as much as they download.

Cost. Another consideration is cost, or the lack of it. A lot of ISPs buy symmetrical data pipes for their connection to the Internet. To the extent that they download a lot more data than they upload, one can almost look at the excess headroom on the upload side as free. They are already paying for that bandwidth, and there is often no incremental cost to an ISP when customers upload more, except at the point where upload becomes greater than download.

Technical. One must ask whether allowing symmetrical bandwidth will increase demand for uploading over time. We know that offering faster download speeds induces homes to watch more video, but it’s not clear whether the same is true in the upload direction. If uploading is stimulated over time then there are network issues to consider. It requires a more robust distribution network to support significant traffic in both directions. For example, most fiber networks are built around nodes of some sort, and the fiber connection to those nodes needs to be larger to support two-way traffic than it would if the traffic were almost entirely in the download direction.

Bad Behavior. One of the main arguments against offering fast upload speeds is that it can promote bad behavior or can draw attention from those with malicious intents. For example, fast upload speeds might promote more use of file sharing, and most of the content shared on file sharing sites is copyrighted and being illegally shared.

There has always been the concern that customers might set up servers on fast connections that can upload things quickly. And one of the few things that requires a fast upstream connection is porn. So I’ve always found it likely that having fast upload connections is going to attract people who want to operate porn servers.

But the real concern is that fast networks can become targets for those with malicious intent. Historically hackers took over computers to generate spam. That still happens today, but there are other more malicious reasons for hackers to take over computers. For instance, hackers who launch denial of service attacks do so by taking over many computers and directing them to send messages to a target simultaneously. Computers are also being hijacked to do things like mine bitcoins, which requires frequent communication outward.

One would think that a hacker would find a computer sitting on a network that allows 100 Mbps or 1 Gbps uploads to be worth a whole lot more than a computer on a slower network, and so they might well be targeting customers on these networks.

What this all means to me is that if you offer fast upload connections you ought to be prepared to monitor customers to know which ones upload a lot. If such customers are operating server businesses they can be directed to business products, or you can help them find and remove malware if their computers have been hacked. But I find the idea of allowing fast uploads without monitoring to be dangerous for both the ISP and its customers.
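
As a sketch of what that monitoring might look like, here is a minimal example that flags heavy uploaders from per-subscriber byte counters. The CSV layout, field names and thresholds are hypothetical; in practice the counters would come from something like NetFlow/IPFIX collection or RADIUS accounting.

```python
import csv

# Hypothetical thresholds: flag anyone uploading more than 200 GB in a month,
# or anyone who uploads more than they download.
UPLOAD_GB_THRESHOLD = 200
UP_DOWN_RATIO_THRESHOLD = 1.0

def flag_heavy_uploaders(path):
    """Return (subscriber_id, GB uploaded, up/down ratio) for subscribers worth a closer look."""
    flagged = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            up_gb = int(row["upload_bytes"]) / 1e9
            down_gb = int(row["download_bytes"]) / 1e9
            ratio = up_gb / down_gb if down_gb else float("inf")
            if up_gb > UPLOAD_GB_THRESHOLD or ratio > UP_DOWN_RATIO_THRESHOLD:
                flagged.append((row["subscriber_id"], round(up_gb, 1), round(ratio, 2)))
    return flagged

if __name__ == "__main__":
    # Hypothetical monthly export with columns: subscriber_id, upload_bytes, download_bytes
    for sub_id, up_gb, ratio in flag_heavy_uploaders("subscriber_usage.csv"):
        print(f"{sub_id}: {up_gb} GB uploaded this month, up/down ratio {ratio}")
```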

Cool New Stuff – Computing

As I do once in a while on Fridays, I am going to talk about some of the coolest new technology I’ve read about recently; this time both items relate to new computers.

First is the possibility of a desktop supercomputer in a few years. A company called Optalysys says it will soon release a first-generation chip set and desktop-size computer that will run at 346 gigaflops. A flop is a floating-point operation, and flops measure how many of these operations a computer can perform per second. A gigaflop is 10^9 operations per second, a petaflop is 10^15 and an exaflop is 10^18. The fastest supercomputer today is the Tianhe-2, built by a Chinese university, which operates at 34 petaflops, obviously much faster than this first desktop machine.

The computer works by beaming low-intensity lasers through layers of liquid crystal. Optalysys says that in upcoming generations they will have a machine that can do 9 petaflops by 2017, and they have a goal of a machine that will do 17.1 exaflops (17,100 petaflops) by 2020. The 2017 version would run at roughly a quarter of the speed of today’s fastest supercomputer and yet be far smaller and use far less power. This would make it possible for many more companies and universities to own a supercomputer. And if they really can achieve their goal by 2020 it means another big leap forward in supercomputing power, since that machine would be roughly 500 times faster than the Chinese machine today. This is exciting news, because in the future there are going to be mountains of data to be analyzed and it’s going to take widespread, affordable supercomputing to keep up with the demands of big data.
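
To put the company’s roadmap in perspective against the Tianhe-2 figure above, here is a quick back-of-the-envelope comparison. This is only a sketch; the Optalysys numbers are the company’s own claims as cited above.

```python
GIGA, PETA, EXA = 1e9, 1e15, 1e18

tianhe_2 = 34 * PETA              # fastest supercomputer cited above
optalysys_gen1 = 346 * GIGA       # claimed first-generation desktop machine
optalysys_2017 = 9 * PETA         # company's claimed 2017 target
optalysys_2020 = 17.1 * EXA       # company's claimed 2020 target

print(f"Gen 1 vs Tianhe-2: {optalysys_gen1 / tianhe_2:.4%} of its speed")
print(f"2017 vs Tianhe-2:  {optalysys_2017 / tianhe_2:.0%} of its speed")
print(f"2020 vs Tianhe-2:  {optalysys_2020 / tianhe_2:,.0f}x faster")
```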

In a somewhat related but very different approach, IBM has announced that it has developed a chip that mimics the way the human brain works. The chip, which they call TrueNorth, contains the equivalent of one million neurons and 256 million synapses.

The IBM chip is a totally different approach to computing. The human brain stores memories and does computing within the same neural network, and this chip does the same thing. IBM has been able to create what they call spiking neurons within the chip, which means the chip can store data as a pattern of pulses much the same way the brain does. This is a fundamentally different approach from traditional computers, which use what is called von Neumann computing and separate data from computation. One of the problems with traditional computing is that data has to be moved back and forth to be processed, meaning that normal computers don’t really work in real time and there are often data bottlenecks.

The IBM TrueNorth chip, even in this first generation, is able to process things in real time. Early work on the chip has shown that it can do things like recognize images in real time both faster and with far less power than traditional computers. IBM doesn’t claim that this particular chip is ready to put into products; they see it as the first prototype for testing this new method of computing. It’s even possible that this might be a dead end in terms of commercial applications, although IBM already sees possibilities for this kind of computer in real-time and graphics applications.

This chip was designed as part of a DARPA program called SyNAPSE, short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, which is an effort to create brain-like hardware. The end game of that program is to eventually design a computer that can learn, and this first IBM chip is a long way from that end game. And of course, anybody who has seen the Terminator movies knows that DARPA is shooting to develop a benign version of Skynet!

How Safe are your Customer Gateways?

It seems like every day I read something that describes another part of the network that is vulnerable to hackers. Recently, in a speech at the DefCon security conference, Shahar Tal of Check Point Software Technologies said that a large number of residential gateways provided by ISPs are subject to hacking.

Specifically, he pointed out gateways that use the TR-069 protocol, also known as CWMP (CPE WAN Management Protocol). According to scans done by Check Point there are 147 million devices in the world using the TR-069 protocol, and 70% of them (103 million) are home gateways. The port used by TR-069 (TCP 7547) is the second most commonly open port found on the Internet, behind only port 80 (HTTP).

ISPs typically communicate with their customer gateways using an ACS (Auto Configuration Server) and associated software. This gives the ISP the ability to monitor the gateway, upgrade its firmware and troubleshoot customer problems; it’s the tool an ISP uses to reset somebody’s modem when it’s not working. Tal says that the ACS can be the hacker’s point of entry into the home, since a hacker who can emulate it can gain control of the gateway.

Tal listed a number of weaknesses of TR-069 gateways. First, the links between a gateway and the ACS are more often unencrypted than not, leaving them open for a hacker to read. Second, anybody who can emulate the ACS can take control of the gateway. That would give the hacker access to anything directly connected to the gateway, including computers, smartphones, tablets, smart devices, etc.

This all matters because recently there have been a number of different kinds of attacks against home gateways. Years ago home computers were used mostly to generate spam, but the bad guys are doing far more malicious things with hijacked computers these days including:

  • Hijacking the DNS so that a hacker can see bank transactions.
  • Hijacking the DNS to send false hits to web sites to collect click fraud.
  • Using the router and infected computers to mine for bitcoins.
  • Using the home computing power to launch denial of service attacks.

If you use gateways based on this protocol there are steps you can take to make sure your customers are safe. First, you need to query your ACS software provider about their security measures; Tal says that many of these systems have not put much emphasis on security. But as an ISP, probably the most important thing you can do is to encrypt all management transactions between you and your customers’ gateways.
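
As a starting point, here is a minimal sketch of two checks an ISP could run itself. TR-069 sessions are carried over HTTP or HTTPS, and the gateway’s connection-request listener conventionally sits on TCP port 7547, so you can at least verify that your ACS URL uses HTTPS and see whether a given gateway leaves that port exposed to the open Internet. The ACS URL and gateway address below are placeholders.

```python
import socket
from urllib.parse import urlparse

ACS_URL = "https://acs.example-isp.net:7547/cwmp"   # placeholder ACS URL
SAMPLE_GATEWAY = "203.0.113.25"                     # placeholder customer gateway IP
CWMP_PORT = 7547                                    # conventional TR-069 connection-request port

def acs_uses_tls(url):
    """The single most important check: CWMP sessions should run over HTTPS."""
    return urlparse(url).scheme == "https"

def port_open(host, port, timeout=3):
    """Rough check for a TCP listener reachable from wherever this script runs."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

if __name__ == "__main__":
    print("ACS URL uses HTTPS:", acs_uses_tls(ACS_URL))
    print(f"Gateway {SAMPLE_GATEWAY} answers on port {CWMP_PORT}:",
          port_open(SAMPLE_GATEWAY, CWMP_PORT))
```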

For now it appears that gateways managed through TR-069 are more vulnerable than devices managed over port 80 (HTTP). This is mostly because port 80 has been an industry standard for a long time, so a lot of effort has gone into securing connections that use it. However, there are still threats against port 80; for example, the Code Red and Nimda worms and their close relatives are still being used to launch attacks against it.

In the end, as an ISP you are responsible for keeping your customers safe from these kinds of problems. Certainly failure to do so will increase their risk of being hacked and suffering financial losses. But you are also at risk, since the various malicious uses that can come from these hacks can generate a lot of traffic on your network. So if you deploy a gateway that uses TR-069 you should ask the right questions of the manufacturer and your software vendors to see what security tools they have in place, and then you need to use them. Too many ISPs don’t fully use all of the tools that come with the software and hardware they purchase.

Remember that this is one part of your network that customers rely upon you to keep safe. Generally the gateway is set up so that a customer can’t even see the settings inside; it’s most typical for this to be controlled entirely by the ISP. So it is incumbent upon you not to bring hackers into your customers’ homes.

Latest on the Internet of Things – Part 2, The Market

Yesterday I wrote about the security issues that are present in the first generation of devices that can be classified as part of the Internet of Things. Clearly the manufacturers of such devices need to address security before some widespread hacking disaster sets the whole industry on its ear.

Today I want to talk about the public’s perception of the IoT. Last week eMarketer released the results of a survey that looked at how the public perceives the Internet of Things. Here are some of the key results:

  • Only 15% of homes currently own a smart home device.
  • And half of those who don’t own a smart device say they are not interested in doing so.
  • 73% of respondents were not familiar with the phrase “Internet of Things”.
  • 19% of households are very interested in smart devices and 28% are somewhat interested.
  • There were only a handful of types of devices that were of interest to more than 20% of households: smart cars – 39%; smart home appliances – 34%; heart monitors – 23%; pet monitors – 22%; fitness devices – 22%; and child monitors – 20%.

The survey highlights the short-term issues for any carrier that thinks they are going to make a fortune with the IoT. Like many new technology trends, this one is likely to take a while to take hold in the average house. Industry experts think the long-term trend of the IoT has great promise. In a Pew Research Center survey that I discussed a few weeks ago, 83% of industry technology experts thought that the IoT would have “widespread and beneficial effects on the everyday lives of the public by 2025”.

I know that carriers are all hoping for that one great new product that will sweep through their customer base and get the same kind of penetration they enjoyed with triple-play services. But this survey result, and the early forays by cable companies and others into home automation and related product lines, show that IoT is not going to be that product, at least not for now.

This is not to say that carriers shouldn’t consider getting into the IoT business. Let’s face it, the average homeowner is going to be totally intimidated by having more than a couple of smart devices in their home. What they will want is for them to all work together seamlessly so that they don’t have to log in and out of different systems just to make the house ready when they want to take a trip. And eMarketer warned that one thing that concerned households was the prospect of having to ‘reboot’ their entire home when things aren’t working right, or of getting a virus that would goof up their home.

And as I mentioned yesterday, households are going to want to feel safe with smart devices, so if you are going to get into the business it is mandatory for you to find smart products that don’t have the kinds of security flaws that I discussed yesterday.

The eMarketer report predicts that more homes will embrace IoT as more name-brand vendors like “Apple, Google . . . The Home Depot, Best Buy and Staples” get into the business. That may be so, but one would expect most such platforms to be somewhat generic by definition. If a carrier wants to find a permanent niche in the IoT market they are going to need to distinguish themselves from the pack by providing the integration and customization that gives each customer what they most want from the IoT experience. Anybody will be able to buy a box full of monitors from one of those big companies, but a lot of people are going to want somebody they trust to come to their home and make it all work.

But the cautionary tale from this survey is that IoT as a product line is going to grow slowly over time. It’s a product today where getting a 10% customer penetration would be a huge success. So I caution carriers to have realistic expectations. There is going to be a lot of market competition from those big companies named above and to be successful you are going to have to stress service and security as reasons to use you instead of the big names.

Latest on the Internet of Things – Part 1, Security

There has been some negative press recently about the Internet of Things, including both news about IoT security and some consumer research of interest. Today’s blog will discuss the latest issues having to do with security, and tomorrow I will look at issues having to do with marketing and the public perception of IoT.

Recently Fortify, the security division of Hewlett-Packard, analyzed the ten most popular consumer devices currently considered part of the IoT. They didn’t name any specific manufacturers but did say that they looked at one each of “TVs, webcams, home thermostats, remote power outlets, sprinkler controllers, hubs for controlling multiple devices, door locks, home alarms, scales and garage door openers”. According to Fortify there was an average of 25 security weaknesses in each device they analyzed.

All of the devices included a smartphone application to control them, and the weaknesses are pretty glaring. Eight of the ten devices had very weak passwords. Nine of the ten gathered some personal information about the owner such as an email address, home address or user name. Seven of the ten had no encryption and sent out data in a raw format. Six of the devices didn’t encrypt updates, meaning that a hacker could fake an update and take over the device.

This is not much of a shock, and the lack of IoT security has been reported before. It’s been clear that most manufacturers of these kinds of devices are not providing the same kind of security that is standard for computers and smartphones. But this is the first time that anybody has looked at the most popular devices in such detail and documented all of the weaknesses they found.

It’s fairly obvious that these kinds of weaknesses have to be fixed before the IoT becomes an everyday thing in households. Otherwise, a day will come when there is some spectacular security failure of an IoT device that affects many households, and the whole industry will be set back a step.

It’s obvious that security really matters for some of these devices. If things like door locks, garage door openers and security systems can be easily hacked due to poor device security then the whole reason for buying such devices has been negated. I read last week that hackers have figured out how to hack into smart car locks and push-button car starters and that a car using those devices is no longer safe from being stolen. For a few years these devices gave some added protection against theft, but now they are perhaps easier to steal than a traditional vehicle and certainly easier to steal than a car using a physical anti-theft device like the Club.

I know that I am not going to be very quick to adopt IoT devices that might allow entry into my home. I don’t really need the convenience that might come from having my front door unlock as I pull into the driveway if this same feature means that a smart thief can achieve easy entry to my home.

So aside from home security devices, what’s the danger of having less secure devices like smart lights, a smart stove or a smart sprinkler system? There is always prank hacking, like disabling your lights or making your oven run all day at high heat. But the real danger is that access to such devices might give a hacker access to everything else in your house.

Most of us use pretty good virus protection and other tools to lower the risk of somebody hacking into our computers to get access to personal information and banking systems. But what if a hacker can gain access to your computers through the back door of a smart light bulb or a smart refrigerator? This is not a far-fetched scenario; it was reported that the Target hack that stole millions of credit card numbers began with credentials taken from the company’s heating and ventilation contractor.

It’s obvious that these manufacturers are taking the fast path to market rather than taking the time to implement good security. But they must realize that they will not be forgiven if their device is the cause of multiple data breaches, and that in the worst case their whole product line could dry up overnight. One would hope that efforts like the one just undertaken by HP will wake up the device makers. That said, they face a formidable task, since fixing an average of 25 security flaws per device is a tall order.

Changes to the E-rate Program

The FCC recently revised the rules for the E-Rate program, which provides subsidies for communications needs at schools and libraries. They made a lot of changes, and the rules for filing this year are significantly different from what you may have done in the past. I’ve listed below the changes that will most affect carriers, and you should become familiar with the revised rules if you participate in the program. Here are some of the key changes from a carrier perspective:

  • Extra Funding. There is an additional $1 billion per year set aside for the next two years for what the FCC has called Internal Connections. This means money to bring high-speed Internet from the wiring closet to the rest of the school. This might be new wiring, WiFi or other technologies that distribute high-speed Internet within a school.
  • Last Mile Connections. It’s also possible to get funding for what they call WAN / Last-Mile connectivity. This would be fiber built to connect a school to a larger network such as one for a whole school district.
  • Stressing High-Speed Connections. The target set by the FCC is that a school should have at least 100 Mbps per 1,000 students and staff in the short run and 1 Gbps access in the long run (see the sketch after this list for how that target scales with school size). It is going to be harder to get funding for older, slower connections, even for small or poor schools. As a carrier you need to be planning how to get connections that meet these requirements to schools if you want to maintain E-rate funding.
  • Things No Longer Funded. One of the ways the FCC will fund the expanded emphasis on higher bandwidth is by not funding other items. The fund is going to focus entirely for the next few years on funding things that promote high-speed connections, so they will no longer fund “Circuit Cards/Components; Interfaces, Gateways, Antennas; Servers; Software; Storage Devices; Telephone Components, Video Components, as well as voice over IP or video over IP components, and the components, such as virtual private networks, that are listed under Data Protection other than firewalls and uninterruptible power supply/battery backup. The FCC will also eliminate E-rate support for e-mail, web hosting, and voicemail beginning in funding year 2015”.
  • Combining Schools and Libraries. For the first time it will be possible to combine the funding for a school and library that are served by the same connection / network.
  • Eliminating Competitive Bidding for Low-Price Bandwidth. A school does not need to go to competitive bid if they can find a connection of at least 100 Mbps that costs $3,600 per year (or $300 per month) or less.
  • Eliminating a Technology Plan. There is no Technology Plan now required for applying for Internal Connections (in-school wiring) or for providing WAN connections.
  • Simplifying Multi-Year Contracts. Subsequent years after the first year of a multi-year contract will require less paperwork and have a streamlined filing process.
  • Simplifying the Discount Calculation. The discount can now be calculated on a per-school-district basis rather than per school within the district. The FCC adopted the Census definition, which defines urban areas as the densely settled core of census tracts or blocks that meet minimum population requirements (50,000 people or more), along with adjacent territories of at least 2,500 people that link to the densely settled core. “Rural” encompasses all population, housing, and territory not included within an urban area. Any school district or library system that has a majority of schools or libraries in a rural area meeting the statutory definition of eligibility for E-rate support will qualify for the additional rural discount.
  • Requiring Electronic Filings. All filings will need to be electronic, phased in by 2017.
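
Below is a minimal sketch of how the short-term bandwidth target scales with school size. The 100 Mbps per 1,000 students/staff and 1 Gbps figures come from the FCC targets listed above; the school headcounts are made-up examples.

```python
def short_term_target_mbps(students_and_staff):
    # FCC short-term target: at least 100 Mbps per 1,000 students and staff
    return 100 * students_and_staff / 1000

schools = [("Small rural school", 250),
           ("County high school", 1200),
           ("Consolidated district", 4500)]

for name, headcount in schools:
    print(f"{name} ({headcount} students/staff): "
          f"at least {short_term_target_mbps(headcount):.0f} Mbps now, 1 Gbps long term")
```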

These are a lot of changes to a fairly complex filing process. CCG can help you navigate through these changes. If you have questions or need assistance please contact Terri Firestein of CCG at tfireccg@myactv.net.

How Should the US Define Broadband?

The FCC just released the Tenth Broadband Progress Notice of Inquiry. As one would suppose from the title, there have been nine others of these in the past. This inquiry is particularly significant because the FCC is asking if it’s time to raise the FCC’s definition of broadband.

The quick and glib answer is that of course they should. After all, the current definition of broadband is 4 Mbps download and 1 Mbps upload. I think almost everybody will agree that this amount of bandwidth is no longer adequate for an average family. But the question the FCC is wrestling with is how high they should raise it.

There are several consequences of raising the definition of broadband that have to be considered. First is the purely political one. For example, if they were to raise it to 25 Mbps download, then they would be declaring that most of rural America doesn’t have broadband. There are numerous rural towns in the US served by DSL or by DOCSIS 1.0 cable modems with speeds of 6 Mbps download or slower. Even if the FCC sets the new definition at 10 Mbps they are going to be declaring that big portions of the country don’t have broadband.

And there are consequences of that definition beyond the sheer embarrassment of the country openly recognizing that the rural parts of America have slow connectivity. The various parts of the federal government use the definition of what is broadband when awarding grants and other monies to areas that need to get faster broadband. Today, with the definition set at 4 Mbps those monies are tending to go to very rural areas where there is no real broadband. If the definition is raised enough those monies could instead go to the rural county seats that don’t have very good broadband. And that might mean that the people with zero broadband might never get served, at least through the help of federal grants.

The next consideration is how this affects various technologies. I remember when the FCC first set the definition of broadband at 3 Mbps download and 768 Kbps upload. At that time many thought that they intended to shovel a lot of money to cellular companies to serve broadband in rural areas. But when we start talking about setting the definition of broadband at 10 Mbps download or faster, then a number of technologies start falling off the list as being able to support broadband.

For example, in rural areas it is exceedingly hard, if not impossible, to build a wireless network, either cellular or using unlicensed spectrum, that can serve every customer in a wide area with speeds of 10 Mbps. Customers close to towers can get fast speeds, but for all wireless technologies the speed drops quickly with distance from the tower. It is also exceedingly hard to use DSL to bring broadband to rural areas with a target of 10 Mbps. The speed of DSL also drops quickly with distance, which is why there is not much DSL coverage in rural areas today.

And when you start talking about 25 Mbps as the definition of broadband then the only two technologies that can reliably deliver that are fiber and coaxial cable networks. Both are very expensive to build to areas that don’t have them, and one wonders what the consequences would be of setting the definition that high.

The one thing I can tell you from practical experience is that 10 Mbps is not fast enough for many families like mine. We happen to be cord cutters and we thus get all of our entertainment from the web. It is not unusual to have 3 – 4 devices in our house watching video, while we also surf the web, do our daily data backups, etc. I had a 10 Mbps connection that was totally inadequate for us and am lucky enough to live where I could upgrade to a 50 Mbps cable modem service that works well for us.
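
To make that concrete, here is a rough tally of a typical evening in a cord-cutting household like mine. The per-activity bandwidth figures are my own assumptions about typical streaming-era usage, not measurements.

```python
# Assumed per-activity bandwidth in Mbps (rough streaming-era figures, not measurements)
evening_load_mbps = {
    "HD video stream #1": 5.0,
    "HD video stream #2": 5.0,
    "SD video stream on a tablet": 2.0,
    "Web browsing": 1.0,
    "Daily data backup running": 2.0,
}

total = sum(evening_load_mbps.values())
print(f"Concurrent demand: about {total:.0f} Mbps")
for cap in (10, 50):
    # leave roughly 20% headroom so streams don't buffer during bursts
    print(f"Fits comfortably in a {cap} Mbps connection? {total < 0.8 * cap}")
```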

So I don’t envy the FCC this decision. They are going to get criticized no matter what they do. If they just nudge the definition up a bit, say to 6 or 7 Mbps, then they are going to be rightfully criticized for not promoting real broadband. If they set it at 25 Mbps then all of the companies that deploy technologies that can’t go that fast will be screaming bloody murder. We know this because the FCC recently used 25 Mbps as the minimum speed in order to qualify for $75 million of their experimental grants. That speed locked out a whole lot of companies that were hoping to apply for those grants. They might not have a lot of choice but to set it at something like 10 Mbps as a compromise. This frankly is still quite a wimpy goal for a Commission that approved the National Broadband Plan a few years ago that talked about promoting gigabit speeds. But it would be progress in the right direction and maybe by the Twentieth Broadband Inquiry we will be discussing real broadband.

Changes to Unlicensed Spectrum

Earlier this year, in Docket ET No. 13-49, the FCC made a number of changes to the rules for the unlicensed 5 GHz band. The docket was intended to unify the rules for using the 5 GHz spectrum; the FCC had made this spectrum available over time in several different chunks and had set different rules for the use of each portion. The FCC was also concerned about interference between parts of the spectrum and Doppler weather radar as well as several government uses of spectrum. Spectrum rules are complex and I don’t want to spend the blog describing the changes in detail. But in the end, the FCC made some changes that wireless ISPs (WISPs) claim are going to kill the spectrum for rural use.

Comments filed by WISPA, the national association for WISPs, claim that the changes the FCC is making to the 5725 – 5850 MHz band are going to devastate rural data delivery from WISPs. The FCC is mandating that new equipment going forward use lower power and also use better filters to reduce out-of-band emissions. And WISPA is correct about what that means: if you understand the physics of wireless spectrum, each of those changes is going to reduce both the distance and the bandwidth that can be achieved with this slice of spectrum. I didn’t get out my calculator and spend an hour doing the math, but WISPA’s claim that this is going to reduce the effective distance for the 5 GHz band to about 3 miles seems like a reasonable estimate, one that is also supported by several manufacturers of the equipment.
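
To see why lower power translates into shorter reach, here is a simple free-space link-budget sketch. The transmit power levels and receiver sensitivity below are illustrative assumptions, not the actual old and new FCC limits, and real terrain and antenna gains change the absolute distances; the direction of the effect is what matters, since in free space every 6 dB of lost power cuts the achievable range roughly in half.

```python
import math

def fspl_db(distance_km, freq_mhz):
    """Free-space path loss in dB for a given distance and frequency."""
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

def max_range_km(eirp_dbm, rx_sensitivity_dbm, freq_mhz):
    """Largest distance at which the received signal still clears the sensitivity floor."""
    allowed_path_loss = eirp_dbm - rx_sensitivity_dbm
    # Invert the free-space path loss formula to solve for distance
    return 10 ** ((allowed_path_loss - 32.44 - 20 * math.log10(freq_mhz)) / 20)

FREQ_MHZ = 5800      # middle of the 5725-5850 MHz band
RX_SENS_DBM = -80    # receive level needed for a usable data rate (illustrative)

for label, eirp_dbm in [("higher power (illustrative old limit)", 36),
                        ("lower power (illustrative new limit)", 30)]:
    km = max_range_km(eirp_dbm, RX_SENS_DBM, FREQ_MHZ)
    print(f"{label}: EIRP {eirp_dbm} dBm -> about {km:.1f} km "
          f"({km * 0.621:.1f} miles), path loss {fspl_db(km, FREQ_MHZ):.0f} dB")
```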

Some background might be useful here. WISPs can use three different bands of spectrum for delivering wireless data: 900 MHz, 2.4 GHz and 5 GHz. The two lower bands generally get congested fairly easily because there are a lot of other commercial applications using them. Plus, those two bands can’t go very far and still deliver significant bandwidth, so to the extent WISPs use them it is for customers residing closer to their towers. They save the 5 GHz spectrum for customers who are farther away and for backhaul between towers. The piece of spectrum in question can be used to deliver a few Mbps to a customer up to ten miles from a transmitter. If you are a rural customer, getting 2 – 4 Mbps from a WISP still beats the heck out of dial-up.

Customers closer to a WISP transmitter can get decent bandwidth. About the fastest speed I have ever witnessed from a WISP was 30 Mbps, but it’s much more typical for customers within a reasonable distance from a tower to get something like 10 Mbps. That is a decent bandwidth product in today’s rural environment, although one has to wonder what that is going to feel like a decade from now.

Readers of this blog probably know that I spent ten years living in the Virgin Islands, where my data connection came from a WISP. One thing I saw there is the short lifespan of the wireless CPE at the home. In the ten years I was there I had three different receivers installed (one right at the end), which means my CPE lasted around five years. And the Virgin Islands is not a harsh environment, since it’s around 85 degrees every day, unlike much of the US with its freezing winters and hot summers. So the average WISP will need to phase in the new CPE to all customers over the next five to seven years as the old customer CPE dies, and they will need to use the new equipment for new customers.

That will be devastating to a WISP business plan. The manufacturers say that the new receivers may cost as much as $300 more to comply with the filtering requirements. I take that estimate with a grain of salt, but no doubt the equipment is going to cost more. But the real issue is the reduced distance and reduced bandwidth. Many, but not all, WISPs operate on very tight margins. They don’t have a lot of cash reserves and they rely on cash flow from customers to eke out enough extra cash to keep growing. They basically grow their businesses over time by rolling profits back into the business.

If these changes mean that WISPs can’t serve customers more than 3 miles from an existing antenna, there is a good chance that a lot of them are going to fail. They will be faced with either building a lot of new antennas to create smaller 3-mile circles or else they will have to abandon customers more than three miles away.

Obviously spectrum is in the purview of the FCC, and some of the reasons they are changing this spectrum are surely valid. But in this case they created an entire industry that relied upon the higher power levels of the gear to justify a business plan, and now they want to take that away. This is not going to be a good change for rural customers, since over time many of them are going to lose their only option for broadband. While it is important to be sensitive to interference issues, one has to wonder how much interference there is out in the farm areas where these networks have been deployed. The impacts of this change that WISPA is warning about will be a step backward for rural America and rural bandwidth.