Cool New Stuff – Computing

As I do once in a while on Fridays, I am going to talk about some of the coolest new technology I’ve read about recently; both items relate to new kinds of computers.

First is the possibility of a desktop supercomputer in a few years. A company called Optalysys says it will soon release a first-generation chip set and desktop-sized computer that will be able to run at a speed of 346 gigaflops. A flop is a measure of the floating-point operations per second a computer can perform. A gigaflop is 10^9 operations per second, a petaflop is 10^15 and an exaflop is 10^18. The fastest supercomputer today is the Tianhe-2, built by a Chinese university, which operates at 34 petaflops, obviously much faster than this first desktop machine.

The computer works by beaming low-intensity lasers through layers of liquid crystal. Optalysys says that upcoming generations will reach 9 petaflops by 2017, and they have a goal of a machine that will do 17.1 exaflops (17,100 petaflops) by 2020. The 2017 version would run at roughly a quarter of the speed of today’s fastest supercomputer, yet be far smaller and use far less power. This would make it possible for many more companies and universities to own a supercomputer. And if they really can achieve their goal by 2020 it means another big leap forward in supercomputing power, since that machine would be roughly 500 times faster than the Chinese machine today. This is exciting news, because in the future there are going to be mountains of data to analyze, and it’s going to take plentiful, affordable supercomputing to keep up with the demands of big data.
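To put those numbers side by side, here is a quick back-of-the-envelope comparison in Python, using only the figures quoted above (these are the vendor’s claims, not independent benchmarks):

```python
# Rough comparison of the Optalysys roadmap against Tianhe-2 (34 petaflops),
# using only the figures quoted in the text above.
GIGA, PETA, EXA = 1e9, 1e15, 1e18

tianhe_2 = 34 * PETA          # fastest supercomputer cited above
optalysys_2015 = 346 * GIGA   # first-generation desktop target
optalysys_2017 = 9 * PETA     # 2017 target
optalysys_2020 = 17.1 * EXA   # 2020 goal

print(f"First-gen box vs Tianhe-2: {optalysys_2015 / tianhe_2:.6f}x")  # ~0.00001x
print(f"2017 box vs Tianhe-2:      {optalysys_2017 / tianhe_2:.2f}x")  # ~0.26x
print(f"2020 goal vs Tianhe-2:     {optalysys_2020 / tianhe_2:.0f}x")  # ~503x
```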

In a somewhat related but very different approach, IBM has announced that it has developed a chip that mimics the way the human brain works. The chip, which they call TrueNorth, contains the equivalent of one million neurons and 256 million synapses.

The IBM chip is a totally different approach to computing. The human brain stores memories and does computing within the same neural network, and this chip does the same thing. IBM has been able to create what they call spiking neurons within the chip, which means that the chip can store data as a pattern of pulses, much the same way the brain does. This is a fundamentally different approach from traditional computers, which use what is called von Neumann computing and separate data from computing. One of the problems with traditional computing is that data has to be moved back and forth to be processed, meaning that normal computers don’t do anything in real time and there are often data bottlenecks.
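For readers who like to see the concept in code, here is a minimal sketch of a generic “leaky integrate-and-fire” spiking neuron, the textbook idea behind this style of computing. This is not IBM’s TrueNorth design, just an illustration of how memory (the accumulated potential) and computation live in the same place:

```python
# A minimal leaky integrate-and-fire neuron, a generic textbook model used here
# only to illustrate the idea of "spiking" computation; not IBM's TrueNorth design.

def run_neuron(input_spikes, weight=0.6, leak=0.9, threshold=1.0):
    """Accumulate weighted input spikes; emit a spike when the potential
    crosses the threshold, then reset. The state (membrane potential) and
    the computation live in the same place, unlike a von Neumann machine."""
    potential = 0.0
    output = []
    for spike in input_spikes:
        potential = potential * leak + weight * spike  # leak, then integrate
        if potential >= threshold:
            output.append(1)      # fire
            potential = 0.0       # reset after firing
        else:
            output.append(0)
    return output

print(run_neuron([1, 0, 1, 1, 0, 1, 1, 1]))   # -> [0, 0, 1, 0, 0, 1, 0, 1]
```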

The IBM TrueNorth chip, even in this first generation, is able to process things in real time. Early work on the chip has shown that it can do things like recognize images in real time, both faster and with far less power than traditional computers. IBM doesn’t claim that this particular chip is ready to put into products; they see it as the first prototype for testing this new method of computing. It’s even possible that this might be a dead end in terms of commercial applications, although IBM already sees possibilities for this kind of computer to be used for both real-time and graphics applications.

This chip was designed as part of a DARPA program called SyNAPSE, which is short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, an effort to create brain-like hardware. The end game of that program is to eventually design a computer that can learn, and this first IBM chip is a long way from that goal. And of course, anybody who has seen the Terminator movies knows that DARPA is shooting to develop a benign version of Skynet!

How Safe are your Customer Gateways?

It seems like every day I read something that describes another part of the network that is vulnerable to hackers. Recently, in a speech given at the DefCon security conference, Shahar Tal of Check Point Software Technologies said that a large number of residential gateways provided by ISPs are subject to hacking.

Specifically, he pointed out gateways that use the TR-069 protocol, also known as CWMP (CPE WAN Management Protocol). According to scans done by Check Point there are 147 million devices in the world using the TR-069 protocol, and 70% of them (103 million) are home gateways. TR-069 is the second most common ISP gateway protocol after HTTP (port 80).

ISPs typically communicate with their customer gateways using an ACS (Auto Configuration Server) and associated software. This gives the ISP the ability to monitor the gateway, provide upgrades to the firmware and troubleshoot customer problems. This is the tool used by an ISP to reset somebody’s modem when it’s not working. Tal says that such software can become a hacker’s point of entry into the home, since an attacker who can emulate the ACS gains control of the gateway.

Tal listed a number of weaknesses of TR-069 gateways. First, the links between the gateway and the ACS are more often unencrypted than not, making them open for a hacker to read. Second, anybody who can emulate the ACS system can take control of the gateway. This would give the hacker access to anything that is directly connected to the gateway, including computers, smartphones, tablets, smart devices, etc.

This all matters because recently there have been a number of different kinds of attacks against home gateways. Years ago home computers were used mostly to generate spam, but the bad guys are doing far more malicious things with hijacked computers these days, including:

  • Hijacking the DNS so that a hacker can see bank transactions.
  • Hijacking the DNS to send false hits to web sites to collect click fraud.
  • Using the router and infected computers to mine for bitcoins.
  • Using the home computing power to launch denial of service attacks.

If you deploy gateways that use this protocol, there are steps you can take to make sure your customers are safe. First, you need to query your ACS software provider about their security measures. Tal says that many of these systems have not put much emphasis on security. But as an ISP, probably the most important thing you can do is to encrypt all transactions between you and your customers.
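If you want a rough way to spot-check your own gear, the sketch below simply tests whether a management endpoint will complete a TLS handshake. The hostname is hypothetical, and 7547 is the customary TR-069 connection-request port; your equipment may use something different:

```python
# A rough spot-check: will a gateway's management port speak TLS at all?
# The host below is hypothetical; 7547 is the customary CWMP port, but
# check what your own gear actually uses.
import socket, ssl

def accepts_tls(host, port=7547, timeout=5):
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE   # we only care whether TLS handshakes at all
    try:
        with socket.create_connection((host, port), timeout=timeout) as sock:
            with ctx.wrap_socket(sock, server_hostname=host):
                return True           # TLS handshake completed
    except (ssl.SSLError, OSError):
        return False                  # plaintext-only, refused, or unreachable

print(accepts_tls("gateway.example.net"))   # hypothetical gateway address
```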

For now it appears that gateways that use TR-069 are more vulnerable than those managed over port 80 (HTTP). This is mostly because port 80 has been an industry standard for a long time, so a lot of effort has been put into making those connections secure. However, there are still threats on port 80 in the world. For example, the Code Red and Nimda worms and their close relatives are still being used to launch attacks on port 80.

In the end, as an ISP you are responsible for keeping your customers safe from these kinds of problems. Certainly failure to do so will increase their risk of being hacked and suffering financial losses. But you are also at risk, since the various malicious uses that can come from these hacks can generate a lot of traffic on your network. So if you deploy a gateway that uses TR-069 you should ask the right questions of the manufacturer and your software vendors to see what security tools they have in place. And then you need to use them. Too many ISPs don’t fully use all of the tools that come with the software and hardware they purchase.

Remember that this is one part of the network that customers rely on you to keep safe. Generally the gateway is set up so that a customer can’t even see the settings inside, and it’s most typical for all of this to be controlled by the ISP. So it is incumbent upon you not to be the one bringing hackers into your customers’ homes.

Latest on the Internet of Things – Part 2, The Market

Yesterday I wrote about the security issues that are present in the first generation of devices that can be classified as part of the Internet of Things. Clearly the manufacturers of such devices need to address security issues before some widespread hacking disaster sets the whole industry on its ear.

Today I want to talk about the public’s perception of the IoT. Last week eMarketer released the results of a survey that looked at how the public perceives the Internet of Things. Here are some of the key results:

  • Only 15% of homes currently own a smart home device.
  • And half of those who don’t own a smart device say they are not interested in buying one.
  • 73% of respondents were not familiar with the phrase “Internet of Things”.
  • 19% of households are very interested in smart devices and 28% are somewhat interested.
  • There were only a handful of types of devices that were of interest to more than 20% of households: smart cars – 39%; smart home appliances – 34%; heart monitors – 23%; pet monitors – 22%; fitness devices – 22%; and child monitors – 20%.

The survey highlights the short-term issues for any carrier that thinks they are going to make a fortune with the IoT. Like many new technology trends, this one is likely to take a while to take hold in the average house. Industry experts think the long-term trend of the IoT has great promise. In a Pew Research Center survey that I discussed a few weeks ago, 83% of industry technology experts thought that the IoT would have “widespread and beneficial effects on the everyday lives of the public by 2025”.

I know that carriers are all hoping for that one new great product that will sweep through their customer base and get the same kind of penetration that they enjoyed with the triple play services. But this survey result, and the early forays by cable companies and others into home automation and related product lines, show that the IoT is not going to be that product, at least not for now.

This is not to say that carriers shouldn’t consider getting into the IoT business. Let’s face it, the average homeowner is going to be totally intimidated by having more than a couple of smart devices in their home. What they will want is for all of those devices to work together seamlessly, so that they don’t have to log in and out of different systems just to make the house ready when they want to take a trip. And eMarketer warned that one thing that concerned households was the prospect of having to ‘reboot’ their entire home when things aren’t working right, or of getting a virus that would goof up their home.

And as I mentioned yesterday, households are going to want to feel safe with smart devices, so if you are going to get into the business it is mandatory for you to find smart products that don’t have the kinds of security flaws that I discussed yesterday.

The eMarketer report predicts that more homes will embrace the IoT as more name brand vendors like “Apple, Google . . . The Home Depot, Best Buy and Staples” get into the business. And this may be so, but one would expect most such platforms to be somewhat generic by definition. If a carrier wants to find a permanent niche in the IoT market they are going to need to distinguish themselves from the pack by providing integration and customization to give each customer what they most want from the IoT experience. Anybody will be able to buy a box full of monitors from one of those big companies, but a lot of people are going to want somebody they trust to come to their home and make it all work.

But the cautionary tale from this survey is that IoT as a product line is going to grow slowly over time. It’s a product today where getting a 10% customer penetration would be a huge success. So I caution carriers to have realistic expectations. There is going to be a lot of market competition from those big companies named above and to be successful you are going to have to stress service and security as reasons to use you instead of the big names.

Latest on the Internet of Things – Part 1, Security

There has been some negative press recently about the Internet of Things. There has been both recent news about IoT security and some consumer research that is of interest. Today’s blog will discuss the latest issues having to do with security, and tomorrow I will look at issues having to do with marketing and the public perception of the IoT.

Recently, Fortify, the security division of Hewlett-Packard, analyzed the ten most popular consumer devices that are currently considered part of the IoT. They didn’t name any specific manufacturer but did say that they looked at one each of “TVs, webcams, home thermostats, remote power outlets, sprinkler controllers, hubs for controlling multiple devices, door locks, home alarms, scales and garage door openers”. According to Fortify there was an average of 25 security weaknesses found in each device they analyzed.

All of the devices included a smartphone application to control them. The weaknesses are pretty glaring. 8 of the 10 devices had very weak passwords. 9 of the 10 devices gathered some personal information about the owner such as an email address, home address or user name. 7 of the 10 devices had no encryption and sent out data in a raw format. 6 of the devices didn’t encrypt updates, meaning that a hacker could fake an update and take over the device.

This is not much of a shock, and the lack of IoT security has been reported before. It’s been clear that most manufacturers of these kinds of devices are not providing the same kind of security that is standard for computers and smartphones. But this is the first time that anybody has looked at the most popular devices in such detail and documented all of the kinds of weaknesses they found.

It’s fairly obvious that before the IoT becomes an everyday thing in households, these kinds of weaknesses have to be fixed. Otherwise, a day will come when there will be some spectacular security failure of an IoT device that affects many households, and the whole industry will be set back a step.

It’s obvious that security really matters for some of these devices. If things like door locks, garage door openers and security systems can be easily hacked due to poor device security, then the whole reason for buying such devices has been negated. I read last week that hackers have figured out how to hack into smart car locks and push-button car starters, and that a car using those devices is no longer safe from being stolen. For a few years these devices gave some added protection against theft, but now such cars are perhaps easier to steal than a traditional vehicle, and certainly easier to steal than one using a physical anti-theft device like the Club.

I know that I am not going to be very quick to adopt IoT devices that might allow entry into my home. I don’t really need the convenience that might come from having my front door unlock as I pull into the driveway if this same feature means that a smart thief can achieve easy entry to my home.

So aside from home security devices, what’s the danger of having less secure devices like smart lights, or a smart stove or a smart sprinkler system? There is always the prank sort of hacking, like disabling your lights or making your oven run all day at high heat. But the real danger is that access to such devices might give a hacker access to everything else in your house.

Most of us use pretty good virus protection and other tools to lower the risk of somebody hacking into our computer systems to get access to personal information and to banking and other financial accounts. But what if a hacker can gain access to your computers through the back door of a smart light bulb or a smart refrigerator? This is not a far-fetched scenario. It was reported that the Target hack that stole millions of credit card numbers was initiated through the company’s heating and ventilation systems.

It’s obvious that these manufacturers are taking the fast path to market rather than taking the time to implement good security. But they must realize that they will not be forgiven if their device is the cause of multiple data breaches, and that in the worst case their whole product line could dry up overnight. One would hope that efforts like the one just taken by HP will wake up the device makers. With that said, they face a formidable task, since fixing an average of 25 security flaws is a tall order.

 

Changes to the E-rate Program

The FCC recently revised the rules for the E-rate program, which provides subsidies for communications needs at schools and libraries. They made a lot of changes to the program, and the rules for filing this year are significantly different from what you might have done in the past. I’ve made a list below of the changes that will most affect carriers, and you should become familiar with the revised rules if you participate in the program. Here are some of the key changes to the program from a carrier perspective:

  • Extra Funding. There is an additional $1 billion per year set aside for the next two years for what the FCC has called Internal Connections. This means money to bring high-speed Internet from the wiring closet to the rest of the school. This might be new wiring, WiFi or other technologies that distribute high-speed Internet within a school.
  • Last Mile Connections. It’s also possible to get funding for what they call WAN / Last-Mile connectivity. This would be fiber built to connect a school to a larger network such as one for a whole school district.
  • Stressing High-Speed Connections. The target set by the FCC is that a school should have at least 100 Mbps per 1,000 students and staff in the short run and 1 Gbps access in the long run. It is going to be harder to get funding for older, slower connections, even for small or poor schools. As a carrier you need to be planning how to get connections that meet these requirements to schools if you want to maintain E-rate funding (there is a quick arithmetic sketch after this list).
  • Things No Longer Funded. One of the ways the FCC will fund the expanded emphasis on higher bandwidth is by not funding other items. The fund is going to focus entirely for the next few years on funding things that promote high-speed connections, so they will no longer fund “Circuit Cards/Components; Interfaces, Gateways, Antennas; Servers; Software; Storage Devices; Telephone Components, Video Components, as well as voice over IP or video over IP components, and the components, such as virtual private networks, that are listed under Data Protection other than firewalls and uninterruptible power supply/battery backup. The FCC will also eliminate E-rate support for e-mail, web hosting, and voicemail beginning in funding year 2015”.
  • Combining Schools and Libraries. For the first time it will be possible to combine the funding for a school and library that are served by the same connection / network.
  • Eliminating Competitive Bidding for Low-Price Bandwidth. A school does not need to go to competitive bid if they can find a connection of at least 100 Mbps that costs $3,600 per year (or $300 per month) or less.
  • Eliminating a Technology Plan. There is no Technology Plan now required for applying for Internal Connections (in-school wiring) or for providing WAN connections.
  • Simplifying Multi-Year Contracts. Subsequent years after the first year of a multi-year contract will require less paperwork and have a streamlined filing process.
  • Simplifying the Discount Calculation. The discount can now be calculated on a per school-district basis and not per school within the district. The FCC adopts the Census definition of urban areas as the densely settled core of census tracts or blocks that meet minimum population density requirements (50,000 people or more), along with adjacent territory of at least 2,500 people that links to the densely settled core. “Rural” encompasses all population, housing, and territory not included within an urban area. Any school district or library system that has a majority of schools or libraries in a rural area that meets the statutory definition of eligibility for E-rate support will qualify for the additional rural discount.
  • Requiring Electronic Filings. All filings will need to be electronic, phased in by 2017.
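As referenced in the bullet on high-speed connections above, here is the quick arithmetic behind the short-run target of 100 Mbps per 1,000 students and staff; the head counts are hypothetical examples:

```python
# Quick arithmetic on the short-run E-rate target quoted above: at least
# 100 Mbps per 1,000 students and staff. Head counts are hypothetical examples.
def short_term_target_mbps(students_and_staff):
    return 100 * students_and_staff / 1000

for head_count in (400, 1000, 2500):
    print(f"{head_count} students/staff -> at least "
          f"{short_term_target_mbps(head_count):.0f} Mbps")
# 400 -> 40 Mbps, 1,000 -> 100 Mbps, 2,500 -> 250 Mbps
```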

These are a lot of changes to a fairly complex filing process. CCG can help you navigate through these changes. If you have questions or need assistance please contact Terri Firestein of CCG at tfireccg@myactv.net.

How Should the US Define Broadband?

The FCC just released the Tenth Broadband Progress Notice of Inquiry. As one would suppose from the title, there have been nine of these in the past. This inquiry is particularly significant because the FCC is asking if it’s time to raise its definition of broadband.

The quick and glib answer is that of course they should. After all, the current definition of broadband is 4 Mbps download and 1 Mbps upload. I think almost everybody will agree that this amount of bandwidth is no longer adequate for an average family. But the question the FCC is wrestling with is how high they should raise it.

There are several consequences of raising the definition of broadband that have to be considered. First is the purely political one. For example, if they were to raise it to 25 Mbps download, then they would be declaring that most of rural America doesn’t have broadband. There are numerous rural towns in the US that are served by DSL or by DOCSIS 1.0 cable modems with speeds of 6 Mbps download or slower. Even if the FCC sets the new definition at 10 Mbps they are going to be declaring that big portions of the country don’t have broadband.

And there are consequences of that definition beyond the sheer embarrassment of the country openly recognizing that the rural parts of America have slow connectivity. The various parts of the federal government use the definition of broadband when awarding grants and other monies to areas that need faster broadband. Today, with the definition set at 4 Mbps, those monies tend to go to very rural areas where there is no real broadband. If the definition is raised enough, those monies could instead go to the rural county seats that don’t have very good broadband. And that might mean that the people with zero broadband might never get served, at least not through the help of federal grants.

The next consideration is how this affects various technologies. I remember when the FCC first set the definition of broadband at 3 Mbps download and 768 Kbps upload. At that time many thought that they intended to shovel a lot of money to cellular companies to serve broadband in rural areas. But when we start talking about setting the definition of broadband at 10 Mbps download or faster, then a number of technologies start falling off the list of those able to support broadband.

For example, in rural areas it is exceedingly hard, if not impossible, to have a wireless network, either cellular or using unlicensed spectrum, that can serve every customer in a wide area with speeds of 10 Mbps. Customers close to towers can get fast speeds, but for all wireless technologies the speed drops quickly with the distance from a tower. And it is also exceedingly hard to use DSL to bring broadband to rural areas with a target of 10 Mbps. The speed of DSL also drops quickly with distance, which is why there is not much DSL coverage in rural areas today.

And when you start talking about 25 Mbps as the definition of broadband then the only two technologies that can reliably deliver that are fiber and coaxial cable networks. Both are very expensive to build to areas that don’t have them, and one wonders what the consequences would be of setting the definition that high.

The one thing I can tell you from practical experience is that 10 Mbps is not fast enough for many families like mine. We happen to be cord cutters and we thus get all of our entertainment from the web. It is not unusual to have 3 – 4 devices in our house watching video, while we also surf the web, do our daily data backups, etc. I had a 10 Mbps connection that was totally inadequate for us and am lucky enough to live where I could upgrade to a 50 Mbps cable modem service that works well for us.
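For what it’s worth, here is the kind of back-of-the-envelope math behind that experience; the per-use bitrates are rough assumptions rather than measurements:

```python
# A back-of-the-envelope look at why 10 Mbps falls short for a cord-cutting
# household. The per-stream bitrates below are rough assumptions, not measurements.
streams = {
    "HD video stream #1": 5.0,   # Mbps, assumed
    "HD video stream #2": 5.0,
    "HD video stream #3": 5.0,
    "web browsing": 1.0,
    "cloud backup": 2.0,
}
total = sum(streams.values())
print(f"Concurrent demand: {total:.0f} Mbps vs. a 10 Mbps connection")
# -> roughly 18 Mbps, well above a 10 Mbps pipe
```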

So I don’t envy the FCC this decision. They are going to get criticized no matter what they do. If they just nudge the definition up a bit, say to 6 or 7 Mbps, then they are going to be rightfully criticized for not promoting real broadband. If they set it at 25 Mbps then all of the companies that deploy technologies that can’t go that fast will be screaming bloody murder. We know this because the FCC recently used 25 Mbps as the minimum speed to qualify for $75 million of their experimental grants. That speed locked out a whole lot of companies that were hoping to apply for those grants. They might not have a lot of choice but to set it at something like 10 Mbps as a compromise. Frankly, that is still quite a wimpy goal for a Commission that a few years ago approved a National Broadband Plan that talked about promoting gigabit speeds. But it would be progress in the right direction, and maybe by the Twentieth Broadband Inquiry we will be discussing real broadband.

Changes to Unlicensed Spectrum

Earlier this year, in Docket ET No. 13-49, the FCC made a number of changes to the unlicensed 5 GHz band of spectrum. The docket was intended to unify the rules for using the 5 GHz spectrum. The FCC had made this spectrum available over time in several different chunks and had set different rules for the use of each portion. The FCC was also concerned about interference between parts of the spectrum and Doppler radar and several government uses of spectrum. Spectrum rules are complex and I don’t want to spend the whole blog describing the changes in detail. But in the end, the FCC made some changes that wireless ISPs (WISPs) claim are going to kill the spectrum for rural use.

Comments filed by WISPA, the national association for WISPs, claim that the changes the FCC is making to the 5725 – 5850 MHz band are going to devastate rural data delivery from WISPs. The FCC is mandating that new equipment going forward use lower power and also use better filters to reduce out-of-band emissions. And WISPA is correct about what that means. If you understand the physics of wireless spectrum, each of those changes is going to reduce both the distance and the bandwidth that can be achieved with this slice of spectrum. I didn’t get out my calculator and spend an hour doing the math, but WISPA’s claim that this is going to reduce the effective distance for the 5 GHz band to about 3 miles seems like a reasonable estimate, one that is also supported by several manufacturers of the equipment.
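For those who want to see the physics, here is a simple sketch of why less power means less reach. In free space, received power falls with the square of distance, so every 6 dB of EIRP that is given up cuts the workable range roughly in half. The 10-mile baseline and the dB reductions below are illustrative assumptions, not the actual old or new FCC limits:

```python
# Why lower transmit power shrinks coverage: in free space, received power
# falls with the square of distance, so every 6 dB of EIRP given up cuts the
# workable range roughly in half. Baseline and reductions are illustrative only.
def scaled_range(baseline_miles, eirp_reduction_db):
    # Range scales as 10^(-reduction_dB / 20) when nothing else changes.
    return baseline_miles * 10 ** (-eirp_reduction_db / 20)

baseline = 10.0  # miles, roughly the reach quoted for this band today
for reduction in (0, 6, 10, 12):
    print(f"{reduction:>2} dB less EIRP -> ~{scaled_range(baseline, reduction):.1f} miles")
#  0 dB -> 10.0 miles, 6 dB -> 5.0 miles, 10 dB -> 3.2 miles, 12 dB -> 2.5 miles
```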

Some background might be of use in this discussion. WISPs can use three different bands of spectrum for delivering wireless data – 900 MHz, 2.4 GHz and 5 GHz. The two lower bands generally get congested fairly easily because there are a lot of other commercial applications using them. Plus, those two bands can’t go very far and still deliver significant bandwidth, and so to the extent that they use them, WISPs tend to do so for customers residing closer to their towers. They save the 5 GHz spectrum for customers who are farther away and use it for backhaul between towers. The piece of spectrum in question can be used to deliver a few Mbps to a customer up to ten miles from a transmitter. If you are a rural customer, getting 2 – 4 Mbps from a WISP still beats the heck out of dial-up.

Customers closer to a WISP transmitter can get decent bandwidth. About the fastest speed I have ever witnessed from a WISP was 30 Mbps, but it’s much more typical for customers within a reasonable distance from a tower to get something like 10 Mbps. That is a decent bandwidth product in today’s rural environment, although one has to wonder what that is going to feel like a decade from now.

Readers of this blog probably know that I spent ten years living in the Virgin Islands, and my data connection there came from a WISP. One thing I saw there is the short life span of the wireless CPE at the home. In the ten years I was there I had three different receivers installed (one right at the end), which means that my CPE lasted around 5 years. And the Virgin Islands is not a harsh environment, since it’s around 85 degrees every day, unlike a lot of the US which has both freezing winters and hot summers. So the average WISP will need to phase in the new CPE to all customers over the next five to seven years as the old customer CPE dies. And they will need to use the new equipment for new customers.

That will be devastating to a WISP business plan. The manufacturers say that the new receivers may cost as much as $300 more to comply with the filtering requirements. I take that estimate with a grain of salt, but no doubt the equipment is going to cost more. But the real issue is the reduced distance and reduced bandwidth. Many, but not all, WISPs operate on very tight margins. They don’t have a lot of cash reserves and they rely on cash flow from customers to eke out enough extra cash to keep growing. They basically grow their businesses over time by rolling profits back into the business.

If these changes mean that WISPs can’t serve customers more than 3 miles from an existing antenna, there is a good chance that a lot of them are going to fail. They will be faced with either building a lot of new antennas to create smaller 3-mile circles or else they will have to abandon customers more than three miles away.

Obviously spectrum is in the purview of the FCC and some of the reasons why they are changing this spectrum are surely valid. But in this case they created an entire industry that relied upon the higher power level of the gear to justify a business plan, and now they want to take that away. This is not going to be a good change for rural customers, since over time many of them are going to lose their only option for broadband. While it is important to be sensitive to interference issues, one has to wonder how much interference there is out in the farm areas where these networks have been deployed. The impacts of this change that WISPA is warning about would be a step backward for rural America and rural bandwidth.

Wireless Net Neutrality

While the FCC has not yet found a set of network neutrality rules that will stand up to a court challenge, all of the attempts they have made so far were aimed at net neutrality for landline data networks. But lately the question has been raised whether the same network neutrality concepts should also be applied to wireless data.

There are several reasons why this might now make sense. There are now a substantial number of people whose only connectivity is through their wireless devices. Further, I’ve seen estimates that by sometime next year the number of data transactions from cell phones will exceed transactions from landline sources. It’s obvious that wireless data is now a major component of the general Internet and not some specialized niche like it once was. Landline data will continue to carry the vast majority of web video, but it appears that smartphones are carrying a lot of everything else.

This question has come more to the forefront lately due to some of the actions taken by cell phone providers. For example, Verizon Wireless recently announced that it would be throttling the speeds of customers who are still on its unlimited data plans on its 4G LTE network. The Verizon announcement said that starting October 1, at congested cell sites, Verizon would throttle the speeds of unlimited-plan data customers in favor of customers who buy data by the gigabyte.

This announcement seems to have angered Tom Wheeler, the FCC Chairman, who sent a letter to Verizon warning them that this had better be done for network management purposes and not as a play to enhance their revenues. Verizon has been trying for years to drive customers off its older unlimited data plans, and this just seems like another way to make life miserable for those customers to convince them to convert to more costly data plans.

The CEO of Verizon Wireless probably added fuel to the fire by comments he made at CTIA. He said that the wireless industry is still in its ‘infancy’ and needs a light regulatory touch. That’s a little hard to buy in a country where there are now as many active cell phones as there are people. He went on to say that wireless companies should be allowed to determine how they generate revenue and not regulators. He said “If a company chooses to pay us for priority access on our network, that is not a regulatory decision. It’s a business decision.”

And cell phone providers are now starting to offer blatant priority access plans. Virgin Mobile, a subsidiary of Sprint, just announced plans that let customers use social media without having it count against their data plans. For example, a customer can buy a data plan for $12 per month that gives them unlimited access to any one of Facebook, Pinterest, Instagram or Twitter. Or for $10 more they can subscribe to all four. And for $5 per month extra they can add one streaming music source.

These are exactly the kinds of plans that were predicted when the courts rejected network neutrality. I read many predictions that service providers would begin giving priority to some content providers over others. To some, the Virgin Mobile plan might sound like an expansion of choice, but the fear is that customers are being herded towards a handful of big-money web services at the expense of the rest of the web.

While the Virgin Mobile plan sounds like a way for a customer to save money, in the long run going to a la carte services is going to allow wireless carriers to charge more for data than they do today. One of the concerns about the lack of net neutrality is that plans like the Virgin one will kill creativity on the web by only giving access to content from a handful of large providers. A customer who spends most of their time on one service like Facebook might save money with this kind of plan. But this kind of plan restricts people from trying new things on the web and will drive users towards those services that are willing to pay the wireless carriers for priority access. These plans are just starting to appear on cellphones, but how long will it be before they appear at the big ISPs?

Financing New Fiber Networks

Traditional financing is not always the solution for funding a new fiber network. For example, many rural communities don’t have the borrowing capacity to fund a fiber network strictly from bonds. And banks are still being extremely cautious about lending to infrastructure projects or about floating loans longer than twelve years. Recently I have seen several creative ideas in the market that are worth highlighting. These concepts could be used to fund municipal fiber projects or public private partnerships that pair a municipality with a commercial operator.

I note that I am on the Board of an infrastructure banking firm, and our firm would be interested in helping to finance fiber networks using these or other financing ideas. Our firm can help in those cases where traditional financing might not offer a solution. We will consider looking at projects as small as $50 million, but we have found that it’s much easier to get funding for projects of $100 million or more. If you have a fiber project of this size in mind please contact me.

TIF Financing. Wabash County, Indiana wants to use Tax Increment Financing (TIF) as a way to finance a new fiber network. TIF financing works by borrowing today against future increases in property taxes. TIF has been used for decades to finance infrastructure projects, but I don’t think I have ever seen it used to build fiber. This is very different from the normal way of financing municipal fiber projects, which has involved bonds that pledge customer revenues and the value of the network as collateral. In this case, the County is expecting that the project will be able to pay the annual debt service, and property taxes only have to be increased if the fiber network is unable to cover the whole cost of the debt. This means that property taxes become the collateral for the project and assure a lender that it will be repaid for lending to the project.
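To make the mechanics concrete, here is a rough sketch of the arithmetic, with entirely hypothetical numbers: the bond’s annual debt service comes from the standard level-payment formula, and the tax increment only has to cover whatever the network’s own cash flow cannot:

```python
# Rough arithmetic for a TIF-backed bond: the network's own cash flow is
# expected to cover the annual debt service, and the property-tax increment
# only backstops any shortfall. All figures below are hypothetical.
def annual_debt_service(principal, rate, years):
    # Standard level-payment annuity formula.
    return principal * rate / (1 - (1 + rate) ** -years)

principal = 20_000_000          # hypothetical bond size
service = annual_debt_service(principal, 0.045, 25)
network_cash_flow = 1_100_000   # hypothetical annual net cash from subscribers

shortfall = max(0.0, service - network_cash_flow)
print(f"Annual debt service: ${service:,.0f}")
print(f"Covered by the tax increment only if needed: ${shortfall:,.0f}")
```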

There are also other counties and municipalities in the state looking at TIF financing. Interestingly, the Indiana Association of Cities and Towns recently helped to defeat proposed AT&T-backed legislation that would have stopped municipalities from using TIF financing for fiber projects. It is not unusual to see incumbents try to stop or ban any new financing ideas for municipal networks.

Utility Fees. Anybody who watches the industry understands the troubles that have plagued UTOPIA, a municipal network in Utah. The company has been refinanced several times and has never raised sufficient capital to build to enough homes in the area to become solvent.

UTOPIA is now working with Macquarie Capital of Australia on a plan that would finance the construction of the rest of the network and be funded by a monthly utility fee billed to each home within the network footprint for 30 years. This is similar to what has been done in Provo, where the City sold its fiber network to Google for a dollar and customers are billed a monthly fee of about $6. For that small fee customers can get 5 Mbps download Internet, or they can elect to pay more for Google’s gigabit speeds.

This plan differs from the one in Wabash County, Indiana in that customers begin paying the utility fee at the beginning of the project and will pay it for thirty years. Rather than act as collateral for a loan, the utility fees help to directly finance the project.
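The arithmetic behind that model is simple enough to show; the $6 fee is the Provo figure, and the size of the footprint is a hypothetical example:

```python
# What a small monthly utility fee adds up to over the life of the financing.
# The $6 fee is the Provo figure quoted above; the home count is hypothetical.
monthly_fee = 6          # dollars per home per month
homes = 30_000           # hypothetical footprint
years = 30

per_home = monthly_fee * 12 * years
total = per_home * homes
print(f"${per_home:,} per home, ${total:,.0f} across {homes:,} homes over {years} years")
# -> $2,160 per home; $64.8 million across 30,000 homes
```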

Economic Development Bonds / Local Bank Consortium. There is a new cooperative trying to get financed in Sibley and Renville Counties in Minnesota that is combining two different financing ideas. First, a portion of the project would be financed with an economic development bond guaranteed by a number of municipal entities within a fairly large rural service area. These bonds would cover less than one fourth of the project cost and would act as seed equity in the project.

The rest of the project will be financed through loans from a consortium of banks. The idea of bank consortiums has been around for a while and has been used to finance other infrastructure projects. It generally requires the involvement of a local bank, which then solicits additional banks to carry a part of the loans. It’s well known that local banks often have significant cash on hand to lend but often lack quality borrowers. And local banks are also generally constrained by the amount that they are willing to lend to any one borrower. By combining many banks together, no one bank lends too much and each gets to participate in a quality loan.

This project is a great example of a public private partnership. It will be operated as a commercial entity, a cooperative, and will draw on both municipal and commercial funding. All three of these ideas step outside of the normal financing channels. In today’s world it often takes this kind of creativity to get needed infrastructure built.

 

The FCC and Peering

As the politics of net neutrality keep heating up, Senator Pat Leahy and Representative Doris Matsui introduced the Online Competition and Consumer Choice Act of 2014.

This bill would require the FCC to forbid paid prioritization of data. But then Senator Leahy was quoted in several media outlets talking about how the bill would stop things like the recent peering deal between Netflix and Comcast. I’ve read the proposed bill and it doesn’t seem to ban those kinds of peering arrangements. His comments point out that there is still a lot of confusion between paid prioritization (Internet fast lanes) and peering (interconnection between large carriers). The bill basically prohibits ISPs from creating Internet fast lanes or from disadvantaging customers through commercial arrangements in the last mile.

The recent deals between Netflix and Comcast, and Netflix and Verizon are examples of peering arrangements, and up to now the FCC has not found any fault with these kinds of arrangements. The FCC is currently reviewing a number of industry peering agreements as part of investigating the issue. These particular peering arrangements might look suspicious due to their timing during this net neutrality debate, but similar peering arrangements have been around since the advent of the Internet.

Peering first started as connection agreements between tier 1 providers. These are the companies that own most of the long haul fiber networks that comprise the Internet backbone. In this country that includes companies today like Level3, Cogent, Verizon and AT&T. And around the world it includes companies that you may not have heard of like TeliaSonera and Tata. The tier 1 providers carry the bulk of the Internet traffic and peering was necessary to create the Internet as these large carriers need to be connected to each other.

Most of the peering arrangements between the tier 1 carriers have been transit-free, or what is often referred to as bill-and-keep. The traffic between the major carriers tends to balance out in terms of originating and terminating volumes, and in such cases it doesn’t make a lot of sense for two carriers to bill each other for swapping similar amounts of data traffic.

But over time peering arrangements were also made between the tier 1 carriers and tier 2 providers, which include the large ISPs and telcos. Peering was generally done in these cases to make the network more efficient. It makes more sense to interchange traffic between an ISP and somebody like Level3 at a few places rather than at hundreds of places. It’s always been typical for these kinds of peering arrangements to include a fee for the tier 2 carrier, something that is often referred to as a transit fee.

There is no industry standard arrangement for interconnection between tier 1 and tier 2 providers. And this is because tier 2 providers come in every configuration imaginable. Some of them own significant fiber assets of their own. Others, like Netflix, have a mountain of one-directional content and own almost zero network. And so tier 2 providers scramble to find the best commercial arrangement they can in the marketplace. One thing that is almost universal is that tier 2 providers pay something to connect to the Internet. There is no standard level of payment, and transit is a very fluid market. But payment generally recognizes the relative level of mutual benefit. If the traffic between two parties is balanced then the payments might be small or even zero. If one party causes a lot of costs for the other then payments typically reflect that imbalance.

Netflix has complained about paying Comcast and Verizon. But those ISPs wanted payments from Netflix since the traffic from Netflix is large and totally one-directional. Comcast or Verizon needs to construct a lot of facilities in order to accept the Netflix traffic and they don’t get any offsetting benefit of being able to send traffic back to Netflix on the same connection.

In economic terms, on a national scale the peering market is referred to as an n-dimensional market, meaning that a large tier 2 provider has the ability to negotiate with multiple parties to achieve the same result. For example, Verizon has a lot of options for moving data from the east coast to the west coast. But eventually the Internet becomes local, and that is where the cost and the contention arise. As Internet traffic enters a local metropolitan market it begins to hit choke points where the traffic can overwhelm the local facilities and cause congestion. The payments that Comcast or Verizon want from Netflix are to build the facilities needed for getting Netflix movie traffic to and through these local hubs and chokepoints.

Peering arrangements like this make sense. I find it hard to believe that the FCC is going to get too deeply involved in peering arrangements. It’s an incredibly dynamic market and carriers are constantly rearranging the network as they find better prices or more efficient network arrangements. If there is any one place where the market works it is between the handful of large carriers that handle the majority of the Internet traffic. Most of the bad things that can happen to customers are going to happen in the last mile network, and that is where net neutrality should properly be focused.

And why the picture of the kitten? I work at home and at my very local part of the network this is the kind of peering that I often get.