Categories
Technology

Can Small Networks Keep Up?

I hope this blog doesn’t come across as too negative, but it seems that every few weeks I read something that makes me think, “That is not good for small network owners.” There are a lot of changes going on at the top of our industry that are, at a minimum, worrisome for small network owners.

The biggest users of servers and related hardware have come together to create a consortium called the Open Compute Project that is working to create cheap generic hardware that will replace the expensive servers and switches bought from companies like Cisco and Juniper. This effort was started by Facebook and is likely to completely disrupt that industry. In addition to that effort, a few of the largest companies like Amazon and Google have developed their own proprietary hardware.

Recently a pile of telcos like Verizon, AT&T, Deutsche Telekom, Korea’s SK Telecom, and Equinix have joined the effort. It looks like all of the largest users of this kind of equipment will be buying their hardware from new channels, which is going to devastate the existing vendors.

One might think that this is good news for smaller companies, because in this industry small companies have always ridden the coattails of the large companies. We have all benefited from having reasonable prices and a variety of options because those options were created for the big users.

But unfortunately, the Open Compute Project isn’t going to operate that way. Anybody is free to use the open specifications being created, but companies are then expected to modify those specs to meet their needs and find a way to get the gear built. Probably the top 95% of the market will no longer be buying off-the-shelf servers, which is not good for smaller users. Small companies, meaning anybody smaller than perhaps CenturyLink, will not have the resources to wade through the open source process to make their own hardware.

One might hope that there would still be somebody left to supply all of the smaller users of this equipment, but that flies in the face of the industry’s past experience. Without big buyers of equipment, there is unlikely to be much R&D or new product development to serve a much smaller potential market. There are analysts who believe that companies like Cisco and Juniper will eventually flee the server market.

One has to worry about the general availability of telecom electronics of any kind in the future. The open source movement is not going to stop with servers; over time it will tackle fiber electronics, cable headends, set-top boxes, you name it. As the big companies stop buying from vendors we are likely to see a lot of failures among the already-shrinking field of telecom vendors.

Along with the move to proprietary open source hardware is a similar move towards open source software to control the hardware. Again, one might think that small companies could just use the open source software, but that also doesn’t work the way you might hope. Open source software provides a sprawling mass of options, and companies that use it for something like operating servers have to select what they want out of it and develop their own package of options. That is way past the abilities or budgets of smaller companies. This is another area today where we benefit from the work done for larger carriers.

Small companies probably feel safe that there are a few vendors around that specialize in serving small carriers today. But many of us in the industry know that telecom vendors come and go. Any vendor that gets most of its revenue from the big ISPs is going to be in trouble. And when I look at the vendors used by small companies today I see almost none that were here twenty years ago. The periodic downturns in the industry have always been hard for vendors to weather. There might not be enough volume from small telecom carriers to support healthy vendors for the long haul.

I hope I am wrong about all of this. Each one of these factors alone would be cause for some concern. But taken all together, these trends point to a future five to ten years from now where there will be fewer vendors, where it’s going to be harder and more expensive for smaller carriers to buy gear, and where there might not be much dollar incentive for anybody to do for small carriers the same R&D that the big carriers will be doing on their own.

 

Categories
Regulation - What is it Good For?

FCC Approves New 5G Spectrum

In what is already being called the 5G Order, in Docket FCC 16-89 the FCC just released a lot of new spectrum. Quoted directly from the FCC Order, the new spectrum is as follows:

Specifically, the rules create a new Upper Microwave Flexible Use service in the 28 GHz (27.5-28.35 GHz), 37 GHz (37-38.6 GHz), and 39 GHz (38.6-40 GHz) bands, and an unlicensed band at 64-71 GHz.

  • Licensed use in the 28 GHz, 37 GHz and 39 GHz bands: Makes available 3.85 GHz of licensed, flexible use spectrum, which is more than four times the amount of flexible use spectrum the FCC has licensed to date.
    • Provides consistent block sizes (200 MHz), license areas (Partial Economic Areas), technical rules, and operability across the exclusively licensed portion of the 37 GHz band and the 39 GHz band to make 2.4 GHz of spectrum available.
    • Provides two 425 MHz blocks for the 28 GHz band on a county basis and operability across the band.
  • Unlicensed use in the 64-71 GHz band: Makes available 7 GHz of unlicensed spectrum which, when combined with the existing high-band unlicensed spectrum (57-64 GHz), doubles the amount of high-band unlicensed spectrum to 14 GHz of contiguous unlicensed spectrum (57-71 GHz). These 14 GHz will be 15 times as much as all unlicensed Wi-Fi spectrum in lower bands.
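Just as a quick sanity check on the arithmetic, the band edges quoted above do add up to the totals the FCC cites. Here is a short, purely illustrative Python sketch that uses nothing but the band edges from the Order:

```python
# Quick sanity check of the band arithmetic quoted from the Order above.
# Band edges come straight from the text; the totals are just subtraction.

licensed_bands_ghz = {
    "28 GHz (27.5-28.35 GHz)": 28.35 - 27.5,  # 0.85 GHz, sold as two 425 MHz blocks
    "37 GHz (37-38.6 GHz)": 38.6 - 37.0,      # 1.6 GHz (lower portion shared)
    "39 GHz (38.6-40 GHz)": 40.0 - 38.6,      # 1.4 GHz
}

new_unlicensed_ghz = 71 - 64       # the new 64-71 GHz unlicensed band
existing_unlicensed_ghz = 64 - 57  # the existing 57-64 GHz unlicensed band

print(f"Licensed flexible-use spectrum: {sum(licensed_bands_ghz.values()):.2f} GHz")
print(f"Contiguous unlicensed spectrum: {new_unlicensed_ghz + existing_unlicensed_ghz} GHz")
```

Run it and you get the 3.85 GHz of licensed spectrum and the 14 GHz of contiguous unlicensed spectrum described in the Order.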

The U.S. is the first country to authorize specific use of this much spectrum in these upper bands, which are commonly referred to as millimeter wave spectrum. And the FCC isn’t yet finished. Along with the Order, the FCC issued a Further Notice of Proposed Rulemaking to look at how it should deal with other blocks of spectrum, including existing space in the 24-25 GHz, 32 GHz, 42 GHz, 48 GHz, 51 GHz, 70 GHz, and 80 GHz bands. The FCC also asked for comments on how it might provide access to spectrum above 95 GHz.

The FCC hopes that opening up this spectrum will result in a lot of new wireless applications. Today there are two planned uses for the millimeter wave spectrum. The cellular 5G standard talks about using this spectrum on a broadcast basis to deliver high bandwidth for short distances. That most likely means use as a way to deliver big bandwidth wirelessly within a room or office.

There is also an application today for using these frequencies for point-to-point microwave links. These radios can deliver about 2 gigabits on a point-to-point basis and can act as a fiber replacement where the economics make sense. But these frequencies are largely killed by heavy rain and they need pure line-of-sight, meaning nothing between the transmitter and the receiver. Still, there are hopes that in rural areas this could be a replacement for building expensive fiber for just a few customers, or a way to reach remote locations.

The FCC is hoping that releasing such large blocks of spectrum will result in a burst of research and development, much like what happened when it first released WiFi. When the FCC first created the WiFi spectrum blocks there were only a few applications envisioned, but engineers and entrepreneurs have since developed a huge range of WiFi applications far beyond what the FCC first imagined.

The FCC is adopting flexible regulatory rules for the new spectrum. Licensees will be able to get a 10-year license either as a common carrier, as a non-common carrier, or for private internal communications. The FCC expects to issue numerous licenses per area and doesn’t expect a lot of interference issues due to the short propagation distances at these frequencies. A lot of the specific details will need to be worked out by the FCC Wireless Bureau.

 

Categories
Technology The Industry

Broadband and Medicine

I think it’s been at least fifteen years since I first began hearing that one of the major benefits of broadband will be improved health care. Yet, except for a few places that are doing telemedicine well, for the average person none of this has yet come to pass. But now I think we are at the cusp of finally seeing medical applications that will need broadband. Following are some areas where we ought to soon see real applications:

Letting the Elderly Stay in Their Homes Longer. This is the holy grail of future medicine products because surveys have shown that a huge majority of Americans want to stay in their homes as long as possible, and to die in their homes when it’s time. There is no one solution that can solve this problem; it will take a whole suite of technologies and solutions working together – and the good news is that there are now more than a hundred companies looking for ways to make this work.

All solutions for the elderly begin with smart monitoring systems. This means video cameras and sensors of all sorts that look for problems. Medical monitors will keep tabs on vital signs. Smart sensors will track an elderly person and alert somebody if that person doesn’t move for a while. Reminder systems will make sure medications are taken on time. Virtual reality will help homebound elderly to keep in touch with caregivers and to have an active social life from home. Robots can help with physical tasks. The key is a product that ties all of these things together into a package that people can afford (or that is at least less costly than the alternatives). My guess is that we are only a few years away from these packages finally being a reality.

Medical Diagnosis with Artificial Intelligence. IBM’s Watson has already demonstrated that it is better than most doctors and nurses at diagnosing medical conditions, with the added benefit that Watson generally catches rare diagnoses that doctors tend not to consider. There are already a number of companies working on integrating this into clinics, but this is also going to be taken online so that patients can be screened before even coming to see a doctor. IBM isn’t the only possible solution; companies like Google and Microsoft are now selling time on their AI platforms.

Virtual Reality and Telemedicine. One of the biggest drawbacks today to telemedicine is that a doctor can’t really get a good look at a patient in as much detail as they can in a live visit. But with big bandwidth and virtual reality technology doctors will soon be able to see patients in 3D and in close-up detail, which is going to make telemedicine a lot more accurate and usable. And combining this technology with some sort of medical monitor to supply vital signs can allow for easy treatment of most problems. But this is going to require big bandwidth at homes as well as a big data pipe between the remote community and the doctors.

Nanobots. A lot of future treatment of diseases is going to involve nanobots in the bloodstream. These will be tiny devices that deliver medicine specifically to the areas of the body that need it, that are engineered to attack specific viruses or germs, or that closely monitor ongoing health issues. There are researchers who believe that we will carry nanobots with us at all times – to fend off cancer, to treat diseases like the common cold before we have any symptoms, to rejuvenate cells, and to act as an early warning system for anything unusual. There are already nanobot treatments for cancer being tested. We clearly will need to monitor nanobots, and that means a reliable broadband connection and specific kinds of sensors.

Categories
Regulation - What is it Good For?

Getting Access to Conduit

There is an interesting case at the California Public Utilities Commission where Webpass is fighting with AT&T over access to conduit. You may have seen that Webpass was just recently bought by Google Fiber and I would think this case will be carried forward by Google.

The right for competitive providers to get access to conduit comes from the Telecommunications Act of 1996. In that Act, Congress directed that competitive telecom providers must be provided access to poles, ducts, conduits, and rights-of-way by utilities. A utility is defined as any company, except for electric cooperatives and municipalities, which owns any of those facilities that are used in whole or in part for communications by wire. Under this definition telcos, cable companies, commercial electric companies, gas companies, and others are required by law to make spare conduit available to others.

If a utility allows even one pole or piece of conduit to be used for communications, including for its own internal purposes, then the whole system must be made available to competitors at fair prices and conditions. About half of the states have passed specific rules governing those conditions while states without specific rules revert to the FCC rules.

Webpass tried to get access to AT&T conduits in California and ran into a number of roadblocks. It seems that there are a few situations where AT&T has provided conduit to Webpass, but AT&T denied the majority of the requests for access.

This is not unusual. Over the years I have had several clients try to get access to AT&T and Verizon conduit and none of them were successful. AT&T, Verizon, and the other large telcos generally have concocted internal policies that make it nearly impossible to get access to conduit. When a competitor faces that kind of intransigence their only alternative is to take the conduit owner to court or arbitration – and small carriers generally don’t have the resources for this kind of protracted legal fight.

But even fighting the telcos is no guarantee of success because the FCC rules provide AT&T with several reasons to deny access. A utility can deny access on the basis of safety, reliability or operational concerns. So even when a conduit owner is ordered to provide access after invoking one of these reasons, they can just invoke one of the other exceptions and begin the whole fight again. It takes a determined competitor to fight through such a wall of denial.

Trying to get conduit reminds me of the battles many of my clients fought in trying to get access to dark fiber fifteen years ago. I remember that AT&T and Verizon kept changing the rules of the dark fiber request process so often that a competitor had a difficult time even formulating a valid request for dark fiber. Even when Commissions ordered the telcos to comply with dark fiber requests, the telcos usually found another reason to deny the requests.

This is a shame because getting access to conduits might be one of the best ways possible to promote real competition. AT&T and Verizon both claim to have many hundreds of thousands of miles of fiber, much of it in conduit. I am sure there are many cases where older conduit is full. But newer conduits contain multiple empty tubes, and one would have to think that there is a huge inventory of empty conduit in the telco networks. The same is true for the cable companies and the large electric companies, and I can’t recall any small carrier that has ever gotten access to any of this conduit. I think some of the large carriers like Level3 or XO probably have gotten some access to conduit, but I would imagine even they had to fight very hard to get it.

I remember talking to a colleague the day that we first read the Telecommunications Act of 1996 that ordered the telcos to make conduit available to competitors. We understood immediately that the telcos would adopt a strategy of denying such access – and they have steadfastly said no to conduit requests over the years. I am glad to see Webpass renewing this old fight and it will be interesting to see if they can succeed where others have failed.

Categories
Technology

The End of Moore’s Law

I’ve been meaning to write this blog for a while. It is now commonly being acknowledged that we are nearing the end of Moore’s law. Moore’s law is named after Gordon Moore, an engineer who later was one of the founders of Intel. In 1965, Moore made the observation that the number of transistors that could be etched onto a chip would double every two years. He originally thought this would last for a decade or so, but the microchip industry has fulfilled his prediction for over 50 years now.

In 1965 a single transistor cost about $8 in today’s dollars and now, after so many years of doubling, we can put billions of transistors onto a chip, at a tiny fraction of a cent each. It was the belief that chips could continue to improve that helped to launch Silicon Valley, and that enabled the huge array of technological changes that have been brought about by cheap computer chips.
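To put a rough number on what a two-year doubling cadence implies, here is a quick illustration. The 1965 start date comes from Moore’s observation; the 2016 endpoint is simply an assumption standing in for “today”:

```python
# Rough illustration of what doubling transistor counts every two years implies.
# 1965 is Moore's starting point; 2016 is an assumed "today" for this post.

START_YEAR = 1965
END_YEAR = 2016
DOUBLING_PERIOD_YEARS = 2

doublings = (END_YEAR - START_YEAR) / DOUBLING_PERIOD_YEARS
growth_factor = 2 ** doublings

print(f"{doublings:.1f} doublings since {START_YEAR}")
print(f"Transistor counts roughly {growth_factor:,.0f} times higher")
# About 25 doublings - a growth factor in the tens of millions - which is how
# a chip goes from a handful of transistors to billions of them.
```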

The companies that make chips have thrived by creating a new generation of chips every few years that represented a significant leap forward in computing power. I think every adult understands the real life consequences of these changes – we’ve all been through the cycle of having to upgrade computers every few years, and more recently of having to upgrade cellphones. Each subsequent generation of PC or smartphone was expected to be considerably faster and more powerful.

But we are starting to reach the end of Moore’s law, mostly driven by limits of physics and the size of atoms. It now looks like there will be better chips perhaps every three years. And within a decade or so Moore’s law will probably come to an end. There may be faster and better computers developed after that point – but improvements will have to come from somewhere other than cramming more transistors into a smaller space.

There are researchers looking to improve computers in other ways – through better software or through chip designs that are more efficient with the same number of transistors. For instance, IBM and others have been working on 3D chips that stack layers of conventional chips on top of each other. And there has been a lot of research into using light instead of electricity to speed up the computing process.

We are already starting to see the result of the slowdown of Moore’s law. The PC and tablet industries are suffering because people are hanging onto those devices a lot longer than they used to. Apple and Samsung are both struggling due to a drastic reduction in the sale of premium smartphones – because new phones are no longer noticeably better than the old ones.

Faster chips also fueled a lot of other technologies, including many in the telecom world. Faster chips have brought us better and faster servers, routers, and switches. Better chips have led to improved generations of fiber optic gear, voice switches, cable TV headends, set-top boxes – basically every kind of telecom electronics. No doubt these technologies will keep improving, but soon the improvements won’t come from faster and more powerful processors. The improvements will have to come from elsewhere.

Faster and more powerful chips have enabled the start of whole new industries – smart cars, drones, robots, and virtual reality. But those new industries will not get the same boost during their fledgling years that other electronics-based industries got in the past. And that has a lot of technology futurists concerned. Nobody is predicting the end of innovation and new industries. But anything new that comes along will not get the boost that we’ve enjoyed these many decades through the knowledge that a new technology would improve almost automatically with more powerful processors.


Categories
Technology

The Anniversary of Fiber Optics

I recently saw an article noting that this month marks the fiftieth anniversary of the 1966 scientific paper by Charles Kao that kicked off the field of fiber optic communications. That paper eventually won him the Nobel Prize in physics in 2009. He was assisted by George Hockham, a British engineer who was awarded the Rank Prize for Opto-electronics in 1978.

We are so surrounded by fiber optic technology today that it’s easy to forget what a relatively new technology this is. We’ve gone from theoretical paper to the world covered with fiber optic lines in only fifty years.

As is usual with most modern inventions, Kao and Hockham were not the only ones looking for a way to use lasers for communications. Bell Labs had considered using glass fiber but abandoned the idea due to the huge attenuation they saw in glass – meaning that the laser light signal scattered quickly and wouldn’t travel very far. Bell Labs was instead looking at shooting lasers through hollow metal tubes using focused lenses.

The big breakthrough in the paper was showing that the attenuation in a glass fiber could be brought below 20 decibels per kilometer – low enough to be useful for communications – and that the high losses seen until then came from irregularities and impurities in the glass rather than from the glass itself.
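To see why 20 decibels per kilometer was the magic threshold, here is a minimal sketch of the attenuation math. The 20 dB/km figure comes from the paper as described above; the roughly 0.2 dB/km figure for modern single-mode fiber is a commonly cited typical value, not something from this post:

```python
# How much light survives a run of fiber at a given attenuation.
# Attenuation in dB relates input and output power by P_out = P_in * 10**(-dB/10).

def power_remaining(db_per_km: float, km: float) -> float:
    """Fraction of launched optical power left after `km` of fiber."""
    return 10 ** (-(db_per_km * km) / 10)

# The 1966 target: 20 dB/km leaves 1% of the light after a single kilometer.
print(f"20 dB/km over 1 km:    {power_remaining(20, 1):.1%} of the power remains")

# Modern single-mode fiber (~0.2 dB/km, an assumed typical value) leaves that
# same 1% only after a hundred kilometers.
print(f"0.2 dB/km over 100 km: {power_remaining(0.2, 100):.1%} of the power remains")
```

The symmetry is the point: the same one percent of the light survives, but over a hundred times the distance.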

It took a decade for the idea to be put to practical use. Corning Glass Works (now Corning Inc.) found ways to lower attenuation even further, and the first fiber optic cable was laid in Torino, Italy in 1977.

We didn’t see any widespread use of fiber optics in the U.S. until the early 1980s, when AT&T and a few other companies like the budding MCI began installing fiber as an alternative to copper for long-haul networks.

We’ve come a very long way since those first-generation fiber installations. The glass was expensive to manufacture, and so the early fiber cables generally did not contain very many strands of glass. It was not unusual to see 6- and 8-strand fibers being installed.

Compared to today’s standards, the fiber produced in the 1980s and into the early 1990s was dreadful stuff. Early fiber cables degraded over time, mostly due to microscopic cracks introduced into the cable during manufacturing and installation. These cracks grew over time and eventually caused the cables to become cloudy and unusable. Early splicing technologies were also a problem, and each splice introduced a significant amount of signal loss into the fiber run. I doubt that there is much, if any, functional fiber remaining from those early days.

But Corning and other companies have continually improved the quality of fiber optic cable, and today’s fiber is light-years ahead of the early cables. Splicing technology has also improved, and modern splices introduce very little loss into the transmission path. In fact, there is no good estimate today of how long a properly-installed fiber cable might last in the field. It’s possible that fiber installed today might still be functional 75 to 100 years from now. The major issue with the life of fiber today is no longer failure of the glass itself, but rather the damage that is done to fibers over time by fiber cuts and storm damage.

The speeds achieved in modern fiber optics are incredible. The newly commissioned undersea fiber that Google and others built between Japan and the west coast of the US can pass 60 terabits per second of data. Laser technology has probably improved even faster than fiber glass manufacturing. We’ve gotten to where fiber optic cable is taken for granted as something that is reliable and relatively easy to install and use. We certainly would be having a very different discussion about broadband today had fiber optic cables not improved so quickly over the last several decades.

Categories
Regulation - What is it Good For?

Decommissioning Copper Lines

The FCC just released new rules in WC Docket 13-3 having to do with the decommissioning of copper lines. These rules apply to all regulated LECs, not just to the large RBOCs. The order also declared that the large telcos are no longer considered dominant carriers.

These rules are needed because AT&T and Verizon have been pestering the Commission for five years to let them tear down copper lines. What has always surprised me about this Order is that it was included in the docket looking at the transition of the PSTN from TDM technology to Ethernet. Decommissioning copper has nothing to do with that topic, since copper lines for customers would function the same as today even with an all-IP network between carriers. But the two big telcos flooded this docket with end-user network issues until the FCC finally caved and included the topic.

The order establishes rules that carriers must follow if they want to automatically decommission copper. A carrier must file a plan with the FCC that guarantees that:

  • Network performance, reliability and coverage are substantially unchanged for customers.
  • Access to 911 and access for people with disabilities must still both meet current rules and standards.
  • There must be guaranteed compatibility with an FCC list of legacy services that includes such things as fire alarms, fax machines, medical monitors, and other devices that might not work on an IP network.

If a carrier can meet all of these requirements then they can file plans for each proposed copper retirement with the FCC. The company then needs to go through a specific notification process with customers.

While the FCC was not quite as explicit with a rule, it also expects that any replacement service for copper will remain affordable for customers.

If a telco can’t meet any one of the many requirements, then they have to file with the FCC and go through a formal review process to see if the retirement will be approved. The FCC is making it clear that there will be no guaranteed timeline for the manual process.

The main regulatory impact of the rules is that now all telcos have to go through a formal process before tearing down copper. There have been, in the past, many examples of telcos taking down copper with no notification to customers or to regulators. Small telcos that have been installing fiber to customers must take notice of these rules since the rules now apply to them as well. These rules also mean that a small telco can’t force a customer onto a fiber connection until it has gone through the FCC process.

There is still a lot of concern in rural areas that copper landlines will be taken down with only cellular service offered as the alternative. That may still happen under this process, but it’s likely that those sorts of situations will require the more detailed FCC review process and won’t be allowed automatically.

The dominant carrier issue is interesting. The FCC notes that in some markets traditional copper landlines have dropped to nearly single-digit penetration rates. By ending the dominant carrier requirement for the large telcos, the FCC has lowered the regulatory burden on the large companies. For issues like Section 214 compliance they are now treated the same as smaller telcos. Any FCC rules that were different for dominant versus non-dominant carriers now default to the non-dominant rules. But this ruling does not end any rules that are determined by the difference between price cap and rate-of-return carriers. Those rules remain in place.

Categories
Technology

A Last Gasp Technology for Copper?

Genesis Technical Systems of Canada has announced an improvement to an existing technology that might breathe some life into rural copper networks. The technology is called DSL rings. The technology is not entirely new and I can recall seeing it being discussed fifteen years ago, but the company has added a twist that improves on the concept.

DSL rings are essentially shared DSL. Currently deployed DSL technology can bond together two pairs of copper and in real-life networks can get as much as 50 Mbps speeds.  Under current DSL architecture, the bonded pairs are dedicated to a single home/business. DSL rings instead allows for the bonding of multiple pairs of copper that are then shared among multiple homes. In that bonding process there is a little less new bandwidth available from each pair added, so there is a natural limit on the number of copper pairs that can be bonded.

From the neighborhood device in a pedestal, the ring is created by using one copper pair “into” each home and one copper pair “out”. This architecture is looped through all of the homes so that they sit on one continuous copper ring. For example, in a neighborhood where ten homes can currently each get 10 Mbps using standard DSL, this technology might create an 80 Mbps pipe that would be shared by all ten homes. At peak times when all of the homes are using a lot of bandwidth this might not be much faster than today. But by sharing all of the bandwidth among everybody, customers would have access to more bandwidth when the network isn’t busy – a single customer would have access to the whole 80 Mbps pipe. The technology is an improvement on traditional DSL – it uses the same bandwidth-sharing concept as fiber and cable TV nodes, where customers in a neighborhood share bandwidth rather than each getting a separate bandwidth pipe.
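Here is a minimal sketch of that sharing math, built around the ten-home example above. The 10 Mbps per-pair rate comes from the example; the 5% per-pair bonding penalty is just an illustrative assumption meant to capture the “a little less from each added pair” effect described earlier:

```python
# Sketch of the shared-bandwidth math for a DSL ring.
# per_pair_mbps comes from the example in the post; the bonding efficiency is an
# illustrative assumption, not a vendor specification.

def ring_capacity(homes: int, per_pair_mbps: float = 10.0,
                  bonding_efficiency: float = 0.95) -> float:
    """Aggregate capacity of a ring where each added pair contributes a bit less."""
    return sum(per_pair_mbps * bonding_efficiency ** n for n in range(homes))

homes = 10
aggregate = ring_capacity(homes)

print(f"Shared pipe for {homes} homes:  {aggregate:.0f} Mbps")    # ~80 Mbps, as in the example
print(f"Busy-hour share per home:  {aggregate / homes:.0f} Mbps")  # everyone active at once
print(f"Quiet-hour burst per home: {aggregate:.0f} Mbps")          # one home gets the whole pipe
```

With those assumptions the ring lands right around the 80 Mbps in the example: each home averages about 8 Mbps at the busiest hour but can burst to the whole pipe when the neighborhood is quiet.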

The current DSL ring technology wouldn’t do anything useful for today’s rural DSL, since there is not a lot of benefit in bonding together slow connections that are only at 1 or 2 Mbps. But as CAF II is implemented by the big telcos and as faster DSL is built into the rural areas, this idea might make sense.

Genesis Technical Systems’ new twist is that they can use the DSL ring base unit as a DSL regeneration site, meaning it can not only serve the nearby homes but can also send bandwidth onward to the next DSL ring, starting a new 2- to 3-mile delivery circle around the next ring in the chain.

The big drawback to that idea is that the second ring is limited to the amount of bandwidth that can be sent to it up the copper, and so it won’t have nearly as much available bandwidth as a DSL ring that is fed by fiber. I see that as the big limiting factor. But this might allow for a network with one or two DSL ring ‘hops’ that can reach further out into the rural area with faster DSL, with each subsequent ring getting significantly less bandwidth.

The ideal configuration would be to feed each DSL ring with fiber. But even without considering the cost of building new fiber the technology is not cheap, in the range of $600 to $800 per home added.

There will be other issues to deal with in rural areas. Most rural copper networks are ‘loaded’, meaning that load coils have been installed to improve voice quality, and this loading would have to be removed to use the DSL technology. In some areas there might not be enough spare copper pairs to make a ring. These days we all assume that most homes have abandoned landlines for cellphones, but in rural areas where the cellular coverage is bad there are still pockets of homes where most have landlines. But copper pairs could be freed by converting analog voice to VoIP.

In looking at the technology, I see the most promising use of it in rural towns, like county seats. Neighborhood rings could be created that would upgrade DSL to compete with most current small-town cable modem systems. Where customers today might be buying DSL with speeds up to 6 Mbps or 12 Mbps, they might be able to get speeds up to 50 Mbps or 100 Mbps. The big caveat is that these rings would slow down during the busiest evening hours, much like older cable TV networks. Still, it would be a major DSL upgrade.

It’s an interesting technology, but at best it’s a last gasp for an old copper network. If this technology is used to move DSLAMs closer to rural homes, those homes are going to get a lot more bandwidth than they get today. It looks like in the ideal situation the technology would let customers burst faster than the FCC’s broadband definition of 25 Mbps. But to some degree this extra speed is illusory – during peak times the DSL would probably be significantly slower. My guess is that if one of the big telcos adopts the technology they will report the burst speeds to the FCC rather than the speeds achieved at the busy hours of the day. But customers would quickly figure out the difference.

Categories
Regulation - What is it Good For? Technology

Some Relief for WiFi?

The FCC is currently considering a proposal by Globalstar to open up a fourth and private WiFi channel. It looks like the vote is going to be close with Commissioners Rosenworcel and Pai saying they oppose the idea.

Globalstar, based in Covington, Louisiana, is a provider of satellite-based telephone systems, but has been dwarfed in that part of the industry by the much larger Iridium. Globalstar was awarded a swath of spectrum at the high end of the 2.4 GHz band to use for its satellite phones. The Globalstar spectrum sits next to the part of the WiFi band used for Bluetooth – but there is so little satellite phone usage that interference has never been an issue.

Globalstar made a proposal to make their spectrum available for WiFi, but with the twist that they want their slice of spectrum to be private and licensed by them. This differs from the rest of the WiFi spectrum, which is free and open for anybody to use. Globalstar argues that allowing some large users, such as AT&T, to use their spectrum would take a lot of the pressure off of existing WiFi.

There are places today where WiFi interference is noticeable, and it is likely to get worse. Cisco projects that the amount of data carried by WiFi will triple in the next three years – a growth rate 50% greater than data usage overall. There is expected to be a lot of demand put onto WiFi from the Internet of Things. And the cellular companies have a proposal called LTE-U that would let them dip into the WiFi spectrum for cellular data.

But as might be imagined there is a lot of opposition to the Globalstar plan. One of the major objections is that this would be a private use of the spectrum while the rest of the WiFi is available to everybody. Globalstar could license this to a handful of companies and give them an advantage over other WiFi users by giving them access to a largely empty swath of spectrum that wouldn’t have many users. Having a few companies willing to pay the price for Globalstar’s spectrum flies against the whole concept of making WiFi available to everybody.

But the primary concern about the idea is that it will cause interference with existing WiFi. Today the normal WiFi radios used to send and receive data are not very expensive, and they routinely broadcast signals outside of the narrow WiFi channels. This creates a condition called adjacent channel interference, where WiFi interferes with adjacent bands of spectrum. The FCC has handled this by creating buffers around each WiFi channel that allow for the bleed-over signals.

The Globalstar spectrum sits in one of those adjacent buffer zones, and critics say that heavy use of the Globalstar spectrum would then directly interfere with existing WiFi that already bleeds into the Globalstar spectrum. In general it’s never been a good idea to place two heavily used slices of spectrum next to each other without buffers, and the proposal would jam Globalstar spectrum right next to existing WiFi. On the other side of the Globalstar spectrum is the part of the band reserved for Bluetooth, and again, use of the spectrum would eliminate any buffer.

The opponents of the idea have been very vocal. They don’t think the FCC should allow the risk that Globalstar will create a clear channel for a few carriers while interfering with everybody else trying to use WiFi. The industry as a whole says this is a losing idea overall.

The issue has been in front of the FCC for a few years and looks like it will come to a vote soon. Chairman Wheeler is for the Globalstar plan with two other Commissioners already against it. It will be up to the final two commissioners to decide if this is a go or not.

 

Categories
The Industry

The County Dilemma

Somebody made a comment to me last week that we need more municipal broadband. I certainly agree with the sentiment, but I’ve recently been working with a lot of rural counties and what I’ve found is that bringing broadband to rural places is a lot harder than it sounds.

In the last year I have analyzed in detail a number of different rural counties. Engineers took a hard look at the cost of bringing broadband to each of these counties and I also created extensive financial models trying to find a way to pay for the broadband solution.

One unsurprising result of these studies is that it’s exceedingly hard to find and fund a permanent broadband solution in rural places. A few of the counties I studied were in the Midwest, where the soil is deep and soft and where burying fiber is as cheap as, or sometimes even cheaper than, getting onto poles. But even with the lowest possible construction costs it can be hard to justify building rural fiber. And most of the country has higher construction costs than rural Iowa or Minnesota.

I’ve also looked at places where the soil is rocky and hard, making it expensive to bury fiber. But some of these places also have a big mess on their poles, making it a challenge to hang fiber. There are many rural pole networks that consist of short poles that need a lot of work, or even replacement, to add fiber. And as I have covered in several blogs, there are often major practical issues with getting access to poles even where it makes sense to do so.

But the number one issue with building rural fiber is getting financing. As it turns out, many rural counties have an exceedingly hard time contributing much financing towards a broadband network.

Citizens who want fiber often say that local governments ought to just suck it up and issue the bonds needed to build fiber. But that sentiment is naïve. Rural counties generally don’t have the borrowing capacity to fully fund a fiber network. I’ve looked at counties recently where the cost of building just the fiber and electronics (which ignores operating losses and the cost of financing) ranged from $20 million to over $100 million. Numbers that large are beyond the ability of most rural counties to finance, even if they have the political will.
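To put rough numbers on why that is out of reach, here is a back-of-the-envelope debt-service sketch. The $20 million and $100 million construction costs come from the examples above; the 4% interest rate and 25-year term are purely illustrative assumptions:

```python
# Back-of-the-envelope annual debt service on a fiber construction bond.
# Construction costs come from the post; the rate and term are assumptions.

def annual_debt_service(principal: float, rate: float = 0.04, years: int = 25) -> float:
    """Level annual payment on a fully amortizing bond (standard annuity formula)."""
    return principal * rate / (1 - (1 + rate) ** -years)

for cost in (20_000_000, 100_000_000):
    print(f"${cost:,.0f} build -> roughly ${annual_debt_service(cost):,.0f} per year")

# Roughly $1.3 million a year for a $20 million build and $6.4 million a year
# for a $100 million build - before any operating losses or fiber revenues.
```

Set against the limited discretionary revenues described below, even the low end of that range is a heavy annual lift for a rural county budget.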

Rural counties as a whole don’t have a lot of discretionary money. By that, I mean that the revenues they are able to collect are generally almost entirely needed for the services they are required to provide by law. Counties have a long list of responsibilities. They generally have to maintain extensive road systems and bridges. They generally have to fund a police force and jail system. They have to provide a healthcare system of some sort. Many of them have to provide water and sewer systems to at least some of their constituents. And they have to take care of the daily issues of removing snow, repairing potholes, and all of those things that local governments do for citizens.

Counties everywhere have similar sources of funding. For instance, they collect property taxes, but some significant portion of those taxes is usually earmarked for specific purposes like the school systems. Counties also generally share in the sales taxes collected anywhere in the county, but in rural counties this is a much smaller revenue source than for more urban places. Counties also typically get a significant amount of their funding from the state or federal government, but these funds are usually earmarked for specific purposes as well. And most rural counties don’t collect a lot of taxes from businesses, which are a significant funding source for cities and towns.

I’ve talked to the bond advisors in many rural counties about the possibility of financing fiber. What I’ve generally found is that even if these counties borrow up to their credit limit they can’t raise nearly enough to pay for a broadband network. And so many county governments, as much as they might want to find a broadband solution, are not themselves able to contribute much towards paying for it. In many counties, municipal funding is never going to be possible, meaning that broadband networks will need to be funded in some other way.
