A 5G Timeline

Network World recently published their best guess at a timeline for 5G cellular deployment. As happens with all new technologies that make a big public splash, the actual deployment is likely to take a lot longer than what the public expects.

They show the timeline as follows:

  • 2017 – Definition, specification, requirements, technology development and technology field tests
  • 2019/20 – Formal specifications
  • 2021 – Initial production service rollouts
  • 2025 – Critical mass
  • 2030+ – Phase-out of 4G infrastructure begins

There is nothing surprising about this timeline. We saw something similar in the cellular world with the roll-outs of both 3G and 4G, and there is no reason to think that 5G will be introduced any faster. An incredible number of things must fall into place before 5G can be widely available.

Just to be clear, this timeline is talking about the use of the 5G standard for cellular service, as opposed to the same 5G terminology that is being used to describe high-speed radio connections used to deliver broadband over short distances. The term 5G is going to confuse the public for years, until at some point we need different names for the two different technologies.

As with any new technology, it will probably be fifteen years until there is equipment that incorporates the full 5G specification. We are only now seeing fully-compliant 4G electronics. This means that early 5G roll-outs will implement only a few of the new 5G features. Just as with 4G, we can expect successive 5G roll-outs as new features are introduced and the technology inches forward. We won’t go straight to 5G, but will work our way through 4.1G and 4.2G until we finally get to the full 5G specification.

Here are just a few of the things that have to happen before 5G cellular is widely deployed.

  • Standards have to be completed. Some of the first-generation standards will be completed by the end of this year, but that’s not the end of the standards process. Additional standards will be developed over the next few years to address the practical issues of deploying the technology.
  • Then equipment must be developed that meets the new standards. While many wireless companies are already working on this, it takes a while to go from lab prototype to mass production.
  • True field trials are then needed. In the wireless world we have always seen a big difference between the capabilities that can be demonstrated in a lab and the real performance achieved in differing outdoor environments. Real field trials can’t proceed until there is finished, non-prototype equipment that can be tested in many different environments.
  • Then the cellular companies have to start deploying the equipment into the field. That means not only upgrading the many existing cell towers, but it’s going to mean deploying into smaller neighborhood cell sites. As I’ve written about recently, this means building a lot of new fiber and it means solving the problems of deploying small cell sites in neighborhoods. If we’ve learned anything from the recent attempt by the cell companies to deploy small 4G cell sites it’s that these two issues are going to be a major impediment to 5G deployment. Just paying for all of the needed fiber is a huge hurdle.
  • One of the biggest challenges with a new cellular technology is introducing it into handsets. Handset makers will like the cachet of selling 5G, but the biggest issue with cellphones is battery power, and it’s going to be costly and inefficient to put the more complicated 5G massive-MIMO antennas into handsets. That’s going to make the first generation of 5G handsets expensive. This is always the catch-22 of a new cellular technology – handset makers don’t want to commit to making big volumes of more-expensive phones until customers can actually use the new technology, and the cellular carriers won’t deploy much of the 5G technology until there are enough handsets in the world to use it. I’ve seen some speculation that this impasse could put a real hitch in 5G cellular deployment.

To a large degree the cellular industry is its own worst enemy. They have talked about 5G as the savior of all of our bandwidth problems, when we know that’s not true. Let’s not forget that when 4G was introduced the industry touted ubiquitous 100 Mbps cellphone connections – something that is still far above our capabilities today. One thing not shown on the timeline is when we finally get actual 5G capabilities on our cellphones. It’s likely to be 15 years from now, at about the time when we have shifted our attention to 6G.

ATSC 3.0 – More Spectrum for Broadband?

This past February the FCC approved the voluntary adoption of the new ATSC 3.0 over-the-air standard for television stations. There will be around twenty different standards included within the final protocol, defining such things as better video and audio compression, picture improvement using high dynamic range (HDR), a wider range of colors, the ability to use immersive sound, better closed captioning, an advanced emergency alert system, better security through watermarking and fingerprinting, and the ability to integrate IP delivery.

The most interesting new feature of the new standard is that it allows programmers to tailor their TV transmission signal in numerous ways. The one that is of the most interest to the telecom world is that the standard will allow a TV broadcaster to compress the existing TV transmission into a tiny slice of the spectrum which would free up about 25 Mbps of wireless bandwidth per TV channel.

A TV station could use that extra frequency themselves or could sell it to others. Broadcasters could use the extra bandwidth in a number of ways. For example, it’s enough bandwidth to transmit their signal in 4K. Stations could also transmit their signal directly to cellphones and other mobile devices. TV stations could instead use the extra bandwidth to enhance their transmissions with immersive sound or virtual reality. They could also use it to transmit additional digital channels inside one slice of spectrum.

But my guess is that a lot of TV stations are going to lease the spectrum to others. This is some of the most desirable spectrum available. The VHF bands range from 30 MHz to 300 MHz and the UHF bands from 300 MHz to 3 GHz. The spectrum has the desirable characteristics of being able to travel for long distances and of penetrating easily into buildings – two characteristics that benefit TV or broadband.

The first broadcasters that have announced plans to implement ATSC 3.0 are Sinclair and Nexstar. Together they own stations in 97 markets, including 43 markets where both companies have stations. The two companies are also driving a consortium of broadcasters that includes Univision and Northwest Broadcasting. This spectrum consortium has the goal of being able to provide a nationwide bandwidth footprint, which they think is essential for maximizing the economic value of leasing the spectrum. But getting nationwide coverage is going to require adding a lot more TV stations to the consortium, which could be a big challenge.

All this new bandwidth is going to be attractive to wireless broadband providers. One has to think that the big cellular companies will be interested in the bandwidth. This also might be an opportunity for the new cellular players like Comcast and Charter to increase their spectrum footprint. But it could be used in other ways. For instance, this could be used by some new provider to communicate with vehicles or to monitor and interface with IoT devices.

The spectrum could provide a lot of additional bandwidth for rural broadband. It’s likely that in metropolitan areas the extra bandwidth will get gobbled up to satisfy one or more of the uses listed above. But in rural areas this spectrum could be used to power point-to-multipoint radios and could add a huge amount of bandwidth to that effort. The channels are easily bonded together and it’s not hard to picture wireless broadband of a few hundred Mbps.
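To put that "few hundred Mbps" figure in perspective, here is a minimal back-of-the-envelope sketch in Python. The roughly 25 Mbps freed per TV channel comes from the discussion above; the channel counts and the overhead factor are purely illustrative assumptions.

# Rough math for bonding freed ATSC 3.0 channels into a point-to-multipoint
# broadband link. The ~25 Mbps per channel comes from the discussion above;
# the channel counts and the overhead factor are illustrative assumptions.
MBPS_PER_CHANNEL = 25       # approximate bandwidth freed per TV channel
PROTOCOL_OVERHEAD = 0.10    # assumed framing/management overhead

def bonded_capacity(channels: int) -> float:
    """Usable Mbps from bonding a number of freed TV channels together."""
    return channels * MBPS_PER_CHANNEL * (1 - PROTOCOL_OVERHEAD)

for n in (4, 8, 12):
    print(f"{n} bonded channels -> ~{bonded_capacity(n):.0f} Mbps usable")
# 12 bonded channels lands in the few-hundred-Mbps range described above.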

But this may never come to pass. Unlike WiFi, which is free, or 3.65 GHz, which can be cheaply licensed, this spectrum is likely to be costly. And one of the major benefits of the spectrum – the ability to travel for long distances – is also a detriment for many rural markets. Whoever is using this spectrum in urban areas is going to worry about interference from rural uses of the spectrum.

Of course, there are other long-term possibilities. As companies upgrade to the new standard they will essentially have reduced their need for spectrum. Since the TV stations were originally given this spectrum to transmit TV signals I can’t think of any reason they should automatically be allowed to keep and financially benefit from the freed spectrum. They don’t really ‘own’ the spectrum – it was provided to them originally by the FCC to launch television technology. There are no other blocks of spectrum I can think of that are granted in perpetuity.

TV station owners like Sinclair and Nexstar are salivating over the huge potential windfall that has come their way. I hope, though, that the FCC will eventually see this differently. One of the functions of the FCC is to equitably allocate spectrum to best meet the needs of all users of spectrum. If the TV stations keep the spectrum then the FCC will have ceded its spectrum management authority and it will be TV stations that determine the future spectrum winners and losers. That can’t be in the best interests of the country.

The Need for Fiber Redundancy

I just read a short article that mentioned that 30,000 customers in Corvallis, Oregon lost broadband and cable service when a car struck a utility pole and cut a fiber. It took Comcast 23 hours to restore service. There is nothing unusual about this outage and such outages happen every day across the country. I’m not even sure why this incident made the news other than that the number of customers that lost service from a single incident was larger than normal.

But this incident points to the issue of network redundancy – the ability of a network to keep working after a fiber gets cut. Since broadband is now becoming a necessity and not just a nice-to-have thing we are going to be hearing a lot more about redundancy in the future.

Lack of redundancy can strike anywhere, in big cities or small – but the effects in rural areas can be incredibly devastating. A decade ago I worked with Cook County, Minnesota, a county in the far north of the state. The economy of the county is driven by recreation and they were interested in getting better broadband. But what drove them to get serious about finding a solution was an incident that knocked out broadband and telephone to the entire county for several days. The County has since built its own fiber network, which now includes redundant route diversity to the rest of the world.

We used to have this same concern about the telephone networks and smaller towns often got isolated from making or receiving calls when there was a cable cut. But as cellphones have become prevalent the cries about losing landline telephone have diminished. But the cries about lack of redundancy are back after communities suffer the kinds of outages just experienced by Corvallis. Local officials and the public want to know why our networks can’t be protected against these kinds of outages.

The simple answer is money. It often means building more fiber, and at a minimum it takes a lot of expensive extra electronics to create network redundancy. The way that redundancy works is simple – there must be separate fiber or electronic paths into an area in order to provide two broadband feeds. This can be created in two ways. On larger networks it’s created with fiber rings. In a ring configuration two sets of electronics send every signal in both directions around the ring, so when a fiber is cut the signal is still received from the opposite direction. The other (and even more expensive) way to create diversity is to lay two separate fiber networks to reach a given location.
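As a toy illustration of that "send the signal both ways" idea, here is a minimal Python sketch; the node names and the ring layout are invented for the example. With two opposite paths around the ring, any single span cut still leaves every site reachable from the hub.

# Minimal sketch of ring redundancy: every node is reachable over two
# independent paths (clockwise and counter-clockwise), so a single fiber
# cut leaves service up. Node names and the ring layout are illustrative only.
RING = ["hub", "site_A", "site_B", "site_C"]  # nodes in ring order; the ring wraps back to the hub

def spans(path):
    """The fiber spans (edges) a path of nodes rides over."""
    return {frozenset(pair) for pair in zip(path, path[1:])}

def still_reachable(node, cut_span):
    """Is `node` still reachable from the hub after one span is cut?"""
    i = RING.index(node)
    clockwise = RING[: i + 1]                          # hub -> ... -> node
    counter = [RING[0]] + list(reversed(RING[i:]))     # hub -> ... -> node the long way around
    return any(cut_span not in spans(p) for p in (clockwise, counter))

cut = frozenset({"site_A", "site_B"})                  # a backhoe takes out one span
for n in RING[1:]:
    print(n, "still in service:", still_reachable(n, cut))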

Route redundancy tends to diminish as a network gets closer to customers. In the US we have many different types of networks. The long-haul fiber networks that connect the NFL cities are largely on rings. From the major cities there are regional fiber networks built to reach surrounding communities. Some of these networks are also on rings, but a surprising number are not and face the same kind of outages that Cook County had. Finally, there are the local networks of fiber, telephone copper, or coaxial cable that reach customers. It’s rare to see route diversity at the local level.

But redundancy can be added anywhere in the network, at a cost. For example, it is not unusual for large businesses to seek local route diversity. They most often achieve this by buying broadband from more than one provider. But sometimes this doesn’t work if those providers are sharing the same poles to reach the business. I’ve also seen fiber providers create a local ring for large businesses willing to pay the high price for redundancy. But most of the last mile that we all live and work on has no protection. We are always one local disaster away from losing service like happened in Corvallis.

But the Corvallis outage was not an outage where a cut wire knocked out a dozen homes on a street. The fiber that got cut was obviously one that was being used to provide coverage to a wide area. A lot of my clients would not design a network where an outage could affect so many customers. If they served a town the size of Corvallis they would build some local rings to significantly reduce the number of customers that could be knocked out by an outage.

But the big ISPs like Comcast have taken shortcuts over the years and they have not spent the money to build local rings. But I am not singling out Comcast here because I think this is largely true of all of the big ISPs.

The consequences of a fiber cut like the one in Corvallis are huge. That outage had to include numerous businesses that lost their broadband connection for a day – and many businesses today cannot function without broadband. Businesses that are run out of homes lost service. And the cut disrupted homework, training, shopping, medical monitoring, security alarms, banking – you name it – for 30,000 homes and businesses.

There is no easy fix for this, but as broadband continues to become essential in our lives these kinds of outages are going to become less acceptable. We are going to start to hear people, businesses, and local governments shouting for better network redundancy, just as Cook County did a decade ago. And that clamor is going to drive some of these communities to seek their own fiber solution to protect from the economic devastation that can come with even moderate network outages. And to some degree, if this happens the carriers will have brought this upon themselves due to pinching pennies and not making redundancy a higher priority in network design.

Shaking Up the FTTP Industry

Every once in a while I see something in the equipment market that surprises me. One of my clients recently got pricing for building a gigabit PON FTTP network from the Chinese company ZTE. The pricing is far under the market price for other brands of equipment, and it makes me wonder if this is not going to put downward price pressure on the rest of the industry.

There are two primary sets of electronics in a PON network – the OLT and ONTs. The OLT (Optical Line Terminal) is a centrally located piece of equipment that originates the laser signal headed towards customers. The OLT is basically a big bay of lasers that talk to customers. The ONT (Optical Network Terminal) is the device that sits at a customer location that has the matching laser that talks back to the OLT.

ZTE’s pricing is industry shaking. They have priced OLTs at almost a third of the price of their competition. They have been able to do this partially by improving the OLT cards that hold the lasers and each of their cards can connect to twice as many customers as other OLTs. This makes the OLT smaller and more energy efficient. But that alone cannot account for the discount and their pricing is obviously aimed at gaining a foothold in the US market.

The ONT pricing is even more striking. They offer a gigabit Ethernet-only indoor ONT for $45. That price is so low that it almost turns the ONT into a throwaway item. This is a very plain ONT. It has one Ethernet port and does not have any way to connect to existing inside wiring for telephone or cable TV. It’s clearly meant to work with WiFi at the customer end to deliver all services. The pricing is made even more affordable by the fact that ZTE offers lower-than-normal industry prices for the software needed to activate and maintain the ONTs in future years.

This pricing is going to lead companies to reexamine their planned network design. A lot of service providers still use traditional ONTs that contain multiple Ethernet ports and that also have ports for connection to both telephone copper and cable company coaxial wiring. But those ONTs are still relatively expensive and the most recent quotes I’ve seen put these between $200 and $220.

Using an Ethernet-only ONT means dumping the bandwidth into a WiFi router and using that for all services. That means having to use voice adapters to provide telephone service, similar to what’s been used by VoIP providers for years. But these days I have clients that are launching fiber networks without a voice product, and even if they want to support VoIP the adapters are relatively inexpensive. This network design also means delivering only IPTV if there is a cable product and this ONT could not be used with older analog-based cable headends.
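To make the trade-off concrete, here is a hedged back-of-the-envelope comparison in Python. The ONT prices come from the quotes mentioned above; the subscriber count and the voice-adapter price are placeholder assumptions, not figures from any actual deployment.

# Rough capital comparison of the two ONT approaches discussed above. The ONT
# prices come from the quotes in the text; the subscriber count and the
# voice-adapter cost are hypothetical placeholders.
SUBSCRIBERS = 1_000
ETHERNET_ONLY_ONT = 45         # gigabit Ethernet-only indoor ONT
TRADITIONAL_ONT = 210          # midpoint of the $200-$220 quotes
VOICE_ADAPTER = 30             # assumed cost of a VoIP adapter where voice is still sold

simple_build = SUBSCRIBERS * (ETHERNET_ONLY_ONT + VOICE_ADAPTER)
traditional_build = SUBSCRIBERS * TRADITIONAL_ONT

print(f"Ethernet-only ONT plus voice adapter: ${simple_build:,}")
print(f"Traditional multi-port ONT:           ${traditional_build:,}")
print(f"Savings per 1,000 subscribers:        ${traditional_build - simple_build:,}")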

ZTE is an interesting company. They are huge in China and are a $17 billion company. Cellphones are their primary product line, but they also make many different kinds of telecom gear like this PON equipment. They claim their FTTP equipment is widely used in China and that they have more FTTP customers connected than most US-based vendors.

This blog is not a blanket endorsement of the company. They have a questionable past. They have been accused of bribery in making sales in Norway and the Philippines. They also were fined by the US Commerce Department for selling technology to North Korea and Iran, both under sanctions. And to the best of my knowledge they are just now trying to crack into the US market, which always is something to consider.

But this kind of drop in FTTP pricing has been needed. It is surprising that OLTs and ONTs from other manufacturers still cost basically the same as they did years ago. We generally expect that as electronics are mass produced the prices will drop, but we have never seen this in a PON network. One can hope that this kind of pricing will prompt other manufacturers to sharpen their pencils. Larger fiber ISPs already get pricing cheaper than what I mentioned above on today’s equipment. But most of my clients are relatively small and have little negotiating power with equipment vendors. I hope this shakes the industry a bit – something that’s needed if we want to deploy fiber everywhere.

Our Aging Fiber Infrastructure

One thing that I rarely hear talked about is how many of our long-haul fiber networks are aging. The fiber routes that connect our largest cities were mostly built in the 1990s in a very different bandwidth environment. I have a number of clients that rely on long-haul fiber routes and the stories they tell me scare me about our future ability to move bandwidth where it’s needed.

In order to understand the problems of the long-haul networks it’s important to look back at how these fiber routes were built. Many were built by the big telcos. I can remember the ads from AT&T thirty years ago bragging how they had built the first coast-to-coast fiber network. A lot of other fiber networks were built by competitive fiber providers like MCI and Qwest, which saw an opportunity for competing against the pricing of the big telco monopolies.

A lot of the original fibers built on intercity routes were small by today’s standards. The original networks were built to carry voice and much smaller volumes of data than today and many of the fibers contain only 48 pairs of fiber.

To a large degree the big intercity fiber routes follow the same physical paths, sometimes following interstate highways, but to an even greater extent following the railroad tracks that run between markets. Most companies that move big amounts of data want route diversity to protect against fiber cuts or disasters, yet a significant percentage of the routes between many cities are located next to the fibers of rival carriers.

It’s also important to understand how the money works in these routes. The owners of the large fibers have found it to be lucrative to lease pairs of fiber to other carriers on long-term leases called IRUs (indefeasible rights to use). It’s not unusual to be able to shop for a broadband connection between primary and secondary markets, say Philadelphia and Harrisburg, and find a half-dozen different carriers. But deeper examination often shows they all share leased pairs in the same fiber sheath.

Our long-haul fiber network infrastructure is physically aging and I’ve seen a lot of evidence of network failures. There are a number of reasons for these failures. First, the quality of fiber glass today has improved enormously over glass that was made in the 1980s and 1990s. Some older fibers are starting to show signs of cloudiness from age, which kills a given fiber pair. Probably even more significant is the fact that fiber installation techniques have improved over the years. We’ve learned that if a fiber cable is stretched or stressed during installation, microscopic cracks can form that slowly spread over time until a fiber becomes unusable. And finally, we are seeing the expected wear and tear on networks. Poles get knocked down by weather or accidents. Contractors occasionally cut buried fibers. Every time a long-haul fiber is cut and re-spliced it loses a little efficiency, and over time the splices can add up to become problems.
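To see how repair splices eat into a route's optical budget, here is a simple illustrative loss-budget sketch in Python. All of the numbers (span length, attenuation, per-splice loss, and the available budget) are textbook-style assumptions rather than measurements from any real route.

# Toy optical loss budget showing how accumulated repair splices erode margin.
# Every value here is an illustrative assumption, not a measurement.
SPAN_KM = 80
FIBER_LOSS_DB_PER_KM = 0.25    # assumed attenuation for older single-mode glass
SPLICE_LOSS_DB = 0.1           # assumed loss added by each repair splice
LINK_BUDGET_DB = 28            # assumed optical budget between repeater huts

def remaining_margin(repair_splices: int) -> float:
    """dB of margin left on the span after a number of repair splices."""
    loss = SPAN_KM * FIBER_LOSS_DB_PER_KM + repair_splices * SPLICE_LOSS_DB
    return LINK_BUDGET_DB - loss

for n in (0, 20, 50, 80):
    print(f"{n:>2} repair splices -> {remaining_margin(n):.1f} dB of margin left")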

Probably the parts of the network that are in the worst shape are the electronics. It’s an expensive proposition to upgrade the bandwidth on a long-haul fiber network because that means not only changing lasers at the end points of a fiber, but at all of the repeater huts along a fiber route. Unless a fiber route is completely utilized the companies operating these routes don’t want to spend the capital dollars needed to improve bandwidth. And so they keep operating old electronics that are often many years past their expected functional lives.

Construction of new long-haul fiber networks is incredibly expensive and it’s rare to hear of any major initiative to build fiber on the big established intercity routes. Interestingly, the fiber to smaller markets is in much better shape than the fiber between NFL cities. These secondary fiber routes were often built by groups like consortiums of independent telephone companies. There were also some significant new fiber routes built using the stimulus funding in 2008.

Today a big percentage of the old intercity fiber network is owned by AT&T, Verizon and CenturyLink. They built a lot of the original network but over the years have also gobbled up many of the other companies that built fiber – and are still doing so, like with Verizon’s purchase last year of XO and CenturyLink’s purchase of Level3. I know a lot of my clients worry every time one of these mergers happens because it removes another of a small handful of actual fiber owners from the market. They are fearful that we are going to go back to the old days of monopoly pricing and poor response to service issues – the two issues that prompted most of the construction of competitive fiber routes in the first place.

A lot of the infrastructure of all types in this country is aging. Sadly, I think we need to put a lot of our long-haul fiber backbone network into the aging category.

The Return of Edge Computing

We just went through a decade where the majority of industry experts told us that most of our computing needs were going to move to the cloud. But it seems that trend is starting to reverse somewhat and there are many applications where we are seeing the return of edge computing. This trend will have big implications for broadband networks.

Traditionally everything we did involved edge computing – or the use of local computers and servers. But a number of big companies like Amazon, Microsoft and IBM convinced corporate America that there were huge benefits of cloud computing. And cloud computing spread to small businesses and homes and almost every one of us works in the cloud to some extent. These benefits are real and include such things as:

  • Reduced labor costs from not having to maintain an in-house IT staff.
  • Disaster recovery of data due to storing data at multiple sites.
  • Reduced capital expenditures on computer hardware and software.
  • Increased collaboration due to having a widely dispersed employee base on the same platform.
  • The ability to work from anywhere there is a broadband connection.

But we’ve also seen some downsides to cloud computing:

  • No computer system is immune from outages, and an outage in a cloud network can take an entire company out of service, not just a local branch.
  • A security breach into a cloud network exposes the whole company’s data.
  • Cloud networks are subject to denial of service attacks.
  • Loss of local control over software and systems – a conversion to the cloud often means abandoning valuable legacy systems and losing some of their functionality.
  • Not always as cheap as hoped for.

The recent move away from cloud computing comes from computing applications that need huge amounts of processing done in real time. The most obvious example of this is the smart car. Some of the smart cars under development run as many as 20 servers onboard, making them a driving datacenter. There is no hope of ever moving the brains of smart cars or drones to the cloud due to the huge amounts of data that must be passed quickly between a car’s sensors and its computers. Any external connection is bound to have too much latency to make true real-time decisions.

But smart cars are not the only edge devices that don’t make sense on a cloud network. Some other such applications include:

  • Drones have the same concerns as cars. It’s hard to imagine a broadband network that can be designed to always stay in contact with a flying drone or even a sidewalk delivery drone.
  • Industrial robots. Many new industrial robots need to make decisions in real-time during the manufacturing process. Robots are no longer just being used to assemble things, but are also being used to handle complex tasks like synthesizing chemicals, which requires real-time feedback.
  • Virtual reality. Today’s virtual reality devices need extremely low latencies in order to deliver a coherent image and it’s expected that future generations of VR will use significantly more bandwidth and be even more reliant on real-time communications.
  • Medical devices like MRIs also require low latencies in order to pass huge data files rapidly. As we build artificial intelligence into hospital monitors the speed requirement for real-time decision making will become even more critical.
  • Electric grids. It turns out that it doesn’t take much of a delay to knock down an electric grid, and so local feedback is needed to make split-second decisions when problems pop up on grids.

We are all familiar with a good analogy of the impact of performing electronic tasks from a distance. Anybody my age remembers when you could pick up a telephone, have instant dialtone, and then also got a quick ring response from the phone at the other end. But as we’ve moved telephone switches farther from customers it’s no longer unusual to wait seconds to get a dialtone, and to wait even more agonizing seconds to hear the ringing starting at the other end. Such delays are annoying for a telephone call but deadly for many computing applications.

Finally, one of the drivers of the move to more edge computing is the desire to cut down on the amount of bandwidth that must be transmitted. Consider a factory where thousands of devices are monitoring specific operations during the manufacturing process. The idea of sending these mountains of data to a distant location for processing seems almost absurd when local servers can handle the data at faster speeds with lower latency. But cloud computing is certainly not going to go away and is still the best network for many applications. In this factory example it would still make sense to send alarms and other non-standard data to a remote monitoring location even if the data needed to keep a machine running is processed locally.
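Here is a minimal sketch of that edge-filtering idea in Python: process the full stream of sensor readings locally and forward only the alarms to a remote monitoring site. The sensor names, reading format, and temperature threshold are all invented for the illustration.

# Minimal edge-filtering sketch: every reading is handled locally, and only
# alarm events are uplinked. Names, thresholds and formats are invented.
ALARM_TEMP_C = 85.0   # hypothetical overheat threshold for a machine

def process_locally(readings):
    """Handle every reading at the edge; return only events worth sending upstream."""
    alarms = []
    for sensor_id, temp_c in readings:
        # real-time control decisions would happen here, with no WAN round-trip
        if temp_c > ALARM_TEMP_C:
            alarms.append({"sensor": sensor_id, "temp_c": temp_c, "event": "overheat"})
    return alarms

readings = [("press_3", 72.4), ("press_7", 91.2), ("oven_1", 84.9)]
to_cloud = process_locally(readings)
print(f"{len(readings)} readings handled locally, {len(to_cloud)} alarm(s) sent to the cloud")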

 

Indoor or Outdoor ONTs?

I have a lot of clients with FTTP networks and I find it interesting that they have significantly different views on placing subscriber fiber terminals (ONTs) outdoors on the side of the premises versus indoors. There are significant pros and cons for each approach and many of my clients wrestle hard with the issue.

Originally the outdoor ONT was the only option, and all of the FTTP networks built until a few years ago used outdoor ONTs. But now there are pros and cons to each type of ONT, which doesn’t make this an easy decision.

Pros for Outdoor ONTs

  • An outdoor ONT allows technicians to install and service the ONT without having to schedule and coordinate with customers. In today’s world of working families this is often a huge plus in getting access to the ONTs during working hours.
  • Outdoor ONTs are generally undisturbed once installed and customers rarely touch them.
  • Creates a clear demarcation point between the ISP and the customer – what’s inside is the customer’s responsibility.

Cons for Outdoor ONTs

  • If not installed properly the ONT can let in water or dust and invite corrosion. Installed properly this should not be an issue, but companies that use contract installers worry about it.
  • Must connect to existing home wiring, and in many cases that means running new copper, coaxial or Cat5 cables.
  • Can be powered outside from the electric meter, but this adds costs and in some states increases installation costs if a licensed electrician is required to tie into a meter. If powered from inside the ONT runs the risk of being unplugged by customers – a fairly common occurrence.

Pros of Indoor ONTs

  • Can be a little less expensive, but that’s not automatic and you need to consider installation labor as well as the cost of the electronics.
  • Avoids the outdoor power issue and can be plugged in anywhere in the home. This makes it easier to deploy where the customer wants it rather than where the fiber happens to hit the house. Generally easy to feed into an existing phone jack for voice service.
  • Allows customers to help with troubleshooting by looking at the various indicator lights.

Cons of Indoor ONTs

  • Requires running fiber through the wall and somewhere into the home. This muddies the demarcation point between ISP and customer.
  • Since the ONT is connected to fiber, there are numerous opportunities for customers to bend, break or pinch the fiber.
  • Customers often walk away with them when they move.
  • One new issue is that many indoor ONTs now include a WiFi modem. Considering the rapid changes in wireless technologies, it’s likely that the WiFi modem will need to be upgraded before the end of the ONT’s normal lifecycle, adding considerable replacement costs over time.

The issue becomes further complicated by the fact that most FTTP vendors now offer dual-use ONTs that can be used indoors or outdoors. But even that causes some dilemmas because these ONTs are probably not the perfect solution for either location. These flexible-use ONTs do allow a company to put some indoors and some outdoors, depending upon the customer situation. But any company that chooses to deploy both ways then faces the dilemma of needing two different sets of processes for dealing with technicians and customers – something that most of my clients try to avoid.

 

Machine Generated Broadband

One of the more interesting predictions in the latest Cisco annual internet forecast is that there will be more machine-to-machine (M2M) connections on the Internet by 2021 than there are people using smartphones, desktops, laptops and tablets.

Today there are a little over 11 billion human-used devices connected to the Internet. That number is growing steadily and Cisco predicts that by 2021 there will be over 13 billion such devices. That prediction also assumes that worldwide Internet penetration will grow from 44% of people in 2016 to 58% by 2021.

But the use of M2M devices is expected to grow a lot faster. There are fewer than 6 billion such devices in use today and Cisco is projecting that will grow to nearly 14 billion by 2021.
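For what it’s worth, the implied growth rate behind those forecast numbers is easy to work out; the 2016 and 2021 device counts below are the approximate Cisco figures cited above.

# Implied annual growth rate behind the Cisco forecast quoted above
# (roughly 6 billion M2M devices in 2016 growing to about 14 billion by 2021).
m2m_2016 = 6e9
m2m_2021 = 14e9
years = 2021 - 2016

cagr = (m2m_2021 / m2m_2016) ** (1 / years) - 1
print(f"Implied M2M growth rate: ~{cagr:.0%} per year")   # roughly 18-19% per year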

So what is machine-to-machine communication? Broadly speaking it is any technology that allows networked devices to exchange information and perform actions without assistance from humans. This encompasses a huge range of different devices including:

  • Cloud data centers. When something is stored in the cloud, most cloud services create duplicate copies of the data at multiple data centers to protect against a failure at any one data center. While this does not represent a huge number of devices when measured on a scale of billions, the volume of traffic between data centers is gigantic.
  • Telemetry. Telemetry has been around since before the Internet. Telemetry includes devices that monitor and transmit operational data from field locations of businesses, with the most common examples being devices that monitor the performance of electric networks and water systems. But the devices used for telemetry will grow rapidly as our existing utility grids are upgraded to become smart grids and when telemetry is used by farmers to monitor crops and animals, used to monitor wind and solar farms, and used to monitor wildlife and many other things in the environment.
  • Home Internet of Things. Much of the growth of devices will come from an explosion of devices used for the Internet of Things. In the consumer market that will include all of the smart devices we put into homes such as burglar alarms, cameras, smart door locks and smart appliances of many kinds.
  • Business IoT. There is expected to be an even greater proliferation of IoT devices for businesses. For example, modern factories that include robots are expected to have numerous devices that monitor and direct the performance of machines. Hospitals are expected to replace wires with wireless networked devices used to monitor patients. Retail stores are all investigating devices that track customers through the store to assist in shopping and to offer inducements to purchase.
  • Smart Cars and Trucks. By 2021 it’s expected that most new cars and trucks will routinely communicate with the Internet. This does not necessarily imply self-driving vehicles, but rather that all new vehicles will have M2M capabilities.
  • Smart Cities. A number of large cities are looking to improve living conditions using smart city technologies. This is going to require the deployment of huge numbers of sensors that will be used to improve things like traffic flow, crime monitoring, and everyday services like garbage collection and snow removal.
  • Wearables. Today there are huge numbers of fitness monitors, but it’s expected that it will become routine for people to wear health monitors of various types that keep track of vital statistics and monitor to catch problems at an early stage.
  • Gray Areas. There are also a lot of machine-to-machine communications that come from computers, laptops and smartphones. I see that my phone uses data even at those times when I’m not using it. Our devices now query the cloud to look for updates, to make back-ups of our data or to take care of other tasks that our apps do in the background without our knowledge or active participation.

Of course, having more machine-to-machine devices doesn’t mean that this traffic will grow to dominate web traffic. Cisco predicts that by 2021 83% of the traffic on the web will be video of some sort. While most of that video will be used for entertainment, it will also include huge piles of broadband usage for surveillance cameras and other video sources.

If you are interested in M2M developments I recommend M2M: Machine2Machine Magazine. This magazine contains hundreds of articles on the various fields of M2M communications.

5G Needs Fiber

I am finally starting to see an acknowledgement by the cellular industry that 5G implementation is going to require fiber – a lot of fiber. For the last year or so the industry press – prompted by misleading press releases from the wireless companies – made it sound like wireless was our future and that there would soon not be any need for building more wires.

As always, when there is talk about 5G we need to be clear which 5G we are talking about, because there are two distinct 5G technologies on the horizon. One is high-speed wireless loops delivered directly to homes and businesses as a replacement for a wired broadband connection. The other is 5G cellular providing bandwidth to our cellphones.

It’s interesting to see the term 5G being used for a wireless microwave connection to a home or business. For the past twenty years this same technology has been referred to as wireless local loop, but in the broadband world the term 5G has marketing cachet. Interestingly, a lot of these high-speed data connections won’t even be using the 5G standards and could just as easily be transmitting the signals using Ethernet or some other transmission protocol. But the marketing folks have declared that everything that uses the millimeter wave spectrum will be deemed 5G, and so it shall be.

These fixed broadband connections are going to require a lot of fiber close to customers. The current millimeter wave radios are capable of delivering speeds up to a gigabit on a point-to-point basis. And this means that every 5G millimeter wave transmitter needs to be fiber fed if there is any desire to offer gigabit-like speeds at the customer end. You can’t use a 1-gigabit wireless backhaul to feed multiple gigabit transmitters, and thus fiber is the only way to get the desired speeds to the end locations.
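A quick bit of arithmetic shows why. The sketch below assumes a handful of gigabit-capable radios fed from one node and a modest oversubscription ratio; both inputs are illustrative assumptions rather than engineering figures from any carrier.

# Rough backhaul math behind the claim above that a 1 Gbps wireless feed can't
# supply several gigabit-class transmitters. Both inputs are assumptions.
GBPS_PER_TRANSMITTER = 1.0
TRANSMITTERS_PER_NODE = 4      # assumed pole-mounted radios fed from one point
OVERSUBSCRIPTION = 2.0         # assume not every radio peaks at the same time

peak_demand = GBPS_PER_TRANSMITTER * TRANSMITTERS_PER_NODE
engineered_backhaul = peak_demand / OVERSUBSCRIPTION

print(f"Peak demand at the node:        {peak_demand:.0f} Gbps")
print(f"Backhaul needed at 2:1 oversub: {engineered_backhaul:.0f} Gbps")
# Even with oversubscription the node needs multi-gigabit backhaul, which in
# practice means a fiber feed rather than a wireless relay.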

The amount of fiber needed for this application is going to depend upon the specific way the network is being deployed. Right now the predominant early use for this technology is to use the millimeter wave radios to serve an entire apartment building. That means putting one receiver on the apartment roof and somehow distributing the signal through the building. This kind of configuration requires fiber only to those tall towers or rooftops used to beam a signal to nearby apartment buildings. Most urban areas already have the fiber to tall structures to support this kind of network.

But for the millimeter technology to bring gigabit speeds everywhere it is going to mean bringing fiber much closer to the customer. For example, the original Starry business plan in Boston had customers receiving the wireless signal through a window, and that means having numerous transmitters around a neighborhood so that a given apartment or business can see one of them. This kind of network configuration will require more fiber than the rooftop-only network.

But Google, AT&T and Verizon are all talking about using millimeter wave radios to bring broadband directly into homes. That kind of network is going to require even more fiber since a transmitter is going to need a clear shot near to street-level to see a given home. I look around my own downtown neighborhood and can see that one or two transmitters would only reach a fraction of homes and that it would take a pole-mounted transmitter in front of homes to do what these companies are promising. And those transmitters on poles are going to need to be fiber-fed if they want to deliver gigabit broadband.

Verizon seems to understand this and they have recently talked about needing a ‘fiber-rich’ environment to deploy 5G. The company has committed to building a lot of fiber to support this coming business plan.

But, as always, there is a flip side to this. These companies are only going to deploy these fast wireless loops in neighborhoods that already have fiber or in places where it makes economic sense to build it. And this is going to mean cherry-picking – the same as the big ISPs do today. They are not going to build the fiber in neighborhoods where they don’t foresee enough demand for the wireless broadband. They won’t build in neighborhoods where the fiber construction costs are too high. One only has to look at the hodgepodge Verizon FiOS fiber network to see what this is going to look like. There will be homes and businesses offered the new fast wireless loops while a block or two away there will be no use of the technology. Verizon has already created fiber haves and have-nots due to the way they built FiOS and 5G wireless loops are going to follow the same pattern.

I think the big ISPs have convinced politicians that they will be solving all future broadband problems with 5G, just as they made similar promises in the past with other broadband technologies. But let’s face it – money talks and these ISPs are only going to deploy 5G / fiber networks where they can make their desired returns.

And that means no 5G in poorer neighborhoods. It might mean little or limited 5G in neighborhoods with terrain or other similar issues. And it certainly means no 5G in rural America because the cost to build a 5G network is basically the same as building a landline fiber network – it’s not going to happen, at least not by the big ISPs.

Tackling Pole Attachment Issues

In January the new FCC Chairman Ajit Pai announced the formation of a new federal advisory committee – the Broadband Deployment Advisory Committee (BDAC). This new group has broken into sub-groups to examine various ways that the deployment of broadband could be made easier.

I spoke last week to the Sub-Committee for Competitive Access to Broadband Infrastructure, i.e. poles and conduits. This group might have the hardest task of all because getting access to poles has remained one of the most challenging tasks of launching a new broadband network. Most of the issues raised by a panel of experts at the latest meeting of this committee are nearly the same issues that have been discussed since the 1996 Telecommunications Act that gave telecom competitors access to this infrastructure.

Here are some of the issues that still make it difficult for anybody to get onto poles. Each of these is a short synopsis of an issue, but pages could be written about the more detailed specifics involved in each of these topics:

Paperwork and Processes. It can be excruciatingly slow for a fiber overbuilder to get onto poles, and time is money. There are processes and paperwork thrown at a new attacher that often seem designed for no other reason than to slow down the process. This can be further exacerbated when the pole owner (such as AT&T) is going to compete with the new attacher, giving the owner an incentive to slow-roll the process, as has been done in several cities with Google Fiber.

Cooperation Among Parties. Even if the paperwork needed to get onto poles isn’t a barrier, one of the biggest delays in the process of getting onto poles can be the requirement to coordinate with all of the existing attachers on a given pole. If the new work requires any changes to existing attachers they must be notified and they must then give permission for the work to be done. Attachers are not always responsive, particularly when the new attacher will be competing with them.

Who Does the Work? Pole owners or existing attachers often require that a new attacher use a contractor that they approve to make any changes to a pole. Getting into the schedule for these approved contractors can be another source of delay if they are already busy with other work. This process can get further delayed if the pole owner and the existing attachers don’t have the same list of approved contractors. There are also issues in many jurisdictions where the pole owner is bound by contract to only use union workers – not a negative thing, but one more twist that can sometimes slow down the process.

Access Everywhere. There are still a few groups of pole owners that are exempt from having to allow attachers onto their poles. The 1996 Act made an exception for municipalities and rural electric cooperatives for some reason. Most of these exempt pole owners voluntarily work with those that want access to their poles, but there are some that won’t let any telecom competitor on their poles. I know competitive overbuilders who were ready to bring fiber to rural communities only to be denied access by electric cooperatives. In a few cases the overbuilder decided to pay a higher price to bury new fiber, but in others the overbuilder gave up and moved on to other markets.

Equity. A new attacher will often find that much of the work needed to be performed to get onto poles is largely due to previous attachers not following the rules. Unfortunately, the new attacher is still generally on the hook for the full cost of rearranging or replacing poles even if that work is the result of poor construction practices in the past coupled with lax inspection of completed work by pole owners.

Enforcement. Perhaps one of the biggest flaws in the current situation is enforcement. While there are numerous federal and state laws governing the pole attachment process, in most cases there are no remedies other than a protracted lawsuit against a pole owner or against an existing attacher that refuses to cooperate with a new attacher. There is no reasonable and timely remedy to make a recalcitrant pole owner follow the rules.

And enforcement can go the other way. Many of my clients own poles and they often find that somebody has attached to their poles without notifying them or following any of the FCC or state rules, including paying for the attachments. There should be penalties, perhaps including the removal of maverick pole attachments.

Wireless Access. There is a whole new category of pole attachments for wireless devices that raises a whole new set of issues. The existing pole attachment rules were written for those that want to string wires from pole to pole, not for placing devices of various sizes and complexities on existing poles. Further, wireless attachers often want to attach to light poles or traffic signal poles, for which there are no existing rules.

Solutions. It’s easy to list all of the problems, and the Sub-Committee for Competitive Access to Broadband Infrastructure is tasked with suggesting solutions to these many problems. Most of these problems have plagued the industry for decades and there are no easy fixes for them. Since many of the problems of getting onto poles are with pole or wire owners that won’t comply with the current attachment rules, there is no easy fix unless there is a way to force them to comply. I’ll be interested to see what this group recommends to the FCC. Since the sub-committee contains many different factions from the industry it will be interesting to see if they can come to a consensus on any issue.