Using Fiber as a Sensor

I am an admitted science nerd. I love to spend time crawling through scientific research papers. I don’t always understand the nuances since scientific papers are often written in dense jargon, but I’ve always been fascinated by scientific research, because it presages the technology of a few decades from now.

I ran across research by Nokia Bell Labs on using fiber as a sensor. Scientists there have been exploring ways to interpret the subtle changes that happen in a long strand of fiber. The world is suddenly full of fiber strands, and scientists want to know if they can discern any usable real-life data from measuring changes in fiber.

They are not looking at the data transmitted by the light inside the fiber. Fiber electronics have been designed to isolate the light signal from external stimuli. We don’t get a degraded signal when a fiber cable is swaying in the wind. We probably don’t marvel enough about the steady and predictable nature of a fiber light signal.

The research is exploring if the physical attributes of the fiber can be used to predict problems in the network before they occur. If a network operator knows that a certain stretch of fiber is under duress, then steps can be taken to address the issues long before there is a fiber outage. Developing ways to interpret the stresses on fiber would alone justify the research many times over.

But scientists can foresee a much wider range of sensor capabilities. Consider a fiber strung across a bridge. It’s hard to measure tiny shifts in the steel infrastructure of a bridge. However, a fiber cable across the bridge can sense and measure subtle changes in the tension on the bridge and might reveal how the bridge is shifting long before it becomes physically obvious.
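
Much of this sensing boils down to watching a noisy measurement for departures from its recent baseline. Below is a minimal sketch of that idea using simulated readings and a made-up threshold – it says nothing about Bell Labs’ actual methods, just the flavor of the problem.

```python
import numpy as np

def flag_anomalies(readings, window=500, k=4.0):
    """Flag samples that land well outside a rolling baseline.

    readings : 1-D array of sensor readings over time
    window   : number of recent samples that form the baseline
    k        : how many standard deviations count as anomalous
    """
    flagged = []
    for i in range(window, len(readings)):
        baseline = readings[i - window:i]
        mean, std = baseline.mean(), baseline.std()
        if std > 0 and abs(readings[i] - mean) > k * std:
            flagged.append(i)
    return flagged

# Simulated signal: quiet noise with a sudden shift starting at sample 4000,
# the kind of change a stressed span might show.
rng = np.random.default_rng(1)
signal = rng.normal(0.0, 1.0, 6000)
signal[4000:] += 8.0

print(flag_anomalies(signal)[:5])   # indexes of the first flagged samples
```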

There is already some physical sensing used to monitor undersea fibers – but more can be done. The fiber can possibly measure changes in temperature, current flows, and seismic activity for the full length of these long fibers. Scientists have developed decent sensors for measuring underground faults on land, but it’s much harder to do in the depths of the open ocean.

To test the capabilities to measure and interpret changes to fiber, Bell Labs scientists built a 524-kilometer fiber route between Gothenburg and Karlstad in Sweden as the first test bed for the technology. This will allow them to try to measure a wide range of environmental data to see what can or cannot be done with the sensing technology.

It’s hard to know where this research might go, which is always the case with pure research. It’s not hard to imagine uses if the technology works as hoped. Fiber might be able to identify and pinpoint small forest fires long before they’ve spread and grown larger. Fibers might serve as an early warning system for underground earthquakes long before we’d know about them in the traditional way. The sensing might be useful as a way to identify minor damage to fiber – we know about fiber cuts, but there is often no feedback today from lesser damage that can eventually grow into an outage.

The Next Big Fiber Upgrade

CableLabs recently wrote a blog announcing the release of the specifications for CPON (Coherent Passive Optical Networks), a new fiber technology that can deliver 100 gigabits of bandwidth to home and business nodes. The CPON specification working group that developed the new specification includes seventeen optical electronics vendors, fourteen fiber network operators, CableLabs, and SCTE (the Society of Cable Telecommunications Engineers). For those interested, the new specifications can be downloaded here.

The blog notes the evolution of PON from the first BPON technology that delivered 622 Mbps to today’s PON that can deliver 10 gigabits. It also notes that current PON technology relies on Intensity-Modulation Direct-Detect (IM-DD) technology that will reach its speed limitations at about 25 gigabits.

The CPON specification instead relies on coherent optical technology, which is the basis for today’s backbone fiber networks that are delivering speeds up to 400 Gbps. The specification calls for delivering the higher bandwidth using a single wavelength of light, which is far more efficient and less complicated than a last-mile technology like NG-PON2 that balances multiple wavelengths on the customer path. This specification is the first step towards adapting our long-haul technology to serve multiple locations in a last-mile network.

There are a few aspects of the specification that ISPs are going to like.

  • The goal is to create CPON as an overlay that will coexist with existing PON technology. That will allow a CPON network to reside alongside an existing PON network and not require a flash cut to the new technology.
  • CPON will increase the effective reach of a PON network from 12 miles today to 50 miles. This would allow an OLT placed in a hut in a city to reach customers well into the surrounding rural areas.
  • CPON will allow up to 512 customers to share a neighborhood node – the quick arithmetic sketch after this list shows why that is plausible. That means more densely packed OLT cards that will need less power and cooling. On the downside, that also means that a lot of customers can be knocked out of service with a card failure.
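
A bit of back-of-the-envelope arithmetic shows why a 512-way split is plausible. The split sizes and capacities below come from the blog; the worst-case framing is only illustrative, since real networks count on customers not all pulling traffic at once.

```python
def worst_case_mbps(shared_gbps, split):
    """Bandwidth left per customer if every customer pulled traffic at once."""
    return shared_gbps * 1000 / split

print(f"GPON,  32-way split: {worst_case_mbps(2.5, 32):.0f} Mbps per customer")
print(f"CPON, 512-way split: {worst_case_mbps(100, 512):.0f} Mbps per customer")
# Roughly 78 Mbps vs. 195 Mbps -- even a 512-way CPON split leaves more worst-case
# bandwidth per home than a typical GPON split does today.
```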

The blog touts the many benefits of having 100-gigabit broadband speeds in the last mile. CPON will be able to support applications like high-resolution interactive video, augmented reality, virtual reality, mixed reality, the metaverse, smart cities, and pervasive communications.

One of the things not mentioned by the blog is that last-mile fiber technology is advancing far faster than the technology of the devices used in the last mile. There aren’t a lot of devices in our homes and businesses today that can fully digest a 10-gigabit data pipe, and stretching to faster speeds means developing a new generation of chips for user devices. Releasing specifications like this one puts chipmakers on alert to begin contemplating those faster chips and devices.

There will be skeptics who will say that we don’t need technology at these faster speeds. But in only twenty years, we’ve gone from broadband delivered by dial-up to bandwidth delivered by 10-gigabit technology. None of these skeptics can envision the uses for broadband that can be enabled over the next twenty years by newer technologies like CPON. If there is any lesson we’ve learned from the computer age, it’s that we always find a way to use faster technology within a short time after it’s developed.

Digital Payments

When the iPhone first hit the market, the pundits started touting the huge benefits that would come from carrying around a computer in our hands. Some of those benefits have been transformational. There used to be a rack with maps inside every gas station and convenience store to help travelers figure out directions. The map industry has been completely displaced by online GPS and driving instructions that have brought huge efficiency and a lot fewer lost travelers wandering rural roads.

We were also told that the Rolodex was dead and that you would be carrying everybody’s contact information with you – something that quickly became true. When was the last time that you called information to get somebody’s telephone number?

At the top of the claimed benefits was the promise that we’d quickly be paying for everything seamlessly with our smartphone. We’d be able to buy from a vending machine or shop at a store by just waving our phone.

There has been some movement in recent years to make this easier. You can load credit or debit cards into your phone and use Apple Pay, Google Pay, or Samsung Pay at places that accept the payments. There is a more recent movement to allow people to seamlessly pay each other through direct bank debits without having to use an intermediary service.

But we are nowhere near universal acceptance of payments through a phone. There are a number of reasons why this is still the case 16 years after the promise that this was right around the corner.

  • A bank survey in 2022 showed that 38% of Americans would refuse to use such a payment system. But that is no excuse for not making it easier for everybody else.
  • For many years, financial institutions didn’t have any interest in accepting micropayments. Banks were not interested in enabling a system that would generate millions of $1 transactions at vending machines or other types of small transactions. The fees the banks wanted for the transaction were too high to make this reasonable.
  • There were always a lot of concerns about security. Somebody could steal a phone with an automatic payment system and make payments without scrutiny. That’s being solved in many cases by phones tied into biometrics.
  • All of the proposed payment solutions require sellers and retailers to foot the bill for the electronic readers that can accept payments. This is particularly challenging when there isn’t a universal reader that would accept payments for multiple payment systems. The different payment systems have been pushing unique hardware solutions. This has left many merchants unwilling to embrace electronic payments.
  • It’s even more of a challenge to equip millions of vending machines, gas pumps, and other payment portals with readers, particularly those in an outdoor environment.
  • There are still plenty of merchants in rural areas that have problems accepting credit cards the traditional way. A credit card transaction doesn’t require the transfer of a lot of data, but it requires a stable connection to be held during the length of a transaction. A lot of rural broadband fluctuates and kills a lot of credit card transactions.

Perhaps the most important reason it’s not widespread here is that the U.S. took the high-technology approach, like we do with many things. Requiring a new set of payment readers is good business for the merchant service companies that provide the readers and the software for merchants.

To demonstrate how we might have taken the wrong path, we only need to look at India. A common payment method for outdoor street vendors is to have a QR code posted. A buyer scans the QR code, which sends them to a portal where they approve the amount of payment. When the payment is complete, a message is sent and is usually played out loud on the merchant’s cell phone. When somebody buys food from a food cart, the payment can be completed by the time the seller is ready to hand over the food. Maybe we are just making this too complicated.
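
Here is a rough sketch of that flow. The names, IDs, and account formats are illustrative placeholders rather than the actual Indian UPI system – the point is that the merchant’s only hardware is a printed code and a phone.

```python
from dataclasses import dataclass

@dataclass
class PostedQRCode:
    """What the street vendor prints and posts: a static code identifying the merchant."""
    merchant_id: str
    merchant_name: str

def pay_by_qr(qr: PostedQRCode, amount: float, buyer_account: str) -> str:
    # 1. The buyer's phone decodes the QR code and opens a payment portal for that merchant.
    # 2. The buyer enters and approves the amount.
    # 3. The portal debits the buyer's account and credits the merchant directly.
    # 4. A confirmation is pushed to the merchant's phone, often read out loud.
    return f"Paid {amount:.2f} from {buyer_account} to {qr.merchant_name} ({qr.merchant_id})"

print(pay_by_qr(PostedQRCode("M-1021", "Street Food Cart"), 120.00, "buyer@examplebank"))
```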

Shutting Down Obsolete Technologies

There was an interesting statement during the recent Verizon first quarter earnings report call. The company admitted that shutting down the 3G cellular networks cost it about 1.1 million retail cellular customers along with the corresponding revenues.

This was long expected because there were still a lot of places where 3G technology was the only cellular signal available to rural customers living in remote areas. There were also a lot of people still happy with 3G flip phones even where 4G was available. Some of these customers will likely come back with 4G phones, but many might be angry with Verizon for cutting them off and go elsewhere.

Verizon has been trying to shut down the 3G network for at least five years. Its original plans got delayed due to discussions with the FCC and then got further delayed because of the pandemic – it didn’t seem like a good idea to cut folks dead when cellular stores were shuttered.

This change was inevitable. The bandwidth that can be delivered on the 3G networks is tiny. Most of you remember when you used 3G and a flip phone to check the weather and sports scores. Cellular carriers want to repurpose the spectrum used for 3G to support 4G and 5G. Technologies become obsolete and have to be upgraded or replaced. The 3G transition is particularly abrupt because the only option is to cut the 3G signal dead, at which point 3G phones become bricks.

All of the technologies used for broadband and telecom eventually become obsolete. I remember when we used ISDN to deliver 128 Kbps broadband to businesses. I remember working with n-carrier and other technologies for creating data connections between central offices. Telephone switches took up a big room instead of being housed inside a small computer. The earlier versions of DOCSIS technology were largely abandoned and upgraded to new technology. BPON became GPON and is now becoming XGS-PON.

Most transitions to new technologies are phased in over time. You might be surprised that there are still working ISDN lines chugging along that are being used to monitor remote sensors. There are still tiny rural cable companies operating the early versions of DOCSIS. But the industry inevitably replaces ancient technology in the same way that none of you are reading this blog on an IBM 5150 or a Macintosh 128k.

But some upgrades are painful. There were folks who lost cellular coverage when 3G was cut dead because they live in places where the 4G replacement might not reach. A 3G phone needed only a tiny amount of bandwidth to operate – at levels that newer phones would perceive to be far under one bar of service.

The other painful technology replacement that keeps getting press is the big telcos killing off the copper networks. When copper is cut off in an area, the traditional copper landlines and DSL go dead. In some cases, customers are offered a move to a fiber network. The price might be higher, but such customers are offered a good permanent technology replacement. But not every DSL customer in a city that loses copper service is offered a fiber alternative. Such customers will likely find themselves paying $30 or $40 more to move to the cable company.

In rural areas, the telcos often offer to move customers to wireless. For a customer that lives within a decent distance from a cell tower, this should be an upgrade. Fixed wireless delivered for only a few miles should be faster than rural DSL. But like all wireless technologies, there is a distance limitation around any given tower, and the FWA signal isn’t going to work for everybody. Some customers that lose rural copper are left with whatever alternatives are available – because the telephone company is basically abandoning them. In many rural areas, the broadband alternatives are dreadful – which is why many were sticking with slow rural DSL.

I hear a lot of complaints from folks who lose traditional copper who are upset that they lose the ability to use services that work on copper technology, such as fax machines and medical monitors. It may sound uncaring, but these folks need to buy something newer that works with today’s broadband. Those are the kind of changes that are inevitable with technology upgrades. Just like you can’t take your old Macintosh to get fixed at Best Buy, you can’t keep using a technology that nobody supports. That’s an inevitable result of technology getting better over time. This is not a comfort to the farmer who just lost his 3G cell coverage – but there is no way to keep older technology operating forever.

Matter – The New IoT Standard

Anybody that uses more than one brand of Internet of Things (IoT) device in the home understands that there is no standard way to connect to these devices. Each manufacturer chooses from a range of different protocols to communicate with and control its devices, such as BLE, LoRa, LTE-M, NB-IoT, SigFox, ZigBee, and others. Every family of devices, and typically every different brand, requires a separate app on your smartphone, which means managing a pile of different apps, passwords, and log-ins to control your devices.

The situation is tougher on businesses. Consider a farmer that might need a dozen sets of software to control the different smart devices and systems installed in a modern dairy or chicken farm. Farmers have complained to me that it’s been growing increasingly complex to manage the electronics in their operation from day to day. Not only must they master different systems to control each set of devices, but the outputs of the various systems are not in a compatible format to communicate with other systems. A farmer must manually intervene if an alarm from one set of devices needs a response from other devices.

This is a big problem also for larger businesses that deploy IoT devices. It’s not uncommon for the makers of smart devices to retool their products over time, and a large business might find that it has multiple generations of smoke alarms, security cameras, smart door locks, or other devices from the same manufacturer that each require a different set of software to control. Companies have sometimes resorted to ripping and replacing older but still functional devices that are incompatible with the newest generation of devices.

Big companies also have the same problems as farmers in that there is no easy way to tie devices together onto one platform. The benefit of using smart sensors loses a lot of appeal if people are needed to manually interpret and intervene when trying to coordinate alarms or other events. Some companies have spent a lot of money to develop unique software to make sense of the outputs of different kinds of smart sensors – but that software has to constantly be tweaked for new devices.

The manufacturers of smart devices recognized that the chaos in the industry was holding down sales. Amazon, Apple, Google, and more than 200 other makers of home electronics and smart devices got together to develop a common IoT platform. These manufacturers agreed that it is important for them to work together, even though they are market rivals, because the confusion created by the multiple communications platforms for IoT devices is hurting sales for the industry as a whole.

The new IoT platform that addresses the problems of the industry has been named Matter. There were hundreds of new devices using Matter at this year’s CES from a variety of vendors. Matter has created a standard language for interpreting the outputs from IoT devices. This means that the commands to operate a smart door lock will be identical from every manufacturer of smart door locks that joins the Matter consortium. Matter also tests and certifies that devices adhere to the new standard.

This has huge potential for users of IoT. It will be possible to have one app on a smartphone that can communicate with all Matter-enabled devices in the home. This will make it easy and almost instantaneous to connect a new Matter device into your home network of devices. It also will make it easier to coordinate interactions between devices. For example, let’s say that you want your smart blinds to be lowered any time the inside temperature rises to some predetermined level. That can be made to work even if your smart thermostat and smart blinds equipment come from different vendors – commands will be unified across Matter devices, regardless of who made them. The implications for the farmer and the businesses are even more profound. They might finally be able to have a suite of interactive smart devices instead of disparate devices that can’t communicate with each other.
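
To make the idea concrete, here is a sketch of what a vendor-agnostic rule could look like. The interfaces and device names are hypothetical stand-ins, not the actual Matter data model or SDK – the point is that the rule only cares about a common command set, not about who built the hardware.

```python
from typing import Protocol

# Hypothetical vendor-neutral interfaces -- an illustration of the idea of a common
# command set, not the actual Matter data model or SDK.
class Thermostat(Protocol):
    def indoor_temp_c(self) -> float: ...

class Blinds(Protocol):
    def lower(self) -> None: ...

def blinds_rule(thermostat: Thermostat, blinds: Blinds, threshold_c: float = 25.0) -> None:
    """Lower the blinds when the indoor temperature crosses the threshold,
    no matter which vendor made either device."""
    if thermostat.indoor_temp_c() >= threshold_c:
        blinds.lower()

# Any two brands that speak the common interface can be wired together.
class AcmeThermostat:
    def indoor_temp_c(self) -> float:
        return 26.5            # pretend sensor reading

class SunCoBlinds:
    def lower(self) -> None:
        print("blinds lowered")

blinds_rule(AcmeThermostat(), SunCoBlinds())   # prints "blinds lowered"
```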

Interestingly, there were folks calling for this from the beginning of the IoT industry. But vendors all chose to take separate paths, and some competitors chose a different path so they wouldn’t be compatible with anything else. In the early days, manufacturers had a vision that people would buy a whole integrated suite of products from them – but the industry didn’t go in that direction. If this catches on, vendors that use Matter ought to have a major advantage within a few years over anybody that refuses to use the new standard.

The Next Big Thing

I’ve always been somewhat amused to read about the colossally important technology trends that are right around the corner. These trends are mostly driven by the wishful thinking of vendors, and they have rarely come true, at least to the extent that is predicted. Even when the next big thing comes to pass, it’s almost never at the predicted magnitude. There has been at least one of these big trends announced every year, and here are a few of the more interesting ones.

I can remember when it was announced that we would be living in an Internet of Things world. Not only would our houses be stuffed full of labor-saving IoT devices, but our fields, forests, and even the air around us would be full of small sensors that would give us feedback on the world around us. The reality was not the revolution predicted by the industry press, but over the last decade most of us have added smart devices to our homes. But the fields, forests, and surrounding environment – not so much.

The IoT trend was followed by big pronouncements that we’d all be adopting wearables. This meant not only devices like Google Glass, but wearables built into our everyday clothes so that we could effortlessly carry a computer and sensors with us everywhere. This prediction was about as big of a flop as imaginable. Google Glass crashed and burned when the public made it clear that nobody wanted everyday events to be live streamed. Other than gimmicks at CES, there was no real attempt at smart clothes.

But wearables weren’t the biggest flop of all – that is reserved in my mind for 5G. The hype for 5G swamped the hype for all of the other big trends combined. 5G was going to transform the world. We’d have near gigabit speeds everywhere, and wireless was going to negate the need for investing in fiber broadband networks. 5G was going to enable fleets of driverless cars. 5G would drive latency so low that it was going to be the preferred method of connection for gamers and stock traders. There were going to be 5G small cell sites on every corner, and fast wireless broadband would be everywhere. Instead of 5G, we got a watered-down version of 4G LTE labeled as 5G. Admittedly, cellular broadband speeds are way faster, but none of the predicted revolution came to pass.

A few predictions came to pass largely as touted – although at a much slower pace. Five years ago, we were told that everything was going to migrate to the cloud. Big corporations were going to quickly ditch internal computing, and within a short time, the cloud would transform computing. It didn’t happen as quickly as predicted, but we have moved a huge amount of our computing lives into the cloud. Tasks like gaming, banking, and most of the apps we’ve come to rely on are in the cloud today. The average person doesn’t realize the extent to which they rely on the cloud until they lose broadband and realize how little of what they do is stored in the computers at their homes and offices.

This blog was prompted by the latest big trend. The press is full of stories about how computing is moving back to the edge. In case the irony of that escapes you, this largely means undoing a lot of the big benefits of going to the cloud. There are some good reasons for this shift. For example, the daily news about hacking has corporations wondering if data will be safer locally than in the cloud. But the most important reason cited for the movement to edge computing is that the world is looking for extremely low latency – and this can only come when computer processing is done locally. The trouble with this prediction is that it’s hard to find applications that absolutely must have a latency of less than 10 milliseconds. I’m sure there are some, but not enough to make this into the next big trend. I could be wrong, but history would predict that this will happen to a much smaller degree than being touted by vendors.

All big technology trends have one big weakness in common – the fact that the world naturally resists change. Even when the next big thing has clear advantages, there must be an overwhelming reason for companies and people to drop everything and immediately adopt something new that is usually untested in the market. Most businesses have learned that being an early adopter is risky – a new technology can bring a market edge, but it can also result in having egg on one’s face.

Replacing Poles

When folks ask me for an estimate of the cost of building aerial fiber, I always say that the cost is dependent upon the amount of make-ready required. Make-ready is well-named – it’s any work that must be done on poles to be ready to string the new fiber.

One of the most expensive aspects of make-ready comes from having to replace existing poles. Poles need to be replaced before adding a new fiber line for several reasons:

  • The original pole is too short, and there is no space to add another wire without upgrading to a taller pole. National electric standards require specific distances between different wires for technician safety when working on a pole.
  • It’s possible that the new wire will add enough expected wind resistance during storms that the existing pole is not strong enough to take on an additional wire.
  • One of the most common reasons for replacing poles is that the poles are worn out and won’t last much longer. That’s what the rest of the blog discusses.

Poles don’t last forever. The average wooden utility pole has an expected life of 45 to 50 years. This can differ by the locality, with poles lasting longer in the desert where there are no storms and having a shorter life in more challenging environments. It’s easy to think of poles as being strong and hard to damage, but the forces of nature can create a lot of stress on a pole. The biggest stress on most poles comes from the cumulative effect of heavy winds or ice pulling on the wires and attachments.

There are a lot of reasons why poles fail:

  • Although most poles are made of rot-resistant wood, the protection eventually wears off, and poles can decay. This can be made worse if vegetation has been allowed to grow onto a pole.
  • Using a pole differently than the way it was designed is common. A pole might have been rated to carry utility wires but over time got loaded with extra attachments like electric transformers, streetlights, or cellular electronics.
  • The soil around the base of a pole can change over the decades. The area may now be subject to flooding and erosion that wasn’t anticipated when the pole was built.
  • Somebody might have removed a guy wire that was supporting the pole and not replaced it.
  • A pole may have been hit by a car, but not badly enough to be replaced.

ISPs complain when saddled with the full cost of pole replacement. Many of the issues described above should more rightfully be borne by the pole owner. But the federal and most state make-ready rules put the entire cost burden of a pole replacement on the new attacher. It is clearly not fair to make a new attacher pay the full cost to replace a pole that was already in less than ideal condition.

It may seem to the general public that poles are just stuck into the ground. But if you’ve ever watched a new pole being placed, you’ll know that the process can be complex. The design of any new pole must account for all of the anticipated stresses the pole will have to endure. This includes the weight of the wires in a windstorm, ice accumulation, soil composition, the quality of neighboring poles, the spacing between poles (the greater the spacing, the more weight and wind resistance), and if the pole is standalone or to be guyed (anchored to the ground with several strong supporting cables).
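
To give a feel for just the wind portion of that math, here is a simplified sketch using the common velocity-pressure approximation. The numbers are illustrative only – real pole-loading studies follow NESC rules and account for every attachment and span on the pole.

```python
def transverse_wind_load_lbs(wind_mph, cable_diameter_in, radial_ice_in, span_ft):
    """Simplified transverse wind load on one span of cable.

    Uses the common velocity-pressure approximation q = 0.00256 * V^2 (psf) and
    treats the iced cable as a flat projected area. Real pole-loading studies
    add NESC overload factors, loading districts, and the pull of every attachment.
    """
    pressure_psf = 0.00256 * wind_mph ** 2
    projected_width_ft = (cable_diameter_in + 2 * radial_ice_in) / 12.0
    return pressure_psf * projected_width_ft * span_ft

# Illustrative numbers only: 60 mph wind, a 0.75" cable with 0.25" of radial ice, 200 ft span
print(f"{transverse_wind_load_lbs(60, 0.75, 0.25, 200):.0f} lbs of sideways pull on the pole")
```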

Most engineers estimate that a generic aerial construction project will require replacing around 10% of the poles. It’s a pleasant surprise when the percentage is smaller, but it can be real sticker shock if a lot of poles must be replaced. I’ve seen projects where an electric company has neglected maintenance and most of the poles were inadequate.
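
A little arithmetic shows why the replacement percentage drives the sticker shock. Every number in this sketch is a placeholder, not a quote from any real project:

```python
def pole_replacement_cost_per_mile(poles_per_mile, share_replaced, cost_per_pole):
    """Cost of pole replacements alone for one mile of aerial route."""
    return poles_per_mile * share_replaced * cost_per_pole

# Placeholder assumptions: 25 poles per mile, $12,000 per replaced pole.
for share in (0.05, 0.10, 0.25):
    cost = pole_replacement_cost_per_mile(25, share, 12_000)
    print(f"{share:.0%} of poles replaced: ${cost:,.0f} per mile, before any other make-ready")
```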

The right question to ask is not how much it costs to build a mile of fiber. The better question to ask is how good are the poles?

Was That Fiber Construction?

One way that I know that there is a lot of fiber construction occurring is that many of the people I talk to tell me that they’ve seen fiber construction in their neighborhood. I always ask about the type of construction they are seeing, and many folks can’t define it. I thought today I’d talk briefly about the primary methods of fiber construction.

Aerial Fiber. The aerial fiber construction process starts with steps most folks don’t recognize as being fiber-related. Technicians will use cherry pickers or climb poles for make-ready work that prepares the poles to accept new fiber. There might even be some poles replaced, but most people wouldn’t associate that with fiber construction. The construction process of hanging the fiber can be hard to distinguish from the process of adding wires for other utilities. There are generally some cherry pickers and a vehicle that holds a reel of fiber cable. The aerial construction process can move quickly after the poles have been properly prepared, and many folks won’t even realize that fiber has been added along their street.

Trenching. Trenching fiber is the best-named construction method because it exactly describes the construction process. With trenching, a construction crew will open a ditch with a backhoe and lay conduit or fiber into the open hole. Trenching is usually chosen in two circumstances. First, it is often the least expensive way to bury conduit along stretches of a road that don’t have impediments like driveways. When a contractor builds fiber in a whole city, trenching might be used along streets that have not yet been developed and that don’t yet have sidewalks. Second, trenching is usually the preferred method when putting fiber into a new subdivision – the ditches are excavated, and conduit is placed before the streets are paved.

Plowing. Cable plowing is a construction method that uses a heavy vehicle called a cable plow to directly bury fiber into the ground as the plow drives along the right-of-way. Plowing is used almost exclusively where the fiber will be placed in unpaved rights-of-way, such as along a country road. The right-of-way must be open and not wooded to allow access to the cable plow.

A cable plow is an unmistakable piece of equipment – a bulldozer-sized vehicle that holds a large spool of fiber, and folks inevitably wonder what the contraption moving along a country road is. But the plowing work can proceed quickly, and the more noticeable crews are the ones boring underneath driveways and intersections along the plowing route.

Boring. Also called horizontal boring, trenchless digging, or directional drilling, this is a construction method that uses drills to push or pull rods horizontally underground to create a temporary hole large enough to accommodate a conduit. This is the technique used to place fiber under paved streets, driveways, and sidewalks.

Boring rigs come in a variety of sizes based on the length of the expected drill path. Small boring rigs might be mounted on the back of a truck. Large boring rigs are standalone heavy equipment that are often mounted on treads (like a tank) instead of wheels to accommodate a wide variety of terrain. It’s fairly easy to identify a fiber boring operation because there will be vehicles of all sorts around the area and usually large reels of brightly colored conduit nearby. The chances are that if you see fiber construction in a town, it is using boring.

Microtrenching. This construction process is unmistakable. A heavy piece of equipment that contains a giant saw cuts a narrow trench in the street. The saw is usually followed by trucks that haul away the removed street materials. The cutting process is loud and draws everybody’s attention. Microtrenching can be finished in a day in ideal circumstances where the hole is cut, side connections are made with a high-pressure water drill to get fiber under the streets and sidewalks, and the narrow trench is refilled and capped.

Next-Generation PON Is Here

At some point during the last year, practically every ISP I know that uses PON technology has quietly upgraded to next-generation PON. For now, that mostly means XGS-PON, which can deliver 10 gigabits of bandwidth to a neighborhood. We’re on the verge of seeing even faster PON cards that will be able to deliver 40 gigabits, and eventually 100 gigabits.

This is a big upgrade over GPON, which delivers 2.5 Gbps download speed to a neighborhood node. In recent years, ISPs have been able to use GPON technology to sell reliable gigabit speeds to homes or businesses that share the network in a neighborhood.

We saw a similar jump a dozen years ago when the industry upgraded from BPON, which delivered 622 Mbps to a neighborhood – the upgrade to GPON was a 4-fold increase in available bandwidth. Upgrading to XGS-PON is another 4-fold increase, and 40-gigabit PON will be another 4-fold increase.
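
The progression is easy to check with the figures in this blog:

```python
# Shared bandwidth per neighborhood PON node, using the figures in this blog (Mbps)
generations = [("BPON", 622), ("GPON", 2_500), ("XGS-PON", 10_000), ("40G PON", 40_000)]

for (old, old_mbps), (new, new_mbps) in zip(generations, generations[1:]):
    print(f"{old} -> {new}: roughly {new_mbps / old_mbps:.1f}x the bandwidth")
# Each step works out to roughly a 4-fold increase.
```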

The best thing about the current upgrade to faster PON is that the vendors got smarter this time. I still have clients who were angry that the upgrade from BPON to GPON meant a total replacement of all electronics – even though the vendors had declared that there would be an easy upgrade path from BPON. Many ISPs decided to change vendors for the upgrade to GPON, and I think vendors got the message.

The PON architecture for most vendors allows upgrading some customers to XGS-PON by adding a faster card to an existing GPON platform. This smart kind of upgrade means that ISPs don’t need to make a flash-cut to faster PON but can move customers one at a time or neighborhood by neighborhood. Upgrades to even faster generations of PON are supposed to work in the same way.

The impact of going to GPON was the widespread introduction of gigabit-speed broadband. A decade ago, gigabit broadband was declared by cable companies to be a gimmick – likely because they couldn’t match gigabit speeds at the time. But now, all large cable companies are successfully selling gigabit products. According to the latest report from OpenVault, a quarter of homes now subscribe to gigabit or faster broadband products, and almost 20% of homes regularly use more than a terabyte of data in a month.

We’ve already seen changes in the market due to next-generation PON. I know a number of ISPs that are now selling 2 Gbps and 5 Gbps broadband products using the new technology. A few are now offering 10 Gbps connections.

One of the biggest decisions faced by an ISP is how many customers to load onto a single PON card at the chassis. GPON allowed for putting up to 128 customers on a PON card, but most ISPs I know only loaded 32 customers. While this was a conservative decision, it provided enough headroom that customers almost always get the bandwidth they subscribe to.

It’s possible to load a lot more customers onto an XGS-PON card. Most of my clients are still configuring with 32 customers per card, although I’m now seeing a few ISPs load 48 or 64 customers per card. There is enough bandwidth on a 10-gigabit card to give everybody a gigabit product, even with higher customer counts, except perhaps in business districts where there might be some customers using a lot of bandwidth all of the time. The main consideration for loading extra customers on a card is the consequence of a bad card knocking out a greater number of customers.
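
A quick sketch shows why the bandwidth math still works at the higher split counts. The busy-hour demand figure is a hypothetical average, not a measured number:

```python
CARD_GBPS = 10   # XGS-PON card capacity

def busy_hour_headroom(customers, mbps_per_customer):
    """Compare expected busy-hour demand on one PON card against its capacity.

    mbps_per_customer is a hypothetical average -- real demand varies by market.
    """
    demand = customers * mbps_per_customer
    return demand, CARD_GBPS * 1000 - demand

for split in (32, 48, 64):
    demand, spare = busy_hour_headroom(split, mbps_per_customer=6)
    print(f"{split} customers: ~{demand} Mbps of typical demand, {spare} Mbps of headroom")
# Even at 64 customers, average demand barely dents a 10-gigabit card; the real
# constraint is how many customers go dark if that one card fails.
```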

While you never hear them talking about it, the widespread introduction of XGS-PON is one of the driving factors behind cable companies scrambling to upgrade to faster bandwidth. While the cable companies initially scoffed at gigabit speeds on GPON, I think they’ve learned that claims of faster speeds by fiber ISPs have convinced the public that fiber is superior, even when a cable company can match fiber speeds.

The race for faster technologies is clearly on. Many industry skeptics still scoff that people don’t need faster speeds – but ISPs have learned that people will buy it. That’s a fact that is hard to argue with.

Fixed Wireless in Cities

I am often asked by cities about the option of building a municipal fixed wireless broadband network. As a reminder, fixed wireless in this case is not a cellular system but is the point-to-multipoint technology used by WISPs. My response has been that it’s possible but that the resulting network is probably not going to satisfy the performance goals most cities have in mind.

There are several limitations of fixed wireless technology in an urban environment that must be considered. The first is the spectrum to be used. Cities tend to be saturated with unlicensed WiFi signals, and the amount of interference will make it a massive challenge to use unlicensed WiFi for broadband purposes. Most folks don’t realize that cellular carriers can snag a lot of the free WiFi spectrum in cities to supplement their cellular data networks – meaning that the free public spectrum is even more saturated than might be expected.

Licensed spectrum can provide better broadband results. But in cities of any size, most of the licensed spectrum is already spoken for and belongs to cellular companies or somebody else that plans to use it. It never hurts to see if there is spectrum that can be leased, but often there will not be any.

Even if licensed spectrum is available, there are other factors that affect performance of fixed wireless in highly populated areas. The first is that most fixed wireless radios can only serve a relatively small number of customers. Cities are probably not going to be willing to make an investment that can only serve a limited number of people.

Another issue to consider is line-of-sight. In practical terms, this means that neighbor A’s home might block the signal to reach neighbor B. In the typical city, there are going to be a lot of homes that cannot be connected to a fixed wireless network unless there are a lot of towers – and most cities are averse to building more towers.
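
Line-of-sight is also less forgiving than it sounds, because a wireless path needs clearance around the direct line, not just an unobstructed sightline. A quick sketch of the standard first Fresnel zone approximation shows how much room even a modest link wants:

```python
import math

def first_fresnel_radius_m(path_km: float, freq_ghz: float) -> float:
    """Radius of the first Fresnel zone at the midpoint of a wireless link (meters).

    Standard approximation: r = 17.32 * sqrt(d1 * d2 / (d * f)) with distances in km
    and frequency in GHz; at the midpoint it reduces to the form below.
    """
    return 8.66 * math.sqrt(path_km / freq_ghz)

# A 2 km link at 5.8 GHz (a common unlicensed band) wants several meters of clearance
# around the direct line of sight -- easy to lose to a rooftop or a mature tree.
print(f"{first_fresnel_radius_m(2.0, 5.8):.1f} m")   # roughly 5 m
```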

Even when there is decent line-of-sight, an urban wireless signal can be disturbed by routine features of city life, such as seasonal foliage, bad weather, and even traffic. One of the more interesting phenomena of spectrum in an urban setting is how the signal will reflect and scatter in unexpected ways as it bounces off buildings. These factors tend to cause a lot more problems in a dense neighborhood than in a rural setting.

A point-to-multipoint fixed wireless system is also not a great solution for multi-tenant buildings. These networks are designed to provide bandwidth connections to individual users, and there is not enough bandwidth to deliver broadband from one connection to serve multiple tenants. There are also challenges in where to place antennas for individual apartments.

The combination of these issues means that fixed wireless can only serve a relatively small number of customers in an urban area. The speeds are going to bounce around due to urban interference and are not likely to be good enough to compete with cable technology.

There is a good analogy to understand the limitations on wireless technologies in cities. Cell carriers have one advantage over many WISPs by owning licensed spectrum. But even with licensed spectrum there are usually numerous small dead spots in cities where the signals can’t reach due to line-of-sight. Cellular radios can serve a lot more customers than fixed wireless radios, but there are still limitations on the number of customers who can buy cellular FWA broadband in a given neighborhood. Any issues faced by cellular networks are worse for a point-to-multipoint network.

The bottom line is that there are a lot of limitations on urban fixed wireless networks that make them a risky investment. Tower space is usually at a premium in cities, and it’s hard to build a network that will reach many customers. There are a lot more interference and line-of-sight issues in a city, which makes it hard to maintain a quality connection.

But this doesn’t mean there are no applications that make sense. For example, a fixed wireless network might be ideal for creating a private network for connecting to city locations that don’t need a lot of broadband, like sensor monitoring. That makes a lot more sense than trying to use the technology as an alternative ISP connection for residences and businesses.