
Aging Coaxial Copper Networks

We’re not talking enough about the aging coaxial copper networks that provide broadband to the majority of broadband customers in the country. Comcast and Charter alone serve more than half of all broadband customers.

These copper networks are getting old. Most coaxial networks were constructed in the 1970s and had an expected life of perhaps forty years. We seem to be quietly ignoring that these networks will soon be fifty years old.

There is an industry-wide consensus that telephone copper is past the end of its economic life, and most telephone networks that are still working are barely limping along. It was no surprise last October when AT&T announced that it would no longer connect new DSL customers – if the company had its way, it would completely walk away from all copper, other than as a convenient place to overlash fiber.

To some degree, coaxial networks hold up to aging better than telephone copper networks. The copper wire inside a coax cable is much thicker than telephone copper wire, and that thickness is what keeps the networks chugging along. However, coaxial networks are highly susceptible to outside interference. A coaxial network uses a technology that creates a captive RF radio network inside the wires. This technology uses the full range of radio spectrum between 5 MHz and 1,000 MHz inside the wires, with the signals arranged in channels, much as is done in wireless networks. A coaxial copper network is susceptible to outside interference anywhere along that wide range of frequencies.
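To put a rough number on how much capacity rides on that captive RF spectrum, here is a back-of-the-envelope sketch. The 6 MHz channel width and roughly 38 Mbps per 256-QAM channel are typical North American DOCSIS figures, and the 108 MHz downstream starting point is an assumption for illustration rather than a description of any particular cable plant.

```python
# Back-of-the-envelope: how much downstream capacity rides on the RF
# spectrum inside a coaxial cable. All figures are assumptions for
# illustration, not a description of a specific cable plant.

channel_width_mhz = 6        # typical North American channel width
mbps_per_channel = 38        # rough usable throughput of one 256-QAM channel
downstream_start_mhz = 108   # assumed bottom of the downstream band
downstream_end_mhz = 1000    # top of the spectrum carried on the coax

channels = (downstream_end_mhz - downstream_start_mhz) // channel_width_mhz
capacity_gbps = channels * mbps_per_channel / 1000

print(f"Downstream channels available: {channels}")
print(f"Approximate shared downstream capacity: {capacity_gbps:.1f} Gbps")

# Interference leaking in through a cracked sheath can degrade individual
# channels anywhere in this range, so physical deterioration translates
# directly into lost capacity.
```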

Decades of sun, cold, water, and ice accumulate to create slow deterioration of the coaxial copper and the sheath around the wire. It’s vital for a coaxial sheath to remain intact since it acts as the barrier to interference. As the sheath gets older, it develops breaks and cracks and is not as effective in shielding the network. The sheathing also accumulates breaks due to repairs over the decades from storms and other damage. Over forty or fifty years, the small dings and dents to the network add up. The long coaxial copper wires hanging on poles act as a giant antenna, and any break in the cable sheathing is a source for interference to enter the network.

Just like telcos never talk publicly about underperforming DSL, you never hear a cable company admit to poor performance in its networks. But I’ve found that the performance of coaxial networks varies within almost every community of any size. I’ve worked in several cities in recent years where we gathered speed tests by address, and there are invariably a few neighborhoods with broadband speeds far slower than the rest of the network. The most likely explanation for a poorly performing neighborhood is deterioration of the physical coaxial wires.

Cable companies could revitalize neighborhoods by replacing the coaxial cable – but they rarely do so. Anybody who has tried to buy house wiring knows that copper wiring has gotten extremely expensive – copper prices have spiked in recent years. I haven’t done the math recently, but I wouldn’t be surprised if it now costs more to hang coaxial copper than fiber.

Big cable companies deliver decent bandwidth to a lot of people, but there are subsets of customers in most markets who have lousy service due to local network issues. I talk to cities all of the time who are tired of fielding complaints from the parts of town where networks underperform. City governments want to know when the cable companies will finally bite the bullet and upgrade to fiber. A lot of industry analysts seem to think the cable companies will put off upgrades for as long as possible, and that can’t be comforting to folks living in pockets of cable networks that already have degraded service. And as the networks continue to age, the problems experienced with coaxial networks will get worse every year.


Zayo Installs 800 Gbps Fiber

Zayo announced the installation of an 800 Gbps fiber route between New York and New Jersey. This is a big deal for a number of reasons. In my blog, I regularly talk about how home and business bandwidth has continued to grow and is doubling roughly every three years. It’s easy to forget that the traffic on the Internet backbone is experiencing the same growth. The routes between major cities like Washington DC and New York City are carrying 8-10 times more traffic than a decade ago.
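The arithmetic behind that growth is simple compounding. Here is a minimal sketch, using only the three-year doubling period mentioned above:

```python
# If bandwidth demand doubles every three years, how much does it grow
# over a decade? Simple compounding, using the doubling period cited above.

doubling_period_years = 3
years = 10

growth_factor = 2 ** (years / doubling_period_years)
print(f"Traffic growth over {years} years: roughly {growth_factor:.0f}x")

# The answer is about 10x, which lines up with backbone routes carrying
# 8-10 times more traffic than a decade ago.
```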

Ten years ago, we were already facing a backhaul crisis on some of the busiest fiber routes in the country. The fact that some routes continue to function is a testament to smart network engineers and technology upgrades like the one announced by Zayo.

There is not a lot of new fiber construction along major routes in places like the northeast since such construction is expensive. Over the last few years, a major new fiber route was installed along the Pennsylvania Turnpike as that road was rebuilt – but such major fiber construction efforts are somewhat rare. That means that we must somehow handle the growth of intercity traffic with existing fiber routes that are already fully subscribed.

You might think that we could increase fiber capacity along major fiber routes by upgrading the bandwidth capacity, as Zayo is doing on this one route. But that is not a realistic option in most cases. Backhaul fiber routes can best be described as a hodge-podge. Let’s suppose as an example that Verizon owns a fiber route between New York City and Washington DC. The company would use some of the fibers on that route for its own cellular and FiOS traffic. But over the years, Verizon will have leased lit or dark fibers to other carriers. It wouldn’t be surprising on a major intercity route to find dozens of such leased arrangements. Each one of those long-term arrangements comes with different contractual requirements. Lit routes might be at specific bandwidths. Verizon would have no way of knowing what those leasing dark fiber are carrying.

Trying to somehow upgrade a major fiber route is a huge puzzle, largely confounded by the existing contractual arrangements. Many of the customers using lit fiber will have a five 9’s guarantee of uptime (99.999%), so it’s incredibly challenging to take such a customer out of service, even for a short time, as part of migrating to a different fiber or a different set of electronics.

Some of the carriers on the major transport routes sell transport to smaller entities. This would be carriers like Zayo, Level 3, and XO (which is owned by Verizon). These wholesale carriers are where smaller carriers go to find transport on these existing busy routes. That’s why it’s a big deal when Zayo and similar carriers increase capacity.

I wrote about the first 400 Gbps fiber path in March 2020, implemented by AT&T between Dallas and Atlanta. Numerous carriers have started the upgrade to 400 Gbps transport, including Zayo, which has plans to have that capacity on 21 major routes by the end of 2022. The 800 Gbps route is unique in that Zayo is able to combine two 400-Gbps fiber signals into one fiber path using electronics from Ciena. Verizon had a trial of 800 Gbps last year using equipment from Infinera.

In most cases, the upgrades to 400 Gbps or 800 Gbps will replace routes lit at the older standard 100 Gbps transport. While that sounds like a big increase in capacity, in a world where network capacity is doubling every three years, these upgrades are not a whole lot more than band-aids.

At some point, we’re going to need a major upgrade to intercity transport routes. Interestingly, all of the federal grant funding floating around is aimed at rural last-mile fiber – an obviously important need. Many federal funding sources can’t be used to build or upgrade middle-mile. But at some point, somebody is going to have to make the needed investments. It does no good to upgrade last-mile capacity if the routes between towns and the Internet can’t handle the broadband demand. This is probably not a role for the federal government because the big carriers make a lot of money on long-haul transport. At some point, the biggest carriers need to get into a room and agree to open up the purses – for the benefit of them all.


Improvements in Undersea Fiber

We often forget that a lot of things we do on the web rely on broadband traffic that passes through undersea cables. Any web traffic from overseas gets to the US through one of the many underwater fiber routes. Like with all fiber technologies, the engineers and vendors have regularly been making improvements.

The technology involved in undersea cables is quite different from what is used for terrestrial fibers. A long fiber route includes repeater sites where the light signal is refreshed. Without repeaters, the average fiber light signal will die within about sixty miles. Our landline networks rely on powered repeater sites. For major cross-country fiber routes, multiple carriers often share the repeater sites.

But an undersea cable has to include the electric power and the repeater sites along with the fiber since the cable may be laid as deep as 8,000 meters beneath the surface. HMN Tech recently announced a big improvement in undersea electronics technology. On a new undersea route between Hainan, China, and Hong Kong, the company has been able to deploy 16 fibers with repeaters. This is a huge improvement over past technologies that have limited the number of fibers to eight or twelve. With 16 lit fibers, HMN will be able to pass data on this new route at 300 terabits per second.
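As a quick sanity check on those numbers, here is the per-fiber arithmetic, using only the figures from the HMN Tech announcement as described above:

```python
# Rough per-fiber arithmetic for the HMN Tech route described above.
total_capacity_tbps = 300   # announced capacity of the new route
lit_fibers = 16             # fiber count made possible by the new repeaters

per_fiber_tbps = total_capacity_tbps / lit_fibers
print(f"Average capacity per lit fiber: {per_fiber_tbps:.1f} Tbps")

# About 18.8 Tbps per fiber - the headline gain comes from being able to
# power repeaters for more fibers in the cable, not from any one fiber
# suddenly becoming dramatically faster.
```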

Undersea fibers have a rough existence. There is a fiber cut somewhere in the world on an undersea fiber every three days. There is a fleet of ships that travels the world fixing undersea fiber cuts or bends. Most undersea fiber problems come from the fiber rubbing against rocks on the seabed. But fibers are sometimes cut by ship anchors, and even occasionally by sharks that seem to like to chew on the fiber – sounds just like squirrels.

Undersea fibers aren’t large. Near the shore, the cables are about the width of a soda can, with most of the bulk made up of tough shielding to protect against the dangers that come from shallow water. To the extent possible, an undersea fiber will be buried near shore. Farther out to sea, the cables are much smaller, about the size of a pencil – there is no need to try to protect fibers that are deep on the ocean floor.

With the explosion in worldwide data usage, it’s vital that the cables can carry as much data as possible. The builders of the undersea routes only count on a given fiber lasting about ten years. The fiber will last longer, but the embedded electronics are usually too slow after a decade to justify continued use of the cable. Upgrading to faster technologies could mean a longer life for the undersea routes, which would be a huge economic benefit.


The Beginnings of 8K Video

In 2014 I wrote a blog asking if 4K video was going to become mainstream. At that time, 4K TVs were just hitting the market and cost $3,000 and higher. There was virtually no 4K video content on the web other than a few experimental videos on YouTube. But in seven short years, 4K has become a standard technology. Netflix and Amazon Prime have been shooting all original content in 4K for several years, and the rest of the industry has followed. Anybody who purchased a TV since 2016 almost surely has 4K capabilities, and a quick scan of shopping sites shows 4K TVs as cheap as $300 today.

It’s now time to ask the same question about 8K video. TCL is now selling a basic 8K TV at Best Buy for $2,100. But like with any cutting-edge technology, LG is offering a top-of-the-line 8K TV on Amazon for $30,000. There are a handful of video cameras capable of capturing 8K video. Earlier this year, YouTube provided the ability to upload 8K videos, and a few are now available.

So what is 8K? The 8K designation refers to the number of pixels on a screen. High-definition TV, or 2K, allows for 1920 x 1080 pixels. 4K grew this to 3840 x 2160 pixels, and the 8K standard increases pixels to 7680 x 4320. An 8K video stream has four pixels in the space where a 4K TV had a single pixel, and sixteen pixels in the space where a high-definition TV had one.
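The pixel arithmetic is easy to verify with a quick sketch:

```python
# Pixel counts for the common video formats and how they compare.
formats = {
    "HD (2K)": (1920, 1080),
    "4K": (3840, 2160),
    "8K": (7680, 4320),
}

pixels = {name: width * height for name, (width, height) in formats.items()}
for name, count in pixels.items():
    print(f"{name:8s} {count:>12,} pixels")

print(f"8K vs 4K: {pixels['8K'] / pixels['4K']:.0f}x the pixels")
print(f"8K vs HD: {pixels['8K'] / pixels['HD (2K)']:.0f}x the pixels")
```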

8K video won’t only bring higher clarity, but also a much wider range of colors. Video today is captured and transmitted using a narrow range of red, green, blue, and sometimes white pixels that vary inside the limits of the REC 709 color specifications. The colors our eyes perceive on the screen are basically combinations of these few colors along with current standards that can vary the brightness of each pixel. 8K video will widen the color palette and also the brightness scale to provide a wider range of color nuance.

The reason I’m writing about 8K video is that any transmission of 8K video over the web will be a challenge for almost all current networks. Full HD video requires a video stream between 3 Mbps and 5 Mbps, with the highest bandwidth needs coming from a high-action video where the pixels on the stream are all changing constantly. 4K video requires a video stream between 15 Mbps and 25 Mbps. Theoretically, 8K video will require streams between 200 Mbps and 300 Mbps.

We know that video content providers on the web will find ways to reduce the size of the data stream, meaning they likely won’t transmit pure 8K video. This is done today for all videos, and there are industry tricks used, such as not transmitting background pixels in a scene where the background doesn’t change. But raw 4K or 8K video that is not filtered to be smaller will need the kind of bandwidth listed above.

There are no ISPs, even fiber providers, that would be ready for the large-scale adoption of 8K video on the web. It wouldn’t take many simultaneous 8K subscribers in a neighborhood to exhaust the capacity of a 2.4 Gbps node in a GPON network. We’ve already seen faster video be the death knell of other technologies – people were largely satisfied with DSL until they wanted to use it to view HD video – at that point, neighborhood DSL nodes got overwhelmed.
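To put a number on that, here is a quick sketch using the bandwidth figures from earlier in this post; treating 250 Mbps as a mid-range 8K stream is my own assumption:

```python
# How many simultaneous 8K streams could a 2.4 Gbps GPON node support?
node_capacity_mbps = 2400   # shared downstream capacity of a GPON node
stream_8k_mbps = 250        # assumed mid-range 8K stream (200-300 Mbps)

max_streams = node_capacity_mbps // stream_8k_mbps
print(f"Simultaneous 8K streams before the node saturates: {max_streams}")

# Fewer than ten households watching 8K at the same time would fill the
# entire node, with nothing left over for everybody else.
```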

There were a lot of people in 2014 who said that 4K video was a fad that would never catch on. With 4K TVs at the time priced over $3,000 and a web that was not ready for 4K video streams, this seemed like a reasonable guess. But as 4K TV sets got cheaper and as Netflix and Amazon publicized 4K video capabilities, the 4K format has become commonplace. It took about five years for the 4K phenomenon to go from YouTube rarity to mainstream. I’m not predicting that 8K will follow the same path – but it’s possible.

For years, I’ve been advising clients to build networks that are ready for the future. We’re facing a possible explosion of broadband demand over the next decade from applications like 8K video and telepresence – both requiring big bandwidth. If you build a network today that doesn’t contemplate these future needs, you are looking at being obsolete in a decade – likely before you’ve even paid off the debt on the network.


Demystifying Oversubscription

I think the concept that I have to explain the most as a consultant is oversubscription, which is the way that ISPs share bandwidth between customers in a network.

Most broadband technologies distribute bandwidth to customers in nodes. ISPs using passive optical networks, cable DOCSIS systems, fixed wireless technology, and DSL all distribute bandwidth to a neighborhood device of some sort that then distributes the bandwidth to all of the customers in that neighborhood node.

The easiest technology to demonstrate this with is a passive optical network since most ISPs serve nodes of only 32 households or fewer. PON technology delivers 2.4 gigabits of download bandwidth to the neighborhood node to share among 32 households.

Let’s suppose that every customer has subscribed to a 100 Mbps broadband service. Collectively, for the 32 households, that totals to 3.2 gigabits of demand – more than the 2.4 gigabits that is being supplied to the node. When people first hear about oversubscription, they think that ISPs are somehow cheating customers – how can an ISP sell more bandwidth than is available?

The answer is that the ISP knows it’s a statistical certainty that all 32 customers won’t use the full 100 Mbps download capacity at the same time. In fact, it’s rare for a household to ever use the full 100 Mbps capability – that’s not how the Internet works. Let’s say a given customer is downloading a huge file. Even if the party at the other end of that transaction has a fast connection, the data doesn’t come pouring in from the Internet at a steady speed. Packets have to find a path between the sender and the receiver, and the packets come in unevenly, in fits and starts.

But that doesn’t fully explain why oversubscription works. It works because all of the customers in a node never use a lot of bandwidth at the same time. On a given evening, some of the people in the node aren’t at home. Some are browsing the web, which requires minimal download bandwidth. Many are streaming video, which requires a lot less than 100 Mbps. A few are using the bandwidth heavily, like a household with several gamers. But collectively, it’s nearly impossible for this particular node to use the full 2.4 gigabits of bandwidth.

Let’s instead suppose that everybody in this 32-home node has purchased a gigabit product, like the one delivered by Google Fiber. Now, the collective possible bandwidth demand is 32 gigabits, far greater than the 2.4 gigabits being delivered to the neighborhood node. This is starting to feel more like hocus pocus, because the ISP has sold 13 times the capacity that is available to the node. Has the ISP done something shady here?

The chances are extremely high that it has not. The reality is that the typical gigabit subscriber doesn’t use a lot more bandwidth than a typical 100 Mbps customer. And when gigabit subscribers do download something, they do it more quickly, meaning that the transaction has less chance of interfering with transactions from neighbors. Google Fiber knows it can safely oversubscribe at thirteen to one because it knows from experience that there is rarely enough usage in the node to exceed the 2.4-gigabit download feed.
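Here is a toy simulation of the idea – not a model of any real ISP’s traffic. The mix of usage (some homes idle, most streaming a few tens of Mbps, a rare home bursting hard on its gigabit plan) is an assumption chosen to mirror the description above:

```python
import random

# Toy Monte Carlo: 32 gigabit subscribers sharing a 2.4 Gbps PON node.
# The usage categories and their probabilities are illustrative assumptions.

NODE_CAPACITY_MBPS = 2400
HOMES = 32
SNAPSHOTS = 10_000

def household_demand_mbps():
    """Draw one home's instantaneous demand during a peak hour."""
    roll = random.random()
    if roll < 0.20:
        return 0                         # nobody home or idle
    if roll < 0.82:
        return random.uniform(5, 25)     # streaming video, browsing
    if roll < 0.97:
        return random.uniform(25, 100)   # multiple streams, gaming
    return random.uniform(100, 600)      # rare heavy burst on a gigabit plan

overloads = 0
for _ in range(SNAPSHOTS):
    node_demand = sum(household_demand_mbps() for _ in range(HOMES))
    if node_demand > NODE_CAPACITY_MBPS:
        overloads += 1

print(f"Snapshots where the node exceeded {NODE_CAPACITY_MBPS} Mbps: "
      f"{overloads} of {SNAPSHOTS}")

# With these assumptions, overloads are rare even though the node is
# oversubscribed 13 to 1 - which is the whole point of oversubscription.
```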

But it can happen. If this node is full of gamers, and perhaps a few super-heavy users like doctors who view big medical files at home, this node could have problems at this level of oversubscription. ISPs have easy solutions for this rare event. The ISP can move some of the heavy users to a different node. Or the ISP can even split the node into two, with 16 homes on each node. This is why customers with a quality-conscious ISP rarely see any glitches in broadband speeds.

Unfortunately, this is not true with the other technologies. DSL nodes are overwhelmed almost by definition. Cable and fixed wireless networks have always been notorious for slowing down at peak usage times when all of the customers are using the network. Where a fiber ISP won’t put more than 32 customers on a node, it’s not unusual for a cable company to have a hundred.

Where the real oversubscription problems are seen today is on the upload link, where routine household demand can overwhelm the size of the upload link. Most households using DSL, cable, and fixed wireless technology during the pandemic have stories of times when they got booted from Zoom calls or couldn’t connect to a school server. These problems are fully due to the ISP badly oversubscribing the upload link.


The DOCSIS vs. Fiber Debate

In a recent article in FierceTelecom, Curtis Knittle, the VP of Wired Technologies at CableLabs, argues that the DOCSIS standard is far from the end of its life and that cable company coaxial networks will be able to compete with fiber for many years to come. It’s an interesting argument, and from a technical perspective, I’m sure Mr. Knittle is right. The big question is whether the big cable companies will take the DOCSIS path or bite the bullet and start the conversion to fiber.

CableLabs released the DOCSIS 4.0 standard in March 2020, and the technology is now being field tested, with trial deployments planned through 2022. In the first lab trial of the technology earlier this year, Comcast achieved a symmetrical 4 Gbps speed. Mr. Knittle claims that DOCSIS 4.0 can outperform the XGS-PON we’re now seeing deployed. He claims that DOCSIS 4.0 will be able to produce a true 10-gigabit output, while the actual output of XGS-PON is closer to 8.7 Gbps downstream.
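For context on the XGS-PON figure, the gap between the 10-gigabit marketing number and the 8.7 Gbps number is mostly protocol overhead. Here is a rough sketch; the 12.5% overhead is an approximation covering FEC and framing, not a precise value from the standard:

```python
# Why "10-gigabit" XGS-PON delivers closer to 8.7 Gbps of usable throughput.
# The overhead percentage is a rough approximation for FEC and framing.

xgs_pon_line_rate_gbps = 9.95328   # XGS-PON downstream line rate
overhead_fraction = 0.125          # assumed combined FEC/framing overhead

usable_gbps = xgs_pon_line_rate_gbps * (1 - overhead_fraction)
print(f"Approximate usable XGS-PON downstream: {usable_gbps:.1f} Gbps")

# ~8.7 Gbps, which is the basis of the comparison being drawn against a
# "true" 10 Gbps DOCSIS 4.0 payload.
```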

There are several issues that are going to drive the decision-making in cable company board rooms. The first is cost. An upgrade to DOCSIS 4.0 doesn’t sound cheap. DOCSIS 4.0 increases system bandwidth by working in higher frequencies – similar to G.Fast on telephone copper. A full upgrade will require ripping and replacing most network electronics. Coaxial copper networks are getting old, and an upgrade probably also means replacing a lot of older coaxial cable, along with power taps and amplifiers throughout the outside plant.

Building fiber is also expensive. However, the cable companies have surely learned the lesson from telcos like AT&T and Verizon that there is a huge saving in cost by overlashing fiber onto existing wires. The cable company can install fiber for a lot less than any competitor by overlashing onto existing coax.

There is also an issue of public perception. I think the public believes that fiber is the best broadband technology. Cable companies already see that they lose the competitive battle in any market where fiber is built. The big telcos all have aggressive plans to build fiber-to-the-premises, and there is a lot of fiber coming in the next five years. Other technologies like Starry wireless are also going to nibble away at the urban customer base. All of the alternative technologies to cable have faster upload speeds than the current DOCSIS technology. The cable industry has completely avoided talking about upload speeds because it knows how cable subscribers struggled working and schooling from home during the pandemic. How many years can the cable companies stave off competitors that offer a better experience?

There is finally the issue of speed to market. The first realistic date to start implementing DOCSIS 4.0 on a large scale is at least five years from now. That’s five long years to limp forward with underperforming upload speeds. Customers that become disappointed with an ISP are the ones that leap first when there is any alternative. Five years is a long time to cede the marketing advantage to fiber.

The big cable companies have a huge market advantage in urban markets – but they are not invulnerable. Comcast and Charter have both kept Wall Street happy by seeing continuous growth from the continuous capture of disaffected DSL customers. Wall Street is going to have a totally different view of the companies if that growth stops. The wheels likely come off stock prices if the two companies ever start losing customers.

I’ve always thought that the cable companies’ success over the last decade has been due more to having a lousy competitor in DSL than to any great performance by the cable companies. Every national customer satisfaction poll continues to rank cable companies at the bottom, behind even the IRS and funeral homes.

We know that fiber builders do well against cable companies. AT&T says that it gets a 30% market share in a relatively short time everywhere it builds fiber. Over time, AT&T thinks it will capture 50% of all subscribers with fiber, which means a 55% to 60% market share. The big decision for the cable companies is whether they are willing to watch their market position start waning while waiting for DOCSIS 4.0. Are they going to bet another decade of success on aging copper networks? We’ve already seen Altice start the conversion to fiber. It’s going to be interesting to watch the other big cable companies wrestle with this decision.


Is Wireless Power a Possibility?

Wireless power transmission (WPT) is any technology that can transmit electrical power between two places without wires. As we move toward a future with small sensors in homes, fields, and factories, this is an area of research that is getting a lot more attention. The alternative to wireless power is to put small batteries in sensors and devices – batteries that then have to be replaced periodically.

There are half a dozen techniques that can be used to create electric power remotely. Most involve transmitting some form of electromagnetic radiation that is used to excite a remote receiver that converts the energy into electricity. There have been trials using frequencies of all sorts, including microwaves, infrared light, and radio waves.

The most common form of wireless power transmission today is the wireless pad that can recharge a cellphone or other small devices. This technology uses inductive coupling, which involves passing alternating current through an induction coil. Since any moving electrical current creates a magnetic field, the induction coil creates a magnetic field that fluctuates in intensity as the AC current constantly changes, and that fluctuating field induces a current in a second coil inside the phone. A cellphone pad only works over a short distance because the coils inside the devices are small.
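For the curious, the underlying physics is Faraday’s law – a changing magnetic field through a receiving coil induces a voltage. The coil size, turns, field strength, and frequency below are made-up illustrative values, not the specification of any real charging pad:

```python
import math

# Faraday's law sketch: peak voltage induced in a small receiving coil by a
# sinusoidal magnetic field. All values are illustrative assumptions.

turns = 20                # turns of wire in the receiving coil
coil_radius_m = 0.02      # 2 cm coil, roughly cellphone-pad sized
peak_field_tesla = 1e-3   # assumed peak magnetic flux density at the coil
frequency_hz = 150e3      # assumed operating frequency of the pad

coil_area = math.pi * coil_radius_m ** 2
# EMF_peak = N * A * B_peak * 2 * pi * f   (for B(t) = B_peak * sin(2*pi*f*t))
peak_emf_volts = turns * coil_area * peak_field_tesla * 2 * math.pi * frequency_hz

print(f"Peak induced voltage: {peak_emf_volts:.1f} V")

# The induced voltage falls off quickly with distance because the magnetic
# field from a small coil weakens rapidly, which is why charging pads only
# work when the phone is sitting right on top of them.
```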

There are a few household applications where induction charging works over slightly greater distances, such as automatically charging electric toothbrushes and some hand tools. We’ve been using the technology to recharge implanted medical devices since the 1960s. Induction charging has also been implemented on a larger scale. In 1980, scientists in California developed a bus that could be recharged wirelessly. There is currently research in Norway and China on topping off the charge in car and taxi batteries to avoid having to stop to recharge electric vehicles.

There have been successful uses of transmitted radiation to create remote electricity over great distances. Radio and microwaves can be beamed great distances to excite a device called a rectenna, or rectifying antenna, which converts the transmitted radio energy into electricity. This has never been able to produce a lot of power, but scientists are looking at the technology again because it could be a way to charge devices like farm sensors in fields.

The private sector is exploring WPT solutions for everyday life. Wi-Charge is using safe infrared light to charge devices within a room. Energous has developed a radio transmitter that can charge devices within a 15-meter radius. Ossia is developing wireless charging devices for cars that will automatically charge cellphones and other consumer devices. We’re not far away from a time when motion detectors, smoke alarms, CO2 sensors, and other devices can be permanently powered without a need for batteries or hardwiring.

Scientists and manufacturers are also exploring long-distance power transmission. Emrod in New Zealand is exploring bringing power to remote sites through the beaming of radio waves. On an even grander scale, NASA is exploring the possibility of beaming power to earth gathered from giant solar arrays in space.

Remote power was originally envisioned by Nikola Tesla, and perhaps over the next few decades it will become an everyday technology that we take for granted. I’m just looking forward to the day when I’m not wakened in the middle of the night by a smoke detector that wants me to know it’s time to change the battery.


Are We Ready for Big Bandwidth Applications?

There is a recent industry phenomenon that could have major impacts on ISP networks in the relatively near future – an explosion of households that subscribe to gigabit data plans. At the end of 2018, only 1.8% of US homes subscribed to a gigabit plan. This grew to 2.8% by the end of 2019. With the pandemic, millions of homes upgraded to gigabit plans in an attempt to find a service that would support working from home. By the end of the third quarter of 2020, gigabit households grew to 5.6% of all households, a doubling in nine months. By the end of last year, this mushroomed to 8.5% of all households, and OpenVault reports that as of the end of the first quarter of 2021, 9.8% of all households subscribe to gigabit plans.

I have to think that a lot of these upgrades came from homes that wanted faster upload speeds. Cable company broadband is stingy with upload speeds on its basic 100 Mbps and 200 Mbps plans. Surveys my company has done show a lot of dissatisfaction with urban ISPs, and my guess is that most of that unhappiness is due to sluggish upload performance.

Regardless of how we found ourselves at this place, one out of ten households in the US now buys gigabit broadband. As an aside, that fact alone ought to end any lingering discussion of 25/3 Mbps as a definition of broadband.

My ISP clients tell me that the average gigabit household doesn’t use a lot more bandwidth than customers buying 100 Mbps broadband – they just get things faster. If you’ve never worked on a gigabit connection, you might not understand the difference – but with gigabit broadband, websites appear on your screen almost instantaneously. The word I’ve always used to describe gigabit broadband is ‘snappy’. It’s like snapping your fingers and what you want appears instantly.

I think the fact that 10% of households have gigabit speeds opens up new possibilities for content providers. In the early days after Google Fiber got the country talking about gigabit fiber, the talking heads in the industry were all asking when we’d see gigabit applications. There was a lot of speculation about what those applications might do – but we never found out because nobody ever developed them. There was no real market for gigabit applications when only a handful of households were buying gigabit speeds. Even at the end of 2019, it was hard to think about monetizing fast web products when less than 3% of all homes could use them.

My instincts tell me that hitting a 10% market share for gigabit subscribers has created the critical mass of gigabit households that might make it financially worthwhile to offer fast web applications. The most likely first applications are probably telepresence and 3D gaming in your living room space. It’s hard to think that there is no market for this.

I know that ISPs are not ready for households to actually use the speeds they have been peddling to them. There is no ISP network anywhere, including fiber networks, that wouldn’t quickly bog down and die if a bunch of subscribers started streaming at fast speeds between 100 Mbps and a gigabit. ISP networks are designed around the concept of oversubscription – meaning that customers don’t use broadband at the same time. The normal parameters for oversubscription are already changing due to the proliferation of VPN connections made for working and schooling from home – ISPs must accommodate large chunks of bandwidth that are in constant use, and that can’t be shared with other customers. Home VPN connections have paralyzed DSL networks, but it’s something that even fiber network engineers are watching carefully.

I’ve been imagining what will happen to a network if households start streaming at a dedicated symmetrical 100 Mbps instead of connecting to Zoom at 2 Mbps. It wouldn’t take many such customers in any neighborhood to completely tie up network resources.
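A quick sketch shows how different that world would be from today’s videoconferencing, using a typical 2.4 Gbps PON node as the yardstick; the per-stream numbers are the rough figures used above:

```python
# How quickly would dedicated 100 Mbps streams fill a node compared to
# today's videoconferencing? The node size is a typical GPON figure.

node_capacity_mbps = 2400       # shared capacity for a PON node
zoom_stream_mbps = 2            # rough bandwidth of a Zoom call today
telepresence_stream_mbps = 100  # hypothetical dedicated symmetrical stream

print(f"Zoom calls a node can carry:       {node_capacity_mbps // zoom_stream_mbps}")
print(f"100 Mbps streams a node can carry: {node_capacity_mbps // telepresence_stream_mbps}")

# 1,200 simultaneous 2 Mbps calls versus only 24 dedicated 100 Mbps
# streams - a couple of dozen households could tie up the whole node.
```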

I will be shocked if there aren’t entrepreneurs already dreaming up gaming and telepresence applications that take advantage of the 10% market share for gigabit broadband. Looking back at the past, new technology phenomena seem to hit almost overnight. It’s not hard to imagine a craze where a million gigabit homes are playing live 3D games in the living room air. When that finally happens, ISPs are going to be taken by surprise, and not in a good way. We’ll see the instant introduction of data caps to stop customers from using broadband. But we’ll also see ISPs beefing up networks – they’ll have no choice.


Drone Research in North Carolina

The Platform for Advanced Wireless Research (PAWR) program, funded by the National Science Foundation, recently expanded the footprint for a wireless research trial in North Carolina. Labeled as the Aerial Experimentation and Research Platform for Advanced Wireless (AERPAW), a wireless testbed has been created in and around Cary, NC to also include the Lake Wheeler Field Laboratory and the Centennial Campus of NC State in Raleigh.

The wireless testbed is one of four created in the country. There will be a number of participants in the experiments, including NC State University, the Wireless Research Center of North Carolina, Mississippi State University, the Renaissance Computing Institute of the University of North Carolina at Chapel Hill, the town of Cary, the City of Raleigh, Purdue University, and the University of South Carolina.

The AERPAW program has been seeded with $24 million of grants from the National Science Foundation. The primary purpose of the North Carolina testbed is to explore the integration of drones with 5G wireless technology and to accelerate the development and commercialization of promising technologies.

The recent expansion came when the FCC named the area as an Innovation Zone as part of its Program Experimental Licenses. These licenses give the qualified research institutions the ability to conduct multiple and unrelated experiments over a range of frequency bands without having to ask permission from the FCC each time. The combination of the PAWR grants and the FCC Innovation Zone means an accelerated timeline for gaining access to the spectrum needed for experiments.

The North Carolina experiments are just getting underway and should be in full swing by 2023. There are already some interesting experiments being contemplated:

  • There will be an experiment to explore the feasibility of using drones to temporarily act as cell sites after the damage caused by hurricanes and other disasters. As our society becomes more reliant on 5G connectivity, there will be an urgency in restoring damaged cell sites quickly. The experiments will also consider the use of 5G cell sites mounted on cars, buses, and golf carts.
  • This same concept might be able to make a portion of a cellular network mobile, meaning the network could be shifted to serve increased demand when people and traffic are unexpectedly busy. Picture 5G drones flying over a football stadium.
  • There will be experiments aimed at improving and guaranteeing the accuracy and ability of drones to deliver key packages such as medicines and commercial goods.
  • Research is contemplated to use drones to collect data from IoT sensors on nearby farms.
  • There is also an experiment envisioned that will look at ways to improve the ability of air traffic control to track and account for drones.

The PAWR platform is interesting in that commercial companies can request research into specific applications. In this case, a corporation could fund research into a specific use of drones, and the Innovation Zone means that any spectrum issues associated with trying new ideas can be accommodated. As might be expected, several wireless vendors are part of the platform. For example, Ericsson has installed 4G/5G RAN equipment at the Lake Wheeler site to initiate experimentation. Thirty-five vendors plan to participate in the four wireless testbeds around the country and will likely be the major beneficiaries of any technologies that prove to be viable.

This kind of research is vital if we are to develop wireless technologies for widespread use. There is only so much experimentation that can happen in labs, and this kind of testbed allows researchers to quickly identify both issues and benefits of 5G drone applications.


Satellite Broadband and Farming

An article in Via Satellite discussed John Deere’s interest in using low-orbit satellite broadband as the platform to expand smart farming. This makes perfect sense because there is no quicker way to bring broadband coverage to all of the farms in the world.

There are only a few other technologies that might be able to fill the farm broadband gap. The one we’ve been thinking about for the last decade is cellular, but the big cellular companies are reluctant to invest in the needed rural cellular towers and to constantly upgrade electronics for cell sites that have only a few customers. If you’ve ever traveled to the upper Midwest, you’ve stood on farm roads where agricultural fields stretch to the horizon in every direction. Standing in such a place makes you realize that cellular is not the answer. No carrier will invest in a cell tower where the potential customers are a few farms and the farm machinery. No farmer wants to pay a cellular bill large enough to support a single cell tower.

There is also experimentation with the use of permanent blimps that can provide broadband coverage over a several-county area. I wrote a blog last year about a trial with blimps in Indiana. It’s an interesting idea, and it may be a good way to support self-driving equipment. But the big question for blimps is whether they can handle millions of sensors and process huge amounts of data in real time, like the monstrously large data files created by surveying fields for a wide variety of soil conditions. I also have to wonder about the ability to replicate this technology outside of prosperous farming areas in the US.

Before John Deere or anybody else decides that satellites are the answer to smart agriculture, we need to know more about the real-life capabilities of satellite constellations. Big data users like cell sites and farms could bog down a satellite network and lower its usefulness for retail ISP services.

I think we’re going to quickly find out if satellite technology has a limited capacity – something we learned with earthbound networks a long time ago. An individual satellite is the equivalent of a broadband node device like a DSLAM for DSL or an OLT used in FTTP. Local broadband devices are subject to being overwhelmed by a few heavy data users. There is a reason that we use separate networks today to support cell towers and home broadband – because both would suffer by being on the same node.

The article raised this question and quoted a John Deere engineer as saying that the ideal situation would be for John Deere to own a satellite constellation. Putting the cost issue aside, space is going to get far too busy if we allow individual corporations to put up fleets of satellites. It’s not hard to imagine a John Deere constellation, a Verizon cellular constellation, a FedEx constellation, etc. The sky could get incredibly crowded.

The alternative is for the handful of satellite companies to sell access to corporations like John Deere. And that means somehow satisfying hugely different broadband needs from a satellite constellation. It’s not hard to imagine Starlink preferring big farms and cell sites over residential subscribers. Farmers and cellular companies are likely willing to spend more than households, but will expect priority service for the higher fees.

I’m sure that the satellite companies are flooded with unique requests for ways to use the satellites. If I owned a satellite constellation, I would likely pursue the most lucrative, highest-margin customers, and it’s hard to think somebody like Starlink won’t do the same. If they do, there will either be a limit on the number of residential customers they can accept or a degraded level of broadband for customers not willing to pay a premium corporate rate.