Improving Rural Wireless Broadband

Microsoft has been implementing rural wireless broadband using white space spectrum – the slices of spectrum that sit between traditional TV channels. The company announced a partnership with ARK Multicasting to introduce a technology that will boost the efficiency of fixed wireless networks.

ARK Multicasting does just what its name implies. Today about 80% of home broadband usage is for video, and ISPs unicast video, meaning that they send a separate stream of a given video to each customer who wants to watch it. If ten customers in a wireless node are watching the same new Netflix show, the ISP sends out ten copies of the program. Today, even in a small wireless node of a few hundred customers, an ISP might be transmitting dozens of simultaneous copies of the most popular content in an evening. The ARK Multicasting technology will send out just one copy of the most popular content on the various OTT services like Netflix, Amazon Prime, and Apple TV. This one copy will be cached in an end-user storage device, and if a customer elects to watch the new content, they view it from the local cache.
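
To put numbers on the idea, here is a back-of-the-envelope sketch of the backhaul load with and without a local cache. The 5 Mbps HD stream rate and the forty simultaneous viewers are illustrative assumptions on my part, not ARK figures.

    # Rough comparison of unicast delivery vs. one locally cached copy.
    # Both the stream bitrate and the viewer count are assumed values.

    HD_STREAM_MBPS = 5               # assumed bitrate of one HD unicast stream
    viewers_of_popular_title = 40    # assumed simultaneous viewers in one node

    unicast_load = viewers_of_popular_title * HD_STREAM_MBPS   # a copy per viewer
    cached_load = 1 * HD_STREAM_MBPS                           # one copy fills the caches

    print(f"Unicast backhaul load at peak:  {unicast_load} Mbps")
    print(f"Load with a local cache:        {cached_load} Mbps (and it can be sent off-peak)")
    print(f"Peak-hour savings:              {unicast_load - cached_load} Mbps")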

The net impact of multicasting should be a huge decrease in demand for video content during peak network hours. It would be interesting to know what percentage of video viewing in a given week comes from newly released content. I’m sure all of the OTT providers know that number, but I’ve never seen anybody talk about it. If anybody knows that statistic, please post it in the reply comments to this blog. Anecdotal evidence suggests the percentage is significant because people widely discuss new content on social media soon after it’s released.

The first trial of the technology is being done in conjunction with a Microsoft partner wireless network in Crockett, Texas. ARK Multicasting says that it is capable of transmitting 7-10 terabytes of content per month, which equates to 2,300 – 3,300 hours of HD video. We’ll have to wait to see the details of the deployment, but I assume that Microsoft will provide the hefty CPE capable of multi-terabyte storage – there are no current consumer set-top boxes with that much capacity. I also assume that cellphones and tablets will grab content using WiFi from the in-home storage device since there are no tablets or cellphones with terabyte storage capacity.
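
The hours-of-video arithmetic is easy to sanity check; the roughly 3 GB per hour for HD video used below is my assumption, not an ARK number.

    # Convert ARK's claimed 7-10 TB per month into hours of HD video,
    # assuming roughly 3 GB per hour of HD content.

    GB_PER_HOUR_HD = 3.0
    for terabytes in (7, 10):
        hours = terabytes * 1000 / GB_PER_HOUR_HD
        print(f"{terabytes} TB is roughly {hours:,.0f} hours of HD video")
    # Prints roughly 2,300 and 3,300 hours, matching the claimed range.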

To be effective, ARK must delete older programming to make room for new content, meaning that the available local cache will always contain the latest and most popular content on the various OTT platforms.
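
Nothing has been published about how ARK manages the cache, but conceptually all it needs is a rotation where the oldest titles are evicted as new releases arrive. A minimal, purely hypothetical sketch:

    # Hypothetical cache rotation: evict the oldest cached title when a new
    # release arrives and the cache is full.

    from collections import OrderedDict

    CACHE_SLOTS = 3          # assumed capacity, measured in titles
    cache = OrderedDict()    # insertion order doubles as age

    def cache_title(title):
        if title in cache:
            return
        if len(cache) >= CACHE_SLOTS:
            evicted, _ = cache.popitem(last=False)   # drop the oldest title
            print("evicting:", evicted)
        cache[title] = True

    for show in ["Show A", "Show B", "Show C", "New Release D"]:
        cache_title(show)
    print("cached now:", list(cache))   # the three newest titles remain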

There is an interesting side benefit of the technology. Viewers should be able to watch cached content even if they lose the connection to the ISP. Even after a big network outage due to a storm, ISP customers should still be able to watch many hours of popular content.

This is a smart idea. The weakest part of the network for many fixed wireless systems is the backhaul connection. When a backhaul connection gets stressed during the busiest hours of network usage, all customers on a wireless node suffer from dropped packets, pixelization, and overall degraded service. Smart caching will remove huge amounts of repetitive video signals from the backhaul routes.

Layering this caching system onto any wireless system should free up peak evening network resources for other purposes. Fixed wireless systems are like most other broadband technologies where the bandwidth is shared between users of a given node. Anything that removes a lot of video downloading at peak times will benefit all users of a node.

The big OTT providers already do edge-caching of content. Providers like Netflix, Google, and Amazon park servers at or near ISPs to send out local copies of the latest content. That caching saves a lot of bandwidth on the internet transport network. The ARK Multicasting technology will carry caching down to the customer level and bring the benefits of caching to the last-mile network.

A lot of questions come to mind about the nuances of the technology. Hopefully the downloads are done in the slow hours of the network so as not to add to network congestion. Will all popular content be sent to all customers – or just content from the services they subscribe to? The technology isn’t going to work for an ISP with data caps because the caching means customers might be downloading multiple terabytes of data that may never be viewed.

I assume that if this technology works well, ISPs of all kinds will consider it. One interesting aspect of the concept is that it gets ISPs back into the business of supplying boxes to customers – something that many ISPs avoid as much as possible. However, if it works as described, this caching could create a huge boost for last-mile networks by relieving a lot of repetitive traffic, particularly at peak evening hours. I remember local caching being tried a decade or more ago, but it never worked as promised. It will be interesting to see if Microsoft and ARK can pull this off.

A New Technology for MDU Broadband

A Canadian company recently announced a new device that promises the ability to deliver gigabit speeds inside of MDUs using existing copper or coaxial wiring. The company is Positron Access Solutions, and I talked to their CTO and president, Pierre Trudeau, at the recent Broadband Communities event in Washington DC. Attached are an article and a PowerPoint describing the new technology.

The technology is built upon the framework of the G.hn standards. You might remember this as the standard supporting powerline carrier, which was used before WiFi to distribute broadband around the home over the electrical wiring. G.hn over powerline was a sufficient technology when broadband speeds were slow but didn’t scale up to support faster broadband speeds. Thinking back, I recall that the biggest limitation was that dozens of different types of electrical wire have been used in homes over the last century, and it was hard to have a technology that worked as promised over the various sizes and types of in-home wiring.

Positron has been around for many years and manufactures IP PBX systems and DSL extenders. They are referring to the new technology as GAM, which I take to mean G.hn Access Network.

The company says that the technology will deliver a gigabit signal about 500 feet over telephone copper wires and over 4,000 feet on coaxial cable. Large MDUs served over telephone copper might therefore require spacing a few of the devices throughout parts of the network.

The technology operates on unused frequency bands on the copper cables. For example, on telephone copper, the technology can coexist on a telephone wire that’s already carrying telephone company voice. On coaxial cable, the Positron device can coexist with satellite TV from DirecTV or Dish Networks but can’t coexist with a signal from a traditional cable company.

Positron says they are a natural successor to G.Fast which has never gotten a lot of traction in the US. Positron says they can deliver more bandwidth with less noise than G.Fast. The Positron GAM spits out Ethernet at the customer apartment unit and can be used with any existing CPE like WiFi routers, computers, TVs, etc.

This is a new technology and the company currently has only a few test units at clients in the field. Like any new technology, this should be considered a beta technology where the vendor will be working out field issues. But the technology has a lot of promise if perfected. There are a lot of older MDUs where the cost of rewiring is prohibitive or where the building owners don’t want fiber strung through hallways. Getting to apartment units through existing copper wiring should be less disruptive, less expensive, and faster to market.

I always caution all of my clients about using first-generation technology. It’s bound to suffer from issues that aren’t discovered until deployed in real-world situations. First-generation equipment is always a risk since many vendors have abandoned product lines that have too many field problems. The supply chain is often poorly defined, although in the case of Positron the company has been providing technical support for many years. My main concern with beta technology is that it’s never comfortable using end-user customers as guinea pigs.

However, an MDU might be the perfect environment to try new technology. Many MDUs have been unable to attract better broadband due to high rewiring costs and might be willing to work with an ISP to test new technology. If this technology operates as touted it could provide a cost-effective way to get broadband into MDUs, particularly older ones where rewiring is a cost barrier.

The Future of Coaxial Networks

My blog devotes a lot of time to looking at fiber deployment, but since the majority of people in the US get broadband from cable companies using hybrid fiber/coaxial (HFC) technology, today’s blog looks at the next generation of changes planned for HFC.

DOCSIS 4.0. The current generation of HFC technology is DOCSIS 3.1. This technology uses 1.2 GHz of spectrum over coaxial cable. DOCSIS 3.1 has several competitive drawbacks compared to fiber. First, while the technology can deliver gigabit download speeds to customers, the dirty secret of the industry is that gigabit speeds can only be given to a limited number of customers. With current node sizes, cable companies can’t support very many large data users without sacrificing the performance of everybody in a node. This is why you don’t see cable companies pricing gigabit broadband at competitive prices or pushing it very hard.

The other big drawback is that upload speeds on DOCSIS 3.1 are set by specification to be no more than one-eighth of the total bandwidth on the system. Most cable companies don’t even allocate that much to upload speeds.

The primary upgrade with DOCSIS 4.0 will be to increase system bandwidth to 3 GHz. That supplies enough additional bandwidth to provide symmetrical gigabit service or else offer products that are faster than 1 Gbps download. It would also allow a cable company to support a lot more gigabit customers.
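
A rough capacity sketch shows why the jump from 1.2 GHz to 3 GHz matters. The 8 bits per second per hertz of effective spectral efficiency is a simplifying assumption of mine, not a CableLabs figure, and real deployments split the spectrum differently.

    # Rough shared capacity per node under an assumed spectral efficiency.

    BITS_PER_HZ = 8   # assumed effective spectral efficiency on coax

    docsis_31_total = 1.2 * BITS_PER_HZ    # GHz * bits/s/Hz = Gbps -> 9.6 Gbps
    docsis_31_up = docsis_31_total / 8     # the one-eighth upstream limit described above
    docsis_40_total = 3.0 * BITS_PER_HZ    # -> 24 Gbps

    print(f"DOCSIS 3.1: {docsis_31_total:.1f} Gbps total, only {docsis_31_up:.1f} Gbps upstream")
    print(f"DOCSIS 4.0: {docsis_40_total:.1f} Gbps total, enough to carve out symmetrical gigabit")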

The big drawback to the upgrade is that many older coaxial cables won’t be able to handle that much bandwidth and will have to be replaced. Further, upgrading to 3 GHz is going to mean replacing or upgrading power taps, repeaters, and other field hardware in the coaxial network. CableLabs is talking about finalizing the DOCSIS 4.0 specification by the end of 2020. None of the big cable companies have said if and when they might embrace this upgrade. It seems likely that many of the bigger cable companies are in no hurry to make this upgrade.

Low Latency DOCSIS (LLD). Another drawback of HFC networks is that they don’t have the super-low latency needed to support applications like intense gaming or high-quality video chat. The solution is a new encoding scheme being called low latency DOCSIS (LLD).

The LLD solution doesn’t change the overall latency of the cable network but instead prioritizes low-latency applications. The result is to increase the latency for other applications like web-browsing and video streaming.

This can be done because most of the latency on an HFC network comes from the encoding schemes used to layer broadband on top of cable TV signals. The encoding schemes on coaxial cable networks are far more complex than fiber encoding. There are characteristics of copper wires that cause natural interference within a transmission path. A coaxial encoding scheme must account for attenuation (loss of signal over distance), noise (the interference that appears from external sources since copper acts as a natural antenna), and jitter (the fact that interference is not linear and comes and goes in bursts). Most of the latency on a coaxial network comes from the encoding schemes that deal with these conflicting characteristics. The LLD solution bypasses traditional encoding for the handful of applications that need low latency.
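
Conceptually, LLD keeps a separate queue for latency-sensitive packets and drains it ahead of everything else. The sketch below uses strict priority between two queues, which is a simplification of the real scheduler, purely to illustrate the idea.

    # Simplified dual-queue scheduler: latency-sensitive traffic is always
    # served before classic traffic. Real LLD uses weighted scheduling and
    # congestion marking rather than strict priority.

    from collections import deque

    low_latency_q = deque()   # e.g., gaming traffic, video chat
    classic_q = deque()       # e.g., web browsing, video streaming

    def enqueue(packet, latency_sensitive):
        (low_latency_q if latency_sensitive else classic_q).append(packet)

    def next_packet():
        if low_latency_q:
            return low_latency_q.popleft()
        return classic_q.popleft() if classic_q else None

    enqueue("game-frame-1", True)
    enqueue("video-chunk-1", False)
    enqueue("game-frame-2", True)
    print(next_packet(), next_packet(), next_packet())
    # -> game-frame-1 game-frame-2 video-chunk-1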

Virtual CMTS. One of the more recent improvements in coaxial technology was distributed access architecture (DAA). This technology allows for disaggregating the CMTS (the router used to provide customer broadband) from core routing functions, meaning that the CMTS no longer has to sit at the core of the network. The easiest analogy to understand DAA is to consider modern DSLAM routers. Telephone companies can install a DSLAM at the core of the network, but they can instead put the DSLAM at the entrance to a subdivision to get it closer to customers. DAA allowed cable companies to make this same change.

With a virtual CMTS, a cable company takes DAA a step further. In a virtual CMTS environment, the cable company might perform some of the CMTS functions in remote data centers in the cloud. There will still be a piece of electronics where the CMTS used to sit, but many of the computing functions can be done remotely.

A cloud-based CMTS offers some advantages to the cable operator:

  • Allows for customizing portions of a network. The data functions provided to a business district can be different from what is supplied to a nearby residential neighborhood. Customization can even be carried down to the customer level for large business customers.
  • Allows for the use of cheap off-the-shelf hardware, similar to what’s been done in the data centers used by the big data companies like Google and Facebook. CMTS hardware has always been expensive because it’s been made by only a few vendors.
  • Improves operations by saving on local resources like local power, floor/rack space, and cooling by moving heavy computing functions to data centers.

Summary. There is a lot of discussion within the cable industry about how far cable companies want to push HFC technology. The CEOs of the major cable companies have all said that their eventual future is fiber, and the above changes, while each brings HFC closer to fiber performance, still don’t match fiber. Some Wall Street analysts have predicted that cable companies won’t embrace bandwidth upgrades for a while since they already have the marketing advantage of being able to claim gigabit speeds. The question is whether the cable companies are willing to make the expensive investment to come functionally closer to fiber performance or whether they are happy to just claim to be equivalent to fiber.

Do Cable Companies Have a Wireless Advantage?

The big wireless companies have been wrangling for years with the issues associated with placing small cells on poles. Even with new FCC rules in their favor, they are still getting a lot of resistance from communities. Maybe the future of urban/suburban wireless lies with the big cable companies. Cable companies have a few major cost advantages over the wireless companies including the ability to bypass the pole issue.

The first advantage is the ability to deploy mid-span cellular small cells. These are cylindrical devices that can be placed along the coaxial cable between poles. I could not find a picture of these devices and the picture accompanying this article is of a strand-mounted fiber splice box – but it’s a good analogy since the strand-mounted small cell device is approximately the same size and shape.

Strand-mounted small cells provide a cable company with a huge advantage. First, they don’t need to go through the hassle of getting access to poles and they avoid paying the annual fees to rent space on poles. They also avoid the issue of fiber backhaul since each unit can get broadband using a DOCSIS 3.1 modem connection. The cellular companies don’t talk about backhaul a lot when they discuss small cells, but since they don’t own fiber everywhere, they will be paying a lot of money to other parties to transport broadband to the many small cells they are deploying.

The cable companies also benefit because they could quickly deploy small cells anywhere they have coaxial cable on poles. In the future, when wireless networks might need to be very dense, the cable companies could deploy a small cell between every pair of poles. If the revenue benefits of providing small cells are great enough, this could even prompt the cable companies to expand the coaxial network to nearby neighborhoods that might not otherwise meet their density tests, which for most cable companies means building only where there are at least 15 to 20 potential customers per linear mile of cable.

The cable companies have another advantage over the cellular carriers in that they have already deployed a vast WiFi network comprised of customer WiFi modems. Comcast claims to have 19 million WiFi hotspots. Charter has a much smaller base of 500,000 hotspots but could expand that count quickly if needed. Altice is reportedly investing in WiFi hotspots as well. The big advantage of WiFi hotspots is that the broadband capacity of the hotspots can be tapped to act as landline backhaul for cellular data and even voice calls.

The biggest cable companies are already benefitting from WiFi backhaul today. Comcast just reported to investors that they added 204,000 wireless customers in the third quarter of 2019 and now have almost 1.8 million wireless customers. Charter is newer to the wireless business and added 276,000 wireless customers in the third quarter and now has almost 800,000 wireless customers.

Both companies are buying wholesale cellular capacity from Verizon under an MVNO contract. Any cellular minute or cellular data they can backhaul with WiFi doesn’t have to be purchased from Verizon. If the companies build small cells, they would further free themselves from the MVNO arrangement – another cost savings.

A final advantage for the cable companies is that they are deploying small cell networks where they already have a workforce to maintain the network. Both AT&T and Verizon have laid off huge numbers of workers over the last few years and no longer have fleets of technicians in all of the markets where they need to deploy cellular networks. These companies are faced with adding technicians as their networks expand from a few big-tower cell sites to vast networks of small cells.

The cable companies don’t have nearly as much spectrum as the wireless companies, but they might not need it. The cable companies will likely buy spectrum in the upcoming CBRS auction and the other mid-range spectrum auctions over the next few years. They can also use the 80 MHz of free CBRS spectrum that’s available everywhere.

These advantages equate to a big cost advantage for the cable companies. They save on speed to market and avoid paying for pole-mounted small cells. Their networks can provide the needed backhaul for practically free. They can offload a lot of cellular data through the customer WiFi hotspots. And the cable companies already have a staff to maintain the small cell sites. At least in the places that have aerial coaxial networks, the cable companies should have higher margins than the cellular companies and should be formidable competitors.

Keeping an Eye on the Future

The IEEE, the Institute of Electrical and Electronics Engineers, has been issuing a document annually that lays out a roadmap to make sure that the computer chips that drive all of our technologies are ready for the future. The latest such document is the 2019 Heterogeneous Integration Roadmap (HIR). The purpose of the document is to encourage the needed research and planning so that the electronics industry creates interoperable chips that anticipate the coming computer needs while also functioning across multiple industries.

This is particularly relevant today because major technologies are heading in different directions. Fields like 5G, quantum computing, AI, IoT, gene splicing, and self-driving vehicles are all pursuing different technology solutions that could easily result in specialized one-function chips. That’s not necessarily bad, but the IEEE believes that all technologies will benefit if chip research and manufacturing processes are done in such a way as to accommodate a wide range of industries and solutions.

IEEE uses the label of ‘heterogeneous integration’ to describe the process of creating a long-term vision for the electronics industry. They identify this HIR effort as the key technology going forward that is needed to support the other technologies. They envision a process where standard and separately manufactured chip components can be integrated to produce the chips needed to serve the various fields of technology.

The IEEE has created 19 separate technical working groups looking at specific topics related to HIR. This list shows both the depth and breadth of the IEEE effort. Working groups in 2019 include:

Difficult Challenges

  • Single chip and multichip packaging (including substrates)
  • Integrated photonics (including plasmonics)
  • Integrated power devices
  • MEMS (miniaturization)
  • RF and analog mixed signals

Cross Cutting Topics

  • Emerging research materials
  • Emerging research devices
  • Interconnect
  • Test
  • Supply chain

Integrated Processes

  • SiP
  • 3D + 2.5D
  • WLP (wafer level packaging)

Packaging for Specialized Applications

  • Mobile
  • IoT and wearable
  • Medical and health
  • Automotive
  • High performance computing
  • Aerospace and defense

Just a few years ago many of the specific technologies were not part of the HIR process. The pace of technological breakthroughs is so intense today that the whole process of introducing new chip technology could easily diverge. The IEEE believes that taking a holistic approach to the future of computing will eventually help all fields as the best industry practices and designs are applied to all new chips.

The effort behind the HIR process is substantial since various large corporations and research universities provide the talent needed to dig deeply into each area of research. I find it comforting that the IEEE is working behind the scenes to make sure that the chips needed to support new technologies can be manufactured efficiently and affordably. Without this effort the cost of electronics for broadband networks and other technologies might skyrocket over time.

Be Wary of 5G Hardware

We’ve now entered the period of 5G competition where the wireless carriers are trying to outdo each other in announcing 5G rollouts. If you believe the announcements, you’d think 5G is soon going to be everywhere. Device manufacturers are joining the fray and are advertising devices that can be used with the early carrier 5G products. Buyers beware – because most of what the cellular companies and the manufacturers are hyping as 5G is not yet 5G. Any first-generation hardware you buy today will quickly become obsolete as future 5G upgrades are introduced.

5G Cellular. Cellular carriers are introducing two new spectrum bands – CBRS spectrum and millimeter wave spectrum – as 5G. The actual use of these spectrum bands is not yet technically 5G because the carriers aren’t yet using much of the 5G specifications. These two specific spectrum bands come with another warning in that they are only being used to produce faster outdoor broadband. Customers who live in places where they can receive the new frequencies, and who compute outdoors, might see value in paying extra for the devices and the 5G data plans. Most people are not going to find any value in what these plans offer and should not get sucked into paying for something they can’t get or won’t use.

Cellphone manufacturers are already starting to build the CBRS spectrum into high-end phones. By next year there should be a 5G version of every major cellphone – at a premium price. Within a few years this will be built into every phone, but for now, expect to pay extra.

The question that users need to ask is whether faster cellphone data is worth the extra hardware cost and the extra monthly fee that will be charged for 5G browsing. I’ve thought about the cellphone functions that would be improved with faster broadband, and the only one I can come up with is faster downloads of movies or software. Faster broadband is not going to make web browsing any faster on a cellphone. Cellphones have been optimized for graphics, which is why you can scroll easily through a Google map or flip easily between videos on social media. The trade-off for faster graphics is that cellphones aren’t good at other things. Cellphones crawl when trying to process websites that aren’t optimized for mobile or when trying to handle spreadsheets. Faster broadband is not going to make these functions any faster, because the slowness comes from the intrinsic design of the cellphone operating software and can’t be improved with faster broadband.

I also think customers are going to face a huge challenge in getting a straight answer about when CBRS spectrum or millimeter wave spectrum will be available in their local neighborhood. The carriers are in full 5G marketing mode and are declaring whole metropolitan areas to have 5G even if that only means new spectrum is in a few neighborhoods.

Finally, beware that both of these spectrums only work outdoors. And that means on foot, not in cars. Millimeter wave spectrum is likely to always be a gimmick. Folks testing the spectrum today report that they can lose the connection simply by rotating their phone slightly or by putting their body in the path from the transmitter. CBRS spectrum will be much more well-behaved.

Laptops. Lenovo has already announced a 5G-capable laptop coming in 2020 and others will surely jump on the bandwagon soon. The big issue with laptops is also an issue with cellphones. It might be reasonable in an area with good CBRS spectrum coverage to get a 100 Mbps or faster cellular connection. That is going to tempt a user to use a laptop as if it were on a home broadband connection. However, this is still going to be cellular data supplied on a cellular data plan. Unless the carriers decide to lift data caps, a customer using a CBRS spectrum laptop might find themselves exhausting their monthly data cap in a day or two. It’s also worth repeating that these are outdoor spectrum bands, so only students or others who regularly use computers outdoors are going to find this spectrum potentially useful.
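
Some quick arithmetic shows how fast a data cap disappears at those speeds; the 50 GB cap below is an assumed plan size, not any carrier’s actual offer.

    # How long does a monthly cap last at full speed?

    CAP_GB = 50          # assumed monthly data cap
    SPEED_MBPS = 100     # assumed CBRS cellular speed

    gb_per_hour = SPEED_MBPS / 8 * 3600 / 1000    # 100 Mbps = 12.5 MB/s = 45 GB/hour
    hours_to_cap = CAP_GB / gb_per_hour
    print(f"{SPEED_MBPS} Mbps moves about {gb_per_hour:.0f} GB per hour")
    print(f"A {CAP_GB} GB cap is exhausted after about {hours_to_cap:.1f} hours of full-speed use")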

5G Hotspots. A 5G hotspot is one that broadcasts bandwidth in millimeter wave spectrum. Sprint is already marketing such a hotspot. This takes us back to the early days of WiFi when we needed a dongle to use WiFi since the spectrum wasn’t built into desktops or laptops. A 5G hotspot will have that same restriction. One of the primary reasons to consider a millimeter wave hotspot is security. It will be much harder to hack a millimeter wave connection than a WiFi connection. But don’t select a millimeter wave hotspot for speed, because a millimeter wave connection won’t be any faster than the WiFi 6 routers just hitting the market.

In future years, 5G hotspots might make sense as millimeter wave spectrum is built into more devices. One of the biggest advantages of indoor millimeter wave spectrum is to avoid some of the interference issues inherent in WiFi. I picture the ideal future indoor network to be millimeter wave spectrum used to provide bandwidth to devices like computers and TVs while WiFi 6 is used for everything else. There is likely to be an interesting battle coming in a few years between millimeter wave and WiFi 6 routers. WiFi already has a huge advantage in that battle since the WiFi technology will be included in a lot more devices. For now there won’t be many easy ways to use a 5G millimeter wave hotspot.

Farms Need Broadband Today

I recently saw a presentation by Professor Nicholas Uilk of South Dakota State University. He heads the first bachelor’s degree program in the country for Precision Agriculture. That program does just what the name suggests – it teaches budding farmers how to use technology in farming to increase crop yields – and those technologies depend upon broadband.

Precision agriculture is investigating many different aspects of farming. Consider the following:

  • There has been a lot of progress creating self-driving farm implements. These machines have been tested for a few years, but there are not a lot of farmers yet willing to set machines loose in the field without a driver in the cab. But the industry is heading towards the day when driverless farming will be an easily achievable reality.
  • Smart devices have moved past tractors and now include things like automated planters, fertilizer spreaders, manure applicators, lime applicators, and tillage machines.
  • The most data-intensive farming need is the creation of real-time variable rate maps of fields. Farmers can use smart tractors or drones to measure and map important variables that can affect a current crop, like the relative amounts of key nutrients, moisture content, and the amount of organic matter in the soil. This mapping creates massive data files that are sent off-farm. Expert agronomists review the data and prepare a detailed plan to get the best yields from each part of the field. The problem farms have today is promptly getting the data to and from the experts. Without fast broadband, the time required to exchange these files renders the data unusable once the crop grows too large to allow machines to make the suggested changes.
  • Farmers are measuring yields as they harvest so they can record exactly which parts of their fields produced the best results.
  • SDSU is working with manufacturers to develop and test soil sensors that could wirelessly transmit real-time data on pH, soil moisture, soil temperature, and transpiration. These sensors are too expensive today to be practical – but the cost of sensors should drop over time.
  • Research is being done to create low-cost sensors that can measure the health of individual plants.
  • Using sensors for livestock is the most technologically advanced area and there are now dairy farms that measure almost everything imaginable about every milking cow. The sensors for monitoring pigs, chickens, and other food animals are also advanced.
  • The smart farm today measures an immense amount of data on all aspects of running the business. This includes gathering data for non-crop parts of the business such as the performance of vehicles, buildings, and employees. The envisioned future is that sensors will be able to sense a problem in equipment and order a replacement part before a working machine fails.
  • One of the more interesting trends in farming is to record and report on every aspect of the food chain. After the whole country stopped eating romaine last year because of contamination at one farm, the industry started to develop a process where each step of the production of a crop is recorded, with the goal of reporting the history of food to the consumer. In the not-too-distant future, a consumer will be able to scan a package of lettuce and know where the crop was grown, how it was grown (organically, for example), and when it was picked, shipped, and brought to the store. This all requires creating a blockchain with an immutable history of each crop, from farm to store – a minimal sketch of that idea appears after this list.
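
Here is that hash-chain sketch: each step in a crop’s journey is recorded in a block that includes the hash of the previous block, so the history can’t be quietly rewritten. The farm, trucking, and grocery names are placeholders, and this illustrates the concept rather than any specific industry system.

    # Minimal hash chain for farm-to-store traceability (illustrative only).

    import hashlib, json

    def add_block(chain, record):
        prev_hash = chain[-1]["hash"] if chain else "genesis"
        body = {"record": record, "prev_hash": prev_hash}
        block_hash = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
        chain.append({**body, "hash": block_hash})

    chain = []
    add_block(chain, {"step": "planted", "farm": "Example Farm", "date": "2019-04-01"})
    add_block(chain, {"step": "harvested", "date": "2019-06-15"})
    add_block(chain, {"step": "shipped", "carrier": "Example Trucking", "date": "2019-06-16"})
    add_block(chain, {"step": "stocked", "store": "Example Grocery", "date": "2019-06-18"})

    # Tampering with an earlier record changes its hash and breaks every later link.
    for prev, block in zip(chain, chain[1:]):
        assert block["prev_hash"] == prev["hash"]
    print("verified a chain of", len(chain), "records")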

The common thread of all of these developments in precision agriculture is the need for good broadband. Professor Uilk says that transmitting the detailed map scans for crop fields realistically requires 100 Mbps upload to get the files to and from the experts in a timely exchange. That means fiber to the farm.
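
Some rough arithmetic shows why upload speed is the bottleneck; the 50 GB file size below is an assumed size for a detailed field map, not a figure from SDSU.

    # Time to upload an assumed 50 GB field-mapping file at various speeds.

    FILE_GB = 50
    for upload_mbps in (3, 10, 100, 1000):
        seconds = FILE_GB * 8000 / upload_mbps    # GB -> megabits, divided by Mbps
        print(f"{upload_mbps:>5} Mbps upload: {seconds / 3600:5.1f} hours")
    # A few-Mbps rural DSL upload takes the better part of two days;
    # a 100 Mbps upload finishes in about an hour.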

A lot of the other applications require reliable wireless connections around the farm, and that implies a much better use of rural spectrum. Today the big cellular carriers buy the rights to most spectrum and then let it lie fallow in rural areas. We need to find a way to bring spectrum to the farm to take advantage of measuring sensors everywhere and for directing self-driving farm equipment.

Unlicensed Millimeter Wave Spectrum

I haven’t seen it talked about a lot, but the FCC has set aside millimeter wave spectrum that can be used by anybody to provide broadband. That means that entities will be able to use the spectrum in rural America in areas that the big cellphone companies are likely to ignore.

The FCC set aside the V band (60 GHz) as unlicensed spectrum. This band provides 14 GHz of contiguous spectrum available for anybody to use. This is an interesting band because it comes with a few drawbacks. The spectrum sits at a natural absorption peak of oxygen and thus is more likely to be absorbed in an open environment than other bands of millimeter wave spectrum. In practice, this will shorten bandwidth delivery distances a bit for the V band.
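
For a sense of scale, here is a rough link-budget sketch of 60 GHz propagation using the standard free-space path loss formula plus the roughly 15 dB/km oxygen absorption peak near 60 GHz; the distances are arbitrary and no particular radio is being modeled.

    # Free-space path loss at 60 GHz, with and without oxygen absorption.

    import math

    OXYGEN_LOSS_DB_PER_KM = 15   # approximate extra attenuation near 60 GHz

    def path_loss_db(freq_ghz, distance_km, extra_db_per_km=0):
        fspl = 20 * math.log10(distance_km) + 20 * math.log10(freq_ghz) + 92.45
        return fspl + extra_db_per_km * distance_km

    for km in (0.1, 0.5, 1.0, 2.0):
        plain = path_loss_db(60, km)
        with_o2 = path_loss_db(60, km, OXYGEN_LOSS_DB_PER_KM)
        print(f"{km:4.1f} km: {plain:6.1f} dB free-space, {with_o2:6.1f} dB with oxygen absorption")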

The FCC also established the E band (70/80 GHz) for public use. This spectrum will have a few more rules than the 60 GHz spectrum and there are light licensing requirements for the spectrum. These licenses are fairly easy to get for carriers, but it’s not so obvious that anybody else can get the spectrum. The FCC will get involved with interference issues with the spectrum – but the short carriage distances of the spectrum make interference somewhat theoretical.

There are several possible uses for the millimeter wave spectrum. First, it can be focused in a beam and used to deliver 1-2 gigabits of broadband for up to a few miles. There have been 60 GHz radios on the market for several years that operate for point-to-point connections. These are mostly used to beam gigabit broadband in places where that’s cheaper than building fiber, like on college campuses or in downtown highrises.

This spectrum can also be used for hotspots, as is being done by Verizon in cities. In the Verizon application, the millimeter wave spectrum is put on pole-mounted transmitters in downtown areas to deliver data to cellphones at speeds as fast as 1 Gbps. This can also be deployed in more traditional hotspots like coffee shops. The problem with using 60 GHz spectrum for this purpose is that there are almost no devices yet that can receive the signal. This isn’t going to get widespread acceptance until somebody builds it into laptops or develops a cheap dongle. My guess is that cellphone makers will ignore 60 GHz in favor of the licensed bands owned by the cellular providers.

The spectrum could also be used to create wireless fiber-to-the-curb, as was demonstrated by Verizon in a few neighborhoods in Sacramento and a few other cities earlier this year. The company is delivering residential broadband at speeds of around 300 Mbps. These two frequency bands are higher than what Verizon is using and so won’t carry as far from the curb to homes, so we’ll have to wait until somebody tests this to see if it’s feasible. The big cost of this business plan will still be the cost of building the fiber to feed the transmitters.

The really interesting use of the spectrum is for indoor hotspots. The spectrum can easily deliver multiple gigabits of speed within a room, and unlike WiFi spectrum it won’t go through walls and interfere with neighboring rooms. This spectrum would eliminate many of the problems with WiFi in homes and in apartment buildings – but again, it needs to first be built into laptops, smart TVs, and other devices.

Unfortunately, the vendors in the industry are currently focused on developing equipment for the licensed spectrum that the big cellular companies will be using. You can’t blame the vendors for concentrating their efforts on the 24, 28, and 39 GHz ranges before looking at these alternate bands. There is always a bit of a catch-22 when introducing any new spectrum – a vendor needs to make the equipment available before anybody can try it, and vendors won’t make the equipment until they have a proven market.

Electronics for millimeter wave spectrum are not as easily created as equipment for lower frequency bands. For instance, in the lower spectrum bands, software-defined radios can easily change between nearby frequencies with no modification of hardware. However, each band of millimeter wave spectrum has different operating characteristics and specific antenna requirements, and it’s not nearly as easy to shift between a 39 GHz radio and a 60 GHz radio – the requirements are different for each.

And that means that equipment vendors will need to enter the market if these spectrum bands are ever going to find widespread public use. Hopefully, vendors will find this worth their while because this is a new WiFi-like opportunity. Wireless vendors have made their living in the WiFi space, and they need to be convinced that they have the same opportunity with these widely available spectrum bands. I believe that if some vendor builds indoor multi-gigabit routers and receivers, the users will come.

The Busy Skies

I was looking over the stated goals of the broadband satellite companies and was struck by the sheer number of satellites being planned. The table further down in the blog shows plans for nearly 15,000 new satellites.

To put this into perspective, consider the number of satellites ever shot into space. The United Nations Office for Outer Space Affairs (UNOOSA) has been tracking space launches for decades. They report that there have been 8,378 objects put into space since the first Sputnik in 1957. As of the beginning of 2019, there were 4,987 satellites still in orbit, although only 1,957 were still operational.

There was an average of 131 satellites launched per year between 1964 and 2012. Since 2012 we’ve seen 1,731 new satellites, with 2017 (453) and 2018 (382) seeing the most satellites put into space.

The logistics for getting this many new satellites into space is daunting. We’ve already seen OneWeb fall behind schedule. In addition to these satellites, there will continue to be numerous satellites launched for other purposes. I note that a few hundred of these are already in orbit. In the following table, “Current” means satellites that are planned for the next 3-4 years.

                Current    Future     Total
Starlink          4,425     7,528    11,953
OneWeb              650     1,260     1,910
Telesat             117       512       629
Samsung                     4,600     4,600
Kuiper                      3,326     3,326
Boeing                        147       147
Kepler                        140       140
LeoSat               78        30       108
Iridium Next         66                  66
SES O3b              27                  27
Facebook              1                   1
Total             5,192     9,300    14,492

While space is a big place, there are some interesting challenges from having this many new objects in orbit. One of the biggest concerns is space debris. Low-earth-orbit satellites travel at a speed of about 17,500 miles per hour to maintain orbit. When satellites collide at that speed, they create a large number of new pieces of space junk, also traveling at high speed. NASA estimates there are currently over 128 million pieces of orbiting debris smaller than 1 centimeter and 900,000 objects between 1 and 10 centimeters.
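
The 17,500 mph figure is easy to verify from basic orbital mechanics; the 550 km altitude below is just a representative choice for a broadband constellation.

    # Circular orbital speed at an assumed low-earth-orbit altitude: v = sqrt(GM / r).

    import math

    G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
    M_EARTH = 5.972e24      # mass of the Earth, kg
    R_EARTH = 6_371_000     # mean radius of the Earth, m
    altitude_m = 550_000    # assumed orbital altitude

    r = R_EARTH + altitude_m
    v_ms = math.sqrt(G * M_EARTH / r)
    v_mph = v_ms * 2.23694
    print(f"Orbital speed at {altitude_m / 1000:.0f} km: {v_ms / 1000:.2f} km/s, about {v_mph:,.0f} mph")
    # Comes out near 7.6 km/s, roughly 17,000 mph.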

NASA scientist Donald Kessler described the dangers of space debris in 1978 in what’s now called the Kessler syndrome. Every space collision creates more debris, and eventually there will be a cloud of circling debris that will make it nearly impossible to maintain satellites in space. While scientists think that such a cloud is almost inevitable, some worry that a major collision between two large satellites, or malicious destruction by a bad actor government, could accelerate the process and could quickly knock out all of the satellites in a given orbit. It would be ironic if the world solves the rural broadband problem using satellites, only to see those satellites disappear into a cloud of debris.

Having so many satellites in orbit also concerns another group of scientists. The International Dark-Sky Association has been fighting against light pollution that makes it hard to use earth-based telescopes. The group now also warns that a large number of new satellites will forever change our night sky. From any given spot on the Earth, the human eye can see roughly 1,300 visible stars. These satellites are all visible, and once they are launched, mankind will never again see a natural sky that doesn’t contain numerous satellites at any given moment.

Satellite broadband is an exciting idea. The concept of bringing good broadband to remote people, to ships, and to airplanes is enticing. For example, the company Kepler listed above is today connecting monitors used for scientific purposes in places like the rims of volcanoes and on ocean buoys and is helping us to better understand our world. However, in launching huge numbers of satellites for broadband we’re possibly polluting space in a way that could make it unusable for future generations.

Robocalls and Small Carriers

In July, NTCA filed comments in the FCC docket that is looking at an industry-wide solution to fight against robocalls. The comments outline some major concerns about the ability of small carriers to participate in the process.

The industry solution to stop robocalls, which I have blogged about before, is being referred to as SHAKEN/STIR. This new technology will create a cryptographically signed token that verifies that a given call really originated with the phone number listed in the caller ID. Robocalls that spoof numbers can’t get this verification token. Today, robocallers spoof telephone numbers, meaning that they insert a calling number into the caller ID that is not real. These bad actors can make a call look like it’s coming from any number – even your own!

On phones with visual caller ID, like cellphones, a small indicator will appear to verify that the calling party is really calling from the number shown. Once the technology has been in place for a while, people will learn to ignore calls that don’t come with the token. If the industry does this right, it will become easier to spot robocalls, and I imagine a lot of people will use apps that will automatically block calls without a token.
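
Conceptually, SHAKEN/STIR is a sign-and-verify scheme: the originating carrier signs a token (a "PASSporT") covering the calling and called numbers with its certificate, and the terminating carrier checks the signature against a trusted certificate authority. The sketch below illustrates that flow using a shared-secret HMAC as a stand-in for the real certificate-based signature so that it runs without external libraries; the key and the phone numbers are made up.

    # Illustrative sign-and-verify flow; real SHAKEN/STIR uses carrier
    # certificates and public-key signatures, not a shared secret.

    import hashlib, hmac, json

    CARRIER_KEY = b"originating-carrier-secret"   # stand-in for a carrier certificate

    def sign_call(calling_number, called_number):
        claims = json.dumps({"orig": calling_number, "dest": called_number}, sort_keys=True)
        token = hmac.new(CARRIER_KEY, claims.encode(), hashlib.sha256).hexdigest()
        return claims, token

    def verify_call(claims, token):
        expected = hmac.new(CARRIER_KEY, claims.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, token)

    claims, token = sign_call("+13035551234", "+12025556789")
    print("legitimate call verified:", verify_call(claims, token))

    # A spoofed caller ID can't produce a valid token for its fake number.
    spoofed = json.dumps({"orig": "+19995550000", "dest": "+12025556789"}, sort_keys=True)
    print("spoofed call verified:  ", verify_call(spoofed, token))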

NTCA is concerned that small carriers will be shut out of this system, causing huge harm to them and their customers. Several network prerequisites must be in place to handle the SHAKEN/STIR token process. First, the originating telephone switch must be digital. Most, but not all, small carriers now use digital switches. Any telco or CLEC using an older non-digital switch will be shut out of the process, and to participate they’d have to buy a new digital switch. After the many-year decline in telephone customers, such a purchase might be hard to cost-justify. I’m picturing that this might also be a problem for older PBXs – the switches operated by private businesses. The world is full of large legacy PBXs operated by universities, cities, hospitals, and large businesses.

Second, the SHAKEN/STIR solution is likely to require an expensive software upgrade for the carriers using digital switches. Again, due to the shrinking demand for selling voice, many small carriers are going to have a hard time justifying the cost of a software upgrade. Anybody using an off-brand digital switch (several switch vendors folded over the last decade) might not have a workable software solution.

The third requirement to participate in SHAKEN/STIR is that the entire path connecting a switch to the public switched telephone network (PSTN) must be end-to-end digital. This is a huge problem because most small telcos, CLECs, cable companies, and other carriers connect to the PSTN using older TDM technology (based upon multiples of T1s).

You might recall a decade ago there was a big stir about what the FCC termed a ‘digital transition’. The FCC at the time wanted to migrate the whole PSTN to a digital platform largely based upon SIP trunking. While there was a huge industry effort at the time to figure out how to implement the transition, the effort quietly died and the PSTN is still largely based on TDM technology.

I have clients who have asked for digital trunking (the connection between networks) for years, but almost none of them have succeeded. The large telcos like AT&T, Verizon, and CenturyLink don’t want to spend the money at their end to put in new technology for this purpose. A request to go all-digital is either flatly refused, or else a small carrier is told that it must pay to transport its network traffic to some distant major switching point in places like Chicago or Denver – an expensive proposition.

What happens to a company that doesn’t participate in SHAKEN/STIR? It won’t be pretty, because all of the calls originating from such a carrier won’t get a token verifying that the calls are legitimate. This could be devastating to rural America. Once SHAKEN/STIR has been in place for a while, a lot of people will refuse to accept unverified calls – and that means calls coming from small carriers won’t be answered. This will also affect a lot of cellular calls because in rural America those calls often originate behind TDM trunking.

We already have a problem with rural call completion, meaning that there are often problems trying to place calls to rural places. If small carriers can’t participate in SHAKEN/STIR, after a time their callers will have real problems placing calls because a lot of the world won’t accept calls that are not verified with a token.

The big telcos have assured the FCC that this can be made to work. It’s my understanding that the big telcos have mistakenly told the FCC that the PSTN in the country is mostly all-digital. I can understand why the big telcos might do this because they are under tremendous pressure from the FCC and Congress to tackle the robocall issue. These big companies are only looking out for themselves and not the rest of the industry.

I already had my doubts about the SHAKEN/STIR solution because my guess is that bad actors will find a way to fake the tokens. One only has to look back at the decades-old battles against spam email and against hackers to understand that solving robocalling is going to require a long back-and-forth battle – the first stab at SHAKEN/STIR is not going to fix the problem. The process is even more unlikely to work if it doesn’t function for large parts of the country and for whole rural communities. The FCC needs to listen to NTCA and other rural voices and not create another disaster for rural America.