Broadband Stats for 2019

Leichtman Research Group recently released the broadband customer statistics for the end of 2019 for the largest cable and telephone companies. Leichtman compiles most of these numbers from the statistics the companies provide to stockholders; the Cox numbers are an estimate.

The numbers are lower than the broadband customer counts these same companies report to the FCC, and I think most of the difference is due to the way many of these companies count broadband to apartment buildings. If they provide a gigabit pipe to serve an apartment building, they might count that as one customer, whereas for FCC reporting they are likely to count the number of apartment units served.

                          4Q 2019    2019 Change    % Change
Comcast                28,629,000      1,407,000        5.2%
Charter                26,664,000      1,405,000        5.6%
AT&T                   15,389,000       (312,000)      -2.0%
Verizon                 6,956,000         (5,000)      -0.1%
Cox                     5,170,000        110,000        2.2%
CenturyLink             4,678,000       (134,000)      -2.8%
Altice                  4,187,300         71,900        1.7%
Frontier                3,500,000       (235,000)      -6.3%
Mediacom                1,328,000         64,000        5.1%
Windstream              1,049,300         28,300        2.8%
Consolidated              784,165          5,195        0.7%
WOW                       781,500         21,900        2.9%
Cable ONE                 773,000         39,000        5.3%
TDS                       455,200         31,800        7.5%
Atlantic Broadband        451,463         25,857        6.1%
Cincinnati Bell           426,700          1,100        0.3%
Total                 101,222,628      2,525,052        2.6%

Leichtman says this group of companies represents 96% of all US broadband customers. For the year, these large ISPs collectively grew their customer base by 2.6%.
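
That 2.6% figure is easy to reproduce from the table. A minimal Python sketch using the totals above:

    # Reproduce the 2019 growth rate from the subscriber totals above.
    total_eoy_2019 = 101_222_628   # subscribers at end of 2019
    net_adds_2019 = 2_525_052      # net additions during 2019

    total_eoy_2018 = total_eoy_2019 - net_adds_2019
    growth = net_adds_2019 / total_eoy_2018
    print(f"2019 growth: {growth:.1%}")   # -> 2019 growth: 2.6%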

The customer additions for 2019 for these large ISPs are just slightly higher than the customer additions for 2018. The cable companies performed a little better in 2019, while losses continue to accelerate for the big telcos. The big telco losers for the year were Frontier, which lost 6.3% of its customer base, CenturyLink (down 2.8%), and AT&T (down 2.0%). AT&T claims to have added 1.1 million fiber customers for the year, so it is still losing a lot of DSL customers. Frontier is a total disaster, and there may be no recovery for the company if it keeps losing broadband customers at a pace of over 6% annually.

                         2018          2019
Cable Companies     2,987,721     3,144,657
Telcos               (472,124)     (619,605)
Total               2,425,597     2,525,052

The two best-performing companies were again Comcast and Charter, which each added over 1.4 million customers for the year, while the rest of the ISPs on the list, including the other cable companies, collectively lost nearly 300,000 customers.

One note on the above numbers: the TDS and Cable One numbers include adjustments due to small acquisitions.

We Need Penalties for Bad FCC Mapping Data

The FCC has been in the process of implementing revised mapping that will fix a lot of the problems with the current 477 broadband reporting process. The needed changes should be further boosted by the Broadband DATA Act that was signed into law on Monday. The new mapping will use polygons, and ISPs are supposed to show precise coverage areas for where they offer or don’t offer broadband.

If ISPs do this correctly – and that’s a big if – then this will fix at least one big problem that I call the town boundary problem. The current FCC data gathering asks ISPs to report the fastest speed they can deliver in a census block. Unfortunately, census blocks don’t stop at town boundaries, so the FCC databases regularly assume that all of the people outside of town can receive the same speeds as people inside the town. If cable companies and fiber providers draw honest polygons that stop where their networks stop, this boundary issue should disappear.
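
A minimal sketch of the polygon logic, using the open-source Shapely library; the rectangles below are invented for illustration, not real geography:

    # Toy illustration of the town boundary problem.
    # Coordinates are made up; real reporting would use actual GIS data.
    from shapely.geometry import Polygon

    census_block = Polygon([(0, 0), (10, 0), (10, 10), (0, 10)])      # spans town and countryside
    network_footprint = Polygon([(0, 0), (4, 0), (4, 10), (0, 10)])   # where the network actually reaches

    # Old 477 logic: the whole census block counts as served.
    claimed = census_block.area
    # Polygon logic: only the overlap between the block and the network counts.
    honest = census_block.intersection(network_footprint).area

    print(f"claimed: {claimed:.0f}, honest: {honest:.0f}")   # claimed: 100, honest: 40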

Unfortunately, the benefits of the new mapping are not so clear cut in rural areas. DSL providers and fixed wireless providers are also supposed to draw polygons. The rural polygons are supposed to only cover existing customers as well as places that can be connected within ten business days of a customer request for activation.

I’ve been spending a lot of time lately looking through the claimed coverage on Form 477 by telco DSL and WISPs. Some of the things I see in the FCC database are massively erroneous and I’m not convinced that rural ISPs will clean up their act even if they are forced to use the polygons. Consider a few examples:

  • I’ve been working with a sparsely populated county that has large rural census blocks – which is pretty normal. The incumbent telco claims 25/3 Mbps coverage for almost all of the rural areas of the county. We’ve been working with the county to have residents perform speed tests and have seen almost no speeds faster than 5 Mbps, with some speeds on DSL below 1 Mbps. The incumbent telco does widely offer DSL, but the claimed 25/3 Mbps capability reported to the FCC is pure fantasy.
  • I’m working with another rural county where two WISPs claim to provide 100 Mbps wireless service covering the whole county. The WISPs don’t operate towers in the county and their nearest towers are in a nearby county. The county has undertaken a large canvass of residents to identify the ISPs in the county and so far hasn’t found even one customer of these WISPs. Even if they find a few customers, the WISPs can’t deliver 100 Mbps wireless broadband from towers more than 10 miles away – it’s doubtful they deliver that much speed even next to the existing towers.

I am not convinced that the revised FCC mapping is going to fix these two situations. The incumbent telco is going to say that they can install DSL within ten business days everywhere in the county – so they might not shrink their claimed coverage when going to the polygons. The problem with the telco isn’t the coverage area – it’s the claimed speeds. If the new FCC reporting still allows ISPs to overstate speeds, then nothing will be fixed in this county with the new mapping.
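
A minimal sketch of the kind of speed-test comparison we run for counties; the test results below are invented placeholders, not the county’s actual data:

    # Compare resident speed tests against the speed claimed on Form 477.
    # The sample results are hypothetical.
    claimed_mbps = 25
    tests_mbps = [0.8, 1.5, 2.2, 3.1, 4.0, 4.6, 4.9, 5.2]

    meeting_claim = [t for t in tests_mbps if t >= claimed_mbps]
    share = len(meeting_claim) / len(tests_mbps)
    print(f"{share:.0%} of tests meet the claimed {claimed_mbps} Mbps")   # -> 0%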

The two WISPs have a double problem. Their coverage areas seem to be highly exaggerated, and the claimed speeds are exaggerated as well – there is zero chance that these WISPs are delivering anything remotely close to 100 Mbps from a distant tower. Unfortunately, the WISPs might still claim they can install in this area within 10 business days and might not shrink their claimed coverage. And unless they are somehow forced to, the WISPs might not lower the claim of 100 Mbps.

There are real-life consequences to the claims made in these two examples. In the first example, the FCC believes the whole county has access to 25/3 Mbps DSL, when in fact it looks like nobody has DSL even close to that speed. The county with the two WISPs is in even worse shape. The FCC considers this county completely covered with 100/10 Mbps broadband, when in fact there is no fast broadband coverage. In reality, the fastest broadband option in some parts of the county is a third WISP that markets speeds of 15 Mbps but mostly delivers less.

The consequences of the current mapping are dire for both of these counties. Neither county is included in the FCC’s just-published list of areas eligible for the $20 billion RDOF grants, because the FCC thinks these counties have good broadband. If the ISP data being reported was honest, both counties would be eligible for these grants. These counties might be eligible for other grants that allow the applicant to challenge the FCC speed data – but such challenges are a lot of work and don’t always get accepted.

I know there are hundreds of other counties in the same situation, and I have little faith that new mapping is going to fix this in rural areas. What is needed are severe fines for ISPs that overstate speed or coverage areas. In this case, the existing ISPs are causing huge economic harm to these counties and the fines ought to be set accordingly. I don’t understand what motivates ISPs to claim speeds that don’t exist – but if we are going to fix rural broadband, we need to start by kicking the bad ISP actors hard in the pocketbook.

The Broadband DATA Act allows for a challenge process so that localities can force honest reporting. The FCC needs to implement this immediately, without more study or delay.

Using Bigger Bandwidth Applications

The recent Cisco Annual Internet Report for 2018 – 2023 had one chart that I found intriguing. The purpose of Cisco’s report is to look at the future of broadband usage, and the report included a chart showing the amount of bandwidth needed for various web functions. To me this list was reminiscent of the list the FCC made in 2015 when it set the definition of broadband at 25/3 Mbps – except that all of the items on this list require more bandwidth than the functions the FCC foresaw just five years ago.

I think Cisco’s point is that we find ways to use more broadband when it’s available. Most of the items on this list are already in use today, with the last few on the list expected in the near future.

  • 4K security cameras – 16 Mbps
  • Streaming 4K Video – 16 Mbps
  • Virtual Reality Streaming – 17 Mbps
  • Self-Driving Car Diagnostics (done when your car pulls into the driveway) – 20 Mbps
  • Cloud Gaming – 30 Mbps
  • Streaming 8K Video – 51 Mbps
  • 8K Wall TV – 100 Mbps
  • Online HD Virtual Reality – 167 Mbps
  • Online 8K Virtual Reality – 500 Mbps

Cisco notes that these functions have become viable for the public as the amount of bandwidth to homes has grown. Anybody with broadband speeds of 125 Mbps or faster should be able to use all but the last few services. A lot of US homes now have Internet speeds in this range, as Comcast, Charter, and the other big cable companies have increased basic speeds to 100 – 200 Mbps with the introduction of DOCSIS 3.1. Charter increased my home connection last year from 60 Mbps to 135 Mbps.
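
One way to think about Cisco’s list is to stack concurrent uses against a home’s subscribed speed. A quick sketch, using Cisco’s numbers from the list above and an arbitrary mix of applications:

    # Can a subscription handle a set of simultaneous applications?
    # Bandwidth figures are Cisco's estimates from the list above.
    needs_mbps = {"4K stream": 16, "cloud gaming": 30, "8K stream": 51}
    home_speed_mbps = 135   # e.g., my Charter connection

    total = sum(needs_mbps.values())
    verdict = "fits" if total <= home_speed_mbps else "does not fit"
    print(f"needed: {total} Mbps of {home_speed_mbps} Mbps -> {verdict}")   # needed: 97 -> fits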

4K security cameras have been on the market for a few years. They provide enough resolution to clearly identify people at the front door or outside a factory. The 16 Mbps of bandwidth is needed to upload video images into the cloud or to view the camera feed remotely over the web. Interestingly, a 4K security camera needs a fast upload speed to work properly – something that is still lacking for most cable company connections.

The web is now full of 4K videos on Netflix, Amazon Prime, YouTube and elsewhere. There are already web sites doing virtual reality streaming.

Self-driving car diagnostics is an interesting category. My wife’s 2019 Subaru already connects to the web every time she pulls into the driveway. This connection is likely not at 20 Mbps, but her car is doing diagnostics, uploading driving results to Subaru, and also making driving statistics available to us.

Cloud gaming is already here, although most applications are streaming at 4K or slower speeds. However, as several of the game companies have migrated online, the intensity and bandwidth needed for games is expected to increase, and Cisco pegs that as needing a 30 Mbps connection. What this speed requirement doesn’t capture is that kids routinely run several games simultaneously.

Bandwidth needs really shoot up for 8K video at 51 Mbps per stream. 8K content is already widely available on YouTube and other websites, and 8K TVs have been on the market for a few years. Cisco’s prediction of 100 Mbps for an 8K wall TV assumes multiple simultaneous streams – something that sports fans routinely do.

Cisco also lists two near-future applications that will be real bandwidth hogs. They estimate that HD virtual reality done online will require 167 Mbps. They set 8K virtual reality as needing 500 Mbps. These functions are going to need faster broadband connections than what most homes have today. However, OpenVault reports that the number of US homes subscribing to a gigabit connection doubled in 2019 to 2.8% of all households. As that number keeps growing there will finally be a market for applications that need giant bandwidth. For years the industry has searched for gigabit applications, but nobody developed them since there have been so few homes that could use them. 8K virtual reality would bring true immersive virtual reality into the home – but ISPs are going to hate it. They love selling gigabit connections, but they don’t really expect homes to use that much bandwidth.

Quantifying the Homework Gap – Finally a Definitive Study

The Quello Center, part of the Department of Media and Information at Michigan State University, just released a definitive study that looks at the impact of a lack of broadband on students. The study was done in conjunction with Merit Networks, the organization that acts as the ISP for schools in Michigan.

I describe the study as definitive because it used study techniques that isolate the impact of broadband from other factors such as sex, race, and family income. The study was done in conjunction with the schools to allow Quello researchers to get blind performance results from student participants without violating student confidentiality. The study involved 3,258 students in grades 8 – 11 in Michigan from schools described as being in rural areas.

The study showed significant performance differences for students with and without home broadband. Students with no Internet access at home tested lower on a range of metrics including digital skills, homework completion, and grade point average. Some of the specific findings include:

  • Students with home Internet access had an overall grade point average of 3.18 while students with no Internet access at home had a GPA of 2.81.
  • During the study, 64% of students with no home Internet access sometimes left homework undone compared to only 17% of students with a high-speed connection at home.
  • Students without home Internet access spent an average of 30 minutes longer doing homework each evening.
  • The study showed that students with no Internet at home often had no alternative access to broadband. 35% of students with no broadband also didn’t have a computer at home. 34% of students had no access to alternate sources of broadband such as a library, church, community center, or homes of a neighbor or relative.

One of the most important findings was that there is a huge gap in digital skills for students without home broadband. To quote the study, “The gap in digital skills between students with no home access or cell phone only and those with fast or slow home Internet access is equivalent to the gap in digital skills between 8th and 11th grade students.” It’s almost too hard to grasp that the average 11th grade student without home broadband has the digital skills of an 8th grader with home broadband. Digital skills involve not only competence in working with technology, but also the ability to work efficiently, to communicate effectively with others, and to manage and evaluate information.

Students with lower digital skills don’t perform as well on standardized tests. Students who are even modestly below average in digital skills (one standard deviation below the mean) rank nearly 7 percentiles lower on their total SAT/PSAT scores, 5 percentiles lower in math, and 8 percentiles lower in evidence-based reading and writing. Students who are even moderately lower in digital skills are also 19% less likely to consider a STEM-related career (that’s science, technology, engineering, and math).

The study also showed lower expectations for students without broadband at home. For example, 65% of students with home broadband have plans to pursue post-secondary education. Only 47% of students with no Internet access have such plans.

This study is significant because it is the first study I know of that isolates the impact of home broadband from other factors. Other studies have shown that lack of broadband hurts school performance, but in those studies it was impossible to isolate Internet access from factors like household income levels.

This study should be a wake-up call for getting broadband into the home of every student. It’s not tolerable to allow a big percentage of our kids to underperform in school due to the lack of home broadband. We know that underperforming in school translates to underperforming in lifetime earnings, and so the cost to society for not fixing the homework gap is far larger than the cost to find a broadband solution. If you are lucky enough to have a home computer – do the math.

Low-orbit Satellite Security

I’ve been watching the progress of the low-orbit satellite providers which are promising to bring broadband solutions across the planet. There has been some serious movement since the last time I discussed their status.

On January 29, Starlink launched its latest round of low-orbit satellites, bringing the number in space to 242. Not all of these will be delivering broadband. The first half dozen satellites were test units to try out various concepts. Starlink will use 10 of the most recent batch to test the ability to ‘de-orbit’ and bring satellites back to earth.

The latest Starlink satellites weigh 260 kilograms, up from 227 kilograms for the first satellites launched in May 2019. The latest satellites are designed to be 100% demisable, meaning they will completely burn up in the atmosphere upon reentry.

Starlink still has a long way to go to meet its business plan. If they meet all of the planned launches this year, they’ll have 1,500 satellites in orbit. They’ve told the FCC that they plan to have 6,000 satellites in orbit by the end of 2024 and 12,000 by the end of 2027. As they add new satellites, the company must also replace the short-lived satellites, which only have a planned life of about five years. That means that by 2026 they’ll have to launch 1,200 satellites a year just to maintain the first fleet of 6,000 satellites.
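
The replacement math is simple: a steady-state fleet with a five-year satellite life must be replaced at one-fifth of its size every year. A quick check:

    # Steady-state replacement launches for a short-lived satellite fleet.
    fleet_size = 6_000        # planned fleet by end of 2024
    satellite_life_years = 5  # approximate planned lifespan

    replacements_per_year = fleet_size / satellite_life_years
    print(f"{replacements_per_year:.0f} satellites per year just to stand still")   # -> 1200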

We also saw some progress being made by OneWeb, the satellite company founded by Greg Wyler with backing from Virgin, Airbus, SoftBank, and Qualcomm. The company launched 6 satellites last year. They recently launched 34 more satellites and the company’s goal is to put 200 satellites in orbit this year.

These launches show that the industry is for real and that over the next few years we’ll see big numbers of low-orbit satellites in the sky. We finally heard just last week from Elon Musk that he does not intend to compete with rural ISPs and will only sell satellite broadband in the most remote places. He still hasn’t disclosed prices – but if he doesn’t compete with existing ISPs he’s not going to have to be competitively priced. Starlink hints that it might add some customers by the end of this year, but the serious launch of broadband service will start next year.

It’s starting to feel odd that these companies won’t talk about broadband speeds. As with any broadband technology, the degree of oversubscription will affect performance. The first customers to use the satellites might see blazingly fast speeds – but speeds will drop quickly as customers are added. One of the biggest temptations facing these companies will be to oversubscribe the technology.
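
A toy oversubscription model shows why early users will see great speeds that erode as subscribers are added. The capacity and usage figures below are assumptions for illustration, not published numbers from any satellite provider:

    # Toy model: shared capacity divided among simultaneously active users.
    capacity_mbps = 20_000   # hypothetical capacity over one service area
    peak_usage = 0.10        # assume 10% of subscribers are busy at peak

    for subscribers in (100, 1_000, 10_000):
        active = subscribers * peak_usage
        print(f"{subscribers:>6} subs -> {capacity_mbps / active:,.0f} Mbps each at peak")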

Like any new technology, satellite broadband brings a new set of worries. A recent article in Fast Company by William Akoto asks how we’re going to protect satellite fleets from hacking. If the proposed satellite constellations grow as promised, there will be tens of thousands of satellites circling the earth delivering broadband. Akoto points out that the satellite supply chain is far from secure and open to tampering. The satellites are being constructed by a number of different vendors using off-the-shelf components, and a satellite is not much more than a router connected to a solar array.

It’s clear that virtually no hardware or software system can withstand a determined hacking effort. The satellites will fly over every country on earth, giving ample opportunity for hackers to attack satellites directly overhead. The satellites will be controlled by earth station hubs, which might be hacked in the same manner as big corporate server farms.

The consequences of hacking are more dire for satellites than for land-based technology. Hackers could turn satellites off, making them dead weight in space. They could rearrange the solar collectors to make the satellites run out of power. Hackers could direct whole fleets to come back to earth and burn up in the atmosphere.

In the worst scenario, hackers could crash satellites together, creating a lot of space debris. NASA scientist Donald Kessler described the dangers of space debris in 1978 in what’s now called the Kessler syndrome. Every space collision creates more debris, eventually creating a cloud of circling debris that makes it impossible to maintain satellites in space. Many scientists think such a cloud is almost inevitable, but malicious hacking could create it quickly.

Hacking won’t only affect rural broadband. The ability of satellites to connect remote locations into a unified network is going to be attractive to a wide range of industries. It’s not hard to imagine the satellite constellations being used to connect to critical infrastructure like rural electric grids, rural dams, and industries of all sorts that connect to rural or third-world locations.

Industry experts are already calling for regulation of satellite security. They believe that governments need to step in to mandate that satellite constellations be as safe as possible. While this could be done voluntarily by the industry, there doesn’t seem to be any such effort afoot. The consequences of not getting this right could be a disaster for the planet.

A New Paradigm for Conventions?

In the last few days, I’ve seen numerous notices of telecom conventions and meetings being canceled or postponed. Many big corporations that attend conventions have already decided that their employees can’t undertake non-essential travel. It’s likely that local governments will soon cancel conventions even if meeting organizers won’t. It seems, at least for this year, that big public telecom events will be rare, if they happen at all.

I’ve been thinking about this for a few days, and it seems like a good time to reexamine how we hold telecom conventions. Considering how much technology has changed, the format of telecom conventions hasn’t changed in the forty years I’ve been going to them. There is usually a string of speakers during the day using PowerPoint presentations (it used to be overhead slides), mixed in with panels of folks discussing various topics. There are vendors that pay for coffee breaks and meals, hoping that people will stop by their booths to chat.

You probably wouldn’t be able to tell much difference if you were plopped down into a convention from twenty years ago – other than that the laptops were larger and the speakers were talking about the big breakthroughs in DSL. There is one big difference I’ve noted that should concern convention planners – there are not nearly as many young people attending conventions today as there were twenty years ago. They find them boring and unproductive.

I’ve gotten a few glimpses of a different way to meet. FierceWireless just announced a completely online ‘convention’ for 5G. I call it a convention because it stretches over multiple days and includes the array of speakers you’d expect to see at a live 5G convention. I also got a notice that WISPAmerica 2020 is going virtual – no details yet on how they’ll do it.

Having virtual portions of conventions is an idea that’s long overdue. It’s got to be a lot easier to assemble good speakers for virtual presentations. Virtual speakers can devote a few hours rather than a few days to talk at a convention. People like FCC Commissioners or presidents of major telecom firms might speak at a lot more events if they are able to speak from their office for an hour instead of making a trip. Online sessions might also make it easier to ask questions of presenters – sessions are freed from the constraints of clearing out meeting halls for the next presentation, and question sessions could be extended as needed.

If we really want to duplicate the convention experience, then having virtual speakers is not enough. The main reason that a lot of people, including me, go to conventions is the networking and the chance to make new connections in the industry. As a consultant, I invariably meet a few potential new clients and I get to catch up with existing clients. I also go to check in with the various vendors to see what’s new.

I don’t think it would be hard to duplicate the networking in a virtual convention. Speakers, vendors, and attendees could post calendars and make appointments to speak virtually with each other in 15- or 30-minute slots. This would be a lot more productive than a live convention, because I always come home feeling like I haven’t met with everybody I should have.
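
The scheduling piece is trivial technology. A minimal sketch of generating bookable 15-minute slots for a virtual convention morning (the date and times are made up):

    # Generate bookable 15-minute networking slots for one morning.
    from datetime import datetime, timedelta

    start = datetime(2020, 5, 4, 9, 0)   # hypothetical convention day, 9:00 am
    end = datetime(2020, 5, 4, 12, 0)    # morning block ends at noon
    slot = timedelta(minutes=15)

    slots = []
    t = start
    while t + slot <= end:
        slots.append(t.strftime("%H:%M"))
        t += slot
    print(f"{len(slots)} bookable slots: {slots[0]} ... {slots[-1]}")   # 12 slots: 09:00 ... 11:45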

The coronavirus isn’t going to last forever, and it will die out or we’ll eventually find an effective vaccine. Virtual meetings like the one I describe above could keep communications in the industry flowing this year and not put the industry on hold. If anything, the giant increase in the demand to work-from-home and the demand for telemedicine means that the broadband industry will likely be busier than ever.

My hope is that after this crisis is over we don’t return to the existing convention format. Future live conventions would benefit from these same ideas. Bringing in virtual speakers can improve the quality of the message being conveyed – most conventions have a few good speakers plus a host of the same folks who speak year after year, and a mix of live and virtual speakers would be an upgrade. Scheduling meetings between attendees is an idea that’s ten years overdue.

This would also be a boon to vendors. The current system of having valuable employees man booths for several days to meet with folks in hurried conversations is incredibly unproductive. Having a reservation system to easily schedule virtual meetings with vendors would be incredibly attractive to me. It ought to also be attractive to vendors, who would get quality time with interested attendees instead of trying to juggle several folks standing around their booth at the same time. I can’t tell you how many vendor booths I’ve walked away from because they were busy with somebody else.

Of course, this raises the question of eventually having virtual attendees. Paying a fee could give virtual attendees access to the speaker sessions and allow for one-on-one meetings with speakers and vendors. I know there are many conventions that I’ve considered attending but that didn’t fit into my schedule. I would participate in more events virtually if I could buy a half-day, full-day, or several-day pass, priced appropriately.

The above scenario is a big break from the way we’ve traditionally held conventions. I know that I would find the virtual format I’ve described to be a lot more efficient and productive than what actually happens at conventions. We already have the technology that could make this work – although somebody has to bundle this into a convention product. There are folks who attend conventions to get out of the office and have a beer with colleagues – and that’s one reason conventional conventions won’t totally lose their appeal in the future. But if we want to make conventions relevant to the next generation of telecom employees and make them more efficient for everybody today, then mixing a virtual component into conventions ought to become the new norm.

The Dirty Secret of Coaxial Broadband

The US has clearly pinned its hopes for modern broadband on the big cable companies. At the end of 2019, the big cable companies had almost 68 million customers compared to 33 million for the big telcos. Any discussion of broadband in urban markets is mostly a discussion of big cable company broadband. The cable companies will continue to grow their market dominance as urban DSL customers keep migrating to cable modems. In 2019 the big cable companies added 3.1 million customers while the telcos lost over 600,000.

The big cable companies have all advertised to their customers that they have upgraded to the latest technology, DOCSIS 3.1, and can now provide gigabit broadband – at a price set well over $100 per month in most markets.

It’s easy to think of urban cable systems as up-to-date, high-tech, and able to deliver fast broadband speeds. While this is true in some cities and some neighborhoods, the dirty secret of the cable industry is that its networks are not all up to snuff. Everybody is aware of the aging problems that have plagued the telephone copper networks – but it’s rare to hear anybody talk about the aging of the cable companies’ copper networks.

Most of the cable networks were built in the 1970s, some even a little earlier. Just like the telephone copper networks, the coaxial networks are getting old – a network built around 1970 is now fifty years old.

Cable coaxial networks suffer more from deterioration than do telephone copper networks. The copper wires in a coaxial system are much larger and the wires hanging on poles act like a giant antenna that can receive a range of different frequencies. Any physical opening into the wire through a splice point or from aging creates a new ingress point for external frequencies – and that equates to noise on the coaxial network. Increased noise translates directly to decreased performance of the network. The capacity of the older coaxial networks is significantly lower than when the networks were first constructed.
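
The link between ingress noise and lost capacity can be illustrated with the Shannon-Hartley formula; the numbers below are illustrative only, not measurements from any real plant:

    # Shannon-Hartley: capacity falls as noise pushes down the signal-to-noise ratio.
    import math

    bandwidth_hz = 96e6   # e.g., a 96 MHz block of spectrum on the coax

    for snr_db in (40, 33, 27):   # progressively noisier plant
        snr = 10 ** (snr_db / 10)
        capacity_mbps = bandwidth_hz * math.log2(1 + snr) / 1e6
        print(f"SNR {snr_db} dB -> about {capacity_mbps:,.0f} Mbps")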

Another issue with coaxial networks is that the type of coaxial cable used has changed over time and some of the coax used in the early networks can’t handle the capacity needed today. Some older coax has been replaced in urban networks, but not all. Coaxial networks in smaller towns still can contain a lot of older-generation coaxial cables.

These issues mean that coaxial networks don’t always perform as well as the cable companies tout. I can use the network in my city of Asheville, NC as an example. Charter announced nationally that when it upgraded to DOCSIS 3.1 it had a goal of raising broadband speeds everywhere to 200 Mbps. My speed at the modem is 135 Mbps. I’m not complaining – I’m glad they increased my speed – but there must be issues in the local network that stopped Charter from achieving its 200 Mbps goal.

We undertake surveys and citywide speed tests across the country, and we often see that the performance of coaxial networks varies by neighborhood. We’ve seen neighborhoods with more outages, more variance in download speeds, and overall slower speeds than the rest of the city. These problems are almost certainly due to differences in the quality of the coaxial network from one part of a city to another.

Cable companies could bring older neighborhoods up to snuff, but such upgrades are expensive. It might mean replacing a lot of drops and any runs of older coaxial cable. It might mean replacing or re-spacing amplifiers. It often means replacing all of the power taps (the devices that connect homes to the distribution cables). The upgrading effort is labor-intensive, and that means costly.

I think this means that many cities will never see another unilateral increase in broadband speeds unless the cable companies first make big investments. The cable companies have increased speeds every few years since 2000 to keep ahead of the telcos and to keep customers happy with their service. I fear that since cable companies are becoming de facto monopolies in most cities, they have lost the incentive to get faster if that means spending money. The coaxial networks and speeds we have in place today might be what we still have a decade from now, only with networks that are another ten years older.

Apple Satellites?

Word has leaked out that Apple is working on a satellite project. The company is at the beginning of the research project, so there is no way to know exactly what they have in mind. For example, is the company considering launching satellites or would they lease capacity from one of the other planned satellite networks?

The fact that Apple is working on the concept is a good segue to discuss the many ways that satellite connectivity could be useful to Apple or other companies. It’s hard to find any press that doesn’t assume that the satellite constellations will be used mostly for rural broadband, but there are numerous other ways that Apple or others could use low-orbit satellites.

One of the more obvious ways that Apple could use satellites is by offering its own branded broadband to go with its devices. It’s not hard to imagine iMacs or iPads being bundled with Apple satellite broadband, particularly for customers that don’t have adequate home broadband today. With the current vision of satellite technology, any customer connected this way would need the same sort of dish at their home as envisioned by Starlink – a flat, dinner-plate-sized antenna that doesn’t have to be ‘aimed’ at the satellites.

Apple might instead be thinking of using satellites to communicate with cellphones, which would allow the company to un-tether from the cellular companies. It’s unlikely that the fleets of low-orbit broadband satellites could communicate with something as small as a cellphone. However, a new company – AST & Science – recently announced that it has found a way for cellphones to communicate through satellites. This involves putting up large satellites that would act as cellular repeaters in the sky. For a space nerd like me this brings back memories of Echo 1, a giant balloon that acted as a passive reflector of microwave signals. AST & Science says this kind of satellite would act as a cellular repeater rather than as a cell site – it would connect cellphones to a cell site elsewhere.

Apple might also be considering an automobile antenna that can work with satellites. A satellite-to-car antenna would open up a host of products for Apple, including smart car connectivity products. This would not be the data-intensive connection imagined by the self-driving car folks, but even a relatively slow satellite connection of 25 Mbps would open up a whole range of broadband products for use in vehicles.

Apple’s early research might go nowhere, and the company might just be brainstorming about what is practically possible. The fact that companies like Apple are looking at satellites points out that there are likely many applications for satellite broadband that nobody is talking about. It makes sense that the press, for now, is concentrating on whether any of the proposed satellite constellations ever get launched, because until they are in the sky all of this discussion is purely speculative.

However, the possibilities are endless. How many uses can be developed for a worldwide broadband network that’s available everywhere? Some applications seem obvious, like tying together communications for all of the locations of a worldwide corporation into a big private network. It’s not hard to imagine school systems using the satellites as the way to get broadband for homework to every student. I’m betting there are hundreds of other ideas that have market potential. It will be interesting to see which ones are of the most interest to Apple.

The Explosive Growth of M2M Traffic

The Cisco Annual Internet Report for 2018 – 2023 is full of interesting predictions this year. One of the more intriguing is that Machine-to-Machine (M2M) traffic (which Cisco also refers to as Internet of Things (IoT) traffic) will become a little more than half of all traffic on the web by 2023. That’s an amazing prediction until you stop and think about all of the devices that communicate with the Internet without needing a human interface.

Cisco cites several reasons why M2M traffic will grow so much in the next few years. The primary one is the proliferation of M2M devices. Cisco predicts that over the 5-year period connected devices will grow 2.4 times, from 6.1 billion in 2018 to 14.7 billion in 2023. That’s a 19% compounded growth rate, and by 2023 it equals 1.8 connected devices for every person on earth.
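
That 19% figure is just the compound annual growth rate implied by Cisco’s two endpoints. A quick check:

    # Compound annual growth rate implied by Cisco's device forecast.
    devices_2018 = 6.1e9
    devices_2023 = 14.7e9

    cagr = (devices_2023 / devices_2018) ** (1 / 5) - 1
    print(f"CAGR: {cagr:.0%}")   # -> CAGR: 19%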

The second reason for the growth is that we are using M2M devices for a lot more functions than just a few years ago. Cisco is predicting fast growth in the following categories of M2M:

  • They predict the number of worldwide connected home devices will grow by 20% per year. This is the largest category of devices and will represent just under 50% of connected devices by 2023. This category includes things like home automation, home security and video surveillance, connected white goods (the new term for connected appliances), and our communications and entertainment devices like smart TVs, laptops, desktops, and smartphones.
  • They predict that connected car applications will be the fastest-growing sector, growing at 30% per year. This includes connections made for things like fleet management, in-vehicle entertainment, emergency calling systems, vehicle diagnostics, and vehicle navigation.
  • Cisco predicts that connected city applications will be the second fastest-growing M2M category, with 26% compounded growth. This includes things like smart traffic systems, surveillance cameras, weather and environmental monitors, smart parking systems, gunshot monitors, etc.
  • They predict that connected health will grow 19% annually. This category mostly consists of telemedicine monitors used for outpatient monitoring.
  • Connected energy applications are predicted to grow by 24%. This includes smart grid monitors that track utility usage and loads and pinpoint network outages quickly. It includes energy monitors that can turn off air conditioners during times of heavy peak usage. It includes sensors in water systems that track pressure and usage and predict underground leak locations.
  • Cisco predicts connected work will grow by 15%. This is used for things like inventory tracking, surveillance and security monitoring, and tracking and connecting to employees working in the field.
  • They predict that connected retail will grow by 11% annually. M2M traffic is being used to track inventory, and big chain stores are starting to track the shopping patterns of individual shoppers to see how they traverse the various departments.
  • Connected manufacturing and supply chain applications will grow by 8% annually. Supply chain monitoring tracks the delivery status of components needed in the manufacturing process. This also includes smart warehousing that automates the packing and shipping of orders, and smart manufacturing monitors that track the performance of machinery and manufacturing processes.
  • They predict all other M2M traffic will grow by 19%. This includes things like smart agriculture, where monitors track individual herd animals and are just starting to be deployed to monitor crop conditions, as well as things like sports monitors.

The volume of traffic generated by M2M devices surprises people. So much of what we do happens in the background that we either forget about it or don’t even know it’s happening. For example, there was an article in the Washington Post last year by a reporter who left the country for a month and left his cellphone at home. During his absence, the phone used a significant portion of his monthly data plan updating apps and communicating regularly with remote web sites. My wife’s car connects to the web through our WiFi every time she pulls into the driveway, uploads diagnostics from its various monitors, and checks for and downloads needed software updates. Whether for good or bad, our machines and electronics are connecting to the web and using broadband.

New Emphasis on Working from Home

One of the hottest topics in the news related to coronavirus is working from home. Companies of all sizes are telling employees to work from home as a way to help curb the spread of the virus. Companies without work-at-home policies are scrambling to define how to make this work to minimize disruption to their business.

Allowing employees to work at home is not a new phenomenon. Most large corporations have some portion of the workforce working at home at least part-time. Studies have shown that home-based employees are often more productive than those working in the office. Those working at home enjoy big savings, both in dollars and time, from not commuting to an office.

There are a few communities around the country that have offered incentives to attract employees who work from home. The first such program I heard of was in 2018, when Vermont offered a cash incentive of between $5,000 and $10,000 for families with a home-worker to relocate to the state. The state has an aging population and wanted to attract families with good incomes to help energize the local economy. The state recognized that the long-term benefits of attracting high-paying jobs are worth a lot more than the cash incentive it is offering.

Since then other communities have tried the same thing. I recently read about a similar effort in Tulsa, Oklahoma, which has been watching its population drop since 2016. In Tulsa, a foundation is fronting the $10,000 payments used to attract home workers to the community. There is a similar program in Topeka, Kansas and in northwest Alabama.

I’ve been working from home for twenty years, and during that time I’ve seen a big shift in the work-from-home movement. When I first worked from home, I didn’t know anybody else who was doing so. Over time that has changed and in my current neighborhood over a third of the homes on my block include at least one adult working from home. According to Bloomberg, about 4% of the full-time workforce, not counting self-employed people, now work from home. Adding in self-employed people means that work-from-home is a major segment of the economy.

Wall Street seems to have recognized the value of working at home. As I write this article, the Dow Jones average has dropped over 11% since February 14th. During that same time, the stock price of Zoom, a company that facilitates remote meetings, has climbed over 27%.

I’m sure that most of the people being sent home to work are going to eventually return to the office. However, this current crisis is likely to make many companies reexamine their work-from-home philosophy and policies. Companies that allow people to work from home, at least part-time, are going to be the least disrupted by future economic upheavals.

If you read my blog regularly, you know what’s coming next. The one group of people who can’t work from home are those who can’t get a decent home broadband connection. Huge numbers of rural homes in the country still have no broadband option or can only buy broadband that is not sufficient for working from home. Most corporations test the home broadband connection before letting employees work from home, and homes can be disqualified due to poor download speed, poor upload speed, or poor latency. A home broadband connection that meets the FCC definition of broadband at 25/3 Mbps might still be deemed inadequate by a corporation for working from home.
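
A sketch of the kind of qualification check a corporation might run; the thresholds here are hypothetical, not any particular employer’s policy:

    # Hypothetical corporate work-from-home broadband qualification check.
    def qualifies(down_mbps, up_mbps, latency_ms,
                  min_down=50, min_up=10, max_latency=100):
        """Thresholds are illustrative; real employers set their own."""
        return down_mbps >= min_down and up_mbps >= min_up and latency_ms <= max_latency

    # A home meeting the FCC's 25/3 Mbps definition can still fail.
    print(qualifies(25, 3, 40))     # -> False
    print(qualifies(135, 20, 25))   # -> True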

My consulting firm CCG talked to a homeowner this week who moved to a rural area looking for an improved lifestyle. The wife works from home, and before they bought the new home they were assured that the broadband there was fast enough to support working at home. It turns out the home is served by a WISP delivering less than the advertised speed, and working from home is impossible in the new home. This family is now facing a crisis caused by the lack of good broadband – and there may be no solution to their problem.

Sadly, a whole lot of America is losing economically by not being able to attract and support good-paying jobs from those working at home. If a city like Tulsa is willing to pay $10,000 to attract one work-from-home employee, imagine the negative impact on rural counties where nobody can work from home.