How Low Can They Go?

AT&T and Verizon continue to aggressively eliminate staff. You have to wonder where the bottom will be in staffing levels.

In September 2024, Verizon announced that it would cut 5,000 positions. As of January 1 of this year, the company had 99,600 employees, down 5,000 from the beginning of 2024. As of January 1 of this year, AT&T had 140,990 employees, down 8,910 people during 2024. At the beginning of 2000, the two companies employed over 475,000 people, and since that time have shed a little over half of their employees.

The following graph shows the employee counts of the two companies since 2000. Verizon has steadily cut full-time employees during this century. The graph doesn’t show any disruption from Verizon’s purchase of AOL in 2015 and Yahoo in 2017. The graph also doesn’t tell the whole story, since Verizon has also outsourced positions during this time. I recall a controversy at the end of 2018 when the company outsourced 2,500 IT jobs to India.

AT&T employee counts are a lot more complicated since AT&T acquired a lot of companies this century, including BellSouth in 2006, Leap Wireless in 2013, DirectTV in 2015, and Time Warner in 2018. AT&T subsequently shed both DirecTV and Tim Warner. Even with the turmoil caused by purchasing and ditching subsidiaries, AT&T has steadily been eliminating staff.

Both companies are actively striving to eliminate copper networks, with Verizon much further along in this effort than AT&T. However, Verizon is slated to merge with Frontier sometime this year, which will bring new employees and the return of a lot of the copper networks that Verizon ditched to Frontier in the past.

Both companies also say they are considering how AI might streamline operations, which probably means even further cuts in staffing over the next few years.

This is all a far cry from the time when AT&T was the telephone monopoly and had over 1 million employees, making it the biggest employer outside the U.S. military. It’s anybody’s guess how much more these companies can slash staff and remain viable.

What’s the Future of Keyboards?

My consulting firm does surveys, and I want to highlight the results from a recent one. This was a random survey of a cross section of the community with statistically valid results, so the findings are reasonably believable.

We asked survey respondents the number of hours per day they use a cell phone and a computer or tablet. The following chart shows the responses by age. These results are not as accurate as studies that require people to keep a usage log, since the numbers tabulate the hours people think they use devices. Note that these statistics come from just one survey.

These results reinforce a few things I’ve been reading in various studies. Those over 65 are still using devices for fewer hours per day than younger age groups. Those 18 to 34 are using devices more than older folks, on average.

The response that I want to highlight is the big shift among those under 34 to using cell phones instead of computers. Just a few years ago, our surveys showed an even split of device use for this age group. Going back a few more years, usage was weighted toward computers.

Unfortunately, our surveys don’t reach those under 18, but everything I’ve been reading says that teens and younger kids have migrated to cell phones to a greater degree than shown by this survey for 18-34 year olds. Kids are not just using cell phones – they are talking to them and rarely use a phone’s keyboard.

That’s the phenomenon that makes me ask if we are seeing the beginning of the end for typing into computers. I’ve been reading science fiction my whole life, and a constant prediction of the future has always been communicating with computers by voice.

The recent advent of AI is likely to increase this trend away from typing. I’ve been promised a good computer assistant since I used Ask Jeeves in 1996. No good software has ever come along that isn’t more work than doing things myself, but with AI that is likely to change soon.

There is no denying that younger folks are already making the transition away from typing and now prefer the smartphone. Friends of mine with younger kids say the kids complain loudly about having to use a keyboard. I’m clearly old school. I spend four hours or more a day writing and a lot of time working on complicated spreadsheets. My brain is completely trained to use a keyboard for those functions, and I’m not sure I’d want to try to transition to talking. But I love voice-to-text on my phone, and I see the appeal of using voice for other functions.

It’s not hard to envision a reasonably near future where people transition further from keyboards to talking. The future choice will not be between computer and cell phone, but a choice of the best screen to use for a given function – unless we finally get functional glasses or holograms that can display anywhere we go. Give me the whole package and maybe I’m ready to talk.

Making it Easier to Kill Copper

The FCC recently enacted four rule changes to make it easier for legacy telcos to walk away from copper networks. These changes were adopted by the FCC’s Wireline Competition Bureau, meaning these changes did not come to the full Commission for a vote. While there have been regulatory changes in the past ordered by the various Bureaus within the FCC, it’s unusual for changes of any real importance to be enacted without a vote of the full Commission.

One order allows a telco to turn off copper wires without having to conduct a test to first see if a replacement technology can take over the functions that were being performed by copper. The requirement for having such tests is not eliminated, but the order gives telcos ways to justify not performing the tests.

In rural areas, AT&T is largely replacing copper with FWA wireless. But as anybody who lives in rural America knows, there are huge areas where there are no cell towers and no cellular coverage. The rule being clarified is one that came from the FCC’s 2016 Technology Transition order that requires a telco to prove that a replacement technology can match or exceed the performance of the copper network. The clarification of this new order is that the telco can justify tearing down the old network by saying that the ‘totality of the circumstances’ proves that the change is needed and not conduct testing. We’ll have to see how that works in practice, but it seems like a way to remove copper without having a replacement as long as some adequate number of homes in an area will have a replacement.

Another new order makes it easier for telcos to grandfather copper services. Grandfathering is a term used when a telco agrees to continue selling a product to existing customers while not offering it to new customers. The new rule eliminates the FCC paperwork required to grandfather a product.

Another order provides a two-year moratorium on telcos having to disclose and seek public opinion on changes made to copper networks. This change was precipitated by the more than 1,100 such changes filed with the FCC since 2021, none of which drew objections or public feedback.

The final new order approved a petition filed by USTelecom on behalf of AT&T, Verizon, and Lumen. The petition asked the FCC to waive the rule requiring telcos to offer standalone voice to replace voice service lost when a copper network is torn down. The telcos instead want to be able to offer customers a bundle of services, probably FWA wireless bundled with voice. The FCC granted the waiver for two years, with a provision that it can be extended.

Taken altogether, these changes eliminate a lot of paperwork involved with tearing down copper networks and remove the paperwork delays in the process. All of the big telcos are actively killing copper networks, with the latest big plans coming from AT&T to kill all copper by the end of 2029.

FCC Chairman Brendan Carr said that these changes are only the beginning and that many more regulatory rules will be relaxed or eliminated as part of the FCC’s Delete, Delete, Delete effort.

Speed Isn’t Everything

The marketing arm of the broadband industry spends a lot of time convincing folks that the most important attribute of a broadband product is download speed. This makes sense if fiber or cable is competing in a market against slower technologies. But it seems like most advertising about speed is aimed at convincing existing customers to upgrade to faster speeds. While download speed matters, the industry doesn’t spend much time talking about the other important attributes of broadband.

Upload Speed. Households that make multiple simultaneous upload connections – video calls, gaming, or connecting to a work or school server – quickly come to understand the importance of upload speeds if they don’t have enough. This was the primary problem that millions of cable company subscribers encountered during the pandemic, when they suddenly needed a lot of upload. Many homes still struggle with this today, and too many people upgrade to faster download speeds hoping to solve the problem. ISPs using technologies other than fiber rarely mention upload speed.

Oversubscription. Home broadband connections are served by technologies that share bandwidth across multiple customers. Your ISP is very unlikely to tell you the number of people sharing your node or the amount of bandwidth feeding it. The FCC’s broadband labels require ISPs to disclose their network practices, but nobody tells you statistics like these that would help you compare the ISPs competing for your business. The cable industry ran afoul of this issue fifteen years ago when large numbers of homes began streaming video, and many ISPs ran into it again during the pandemic. It still happens today any time a neighborhood has more demand than the bandwidth being supplied.
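
To make the sharing arithmetic concrete, here is a minimal sketch with entirely made-up numbers – the feed capacity, node size, and peak concurrency below are illustrative assumptions, not figures from any real ISP.

```python
# Illustrative oversubscription arithmetic using made-up numbers.

def worst_case_per_home(feed_mbps: float, homes: int) -> float:
    """Bandwidth per home if every subscriber pulled data at once."""
    return feed_mbps / homes

def typical_peak_per_home(feed_mbps: float, homes: int, concurrency: float) -> float:
    """Bandwidth per home when only a fraction of subscribers are active."""
    active_homes = max(1, round(homes * concurrency))
    return feed_mbps / active_homes

feed_mbps = 10_000       # assume a 10 Gbps feed into the node
homes = 400              # assume 400 subscribers share the node
peak_concurrency = 0.3   # assume 30% of homes are active at peak

print(f"Worst case: {worst_case_per_home(feed_mbps, homes):,.0f} Mbps per home")
print(f"Typical peak: {typical_peak_per_home(feed_mbps, homes, peak_concurrency):,.0f} Mbps per home")
```

Even in this rosy example, a neighborhood that gets busier than the assumed concurrency sees everybody’s share shrink – which is exactly what happened during the pandemic.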

Latency. The simple description of latency is the delay in getting packets to your home for something sent over the Internet. Latency increases any time packets have to be resent and pile up. If enough packets get backlogged, latency can make it difficult or impossible to maintain a real-time connection. Latency issues are behind a lot of the problems that people have with Zoom or Teams calls – yet most folks assume the problem is not having fast enough speeds.
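
For readers who want to see latency rather than take my word for it, here is a crude sketch that times TCP connection setup to a server – a rough proxy for round-trip latency. It assumes the host accepts connections on port 443; it’s an illustration, not a diagnostic tool.

```python
# Crude latency probe: time TCP connection setup to a host.
import socket
import time

def connect_latency_ms(host: str, port: int = 443, samples: int = 5) -> list[float]:
    """Measure TCP connect time in milliseconds over several samples."""
    timings = []
    for _ in range(samples):
        start = time.perf_counter()
        with socket.create_connection((host, port), timeout=5):
            pass  # the connection succeeded; we only care how long setup took
        timings.append((time.perf_counter() - start) * 1000)
    return timings

timings = connect_latency_ms("example.com")
print(f"min {min(timings):.1f} ms / avg {sum(timings) / len(timings):.1f} ms / max {max(timings):.1f} ms")
```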

Prioritization. A new problem for some broadband customers is prioritization. Customers buying FWA cellular wireless are told upfront that their usage might be slowed if there is too much cellular demand at a tower. Cellular carriers clearly (and rightfully) give priority to cell phone users over home broadband. Starlink customers who buy mobile broadband are given the same warning – Starlink will prioritize normal customers in an area over campers and hikers. Most ISPs say they don’t prioritize, but as AI is introduced into networks it will be a lot easier for them to do so. Over the last few months I’ve seen that several big ISPs are considering selling a priority (and more expensive) connection to gamers at the expense of everybody else.

Your Home Network. Everybody wants to blame the ISP when they have problems. However, a large percentage of broadband problems come from WiFi inside the home. People keep outdated and obsolete WiFi routers that are undersized for their bandwidth. Customers try to reach an entire home from a single WiFi device. Even when customers use WiFi extenders and mesh networks to reach more of the home, they often deploy the devices poorly. If you are having any broadband problems, give yourself a present and buy a new WiFi router.

Reliability. If operated properly, fiber networks tend to be the most reliable. But there are exceptions, and reliability boils down as much to the quality of your local ISP as to the technology. No other factor matters more than reliability if your ISP regularly has network outages when you want to use broadband.

Technology Shorts March 2025

This blog takes a look at some of the newest technologies coming out of the lab that might eventually make a difference in broadband.

Terahertz Chips. One of the biggest hurdles to faster computing is the speed at which we can get data into and out of a chip. Scientists at Notre Dame, the Universite de Lille in France, and Nanyang Technological University in Singapore have collaborated to design a chip that uses multiple terahertz waves to vastly increase the I/O function in computers. Their findings were reported in Nature.

Terahertz waves sit between optical light and microwaves, with frequencies ranging from 0.1 to 10 terahertz. The challenge with using terahertz waves in electronics is finding a way to beam a signal to where it’s needed rather than broadcasting widely. The team is using topological photonics and a beamformer that can focus the beam in any direction within a chip. The remaining challenge is to find efficient power amplifiers and electronic oscillators that will work at terahertz speeds. The chips would be a huge breakthrough that could enable super-high-speed applications like real-life 3D holograms or self-driving cars capable of processing the huge amounts of information coming from multiple sensors.

New Material for Better Chips. Scientists at the EPFL’s Power and Wide-band-gap Electronics Research Laboratory (POWERlab) in Lausanne, Switzerland, found an interesting property of vanadium dioxide – it naturally changes from an insulator to a conductor at 155 degrees Fahrenheit. Further, after the material is cooled to become an insulator, it remembers what happened to it while it was a conductor. This makes it a great material for building chips, because running a circuit through the material naturally heats it to the temperature needed to become a conductor. But turn off the power, even temporarily, and when the material cools it remembers the circuit path and the data that was stored during use. These properties hold huge promise for using vanadium dioxide for long-term data storage. The switch between the two states also mimics the way that brain neurons operate, in that a circuit could be triggered only when needed, making this an interesting material for advanced chips that mimic brain functioning.

Cooling Data Centers. Julia Carpenter, the cofounder of the new company Apheros, stumbled across a metal foam during research for her Ph.D. It turns out the foam is as much as 1,000X better as a heat sink than the standard metals used in the cooling plates that cool down electronic components in places like data centers. The term metal foam comes from the material’s sponge-like appearance under a microscope. The foam can be inserted into existing cooling plates and increase cooling capacity by 90%. The real promise of the metal foam is to use it as the primary cooling element in new designs. Data centers create a huge amount of heat, and getting rid of that heat is one of the challenges of building a new data center.

End to Exploding Batteries. Researchers at Cornell have made a breakthrough that could end the problem of exploding lithium-ion batteries. The solution is to replace the normal liquid-based lithium with porous lithium crystals. The crystal structure has been tried before, but solid crystals encouraged the growth of dendrites – crystal growths that eventually slow the flow of ions. The porous crystals are structured around a molecular cage with macrocycles that allow ions to pass.

Effective Dig Once Policies

I know a number of counties and cities that have adopted dig once policies that require that every major road project includes burying empty conduit along new or reworked roads. A lot of them have found out that just having the policy is not enough, and that dig once is more complicated than they imagined.

It’s a common misconception that dig once just means laying conduit in the ground while roads are dug up for repaving. It’s not that easy. It turns out that most contractors fulfilling road construction projects are not sympathetic to any activities that delay the roadwork, so they are not particularly accommodating to a fiber construction crew. The road contractor often gets no extra compensation for working with the party laying the conduit, so they often make it hard for the conduit installers by giving them a very short window to do the work or requiring the work to be done at night.

One way to make dig once work better would be to require road contractors to build fiber construction into their schedules. That sounds easy enough, but many road projects are funded by sources other than the local government, so getting dig once provisions into a state road project can be a major challenge.

A bigger question is who pays for the conduit. While burying conduit when the roadway is open is far less costly than normal fiber construction, it’s not free. Should the local government require the road contractor to pay for the conduit installation and build it into the road contract? I find it troubling to think about requiring a road contractor to install conduit since it’s not in their skill set.

The other primary issue with a dig once policy is placing access points – places where an ISP or carrier can get access to the conduit and fiber. Without the right access points, buried conduit can be nearly worthless. One problem is that it’s impossible to know upfront who might use the conduit in the future. Somebody who is looking for pure transport through the area won’t care about access points. But an ISP that wants to build last mile fiber will want an access point for every few potential customers – and that can be expensive, even with dig once. It’s even more complicated when trying to predict what might be along that stretch of road over the next fifty years. Since the only good time to place the access points is when the conduit is placed, somebody has to make this determination before the conduit is placed.

For a dig once policy to be effective, there also has to be a way to let the world know where conduit is available. This means having a website with a map of available conduit, a list of the policies that anybody who wants to use the conduit must follow, and a set price for anybody who wants to use it.

Dig once sounds like a great idea, but unless conduit is placed in ways that are useful to ISPs, it will never be used. I know ISPs who have considered using government conduit and decided not to. To use a buried conduit means meeting a dig once route at both ends, and that is often not convenient or easy.

Finally, a local government that mandates dig once has to be patient. ISPs will not be rushing to incorporate random short conduit runs into their network design. Dig once only gets attractive when there are enough routes built to be of interest.

Eliminating Regulations

The FCC, under Chairman Brendan Carr, has issued a Public Notice asking for public input on eliminating regulations that create unneeded burdens or that stand in the way of deployment, expansion, competition, or technological innovation. The Notice is titled ‘In Re: Delete, Delete, Delete.’

The Public Notice asks for comments of various types:

  • Cost-benefit Considerations. The FCC invites the public to comment on regulations where the cost of compliance is more than can be justified by the benefits. They ask commenters to take a stab at documenting the cost/benefit of proposed rule changes.
  • Experience Based on Implementation. This asks if some rules are too complex based on the experience of companies that must comply. Are there rules that are routinely waived because of the complexity?
  • Marketplace and Technological Changes. Have marketplace changes or new technologies made some rules obsolete?
  • Barriers to Entry. Are there regulations that act as a barrier to entry of new companies? (I must note that many such regulations are on the books at the request of monopoly providers).
  • A Broader Regulatory Context. Are there rules that are now obsolete due to later regulations that supplanted them?
  • Consideration of Court Decisions. This asks if there are regulations that should be reconsidered in light of Supreme Court rulings like Loper Bright, which says that regulatory agencies shouldn’t undertake any major regulation that hasn’t been explicitly directed by Congress.

I have no doubt that every large company and lobbyist will trot out their wish list of regulations they would love to see eliminated. I have little doubt that there is somebody who dislikes every regulation on the FCC books. But there are a lot of obsolete regulations. For example, it’s ridiculous in today’s environment for the FCC to have rules about video channel lineups. There are a ton of rules on the books for technologies that are no longer in use.

It’s worth noting that the FCC already routinely ignores obsolete regulations, as do all regulatory agencies. While it’s cleaner to get old regulations off the books, it’s nearly as effective to not consider or enforce old rules that no longer apply.

The FCC also has to consider the source of various regulations. The agency does not have the authority, on its own, to eliminate a requirement imposed in the past by Congress. Eliminating such rules is fine as long as nobody objects, but doing so also opens the agency to lawsuits, which would be a colossal waste of time.

It’s a good idea for any regulatory agency to do this periodically, as long as it is done well. Hopefully this will not become an excuse to let large ISPs, wireless companies, TV and radio station owners, and others walk away from needed regulation.

What is most interesting about this effort is that Chairman Carr came into the position with a fairly long list of new regulations he’d like to see the FCC tackle. At the top of his list is a new look at the FCC’s role in regulating Section 230 related to web content.

Alternatives to GPS?

The FCC plans to hold a vote in April to consider alternatives to GPS, the U.S. location technology. The aviation industry has reported an increase in GPS spoofing, where a fake GPS signal shows a pilot the wrong location of a plane. GPS spoofing has been common around conflict zones, but airlines are reporting it happening in other places.

There are national security concerns because GPS is now used extensively by airlines, shipping, the military, and by the public for a wide range of uses. There is growing fear of the negative impact of something going wrong with GPS due to malicious attacks, technical malfunctions, or natural phenomena like solar flares.

GPS technology was developed by the U.S. and is currently controlled by the U.S. Space Force. The technology was first designed in 1973 and became fully functional when a constellation of 24 satellites was in place in 1993. The U.S. government first made GPS available for civilian uses after Korean Air Lines Flight 007 was shot down in 1983 when it strayed into Soviet airspace. Over time, the government allowed wider use of GPS, and the technology is familiar to everybody who uses it as the basis for driving directions.

We’ve already begun to modernize the GPS network. There are currently 18 new GPS satellites in orbit that use the L5 frequency band, which can provide accuracy within 2 centimeters for functions like surveying. The new constellation will be complete when it reaches 24 satellites.

GPS is not the only location network in the world. Russia has a GLONASS network, China a BeiDou network, and the European Union operates its Galileo locating network.

The purpose of GPS is to supply geolocation and time information anywhere on Earth. Folks in the telecom business are familiar with GPS because we use it to mark the location of outdoor network components. GPS can lead a technician directly to the source of a network problem.

The FCC wants to open an exploration into other locating technologies so that we aren’t dependent on the GPS satellites. There are alternatives to GPS that can be explored, and it seems likely that a second locating system would be used in conjunction with GPS so that there wouldn’t be a single network providing the service. Some of the alternate technologies that might be considered include:

  • LORAN (Long Range Navigation). LORAN technology is already used in conjunction with current GPS. This is a land-based network that uses low-frequency radio signals to allow a calculation of position. Today LORAN supplements GPS in areas where reception is poor, and it can enhance accuracy where GPS is being used. Some are proposing that an updated eLORAN network be built as a more extensive alternative to GPS. The downside is the cost of building a large number of LORAN towers around the world.
  • INS (Inertial Navigation System) is a self-contained system that keeps track of the location of a device through continuous motion tracking – the device constantly calculates where it is. The technology is already used today in airplanes, ships, and by the military. The devices are fairly expensive but could become more affordable with mass production. The downside is what is called sensor drift, where a device occasionally has to be recalibrated by connecting to GPS or another location system (see the sketch after this list).
  • Quantum Clocks are still in the research and development phase but hold promise for timekeeping and location calculations. Quantum clocks are far more accurate than the atomic clock that is currently used as our time standard. The lab devices today are complex, and the challenge to make this into a usable technology is miniaturization and mass production.
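
To show why sensor drift forces those recalibrations, here is a toy dead-reckoning sketch. The accelerometer bias is an invented figure for illustration; the point is that integrating even a tiny bias twice turns it into a large position error.

```python
# Toy dead reckoning: integrate a biased acceleration reading twice and
# watch a tiny sensor error grow into a large position error.

def position_error(bias_mps2: float, dt_s: float, steps: int) -> float:
    """Accumulated position error from a constant accelerometer bias."""
    vel_err = pos_err = 0.0
    for _ in range(steps):
        vel_err += bias_mps2 * dt_s   # bias accumulates into a velocity error
        pos_err += vel_err * dt_s     # which accumulates into a position error
    return pos_err

# Assume a 0.001 m/s^2 bias sampled at 100 Hz for 10 minutes.
error = position_error(bias_mps2=0.001, dt_s=0.01, steps=60_000)
print(f"Position error after 10 minutes: {error:.0f} meters")
```

A nearly imperceptible bias produces roughly 180 meters of error in ten minutes, which is why an INS device has to check in periodically with GPS or another reference.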

Google’s Next Generation of Light-based Broadband

Mahesh Krishnaswamy of Alphabet X announced the development of its next generation of light-based broadband transmission. Google uses the brand name Taara for the technology. Google has already deployed the first generation of the technology in hundreds of high-speed light links around the world, in places where it was impractical or too expensive to install fiber.

The new breakthrough being announced is that Google has reduced the technology to a chip. The first generation device used a complicated set of movable mirrors to steer the light signal, but the new chip does this electronically. The first generation device was the size of a traffic light, but the new one is described as being the size of a fingernail.

The new chip uses light that is below the visible range. Each chip contains hundreds of tiny light emitters, and the software can control each individually with great precision. Lab tests of the chip have been able to deliver 10 Gbps speed for about a kilometer. Google believes the practical distance for the technology will be as far as about twelve miles, carrying up to 20 Gbps. Google hopes to make the chip commercially available in 2026.

It’s not hard to envision uses for the technology. One of the first trials was to beam data across the Congo River, where fiber was not a practical alternative. I can think of dozens of places in fiber networks where light beams could be a huge cost-saver. Picture using this technology to connect to rural homes that are set back from the road. This could solve the cost and delays of crossing bridges and railroad tracks. This seems like a natural technology to use in cities to create a network between buildings – bring a 100 Gbps fiber connection to one tall building and serve multiple other buildings without additional fiber.

The concept of using light for data transmission has been around for a decade, generally described under the general term Li-Fi. The primary vision for Li-Fi has been an indoor technology for beaming superfast broadband within the home or office. There was also talk about using Li-Fi as the best way to communicate between cars on the road. A few companies have developed Li-Fi devices, but the technology never gained any serious traction in the market. There has been a lot of research on Li-Fi technology by the military for providing fast broadband on the battlefield.

There are several natural limitations to using light to send data, particularly outdoors. Light requires a clear line of sight and is blocked by trees and bushes. Rain, fog, snow, and even birds can disrupt the signal. Just like radio signals, light dissipates over distance, and the signal gets weaker as the distance between transmitter and receiver increases. Google says it is working on ways to minimize the impact of weather. Indoor use would require deploying multiple devices to see into each room where you want broadband – no closed doors allowed.
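
To get a feel for how quickly weather eats a light beam, here is a back-of-the-envelope sketch using the Beer–Lambert attenuation law. The dB/km figures are rough assumptions for free-space optics in general, not Taara specifications.

```python
# Back-of-the-envelope free-space optical loss with assumed attenuation
# coefficients (illustrative values, not Taara specifications).

def surviving_power_fraction(distance_km: float, atten_db_per_km: float) -> float:
    """Fraction of transmitted optical power left after atmospheric loss."""
    loss_db = atten_db_per_km * distance_km
    return 10 ** (-loss_db / 10)

# Assumed attenuation: ~0.2 dB/km in clear air vs. ~100 dB/km in dense fog.
for condition, atten_db_per_km in [("clear air", 0.2), ("dense fog", 100.0)]:
    fraction = surviving_power_fraction(1.0, atten_db_per_km)
    print(f"{condition}: {fraction:.2e} of the power survives 1 km")
```

Clear air barely touches the signal, while dense fog leaves almost nothing after a single kilometer – which is why weather mitigation is the hard part of the engineering.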

The real benefit for this technology comes if Google can make the chips affordable. It’s not hard to envision a light mesh network delivering gigabit speeds to a small town without the need to build a wired network. Nobody has light-based broadband networks on their broadband bingo card – but in a few years it might become a viable option.

Government Restriction of Broadband

It’s probably a testament to how important broadband is when governments shut down or threaten to shut down broadband access for political reasons. This blog was prompted by a news report that Ontario tore up a contract with Starlink as a result of the announced U.S. tariffs against Canadian goods. Even just a few years ago, it was probably unimaginable that broadband would have been mentioned in any talks about trade between countries.

That announcement prompted me to look at what other governments have been using broadband connectivity as a political tool. I found Internet shutdowns in 2024: a global overview by Surfshark. This report covers not only total shutdowns, but also restrictions imposed by governments in response to political unrest, protests, or social issues.

There are a number of countries that have long-term restrictions on Internet access. Eritrea and North Korea have a nearly total Internet ban. Iran, Cuba, Turkmenistan, Azerbaijan, and Saudi Arabia block social media platforms. The United Arab Emirates (UAE) has laws against online criticism of the government and arrests people for online content. China is famous for the Great Firewall of China, and closely watches Internet content. Egypt and Tunisia monitor online content and emails.

Some countries block specific apps. Russia banned Discord and Signal. Turkey blocked Discord after the company refused to share information with the government. This followed an incident where a man murdered two women, and users on Discord praised the killing. The U.S. almost joined this list with a threatened ban of TikTok.

Some Internet shutdowns are traditional and scheduled. For example, the Telegram messaging app is turned off every year in Kenya to prevent cheating during the tests for the Kenya Certificate of Secondary Education. Pakistan, Senegal, and Mauritius restricted the Internet last year during elections.

Unfortunately, most other temporary Internet shutdowns are not so benign. India had the most Internet restrictions during the year, with 23 incidents. Thirteen of the restrictions were attempts to quell protests, like a farmers’ protest in Punjab. Ten were related to political turmoil, like violence in Saran after the polls closed.

Turkey blocked broadband four times during 2024. There was a total block of social media platforms for 24 hours following an attack on the Turkish Aerospace Industries headquarters.

Bangladesh had a near-total shutdown of the Internet after student protests. There has been an ongoing block of WhatsApp and Facebook related to protests against the Prime Minister.

Mozambique had restrictions for the first time and shut down broadband after protests related to a disputed election.

Overall, there were slightly fewer incidents of government restrictions in 2024 than in 2023. In my mind, the shutdowns are evidence of the power of the Internet. It’s likely that governments that want to control their citizens will continue to use Internet shutdowns.