WiFi Router Ban

The FCC issued a ban on March 23 on all consumer-grade routers made in foreign countries. A router is the device in your home that connects your ISP broadband to the WiFi that almost everybody uses to connect devices in the home. Businesses use routers to direct ISP broadband around the business on fiber or copper networks. The ban covers all new brands and models of routers except those that have been granted a Conditional Approval by the Department of Defense or the Department of Homeland Security.

The ban comes after the White House convened an interagency group composed of government security experts, which collectively decided that new routers made overseas “pose unacceptable risks to national security of the United States and the safety and security of United States persons”. There have been previous technology bans for security reasons, such as the ban on using software from Kaspersky Lab and on telecommunications services provided by China Telecom and China Mobile International USA. It’s worth noting that the FCC cannot ban equipment or services on its own initiative; it can only do so when directed by national security agencies.

The ban noted that malicious actors have exploited security gaps in foreign-made routers to attack households, disrupt networks, engage in espionage, and steal intellectual property. The notice says that foreign-made routers were involved in the Volt Typhoon, Flax Typhoon, and Salt Typhoon cyberattack campaigns.

The ban does not stop consumers from using existing routers. It doesn’t stop retailers from selling existing stocks of routers or from continuing to buy routers that have previously been approved by the FCC’s equipment authorization process. The only thing blocked is new models or generations of routers.

Router manufacturers can petition the DoD or DHS for conditional approval, which would allow them to apply to the FCC for equipment authorization for new routers. There are no manufacturers today that have this conditional approval.

It’s hard to know where this ban will lead, but this could become a big concern for ISPs, since most ISPs provide a WiFi router for new customers. Many cable companies and fiber builders build the router into the modem. Any ISP that is currently using a router that has not been approved by the FCC is in trouble, because according to this ban, they can’t give an unauthorized router to a new customer. Every ISP should be checking this week to make sure the routers they are providing have been blessed by the FCC.

This has longer-term implications since virtually all routers are made overseas, including those made by American companies like TP-Link, which manufactures its routers in Vietnam. Manufacturers routinely upgrade and improve routers every few years, and American ISPs will be stuck with older routers if the government doesn’t approve any new brands or models of routers.

One unspoken intent of the order is probably to promote the manufacture of routers in the U.S. I have to wonder if an American-made router would be any less susceptible to hacking than a foreign-made one. If not, I’m not sure what this ban will accomplish, other than making it more expensive to get routers. It will be interesting to see if any router companies move manufacturing to the U.S. due to this ruling. A more likely outcome might be that American consumers won’t be able to get some of the newest routers that are available to the rest of the world.

The Rapid Evolution of Transport Lasers

The Internet in the U.S. relies on long-haul and middle-mile fiber routes that are used to connect every part of the country to the core Internet hubs located in Virginia, Dallas, Chicago, Atlanta, Los Angeles, New York, and Denver. In more recent times, the growth of data centers has created additional major Internet hubs in places like Phoenix, Silicon Valley, Portland, and Seattle.

Like every other part of the industry, long-haul transport has seen a constant evolution in the lasers used to power fiber routes. When I first got involved in working with companies providing transport in the early 2000s, the transport electronics delivered 1 Gbps (gigabit per second) speeds – something that everybody at the time thought was blazingly fast. Today, millions of homes are buying 1 Gbps broadband.

The Internet was exploding during the 2000s as millions of people started to buy broadband provided by DSL and cable modems. Gigabit transport routes became full, and carriers knew they had to upgrade. The IEEE standard for 10 Gbps transport was adopted in 2002, and over the next decade, it became the standard for transport fiber routes.

Of course, 10 Gbps transport routes grew full as traffic kept growing, and carriers went looking for more speed. The IEEE standard for 40 Gbps transport was adopted in 2010, although a few vendors, like Nortel, had started to market 40 Gbps products as early as 2008. The biggest technical breakthrough for 40 Gbps lasers was the introduction of Digital Signal Processing, which better handled light dispersion across long-haul fiber routes. The higher speed became the industry standard for transport by 2012.

Next in the evolution were 100 Gbps lasers. This standard was also adopted by the IEEE in 2010. The faster technology was slower to be adopted because of the relatively high cost of the lasers – by 2014, there were only about 600 deployments of the technology worldwide. But over time, 100 Gbps lasers became the standard for anybody building transport fiber routes.

The next step in progressively faster lasers was 400 Gbps, with the IEEE standard adopted in 2017. Network owners started to introduce these faster lasers into networks in 2020, and by 2022, 400 Gbps lasers had become the new standard for long-haul transport.

The continuous growth of Internet traffic, plus the new demand from AI, is pushing transport fiber owners to seek even faster lasers. A few vendors introduced 800 Gbps lasers as early as 2019. Ciena announced the 800 Gbps WaveLogic 5 laser in 2019, and Infinera and Windstream successfully tested an 800 Gbps long-haul route in 2020. While 400 Gbps lasers are still the most affordable option, Nokia and Ribbon say that they are now seeing a lot of demand for 800 Gbps lasers.

Ciena says it is seeing demand for even faster lasers and has installed a few fiber routes with 1.6 Tbps lasers for Lumen in the U.S., e& in the UAE, and Cirion in Latin America.

The faster speeds are also moving down-market into last-mile uses. Nokia is selling a lot of 800 Gbps pluggable fiber electronics for use inside data centers.

This has been an amazingly fast evolution. As recently as 2019, almost everybody in the industry was still buying 100 Gbps lasers for transport, and in the few years since then, we’ve seen increases to 400 Gbps, then 800 Gbps, and now the beginnings of 1.6 Tbps. I remember seeing a PowerPoint at a trade show twenty or so years ago where a vendor claimed that within twenty years we’d be seeing terabit lasers. It was a bold prediction at a time when 10 Gbps lasers were cutting-edge technology, but it turned out to be a good one. I’m not even going to try to predict the speeds we’ll be seeing twenty years from now.
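
To put the speed evolution in context, the sketch below estimates the total per-fiber capacity of each laser generation across a DWDM system. The C-band width is the standard ~4.8 THz, but the per-generation channel spacings are my own illustrative assumptions, not vendor specifications:

```python
# Rough per-fiber DWDM capacity by laser generation.
# C-band width is standard; channel spacings are illustrative assumptions.
C_BAND_GHZ = 4800  # usable C-band spectrum, roughly 4.8 THz

generations = [
    # (per-channel rate in Gbps, assumed channel spacing in GHz)
    (10, 50),
    (100, 50),
    (400, 75),
    (800, 100),
]

for rate_gbps, spacing_ghz in generations:
    channels = C_BAND_GHZ // spacing_ghz
    total_tbps = channels * rate_gbps / 1000
    print(f"{rate_gbps:>4} Gbps lasers: {channels} channels = {total_tbps:.1f} Tbps per fiber")
```

Even with conservative spacing assumptions, a single fiber has gone from under one terabit to tens of terabits in two decades.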

Technology Shorts March 2026

Today’s blog looks at some recent breakthroughs coming out of labs and research facilities that could eventually have practical benefits for the broadband industry.

Rainbow Chip. Researchers at the Columbia University School of Engineering and Applied Science have created a chip that turns a single laser beam into a “frequency comb” that produces dozens of light channels at once. As often happens in science, the breakthrough was discovered by accident when the team was working on a project related to Lidar.

Normal laser beams used in telecom are not precise and transmit a closely bunched group of similar light frequencies – what scientists refer to as a messy light signal. This new chip creates multiple laser beams in a range of colors, with each beam precisely at a single light frequency. The chip output is called a comb because there is a clear gap between each beam, so there is no interference between separate light beams. This chip could revolutionize fiber optic technology by simultaneously sending dozens of evenly-spaced light channels at precise frequencies through a single fiber, with no interference between colors. Scientists have created precise laser beams in the lab for research, but this chip could bring the technology into practical use.
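
As a rough illustration of what an evenly-spaced comb means in fiber terms, the sketch below lays out comb lines starting at the standard ITU DWDM anchor frequency of 193.1 THz; the 100 GHz spacing and the line count are hypothetical, not the Columbia chip’s actual output:

```python
# Frequencies and wavelengths of evenly spaced optical comb lines.
# 193.1 THz is the ITU-T DWDM anchor; spacing and count are hypothetical.
C = 299_792_458  # speed of light, m/s

anchor_thz = 193.1
spacing_ghz = 100
num_lines = 8

for n in range(num_lines):
    freq_thz = anchor_thz + n * spacing_ghz / 1000
    wavelength_nm = C / (freq_thz * 1e12) * 1e9
    print(f"line {n}: {freq_thz:.1f} THz = {wavelength_nm:.2f} nm")
```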

Energy Efficient Wireless Chips. Researchers at the University of Colorado Boulder have developed a new device that could revolutionize wireless technology. The breakthrough is the creation of a surface acoustic wave (SAW) phonon laser that can create ultra-high frequency vibrations on a single chip. The new device layers silicon, piezoelectric lithium niobate, and indium gallium arsenide to amplify radio vibrations much like a diode laser amplifies light. SAW technology is already embedded in smartphones, GPS, and radar systems and is used to filter signals and reduce noise. However, today’s SAW technology requires multiple chips and external power. The new phonon technology simplifies this to a single chip that can be powered by a battery. The new chip currently operates at about 1 gigahertz but has a clear development path to tens or even hundreds of gigahertz.
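
A quick worked equation shows why pushing SAW devices to higher frequencies demands ever-finer chip features: the acoustic wavelength, which sets the scale of the on-chip structures, is velocity divided by frequency. The ~4,000 m/s velocity below is a ballpark figure for lithium niobate, not a number from the research:

```python
# Acoustic wavelength of a surface acoustic wave: wavelength = velocity / frequency.
SAW_VELOCITY_M_S = 4000  # ballpark SAW velocity in lithium niobate (assumed)

for freq_ghz in (1, 10, 100):
    wavelength_um = SAW_VELOCITY_M_S / (freq_ghz * 1e9) * 1e6
    print(f"{freq_ghz:>3} GHz -> acoustic wavelength of ~{wavelength_um:.2f} microns")
```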

Efficient Power Module. Researchers at the National Renewable Energy Laboratory unveiled a breakthrough that could squeeze more power from existing electricity supplies. They’ve created a silicon-carbide-based power module they call ULIS (Ultra-Low Inductance Smart). The ULIS device dramatically improves the way electricity is converted and delivered inside devices. Most electronic devices contain a power module, which houses the power electronics that regulate the flow of electricity inside the device. The ULIS device is smaller and lighter while bringing up to a five-times improvement in power efficiency. The device would make sense in data centers, electric grids, and any devices using next-generation electronics, such as ships and aircraft. The secret to the success of the new device is that it reduces parasitic inductance – the unwanted inductance that resists changes in current when electricity is switched or converted inside a device – by seven to nine times.

Some of the benefits come from the new design. Traditional power modules stack components inside a box-like package, while ULIS arranges components in a two-dimensional octagon. This creates a smaller, lightweight device that also minimizes magnetic interference. One of the most interesting features is that the device can be controlled wirelessly, without needing to be connected to communications cables.

ULIS is expected to impact multiple sectors, with the electric grid probably benefiting the most. Today, electricity must be converted into a usable form before entering every smart device in the grid, and the ULIS device could make those conversions more efficient, with less power loss across the grid.

Cooling Data Centers with Hot Water. One of the biggest challenges of large data centers is having a large supply of cool water for cooling. At CES this year, Nvidia CEO Jensen Huang announced the company is using water at 45 degrees Celsius (113 degrees Fahrenheit) to cool supercomputers. This is a big breakthrough because hot water doesn’t require water chillers and the accompanying power-hungry compressors – devices that account for about 6% of the power used at a data center. The breakthrough could be a boon for two-phase liquid cooling systems. Most liquid cooling systems today circulate water, which then must be chilled before reuse. A two-phase system extracts more heat from computers by using the heat to convert the liquid to a gas and then condensing it back to a liquid. This is not a new technology and has been used on a limited basis for a few years, but the Nvidia announcement will prompt data center owners to consider hot water as the primary way to cool data centers. The announcement instantly tanked the stock prices of companies that make cool-water chillers for data centers.
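
A quick worked comparison shows why two-phase cooling extracts so much more heat per gallon. Heating liquid water absorbs about 4.19 kJ per kilogram per degree Celsius, while boiling it absorbs about 2,257 kJ per kilogram. The 10-degree temperature rise below is an assumption for illustration, not a figure from the Nvidia announcement:

```python
# Heat absorbed per kilogram of water: sensible heating vs. phase change.
SPECIFIC_HEAT_KJ = 4.19   # kJ per kg per degree C for liquid water
LATENT_HEAT_KJ = 2257     # kJ per kg to vaporize water

delta_t_c = 10  # assumed temperature rise in a single-phase cooling loop
sensible_kj = SPECIFIC_HEAT_KJ * delta_t_c

print(f"Single-phase loop (10 C rise): {sensible_kj:.0f} kJ per kg of water")
print(f"Two-phase loop (boiling):      {LATENT_HEAT_KJ} kJ per kg of water")
print(f"Phase change absorbs ~{LATENT_HEAT_KJ / sensible_kj:.0f}x more heat per kg")
```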

Low Latency AI Networks

A partnership has been announced with the goal of creating a low-latency private Internet for AI traffic. The three partners are Moonshot Energy, a manufacturer of electrical and modular infrastructure for AI data centers; QumulusAI, Inc., a provider of GPU-as-a-Service; and Connected Nation Internet Exchange, which has been promoting the creation of more Internet Exchanges.

The group plans to initially create 25 carrier-neutral interexchange points designed to handle only low-latency traffic, with the goal of scaling to 125 locations, many of which would be located at major research university campuses and municipalities. The coalition has labeled the new hubs AI Pods.

The goal of this coalition is to create a network designed specifically for AI and other data traffic that requires low latency. The network will be designed with highly efficient switches at the hub sites to move traffic quickly. This would essentially be a private network that isolates low-latency traffic from the large volumes of general Internet traffic that can clog Internet hubs at busy times.
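
For a sense of the latency budget involved, the dominant fixed cost on any fiber route is propagation delay: light in glass travels at roughly two-thirds of its vacuum speed, or about 5 microseconds per kilometer. The route lengths below are hypothetical:

```python
# One-way and round-trip propagation delay over fiber routes.
C_KM_PER_S = 299_792.458   # speed of light in vacuum, km/s
FIBER_INDEX = 1.468        # typical refractive index of single-mode fiber

def one_way_ms(route_km: float) -> float:
    return route_km / (C_KM_PER_S / FIBER_INDEX) * 1000

for route_km in (100, 500, 1500):  # hypothetical hub-to-hub distances
    print(f"{route_km:>5} km: {one_way_ms(route_km):.2f} ms one-way, "
          f"{2 * one_way_ms(route_km):.2f} ms round trip")
```

Putting hubs closer to the traffic shortens the route, which is the only way to lower that floor; efficient switching addresses the queuing delay stacked on top of it.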

The idea of creating private networks for data is an old one. Many universities in the country are connected to the Internet2 fiber network that allows for low-cost transfer of large amounts of research and other data between universities. Many corporations have created private networks between company sites to keep corporate data traffic out of normal Internet traffic flow and to provide a higher level of security.

Tackling this as a new venture makes a lot of sense. If the companies that run the large Internet hubs decided to somehow give priority to AI or other traffic to reduce latency, they would invite accusations of violating network neutrality, since such behavior is exactly what network neutrality is supposed to block. If the normal Internet hubs gave priority to bits from AI data centers, then all other traffic would get a lower priority and see more problems from delays. However, a private network for AI avoids such issues by isolating AI traffic from other traffic.

The first data site for the network is scheduled for activation in July 2026 on the campus of Wichita State University. The coalition is working towards providing dual, geographically diverse fiber routes between the new AI hubs using 400 Gbps transport. Each AI site would house redundant 400 Gbps IX ports and switches. Data centers that want to connect to the network would acquire dark fiber to one of the AI hubs.

QumulusAI says the new network would result in moving GPU computing directly to the network edge, meaning the AI network could be expanded to reach large businesses and other users of large amounts of AI data.

Connected Nation has been touting the benefits of creating more Internet hubs for a number of years. These new hubs would also become carrier-neutral locations for the interexchange of normal Internet traffic, which would lower the cost for ISPs to reach the Internet.

Technology Shorts January 2026

Sensors That Beat Lidar and Radar

The Boston startup Tarador has developed a sensor that co-founder Matt Carey says beats the performance of radar and lidar. The sensors are solid-state, meaning no moving parts, and use the terahertz band of spectrum that sits between microwaves and infrared light.

The spectrum band allows the sensors to easily pierce rain and fog. The use of higher terahertz frequencies improves the resolution of images by twenty times compared to radar. The sensors have a range of 325 yards. One of the selling points for the new sensors is a target cost far below lidar. This would make the sensors a great solution for driver-assistance and self-driving cars.

Laser Cooling for Data Centers

Sandia Labs, the federally funded energy research lab, has found a way to use lasers to cool things. It’s counterintuitive, since lasers generally generate heat when they hit an object. Scientists at the lab have been working with Maxwell Labs from Minneapolis to develop the technology.

Lasers can create a cooling effect, and this has been used in the past to chill antimatter and to study quantum phenomena. How does this work? Lasers tuned to a specific frequency and targeted at a small area on the surface of certain elements can cool them instead of heating them. Small means an area on the order of hundreds of microns. The technology would utilize a photonic cold plate, with components a thousand times smaller than the width of a human hair, that would channel the cooling lasers. The cold plate would be composed of a millimeter-thick plate of pure gallium arsenide. The scientists believe this can bring as much cooling as the current method of circulating water close to chips. This would be a huge breakthrough, since 30% to 40% of the cost of operating a data center goes to cooling. It could also extend the life of chips, which tend to burn out within two years under data center loads.

A Chip that Can Stream Thoughts

A team from Columbia University, New York Presbyterian Hospital, Stanford University, and the University of Pennsylvania has collaborated to create a tiny brain implant that could significantly change how people interact with computers.

The brain implant is called a Biological Interface System to Cortex (BISC). The power of this technology is its small size – the BISC is thinner than a human hair – along with the ability to transmit large amounts of data. The implant is a big improvement over current technologies because it is controlled by a single small chip that can be easily implanted inside the skull.

One of the benefits of the BISC implant is the ability to treat conditions like epilepsy, spinal cord injuries, ALS, strokes, and blindness. The chip can hopefully create a communication pathway to the brain to help restore motor, speech, and visual abilities.

Like all new technologies, this could also power other uses, like creating an interface between humans and computers. This team was not focused on that goal, but this is another technology step forward in brain/computer interfaces, a goal of scientists over the last decade.

Network Timing

One element that is key to all networks but rarely gets discussed is network timing. Network timing (or network clocks) involves the hardware or processes that make sure all parts of a network are in sync.

Timing and synchronization are critical for network services that depend on devices agreeing on a precise, shared time. Accurate and reliable synchronization helps manage the security, availability, and efficiency of network devices and is essential for the functioning of telephone, cellular, and broadband networks.

There are multiple kinds of timing in use.

Frequency Synchronization. This makes sure that all electronics inside a network operate using the same clock rate or frequency. Many kinds of network gear come with built-in clocks, and having different parts of a network using different clocks will result in data loss, corruption, or misinterpretation of bits. Frequency synchronization forces all of the clocks inside the network to operate in unison by matching the frequency of each clock to a source clock. There are different sources for frequency synchronization:

  • Synchronous Ethernet (SyncE) chooses one clock and forces the other clocks to match.
  • Networks can be synchronized to external clocks such as BITS or the GPS satellites. BITS can choose any reliable external clock.
  • Many networks use Precision Time Protocol (PTP), which distributes time from a grandmaster clock inside the network and eliminates the danger of losing the connection to an external clock.
  • A network can use a free-running internal oscillator chip that holds an accurate clock.

Many networks have used GPS for frequency synchronization. A GPS satellite carries a highly stable atomic clock that provides precise time signals, which can be converted into frequency references by a GPS receiver. While the atomic clock provides highly precise time and frequency information, GPS is less reliable when there isn’t a clear view of the sky, such as during weather events.

Phase Synchronization makes sure that the phase of network signals is consistent throughout the network. Phase refers to a specific point in time on a waveform cycle. Phase synchronization ensures that electronics agree on the timing of the start and end of each bit in a data stream. This is critical in applications where data from multiple sources have to be combined or compared, such as in a cellular network.

Time Synchronization, also called Time of Day (ToD), ensures that all electronics agree on the current time, which is critical in applications where absolute time matters. Networks differ in their need for precise time. Network Time Protocol (NTP) can provide millisecond accuracy, while PTP can provide nanosecond accuracy along with phase synchronization.
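
To make time synchronization concrete, here’s a minimal sketch of an NTP query written in Python using only the standard library. It sends a single client packet to a public pool server and reads back the server’s transmit timestamp; a real NTP client polls repeatedly and filters the results, so treat this as an illustration rather than a production clock:

```python
# Minimal NTP (RFC 5905) query: ask a server for its transmit timestamp.
import socket
import struct
import time

NTP_EPOCH_OFFSET = 2_208_988_800  # seconds between 1900 (NTP) and 1970 (Unix) epochs

def ntp_time(server: str = "pool.ntp.org") -> float:
    packet = b"\x1b" + 47 * b"\x00"  # LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(5)
        sock.sendto(packet, (server, 123))
        data, _ = sock.recvfrom(48)
    # Transmit timestamp: 32-bit seconds + 32-bit fraction at bytes 40-47
    seconds, fraction = struct.unpack("!II", data[40:48])
    return seconds + fraction / 2**32 - NTP_EPOCH_OFFSET

offset = ntp_time() - time.time()
print(f"Local clock offset from NTP: {offset:+.3f} seconds")
```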

A New Security Risk

A new security risk has recently been brought to my attention. I was on a Teams call that included an attorney who would not let the call continue while an AI notetaker was present. His comment was that the notetaker is listening to everything that is said, transmitting everything verbatim to a data center somewhere in the cloud. He said he was aghast that people would hold meetings about sensitive topics and then give everything that was said to unknown parties outside of the call. He used the analogy that having an AI notetaker is the equivalent of inviting a reporter into a meeting.

It didn’t take much research to realize he is right. An AI notetaker records everything that is said in a meeting so that AI servers somewhere in the cloud can make a transcript or summary of the meeting. Every word said in a meeting, from the brilliant to the mundane, is sent to a data center out of the control of the people on the call.

There is no way to know what the folks who control the recording will do. At a minimum, it’s almost certain they are using the data to further train AI models, which are voracious for more data. A record of the meeting could be sold to others. It’s possible, and even likely, that somebody really good at AI prompts can figure out what is discussed at a corporate meeting.

Of course, the AI notetaker companies can all swear that they don’t use the data for purposes other than making a summary of the meeting. But I have to ask: does anybody have the slightest idea of the identity of the people who own and work at these businesses, and do you trust them? Nobody would let an unknown stranger into a work meeting, yet companies have begun willingly sharing conversations with the cloud that they might not even want to share with everybody inside their own company. It’s hard to see this as anything but a self-inflicted data breach.

Before writing this blog, I asked a few people about this. One friend who is an AI expert said that it would be too tempting for anybody in this kind of business to monetize the data they are gathering by selling it to others to train AI models. He said that most AI companies are struggling to be profitable, and that secondary revenue streams have to be tempting (just as it is tempting for ISPs to sell user data). He thought that it’s too expensive for companies to routinely sift through the data for tidbits of corporate espionage, but that it would be possible for anybody willing to spend the processing time, or who is interested in a specific business or a specific person. He also said he would be worried that AI companies could be using the data to gather a voice print of meeting participants, something that they might otherwise have a hard time finding for most people.

I don’t have any knowledge that the companies in this line of business are doing anything nefarious with the data gathered, and perhaps they are not. But letting key information out of a closed circle of people on a call is practically the definition of a security risk. There is no way to know if this might harm a business.

There are a few companies that sell notetakers that say that they keep all data on a user’s computer and don’t share it in the cloud. The AI engine that summarizes a call is still going to be in the cloud, so unless that can be proven somehow, that still feels like a risk. Tech companies have been lying to the public about how they use the data they gather since AOL and early web companies figured out how to monetize user data.

This is one of the oddest blogs I’ve ever written because it makes me wonder if I’m being paranoid. But that feeling is probably a sign that this is a real concern.

Broadband in a Hurry

There is an interesting new twist on wireless backhaul. The Swedish company TERASi has developed a wireless backhaul technology that enables networks to be configured on the fly. The company has developed a small, lightweight, portable microwave radio that can quickly be mounted anywhere on a tripod, a pole, or any object with line-of-sight to a neighboring radio.

The radios use frequencies in the 70 GHz range. They can provide 2 Gbps of bandwidth for up to 5 miles, or 10 Gbps for a few miles. Latency is a super-low 5 milliseconds.
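
The physics behind those range numbers can be sketched with the standard free-space path loss formula, which grows with both distance and frequency; at 70 GHz, the loss is steep, which is part of why multi-gigabit speeds only hold over a few miles. The distances below match the article’s figures, but the calculation ignores rain fade and antenna gain, which matter a lot at these frequencies:

```python
# Free-space path loss (Friis): FSPL(dB) = 20*log10(d_m) + 20*log10(f_Hz) - 147.55
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    return 20 * math.log10(distance_m) + 20 * math.log10(freq_hz) - 147.55

FREQ_HZ = 70e9  # 70 GHz band used by the radios
for miles, label in ((5, "~5 miles (2 Gbps range)"), (2, "~2 miles (10 Gbps range)")):
    meters = miles * 1609.344
    print(f"{label}: {fspl_db(meters, FREQ_HZ):.1f} dB path loss")
```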

The selling point for these portable radios is that they can be installed and configured in minutes, thanks to their small size of 3x3x1 inches. The company says a radio can be mounted on a photography tripod or even on a drone to create a quick wireless link. The small radios are being touted as a solution for quick links in the field for the military or for any time an ISP needs a fast temporary connection.

The radios are now in beta testing mode, and the company would like to hear from ISPs or local governments that might have a unique use case for radios that can create a quick link.

It’s not hard to imagine numerous uses for a microwave network that can be installed quickly.

  • The company is marketing this to the military as an alternative to using Starlink on the battlefield. There have been several times in Ukraine when the Starlink network went down – at least once intentionally, and once recently when Starlink had a worldwide outage. The microwave radios are resistant to interception since it’s nearly impossible to tap into the narrow beam between two devices. The radios also have the upside of delivering higher bandwidth than satellite.
  • The technology could be a boon for disaster recovery. ISPs and utilities could string together a backhaul network that would allow them to reestablish a quick bandwidth link to substations, cell towers, or powered electronics hubs. The devices could be in place quickly to establish connections for critical first responders. Local governments could use the radios to power public hotspots to give quick connectivity to the public.
  • These radios could be an instant patch for damaged networks, particularly in situations where repairs will be slow. They could be a quick fix for fiber cuts in places that are hard to reach, like bridges and railroad crossings. The radios could leapfrog landslides, fires, or flooded areas to keep a network functioning.
  • Temporary wireless networks make sense for places like construction sites that need bandwidth today, but not permanently.
  • Commercial firms might consider this as a quick fix between nearby buildings for emergency redundancy.

The downside is the expense of buying units that might never be used. But the huge upside is having the ability to create a quick broadband connection for emergencies and critical needs.

A New Major Telecom Vendor

Many folks in the industry will already recognize Amphenol, a company that is poised to become one of the major vendors in telecom. The company has decided to grow quickly by acquisition. It recently purchased the Connectivity and Cable Solutions subsidiary from CommScope for $10.5 billion. Amphenol also bought Trexon, a cable assembly business, for $1 billion.

Amphenol is a worldwide business with manufacturing facilities in forty countries. The company is in a wide range of markets, including military-aerospace, industrial, automotive, information technology, mobile phones, wireless infrastructure, broadband, medical, and pro audio. The largest division of Amphenol is Amphenol Aerospace (formerly Bendix Corporation).

In the telecom world, Amphenol Fiber Systems International (AFSI) was started in 1993 to manufacture fiber optic connectivity products and systems in Allen, Texas. In July 2024, Amphenol purchased two subsidiaries from CommScope, paying $2.1 billion for the Outdoor Wireless Networks (OWN) and Distributed Antenna Systems (DAS) businesses. Amphenol also resurrected the Andrew Corporation brand name – a company previously acquired by CommScope – which manufactures tower and rooftop systems and cable management accessories.

Amphenol’s acquisitions are not just focused on telecom, and recent acquisitions include Carlisle Interconnect Technologies (CIT) which makes antennas and sensors for harsh environments; Lutze, a railway technology company; LifeSync, a manufacturer of connectors, antennas, and sensors for the medical industry; Narda-MITEQ, a maker of RF and microwave equipment for the military; XMA, a manufacturer of passive microwave components; and Q Microwave, which specializes in RF filters and subsystems for the military and space sectors.

The acquisitions have already boosted Amphenol’s 2025 earnings, contributing 15% of revenues for the first half of the year. First-half revenues jumped 52% on a reported basis to hit $10.46 billion, with organic growth (excluding acquisition-related contributions) of 37%. In second-quarter 2025, revenues jumped 57% year over year on a reported basis, and 41% organically, to $5.65 billion.

The acquisition of CommScope’s fiber business makes Amphenol a major player in the broadband business. This puts Amphenol in competition with companies like Corning, Belden, and Prysmian. The company is also hoping for a big boost from selling fiber to supply the current AI explosion.

The CommScope sale might surprise some, but CommScope was in trouble due to a massive debt load of over $7 billion and slower-than-expected sales that led to inventory build-ups in its broadband and cable access segments.

Space Shorts September 2025

Space has been a part of communications networks since the communications satellite Telstar was first put into orbit in 1962 – I remember tracking Telstar across the sky as a kid. Space today is an increasingly important part of communications. The following are a few pieces of space news I recently found interesting.

Low-Orbit LEO. The Spanish startup Kreios Space is working to develop a new type of satellite that can fly at lower altitudes. LEO satellites today typically fly at altitudes from 220 to 350 miles above the Earth. Kreios is working on satellites that would fly at an altitude of 125 miles. LEO satellites for companies like Starlink are parked high enough to avoid the drag caused by the upper atmosphere. Kreios would be able to fly lower by using air intake to drive electric motors that would generate enough thrust to maintain altitude. This would allow for long-duration orbits and the ability to move the satellite without needing any traditional fuel.

It’s not hard to understand the advantage of flying at lower altitudes. The satellites would be able to observe the ground in much greater detail. Communications and broadband satellites at a lower altitude would also mean lower latency and faster communications times. The company thinks the improvement in performance would be between 3 and 16 times better than current LEO satellites flying at higher altitudes.
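
The latency part of that claim is easy to sanity-check, since the minimum round trip for a radio signal is set by altitude and the speed of light. The sketch below uses the altitudes from the article and assumes a satellite directly overhead; real paths are longer, so these are floors, not typical values:

```python
# Minimum round-trip radio latency to a satellite directly overhead.
C_KM_PER_S = 299_792.458  # speed of light, km/s

def min_rtt_ms(altitude_miles: float) -> float:
    altitude_km = altitude_miles * 1.609344
    return 2 * altitude_km / C_KM_PER_S * 1000

for miles in (350, 220, 125):  # altitudes cited in the article
    print(f"{miles:>3} miles up: ~{min_rtt_ms(miles):.2f} ms minimum round trip")
```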

Bluetooth Satellites. Hubble Network is a startup that is building a fleet of satellites to communicate with Bluetooth devices. The Bluetooth devices involved are different from the typical Bluetooth device that is designed to send a lot of data over a short distance. Instead, Hubble will connect to low-power Bluetooth sensors that only transmit a small amount of data. Hubble launched its first two satellites in 2024, now has seven satellites in orbit, and plans on having a full satellite constellation in place by 2028.

The advantage of the technology for Hubble customers is the use of low-power Bluetooth devices that are far less costly than connecting with cellular technology. Sensors can be placed anywhere on the planet, including places out of reach of cellular networks, and can be used for functions like tracking the movement of cargo ships. Hubble is already tracking millions of devices and expects to be able to keep track of billions. The company today is working with customers like Life360, which has a location-based safety service that lets families and friends share real-time locations with each other. The sensors can be used to track vehicle fleets and can provide instant feedback on things like driving speeds.

Space Robots. I can’t think of a space sci-fi movie that didn’t have worker robots in the background taking care of the maintenance required to work in space. I saw an article about Icarus, a startup that is raising money to develop robot workers to replace astronauts on the ISS space station. That set me on a search to understand the space robotics market, and there is a space robot race underway. Established companies like Maxar Technologies, Northrop Grumman, MDA, Honeybee Robotics, and Motiv Space Systems have been active in the field. They are joined by numerous startups, including Astrobotic Technology, GITAI, Rovial Space, BigDipper Exploration Technologies, Space 11, and Novium.

We’ve already seen space robots for many years, such as the various Mars rovers – NASA’s Sojourner, Spirit, Opportunity, Curiosity, and Perseverance, and China’s Zhurong. The companies listed are working on robots of all sizes, from the inchworm robots being developed by GITAI to moon rovers being developed by several companies.

Asteroid Mining. There have now been several missions to explore asteroids and bring back samples, including NASA’s OSIRIS-REx mission, which returned samples from the asteroid Bennu in 2023, and the Japanese Hayabusa2 mission, which returned samples from the asteroid Ryugu in 2020. These were government missions funded for scientific research purposes at a cost of hundreds of millions of dollars.

Startup Karman+ is working on being able to fund a round trip to an asteroid for roughly $10 million, with the cost hopefully dropping in the future. This is the first step in developing an asteroid mining industry that would use robots to mine valuable metals from asteroids and round-trip rockets to ferry materials back to Earth orbit. The first mission only plans to bring back one kilogram of material and is a proof of concept for the technology. The ultimate technology will need to mine materials in space to create the fuel needed to return heavier payloads to Earth.