Quantum Encryption

Verizon recently conducted a trial of quantum key distribution technology, which is the first generation of quantum encryption. Quantum cryptography is being developed as the next-generation encryption technique that should protect against hacking by quantum computers. Carriers like Verizon care about encryption because almost every transmission that crosses our communications networks is encrypted.

The majority of encryption today is asymmetric encryption, meaning the technique relies on a pair of keys – one public and one private. To use an example, if you want to send encrypted instructions to your bank (such as to pay your broadband bill), your computer uses the publicly available key issued by the bank to encode the message. The bank then uses a different private key, which only it holds, to decipher the message.

Key-based encryption is considered safe because it takes immense amounts of computing power to derive the private key. Encryption methods today mostly fight off hacking by using long encryption keys – the latest standard calls for keys of at least 2048 bits.
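
For readers who like to see the idea in code, here is a minimal sketch of public-key encryption in Python using the widely used third-party cryptography package. The example is purely illustrative – it is not the software any particular bank runs – but it follows the same pattern: encrypt with the public key, decrypt with the 2048-bit private key.

from cryptography.hazmat.primitives.asymmetric import rsa, padding
from cryptography.hazmat.primitives import hashes

# The "bank" generates a 2048-bit key pair and publishes only the public key.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# A customer encrypts an instruction using the bank's public key.
message = b"Please pay my broadband bill"
ciphertext = public_key.encrypt(
    message,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))

# Only the holder of the private key can recover the original message.
recovered = private_key.decrypt(
    ciphertext,
    padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                 algorithm=hashes.SHA256(), label=None))
assert recovered == message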

Unfortunately, current encryption methods won’t stay safe for much longer. It seems likely that quantum computers will soon have the capability of cracking today’s encryption keys. This is possible because quantum computers can perform thousands of simultaneous calculations and could cut the time needed to crack an encryption key from months or years down to hours. Once a quantum computer can do that, no current encryption scheme is safe. The first targets for hackers with quantum computers will probably be big corporations and government agencies, but it probably won’t take long for the technology to be turned toward hacking into bank accounts.

Today’s quantum computers are not yet capable of cracking today’s encryption keys, but computing experts say that it’s just a matter of time. This is what is prompting Verizon and other large ISPs to look for a form of encryption that can withstand hacks from quantum computers.

Quantum key distribution (QKD) uses a method of encryption that might be unhackable. Photons are sent one at a time through a fiber optic connection alongside an encrypted message. If anybody attempts to intercept or listen to the encrypted stream, the polarization of the photons is disturbed, and the recipient of the encrypted message instantly knows the transmission is no longer safe. The theory is that this will stop hackers before they learn enough to crack and analyze a data stream.
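
To make the idea concrete, below is a toy simulation in the spirit of BB84, the best-known QKD protocol. The article doesn't say which protocol Verizon used, so treat this strictly as an illustration: random basis choices stand in for photon polarization, and an eavesdropper shows up as a jump in the error rate of the shared key.

import random

def qkd_error_rate(n_photons=2000, eavesdrop=False):
    # Alice picks random bits and random polarization bases (0 = rectilinear, 1 = diagonal).
    alice_bits = [random.randint(0, 1) for _ in range(n_photons)]
    alice_bases = [random.randint(0, 1) for _ in range(n_photons)]
    photons = list(zip(alice_bits, alice_bases))

    # An eavesdropper must guess a basis to measure in; a wrong guess disturbs the photon.
    if eavesdrop:
        tapped = []
        for bit, basis in photons:
            eve_basis = random.randint(0, 1)
            if eve_basis != basis:
                bit = random.randint(0, 1)   # measuring in the wrong basis randomizes the bit
            tapped.append((bit, eve_basis))
        photons = tapped

    # Bob measures each photon in his own randomly chosen basis.
    bob_bases = [random.randint(0, 1) for _ in range(n_photons)]
    bob_bits = [bit if basis == bob_bases[i] else random.randint(0, 1)
                for i, (bit, basis) in enumerate(photons)]

    # Alice and Bob publicly compare bases, keep the positions where they match,
    # and check that sifted key for errors: errors reveal an eavesdropper.
    kept = [i for i in range(n_photons) if alice_bases[i] == bob_bases[i]]
    errors = sum(1 for i in kept if alice_bits[i] != bob_bits[i])
    return errors / len(kept)

print("error rate, quiet channel: ", round(qkd_error_rate(eavesdrop=False), 3))  # ~0.0
print("error rate, tapped channel:", round(qkd_error_rate(eavesdrop=True), 3))   # ~0.25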

The Verizon trial added a second layer of security using a quantum random number generator. This technique generates truly random numbers and constantly refreshes the encryption keys in a way that can’t be predicted.

Verizon and others have shown that these encryption techniques can be performed over existing fiber optic lines without modifying the fiber technology. In early trials there was a worry that new types of fiber transmission gear would be needed for the process.

For now, the technology required for quantum encryption is expensive, but as the price of the underlying quantum hardware drops, this encryption technique ought to become affordable and available to anybody who wants to encrypt a transmission.

Network Outages Go Global

On August 30, CenturyLink experienced a major network outage that lasted for over five hours and disrupted CenturyLink customers nationwide as well as many other networks. What was unique about the outage was the scope of the disruption: it affected video streaming services, game platforms, and even webcasts of European soccer.

This is an example of how telecom network outages have expanded in size and scope and can now be global in scale, a development I find disturbing because it means that our telecom networks are growing more vulnerable over time.

The story of what happened that day is fascinating, and I’m including two links for those who want a peek into how the outage was viewed by outsiders engaged in monitoring Internet traffic flows. First is a report from a Cloudflare blog written on the day of the outage. Cloudflare is a company that specializes in protecting large businesses and networks from attacks and outages. The blog describes how Cloudflare dealt with the outage by rerouting traffic away from the CenturyLink network. This story alone is a great example of the modern network protections that have been put into place to deal with major Internet traffic disruptions.

The second report comes from ThousandEyes, which is now owned by Cisco. The company is similar to Cloudflare and helps clients deal with security issues and network disruptions. The ThousandEyes report was written the day after the outage and discusses the likely reasons for it. Again, this is an interesting story for those who don’t know much about the operations of the large fiber networks that constitute the Internet. ThousandEyes confirms the suspicion Cloudflare had expressed the day before: the issue was caused by a powerful network command, issued by CenturyLink using Flowspec, that resulted in a logic loop that turned off and restarted BGP (Border Gateway Protocol) over and over again.

It’s reassuring to know that there are companies like Cloudflare and ThousandEyes that can stop network outages from spreading into other networks. But what is also clear from the reporting of the event is that a single incident or bad command can take out huge portions of the Internet.

That is something worth examining from a policy perspective. It’s easy to understand how this happens at companies like CenturyLink. The company has acquired numerous networks over the years from the old Qwest network up to the Level 3 networks and has integrated them all into a giant platform. The idea that the company owns a large global network is touted to business customers as a huge positive – but is it?

Network owners like CenturyLink have consolidated and concentrated control of the network into a few key network hubs run by a relatively small staff of network engineers. ThousandEyes says that the CenturyLink Network Operations Center in Denver is one of the best in existence, and I’m sure that’s right. But that network center controls a huge piece of the country’s Internet backbone.

I can’t find where CenturyLink ever gave the exact reason why the company issued a faulty Flowspec command. It may have been used to try to tamp down a problem at one customer or have been part of more routine network upgrades implemented early on a Sunday morning when the Internet is at its quietest. From a policy perspective, it doesn’t matter – what matters is that a single faulty command could take down such a large part of the Internet.

This should cause concern for several reasons. First, if one unintentional faulty command can cause this much damage, then the network is susceptible to the same thing being done deliberately. I’m sure that the network engineers running the Internet will say that’s not likely to happen, but they also would have expected this particular outage to have been stopped much sooner and more easily.

I think the biggest concern is that the big network owners have adopted the idea of centralization to such an extent that outages like this one are more and more likely. Centralization of big networks means that outages can now reach globally rather than just locally, as was the case only a decade ago. Our desire to be as efficient as possible through centralization has increased the risk to the Internet, not decreased it.

A good analogy for understanding the risk in our Internet networks comes from looking at the nationwide electric grid. It used to be routine to purposefully allow neighboring grids to interact automatically, until it became obvious after some giant rolling blackouts that we needed firewalls between grids. The electric industry reworked the way that grids interact, and the big rolling regional outages disappeared. It’s time to have that same discussion about the Internet infrastructure. Right now, the security of the Internet is in the hands of a few corporations that stress the bottom line first, and that have willingly accepted increased risk to our Internet backbones as the price of cost efficiency.

Network Function Virtualization

Comcast recently did a trial of DOCSIS 4.0 at a home in Jacksonville, Florida, and was able to combine various new techniques and technologies to achieve a symmetrical 1.25 Gbps connection. Comcast says this was achieved using DOCSIS 4.0 technology coupled with network function virtualization (NFV) and distributed access architecture (DAA). Today I’m going to talk about the NFV concept.

The simplest way to explain network function virtualization is that it brings the lessons learned in creating efficient data centers to the edge of the network. Consider a typical data center job: providing computing for a large business customer. Before the conversion to the cloud, the large business network likely contained a host of different devices such as firewalls, routers, load balancers, VPN servers, and WAN accelerators. In a fully realized cloud application, all of these devices are replaced with software that mimics the functions of each device, all operated remotely in a data center consisting of banks of super-fast computer chips.
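
Here is a deliberately oversimplified sketch of that idea in Python. It is not how Comcast or any vendor actually implements NFV – the function names, ports, and addresses below are invented for illustration – but it shows the core concept: a box of dedicated hardware becomes a software function that can run on any generic server and be chained with other functions.

# Toy "virtual network functions": each is just software standing in for a dedicated appliance.
def virtual_firewall(packet, blocked_ports={23, 3389}):
    """Drop packets aimed at ports the policy forbids."""
    return packet if packet["dst_port"] not in blocked_ports else None

def virtual_load_balancer(packet, servers=("10.0.0.11", "10.0.0.12", "10.0.0.13")):
    """Spread traffic across back-end servers by hashing the source address."""
    packet["dst_ip"] = servers[hash(packet["src_ip"]) % len(servers)]
    return packet

def service_chain(packet):
    """String the virtual functions together in place of a rack of gear."""
    for func in (virtual_firewall, virtual_load_balancer):
        packet = func(packet)
        if packet is None:          # dropped by one of the functions in the chain
            return None
    return packet

print(service_chain({"src_ip": "203.0.113.5", "dst_port": 443}))  # allowed and load-balanced
print(service_chain({"src_ip": "203.0.113.9", "dst_port": 23}))   # blocked by the firewall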

There are big benefits from a conversion to the cloud. Each of the various devices used in the business IT environment is expensive and proprietary. The host of expensive devices, likely from different vendors, is replaced with lower-cost generic servers that run on fast chips. A pile of expensive electronics sitting at each large business is replaced by much cheaper servers sitting in a data center in the cloud.

There is also a big efficiency gain from the conversion, because the existing devices in the historic network inevitably ran different software systems that were never 100% compatible. Everything was cobbled together and made to work, but the average IT department at a large corporation never fully understood everything going on inside the network. There were always unexplained glitches when the software systems of different devices interacted.

In this trial, Comcast applied the same concept to the cable TV broadband network. Network function virtualization was used to replace the various electronic devices in the traditional Comcast network, including the CMTS (cable modem termination system), various network routers, the transport electronics that send a broadband signal to neighborhood nodes, and likely the whole way down to the set-top box. All of these electronic components were virtualized, with their functions performed in the data center or closer to the edge on devices using the same generic chips used in the data center.

There are some major repercussions for the industry if the future is network function virtualization. First, all of the historic telecom vendors in the industry disappear. Comcast would operate a big data center composed of generic servers, as is done today in other data centers all over the country. Gone would be different brands of servers, transport electronics, and CMTS servers – all replaced by sophisticated software that will mimic the performance of each function performed by the former network gear. The current electronics vendors are replaced by one software vendor and cheap generic servers that can be custom built by Comcast without the need for an external vendor.

This also means a drastically reduced need for electronics technicians at Comcast, replaced by a handful of folks operating the data center. We’ve seen this same transition roll through the IT world as IT staffs have been downsized due to the conversion to the cloud. There is no longer a need for technicians who understand proprietary hardware such as Cisco servers, because those devices no longer exist in the virtualized network.

NFV should make a cable company more nimble, in that it can introduce a new feature for a set-top box or a new efficiency in data traffic routing instantly by upgrading the software system that now operates the cable network.

But there are also two downsides for a cable company. First, conversion to a cloud-based network means an expensive rip and replacement of every electronics component in the network. There is no slow migration into DOCSIS 4.0 if it means a drastic redo of the underlying way the network functions.

There is also the new danger that comes from reliance on one set of software to do everything in the network. Inevitably, software problems will arise – and a software glitch in an NFV network could mean a crash of the entire Comcast network everywhere. That may sound extreme, and companies operating in the cloud work hard to minimize such risks, but we’ve already seen a foreshadowing of what this might look like. The big fiber providers have centralized network functions across their national fiber networks, and in recent years we’ve seen outages that knocked out broadband networks across half of the US. When a cloud-based network crashes, it’s likely to crash dramatically.

Breakthroughs in Laser Research

Since the fiber industry relies on laser technology, I periodically look to see the latest breakthroughs and news in the field of laser research.

Beaming Lasers Through Tubes. Luc Thévenaz and a team from the Fiber Optics Group at the École Polytechnique Fédérale de Lausanne in Switzerland have developed a technology that amplifies light through hollow-tube fiber cables.

Today’s fiber has a core of solid glass. As light moves through the glass, the light signal naturally loses intensity due to impurities in the glass, losses at splice points, and light that bounces astray. Eventually, the light signal must be amplified and renewed if the signal is to be beamed for great distances.

Thévenaz and his team reasoned that the light signal would travel further if it could pass through a medium with less resistance than glass. They created hollow fiber glass tubes with the center filled with air. They found that there was less attenuation and resistance as the light traveled through the air tube and that they could beam signals for a much greater distance before needing to amplify the signal. However, at normal air pressure, they found that it was challenging to intercept and amplify the light signal.

They finally struck on the idea of pressurizing the air in the tube. They found that as air is compressed in the tiny tubes, the air molecules form regularly spaced clusters, and the compressed air acts to strengthen the light signal, similar to the way sound waves propagate through the air. The results were astounding: they found that they could amplify the light signal as much as 100,000 times. Best of all, this can be done at room temperature. It works for all frequencies of light from infrared to ultraviolet, and it seems to work with any gas.

The implication of the breakthrough is that light signals will be able to be sent for great distances without amplification. The challenge will be to find ways to pressurize the fiber cable (something we used to do fifty years ago with air-filled copper cable). The original paper is available for purchase in Nature Photonics.

Bending the Laws of Refraction. Ayman Abouraddy, a professor in the College of Optics and Photonics at the University of Central Florida, and his team have developed a new kind of laser that doesn’t obey the understood principles of how light refracts and travels through different substances.

Light normally slows down when it travels through denser materials. This is something we all instinctively understand, and it can be seen by putting a spoon into a glass of water. To the eye, it looks like the spoon bends at that point where the water and air meet. This phenomenon is described by Snell’s Law, and if you took physics you probably recall calculating the angles of incidence and refraction predicted by the law.
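
For anyone who wants to dust off that physics, here is a quick worked example of Snell’s Law (n1 · sin θ1 = n2 · sin θ2) using the standard refractive indices of air and water; the 30-degree entry angle is just a convenient choice.

import math

n_air, n_water = 1.000, 1.333       # standard refractive indices
theta1 = math.radians(30.0)         # light hits the water surface at 30 degrees

# Snell's Law: n1 * sin(theta1) = n2 * sin(theta2)
theta2 = math.degrees(math.asin(n_air * math.sin(theta1) / n_water))
print(f"refracted angle in water: {theta2:.1f} degrees")   # about 22 degrees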

The new lasers don’t follow Snell’s law. Light is arranged into what the researchers call spacetime wave packets. The packets can be arranged in such a way that they don’t slow down or speed up as they pass through materials of different density. That means that the light signals taking different paths can be timed to arrive at the destination at the same time.

The scientists created the light packets using a device known as a spatial light modulator, which arranges the energy of a pulse of light in a way that the normal properties of space and time are no longer separate. I’m sure that, like me, you have no idea what that means.

This creates a mind-boggling result in that light can pass through different mediums and yet act as if there is no resistance. The packets still follow another age-old rule, Fermat’s Principle, which says that light always travels along the path that takes the least time. The findings are leading scientists to look at light in a new way and to develop new concepts for the best way to transmit light beams. The scientists say it feels as if the old restrictions of physics have been lifted, giving them a host of new avenues for light and laser research.

The research was funded by the U.S. Office of Naval Research. One of the most immediate uses of the technology would be the ability to communicate simultaneously from planes or satellites with submarines in different locations. The research paper is also available from Nature Photonics.

 

Breakthrough in Video Compression

Fraunhofer HHI, part of Europe’s largest applied research organization, recently announced a new video codec, H.266, or Versatile Video Coding (VVC). This represents a huge breakthrough in video compression technology and promises to reduce the size of transmitted video by 50%. This is big news for ISPs since video drives a large percentage of network traffic.

Codec is shorthand for compressor/decompressor. Codec software is used to prepare videos for streaming over the Internet. The codec compresses video signals at the sender’s end and decompresses them at the viewer’s end. The decompressed video file you watch on your TV, computer, or smartphone is much larger than the video file that is transmitted to you over the Internet.
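
Some rough arithmetic shows why that compression step matters so much. The frame rate, color depth, and "typical" encoded bitrate below are assumptions chosen for illustration, not figures from Fraunhofer.

# Uncompressed 4K video: 3840 x 2160 pixels, 24 bits of color per pixel, 60 frames per second.
raw_bps = 3840 * 2160 * 24 * 60
print(f"raw 4K stream:     {raw_bps / 1e9:.1f} Gbps")          # ~11.9 Gbps

# A typical encoded 4K stream today is on the order of 16 Mbps (an assumed, round figure).
encoded_bps = 16e6
print(f"encoded 4K stream: {encoded_bps / 1e6:.0f} Mbps")
print(f"compression ratio: roughly {raw_bps / encoded_bps:,.0f} to 1")   # ~750 to 1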

Codec software is used to compress video signals of all types. It’s used by online video vendors like Netflix and YouTube TV. It’s used by networks like ESPN that broadcast live sports. It’s used by online video games. It’s used in online chat apps like Zoom. The codec is used to compress images from video cameras that are transmitted over the web. Any video you receive online has likely been compressed and decompressed by codec software. Fraunhofer claims its codec software is included in over 10 billion devices.

Reducing the size of video files will be a huge deal in the future. Sandvine reported in October of 2019 that video represented over 60% of all downloads on the web. We know the amount of streaming video has exploded during the pandemic, aided by massive cord-cutting. Cisco predicts that video could grow to be 82% of downloaded web traffic by the end of 2022.
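
Using the article’s own figures, a quick back-of-the-envelope calculation shows what a codec that halves video file sizes could mean for total network traffic, assuming everything else stays constant.

def remaining_traffic(video_share, codec_savings=0.50):
    """Fraction of today's download traffic left if the video portion shrinks by codec_savings."""
    return (1 - video_share) + video_share * (1 - codec_savings)

for share in (0.60, 0.82):   # Sandvine's ~60% today and Cisco's 82% forecast
    print(f"video at {share:.0%} of traffic -> total traffic falls by {1 - remaining_traffic(share):.0%}")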

The new H.266 codec standard will replace the earlier H.264 and H.265 codecs. Interestingly, the H.265 codec reduced the size of video files by 50% compared to its predecessor H.264. Fraunhofer says the new software is particularly well-suited for transmitting 4K and 8K streaming video for flat-screen TVs and for video with motion, like high-resolution 360-degree panoramas.

The new codec won’t be introduced immediately because it has to be designed into and installed in the network gear that transmits video and in all of the devices we use to watch video. Hopefully, the new codec will hit the market sooner than its predecessor H.265 did. That codec was announced in a similar press release by Fraunhofer in 2012 and has only recently been implemented widely across networks.

H.265 got embroiled in a number of patent disputes. The new H.266 codec might encounter similar problems since the team working on the codec includes Apple, Ericsson, Intel, Huawei, Microsoft, Qualcomm, and Sony. Fraunhofer is trying to avoid disputes by implementing a uniform and transparent licensing model.

There also might be an eventual competitor to the new codec. The Alliance for Open Media announced a codec in 2015 called AV1, which competes with the current H.265 codec. AV1 is open-source, royalty-free software supported by Google, Microsoft, Mozilla, and Cisco. (Note that Microsoft is backing both codecs.) This group has been working on a forward-looking codec as well.

Even if everything goes smoothly, we’re unlikely to see the H.266 codec affect consumer video for three to four years. Carriers could deploy the codec on network gear sooner than that.

A New Fiber Optic Speed Record

Researchers at University College London (UCL) have set a new record for fiber optic data transmission. They’ve been able to communicate through a fiber optic cable at over 178 terabits per second, or 178,000 gigabits per second. The research was done in collaboration with the fiber optic firms Xtera and KDDI Research. The press release announcing the achievement claims this is 20% faster than the previous record.

The achieved speed has almost reached the Shannon limit, which defines the maximum amount of error-free data that can be sent over a communications channel. Perhaps the most impressive thing about the announcement was that UCL scientists achieved this speed over existing fiber optic cables and didn’t use pristine fiber installed in a laboratory.
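
The Shannon limit comes from a simple formula, C = B · log2(1 + SNR), where B is the channel bandwidth and SNR is the signal-to-noise ratio. The sketch below uses illustrative numbers – roughly 16.8 THz of optical bandwidth, a figure widely reported for the UCL demonstration, and an assumed 30 dB signal-to-noise ratio – just to show the scale involved; the real experiment’s conditions vary across the band.

import math

def shannon_capacity_bps(bandwidth_hz, snr_linear):
    """Maximum error-free data rate of a channel: C = B * log2(1 + SNR)."""
    return bandwidth_hz * math.log2(1 + snr_linear)

# Assumed illustrative parameters: 16.8 THz of usable optical bandwidth and an SNR of 1000 (30 dB).
capacity = shannon_capacity_bps(16.8e12, 1000)
print(f"Shannon capacity: {capacity / 1e12:.0f} Tbps")   # ~167 Tbps, the same order as the record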

The fast signal throughput was achieved by combining several techniques. First, the lasers use Raman amplification, which involves injecting photons of lower energy into a high-frequency photon stream. This produces predictable photon scattering that can be tailored to the characteristics needed for light to travel optimally through glass fiber.

The researchers also used erbium-doped fiber amplifiers. For those who have forgotten the periodic table, erbium is a naturally occurring metal with an atomic number of 68. Erbium has a key characteristic needed for fiber optic amplifiers in that the metal efficiently amplifies light in the wavelengths used by fiber optic lasers.

Finally, the amplifiers used for the fast speeds were semiconductor optical amplifiers (SOAs). These are diodes that have been treated with anti-reflection coatings so that the laser light signal can pass through with the least amount of scattering. The net result of all of these techniques is that the scientists were able to reduce the amount of light that is scattered during transmission through a glass fiber cable, thus maximizing data throughput.

UCL also used a wider range of wavelengths than is normally used in fiber optics. Most fiber optic transmission technologies leave empty guard bands around each wavelength being used (much like we do with radio transmissions). The UCL scientists used all of the spectrum, without separation bands, and applied several techniques to minimize interference between the bands of light.

This short description of the technology being used is not meant to intimidate a non-technical reader, but rather show the level of complexity in today’s fiber optic technology. It’s a technology that we all take for granted, but which is far more complex than most people realize. Fiber optic technology might be the most lab-driven technology in daily use since the technology came from research labs and scientists have been steadily improving the technology for decades.

We’re not going to see multi-terabit lasers in regular use in our networks anytime soon, and that’s not the purpose of this kind of research. UCL says that the most immediate benefit of their research is that they can use some of these same techniques to improve the efficiency of existing fiber repeaters.

Depending upon the kind of glass being used and the spectrum utilized, current long-haul fiber technology requires having the signals amplified every 25 to 60 miles. That means a lot of amplifiers are needed for long-haul fiber routes between cities. Without amplification, the laser light signals get scattered to the point where they can’t be interpreted at the receiving end of the light transmission. As implied by their name, amplifiers boost the power of light signals, but their more important function is to reorder the light signals into the right format to keep the signal coherent.
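
A quick calculation shows how many amplifier sites that spacing implies on a long route. The 2,800-mile route length is an assumption, roughly the length of a coast-to-coast fiber path.

route_miles = 2800                  # assumed length of a coast-to-coast route
for spacing_miles in (25, 60):      # the amplifier spacing range cited above
    sites = route_miles // spacing_miles
    print(f"{spacing_miles}-mile spacing: roughly {sites} amplifier sites")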

Each amplification site adds to the latency in long-haul fiber routes since the fibers must be spliced into amplifiers and the signal passed through the amplifier electronics. The amplification process also introduces errors into the data stream, meaning some data has to be sent a second time. Each amplifier site must also be powered and housed in a cooled hut or building. Reducing the number of amplifier sites would reduce the cost and the power requirement and increase the efficiency of long-haul fiber.

Keeping Track of Satellites

The topic of satellite broadband has been heating up lately. Elon Musk’s Starlink now has over 540 broadband satellites in the sky and is talking about starting a few beta tests of the technology with customers. OneWeb went into bankruptcy but is being bought out by a team consisting of the British government and Bharti Airtel, the largest cellular company in India. Jeff Bezos has continued to move forward with Project Kuiper, and the FCC recently gave the nod for the company to move ahead.

These companies have grandiose plans to launch large numbers of satellites. Starlink’s first constellation will have over 4,000 satellites – and the FCC has given approval for up to 12,000. Elon Musk says the company might eventually grow to over 30,000 satellites. Project Kuiper told the FCC it has plans for nearly 3,300 satellites. The original OneWeb plan called for over 1,200 satellites. Telesat has announced a goal of launching over 500 satellites. A big unknown is Samsung, which announced a plan a year ago to launch over 4,600 satellites. Even if all of these companies don’t fully meet their goals, there are going to be a lot of satellites in the sky over the next decade.

To put these huge numbers into perspective, consider the number of satellites ever shot into space. The United Nations Office for Outer Space Affairs (UNOOSA) has been tracking space launches for decades. It reported at the end of 2019 that there have been 8,378 objects put into space since the first Sputnik in 1957. As of the beginning of 2019, there were 4,987 satellites still in orbit, although only 1,957 were still operational.

There is a lot of concern in the scientific community about satellite collisions and space junk. Low earth orbit satellites travel at a speed of about 17,500 miles per hour to maintain orbit. Satellites that collide at that speed create many new pieces of space junk, also traveling at high speed. NASA estimates there are currently over 128 million pieces of orbiting debris smaller than 1 centimeter, 900,000 objects between 1 and 10 centimeters, and 22,000 pieces of debris larger than 10 centimeters (about 4 inches).
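
The quoted orbital speed falls straight out of basic physics. The sketch below assumes a circular orbit at 550 kilometers, roughly the altitude of the Starlink constellation; lower orbits come out slightly faster, which is where the roughly 17,500 mph figure comes from.

import math

G = 6.674e-11           # gravitational constant, m^3 kg^-1 s^-2
M_EARTH = 5.972e24      # mass of the Earth, kg
R_EARTH = 6_371_000     # mean radius of the Earth, m

altitude_m = 550_000    # assumed low-earth orbit, roughly Starlink's altitude
r = R_EARTH + altitude_m

# Circular orbital velocity: v = sqrt(G * M / r)
v_ms = math.sqrt(G * M_EARTH / r)
print(f"orbital speed: {v_ms:.0f} m/s, about {v_ms * 2.23694:,.0f} mph")   # ~17,000 mph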

NASA scientist Donald Kessler described the dangers of space debris in 1978 in what’s now described as the Kessler syndrome. Every space collision creates more debris and eventually there could be a cloud of circling debris that will make it nearly impossible to maintain satellites in space. While scientists think that such a cloud is almost inevitable, some worry that a major collision between two large satellites, or malicious destruction by a bad actor government could accelerate the process and could quickly knock out all of the satellites in a given orbit.

There has been only one known satellite collision, when a dead Russian satellite collided with an Iridium communications satellite over a decade ago. That collision kicked off hundreds of pieces of large debris. There have been numerous near misses, including with the manned Space Station. There was another near-miss in January between the defunct Poppy VII-B military satellite from the 1960s and the retired IRAS satellite that was used for infrared astronomy in the 1980s. It was recently reported that Russia launched a new satellite that passed through one of Starlink’s newly launched swarms.

The key to avoiding collisions is to use smart software to track the trajectories of satellites and provide ample warning for satellite owners to correct their orbital paths and avoid a collision. Historically, that tracking role has been played by the US military – but the Pentagon has made it clear that it is not willing to continue in this role. No software is going to help avoid collisions between dead satellites like the close call in January. However, all newer satellites should be maneuverable enough to avoid collisions as long as sufficient notice is provided.

A few years ago, the White House issued a directive that would give the tracking responsibility to the Commerce Department under a new Office of Space Commerce. However, some in Congress think the proper agency to track satellites is the Federal Aviation Administration, which already tracks anything in the sky at lower altitudes. Somebody in government needs to take on this role soon, because the Pentagon warns that its technology, now thirty years old, is obsolete.

The need for tracking is vital. Congress needs to decide soon how this is to be done and provide the funding to implement a new tracking system. It would be ironic if the world solves the rural broadband problem using low orbit satellites, only to see those satellites disappear in a cloud of debris. If the debris cloud is allowed to form it could take centuries for it to dissipate.

An Update on ATSC 3.0

This is the year when we’ll finally start seeing the introduction of ATSC 3.0. This is the newest upgrade to broadcast television and is the first big upgrade since TV converted to all-digital over a decade ago. ATSC 3.0 is the latest standard that’s been released by the Advanced Television Systems Committee that creates the standards used by over-the-air broadcasters.

ATSC 3.0 will bring several upgrades to broadcast television that should make it more competitive with cable company video and Internet-based programming. For example, the new standard will make it possible to broadcast over the air in 4K quality. That’s four times as many pixels as 1080i TV and rivals the best quality available from Netflix and other online content providers.
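
The pixel arithmetic behind that claim is simple: a 4K UHD frame is 3840 x 2160 pixels, while a 1080 broadcast frame is 1920 x 1080.

uhd_4k = 3840 * 2160     # pixels per 4K UHD frame
hd_1080 = 1920 * 1080    # pixels per 1080 broadcast frame

print(f"4K frame:   {uhd_4k:,} pixels")        # 8,294,400
print(f"1080 frame: {hd_1080:,} pixels")       # 2,073,600
print(f"ratio:      {uhd_4k / hd_1080:.0f}x")  # 4x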

ATSC 3.0 also will support the HDR (high dynamic range) protocol that enhances picture quality by creating a better contrast between light and dark parts of a TV screen. ATSC 3.0 also adds additional sound channels to allow for state-of-the-art surround sound.

Earlier this year, Cord Cutters News reported that the new standard was to be introduced in 61 US markets by the end of 2020 – however, that has slowed a bit due to the COVID-19 pandemic. But the new standard should appear in most major markets sometime in 2021. Homes will either have to buy ATSC 3.0-capable TVs, which are just now hitting the market, or buy an external ATSC 3.0 tuner to get the enhanced signals.

One intriguing aspect of the new standard is that a separate data path is created with TV transmissions. This opens up some interesting new features for broadcast TV. For example, a city could selectively send safety alerts and messages to homes in just certain parts of a city. This also could lead to targeted advertising that is not the same in every part of a market. Local advertisers have often hesitated to advertise on broadcast TV because of the cost and waste of advertising to an entire market instead of just the parts where they sell service.

While still in the early stages of exploration, it’s conceivable that ATSC 3.0 could be used to create a 25 Mbps data transmission path. This might require several stations joining together to provide that much bandwidth. While a 25 Mbps data path is no longer a serious competitor to much faster cable broadband speeds, it opens up a lot of interesting possibilities. For example, this bandwidth could offer a competitive alternative for providing data to cellphones and could present a major challenge to cellular carriers and their stingy data caps.

ATSC 3.0 data could also be used to bring broadband into the home of every urban school student. If this broadband was paired with computers for every student, this could go a long way towards solving the homework gap in urban areas. Unfortunately, like most other new technologies, we’re not likely to see the technology in rural markets any time soon, and perhaps never. The broadband signals from tall TV towers will not carry far into rural America.

The FCC voted on June 16 on a few issues related to the ATSC 3.0 standard. In a blow to broadcasters, the FCC decided that TV stations could not use close-by vacant channels to expand ATSC 3.0 capabilities. The FCC instead decided to maintain vacant broadcast channels to be used for white space wireless broadband technology.

The FCC also took a position that isn’t going to sit well with the public. As homeowners have continued to cut the cord, there have been record sales of indoor antennas for receiving over-the-air TV in the last few years. Over-the-air broadcasters are going to be allowed to sunset the older ATSC 1.0 standard in 2023. That means that homes will have to replace TVs or install an external ATSC 3.0 tuner if they want to continue to watch over-the-air broadcasts.

Who Owns Your Connected Device?

It’s been clear for years that IoT companies gather a large amount of data from customers. Everything from a smart thermometer to your new car gathers and reports data back to the cloud. California has tried to tackle customer data privacy through the California Consumer Privacy Act that went into effect on January 1.

Web companies must give California consumers the ability to opt out of having their personal information sold to others. Consumers must be given the option to have their data deleted from a site. Consumers must be provided the opportunity to view the data collected about them. Consumers also must be shown the identity of third parties that have purchased their data. The new law defines personal data broadly to include things like name, address, online identifiers, IP addresses, email addresses, purchasing history, geolocation data, audio/video data, biometric data, or any effort made to classify customers by personality type or trends.

However, there is one area that the new law doesn’t cover. There are examples over the last few years of IoT companies making devices obsolete and nonfunctional. Two examples that got a lot of press involve Charter security systems and Sonos smart speakers.

When Charter purchased Time Warner Cable, the company decided that it didn’t want to support the home security business it had inherited. Charter ended its security business line earlier this year and advised customers that the company would no longer provide alarm monitoring. Unfortunately for customers, this means their security devices become nonfunctional. Customers probably felt safe in choosing Time Warner Cable as a security company because the company touted that it was using off-the-shelf electronics like Ring cameras and Abode security devices – two of the most common brands of DIY smart devices.

Most of the devices, though, won’t work without being connected to the Charter cloud, because the company modified the software to work only in a Charter environment. Customers can connect some of the smart devices, like smart thermostats and lights, to a different hub, but they can’t repurpose the security devices, which are the most expensive parts of most systems. When the Charter service ended, homeowners were left with security systems that can’t connect to a monitoring service or law enforcement. Charter’s decision to exit the security business turned the devices into bricks.

In a similar situation, Sonos notified owners of older smart speakers that it will no longer support the devices, meaning no more software upgrades or security upgrades. The older speakers will continue to function but can become vulnerable to hackers. Sonos offered owners of the older speakers a 30% discount on newer speakers.

It’s not unusual for older electronics to become obsolete and no longer be serviced by the manufacturer – it’s something we’re familiar with in the telecom industry. What is unusual is that Sonos told customers that they cannot sell their older speakers without permission from the company. Sonos has this ability because the speakers communicate with the Sonos cloud, and Sonos is not going to allow the old speakers to be registered by somebody else. If I were a Sonos customer, I would also assume this means the company is likely to eventually block old speakers from its cloud. The company’s notification told customers that their speakers are essentially worthless bricks. This is a shock to folks who spent a lot of money on top-of-the-line speakers.

There are numerous examples of similar incidents in the smart device industry. Google shut down the Revolv smart hub in 2016, making the device unusable. John Deere has the ability to shut off farm equipment costing hundreds of thousands of dollars if farmers use somebody other than John Deere for service. My HP printer gave me warnings that the printer would stop working if I didn’t purchase an HP ink-replacement plan.

This raises the question of whether consumers really own a device if the manufacturer, or some partner of the manufacturer, has the ability at some future time to shut the device down. Unfortunately, when consumers buy smart devices they never get any warning about the manufacturer’s right to kill the devices in the future.

I’m sure the buyers of the Sonos speakers feel betrayed. People likely expect decent speakers to last for decades. I have a hard time imagining somebody taking Sonos up on the offer to buy new speakers at a discount to replace the old ones, because in a few years the company is likely to obsolete the new speakers as well. We all have gotten used to the idea of planned obsolescence. Microsoft stops supporting older versions of Windows, and users continue to use the older software at their own risk. But Microsoft doesn’t shut down computers running old versions of Windows the way Charter is doing. Microsoft doesn’t stop a customer from selling a computer loaded with an old version of Windows to somebody else, as Sonos is doing.

These two examples provide a warning to consumers that smart devices might come with an expiration date. Any device that continues to interface with the original manufacturer through the cloud can be shut down. It would be an interesting lawsuit if a Sonos customer sued the company for essentially stealing their device.

It’s inevitable that devices grow obsolete over time. Sonos says the older speakers don’t contain enough memory to accept software updates. That’s probably true, but the company went way over the line when they decided to kill old speakers rather than let somebody sell them. Their actions tell customers that they were only renting the speakers and that they always belonged to Sonos.