Technology Right Around the Corner

Every once in a while I like to review technologies outside of telecom that are going to be impacting most of us in the near future. Today I’m writing about some technologies that seem likely to become commonplace within the next five years. Of course, as with any new innovation, the way these ideas are marketed and implemented will likely mean that some will become bigger than expected and others might fizzle.

Self-driving Trucks. It seems inevitable that we are going to eventually live in a world of smart cars that can drive themselves. But before we get to that place many industry experts believe that the first mass-adopted use of the new technologies will appear in long-haul trucking. The challenges for using self-driving trucks for local deliveries are a lot more complex and may not be solved until trucks are somehow paired with robots to load and unload local goods.

We spend a huge amount of money in this country moving things from one place to another, and our current system of using human drivers has some built-in inefficiencies. Trucking today is limited to a large extent by the number of hours a driver is allowed to drive per day under safety regulations. Self-driving trucks can drive around the clock and only need to stop occasionally to refuel. The combination of eliminating truck-driver salaries and extending the hours of daily drive time provides a huge economic incentive to make this work. There have already been trials of self-driving trucks. Another strategy being tried in Europe is to create truck convoys, with a live driver in the first truck leading a pack of self-driving trucks.
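To put rough numbers on that incentive, here is a back-of-the-envelope sketch. The 11-hour figure reflects current US hours-of-service rules; the average speed and the hours lost to refueling and inspections are my own illustrative assumptions, not industry data.

```python
# Back-of-the-envelope comparison of daily miles per truck. The 11-hour limit
# reflects current US hours-of-service rules; the speed and the hours lost to
# refueling/inspections are illustrative assumptions, not industry data.
AVG_SPEED_MPH = 55
HUMAN_DRIVE_HOURS = 11          # max daily driving hours for a human driver
AUTONOMOUS_DRIVE_HOURS = 22     # assumes ~2 hours/day lost to fuel and checks

human_miles = AVG_SPEED_MPH * HUMAN_DRIVE_HOURS
autonomous_miles = AVG_SPEED_MPH * AUTONOMOUS_DRIVE_HOURS

print(f"Human-driven truck: ~{human_miles:,} miles per day")
print(f"Self-driving truck: ~{autonomous_miles:,} miles per day "
      f"({autonomous_miles / human_miles:.1f}x), before counting driver wages")
```

Even with generous assumptions for the human driver, the autonomous truck roughly doubles the daily miles before the salary savings are even counted.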

Enhanced Vision. IBM predicts that soon there will be inexpensive technology available that will enable us to ‘see’ in a wide range of spectrum including microwaves, millimeter waves and infrared. There have been infrared goggles available for decades, but IBM says that there will be glasses or small handheld devices that will operate in a similar manner and that will let us see in these other frequencies.

This opens up a wide range of products that will let people see at night, will let cars see through fog and rain, and will let workers and technicians see their work environment in a different and useful manner. In telecom picture a technician able to ‘see’ a millimeter-wave microwave beam to more efficiently install receivers. Imagine linemen able to climb and fix aerial cables easily at night.

But the possibilities for better vision are immense. Imagine policemen knowing at a glance if somebody is carrying a concealed weapon. Or consider a metal worker who can ‘see’ flaws in metal work that are not detectable with normal light. And perhaps best of all, imagine being able to hike in the woods at night and see with the same clarity as in the daytime.

Practical Quantum Computers. These have been on many lists of future technologies, but it looks like 2017 is the year that is finally going to see some practical developments of this new technology. There have been tiny steps taken in the field, with D-Wave Systems of Canada now selling a precursor machine that uses a technology known as quantum annealing. But there is a lot of big money being put into the technology by Google, IBM, Microsoft and others that might soon lead to a working quantum computer, including the needed chips and the complex circuitry along with the needed control software.

The challenge in building workable quantum computers has been the fact that qubits – the basic unit of quantum information – are susceptible to interference. For qubits to work they must be able to achieve the dual states of quantum superposition (seeming to be in two physical states at the same time) and entanglement (the linking of a pair of qubits such that when something happens to one it simultaneously changes the paired qubit as well). The reward for making this work is the development of computers that far exceed the reach of today’s best supercomputers. Various scientists working in the field say that breakthroughs are imminent.
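For readers who want a more concrete picture of superposition and entanglement, here is a tiny state-vector sketch using nothing but numpy. It simulates two idealized qubits being placed into a Bell state; it is not a model of real hardware, which is exactly where the interference problem bites.

```python
import numpy as np

# Toy state-vector illustration of superposition and entanglement.
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)   # Hadamard gate: creates superposition
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]])                 # entangles qubit 2 with qubit 1

state = np.array([1, 0, 0, 0], dtype=complex)   # both qubits start in |00>
state = np.kron(H, np.eye(2)) @ state           # put qubit 1 into superposition
state = CNOT @ state                            # Bell state: (|00> + |11>) / sqrt(2)

# Simulated measurements: the two qubits always agree, never '01' or '10'.
probs = np.abs(state) ** 2
probs = probs / probs.sum()
outcomes = np.random.choice(['00', '01', '10', '11'], size=10, p=probs)
print(outcomes)
```

The simulated measurements only ever return ‘00’ or ‘11’ – the hallmark of an entangled pair – which is easy to do in software and fiendishly hard to keep stable in physical hardware.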

The Cell Atlas. There have been great strides over the last decades in deciphering DNA and other chemical processes within the human body. The next big challenge now being tackled is to create what is being called a cell atlas that will map all of the different types of cells in the human body. The goal is to understand in detail the exact function and location within the body of different kinds of cells as a way to understand how cells interact with each other. It’s a huge undertaking since the human body contains over 37 trillion cells. Teams of scientists in the US, the UK, Sweden, Israel, Japan, and the Netherlands are undertaking this task. They are planning to catalog the different kinds of cells, assign each a distinct molecular signature and then map each kind of cell in a three-dimensional map of the body.

Many of the kinds of cells in our bodies have been studied in detail. But scientists expect the mapping process to uncover many additional kinds of cells and to begin to let them understand the way that cells interface with the rest of the body. They are certain that this process will lead to many new discoveries and a far better understanding of the human body.

The process relies on three different technologies. The first is cellular microfluidics, which allows scientists to isolate and manipulate individual cells and perform detailed analysis. The second is new machines that can rapidly decode individual cells for just a few cents per cell; these machines can decode as many as 10,000 cells per day. Finally, there are new technologies that allow for labeling different kinds of cells on the basis of gene activity and for ‘mapping’ the location of each particular kind of cell within the body.

Wireless Networks Need Fiber

As I examine each of the upcoming wireless technologies it looks like future wireless technology is still going to rely heavily on an underlying fiber network. While the amount of needed fiber will be less than building fiber to every customer premises, supporting robust wireless networks is still going to require significant construction of new fiber.

This is already true today for the traditional cellular network and most existing towers are fiber-fed, although some have microwave backhaul. The amount of bandwidth needed at traditional cell sites is already outstripping the 1 or 2 Gbps capacity of wireless backhaul technologies. Urban cell sites today are fed with as much as 5 – 10 Gbps pipes and most rural ones have (or would like to have) a gigabit feed. I’ve seen recent contractual negotiations for rural cell sites asking for as much as 5 Gbps of backhaul within the next 5 – 10 years.

Looking at the specification for future 5G cell sites, fiber will soon be the only backhaul solution. The specification requires that a single cell site be capable of as much as 20 Gbps download and 10 Gbps upload. The cellular world is currently exploring mini-cell sites (although that effort has slowed down to some degree) due to the issues with placing these devices closer to customers. To be practical, these small cell sites must be placed on poles (existing or newly built), on rooftops and in other locations near areas with high usage demand. The majority of these small sites will require new fiber construction. Today these sites can probably use millimeter-wave radio backhaul, but as bandwidth needs increase, this is going to mean bringing fiber to poles and rooftops.

Millimeter wave radios are also being touted as a way to bring gigabit speeds to consumers. But delivering fast speeds means getting the radios close to customers. These radios use extremely high frequencies, so their signals travel only short distances. As a hot spot a millimeter-wave radio is only good for a little over 100 feet. Even when formed into a tight microwave beam the reach is a little over a mile – and that also requires true line-of-sight. These radios will be vying for the same transmitter locations as mini-cell sites.
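A quick calculation shows why the reach is so short. The sketch below applies the standard free-space path loss formula at a few illustrative frequencies and distances; real millimeter-wave links fare even worse once rain, foliage and oxygen absorption are added in.

```python
import math

def fspl_db(distance_m: float, freq_hz: float) -> float:
    """Free-space path loss in dB: 20*log10(d) + 20*log10(f) + 20*log10(4*pi/c)."""
    c = 299_792_458.0
    return (20 * math.log10(distance_m)
            + 20 * math.log10(freq_hz)
            + 20 * math.log10(4 * math.pi / c))

for freq_ghz in (0.7, 2.5, 28, 60):        # traditional cellular bands vs. millimeter wave
    for dist_m in (30, 500, 1600):         # ~100 feet, a city block, ~1 mile
        loss = fspl_db(dist_m, freq_ghz * 1e9)
        print(f"{freq_ghz:>5} GHz at {dist_m:>4} m: {loss:6.1f} dB path loss")
```

Moving from 700 MHz up to 60 GHz adds nearly 40 dB of loss at any given distance, which is the basic physics behind the 100-foot hot spot and the line-of-sight mile.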

Because of the short distances that can be covered by millimeter wave radios, this technology is going to initially be of most interest in the densest urban areas. Perhaps as the radios get cheaper there will be more of a model for suburban areas. But the challenge of deploying wireless in urban areas is that this is where fiber is the most expensive to build. It’s not unusual to see new fiber construction costs between $150,000 and $200,000 per mile in downtown areas. The urban wireless deployment faces the challenge of getting both fiber and power to poles, rooftops and sides of buildings. This is the issue that has already stymied the deployment of mini-cell sites, and it’s going to become more of an issue as numerous companies want to build competing wireless networks in our cities. I’m picturing the four major cellular companies and half a dozen wireless ISPs all wanting access to the same prime transmitter sites. All of these companies will have to deal with the availability of fiber, or will need to build expensive fiber to support their networks.

Even rural wireless deployments need a lot of fiber. A quality point-to-point wireless network today needs fiber at each small tower. When that fiber is available, current technologies can deliver speeds between 20 Mbps and 100 Mbps. But using wireless backhaul instead of fiber drastically cuts the performance of these networks, and there are scads of rural WISPs delivering bandwidth products of 5 Mbps or less. As the big telcos tear down their remaining rural copper, the need for rural fiber is going to intensify. But it is often difficult to justify the business case for building fiber to supply bandwidth to only a small number of potential wireless or wireline customers.

All of the big companies that are telling Wall Street about their shift to wireless technologies are conveniently not talking about this need for lots of fiber. But when they go to deploy these technologies on any scale they are going to run smack into the current lack of fiber. And until the fiber issue is solved, these wireless technologies are not going to deliver the kinds of speeds, or be available everywhere as quickly, as the many press releases and articles about our wireless future imply. I have no doubt that there will eventually be a lot of customers using wireless last mile – but only after somebody first makes the investment in the fiber networks needed to support the wireless networks.

Ready or Not, IoT is Coming

We are getting very close to the time when just about every appliance you buy is going to be connected to the IoT, whether you want it or not. Chips are getting so cheap that manufacturers are soon going to see the benefits of adding them to most things that you buy. While this will add some clear benefits for consumers, it also brings new security risks.

IoT in everything is going to redefine privacy. What do I mean by that? Let’s say you buy a new food processor. Even if the manufacturer doesn’t make the device voice-controlled they are going to add a chip. That chip is going to give the manufacturer the kind of feedback they never had before. It’s going to tell them everything about how you use your food processor – how long before you take it out of the box, how often you use it, how you use the various settings, and if the device has any problems. They’ll also be able to map where all of their customers are, but more importantly they will know who uses their food processor the most. And even if you never register the device, with GPS they are going to know who you are.
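To make that concrete, here is the kind of usage report such an appliance might phone home with. The field names, model number and values are all invented for illustration – the point is how much a manufacturer learns from even a minimal payload.

```python
import json
import time
import uuid

# A hypothetical usage-telemetry payload a connected appliance might upload to
# its manufacturer. Every field name and value here is invented for illustration.
telemetry = {
    "device_id": str(uuid.uuid4()),            # unique per unit, even if never "registered"
    "model": "FP-900",                          # hypothetical food processor model
    "firmware": "1.2.3",
    "reported_at": int(time.time()),
    "location": {"lat": 38.90, "lon": -77.03},  # from GPS or network geolocation
    "usage": {
        "days_until_first_use": 3,
        "sessions_last_30_days": 14,
        "settings_histogram": {"pulse": 9, "chop": 4, "puree": 1},
        "error_codes": [],
    },
}

payload = json.dumps(telemetry)
print(payload)  # in a real device this would be sent to the manufacturer's cloud
```

A report this small, sent once a day, is enough to build exactly the usage and location profile described above.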

Picture that same thing happening with everything you buy. Remember that Tostitos just found it cost-effective to add a chip to a million bags of chips for the recent Super Bowl. So chips might not just be added to appliances, but could be built into anything where the manufacturer wants more feedback about the use of their product.

Of course, many devices are going to go beyond this basic marketing feedback and will also include interactions of various kinds with customers. For instance, it shouldn’t be very long until you can talk to that same food processor through your Amazon Alexa and tell it what you are making. It will know the perfect settings to make your guacamole and will help you blend a perfect bowlful. Even people who are leery of home automation are going to find many of these features to be too convenient to ignore.

There is no telling at this early stage which IoT applications will be successful. For instance, I keep hearing every year about smart refrigerators and I can’t picture that ever fitting into my lifestyle. But as with any consumer product, the public will quickly pick the winners and losers. When everything has a chip that can communicate with a whole-house hub like Alexa, each of us will find at least a few functions we love so much that we will wonder how we lived without them.

But all of this comes with a big price. The big thing we will be giving up is privacy. Not only will the maker of each device in our house know how we use that device, but anybody that accumulates the feedback from many appliances and devices will know a whole lot more about us than most of us want strangers to know. If you are even a little annoyed by targeted marketing today, imagine what it’s going to be like when your house is blaring everything about you to the world. And there may be no way to stop it. The devices might all talk to the cellular cloud and be able to bypass your home WiFi and security – that’s why both AT&T and Verizon are hyping the coming IoT cloud to investors.

There is also the added security risk of IoT devices being used in nefarious ways. We’ve already learned that our TVs and computers and other devices in the house can listen to all of our private conversations. But even worse than that, devices that can communicate with the world can be hacked. That means any hacker might be able to listen to what is happening in your home. Or it might mean a new kind of hacking that locks and holds your whole house and appliances hostage for a payment like happens today with PCs.

One of the most interesting things about this is that it’s going to happen to everybody unless you live in some rural place out of range of cell service. Currently we all have choices about letting IoT devices into our house, and generally only the tech savvy are using home automation technology. But when there are chips embedded in most of the things you buy it will spread IoT to everybody. It’s probably going to be nearly impossible to neutralize it. I didn’t set out to sound pessimistic in writing this blog, but I really don’t want or need my toaster or blender or food processor talking to the world – and I suspect most of you feel the same way.

More on 5G Standards

I wrote a blog last week about the new 5G standard being developed by the International Telecommunication Union (ITU). This standard is expected to be approved this November. However, this standard is not the end of the standards process, but rather the beginning. The ITU IMT-2020 standard sets out the broad targets that define a fully developed 5G product. Basically it’s the wish list, and a fully compliant 5G product will meet the full standard.

But within 5G there are already a number of specific use cases being developed. The most immediate three are eMBB (enhanced mobile broadband, or better-functioning cellphones), URLLC (ultra-reliable low-latency communications, which enhances data connectivity) and mMTC (massive machine-type communications, to communicate with hordes of IoT devices). Each use case requires a unique set of standards to define how those parts of the 5G network will operate. And there will be other use cases.

The primary body working on these underlying standards is the 3GPP (3rd Generation Partnership Project). This group brings together seven other standards bodies – ARIB, ATIS, CCSA, ETSI, TSDSI, TTA, TTC – which demonstrates how complicated it is to develop a new wireless technology that will be accepted worldwide. I could talk about what each group does, but that would take a whole blog. Each standards group looks at specific aspects of radio communications, such as the modulation schemes to be used or the format of information to be passed so that devices can talk to each other. But the involvement of this many different standards groups explains a bit about why it takes so long to go from a new technology concept like 5G to functioning wireless products.

There is currently a lot of work being done to create the specific standards for different portions of a 5G network. This includes the Radio Access Network (RAN), Services and System Aspects (SA) and Core Network and Terminals (CT).

The 5G RAN group, which looks at radio architecture, began work in 2015. Their first phase of work (referred to as Release 15) is looking at both the eMBB and the URLLC use cases. The goal is to define the specific architecture and feature set that is needed to meet the 5G specification. This first phase is expected to be finished in the fourth quarter of 2018. The 5G RAN group is also working on Release 16, which looks more specifically at getting radios that can comply with all of the aspects of IMT-2020 and is targeted to be completed in December of 2019.

The 5G SA group has already been actively working on the services and systems aspects of 5G. The preliminary work from this group was finished last year and their phase 1 work was just given final approval at the Mobile World Congress. But the SA group and the RAN group worked independently, and it’s expected that there will be work to do at the end of each phase of the RAN group’s effort to bring the two groups into sync.

The work on the core network has begun with some preliminary testing and concepts, but most of their work can’t be started until the RAN group finishes its work in 2018 and 2019.

The reason I am writing about this is to demonstrate the roadblocks that still remain to rolling out any actual 5G products. Manufacturers will not commit to making any mass-produced hardware until they are sure it’s going to be compatible with all parts of the 5G network. And it doesn’t look like any real work can be done in that area until about 2020.

Meanwhile there is a lot of talk from AT&T, Verizon and numerous vendors about 5G trials, and these press releases always make it sound like 5G products will quickly follow these trials. But for the most part these trials are breadboard tests of some of the concepts of the 5G architecture. These tests provide valuable feedback on problems that develop in the field and on what works and doesn’t work.

And these companies are also making 5G claims about some technologies that aren’t really 5G yet. Most of the press releases these days are talking about point-to-point or point-to-multipoint radios using millimeter wave frequencies. But in many cases these technologies have been around for a number of years and the ‘tests’ are attempts to use some of the 5G concepts to goose more bandwidth out of existing technology.

And that’s not a bad thing. AT&T, Verizon, Google and Starry, among others, are looking for ways to use high-bandwidth wireless technologies in the last mile. But as you can see by the progress of the standards groups defining 5G, the radios we see in the next few years are not going to be 5G radios, no matter what the marketing departments of those companies call them.

AT&T’s Broadband Trials

John Donovan, the chief strategy officer for AT&T, spoke at the Mobile World Congress recently and said that the company was trying five different technologies for the last mile. This includes WLL (wireless local loop), G.Fast, 5G, AirGig and fiber-to-the-premises. He said the company would be examining the economics of all of the different technologies. Let me look at each one in relation to AT&T.

Wireless Local Loop (WLL). The technology uses the company’s LTE spectrum but in a point-to-multipoint network configuration. By using a small dish on the house to receive the signal, the company is getting better bandwidth than can be received from normal broadcast cellular. The company has been doing trials on various versions of the technology for many years. But there are a few recent trials of the newest technology, which AT&T will be using for much of its deployment in rural America as part of the CAF II plan. That plan requires the ISP to deliver at least 10/1 Mbps. AT&T says that the technology is delivering speeds of 15 to 25 Mbps. The company says that even at the edge of a cellular network a customer can get 10 Mbps about 90% of the time.

G.Fast. This is a technology that uses high frequencies to put more bandwidth on telephone copper wire. Speeds are reported to be as high as 500 Mbps, but only for very short distances under 200 feet. AT&T recently announced a G.Fast trial in an apartment building in Minneapolis. The technology is also being tested by CenturyLink and Windstream. All of these trials are using existing telephone copper inside of existing apartment buildings to deliver broadband. So this is not really a last mile technology. AT&T brings fiber to the apartment complex and then uses G.Fast as an inside wire technology. If they find it to be reliable this would be a great alternative to rewiring apartments with fiber.

5G. AT&T recently announced a few trials of early 5G technologies in Austin. They are looking at several technology ideas such as carrier aggregation (combining many frequencies). But these are just trials, and AT&T is one of the companies helping to test pre-5G ideas as part of the worldwide effort to define the 5G specifications. These are not tests of market-ready technologies, but are instead field trials for various concepts needed to make 5G work. There is no doubt that AT&T will eventually replace LTE wireless with 5G wireless, but that transition is still many years in the future. The company is claiming to be testing 5G for the press release benefits – but these are not tests of a viable last mile technology, just tests that are moving lab concepts to early field trials.

AirGig. This one remains a mystery. AT&T says it will begin trialing the technology later this year with two power companies. There has been a little bit of clarification of the technology since the initial press release. This is not a broadband over powerline technology – it’s completely wireless and uses the open lines-of-sight on top of power poles to create a clear path for millimeter wave radios. The company has also said that they don’t know yet which wireless technology will be used to go from the poles into the home – they said the whole range of licensed spectrum is under consideration, including the LTE frequencies. And if that’s the case, then AirGig is a fiber replacement, but the delivery to homes would be about the same as WLL.

FTTP. Donovan referred to fiber-to-the-home as a trial, but by now the company understands the economics of fiber. The company keeps stretching the truth a bit about its fiber deployments, saying that it has deployed fiber to 4 million homes, with 8 million more coming in the next three years. But the fact is they have actually only passed the 4 million homes that they can market to, as disclosed on their own web site. The twelve million home target was something that was dictated by the FCC as part of the settlement allowing the company to buy DirecTV.

We don’t know how many fiber customers AT&T has. They are mostly marketing this to apartment buildings, although there are residential customers around the country saying they have it. But they have not sold big piles of fiber connections like Verizon FiOS. This can be seen by looking at the steady drop in total AT&T data customers – 16.03 million in 2014, 15.78 million in 2015 and 15.62 million at the end of the third quarter of 2016. AT&T’s fiber is not really priced to be super-competitive, except in markets where they compete with Google Fiber. Their normal prices elsewhere on fiber are $70 for 100 Mbps, $80 for 300 Mbps and $99 for a gigabit.

Standards for 5G

Despite all of the hype that 5G is right around the corner, it’s important to remember that there is not yet a complete standard for the new technology.

The industry just took a big step on February 22 when the ITU released a draft of what it hopes is the final specification for 5G. The document is heavy in engineering detail and is not written for the layman. You will see that the draft talks about a specification for ‘IMT-2020’ which is the official name of 5G. The goal is for this draft to be accepted at a meeting of the ITU-R Study Group in November.

This latest version of the standard defines 13 metrics that are the ultimate goals for 5G. A full 5G deployment would include all of these metrics. What we know we will see is commercial deployments from vendors claiming to have 5G, but which actually meet only some parts of a few of these metrics. We saw this before with 4G, where the recent deployment of LTE-Advanced is the first 4G product that actually meets most of the original 4G standard. We probably won’t see a cellular deployment that meets any of the 13 5G metrics until at least 2020, and it might be five to seven more years after that until fully compliant 5G cellular is deployed.

The metric that is probably the most interesting is the one that establishes the goal for cellular speeds. The goals of the standard are 100 Mbps download and 50 Mbps upload. Hopefully this puts to bed the exaggerated press articles that keep talking about gigabit cellphones. And even should the technology meet these target speeds, in real life deployment the average user is probably only going to receive half those speeds due to the fact that cellular speeds decrease rapidly with distance from a cell tower. Somebody standing right next to a cell tower might get 100 Mbps, but even as close as a mile away the speeds will be considerably less.

Interestingly, these speed goals are not much faster than what LTE-Advanced is delivering today. But the new 5G standard should provide for more stable and guaranteed data connections. The standard calls for a 5G cell site to be able to connect to up to 1 million devices per square kilometer (a little more than a third of a square mile). This, plus several other metrics, ought to result in stable 5G cellular connections – quite different from what we are used to with 4G connections. The real goal of the 5G standard is to provide connections to piles of IoT devices.
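For anyone who wants to check the math on that density target, the conversion is simple arithmetic:

```python
# Unit-conversion check on the 5G connection-density target (pure arithmetic).
KM2_PER_MI2 = 1.609344 ** 2        # ~2.59 square kilometers per square mile

devices_per_km2 = 1_000_000
devices_per_mi2 = devices_per_km2 * KM2_PER_MI2

print(f"1 square kilometer = {1 / KM2_PER_MI2:.2f} square miles")     # ~0.39
print(f"Target density: ~{devices_per_mi2:,.0f} devices per square mile")
```

That works out to roughly 2.6 million devices per square mile – a number that only makes sense in a world of ubiquitous IoT.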

The other big improvement over 4G is the expectation for latency. Today’s 4G connections have data latencies as high as 20 ms, which accounts for most of the problems in loading web pages or watching video on cellphones. The new standard is 4 ms latency, which would improve cellular latency to around the same level that we see today on fiber connections. The new 5G standard for handing off calls between adjoining cell sites is 0 ms, or zero delay.
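A rough back-of-the-envelope shows why latency matters so much for web browsing. If a page load requires a few dozen sequential round trips (DNS, connection setup, the HTML, then its assets) – an illustrative assumption, since real pages vary widely – the waiting adds up quickly:

```python
# Back-of-the-envelope on why latency dominates cellular page loads: a page that
# needs many sequential round trips pays the network latency over and over.
# The round-trip count below is an illustrative assumption.
SEQUENTIAL_ROUND_TRIPS = 40

for label, latency_ms in (("4G (~20 ms)", 20), ("5G target (4 ms)", 4)):
    wait_ms = SEQUENTIAL_ROUND_TRIPS * latency_ms
    print(f"{label}: ~{wait_ms} ms spent waiting on latency alone")
```

Under those assumptions the latency penalty drops from roughly 800 ms to 160 ms per page, before any change in raw speed.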

The standard increases the potential capacity of cell sites and sets a goal for a cell site to be able to process peak data rates of 20 Gbps down and 10 Gbps up. Of course, that means bringing a lot more bandwidth to cell towers, and only extremely busy urban towers will ever need that much capacity. Today the majority of fiber-fed cell towers are fed with 1 Gbps backbones that are used to satisfy upload and download combined. We are seeing cellular carriers inquiring about 10 Gbps backbones, and we need a lot more growth to meet the capacity built into the standard.

There are a number of other standards. Included is a standard requiring greater energy efficiency, which ought to help save on handset batteries – the new standard allows for handsets to go to ‘sleep’ when not in use. There is a standard for peak spectral efficiency, which would enable 5G to make much better use of existing spectrum. There are also specifications for mobility that extend the goal to working with vehicles going as fast as 500 kilometers per hour – meaning high-speed trains.

Altogether the 5G standard improves almost every aspect of cellular technology. It calls for more robust cell sites, improved quality of the data connections to devices, lower energy requirements and more efficient hand-offs. But interestingly, contrary to the industry hype, it does not call for gigantic increases in cellular handset data speeds compared to a fully-compliant 4G network. The real improvements from 5G are to make sure that people can get connections at busy cell sites while also providing for huge numbers of connections to smart cars and IoT devices. A 5G connection is going to feel faster because you ought to almost always be able to make a 5G connection, even in busy locations, and because the connection will have low latency and be stable, even in moving vehicles. It will be a noticeable improvement.

A New WiFi Standard

There is a new version of WiFi coming soon that ought to solve some of the major problems with using WiFi in the home and in busy environments. The new standard has been labeled as 802.11ax and should start shipping in new routers by the end of this year and start appearing in devices in early 2018.

It’s the expected time for a new standard since there has been a new one every four or five years. 802.11a hit the market in 1999, 802.11g in 2003, 802.11n in 2009 and 802.11ac in 2013.

One of the most interesting things about this new standard is that it’s a hardware upgrade and not a real change in the standards. It will be backwards compatible with earlier versions of 802.11, but both the router and the end devices must be upgraded to use the new standard. This means that business travelers are going to get frustrated when visiting hotels without the new routers.

One improvement is that the new routers will treat the 2.4 GHz and 5 GHz bands as one big block of spectrum, making it more likely to find an open channel. Most of today’s routers make you pick one band or the other.

Another improvement in 802.11ax is that the routers will have more antennas in the array, making it possible to connect with more devices at the same time. It’s also going to use MIMO (multiple-input, multiple-output) antenna arrays, allowing it to identify individual users and to establish fixed links to them. A lot of the problems in current WiFi routers come when routers get overwhelmed with more requests for service than the number of antennas that are available.

In addition to more signal paths, the biggest improvement will be that the new 802.11ax routers will be able to better handle simultaneous requests for use of a single channel. The existing 802.11 standards are designed to share spectrum, and when a second request is made to use a busy channel, the first transmission is stopped while the router decides which stream to satisfy – and this keeps repeating as the router bounces back and forth between the two users. This is not a problem when there are only a few requests for simultaneous use, but in a crowded environment the constant stopping and starting of the router results in a lot of the available spectrum going unused and in nobody receiving a sustained signal.

The new 802.11ax routers will use OFDMA (orthogonal frequency division multiple access) to allow multiple users to simultaneously use the same channel without the constant stopping and starting at each new request for service. A hotel with a 100 Mbps backbone might theoretically be able to allow 20 users to each receive a 5 Mbps stream from a single WiFi channel. No wireless system will be quite that efficient, but you get the idea. A router with 802.11ax can still get overwhelmed, but it takes a lot more users to reach that condition.
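Here is that hotel example as simple arithmetic, with an assumed efficiency factor added since no real radio reaches its theoretical ceiling (the 70% figure is just an illustrative guess).

```python
# The hotel example from above, with an assumed efficiency factor since no real
# radio hits the theoretical ceiling. The 70% figure is an illustrative guess.
backbone_mbps = 100
users = 20
efficiency = 0.70

ideal_per_user = backbone_mbps / users
realistic_per_user = backbone_mbps * efficiency / users

print(f"Ideal OFDMA split:       {ideal_per_user:.1f} Mbps per user")
print(f"With {efficiency:.0%} efficiency:     {realistic_per_user:.1f} Mbps per user")
```

Even at well under perfect efficiency, every user gets a steady stream instead of the feast-or-famine behavior of a contended channel.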

We’ll have to wait and see how that works in practice. Today, if you visit a busy business hotel where there might be dozens of devices trying to use the bandwidth, the constant stopping and starting of the WiFi signal usually results in a large percentage of the bandwidth not being given to any user – it’s lost during the on/off sequences. But the new standard will give everybody an equal share of the bandwidth until all of the bandwidth is used or until it runs out of transmitter antennas.

The new standard also allows for scheduling connections between the router and client devices. This means more efficient use of spectrum since the devices will be ready to burst data when scheduled. It will also allow devices like cellphones to save battery power by ‘resting’ when not transmitting, since they avoid making unneeded requests for connection.

All of these changes also mean that the new routers will use only about one-third the energy of current routers. Because the router can establish fixed streams with a given user it can avoid the constant on/off sequences.

The most interesting downside to the new devices will be that their biggest benefits only kick in when most of the connected devices are using the new standard. This means that the benefits on public networks might not be noticeable for the first few years until a significant percentage of cellphones, tablets, and laptops have been upgraded to the new standard.

Lidar

There has been a mountain of articles about self-driving cars, but little discussion about how they see the world around them. The ability of computers to understand images is still in its infancy – in 2015 there was a lot of talk about how Google was teaching an AI program how to recognize cats within videos.

But obviously a self-driving car has to do a lot better than just ‘seeing’ around it – it needs to paint a 3D picture of everything around it in order to navigate correctly and to avoid problems. It turns out that the primary tool used by self-driving cars is called “Lidar.” Lidar stands for ‘light detection and ranging’ and fits neatly between sonar and radar.

Lidar works by sending out light beams and measuring how long it takes for reflected signals to return, much the same way that a bat sees the world using sonar. Sonar would be fairly useless in a self-driving car since sound waves get distorted in air and only paint an accurate picture for perhaps a dozen feet from the transmitter. That’s great for a bat catching a moth, but not useful for seeing oncoming traffic.

And the radio waves used in radar won’t really work well for self-driving cars. Radar works great for seeing objects far away, like metallic airplanes. But the radio waves pass through many objects (like people) meaning that radar doesn’t create a total picture of the world around it. And radar has problems creating an accurate picture of anything closer than 100 feet.

And that’s where lidar comes in. A lidar device works much like a big radar dish at an airport. It rotates and sends out light signals (actually infrared light signals) and then collects and analyzes the returning echoes to create a picture of the distances to objects around it. Lidar only became practical with modern computer chips which allow the transmitter to ‘rotate’ hundreds of times a second and which possess enough computing power to make sense of the echoed light waves.
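The distance math itself is simple – the hard part is doing it millions of times per second and assembling the results into a picture. The sketch below shows the basic time-of-flight calculation with a few illustrative echo delays.

```python
# Time-of-flight calculation at the heart of lidar: distance = c * t / 2,
# since the echo travels out and back. The echo delays below are illustrative.
C = 299_792_458.0  # speed of light in meters per second

def distance_m(round_trip_seconds: float) -> float:
    return C * round_trip_seconds / 2

for t_ns in (67, 200, 667):  # echo delays in nanoseconds
    print(f"echo after {t_ns:>4} ns -> object at ~{distance_m(t_ns * 1e-9):6.1f} m")
```

An object 100 meters away returns its echo in well under a microsecond, which is why the sensor needs fast modern chips to time and process millions of pulses per second.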

And so a self-driving car doesn’t ‘see’ at all. The cars do not rely on standard cameras that try to make sense of the reflected ambient light around the car. The first prototypes of driverless cars tried to do this and could not process or make sense of images fast enough. Instead, self-driving cars send out laser light at a specific frequency and then calculate the distance the light travels in every direction to create a picture of the world.

If you want to understand more about what this looks like, consider this Radiohead music video. Most of the images in the video were created with lidar. Don’t pay too much attention to the opening headshots because those are somewhat distorted for artistic effect. But the later images of streets show you the detail of a lidar image. Unlike the normal images our eyes see, a lidar image is massively more detailed in that the distance to everything in such a picture is known. Our eyeballs basically see in 2D and we use images from two eyes to simulate 3D. But a lidar image is fully 3D and gets full perspective from one transmitter.

Lidar does have limitations. It can be ‘blinded’ by heavy snow and rain. Lidar could be jammed by somebody transmitting a bright signal using the same light frequencies. And so smart cars don’t rely 100% on lidar, but also use traditional cameras and ultrasonic sonar to complement the lidar images.

Lidar is finding other uses. It’s being used, for example, in helicopters to search for things on the ground. A lidar system can spot a fleeing criminal or a lost child in the woods far more easily than older technologies or human eyeballs. Lidar can also create amazingly detailed images of anything. Archeologists are using it to create permanent images of dig sites during various stages of excavation before objects are removed. It’s not hard to imagine that within a few years many traditional surveying techniques will be obsolete and that lidar will be able to locate and plot everything on a building lot, for example, down to the millimeter.

The Limitations of Cellular Data

It’s hard these days to find anybody who is satisfied with the quality of data received over cellphones. A research report published by Aptelligent late last year showed that the US placed 10th in the world in overall cellular network performance, measured by the combination of reliability and speed. We all know that sometimes cellphone data is adequate, but it can suddenly deteriorate to where you can’t load simple web pages. There are a number of factors baked into the cellular architecture that contribute to data performance. Following are a few of the key factors:

Data Power Drop-off. Cellular networks, by design, assume a massive drop-off of data speeds with distance. I don’t think most people understand how drastic the power curve is. Cellular companies show us bars to indicate the strength of our connections – but these bars are not telling us the true story. The cellular architecture has a 100:1 data rate ratio from the cell tower to the edge of the delivery area (generally a few miles). To provide an example, this means that if a cell site is designed to deliver 10 Mbps at the cell tower, it will deliver only 1 Mbps at the mid-point of the cell tower’s range and only 0.1 Mbps at the edge.
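As a rough illustration, the toy model below reproduces those 10 / 1 / 0.1 Mbps figures by assuming the data rate decays exponentially with the fraction of the cell radius traveled – real propagation is messier, as the next factors describe.

```python
# Toy model matching the 100:1 example above: data rate decays exponentially
# with the fraction of the cell radius. This reproduces the 10 / 1 / 0.1 Mbps
# figures; real-world propagation is far messier.
RATE_AT_TOWER_MBPS = 10.0
EDGE_RATIO = 100.0   # tower-to-edge data rate ratio

def rate_mbps(fraction_of_radius: float) -> float:
    return RATE_AT_TOWER_MBPS * EDGE_RATIO ** (-fraction_of_radius)

for frac in (0.0, 0.25, 0.5, 0.75, 1.0):
    print(f"{frac:.0%} of the way to the cell edge: {rate_mbps(frac):5.2f} Mbps")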

Shape of the Cellular Footprint. It’s easy to think that there are nice concentric circles of cellphone signal propagating around cell towers. But nothing could be farther from the truth. If you go around any cell site and measure and plot the strength of signals you will find that the footprint of a cell tower looks a lot more like an amoeba, with the signal in some directions traveling a relatively short distance while in others it might travel much farther. If these footprints were static then engineers could design around the vagaries at a given cell site. But the footprint can change quite dramatically according to temperature, humidity and even the number of users concentrated in one portion of the footprint. This is why the delivery of broadcast wireless services is always going to be more an art than a science, because the delivery footprint is constantly shifting, in many cases dramatically.

Proliferation of Antennas. Modern cellular networks have improved performance by significantly increasing the number of transmitting antennas on a cell tower (and also more receiving antennas in cell phones). This use of MIMO (multiple-input, multiple-output) has produced a significant improvement for customers who are able to gain simultaneous signal from more than one transmitter. But there are two consequences of MIMO that actually decrease performance for some users. First, MIMO largely benefits those that are closest to the cell tower, and that means there are fewer quality connections available for those farther away from the cell tower. Second, MIMO works best using cellular channels that are not adjacent, and during times of heavy cellular usage this has the result of improving the signal strength in the MIMO channels while decreasing the strength of the other channels, again decreasing quality for customers that grab the weaker channels.

Impaired Hand-offs. Mobility is enabled in a cellular network when a customer is handed off from one cell site to the next while traveling. MIMO and other techniques that increase the signal to a given customer then make it more difficult for that customer to be handed to the next cell site. Hand-offs were relatively error-free when customers received a single-channel signal from one transmitter, but now the quality of hand-offs from one cell site to another can vary dramatically, resulting in more disconnects or drastic swings in the strength of connections.

Small-Cell Issues. All of the above issues will be compounded by the introduction of small-cells into the cellular network. In today’s cellular architecture a customer can only be handled by one cell tower at a time. Cellular networks don’t automatically assign the strongest connection to a customer, but rather the nearest available one. While small-cells will increase the opportunity to get a signal in a crowded environment, it also increases the chance of getting a poor connection, or of running into hand off issues for mobile customers.

2D Signal Propagation. Cell tower antennas are largely aimed to transmit close to the ground and do not waste power by sending signals upwards in a 3D pattern. Anybody who has traveled to a big city and received a poor signal on an upper floor of a tall hotel is familiar with this issue. The cellular signals are focused towards street level and not towards anybody higher. That’s not to say that you can’t get a cellular connection at the top of a highrise, or even in an airplane, but the vast majority of the connections (and the strongest connections) are aimed downward.

Crisis Propagation. Cell towers are arranged as an interconnected mesh. When something drastic happens to a given cell tower, such as losing power or being swamped with calls during an emergency, this not only shuts down the tower with a problem, but the problem cascades to nearby towers, often taking them out of service as well. This is similar to a rolling blackout in an electric grid. Carriers have been working on load balancing techniques to try to tamp down this problem, but it’s still relatively easy for a given cell tower to get overwhelmed and start a neighborhood and even regional cascade.
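To illustrate the mechanism, here is a toy cascade model. The four-tower topology, capacities and loads are invented purely to show how one overloaded site can take its neighbors with it.

```python
# Toy cascade model: when a tower fails, its traffic shifts to its live neighbors,
# which can push them over capacity and fail in turn. The topology and numbers
# are invented purely to illustrate the mechanism.
capacity = {"A": 100, "B": 100, "C": 100, "D": 100}
load     = {"A": 90,  "B": 70,  "C": 60,  "D": 40}
neighbors = {"A": ["B", "C"], "B": ["A", "C", "D"],
             "C": ["A", "B", "D"], "D": ["B", "C"]}

failed = set()

def fail(tower: str) -> None:
    if tower in failed:
        return
    failed.add(tower)
    live = [n for n in neighbors[tower] if n not in failed]
    for n in live:
        load[n] += load[tower] / len(live)   # displaced traffic spreads to live neighbors
    for n in live:
        if load[n] > capacity[n]:            # overloaded neighbor fails too
            fail(n)

fail("A")   # an emergency call surge knocks out tower A
print("Failed towers:", sorted(failed))
```

In this invented example a single failure rolls through the whole neighborhood, which is exactly the rolling-blackout behavior that load balancing tries to prevent.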

These issues all outline how complicated it is to design a great cellphone network. The above issues are worsened by the fact that in the US our cell sites were largely placed years ago to accommodate voice traffic and thus are not situated to provide optimum data coverage. But even a brand new cellular network designed to optimize data traffic would run into these same issues, or different ones. It’s nearly impossible to design a cellular network that can handle all of the issues encountered in the real world. This makes me glad I’m not a cellular engineer.

The Transition to IP Telephony

AT&T reported to the FCC about the progress of its transition of customers from a traditional TDM network to an all-IP network. AT&T had undertaken two trials of such a conversion in Carbon Hill, AL and Delray Beach, FL.

These were voluntary trials. AT&T had advertised widely and asked customers to move to the new IP-based services. In Carbon Hill 36% of residents and 28% of businesses voluntarily moved to the new service. In Delray Beach the numbers were similar with 38% and 25% converting. AT&T reported there were no reports of degraded service, including the transition of business customers to IP-based Centrex and similar services.

Since the trials were announced AT&T has also grandfathered Centrex and TV1-Analog Video service, meaning they will take no new orders for those services. The company also asked the FCC’s permission to discontinue 13 legacy services that are obsolete. This includes products that most people have never heard of, like 4-wire and voice-grade telemetry and various alarm bridging services. The company has also asked permission to discontinue six operator services including collect calling, person-to-person calling, billed-to-third-party calling, busy line verification, busy line interrupt and international directory assistance.

These trials need to be put into context. From a technical perspective there is no reason to think that transitioning these services from TDM to IP-based technology wouldn’t work, because much of the rest of the telephony world made that transition years ago. Cable companies like Comcast, and anybody operating on an all-fiber network, have been offering IP-based telephone products for many years. AT&T’s offerings include many products that are strictly copper-based, such as the legacy products they want to discontinue.

And that leads to the whole purpose behind these trials. AT&T wants to move customers off old copper networks to either a landline or wireless IP-based solution. Since the company’s goal is to tear down copper, the vast majority of such transitions will be to the company’s cellular network. A minuscule percentage of AT&T’s customers are on fiber – particularly residential customers, since the company has launched very little FTTP in that market.

The trials are largely the result of what happened to Verizon on Fire Island a few years ago after Hurricane Sandy. There Verizon didn’t replace destroyed copper but moved people to a cellular-based service. But unlike these trials, which were meticulously slow and careful, it seems that in many of the Fire Island cases Verizon did not offer equivalent services to what they had offered before the hurricane. Apparently things like burglar alarms, medical monitoring devices, and other services didn’t work on the new wireless connections.

The FCC has already granted these big telcos the ability to tear down copper as long as they follow customer notification processes. My guess is that after these trials are blessed by the FCC that the companies will begin ripping down rural copper all over the country.

I expect that many customers are going to be unhappy when they lose their copper. Anybody who has traveled in rural areas understands that cellular coverage is often spotty, or even non-existent. Customers are worried about being cut off from telephony services inside their homes. It’s a legitimate concern for somebody with poor cellular service and with little or no broadband options, like we see in millions of rural homes and businesses.

But the time is coming soon when these transitions will not be voluntary like they were in these two communities. The big telcos will issue the legally required notices, and then they will proceed to shut off and tear down the copper. In doing so they will have undone the FCC’s original goal, set by the Communications Act of 1934, which was to make telephone service available everywhere. There are now going to be homes and communities that are cut off from a workable alternative to make reliable voice calls.

I honestly never thought I’d see this happen. But I guess it was the pretty obvious end game after it became clear decades ago that the big telcos were not going to properly maintain their rural copper networks. We aren’t too far from the day when copper telephone networks join the list of other technologies that outlived their usefulness and are a thing of the past – at least for the giant telcos. There are still other companies like Frontier and Windstream that are fighting to extend the life of their copper, but we’ll have to see what the future holds for them and their customers.