The Industry

Apple Buys into 5G

Apple is coming out with a full range of new 5G iPhones. The phones have been designed to use the full range of new frequencies that the various cellular companies are touting as 5G, up to and including the millimeter-wave spectrum offered in center cities by Verizon. In addition to 5G, the phones have new features like a better camera, easier wireless charging, and a lidar scanner. The last is the most revolutionary, since lidar allows apps on the phone to better see and react to the surrounding environment.

But Apple is going all-in on the 5G concept. It’s a natural thing to do since cellular carriers have been talking non-stop about 5G for the last few years. However, by heavily advertising the new phones as 5G-capable, Apple is possibly setting itself up to bear the brunt of consumer dissatisfaction when the public realizes that what’s being sold as 5G is just a repackaged version of 4G. The new features from an upgrade in cellular specifications get rolled out over a decade, as we saw with the transition from 3G to 4G. In terms of the improvements in these new phones, we’re probably now at 4.1G, which is a far cry from what 5G will be like in ten years.

What I find most disturbing about the whole 5G phenomenon is that the cellular companies have essentially sold the public on the advantages of faster cellular speeds without anybody ever asking the big question of why cellphones need faster speeds. A cellphone is, by definition, a single-user device. The biggest data application most people ever use on a cellphone is watching video. If a 4G phone is sufficient for watching video, then what’s the advantage of spending a lot of money to upgrade to 5G? A home needs fast broadband to allow multiple people to use the connection at the same time, but that isn’t true for a cellphone.

People do get frustrated with smartphones that get poor coverage inside big buildings, in elevators, in the inevitable cellular dead zones in every town, or in rural areas too far from cell towers. 5G phones won’t fix any of these problems because poor cellular coverage happens in places that naturally block or can’t receive wireless signals. No technology can make up for a lack of wireless signal.

The big new 5G feature in the iPhones is the ability to use all of the different frequencies that the cellular companies are now transmitting. However, these frequencies aren’t additive – if somebody grabs a new ‘5G’ frequency, the bandwidth on that frequency doesn’t add to what they were receiving on 4G. Instead, the user gets whatever bandwidth is available on the new spectrum channel. In many cases, the new 5G frequencies are lower than traditional cellular frequencies, so data speeds are going to be a little slower.

The cellular companies are hoping that Apple is successful. The traditional frequencies used for 4G have been getting crowded, particularly in urban areas. Cellular data traffic has been growing at the torrid pace of 24% per year, and the traditional cellular network using big towers is getting overwhelmed.

Cellular companies have been trying to offload the 4G traffic volumes from the traditional cellular networks by opening up thousands of small cell sites. But their biggest hope for relieving 4G was to open up new bands of spectrum – which they have done. Every data connection made on a new frequency band is one that isn’t going to clog up the old and overfull cellular network. Introducing new bands of frequency doesn’t do the cellular networks any good unless people start using the new frequency bands – and that’s where the iPhone is a godsend to cellular companies. Huge volumes of data will finally migrate to the newly opened frequency bands as these new iPhones hit the market.

Unfortunately, users will likely not see any advantages from the change. Users will be migrating connections to a different frequency band, but it’s still 4G. It will be curious to see who takes the heat when the expensive new phones don’t outperform the old ones – will it be Apple or the cellular carriers?

Technology / The Industry

Self-driving Cars and Broadband Networks

There are two different visions of the future of self-driving cars. Both visions agree that a smart car needs to process a massive amount of information in order to make real-time decisions.

One vision is that smart cars will be really smart and will include a lot of edge computing power and AI that will enable a car to make local decisions as it navigates through traffic. Cars will likely be able to communicate with neighboring cars to coordinate vehicle spacing and stopping during emergencies. This vision makes only minimal demands on external broadband, except perhaps to periodically update maps and to communicate with things like smart traffic lights.

The other vision of the future is that smart cars will beam massive amounts of data to and from the cloud that includes LiDAR imagery and GPS location information. Big data centers will then coordinate between vehicles. This second vision would require a massively robust broadband network everywhere.

I am surprised by the number of people who foresee the second version, with massive amounts of data transferred to and from the cloud. Here are just some of the reasons why this scenario is hard to imagine coming to fruition:

  • Volume of Data. The amount of data that would need to be transferred to the cloud is massive. It’s not hard to foresee a car needing to transmit terabytes of data during a trip if all of the decisions are made in a data center. Most prognosticators predict 5G as the technology that would support this network. One thing that seems to be ignored in these predictions is that almost no part of our current broadband infrastructure can handle this kind of data flow. We wouldn’t only need a massive 5G deployment – almost every part of the existing fiber backbone network, down to the local level, would also need to be upgraded. It’s easy to fall into the trap of assuming that fiber can handle massive amounts of data, but the current electronics are not sized for this kind of data volume.
  • Latency. Self-driving cars need to make instantaneous decisions, and any delay in sending data to and from the cloud slows reaction time. It’s hard to imagine any external network that can be as fast as a smart car making its own local driving decisions.
  • Migration Path. Even if the cloud is the ultimate network design, how do you get from here to there? We already have smart cars and they make decisions on-board. As that technology improves it doesn’t make sense that we would still pursue a cloud-based solution unless that solution is superior enough to justify the cost of migrating to the cloud.
  • Who will Build? Who is going to pay for the needed infrastructure? This means a 5G network built along every road. It means fiber built everywhere to support that network, including a massive beefing up of bandwidth on all existing fiber networks. Even the biggest ISPs don’t have both the financial wherewithal and the desire to tackle this kind of investment.
  • Who will Pay? And how is this going to get paid for? It’s easy to understand why cellular companies tout this vision as the future since they would be the obvious beneficiary of the revenues from such a network. But is the average family going to be willing to tack on an expensive broadband subscription for every car in the family? And does this mean that those who can’t afford a smart-car broadband connection won’t be able to drive? That’s a whole new definition of a digital divide.
  • Outages. We are never going to have a network that is redundant down to the street level. So what happens to traffic during inevitable fiber cuts or electronics failures?
  • Security. Sending live traffic data to the cloud seems to create the biggest opportunity for hackers to create chaos. The difficulty of hacking a self-contained smart car makes on-board computing sound far safer.
  • Who Runs the Smart-car Function? What companies would actually manage this monstrous network? I’m not very enthused about the idea of having car companies operate the IT functions in a smart-car network, but this sounds like such a lucrative function that I can’t foresee them handing it off to somebody else. There are also likely to be many network players involved, and getting them all to perfectly coordinate sounds like a massively complex task.
  • What About Rural America? Already today we can’t figure out how to finance broadband in rural America. Getting broadband along every rural road is going to be equally as expensive as getting it to rural homes. Does this imply a smart-car network that only works in urban areas?
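The data-volume and latency objections above lend themselves to a quick back-of-envelope calculation. A minimal sketch, where the raw sensor output rate and the network round-trip time are illustrative assumptions rather than measured values:

```python
# Back-of-envelope estimates for the cloud-driven car vision.
# The sensor rate is an illustrative assumption, not a measurement.

RAW_SENSOR_MB_PER_S = 70          # assumed combined lidar/camera output
TRIP_SECONDS = 30 * 60            # a 30-minute drive

trip_gb = RAW_SENSOR_MB_PER_S * TRIP_SECONDS / 1000
print(f"Data uploaded per trip: ~{trip_gb:.0f} GB")  # ~126 GB

# How far does a car travel while waiting on a cloud decision?
SPEED_MPH = 65
ROUND_TRIP_MS = 20                # optimistic for a wide-area network
feet_per_ms = SPEED_MPH * 5280 / 3600 / 1000
print(f"Distance covered per decision: {feet_per_ms * ROUND_TRIP_MS:.1f} feet")
```

Even with these generous assumptions, a single commute would upload more data than most monthly home broadband caps allow, which is the scale problem the list above describes.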

I fully understand why some in the industry are pushing this vision. It would make a lot of money for the wireless carriers and the vendors who support them. But the above list of concerns makes it hard for me to picture the cloud vision. Doing this with on-board computers costs only a fraction of the big-network solution, and my gut says that dollars will drive the decision.

It’s also worth noting that we already have a similar example of this same kind of decision. The whole smart-city effort is now migrating to smart edge devices rather than exchanging massive amounts of data with the cloud. As an example, the latest technology for smart traffic control places smart processors at each intersection rather than sending full-time video to the cloud for processing. The electronics at a smart intersection only communicate with the hub when there is something to report, like an accident or a car that has run a red light. That requires far less data, meaning far less demand for broadband than sending everything to the cloud. It’s hard to think that smart cars – which will be the biggest source of raw data yet imagined – would not follow this same trend towards smart edge devices.
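The report-by-exception pattern at a smart intersection is simple to sketch. A minimal illustration (the event names and the set of reportable events are hypothetical, chosen to mirror the examples in the text):

```python
def edge_filter(events):
    """Forward only exceptional events to the hub; drop routine traffic data.

    This is report-by-exception: the intersection processes everything
    locally and transmits only what the hub actually needs to know.
    """
    REPORTABLE = {"accident", "red_light_violation"}  # hypothetical event types
    return [e for e in events if e["type"] in REPORTABLE]

observed = [
    {"type": "vehicle_pass"},           # routine - handled locally
    {"type": "vehicle_pass"},
    {"type": "red_light_violation"},    # exceptional - sent upstream
    {"type": "vehicle_pass"},
]
print(edge_filter(observed))  # only the red-light violation goes to the hub
```

The bandwidth saving comes from the ratio of routine to exceptional events: thousands of ordinary vehicle passes never leave the intersection.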



There has been a mountain of articles about self-driving cars, but little discussion about how they see the world around them. The ability of computers to understand images is still in its infancy – in 2015 there was a lot of talk about how Google was teaching an AI program to recognize cats within videos.

But obviously a self-driving car has to do a lot better than just ‘seeing’ around it – it needs to paint a 3D picture of everything around it in order to navigate correctly and to avoid problems. It turns out that the primary tool used by self-driving cars is called “Lidar.” Lidar stands for ‘light detection and ranging’ and fits neatly between sonar and radar.

Lidar works by sending out light beams and measuring how long it takes for reflected signals to return, much the same way that a bat sees the world using sonar. Sonar would be fairly useless in a self-driving car since sound waves get distorted in air and only paint an accurate picture for perhaps a dozen feet from the transmitter. That’s great for a bat catching a moth, but not useful for seeing oncoming traffic.
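The time-of-flight principle behind both sonar and lidar can be shown in a few lines. A minimal sketch (the wave speeds are standard physical constants; the one-microsecond echo time is an arbitrary example):

```python
def distance_from_echo(round_trip_seconds: float, wave_speed_m_per_s: float) -> float:
    """Distance to a target given the round-trip echo time.

    The signal travels out and back, so the one-way distance
    is half the total path the wave covers.
    """
    return wave_speed_m_per_s * round_trip_seconds / 2

SPEED_OF_SOUND = 343.0        # m/s in air
SPEED_OF_LIGHT = 299_792_458  # m/s

# The same 1-microsecond echo corresponds to ~150 m with light
# but only a fraction of a millimeter with sound.
print(distance_from_echo(1e-6, SPEED_OF_LIGHT))   # ~149.9 m
print(distance_from_echo(1e-6, SPEED_OF_SOUND))   # ~0.00017 m
```

The huge difference in wave speed is why light pulses can range targets hundreds of feet away with timing electronics that sound-based systems could never match at that distance.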

And the radio waves used in radar won’t really work well for self-driving cars. Radar works great for seeing objects far away, like metallic airplanes. But the radio waves pass through many objects (like people) meaning that radar doesn’t create a total picture of the world around it. And radar has problems creating an accurate picture of anything closer than 100 feet.

And that’s where lidar comes in. A lidar device works much like a big radar dish at an airport. It rotates and sends out light signals (actually infrared light signals) and then collects and analyzes the returning echoes to create a picture of the distances to objects around it. Lidar only became practical with modern computer chips which allow the transmitter to ‘rotate’ hundreds of times a second and which possess enough computing power to make sense of the echoed light waves.

And so a self-driving car doesn’t ‘see’ at all. The cars do not rely on standard cameras that try to make sense of the reflected ambient light around the car. The first prototypes of driverless cars tried to do this and could not process or make sense of images fast enough. Instead, self-driving cars send out laser light at a specific frequency and then calculate the distance the light travels in every direction to create a picture of the world.

If you want to understand more about what this looks like, consider this Radiohead music video. Most of the images in the video were created with lidar. Don’t pay too much attention to the opening headshots because those are somewhat distorted for artistic effect. But the later images of streets show you the detail of a lidar image. Unlike the normal images our eyes see, a lidar image is massively more detailed in that the distance to everything in the picture is known. Our eyeballs basically see in 2D and we use images from two eyes to simulate 3D. But a lidar image is fully 3D and gets full perspective from one transmitter.
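The ‘fully 3D from one transmitter’ point can be made concrete: each lidar return is a range plus the two angles of the beam, which converts directly into an (x, y, z) coordinate. A minimal sketch of that conversion (the angle conventions here are an assumption for illustration; real sensors each define their own):

```python
import math

def lidar_return_to_xyz(range_m: float, azimuth_deg: float, elevation_deg: float):
    """Convert one lidar return (range + beam angles) to Cartesian coordinates.

    Azimuth is measured in the horizontal plane from the x-axis;
    elevation is measured up from that plane.
    """
    az = math.radians(azimuth_deg)
    el = math.radians(elevation_deg)
    horizontal = range_m * math.cos(el)   # projection onto the ground plane
    return (
        horizontal * math.cos(az),  # x: forward
        horizontal * math.sin(az),  # y: left
        range_m * math.sin(el),     # z: up
    )

# A return 10 m away, straight ahead and level, lands at (10, 0, 0).
print(lidar_return_to_xyz(10.0, 0.0, 0.0))  # (10.0, 0.0, 0.0)
```

Sweep the two angles across a full rotation and you get the point cloud seen in the video: every pixel carries its own measured distance, not just brightness.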

Lidar does have limitations. It can be ‘blinded’ by heavy snow and rain. Lidar could be jammed by somebody transmitting a bright signal using the same light frequencies. And so smart cars don’t rely 100% on lidar, but also use traditional cameras and ultrasonic sonar to complement the lidar images.

Lidar is finding other uses. It’s being used, for example, in helicopters to search for things on the ground. A lidar system can spot a fleeing criminal or a lost child in the woods far more easily than older technologies or human eyeballs. Lidar can also create amazingly detailed images of anything. Archeologists are using it to create permanent images of dig sites during various stages of excavation before objects are removed. It’s not hard to think that within a few years many traditional surveying techniques will be obsolete and that lidar will be able to locate and plot everything on a building lot, for example, down to the millimeter.