Cellular Networks and Fiber

We’ve known for a while that the 5G future the cellular companies are promising is going to need a lot of fiber. Verizon CEO Lowell McAdam recently confirmed this when he said that the company will be building dense fiber networks for this purpose. The company has ordered fiber cables as large as 1,700 strands for its upcoming build in Boston in order to support the future fiber and wireless network there. That’s a huge contrast from Verizon’s initial FiOS builds, which used mostly 6-strand fibers across much of the Northeast.

McAdam believes that the future of urban broadband will be wireless and that Verizon intends to build the fiber infrastructure needed to support that future. Of course, with that much fiber in place the company will also be able to supply fiber-to-the-premise to those who need the largest amounts of bandwidth.

Boston is an interesting test case for Verizon. They announced in 2015 that they would be expanding their FiOS network to bring fiber to the city – one of many urban areas that they skipped during their first deployment of fiber-to-the-premise. The company has also engaged with the city government in Boston to develop a smart city – meaning using broadband to enhance the livability of the city and to improve the way the government delivers services to constituents. That effort means building fiber to control traffic systems, police surveillance systems, and other similar uses.

And now it’s obvious that the company has decided that building for wireless deployment in Boston is part of that vision. It’s clear that Verizon and AT&T are both hoping for a world where most devices are wireless and where those wireless connections use their networks. They both picture a world where their wireless networks are not just used for cellphones, as they are today, but also act as the last-mile broadband connection for homes, for connected cars, and for the billions of devices that make up the Internet of Things.

With the kind of money Verizon is talking about spending in Boston, this might just become the test case for a connected urban area that is both fiber rich and wireless rich. To the extent that they can do it with today’s technology, it sounds like Verizon is hoping to serve homes in the city with wireless connections of some sort.

I’ve discussed several times how millimeter wave radios have become cheap enough to be a viable alternative for bringing broadband to urban apartment buildings. That’s a business plan that is also being pursued by companies like Google. But I’m still not aware of hardware that can reasonably be used with this same technology to serve large numbers of single family homes. At this point the electronics are still too expensive and there are other technological issues to overcome (such as having fiber deep in neighborhoods for backhaul).

So it will be interesting to watch how Verizon handles their promise to bring fiber to the homes in Boston. Will they continue with the promised FTTP deployment or will they wait to see if there is a wireless alternative on the horizon?

It’s also worth noting that Verizon is tackling this because of the density of Boston. The city has over 3,000 housing units per square mile, making it, and many other urban centers, a great place to consider wireless alternatives instead of fiber. But I have to contrast this with rural America. I’m working with several rural counties right now in Minnesota that have housing densities of between 10 and 15 homes per square mile.

This contrast alone shows why I don’t think rural areas are ever going to see many of the advantages of 5G. Even though it’s expensive to build fiber in a place like Boston, the potential payback is commensurate with the cost of the construction. I’ve always thought that Verizon made a bad strategic decision years ago when it halted FiOS construction before finishing building in the metropolitan areas on the east coast. After all, Verizon has fared well in its competition with Comcast and others.

But there is no compelling argument for the wireless companies or anybody else to build fiber in the rural areas. The cost per subscriber is high and the paybacks on investment are painfully long. If somebody is going to invest in rural fiber they might as well use it to connect directly to customers rather than spend the money on fiber plus a wireless network on top of it.

We are going to continue to see headlines about how wireless is the future, and for some places like Boston it might be. Past experience has shown us that wireless technology often works a lot differently in the field than in the lab, so we need to see if the wireless technologies being considered really work as promised. But even if they do, those same technologies are going to have no relevance to rural America. If anything, the explosion of urban wireless might further highlight the stark differences between urban and rural America.

Ownership of Software Rights

There is an interesting fight currently at the US Copyright Office that involves all of us in the telecom industry. The argument is over ownership of the software that comes along these days with almost any type of electronics. The particular fight is between John Deere and tractor owners, but it sets a precedent for similar software anywhere.

John Deere is arguing that, while a farmer may buy one of their expensive tractors, John Deere still owns the software that operates the tractor. When a farmer buys a tractor they must agree to the terms of the software license, just like we all agree to similar licenses and terms of service all of the time. The John Deere software license isn’t unusual, but what irks farmers is that it requires them to use John Deere-authorized maintenance and parts for the term of the software license (which is seemingly forever).

The fight came to a head when some farmers experienced problems with tractors during harvest season and were unable to get authorized repairs in a timely manner. Being resourceful they found alternatives, and there is now a small black market for software that can replace or patch the John Deere software. But John Deere is attacking farmers who use alternate software, saying they are violating the DMCA (Digital Millennium Copyright Act), which prohibits bypassing the digital locks that protect copyrighted content. They argue that farmers have no right to open or modify the software on the tractors, which remains the property of John Deere. The Copyright Office has so far been siding with John Deere.

This is not a fight unique to farmers, and many other electronics manufacturers are taking the same approach. For example, all of the major car manufacturers except Tesla have taken the same position. Apple has long taken this position with its iPhone.

So how does this impact the telecom industry? First, it seems like most sophisticated electronics we buy these days come with a separate software license agreement that must be executed as part of the purchase. So manufacturers of most of the gear you buy still think they own the proprietary software that runs your equipment. And many of them charge you a yearly fee after you buy the electronics to ‘maintain’ that software. In our industry this is a huge high-margin business for the manufacturers because telcos and ISPs get almost nothing in return for these annual software license fees.

I don’t think I have a client who isn’t still operating some older electronics. This may be older Cisco routers that keep chugging along, an old voice switch, or even something major like the electronics operating an entire FTTH network. It’s normal in the telecom industry for manufacturers to stop supporting most electronics within 7 to 10 years of their initial release. But unlike twenty years ago, when a lot of electronics didn’t last more than that same 7 to 10 years, the use of integrated chips means that electronics keep working a lot longer.

And therein lies the dilemma. Once a vendor stops supporting a technology they wash their hands of it – they no longer issue software updates and they stop stocking spare parts. They do everything in their power to get you to upgrade to something newer, even though the older gear might still be working reliably.

But if a telco or ISP makes any tweaks to this older equipment to keep it working – something many ISPs are notorious for – then theoretically anybody doing that has broken the law under the DMCA and could be subject to a fine of up to $500,000 and a year in jail for a first offense.

Of course, we all face this same dilemma at home. Almost everything electronic these days comes with proprietary software and the manufacturers of your PCs, tablets, smartphones, personal assistants, security systems, IoT gear and almost all new appliances probably think that they own the software in your device. And that raises the huge question of what it means these days to buy something, if you don’t really fully own it.

I know many farmers and I think John Deere is making a huge mistake. If another tractor company like Kubota or Massey Ferguson declares that it doesn’t claim rights to the software in its equipment, then John Deere could see its market dry up quickly. There is also now a booming market in refurbished farm equipment that pre-dates proprietary software. But this might be a losing battle when almost everything we buy includes software. It’s going to be interesting to see how both the courts and the court of public opinion handle this.

Death of the Smartphone?

Over the last few weeks I have seen several articles predicting the end of the smartphone. Those claims are a bit exaggerated since the authors admit that smartphones will probably be around for at least a few decades. But they make some valid points which demonstrate how quickly technologies come into and out of our lives these days.

The Apple iPhone was first sold in the summer of 2007. While there were phones with smart capabilities before that, most credit the iPhone release with the real birth of the smartphone industry. Since that time the smartphone technology has swept the entire world.

As a technology the smartphone is mature, which is what you would expect from a ten-year-old technology. While phones might still get more powerful and faster, the design for smartphones is largely set and now each new generation touts new and improved features that most of us don’t use or care about. The discussion of new phones now centers around minor tweaks like curved screens and better cameras.

Almost the same ten-year path happened to other electronics like the laptop and the tablet. Once any technology reaches maturity it starts to become commoditized. I saw this week that a new company named Onyx Connect is introducing a $30 smartphone into Africa where it joins a similarly inexpensive line of phones from several Chinese manufacturers. These phones are as powerful as US phones of just a few years ago.

This spells trouble for Apple and Samsung, which both benefit tremendously by introducing a new phone every year. People are now hanging onto phones much longer, and soon there ought to be scads of reasonably-priced alternatives to the premier phones from these two companies.

The primary reason the end of the smartphone is being predicted is that we are starting to have alternatives. In the home, smart assistants like the Amazon Echo are showing that it’s far easier to talk to a device than to work through menus of apps. Anybody who has used a smartphone to control a thermostat or a burglar alarm quickly appreciates the ability to make the changes by talking to Alexa or Siri rather than fumbling through apps and worrying about passwords and such.

The same thing is quickly happening in cars, and when your home and car are networked together using the same personal assistant, the need to use a smartphone while driving is entirely eliminated. The same thing will happen in the office, and soon there will be a great alternative to the smartphone in the home, the car and the office – the places where most people spend the majority of their time. That’s going to cut back on reliance on the smartphone and drastically reduce the number of people who want to rush to buy a new expensive smartphone.

There are those predicting that some sort of wearable like glasses might offer another good alternative for some people. There are newer versions of smartglasses, like the $129 Snap Spectacles, that are less obtrusive than the first-generation Google Glass. Smartglasses still need to overcome the societal barrier where people are not comfortable being around somebody who can record everything that is said and done. But perhaps the younger generations will not find this to be as much of a barrier. There are also other potential kinds of wearables, from smartwatches to smart clothes, that could take over the non-video functions of the smartphone.

Like with any technology that is as widespread as smartphones today there will be people who stick with their smartphone for decades to come. I saw a guy on a plane last week with an early generation iPod, which was noticeable because I hadn’t seen one in a few years. But I think that most people will be glad to slip into a world without a smartphone if that’s made easy enough. Already today I ask Alexa to call people and I can do it all through any device such as my desktop without even having a smartphone in my office. And as somebody who mislays my phone a few times every day, I know that I won’t miss having to use a smartphone in the home or car.

Just When You Thought It Was Safe . . .

Yet one more of our older technologies is now a big target for hackers. Recently hackers have been able to use the SS7 (Signaling System 7) network to intercept text messages from banks that use two-factor authentication and then clean out bank accounts.

This is not the first time that SS7 has been used for nefarious purposes. Industry experts started to warn about the dangers of SS7 back in 2008. In more recent years there have been numerous reports that the SS7 network has been used by governments and others to keep tabs on the locations of some cellphones. But the use of the SS7 network to intercept text messages creates a big danger for anybody using online banking that requires text-message authentication. Once a hacker intercepts a text verification code they can get inside your bank account.

Once a hacker is inside the SS7 network they can use the protocol to redirect traffic. This was recently demonstrated on 60 Minutes when German hackers intercepted phone calls made to Congressman Ted Lieu, with his permission. SS7 can be used to redirect, block or perform numerous other functions on any telephone number, making it a great tool for spying.

Telephone techs are familiar with SS7, and it’s been with us since 1975. It was developed by Bell Labs and was the technology that allowed the creation of what we’ve come to call telephone features. SS7 allowed the telephone system to snag pieces of called or calling numbers and other network information, which led to features such as caller ID, call blocking and call forwarding, among numerous others.

In the telecom world SS7 is carried on a separate network from the paths used to route telephone calls. Every telephone carrier on the network has separate SS7 trunks that all connect regionally to SS7 hubs known as STPs (Signal Transfer Points). It is the ubiquitous nature of SS7 that makes it vulnerable. There is an SS7 connection to every telephone switch, but also to private switches like PBXs. If the SS7 network were a private network that only connected telco central offices it would be relatively safe. But the proliferation of other SS7 nodes makes it relatively easy for a hacker to gain access to the SS7 network, or even to buy a connection into it.

It has now become dangerous to use text-based two-factor authentication for anything. While access to bank accounts is an obvious target, this kind of hacking could also gain access to social networks, entry into corporate WANs or any software platform using this kind of authentication. Some banks have already announced that they are going to abandon this kind of customer authentication, but many of the larger ones have yet to act. You have to think most of them are looking into alternatives, but it’s not particularly easy for a giant bank to change its customer interfaces.

There is a replacement for SS7 on the way. It’s an IP-based protocol called Diameter. This protocol can replace SS7 but also has a much wider goal of being the protocol to authenticate connections to the Internet of Things as well as VoIP communications from cell phones using WiFi.

Banks and others could change to the Diameter protocol and send encrypted authentication messages through email or a messaging system. But this would not be an easy change for the telephone industry to implement. The SS7 network is used today to support major switching functions like the routing of 800 calls and the many telephone features like caller ID. Changing the way those functions are done would be a major change for the industry. It’s one of the many items being looked at by the industry as part of the digital transition of the telephone network. But if it was decided tomorrow to start implementing this change it would require years to make sure that all existing switches keep working and that all of the SS7-enabled functions keep working as they should.

SS7 was implemented long before there was anything resembling a hacker. For the most part the SS7 network has been working quietly behind the scenes to do routing and other functions that have increased the efficiency of the telephone network. But like with most older electronic technologies the SS7 network has numerous flaws that can be exploited by malicious hacking. So it probably won’t be too many years until the SS7 networks are turned off.

Is Our Future Mobile Wireless?

I had a conversation last week with somebody who firmly believes that our broadband future is going to be 100% mobile wireless. He works for a big national software company that you would recognize and he says the company believes that the future of broadband will be wireless and they are migrating all of their software applications to work on cellphones. If you have been reading my blog you know I take almost the opposite view, but there are strong proponents of a wireless future, and it’s a topic worth continually revisiting.

Certainly we are doing more and more things by cellphone. But I think those who view future broadband as mobile are concentrating on faster mobile data speeds while ignoring the underlying overall data capacity of cellular networks. I still think that our future is going to become even more reliant on fiber in order to handle the big volumes of bandwidth we will all need. This doesn’t mean that I don’t love cellphone data – but I think it’s a complement to landline broadband and not an equivalent substitute. Cellphone networks have major limitations and they are not going to be able to keep up with our need for bandwidth capacity. Even today the vast majority of cellphone data is handed off to landline networks through WiFi. And in my mind that just makes a cellphone into another terminal on your landline network.

Almost everybody understands the difference in quality between using your cellphone in your home on WiFi versus doing the same tasks using only the cellular network. I largely use my cellphone for reading news articles. And while this is a much lighter application than watching video, I find that I usually have problems opening articles on the web when I’m out of the house. Today’s 4G speeds are still pretty poor and the national average download speed is reported to be just over 7 Mbps.

I think all of the folks who think cellphones are the future are counting on 5G to make a huge difference. But as I’ve written many times, it will be at least a decade before we see a mature 5G cellular network – and even then the speeds are not likely to be hugely faster than the 4G specification today. 5G is really intended to increase the stability of broadband connections (fewer dropped calls) and the number of connections (able to connect to a lot of IoT devices). The 5G specifications are not even shooting for a huge speed increase, with the specification calling for 100 Mbps download cellular speeds, which translates into an average of perhaps 50 Mbps connections for all of the customers within a cell site. Interestingly, that’s the same target speed as the 4G specification.

And those greater future speeds sound great. Since a cellphone connection by definition is for one user, a faster speed means that a cellular connection will eventually support a 4K video stream. But what this argument ignores is that a home a decade from now is going to be packed with devices wanting to make simultaneous connections to the Internet. It is the accumulated volume of usage from all of those devices that is going to add up to huge broadband demand for homes.

Already today homes are packed with broadband-hungry devices. We have smart TVs, cellphones, laptops, desktops and tablets all wanting to connect to the network. We have other bandwidth-hungry applications like gaming boxes and surveillance cameras. More and more of us are cutting the cord and watching video online. And then there are going to be piles of new devices with smaller broadband demands, but which in total will add up to significant bandwidth. Further, a lot of the applications we use are now in the cloud. My home uses a lot of bandwidth every day just backing up my data files, connecting to software in the cloud, making VoIP calls, and automatically updating software and apps.

I’ve touted a statistic many times that you might be tired of hearing, but I think it’s at the heart of the matter. The amount of bandwidth used by homes has been doubling every three years since 1980, and there is no end in sight to that trend. Already today a 4G connection is inadequate to support the average home. If you don’t think that’s true, talk to the homes now using AT&T’s fixed LTE connections that deliver 10 Mbps. That kind of speed is not adequate today to provide enough bandwidth to use the many broadband services I discussed above. Cellular connections are already too slow today to provide reasonable home broadband, even as AT&T is planning to foist these connections on millions of rural homes.
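
To see where that doubling leads, here’s a minimal sketch in Python. The 25 Mbps baseline and the twelve-year horizon are my own illustrative assumptions, not figures from above; the point is just how quickly a doubling-every-three-years curve outruns a fixed cellular speed.

# Project household bandwidth demand that doubles every three years.
def projected_demand(baseline_mbps, years_out, doubling_period_years=3.0):
    return baseline_mbps * 2 ** (years_out / doubling_period_years)

baseline = 25.0  # assumed average household need today, in Mbps
for years in (0, 3, 6, 9, 12):
    print(f"Year +{years}: ~{projected_demand(baseline, years):.0f} Mbps")
# By year +12 (roughly when 5G might be mature) the assumed 25 Mbps household
# need has grown to about 400 Mbps, far beyond a 50-100 Mbps cellular connection.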

There is no reason to think that 5G will be able to satisfy the total broadband needs of a home. The only way it might do that is if we end up in a world where we have to buy a small cellular subscription for every device in our home – I know I would prefer to instead connect all of my devices to WiFi to avoid such fees. Yes, 5G will be faster, but a dozen years from now, when 5G is finally a mature cellular technology, homes will need a lot more bandwidth and a 5G connection will feel just as inadequate then as 4G feels today.

Unless we get to a future point where the electronics get so cheap that there will be a ‘cell site’ for every few homes, it’s hard to see how cellular can ever be a true substitute for landline broadband. And even if such a technology develops you still have to ask if it would make any sense to deploy. Those small cell sites are largely going to have to be fiber fed to deliver the needed bandwidth and backhaul. And in that case small cell sites might not be any cheaper than fiber directly to the premise, especially when considering the lifecycle costs of the cell site electronics. Even if we end up with that kind of network, it would not really be a cellular network as much as it would be using wireless loops as the last few feet of a landline network – something that for years we have called fiber-to-the-curb. Such a network would still require us to build fiber almost everywhere.

AT&T’s CAF II Solution

We now know the details of AT&T’s fixed broadband solution being installed to satisfy the FCC’s CAF II plan.

Let me start with some numbers to explain the FCC funding. In the second round of the CAF II proceeding AT&T accepted a payment from the Universal Service Fund of about $428 million per year for six years, or over $2.5 billion. That money is to be used to bring broadband to about 1.1 million homes. That works out to roughly $2,300 per home.

I saw news last week about an AT&T CAF II ‘trial’ in Georgia. AT&T plans on using existing cellular spectrum to deliver a fixed broadband product. This will require the installation of a small exterior antenna at a customer site as well as the use of an AT&T modem inside of the home.

We’ve known for a while that AT&T planned to utilize their cellular spectrum rather than build or try to upgrade any copper plant, so this is no surprise. What is a bit of a surprise to me is the speeds being offered in the trial. AT&T will be providing a 10 Mbps download speed, which is the bare minimum required by the FCC’s CAF II program. We know from other trials AT&T has had around the country that this technology is capable of delivering at least twice that much bandwidth.

And the service won’t be cheap. The product is priced at $60 per month if a customer will sign a contract, and $70 per month with no contract. It’s a pretty interesting comparison between this and Verizon’s announcement of now offering gigabit speeds throughout its fiber footprint for $70 per month. I didn’t see any mention of a fee for use of the AT&T modem, but most ISPs charge for such devices, so that is probably going to be added to the price.

The AT&T product also comes with a severe data cap: a monthly limit of 160 gigabytes of total download. Overages will cost $10 for each additional 50 gigabytes, up to a maximum of $200 per month. I suspect a lot of rural homes that buy this as their first broadband product are going to be shocked at their first bill when they splurge on watching Netflix for the first time. My 3-person household uses about 700 gigabytes per month, which under this plan would cost $170 per month for somebody with a contract.
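
Here’s a quick sketch of that bill math under the plan described above: $60 per month with a contract, a 160 GB cap, and $10 for each extra 50 GB block, with overage charges capped at $200. The 700 GB input is just my household’s usage from the example.

import math

def monthly_bill(usage_gb, base=60.0, cap_gb=160.0,
                 block_gb=50.0, block_price=10.0, max_overage=200.0):
    # Overage is billed in whole 50 GB blocks, capped at $200 per month.
    overage_gb = max(0.0, usage_gb - cap_gb)
    blocks = math.ceil(overage_gb / block_gb)
    return base + min(blocks * block_price, max_overage)

print(monthly_bill(700))  # 540 GB over the cap = 11 blocks = $110 in overages, $170 total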

As with all ISPs, the 10 Mbps data speed is undoubtedly best effort, meaning that at peak times (or if customers are too far away from a cellular tower) the speeds will be slower. That slow speed is going to severely hamper the ability of customers to use huge amounts of data since they aren’t easily going to be able to watch many simultaneous video streams.

I can’t be entirely negative, because for many households this will be their first broadband product, other than perhaps satellite data, which is largely unusable. And so to these homes it’s going to feel great to finally be able to stream video or have their kids do online homework from home.

But what is irksome about this product is that the federal government handed AT&T the money to do this. Certainly they will use some of the $2,300 per customer to build some new towers or a little fiber to towers. But the equipment to serve a customer is going to cost a lot less than this. I would bet that most customers will be served from existing towers using existing spectrum. This means that the federal government is paying the full cost of implementing this product, while AT&T reaps all of the revenues and profits. That’s a pretty handsome return on investment for AT&T and amounts to an unneeded handout to one of the richest companies in the country.

Customers are going to quickly understand that, while they now have a minimal broadband capability, they don’t have anything close to the same broadband that much of the rest of the country has. Almost all of the big cable companies now sell broadband with minimum speeds of at least 50 Mbps download, often more. As households keep needing more data capacity over time – with the average household use of data doubling every three years – this AT&T product will become the broadband equivalent of dial-up within a decade.

The worst thing about this whole fiasco from my perspective is that the FCC is taking big credit for bringing broadband to the parts of the country that get this kind of CAF II product, and they will probably count this as a job well done. Instead the FCC will have spent many billions foisting broadband onto rural America that is obsolete before it’s even launched. The shame is that this same money could have been used to seed matching grants in rural America that would have built fiber to a lot of these same homes. Small ISPs and telcos got excited when they first heard of the reverse auctions for the CAF II funding. But then, rather than holding those auctions, the FCC just handed this money to the big telcos with no competition for the funding – and this AT&T product is the end result of that bad decision.

Rural America is not going to be fooled for long and will quickly recognize this as inferior broadband, but these homes are going to have no real alternatives. There is the small hope that there might be an infrastructure program from the current administration and Congress, but there is no assurance that such money won’t also go to the big ISPs to do more of the same.

Two Visions for Self-Driving Cars

I was at a conference last week and I talked to three different people who believe that driverless cars are going to need extremely fast broadband connections. They cite industry experts who say that the average car is going to require terabytes per day of downloaded data to be functional and that only extremely fast 5G networks are going to be able to satisfy that need. These folks talk about needing high-bandwidth, very low-latency wireless networks that can tell a car when to stop when it encounters an obstacle. This vision sees cars as somewhat dumb appliances with a lot of the brains in the cloud. I would guess that wireless companies are hoping for this future.

But I also have been reading about experts who instead think that cars will become rolling data centers with a huge amount of computing capacity on board. Certainly vehicles will need to communicate with the outside world, but in this vision a self-driving car only needs updates on things like its current location, road conditions and traffic problems ahead – not the masses of data anticipated by the first vision cited above.

For a number of reasons I think the second vision is a lot more likely.

  • Self-driving cars are almost here now, which means any network needed to support them would have to be in place in the near future. That’s not realistically going to happen. Most projections say that a robust 5G technology is at least a decade away. There are a dozen companies investing huge sums in self-driving car technologies and they are not going to wait that long to even investigate whether controlling cars from external sources makes sense. Every company looking into self-driving technology is operating under the assumption that the brains and sensing must be in the cars – and they are the ones that will drive the development and implementation of the new car technology. It’s not practical to think that the car industry will wait for the deployment of networks that are not under its control or reasonably available.
  • Who’s going to make the huge investments needed to build the network necessary to support self-driving cars? The ability to deliver terabytes of data to each car would require much faster data connections than can be delivered using the normal cellular frequencies (the sketch after this list gives a feel for the sustained speeds that terabytes per day implies). Consider how many fast simultaneous data connections would be needed to support all of the cars on a busy multilane highway in a major city. It’s an engineering challenge that would probably require using high frequencies. And that means putting lots of cell sites close to roads – and those cell sites will have to be largely fed by fiber to keep the latency low (wireless backhaul would add significant latency). Such a network nationwide would have to cost hundreds of billions of dollars between the widespread fiber and the huge number of mini-cell sites. I can’t picture who would agree to build such a network. The total annual capital budget for all of the wireless companies combined today is only in the low tens of billions.
  • Even if somebody were to build the expensive networks, who is going to pay for them? It seems to me like every car would need an expensive monthly broadband subscription, adding significantly to the cost of owning and driving a car. Most households are not going to want a car that comes with the need for an additional $100 – $200 monthly broadband subscription. But my back-of-the-envelope math tells me that the fees would have to be that large to compensate for such an extensive network that was built mostly to support self-driving cars.
  • The requirement for huge numbers of cars to download terabytes of data per day is a daunting challenge. The vast majority of the country today doesn’t even have a landline-based broadband connection capable of doing that.
  • There are also practical reasons not to put the brains of a car in the cloud. What happens when there are power outages or cellular outages? I don’t care how well we plan – outages happen. I’d be worried about driving in a car if there was even just a temporary glitch in the network.
  • There are also issues of physics if this network requires any connections to be made by millimeter wave spectrum, or even spectrum that is just a little lower on the frequency scale. There is a huge engineering challenge to get such signals to track a moving vehicle reliably in real time. Higher frequencies start having Doppler shifts even at walking speeds. Compound this with the requirement to always have true line-of-sight and also the issue of connecting with many cars at the same time on crowded roads. I have learned to never say that something isn’t possible, but this presents some major engineering challenges that are going to take a long time to make work – maybe decades, and maybe never.
  • Finally, there are all of the issues having to do with security. I’m personally more worried about cars being hacked if they are getting most of their communications from the cloud. If cars are instead only getting location and other basic information from the outside, it would be a lot easier to wall off the communications stream from the computing processes that operate the car, and reduce the chances of hacking. If cars get most of their brains from the cloud, it also seems like a risk that a terrorist or mischief-maker could disrupt traffic by taking out small cell sites. There would be no way to ever make such devices physically secure.
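
To give a feel for the data rates behind the ‘terabytes per day’ claim, here’s a rough back-of-the-envelope calculation. The 2 TB per day figure is my own assumption for illustration, not a number cited at the conference.

def sustained_mbps(terabytes_per_day):
    # Convert a daily download volume into the sustained speed it implies.
    bits = terabytes_per_day * 1e12 * 8      # terabytes -> bits
    seconds_per_day = 24 * 60 * 60
    return bits / seconds_per_day / 1e6      # bits per second -> Mbps

print(f"{sustained_mbps(2):.0f} Mbps")  # ~185 Mbps, sustained around the clock, per car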

I certainly can’t say that we’ll never have a time when self-driving cars are directed from the cloud, as often envisioned in science fiction movies. But for now the industry is developing cars that are largely self-contained data centers, and that fact alone may dictate the future path of the industry. The wireless carriers see a lot of potential revenue from self-driving cars, but I can’t imagine that the car industry is going to wait for them to develop the needed infrastructure.

How Do VPNs Work?

After Congress clarified last month that ISPs have the right to monitor and use customer data I have read dozens of articles that recommend that people start using VPNs (Virtual Private Networks) to limit ISP access to their data. I’ve received several emails asking how VPNs work and will discuss the technology today.

Definition. A VPN is a virtualized extension of a private network across a public network, like the open Internet. What that means in plain English is that VPN technology tries to mimic the same kind of secure connection that you would have in an office environment where your computer is directly connected to a corporate server. In a hard-wired environment everything is secure between the server and the users and all data is safe from anybody that does not have access to the private network. If the private network is not connected to the outside world, then somebody would have to have a physical connection to the network in order to read data on the private network.

Aspects of a VPN Connection. There are several different aspects that are used to create the virtualized connection. A VPN connection today likely includes all of the following:

  • Authentication. A VPN connection always starts with authentication to verify the identity of the remote party that wants to make the VPN connection. This could use typical techniques such as passwords, biometrics or two-factor authentication.
  • Encryption. Most VPN connections then use encryption for the transmission of all data once the user has been authenticated. This is generally done by placing software on the user’s computer that scrambles the data so that it can only be unscrambled at the VPN server using the same software (a minimal sketch of this scrambling step follows the list below). Encryption is not a foolproof technique, and the Edward Snowden documents showed that the NSA has found ways around many kinds of encryption – but it’s still a highly effective technique to use for the general transmission of data.
  • IP Address Substitution. This is the technique that stops ISPs from seeing a customer’s Internet searches. When you use your ISP without a VPN, your ISP assigns you an IP address to identify you. This ISP-assigned IP address can then be used by anybody on the Internet to identify you and to track your location. Further, once connected, your ISP makes all connections for you on the Internet using DNS (Domain Name Servers). For instance, if you want to visit this blog, your ISP is the one that finds PotsandPansbyCCG and makes the connection using the DNS system, which is basically a huge roadmap of the public Internet. Since it is doing the routing, your ISP has complete knowledge of every website you visit (your browsing history). But when you use a VPN, the VPN provider provides you with a new IP address, one that is not specifically identified as you. When you visit a website for the first time using the new VPN-provided IP address, that website does not know your real location, but rather the location of the VPN provider. And since the VPN provider also does the DNS function for you (routes you to web pages), your ISP no longer knows your browsing history. Of course, this means that the VPN provider now knows your browsing history, so it’s vital to pick a VPN that guarantees not to use that information.
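
As a small illustration of the encryption step described in the list above (a sketch of the general idea, not the software any particular VPN actually uses), here is symmetric encryption using Python’s cryptography library. Anybody in the middle, including your ISP, only ever sees scrambled bytes.

from cryptography.fernet import Fernet  # pip install cryptography

# Illustrative only: real VPNs negotiate keys and encrypt whole packets using
# protocols like IPSec or TLS, but the core idea is the same - traffic is
# scrambled with a key that only the client and the VPN server share.
shared_key = Fernet.generate_key()     # agreed on by the client and the VPN server
client_side = Fernet(shared_key)
vpn_server_side = Fernet(shared_key)

request = b"GET http://example.com/ HTTP/1.1"
ciphertext = client_side.encrypt(request)     # what the ISP sees: opaque bytes
print(vpn_server_side.decrypt(ciphertext))    # the VPN server recovers the request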

Different VPN Protocols and Techniques. This blog post is too short to explore the various software techniques used to make VPN connections. For example, early VPNs were created with PPTP (Point-to-Point Tunneling Protocol). This early technique would encapsulate your data into larger packets but didn’t encrypt it. It’s still used today and is still more secure than a direct connection on the open Internet. There are other VPN techniques such as IPSec (IP Security), L2TP (Layer 2 Tunneling Protocol), SSL and TLS (Secure Sockets Layer and Transport Layer Security), and SSH (Secure Shell). Each of these techniques handles authentication and encryption in different ways.

How Safe is a VPN? A VPN is a way to do things on the web in such a manner that your ISP no longer knows what you are doing. A VPN also establishes an encrypted and secure connection that makes it far harder for somebody to intercept your web traffic (such as when you make a connection through a hotel or coffee shop WiFi network). In general practice a VPN is extremely safe because somebody would need to expend a huge amount of effort to intercept and decrypt everything you are doing. Unless somebody like the NSA was watching you, it’s incredibly unlikely that anybody else would ever expend the effort to try to figure out what you are doing on the Internet.

But a VPN does not mean that everything you do on the Internet is now safe from monitoring by others. Any time you connect to a web service, that site will know everything you do while connected there. The giant web services like Google and Facebook derive most of their revenues by monitoring what you do while using one of their services and then use that information to create a profile about you.  Using a VPN does not stop this, because once you use the Google search engine or log onto Facebook they record your actions.

Users who want to be protective of their identities are starting to avoid these big public services. There are search engines other than Google that don’t track you. You can use a VPN to mask your real identity on social media sites. For example, there are millions of Twitter accounts that are not specifically linked back to the actual user. But a VPN or a fake identity can’t help you if you use a social media site like Facebook where you make connections to real-life friends. I recall an article a few years back from a data scientist who said that he only needed to know three facts about you to figure out who you are online. Companies like Facebook will quickly figure out your identity regardless of how you got to their site.

But a VPN will completely mask your web usage from your ISP. The VPN process bypasses the ISP’s routing and instead makes a direct, encrypted connection to the VPN provider. A VPN can be used on any kind of data connection, and you can use a VPN for home computers and also for cellphones. So if you don’t want Comcast or AT&T to monitor you and use and sell your browsing history to others, then a VPN service will cut your ISP out of the loop.

A New Cellular Technology

Steve Perlman and his company Artemis are experimenting with a new form of cellular transmission they are calling pCell. Perlman is an inventor who sold his company WebTV to Microsoft for half a billion dollars. Perlman also helped to create Apple QuickTime that brought video to the Macintosh.

His new invention completely changes the way that cell sites function. Today the cellular network is composed of large cell sites that purposefully don’t overlap too much. These big cell sites then divvy up the available bandwidth among the users inside each cell. As everyone has experienced, data capacity can get overwhelmed in a busy cell site, resulting in slow data speeds or an inability to even make a connection.

Perlman’s pCell technology takes a radically different approach. His technology would deploy numerous tiny transmitters using home and business IP connections. The pCell technology then combines connections from multiple tiny transmitters to create a ‘personal cell’ around each cellular phone or device. The personal cell is small, in the range of a centimeter, and follows the phone as it moves. The Artemis website has both a short video showing how this works and an incredibly detailed whitepaper for those who want to really dig into the technology.

Perlman proposes to increase the bandwidth available to his pCells by connecting the tiny transmitters to existing landline data connections. This would offload pCell traffic from the cellular network, which would eliminate the bandwidth constraints from today’s big cell sites. Perlman has proposed that Google connect pCells to all customers that have Google Fiber in Kansas City as a way to create a network of tiny transmitters. Each Google Fiber customer would be encouraged to place a small transmitter on their roof. At the cellphone end, each pCell customer would have to swap to a new SIM card that recognizes the pCell connections.

In practice, if enough small transmitters are spread around a local area, then every pCell customer could make a connection that would use the maximum bandwidth allowed by the particular spectrum being deployed. Perlman describes this as each person getting all of the bandwidth of one cell site.

And that’s where I think Perlman gets into both market and regulatory trouble. He basically wants to introduce an alternative cellular technology, and cellular companies are unlikely to scrap the big cell networks in favor of this new technology. Unfortunately for Perlman the large cellular carriers license the spectrum they use today for 4G LTE and that gives them exclusive rights to that spectrum. I can’t imagine the cellular companies are going to allow Perlman to swap SIM cards and run an alternate network using their licensed spectrum.

Perlman likens this concept to the idea of using a cellular repeater to get a stronger data signal. There are a lot of such repeaters in place, mostly either to strengthen cellular signals in large buildings or to boost the signals in rural areas for those located near the outer edge of a cell site. But those repeaters are sanctioned by the cellular companies, and therein lies the difference from a regulatory perspective.

Perlman’s pCell technology could be a giant leap forward in cellphone technology. In fact, it looks like a great alternative to 5G. Perlman’s tiny transmitters are smaller and far less expensive than the small cell sites that the cellular companies are now installing. The pCell technology would disperse hundreds of tiny transmitters in a neighborhood instead of the handful of expensive small cells that are envisioned by the cellular providers.

But if no cellular company is willing to try the technology then this is going to be a hard sell in the US. Customers don’t have any automatic right to intercept and reroute cellular traffic that uses licensed spectrum. And there probably isn’t enough usable public spectrum in urban areas to make this work with unlicensed spectrum. Perlman describes this as the ‘Uberization’ of cellular and envisions that everybody with a transmitter would receive some small compensation for the cellular traffic carried by their landline connection. This truly sounds wonderful in that it would mean much faster, higher-quality connections in crowded urban environments. But I’m highly skeptical that such a network would ever be allowed in practice unless sufficient public spectrum is available to make this work.

The Challenges of 5G Deployment

The industry is full of hype right now about the impending roll-out of 5G cellular. This is largely driven by the equipment vendors who want to stir up excitement among their stockholders. But not everybody in the industry thinks that there will be a big rush to implement 5G. For example, a group called RAN Research issued a report last year that predicted a slow 5G implementation. They think that 4G will be the dominant wireless technology until at least 2030 and maybe longer.

They cite a number of reasons for this belief. First, 4G isn’t even fully developed yet and the standards and implementation coalition 3GPP plans to continue to develop 4G until at least 2020. There are almost no 4G deployments in the US that fully meet the 4G standards, and RAN Research expects the wireless carriers to continue to make incremental upgrades, as they have always done, to improve cellular along the 4G path.

They also point out that 5G is not intended as a forklift upgrade to 4G, but is instead intended to coexist alongside it. This is going to allow a comfortable path for the carriers to implement 5G first in those places that most need it, without rushing to upgrade places that don’t. This doesn’t mean that the cellular carriers won’t be claiming 5G deployments sometime in the next few years, much in the way that they started using the name 4G LTE for minor improvements in 3G wireless. It took almost five years after the first marketing rollout of 4G to get to what is now considered 3.5G. We are just now finally seeing 4G that comes close to meeting the full standard.

But the main hurdle that RAN Research sees with a rapid 5G implementation is the cost. Any wireless technology requires a widespread and rapid deployment in order to achieve economy of scale savings. They predict that the cost of producing 5G-capable handsets is going to be a huge impediment to implementation. Very few people are going to be willing to pay a lot more for a 5G handset unless they can see an immediate benefit. And they think that is going to be the big industry hurdle to overcome.

Implementing 5G is going to require a significant expenditure on small, dense cell sites in order to realize the promised quality improvements. It turns out that implementing small cell sites is a lot costlier than the cellular companies had hoped. It also turns out that the technology will only bring major advantages to those areas where there is the densest concentration of customers. That means big city business districts, stadiums, convention centers and hotel districts – but not many other places.

That’s the other side of the economy of scale implementation issue. If 5G is only initially implemented in these dense customer sites, then the vast majority of people will see zero benefit from 5G since they don’t go to these densely packed areas very often. And so there are going to be two economy of scale issues to overcome – making enough 5G equipment to keep the vendors solvent while also selling enough more-expensive phones to use the new 5G cell sites. And all of this will happen as 5G is rolled out in dribs and drabs, as happened with 4G.

The vendors are touting that software defined networking will lower the cost to implement 5G upgrades. That is likely to become true with the electronics after they are first implemented. It will be much easier to make the tiny incremental 5G improvements to cell sites after they have first been upgraded to 5G capability. But RAN Research thinks it’s that initial deployment that is going to be the hurdle. The wireless carriers are unlikely to rush to implement 5G in suburban and rural America until they see overwhelming demand for it – enough demand that justifies upgrading cell sites and deploying small cell sites.

There are a few trends that are going to affect the 5G deployment. The first is the IoT. The cellular industry is banking on cellular becoming the default way to communicate with IoT devices. Certainly that will be the way to communicate with things like smart cars that are mobile, but there will be a huge industry struggle between cellular and WiFi – including the much faster indoor millimeter wave radios – for the rest of the IoT. My first guess is that most IoT users are going to prefer to dump IoT traffic into their landline data pipe rather than buy separate cellular data plans. For now, residential IoT is skewing towards WiFi and towards smart devices like the Amazon Echo, which provide a voice interface for using the IoT.

Another trend that could help 5G would be some kind of government intervention to make it cheaper and easier to implement small cell sites. There are rule changes being considered at the FCC and in several state legislatures to find ways to speed up implementation of small wireless transmitters. But we know from experience that there is a long way to go after a regulatory rule change until we see change in the real world. It’s been twenty years now since the Telecommunications Act of 1996 required that pole owners make their poles available to fiber overbuilders – and yet the resistance of pole owners is still one of the biggest hurdles to fiber deployment. Changing the rules always sounds like a great idea, but it’s a lot harder to change the mindset and behavior of the electric companies that own most of the poles – the same poles that are going to be needed for 5G deployment.

I think RAN Research’s argument about achieving 5G economy of scale is convincing. Vendor excitement and hype aside, they estimated that it would cost $1,800 today to build a 5G-capable handset, and the only way to get that price down would be to make hundreds of millions of 5G-capable handsets. And getting enough 5G cell sites built to drive that demand is going to be a significant hurdle in the US.