There’s No 5G Race

FCC Chairman Ajit Pai was recently quoted in the Wall Street Journal as saying, “In my view, we’re in the lead with respect to 5G.” Over the last few months I’ve heard the same sentiment expressed repeatedly in terms of how the US needs to win the 5G race.

This talk is just more hype and propaganda from the wireless industry that is trying to create a false crisis concerning 5G in order to convince politicians that we need to break our regulatory traditions and give the wireless carriers everything they want. After all, what politician wants to be blamed for the US losing the 5G race? This kind of propaganda works. I was just at an industry trade association show and heard three or four people say that the US needs to win the 5G race.

There is no 5G race; there is no 5G war; there is no 5G crisis. Anybody that repeats these phrases is wittingly or unwittingly pushing the lobbying agenda of the big wireless companies. Some clever marketer at one of the cellular carriers invented the imaginary 5G race as a great way to emphasize the importance of 5G.

Stop and think about it for a second. 5G is a telecom technology, not some kind of military secret that some countries are going to have, while others will be denied. 5G technology is being developed by a host of multinational vendors that are going to sell it to anybody who wants it. It’s not a race when everybody is allowed to win. If China, or Germany, or Finland makes a 5G breakthrough and implements some aspect of 5G first, within a year that same technology will be in the gear available to everybody.

What I really don’t get about this kind of hype and rhetoric is that 5G is basically a new platform for delivering bandwidth. If we are so fired up not to lose the 5G race, then why have we been so complacent about losing the fiber race? The US is far down the list of countries in terms of broadband infrastructure. We’ve not deployed fiber optics nearly as quickly as many other countries, and, worse, we still have millions of households with no broadband and many tens of millions of others with inadequate broadband. That’s the race we need to win, because we are keeping whole communities out of the new economy, which hurts us all.

I hope my readers don’t think I’m against 5G – I’m for any technology that improves access to bandwidth. What I’m against is the industry hype that paints 5G as the technology that will save our country – because it will not. Today, more than 95% of the bandwidth we use is carried over wires, and 5G isn’t going to move that needle much. There are clearly some bandwidth needs that only wireless will solve, but households and businesses are going to continue to rely on wires to move big bandwidth.

When I ask wireless engineers about the future, they almost all paint the same picture: over time we will migrate to a mixture of WiFi and millimeter wave spectrum indoors to move around big data. When virtual and augmented reality were first discussed a few years ago, one of the big promises was telepresence – the ability to meet and talk with remote people as if they were sitting with us. That technology hasn’t moved forward because it requires far more bandwidth than today’s WiFi routers can deliver. Indoor 5G using millimeter wave spectrum will finally unleash gigabit applications within the home.

The current hype for 5G has only one purpose. It’s a slick way for the wireless carriers to push the government to take the actions they want. 5G was raised as one of the reasons to kill net neutrality. It’s being touted as a reason to gut most of the rest of existing telecom legislation. 5G is being used as the reason to give away huge blocks of mid-range spectrum exclusively to the big wireless companies. It’s pretty amazing that the government would give so much away for a technology that will roll out slowly over the next decade.

Please think twice before you buy into the 5G hype. It takes about five minutes of thinking to poke a hole in every bit of 5G hype. There is no race for 5G deployment and the US, by definition, can’t be ahead or behind in the so-called race towards 5G. This is just another new broadband technology and the wireless carriers and other entrepreneurs will deploy 5G in the US when it makes economic sense. Instead of giving the wireless companies everything on their wish list, a better strategy by the FCC would be to make sure the country has enough fiber to make 5G work.

Ideas for Better Broadband Mapping

The FCC is soliciting ideas on better ways to map broadband coverage. Everybody agrees that the current broadband maps are dreadful and misrepresent broadband availability. The current maps are created from data that the FCC collects from ISPs on Form 477, where each ISP lists broadband coverage by census block. One of the many problems with the current mapping process (I won’t list them all) is that census blocks can cover a large geographic area in rural America, and reporting at the census block level blurs together very different circumstances – some folks in a block may have broadband while others have none.

There have been two interesting proposals so far. Several parties have suggested that the FCC gather broadband speed availability by address. That sounds like the ultimate database, but there are numerous reasons why this is not practical.

The other recommendation is a 3-stage process recommended by NCTA. First, data would be collected by polygon shapefiles. I’m not entirely sure what that means, but I assume it means using smaller geographic footprints than census blocks. Collecting the same data as today using a smaller footprint ought to be more accurate. Second, and the best idea I’ve heard suggested, is to allow people to challenge the data in the mapping database. I’ve been suggesting that for several years. Third, NCTA wants to focus on pinpointing unserved areas. I’m not sure what that means, but perhaps it means creating shapefiles to match the different availability of speeds.
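To make the polygon idea concrete, here is a minimal sketch in Python using the geopandas library, showing how reported coverage polygons might be compared against crowd-sourced speed tests to feed a challenge process. The file names and column names are purely illustrative assumptions on my part – nothing here comes from the NCTA filing itself.

```python
import geopandas as gpd

# Hypothetical inputs: ISP-reported coverage polygons and crowd-sourced speed tests.
polygons = gpd.read_file("isp_coverage_polygons.shp")
tests = gpd.read_file("consumer_speed_tests.geojson")

# Attach each speed test point to the polygon it falls inside.
joined = gpd.sjoin(tests, polygons, how="inner", predicate="within")

# Median measured download speed per polygon.
measured = joined.groupby("polygon_id")["measured_down_mbps"].median()

# Compare against what the ISP reported for that polygon and flag shortfalls --
# the raw material for the kind of challenge process NCTA proposes.
summary = polygons.set_index("polygon_id")[["reported_down_mbps"]].join(
    measured.rename("measured_median_mbps"))
flagged = summary[summary["measured_median_mbps"] < 0.8 * summary["reported_down_mbps"]]
print(flagged)
```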

These ideas might provide better broadband maps than we have today, but I’m guessing they will still have big problems. The biggest issue with trying to map broadband speeds is that many of the broadband technologies in use vary widely in actual performance in the field.

  • Consider DSL. We’ve always known that DSL performance decreases with distance from the DSLAM that serves the customer. However, DSL performance is not as simple as that. DSL also varies for other reasons, like the gauge or the quality of the copper serving a customer. Next-door neighbors can have a significantly different DSL experience if they have different wire gauges in their copper drops, or if the wires at one of the homes have degraded over time. DSL also differs by technology. A telco might operate different DSL technologies out of the same central office and see different performance from ADSL versus VDSL. There really is no way for a telco to predict the DSL speed available at a home without installing the service and testing the actual speed achieved.
  • Fixed wireless and fixed cellular broadband have similar issues. Just like DSL, the strength of a signal from a wireless transmitter decreases with distance. However, distance isn’t the only issue, and things like foliage affect a wireless signal. Neighbors might have a very different fixed wireless experience if one has a maple tree and the other has a pine tree in the front yard. To make defining the speed even harder, wireless speeds are also affected to some degree by precipitation, humidity, and temperature. Anybody who’s ever lived with fixed wireless broadband understands this variability. WISPs these days also use multiple spectrum blocks, so the speed delivered at any given time is a function of the particular mix of spectrum being used.

Regardless of the technology being used, one of the biggest issues affecting broadband speeds is the customer home. Customers (or ISPs) might be using outdated and obsolete WiFi routers or modems (as Charter did for many years in upstate New York). DSL speeds are just as affected by the condition of the inside copper wiring as by the outdoor wiring. The edge devices can also be an issue – when Google Fiber first offered gigabit fiber in Kansas City, almost nobody owned a computer capable of handling that much speed.

Any way we try to define broadband speeds – even by individual home – is still going to be inaccurate. Trying to map broadband speeds is a perfect example of trying to fit a square peg into a round hole. It’s obvious that we can do a better job of this than we are doing today, but I pity a fixed wireless ISP that is somehow required to report broadband speeds by address, or even by a small polygon. They only know the speed at a given address after going to the roof of a home and measuring it.

The more fundamental issue here is that we want to use the maps for two different policy purposes. One goal is to be able to count the number of households that have broadband available. The improved mapping ideas will improve this counting function – within all of the limitations of the technologies I described above.

But mapping is a dreadful tool when we use it to start drawing lines on a map defining which households can get grant money to improve their broadband. At that point the mapping is no longer a theoretical exercise and a poorly drawn line will block homes from getting better broadband. None of the mapping ideas will really fix this problem and we need to stop using maps when awarding grants. It’s so much easier to decide that faster technology is better than slower technology. For example, grant money ought to be available for anybody that wants to replace DSL on copper with fiber. I don’t need a map to know that is a good idea. The grant process can use other ways to prioritize areas with low customer density without relying on crappy broadband maps.

We need to use maps only for what they are good for – to get an idea of what is available in a given area. Mapping is never going to be accurate enough to use to decide which customers can or cannot get better broadband.

Finally – A Whitebox Solution for Small ISPs

A few years ago I wrote about the new industry phenomenon where the big users of routers and switches like Facebook, Google, Microsoft, and Amazon were saving huge amounts of capital by buying generic routers and switches and writing their own operating software. Since those early days these companies have also worked to make these devices far more energy efficient. At the time of that blog, I noted that it was impractical for smaller ISPs to take advantage of the cheaper gear because of the difficulty and risk of writing their own operating software.

That’s all changed and there now is a viable way for smaller ISPs to realize the same huge savings on routers and switches. As you would expect, vendors stepped into the process to match whitebox hardware and operating software to create carrier-class routers and switches for a fraction of the cost of buying name brand gear.

There are a few new terms associated with this corner of the industry. Whitebox refers to network hardware that uses commodity silicon and disaggregated software. Britebox refers to similar hardware that is built by mainstream hardware vendors like Dell. Commodity hardware refers to whitebox hardware matched with mainstream software.

There are a number of vendors of whitebox hardware, including Edge-core Networks, Supermicro, Facebook, and FS.Com. Much of the gear is built to match specifications provided by the big data center operators, meaning that hardware from different vendors is becoming interchangeable.

The potential savings are eye-opening. One way to look at the cost of switches is to compare the cost per 10-gigabit MPLS port. Looking at list prices, a whitebox switch from FS.Com runs $92 per port; Dell britebox hardware is $234 per port; Juniper is priced at $755 per port; and Cisco at $2,412 per port. To be fair, a lot of buyers get discounts from the list prices of name brand hardware – but a 96% savings over list price is something that everybody needs to investigate.

As I mentioned, the whitebox hardware is also more energy efficient – saving money on power and air conditioning is what led the big data center companies to look for a better solution in the first place. The FS.Com 10 gigabit switch draws about 200 watts; the Dell britebox draws 234 watts; the Juniper switch 650 watts; and the Cisco switch 300 watts. There is no question that a whitebox solution is greener and less expensive to operate.
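To put those list prices and power draws in context, here is a rough back-of-the-envelope comparison. The 48-port switch size and the $0.12/kWh electricity rate are my own assumptions for illustration; the per-port prices and wattages are the list figures cited above.

```python
# Back-of-the-envelope comparison of list-price capex and annual power cost.
PORTS = 48                     # assumed switch size (illustrative)
HOURS_PER_YEAR = 24 * 365
POWER_COST_PER_KWH = 0.12      # assumed average commercial rate, $/kWh

switches = {
    # name: (list price per 10G port in $, typical power draw in watts)
    "FS.Com whitebox": (92, 200),
    "Dell britebox":   (234, 234),
    "Juniper":         (755, 650),
    "Cisco":           (2412, 300),
}

for name, (per_port, watts) in switches.items():
    capex = per_port * PORTS
    annual_power = watts / 1000 * HOURS_PER_YEAR * POWER_COST_PER_KWH
    print(f"{name:18s} capex ${capex:>8,}   power ${annual_power:>6,.0f}/yr")
```

Run with these assumptions, the whitebox switch comes in around $4,400 of capex and roughly $200 a year in electricity, versus well over $100,000 of capex and several hundred dollars a year in power for the most expensive name-brand option.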

There is also off-the-shelf software that has been created to operate the whitebox hardware. The most commonly used packages are IP Infusion OcNOS and Cumulus Linux. This disaggregated software also costs far less than the software embedded in the price of mainstream hardware.

Probably the biggest concern of any ISP who is considering a whitebox solution is configuring their system and getting technical assistance when they have problems. The good news is that there are now vendors who have assembled a team to provide this kind of end-to-end support. One such vendor is IPArchiTechs (IPA). They have engineers that will configure and install a new system and a 24/7 helpdesk that provides the same kind of support available with name brand gear.

There are other advantages to using whitebox hardware. Should an ISP ever want to upgrade or change software, the hardware can be reused and reprogrammed. The same goes for the disaggregated software – an ISP licenses the software and can transfer it to a different box without having to buy the software again. The whitebox approach also avoids the upgrade fees often charged by vendors to increase speeds or to unlock unused ports.

There is whitebox gear available for most ISP functions. In some cases the same gear could be used in the core, in aggregation points or in the last mile just by changing the software – but there is whitebox hardware sized for the various uses. There are still a few network functions that the whitebox software hasn’t mastered, like BGP edge routing – but the hardware/software combination can handle the needs of most ISPs I work with.

Whitebox hardware and software have come of age. Anybody buying expensive Cisco or Juniper gear needs to consider the huge savings available from a whitebox solution. The big vendors have been successful by forcing customers to pay for numerous features and capabilities they never use – it makes more sense to buy more efficient hardware and pay for only the features you need.

Regulatory Sleight of Hand

I was looking through a list of ideas for blogs and noticed that I had never written about the FCC’s odd decision to reclassify commercial mobile broadband as private mobile broadband service in WC Docket No. 17-108 – The Restoring Internet Freedom order that was used to kill net neutrality and to eliminate Title II regulation of broadband. There was so much industry stir about those larger topics that the reclassification of the regulatory nature of mobile broadband went largely unnoticed at the time by the press.

The reclassification was extraordinary in the history of FCC regulation because it drastically changed the definition of one of the major industries regulated by the agency. In 1993 Congress amended Section 332 of the Communications Act to clarify the regulation of the rapidly growing cellular industry.

At that time there were about 16 million cellular subscribers that used the public switched telephone network (PSTN) and another two million private cell phones that used private networks, primarily for corporate dispatch. Congress made a distinction between the public and private use of cellular technology and coined the term CMRS (Commercial Mobile Radio Service) to define the public service we still use today for making telephone calls on cell phones. That act defined CMRS as having three characteristics: a) the service is for profit, b) it’s available to the entire public, and c) it is interconnected to the PSTN. Private mobile service was defined as any cellular service that fails any one of the three tests.

The current FCC took the extraordinary step of declaring that cellular broadband is private mobile service. The FCC reached this conclusion using what I would call regulatory sleight-of-hand. Mobile broadband is obviously still for profit and still available to the public, so the FCC tackled the third test and said that mobile broadband is part of the Internet and not part of the public telephone network. It’s an odd distinction because the path of a telephone call and of a data connection from a cellphone is usually identical. A cellphone first delivers the traffic for both services to a nearby cellular tower (or, more recently, to pole-mounted small cell sites). The traffic for both services is transported from the cell tower using Ethernet transport that the industry calls trunking. At some point in the network, likely a switching hub, the voice and data traffic are split; the voice calls continue inside the PSTN while data traffic is peeled off to the Internet. There is no doubt that the user end of every cellular call or cellular data connection uses network components that are part of the PSTN.
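One schematic way to see the sleight-of-hand is to treat the three-part Section 332 test as a simple boolean check. This is only an illustration of the logic, not legal analysis, and the function and argument names are mine: the FCC left the first two prongs alone and flipped only the PSTN-interconnection prong.

```python
# Schematic restatement of the three-part CMRS test described above.
def is_cmrs(for_profit: bool, available_to_public: bool,
            interconnected_with_pstn: bool) -> bool:
    """A service is CMRS only if it meets all three tests; failing any
    one of them makes it 'private mobile service'."""
    return for_profit and available_to_public and interconnected_with_pstn

# Mobile voice: all three prongs clearly met.
print(is_cmrs(True, True, True))    # True  -> CMRS, Title II applies

# Mobile broadband after the 2017 order: the FCC declared the traffic
# terminates on 'the Internet' rather than the PSTN, flipping one prong.
print(is_cmrs(True, True, False))   # False -> 'private mobile service'
```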

Why did the FCC go through these mental gymnastics? This FCC had two primary goals in this particular order. First, they wanted to kill the net neutrality rules established by the prior FCC in 2015. Second, they wanted to do this in such a way as to make it extremely difficult for a future FCC to reverse the decision. They ended up with a strategy of declaring that broadband is not a Title II service. Title II refers to the set of rules established by the Communications Act of 1934 that was intended as the framework for regulating common carriers. Until the 2017 FCC order, most of the services we think of as telecommunications – landline telephone, cellular telephone, and broadband – were considered common carrier services. The current FCC’s strategy was to reclassify landline and mobile broadband as Title I information services and essentially wash their hands of regulating broadband at all.

Since net neutrality rules applied to both landline and mobile data services, the FCC first needed to decree that mobile data is not a commercial mobile service before they could remove it from Title II regulation.

The FCC’s actions defy logic and it’s clear that mobile data still meets the definition of a CMRS service. It was an interesting tactic by the FCC and probably the only way they could have removed mobile broadband from Title II regulation. However, they also set themselves up for some interesting possibilities from the court review of the FCC order. For example, a court might rule that mobile broadband is a CMRS service and drag it back under Title II regulation while at the same time upholding the FCC’s reclassification of landline broadband.

Why does this matter? Regulatory definitions matter because the regulatory process relies on an accumulated body of FCC orders and court cases that define the actual nature of regulating a given service. Congress generally defines regulation at a high level and later FCC decisions and court cases better define issues that are disputed. When something gets reclassified in this extreme manner, most of the relevant case law and precedents go out the window. That means we start over with a clean slate and much that was adjudicated in the past will likely have to be adjudicated again, but now based upon the new classification. I can’t think of any time in our industry where regulators decided to arbitrarily redefine the basic nature of a major industry product. We are on new regulatory ground, and that means uncertainty, which is never good for the industry.

The Impending Cellular Data Crisis

There is one industry statistic that isn’t getting a lot of press – the fact that cellular data usage is more than doubling every two years. You don’t have to plot that growth rate very many years into the future to realize that existing cellular networks will be inadequate to handle the demand in just a few years. What’s even worse for the cellular industry is that this growth figure is a nationwide average. I have many clients who tell me there isn’t nearly that much growth at rural cellular towers – meaning there is likely even faster growth at some urban and suburban towers.
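A quick projection shows why that growth rate is so alarming. The sketch below simply compounds the “doubles every two years” figure cited above; the starting year is an arbitrary assumption and the starting volume is normalized to 1.

```python
# Compound the 'doubles every two years' growth rate cited above.
base_year = 2019   # assumed starting point, for illustration only

for years_out in range(0, 11, 2):
    multiplier = 2 ** (years_out / 2)
    print(f"{base_year + years_out}: {multiplier:>5.1f}x today's cellular data volume")

# Output: 1x, 2x, 4x, 8x, 16x, 32x -- even a network engineered for ten
# times today's load is overtaken in roughly seven years at this pace.
```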

Much of this growth is a self-inflicted wound by the cellular industry. They’ve raised monthly data allowances and are often bundling free video with cellular service, thus driving up usage. The public is responding to these changes by using the extra bandwidth made available to them.

There are a few obvious choke points that will be exposed by this kind of growth. Current cellphone technology limits the number of simultaneous connections that can be made from any given tower. As customers watch more video they eat up slots on the cell tower that otherwise could have been used to process numerous short calls and text messages. The other big choke point is going to be the broadband backhaul feeding each cell site. When usage grows this fast it’s going to get increasingly expensive to buy leased backbone bandwidth – which explains why Verizon and AT&T are furiously building fiber to cell sites to avoid huge increases in backhaul costs.

5G will fix some, but not all, of these issues. The growth is so explosive that cellular companies need to use every technique possible to make cell towers more efficient. Probably the best fix is to use more spectrum. Adding additional spectrum to a cell site immediately adds capacity. However, this can’t happen overnight. New spectrum is only useful if customers can use it, and it takes a number of years to modify cell sites and cellphones to work on a new band. The need to meet growing demand is the primary reason that CTIA recently told the FCC the industry needs an eye-popping 400 MHz of new mid-range spectrum for cellular use. The industry painted that as being needed for 5G, but it’s needed now for 4G LTE.

Another fix for cell sites is to use existing frequencies more efficiently. The most promising way to do this is with MIMO antenna arrays – a technology that uses multiple antennas at the cell site and in the handset to send several data streams at once, creating a larger data pipe. MIMO can make it easier to respond to a request from a large bandwidth user – but it doesn’t relieve the overall pressure on a cell tower. If anything, it might do the exact opposite and let cell towers prioritize those who want to watch video over smaller users, who might then be blocked from making voice calls or sending text messages. MIMO is also not an immediate fix and needs to work through the cycle of getting the technology into cellphones.

The last strategy is what the industry calls densification – adding more cell sites. This is the driving force behind placing small cell sites on poles in areas with big cellular demand. However, densification might create as many problems as it solves. Most of the current frequencies used for cellular service travel a decent distance, and placing cell sites too close together creates interference and noise between neighboring sites. While adding new cell sites adds local capacity, it also decreases the efficiency of all nearby cell sites using traditional spectrum – so the overall improvement from densification is going to be a lot less than might be expected. The worst thing about this is that interference is hard to predict and is very much a local issue. This is the primary reason that the cellular companies are interested in millimeter wave spectrum for cellular – the spectrum travels a short distance and won’t interfere as much between cell sites placed close together.

5G will fix some of these issues. The ability of 5G to do frequency slicing means that a cell site can provide just enough bandwidth for every user – a tiny slice of spectrum for a text message or IoT signal and a big pipe for a video stream. 5G will vastly expand the number of simultaneous users that can share a single cell site.

However, 5G doesn’t provide any additional advantages over 4G in terms of the total amount of backhaul bandwidth needed to feed a cell site. And that means that a 5G cell site will get equally overwhelmed if people demand more bandwidth than a cell site has to offer.

The cellular industry has a lot of problems to solve over a relatively short period of time. I expect that in the middle of the much-touted 5G roll-out we are going to start seeing some spectacular failures in the cellular networks at peak times. I feel sympathy for cellular engineers because it’s nearly impossible to have a network ready to handle data usage that doubles every two years. Even should engineers figure out strategies to handle five or ten times more usage, in only a few years the usage will catch up to those fixes.

I’ve never believed that cellular broadband can be a substitute for landline broadband. Every time somebody at the FCC or a politician declares that the future is wireless, I roll my eyes, because anybody who understands networks and the physics of spectrum can easily demonstrate that there are major limitations on the total bandwidth capacity at a given cell site, along with a limit on how densely cell sites can be packed in an area. The cellular networks carry only about 5% of the total broadband traffic in the country, and it’s ludicrous to think that they could be expanded to carry most of it.

New European Copyright Laws

I’ve always kept an eye on European Union regulations because anything that affects big web companies or ISPs in Europe always ends up bleeding over into the US. Recently the EU has been contemplating new rules about online copyrights, and in September the European Parliament took the first step by approving two new sets of copyright rules.

Article 11 is being referred to as a link tax. This legislation would require that anybody who carries headlines or snippets of longer articles online must pay a fee to the creator of the original content. Proponents of Article 11 argue that big companies like Google, Facebook and Twitter are taking financial advantage of content publishers by listing headlines of news articles with no compensation for the content creators. They argue that these snippets are one of the primary reasons people use social media, since users browse articles suggested by their friends. Opponents of the new law argue that it will be extremely complicated for a web service to track the millions of headlines posted by users, and that platforms will react to this rule by only allowing headline snippets from large publishers. This would effectively shut small or new content creators out of the big platforms – articles would come from only a handful of content sources rather than from tens of thousands of them.

Such a law would certainly squash small content originators like this blog. Many readers find my daily blog articles via short headlines that are posted on Twitter and LinkedIn every time I release a blog or when one of my readers reposts one. It’s extremely unlikely that the big web platforms would create a relationship with somebody as small as me, and I’d lose my primary way to distribute content on the web. I suppose the WordPress platform where I publish could make arrangements with the big web services – otherwise its value as a publishing platform would be greatly diminished.

This would also affect me as a user. I mostly follow other people in the telecom and rural broadband space by browsing through my feeds on Twitter and LinkedIn to see what those folks are finding to be of interest. I skip over the majority of headlines and snippets, but I stop and read news articles I find of interest. The beauty of these platforms is that I automatically select the type of content I get to browse by deciding who I want to follow. If the people I follow on Twitter can’t post small and obscure articles, then I would have no further interest in being on Twitter.

The second law, Article 13, is being referred to as the upload filter law. Article 13 would make a web platform liable for any copyright infringement in content posted by users. This restriction would theoretically not apply to content posted by users as long as they are acting non-commercially.

No one is entirely sure how the big web platforms would react to this law. At one extreme a platform like Facebook or Reddit might block all postings of content, such as video or pictures, for which the user can’t show ownership. This would mean the end of memes and kitten videos and much of the content posted by most Facebook users.

At the other extreme, this might mean that the average person could post such links since they have no commercial benefit from posting a cute cat video. But the law could stop commercial users from posting content that is not their own – a movie reviewer might not be able to include pictures or snippets from a film in a review. I might not be able to post a link to a Washington Post article as CCG Consulting but perhaps I could post it as an individual. While I don’t make a penny from this blog, I might be stopped by web platforms from including links to news articles in my blog.

In January the approval process was halted when 11 countries, including Germany, Italy, and the Netherlands, said they wouldn’t support the final language in these articles. EU law has an interesting difference from US law in that for many EU directives each country gets to decide, within reason, how it will implement the law.

The genesis of these laws comes from the observation that the big web companies are making huge money from the content created by others and not fairly compensating content creators. We are seeing a huge crisis for content creators – they used to be compensated through web advertising ‘hits’, but these revenues are disappearing quickly. The EU is trying to rebalance the financial equation and make sure that content creators are fairly compensated – which is the entire purpose of copyright laws.

The legislators are finding out how hard it will be to make this work in the online world. Web platforms will always try to work around laws to minimize payments. At the same time, the platforms’ lawyers are going to be cautious and advise them to act in ways that avoid massive class action suits.

But there has to be a balance. Content creators deserve to be paid for creating content. Platforms like Facebook, Twitter, Reddit, Instagram, Tumblr, etc. are popular to a large degree because users of the platforms upload content that they didn’t create – the value of the platform is that users get to share things of interest with their friends.

We haven’t heard the end of these efforts and the parties are still looking for language that the various EU members can accept. If these laws eventually pass they will raise the same questions here because the policies adopted by the big web platforms will probably change to match the European laws.

The Slow Deployment of 5G

Somebody asked me a few days ago why I write so much about 5G. My response is that I am intrigued by the 5G hype. The major players in the industry have been devoting big dollars to promote a technology that is still mostly vaporware. The most interesting thing about 5G is how politicians, regulators and the public have bought into the hype. I’ve never seen anything like it. I can remember other times when the world was abuzz over a new technology, but that was usually a reaction to an actual product you could buy, like the first laptop computers, the first iPhone and the first iPod.

Anybody who understands our industry knows that it takes a number of years to roll out any major new technology, particularly a wireless technology, since wireless behaves differently in the field than in the lab. We’re only a year past the release of the 5G standards, and it’s unrealistic to think those standards could be translated into operational hardware and software systems in such a short time. You only have to look back at the history of 4G, which started as slowly as 5G and which finally saw the first fully-compliant 4G cell site late last year. It’s going to take just as long until we see a fully functional 5G cell site. What we will see, over time, is the incremental introduction of some aspects of 5G as they get translated from the lab to the field. That rollout is further complicated for cellular use by the timeline needed to get 5G-ready handsets into people’s hands.

This blog was prompted by a Verizon announcement that 5G mobile services will be coming to 30 cities later this year. Of course, the announcement was short on details, because those details would probably be embarrassing for Verizon. I expect that the company will introduce a few aspects of 5G into cell sites in the business districts of major cities and claim that as a 5G roll-out.

What does a roll-out this year mean for cellular customers? There are not yet any 5G-capable cellphones. Both AT&T and Verizon have been working with Samsung to introduce a 5G version of the S10 phone later this year. Verizon has also been reported to be working with Lenovo on a 5G modular upgrade later this year. I’m guessing these phones are going to come with a premium price tag for the early adopters willing to pay for 5G bragging rights. These phones will only work as 5G from the handful of cell sites with 5G gear – and that will only be for a tiny subset of the 5G specifications. I remember when one of my friends bought one of the first 4G phones and crowed about how it worked in downtown DC. At the time I told him his great performance was because he was probably the only guy using 4G – and sure enough, his performance dropped as others joined the new technology.

On the same day that I saw this Verizon announcement I also saw a prediction by Cisco that only 3% of cellular connections will occur over a 5G network by the end of 2022. This might be the best thing I’ve seen that pops the 5G hype. Even for folks buying the early 5G phones, there will be a dearth of cell sites around the country that will work with 5G for a number of years. Anybody who understands the lifecycle of cellular upgrades agrees with the Cisco timeline. It takes years to work through the cycle of upgrading cell sites, upgrading handsets and then getting those handsets to the public.

The same is true for the other technologies that are also being called 5G. Verizon made a huge splash just a few months ago about introducing 5G broadband using millimeter wave spectrum in four cities. Even at the time of that announcement, it was clear that those radios were not using the 5G standard, and Verizon quietly announced recently that they were ceasing those deployments while they wait for actual 5G technology. Those deployments were actually a beta test of millimeter wave radios, not the start of a rapid nationwide deployment of 5G broadband from poles.

AT&T had an even more ludicrous announcement at the end of 2018, when they announced 5G broadband that involved deploying WiFi hotspots supposedly fed by 5G. However, this was a true phantom product for which they had no pricing and that nobody could order. And since no AT&T cell sites have been upgraded to 5G, one had to wonder how this involved any 5G technology at all. It’s clear this was a technology roll-out by press release, done only so they could claim bragging rights as the first to have 5G.

The final announcement I saw on that same day was one by T-Mobile saying they would begin deploying early 5G in cell sites in 2020. But the real news is that they aren’t planning on charging any more for any extra 5G speeds or features.

I come back to my original question about why I write about 5G so often. A lot of my clients ask me if they should be worried about 5G and I don’t have an answer for them. I can see that actual 5G technology is going to take a lot longer to come to market than the big carriers would have you believe. But I look at T-Mobile’s announcement on price and I also have to wonder what the cellular companies will really do once 5G works. Will AT&T and Verizon both spend billions to put 5G small cells in residential neighborhoods if it doesn’t drive any new cellular revenues? I have to admit that I’m skeptical – we’re going to have to wait to see what the carriers do rather than listen to what they say.

Making a Safe Web

Tim Berners-Lee invented the World Wide Web and implemented the first successful communication between a client and a server using HTTP in 1989. He’s always been a proponent of an open Internet and doesn’t like how the web has changed – the biggest profits on the web today come from the sale of customer data.

Berners-Lee has launched a new company along with cybersecurity expert John Bruce that proposes to “restore rightful ownership of the data back to every web user.” The new start-up is called Inrupt, which proposes to develop an alternate web for users who want to protect their data and their identity.

Berners-Lee has been working at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT to develop a software platform that can support his new concept. The platform is called Solid, and its main goal is decoupling web applications from the data they produce.

Today our personal data is stored all over the web. Our ISPs make copies of a lot of our data. Platforms like Google, Facebook, Amazon, and Twitter gather and store data on us. Each of these companies captures a little piece of the picture of who we are. These companies use our data for their own purposes and then sell it to companies that buy, sort and compile that data to make profiles on all of us. I saw a disturbing statistic recently that up to 1,400 data points are now created daily for the typical user – data gathered from our cellphones, smart devices, and our online web activity.

The Solid platform would change the fundamental structure of data storage. Each person on the Solid platform would create a cache of their own personal data. That data could be stored on personal servers or on servers supplied by companies that are part of the Solid cloud. The data would be encrypted and protected against prying.

Then, companies like Berners-Lee’s Inrupt would develop apps that perform functions users want without storing any customer data. Take the example of shopping for new health insurance. An insurance company that agrees to be part of the Solid platform would develop an app that analyzes your personal data to determine if you are a good candidate for the insurance policy. This app would run against your server, analyze your medical records and other relevant personal information, and decide if you are a good candidate for a policy. It might report information back to the insurance company, such as some sort of rating of you as a potential customer, but the insurer would never see the personal data.
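Here is a deliberately simplified sketch of the data flow that insurance example implies. None of the names below come from the actual Solid specification – they are stand-ins meant only to show that a derived rating, never the raw records, is what leaves the user’s control.

```python
# Hypothetical illustration of the Solid idea: the app's logic runs against
# data the user controls, and only a derived result goes back to the insurer.
from dataclasses import dataclass

@dataclass
class PersonalDataPod:
    """Stands in for the user's encrypted personal data store."""
    medical_records: dict
    age: int

def insurer_eligibility_app(pod: PersonalDataPod) -> str:
    """Runs in the user's environment; sees the raw data only locally."""
    risk = len(pod.medical_records.get("chronic_conditions", []))
    if pod.age < 50 and risk == 0:
        return "preferred"
    return "standard" if risk <= 2 else "refer-to-underwriter"

# The insurer receives only the rating string, never the pod contents.
pod = PersonalDataPod(medical_records={"chronic_conditions": []}, age=42)
print(insurer_eligibility_app(pod))   # -> "preferred"
```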

The Solid concept is counting on the proposition that there are a lot of people who don’t want to share their personal data on the open web. Berners-Lee is banking on there being plenty of developers who will design applications for the Solid community. Over time, Solid-based apps could provide an alternate web for the privacy-minded, separate and apart from the data-collection web we share today.

Berners-Lee expects that this will first take hold among groups that value privacy – coders, lawyers, CPAs, investment advisors, and the like. Those professions have a strong desire to keep their clients’ data private, and there is no better way to do that than by having the client keep their own data. This relieves lawyers, CPAs and other professionals of the ever-growing liability that comes from breaches of client data.

Over time Berners-Lee hopes that all sorts of other platforms will want to cater to a growing base of privacy-minded users. He’s hoping for a web ecosystem of search engines, news feeds, social media platforms, and shopping sites that want to sell software and services to Solid users, with the promise of not gathering personal data. One would think existing privacy-minded platforms like Mozilla Firefox would join this community. I would love to see a Solid-based cellphone operating system. I’d love to use an ISP that is part of this effort.

It’s an interesting concept and one I’ll be watching. I am personally uneasy about the data being gathered on each of us. I don’t like the idea of applying for health insurance, a credit card or a home mortgage and being judged in secret by data that is purchased about me on the web. None of us has any idea of the validity and correctness of such data. And I doubt that anybody wants to be judged by somebody like a mortgage lender using non-financial data like our politics, our web searches, or the places we visit in person as reported by our cellphones. We now live in a surveillance world and Berners-Lee is giving us the hope of escaping that world.

Streamlining Regulations

Jonathan Spalter of USTelecom wrote a recent blog calling on Congress to update regulations for the telecom industry. USTelecom is a lobbying arm representing the largest telcos, though it still, surprisingly, has a few small telco members. I found the tone of the blog interesting, in that somebody who didn’t know our industry would read it and think that the big telcos are suffering under crushing regulation.

Nothing could be further from the truth. We currently have an FCC that seems to be completely in the pocket of the big ISPs. The current FCC walked in the door with the immediate goal to kill net neutrality, and in the process decided to completely deregulate the broadband industry. The American public hasn’t really grasped yet that ISPs are now unfettered to endlessly raise broadband prices and to engage in network practices that benefit the carriers instead of customers. Deregulation of broadband has to be the biggest regulatory giveaway in the history of the country.

Spalter goes on to praise the FCC for its recent order on poles that set extremely low rates for wireless pole connections and lets wireless carriers place devices anywhere in the public rights-of-way. He says that order brought “fairness” to the pole attachment process when in fact the order was massively unbalanced in favor of cellular companies and squashes any local input or authority over rights-of-way – something that has always been a local prerogative. It’s ironic to see USTelecom praising fairness for pole attachments when its members have been vehemently trying to stop Google Fiber and others from gaining access to utility poles.

To be fair, Spalter isn’t completely wrong, and there are regulations that are out of date. Our last major telecom legislation was in 1996, at a time when dial-up Internet access was spreading across the country. The FCC regulatory process relies on rules set by Congress, and since Congress hasn’t acted since 1996, Spalter accuses it of “a reckless abdication of government responsibility.”

I find it amusing that the number one regulation USTelecom most dislikes is the requirement that the big telcos make their copper wires available to other carriers. That requirement of the Telecommunications Act of 1996 was probably the most important factor in encouraging other companies to compete against the monopoly telephone companies. In the years immediately after the 1996 Act, competitors ordered millions of wholesale unbundled network elements on the telco copper networks.

There are still competitors using the telco copper to provide far better broadband than the telcos are willing to provide, so we need to keep these regulations as long as copper remains hanging on poles. I would also venture a guess that the telcos are making more money selling access to this copper to competitors than they would make if the competitors went away – the public is walking away from telco DSL in droves.

I find it curious that the telcos keep harping on this issue. In terms of the total telco market, the sale of unbundled elements is a mere blip on the telco books. This is the equivalent of a whale complaining about a single barnacle on its belly. But the big telcos never miss an opportunity to harp on the issue and have been working hard to eliminate the sale of copper access to competitors since the passage of the 1996 Act. This is not a real issue for the telcos – they just have never gotten over the fact that they lost a regulatory battle in 1996 and they are still throwing a hissy fit over that loss.

The reality is that big telcos are less regulated than ever before. Most states have largely deregulated telephone service. The FCC completely obliterated broadband regulation. While there are still cable TV regulations, the big telcos like AT&T are bypassing those regulations by moving video online. The big telcos have already won the regulatory war.

There are always threats of new regulation – but the big telcos always lobby against new rules far in advance to weaken any new regulations. For example, they are currently supporting a watered-down set of privacy rules that won’t afford much protection of customer data. They have voiced support for a watered-down set of net neutrality rules that doesn’t obligate them to change their network practices.

It’s unseemly to see USTelecom railing against regulation after the telcos have already been so successful in shedding most regulations. I guess they want to strike while the iron is hot and are hoping to goad Congress and the FCC into finishing the job by killing all remaining regulation. The USTelecom blog is a repeat of the same song and dance they’ve been repeating since I’ve been in the industry – which boils down to “regulation is bad.” I didn’t buy this story forty years ago and I still don’t buy it today.

The American Broadband Initiative

On February 13 Secretary of Commerce Wilbur Ross led a group of more than 20 federal agencies in announcing what the administration is calling the American Broadband Initiative (ABI). The stated purpose of the initiative is to promote broadband deployment by streamlining the federal permitting process and leveraging federal assets to lower the cost of deploying broadband. The announcement was accompanied by a Milestones Report that lists numerous specific federal initiatives and associated timelines.

Big announcements of this sort are usually mostly for public relations purposes rather than anything useful, and this is no exception. The main purpose of the ABI seems to be to show rural America that the federal government cares about the lack of rural broadband. Unfortunately, this kind of PR effort works, as evidenced by a conversation I had with a rural politician soon after the ABI announcement who hoped this would mean real movement towards broadband deployment in his region. I felt bad when I told him that I see nothing new or of consequence in the ABI announcement, and nothing that I thought would improve broadband in his area.

This is not to say that there was nothing of importance in the ABI. However, the most important initiatives included in the ABI are repeats of previous announcements. For example, the leading bullet point in the ABI is the announcement of the $600 million e-connectivity grant/loan program – something that everybody in the industry has known about since last fall. There were a few other repeats of past announcements, such as the intention to ease the permitting process on federal land.

A lot of the announcements have to do with permitting for broadband facilities and access to public land, including:

  • The U.S. Department of the Interior will make its 7,000+ towers available to carriers and will publish a map of them. Any tall towers on this list are already included in the FCC tower database.
  • The NTIA is creating a web site that will centralize the information needed to get permits to place telecom assets on public land.
  • The GSA is undertaking an effort to document, in flow charts, the process required to get a permit to use federal land or federal towers.
  • The GSA will also tackle simplifying the permitting application forms.
  • The GSA is soliciting comments from the public to identify areas with poor cellular coverage, with the hope that the GSA can then identify public assets that might help alleviate lack of cellular coverage.

There are a few other announcements that could be beneficial, such as streamlining the environmental and historic preservation reviews on public properties. Those requirements are a definite roadblock to using public land, but streamlining is not the same thing as eliminating, so I’d have to see what this means in practice to know if it is an actual improvement.

I have no doubt that these efforts will help a few broadband projects. However, federal lands tend to be lightly populated, and I have to wonder how many broadband projects actually want to use them. In the hundreds of broadband projects I’ve been involved in, I can count on one hand the times when federal rights-of-way were an issue.

There is one situation where this could be a benefit – the siting of antennas on top of federal buildings. In many small towns the courthouse is the tallest structure and has largely been unavailable to wireless providers. But until I see this work easily in real life I’m going to remain skeptical.

The ABI report is mostly fluff. It seems obvious that all cabinet agencies were asked to provide a list of ways they can help broadband, and they all scrambled to come up with something to report. While a few of the announced initiatives might help a handful of projects, for the most part the initiatives listed in the ABI aren’t going to help anybody. If the administration really wanted to help broadband, it could create grant programs that don’t have forced ties to RUS loans that many ISPs can’t accept, or it could eliminate the inane requirement that federal grants can only be used where homes don’t have 10/1 Mbps speeds.