How We Use Cellphone Data

Nielsen recently took a look at how we use cellphone data. They installed apps on people's phones that tracked data usage on both cellular networks and WiFi. The data comes from a massive study of 45,000 Android users in August, and Nielsen continues to track the usage of 30,000 cellular customers every month using the same app.

What Nielsen found wasn't surprising: younger people use cellular data the most. They also found that Hispanics are the largest data users among the ethnic groups studied.

Here is the average monthly usage by age:

Age        Cell Data    WiFi Data
18 – 24    3.2 GB       14.1 GB
25 – 34    3.6 GB       11.2 GB
35 – 44    2.9 GB        9.3 GB
45 – 54    2.1 GB        7.5 GB
55 – 64    1.4 GB        6.4 GB
65+        0.9 GB        4.8 GB

This study quantifies a lot of things that we already knew about cellular usage. We know, for example, that younger people use their cellphones to watch video more than older people. I have anecdotal evidence of that from watching my 17-year-old. If she's representative of her age group, then teenagers are using cellular data even more than the 18 – 24 year olds. They communicate with pictures and videos where older generations use email, chat, and text messaging.

These numbers also show that most people are not yet using their cellphones as a substitute for landline data usage. Certainly there are many individuals for whom the cellphone is their only source of data, but these numbers show average cellphone data usage far below average landline usage. I have a number of clients that track landline customer data usage, and most of them are reporting average monthly downloads somewhere between 100 GB and 150 GB per household. Comcast recently reported that their 6-month rolling median data usage is 75 GB – meaning half of their customers use less than that and half use more. All of the numbers in the table above, while representing individuals and not families, are still far below those numbers.
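
A quick illustration of why a median and an average differ for this kind of data – the usage numbers below are invented for the example, but skewed data like broadband usage always pulls the average well above the median:

```python
import statistics

# Hypothetical monthly household downloads in GB: most households are
# moderate users, while a few heavy streamers pull the average way up.
usage_gb = [30, 45, 60, 75, 80, 95, 120, 250, 400, 900]

print(statistics.median(usage_gb))  # 87.5 - half use less, half use more
print(statistics.mean(usage_gb))    # 205.5 - dragged upward by heavy users
```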

Nielsen also tracked data usage by ethnicity, as follows:

Ethnicity           Cell Data    WiFi Data
Hispanic            3.8 GB       10.1 GB
Native American     3.5 GB        7.3 GB
African-American    3.3 GB        9.1 GB
Asian               2.3 GB        9.9 GB
White               2.2 GB        8.6 GB

This shows that Hispanics, on average, are the largest users of data, both cellular and WiFi. Whites are at the bottom of the average usage chart.

Nielsen was also able to look at usage by geography. They didn't publish all of the results, but they did provide some interesting statistics. For example, they now have strong evidence that cities with widespread WiFi networks can save customers money on their cellphone plans. New York City has a lot of public WiFi, and users there use WiFi 14% more than the national average while using cellular data 12% less. Contrast this with a city like Los Angeles, which has little public WiFi – residents there use WiFi 9% less than the national average and cellular data 13% more. This kind of study can give a city a basis for quantifying the public benefits of building a public WiFi network.

AI, Machine Learning and Deep Learning

It's getting hard to read a tech article anymore that doesn't mention artificial intelligence, machine learning, or deep learning. It's also obvious to me that many casual writers of technology articles don't understand the differences and frequently interchange the terms. So today I'll take a shot at explaining the three terms.

Artificial intelligence (AI) is the overall field of working to create machines that carry out tasks in a way that humans think of as smart. The field has been around for a long time – twenty years ago I had an office on a floor shared with one of the early companies working on AI.

AI has been in the press a lot in the last decade. For example, IBM used its Deep Blue supercomputer to beat the world chess champion. It really didn't do this with anything we would classify as intelligence. Instead it used the speed of a supercomputer to look a dozen moves ahead and rank the options, favoring moves that produced the fewest possible 'bad' outcomes. The program was not all that different from chess software that ran on PCs – it was just a lot faster and used the brute force of computing power to simulate intelligence.
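
For those curious what that kind of brute-force lookahead looks like, here is a minimal sketch of a generic minimax search with toy stand-ins for the game logic – not IBM's actual algorithm:

```python
def moves(state):
    # Toy stand-in for generating legal chess moves from a position.
    return [state + 1, state - 1] if abs(state) < 3 else []

def score(state):
    # Toy stand-in for a board-evaluation function.
    return state

def minimax(state, depth, maximizing):
    # Brute-force lookahead: examine every line of play `depth` moves deep
    # and rank the options - the 'intelligence' is just exhaustive search.
    if depth == 0 or not moves(state):
        return score(state)
    results = [minimax(m, depth - 1, not maximizing) for m in moves(state)]
    return max(results) if maximizing else min(results)

print(minimax(0, depth=4, maximizing=True))
```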

Machine learning is a subset of AI that gives computers the ability to learn without being programmed for a specific task. The Deep Blue computer used a complex algorithm that told it exactly how to rank chess moves. But with machine learning the goal is to write code that allows computers to interpret data and learn from their errors, improving at whatever task they are doing.

Machine learning is enabled by the use of neural network software. This is a set of algorithms that are loosely modeled after the human brain and that are designed to recognize patterns. Recognizing patterns is one of the most important ways that people interact with the world. We learn early in life what a ‘table’ is, and over time we can recognize a whole lot of different objects that also can be called tables, and we can do this quickly.

What makes machine learning so useful is that feedback can be used to inform the computer when it makes a mistake, and the pattern recognition software can incorporate that feedback into future tasks. It is this feedback capability that lets computers learn complex tasks quickly and to constantly improve performance.
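
To make the feedback idea concrete, here is a minimal sketch of a single artificial neuron learning a simple pattern (the logical AND) purely from being told when it is wrong – a toy stand-in for a real neural network:

```python
# Training data: inputs and the target output for the AND pattern.
samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b, rate = [0.0, 0.0], 0.0, 0.1

for _ in range(20):                        # repeated passes over the data
    for (x1, x2), target in samples:
        guess = 1 if w[0]*x1 + w[1]*x2 + b > 0 else 0
        error = target - guess             # the feedback signal
        w[0] += rate * error * x1          # nudge the weights toward
        w[1] += rate * error * x2          # fewer mistakes next pass
        b += rate * error

print([1 if w[0]*x1 + w[1]*x2 + b > 0 else 0 for (x1, x2), _ in samples])
# [0, 0, 0, 1] - the neuron has learned the pattern from feedback alone
```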

One of the earliest examples of machine learning I can recall is the music classification system used by Pandora. With Pandora you can create a radio station that plays music similar to a given artist or – even more interestingly – similar to a given song. The Pandora algorithm, which they call the Music Genome Project, 'listens' to music and identifies patterns across roughly 450 musical attributes like melody, harmony, rhythm, and composition. It can then quickly find the songs with the most similar genome.
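
One way to picture the matching step is as a nearest-neighbor search over vectors of attribute scores. The songs and scores below are invented and Pandora's real algorithm is proprietary, but the principle is the same:

```python
import math

# Invented attribute scores (Pandora rates each song on ~450 attributes).
songs = {
    "Song A": [0.9, 0.2, 0.7],   # e.g. melody, rhythm, harmony scores
    "Song B": [0.8, 0.3, 0.6],
    "Song C": [0.1, 0.9, 0.2],
}

seed = songs["Song A"]
# The most similar 'genome' is the song whose attribute vector sits
# closest to the seed song's vector.
closest = min((name for name in songs if name != "Song A"),
              key=lambda name: math.dist(songs[name], seed))
print(closest)  # Song B
```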

Deep learning is the newest field of artificial intelligence and is best described as the cutting-edge subset of machine learning. It applies big data techniques to machine learning, enabling software to make sense of immense amounts of data. For example, Google might use deep learning to interpret and classify all of the pictures its search engine finds on the web – which is how Google can show you a huge number of pictures of tables, or of any other object, upon request.

Pattern recognition doesn't have to be visual. It can include video, written words, speech, or raw data of any kind. I just read about a good example of deep learning last week. A computer was provided with a huge library of videos of people talking, along with the soundtracks, and was asked to learn what people were saying just from how they moved their lips. The computer would make its best guess and then compare that guess to the soundtrack. With this feedback the computer quickly mastered lip reading and now outperforms experienced human lip readers. A computer that can do this is still not 'smart,' but it can become incredibly proficient at certain tasks, and people interpret that proficiency as intelligence.

Most of the promise of AI now comes from deep learning. It's the basis for self-driving cars that keep getting better over time. It's the basis for the computer I read about a few months ago that is developing new medicines on its own. It's the underlying technology behind the big cloud-based personal assistants like Apple's Siri and Amazon's Alexa. And it's going to be the underlying technology for computer programs that start tackling white-collar work now done by people.

Regulation and Uncertainty

The prevalent opinion seems to be that the new administration will shake up the FCC and make a lot of changes to telecom regulation. I expect I will be writing a number of blogs about those changes as they occur. But today I want to talk about regulation and uncertainty.

There has always been an interesting dynamic between regulators and the large telecom providers. No matter what regulators do, the companies always have a wish list of regulations they would like to see, and they always complain in the press about being over-regulated. This has been the case throughout my 35 years of following regulation in the industry. Regulators regulate, and the big companies act like all regulation is killing them.

This has been true no matter the make-up of the FCC in place. We currently have one of the most consumer-oriented commissions in recent memory. There have been other liberal FCCs, such as the one under Reed Hundt that oversaw the introduction of the Telecommunications Act of 1996 and the creation of CLECs. And there have been pro-business FCCs like the one under Michael Powell. Almost by definition the FCC changes direction with changes in administration, becoming more liberal or more conservative depending upon who is president.

The FCC is an independent agency, so it doesn't always act beholden to the president. The make-up of Congress has always mattered as well, and having the presidency and Congress split between the parties has generally tempered the FCC's decisions, since Congress holds the agency's purse strings.

It’s also important to remember that the FCC doesn’t make decisions in a vacuum. Almost every major policy change the FCC tries to implement gets challenged in court, and over the years the courts have reversed a number of major FCC initiatives.

But with all of that said, it sounds like we are going to see big changes. The new FCC is likely to reverse a lot (or even most) of the changes made by the current FCC. To a large degree the big telcos are going to be granted a lot of the things that are on their wish list.

But here is the kicker. The one thing that the big companies hate more than regulation is regulatory uncertainty. You can be sure that if the new FCC makes radical changes and undoes everything done by the current Democratic FCC, then the next time there is a Democratic president things could easily be changed back again.

That uncertainty is poison to the industry. Just try to picture what this kind of regulatory fluctuation can mean. Take the issue of net neutrality and the way the current FCC feels about zero-rating – the practice where an ISP favors some content over other content. For instance, AT&T plans to zero-rate its DirecTV Now product for its cellular customers, meaning customers will be able to watch it on their cellphones without it counting against their data caps. This gives the AT&T product a huge leg up over any other streaming service for its 110 million wireless subscribers.

If zero-rating is allowed by the next FCC then much bigger deals will be made. One can picture Netflix or Facebook Live paying AT&T to deliver their content without it counting against the cellular data caps. Over a few years this would turn into big business for AT&T and become something their customers expect. What happens, though, when a future Democratic FCC reverses the decision on zero-rating and makes it taboo again? That would be hugely disruptive to the industry and would cost the players involved a ton of money.

As much as AT&T wants zero-rating, I bet that if you told them that over the next twenty years it would be allowed, then banned, then perhaps allowed again, back and forth, they might feel differently about it. What they really want is a regulatory environment with some staying power, because that allows them to make long-term investments and business decisions. Regulatory uncertainty is bad for the big companies and they know it. And it's bad for their stock prices. As much as these companies might be happy now to be getting a pro-business FCC, they will be massively unhappy if the pendulum swings hard the other way every four or eight years.

Just as the country is split down the middle between right and left, it looks like we have come to the point where FCC policy might swing wildly based on the party in power. We’ve had changes at the FCC before due to changes in administration, but we have never had anything like the swing that looks to be coming now, and the future ones that might go back the other way. This is not how regulation is supposed to work, but it might be our new reality.

Two Rural Wireless Technologies

I see articles all of the time where the authors treat the various wireless technologies used in rural markets as if they are all the same. They talk about the 'rural wireless' solution without acknowledging that there are different wireless technologies, using different spectrum, with different operating characteristics. And this is dangerous, because I've found that politicians and decision makers often don't understand that 'wireless' can mean a wide range of different technologies. So this blog talks about the two primary wireless technologies that are going to see big play in future rural broadband deployments.

The first wireless technology is cellular data. Rural customers have been using cellular data plans for years to provide a bare-bones data connection at home. But at $10 to $15 per downloaded gigabyte, this is one of the most expensive sources of broadband in the world. Yet rural cellular data is going to be in the news a lot more, because AT&T and Verizon intend to use cellular data to replace landline copper connections. AT&T is actively deploying cellular data to fulfill its obligations under the CAF II program.

Cellular companies own a wide range of licensed spectrum, meaning they are the only ones who can use it. Most cellular frequencies share characteristics like the ability to penetrate obstacles such as trees or the walls of a home fairly well. The least understood characteristic of cellular frequencies used for wireless data is that the strength and speed of the delivered bandwidth is largely determined by a customer's distance from the cell tower. Customers close to a tower might get speeds several times faster than somebody three miles away.

Both AT&T and Verizon intend to use 'fixed' cellular for rural broadband. That means they will install a small antenna outside a home, which allows for a stronger signal than receiving the signal inside the house. Speeds will vary not only with distance from the tower but also with the specific version of the technology being used. For example, 3G cellular delivers only a few Mbps. Current 4G technology can provide up to about 15 Mbps to those close to a tower, but there are many different generations of 4G deployments that will have slower speeds. 5G data will be faster yet, with a goal of 50 Mbps, but for a number of reasons (which I won't go into here) there might not be widespread deployment of rural 5G for at least a decade.
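
The distance effect can be illustrated with the standard free-space path-loss formula. Real-world losses from terrain and foliage are worse than this, but the trend is the same: every doubling of distance costs about 6 dB of signal, and a weaker signal forces the radio to fall back to slower modulation:

```python
import math

def free_space_path_loss_db(distance_km, freq_mhz):
    # Standard free-space path-loss formula, result in dB.
    return 20 * math.log10(distance_km) + 20 * math.log10(freq_mhz) + 32.44

# A 700 MHz cellular signal at increasing distances from the tower.
for miles in (0.5, 1, 3):
    loss = free_space_path_loss_db(miles * 1.609, 700)
    print(f"{miles} mi: {loss:.0f} dB of path loss")
```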

The other primary technology for rural wireless is point-to-multipoint wireless. In these networks a transmitter on a tower sends a focused microwave beam to a small dish at a customer's home. The frequencies used for point-to-multipoint data are much higher than cellular frequencies. If these frequencies were openly broadcast like cellular they wouldn't travel very far, and that lack of reach is why the technology uses a focused beam.

Today the most common deployment of this technology uses unlicensed WiFi spectrum, the same spectrum used by WiFi routers in the home. The swaths of spectrum used are at 2.4 GHz and 5.7 GHz. In a point-to-multipoint network these two frequencies are often used together, with the higher 5.7 GHz reaching the closest customers and the lower frequency serving customers who are farther away. In practical use in wide-open conditions these frequencies can serve customers up to about 3–4 miles from a transmitter. The frequencies have a theoretical cap of 28 Mbps of bandwidth, but it's possible to get faster speeds by bonding multiple signals together.

The FCC recently authorized the use of the 3.65–3.70 GHz band for rural broadband. This spectrum penetrates trees better than WiFi spectrum. It can support a signal of up to 37 Mbps, and it can be combined with WiFi spectrum to produce even faster speeds. In practical application this spectrum can deliver up to 20 Mbps out to five miles from the transmitter, with more bandwidth for those who are closer. There are places where this spectrum can't be used, such as near a government satellite farm or a military base.

The FCC is also expected to soon release 'white space' spectrum for rural broadband. This is spectrum in the VHF and UHF television bands – four ranges at 54–72 MHz, 76–88 MHz, 174–216 MHz, and 470–698 MHz, the last of which covers TV channels 14 through 51. The downside of this spectrum is that it won't be available everywhere, since in some places TV stations will keep their existing channels.

The rural potential for white space spectrum is to extend point-to-multipoint radio systems. White space radios should be able to deliver perhaps 45 Mbps up to about 6 miles from the transmitter. That's easily twice as far as what can be delivered today using unlicensed WiFi spectrum, creating a coverage circle 12 miles across around each transmitter.
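
Doubling the reach matters more than it might sound, because the area covered grows with the square of the radius – a quick sketch:

```python
import math

wifi_radius_mi = 3   # rough reach of unlicensed point-to-multipoint today
ws_radius_mi = 6     # projected white space reach

wifi_area = math.pi * wifi_radius_mi ** 2   # ~28 square miles
ws_area = math.pi * ws_radius_mi ** 2       # ~113 square miles
print(f"{ws_area / wifi_area:.0f}x the coverage per transmitter")  # 4x
```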

Both of these wireless technologies must be fiber-fed in order to achieve the best speeds. There are numerous deployments today of point-to-multipoint wireless systems that deliver only a few Mbps to customers. That's usually because the transmitters are not connected to fiber or because the customers are too far from the tower.

The distinction between the two kinds of wireless technologies matters. Today a cellular network is not capable of delivering speeds that are considered broadband. But point-to-multipoint networks can exceed the FCC definition of broadband if the towers are fiber-fed and the customers are close enough to the tower. Yet the technology often gets a bad name because a lot of wireless companies deliver products far slower than what the spectrum is capable of.

Latency and Broadband Performance

The industry always talks about latency as one of the two measures (along with download speed) that define a good broadband connection. So I thought today I'd talk about latency.

As a reference, the standard definition of latency is that it’s a measure of the time it takes for a data packet to travel from its point of origin to the point of destination.

There are a lot of underlying causes of the delays that increase latency – the following are the primary kinds:

  • Transmission Delay. This is the time required to push packets out the door at the originating end of a transmission. It's mostly a function of the kind of router and software used at the originating server. Packet length also plays a role – it generally takes longer to create one long packet than multiple short ones. These delays are caused by the originator of an Internet transmission.
  • Processing Delay. This is the time required to process a packet header, check for bit-level errors and to figure out where the packet is to be sent. These delays are caused by the ISP of the originating party. There are additional processing delays along the way every time a transmission has to ‘hop’ between ISPs or networks.
  • Propagation Delay. This is the delay due to the distance a signal travels. It takes a lot longer for a signal to travel from Tokyo to Baltimore than it takes to travel from Washington DC to Baltimore. This is why speed tests try to find a nearby router to ping so that they can eliminate latency due to distance. These delays are mostly a function of physics and the speed at which signals can be carried through cables.
  • Queueing Delay. This measures the amount of time that a packet waits at the terminating end to be processed. This is a function of both the terminating ISP and also of the customer’s computer and software.

Total latency is the combination of all of these delays. You can see by looking at these simple definitions that poor latency can be introduced at multiple points along an Internet transmission, from beginning to end.
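
In other words, end-to-end latency is just the sum of these components. A sketch with purely illustrative numbers:

```python
# Illustrative delay values in milliseconds - real numbers vary widely
# with the route, the technology, and the endpoints involved.
delays_ms = {
    "transmission": 2,   # pushing packets out at the origin
    "processing": 3,     # header checks at each router along the way
    "propagation": 20,   # distance traveled through the cables
    "queueing": 5,       # waiting to be handled at the far end
}
print(sum(delays_ms.values()), "ms total latency")  # 30 ms
```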

The technology of the last mile is generally the largest factor influencing latency. A few years ago the FCC studied the various last-mile technologies and measured the following ranges of last-mile latency, in milliseconds: fiber (10-20 ms), coaxial cable (15-40 ms), and DSL (30-65 ms). These are measures of latency between a home and the first node in the ISP's network. It is these latency differences that cause people to prefer fiber – the experience on a 30 Mbps fiber connection 'feels' faster than the same speed on a DSL or cable connection because of the lower latency.

It's also last-mile latency that makes wireless connections seem slow. Cellular latencies vary widely depending upon the generation of equipment at a given cell site, but 4G latency can be as high as 100 ms. In the same FCC test that produced the latencies above, satellite was almost off the chart, with latencies measured as high as 650 ms.

The next biggest factor influencing latency is the network path between the originating and terminating ends of a signal. Every time a signal hits a network node, the router there must examine the packet header to determine the route and may run other checks on the data. Each pass through a network router or between networks is referred to in the industry as a hop, and every hop adds latency.

There are techniques and routing schemes that can reduce the latency that comes from extra hops. For example, most large ISPs peer with each other, meaning they pass traffic between them and avoid the open Internet. By doing so they reduce the number of hops needed to pass a signal between their networks. Companies like Netflix also use caching where they will store content closer to users so that the signal isn’t originating from their core servers.

Internet speeds also come into play. The transmission delay is heavily influenced by the upload speed at the originating end of a transmission, and the queueing delay is influenced by the download speed at the terminating end. A simple example illustrates this: a 10-megabit file takes one-tenth of a second to download on a 100 Mbps connection and ten seconds on a 1 Mbps connection.
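
A quick sketch of that arithmetic:

```python
def transfer_seconds(size_megabits, speed_mbps):
    # Idealized transfer time, ignoring protocol overhead and latency.
    return size_megabits / speed_mbps

print(transfer_seconds(10, 100))  # 0.1 seconds on a 100 Mbps connection
print(transfer_seconds(10, 1))    # 10.0 seconds on a 1 Mbps connection
```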

A lot of complaints about Internet performance are actually due to latency issues. Latency is hard to diagnose because the problems can come and go as traffic between two points takes different routes. But the one thing that is clear is that the lower the latency, the better.

Can Satellites Solve the Rural Broadband Problem?

A few weeks ago Elon Musk announced that his SpaceX company is moving forward with plans to launch low earth orbit (LEO) satellites to bring better satellite broadband to the world. His proposal to the FCC would put 4,425 satellites around the globe at altitudes between 715 and 823 miles. This contrasts sharply with the current HughesNet satellite network, which sits 22,000 miles above the earth. Each satellite would be roughly the size of a refrigerator and would be powered by a solar array.

This idea has been around for a long time, and I remember a proposal to do something similar twenty years ago. But like many technologies, it wasn't commercially feasible in the past, and it took improvements in the underlying technologies to make it possible. Twenty years ago nobody could have packed enough processing power into a satellite to do what Musk is proposing; Moore's Law suggests that today's chips and routers are at least 500 times faster than those of two decades ago. These satellites will also be power hungry, and they weren't possible until modern solar cells were developed. This kind of network also requires a huge number of rocket launches – something that was impractical and incredibly expensive twenty years ago. But if the venture works it would provide lucrative revenue for SpaceX, and Elon Musk seems to be good at finding synergies between his companies.

Musk's proposal has some major benefits over existing satellite broadband. Being significantly closer to the earth, the satellites would deliver data with a latency of between 25 and 35 milliseconds. That is much better than the roughly 600-millisecond delays of current satellites, and it would put satellite broadband into the same range achieved by many landline ISPs. Current satellite broadband has too much latency to support VoIP, video streaming, or any other live Internet connections like Skype or distance learning.
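
The physics is easy to check. Even before any processing overhead, the round trip to a geostationary satellite eats hundreds of milliseconds, while a low orbit cuts that to a handful:

```python
SPEED_OF_LIGHT_MI_PER_S = 186_282

def round_trip_ms(altitude_miles):
    # Four legs (up and down, out and back) for a satellite directly
    # overhead - real paths are longer, and processing time adds more.
    return 4 * altitude_miles / SPEED_OF_LIGHT_MI_PER_S * 1000

print(f"{round_trip_ms(22_000):.0f} ms")  # ~472 ms for geostationary orbit
print(f"{round_trip_ms(800):.0f} ms")     # ~17 ms for LEO at ~800 miles
```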

The satellites would use frequencies between 10 GHz and 30 GHz, in the Ku and Ka bands. Musk says that SpaceX is designing every component, from the satellites to the earth gateways to the customer receivers. For any of you who want to crawl through specifications, the FCC filing is intriguing.

The large number of satellites would provide broadband capability to a large number of customers, while also blanketing the globe and bringing broadband to many places that don’t have it today. The specifications say that each satellite will have an aggregate capacity of between 17 and 23 Gbps, meaning each satellite could theoretically process that much data at the same time.

The specifications say that the network could produce gigabit links to customers, although that would require simultaneous connections from several satellites to a single customer. And while each satellite has a lot of capacity, using it to provide gigabit links would chew up the available bandwidth in a hurry and would mean serving far fewer customers. It's more likely that the network will be used to provide speeds of 50 Mbps to 100 Mbps.
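
Some rough, back-of-the-envelope math on what a single satellite could support (the oversubscription ratio here is my assumption, not anything from the SpaceX filing):

```python
satellite_gbps = 20      # midpoint of the stated 17-23 Gbps per satellite
customer_mbps = 100      # target speed per customer
oversubscription = 20    # assumed - customers rarely all peak at once

simultaneous_streams = satellite_gbps * 1000 / customer_mbps
subscribers = simultaneous_streams * oversubscription
print(int(simultaneous_streams), "full-rate streams")   # 200
print(int(subscribers), "subscribers per satellite")    # 4000
```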

But those speeds could be revolutionary for rural America. The FCC's CAF II program is currently spending $9 billion to bring faster DSL or cellular service to rural America at speeds that must be at least 10/1 Mbps. Musk says this whole venture will cost about $10 billion and could bring faster Internet not only to the US but to the world.

It's an intriguing idea, and if it were proposed by anybody other than Elon Musk it might sound more like a pipedream than a serious plan. But Musk has shown the ability to launch cutting-edge ventures before. There is always a long way between concept and reality, and like any new technology there will be bugs in the first version. But assuming that Musk can raise the money, and assuming that the technology really works as promised, this could change broadband around the world.

This technology would likely be the death knell of slower rural broadband technologies like LTE cellular, DSL, or poorly-deployed point-to-multipoint wireless systems. In today’s world the satellites would even compete well with current landline data products in more urban areas. But over a decade or two the ever-increasing speeds that customers will want will ultimately still be better served by landline connections. Yet for the near future this technology could be disruptive to numerous landline broadband providers.

It's hard to envision all the implications of providing fast broadband around the globe. For example, this would provide a connection to the web that is not filtered by a local government. It would also bring real broadband to any rural place that has available power. In the poorer nations of the world this would be transformational. It's hard to overstate the potential impact this technology could have around our planet if it's deployed successfully.

Musk says he would like to launch his first satellite in 2019, so I guess we won’t have to wait too long to see if this can work.  I’ll be watching.


Unintended Consequences

I write a lot about new technologies that are likely to impact our daily lives or the small-carrier business over the next decade. When looking at new technologies it's easy to think only about the positive aspects and to not consider the negative or unintended consequences. To be fair, both the positive and the negative should be considered when talking about new technologies – because transformational technologies always have unintended consequences.

A great example of an unintended consequence is the impact that owning smartphones, tablets, and similar electronics has had on children. I rarely see kids playing outside where I live. My wife swears that there are kids living all around us, but I have to take her word for it since I never see them. Certainly when smartphones and other electronics were invented they were not intended to transform the way our youth spend their lives – but the unintended consequence of the technology has been to keep kids inside.

There are a number of major technologies that we are going to be seeing a lot more of within a decade such as self-driving cars, artificial intelligence, gene-splicing, etc. I read articles all of the time talking about the benefits these technologies will bring into our lives, but I rarely see anybody talking about the flip side.

Consider driverless cars. There are a number of benefits that can be envisioned for the technology. Self-driving cars will let the elderly stay self-sufficient longer. They will eliminate drunk driving and texting while driving and will drastically cut traffic fatalities. But I can also think of a number of possible unintended consequences of driverless cars.

One of the big touted benefits of driverless cars is that they will increase urban driving efficiency since self-driving cars can move en masse without gaps between cars. But for anybody who has ever lived in a city, this could end up increasing gridlock rather than decreasing it. It’s not hard to imagine people going to the store and having their car circle the block endlessly until they are ready for it. There could be hordes of such empty cars driving in circles and clogging city streets.

Driverless cars also will free people up to do something other than driving. For people who commute this might mean extending the work day. I picture conference calls (voice and video) and other communication being scheduled during driving time.

Another unintended consequence of moving the home or office into the vehicle could be the death of traditional radio – a medium that is already struggling against streaming music and podcasts. Radio mostly thrives today based on advertising to people while they are driving. You can listen to radio without much physical interaction with the radio. But if driverless cars free people up to work or to do things they would have done at home, then the need for radio largely disappears – there are a lot of better ways to get music, news or entertainment if you are not occupied with driving.

Carried to extremes it’s not hard to imagine people living in their driverless cars. Take out the steering wheel and traditional seats and a car could be turned into a cozy cave. One could picture people adopting a totally mobile lifestyle using solar powered self-driving cars. It’s hard to imagine the effect on society of having a lot of nomadic car-people with no loyalty or identity to a fixed address.

I'm not particularly picking on the idea of driverless cars, because I fully expect the benefits to outweigh any negative impacts. But some of these unintended consequences are not inconsequential. People will always find ways to use new technologies that were never envisioned. I could make a similar list for every other major technology we expect to see in the coming decade. Not all unintended consequences are bad, but it's likely that some of the incidental consequences of new technologies will have more impact on society than the intended ones.

The telecom industry is not immune from unintended consequences. For example, consider the deployment of fast broadband and the fact that fast broadband technologies are expensive to deploy. While broadband has brought great benefits to communities that have it, those parts of the country without it are starting to see big impacts from the lack of broadband. This isn’t something that anybody intended to happen, but it is a natural result of the expensive cost of deploying the new technology.

Is Altice Really Bringing FTTP?

Late last week Altice released a press announcement saying they are going to bring fiber-to-the-home to all of their newly acquired US properties within five years. For those not familiar with Altice, the company is now the fourth biggest cable company in the US, created through the recent acquisitions of Suddenlink Communications for $9.1 billion and Cablevision for $17.7 billion. These acquisitions bring the company about 4.6 million customers.

But there are parts of the press release that have me scratching my head. The headlines announce ‘A full-scale fiber-to-the-home network investment plan’ which will bring ‘large scale fiber-to-the-home deployment across its footprint.’ That sure sounds like the company will give everybody FTTP.

But deeper in the press release are several statements that have me wondering what the company is really planning. For example, they say they will 'drive fiber deeper into our infrastructure.' Deeper into the infrastructure is not necessarily the same as providing fiber the whole way to the home. That is the same kind of language Comcast used when they announced their mostly-imaginary 2-gigabit broadband product.

Even more puzzling is the statement that 'the new architecture will result in a more efficient and robust network with a significant reduction in energy consumption. Altice expects to reinvest efficiency savings to support the buildout without a material change in its overall capital budget.' If Altice has 4.6 million customers then they must have around 6 million passings. They will be able to build a lot of the needed network by overlashing fiber onto existing coaxial cable, but even that will probably cost in the range of $500 per passing, meaning an outlay of $3 billion. Bringing fiber into the home costs in the range of $600 to $800 per customer. Add the core FTTP electronics of at least $200 per customer, and converting existing customers to fiber could cost another $3.7 to $4.6 billion, for a total outlay of $6.7 billion to $7.6 billion.
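
The back-of-the-envelope math behind those numbers (my cost assumptions, not figures from Altice):

```python
passings = 6_000_000      # estimated from ~4.6 million customers
customers = 4_600_000

fiber_per_passing = 500               # overlash fiber onto existing coax
drop_per_customer = (600, 800)        # bringing fiber into the home
electronics_per_customer = 200        # core FTTP electronics

network = passings * fiber_per_passing                    # $3.0 billion
conversion = [customers * (drop + electronics_per_customer)
              for drop in drop_per_customer]              # $3.7B - $4.6B
totals = [(network + c) / 1e9 for c in conversion]
print([f"${t:.1f}B" for t in totals])  # ['$6.7B', '$7.6B']
```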

The energy savings they are talking about would come from shutting down the existing hybrid fiber-coaxial network. To achieve those savings they would have to convert every customer to fiber, since it takes as much electricity to run the network for a handful of customers as it does for everybody. But I have a hard time believing they can save enough in power costs to pay for an expensive new fiber network without increasing capital budgets. I have a number of clients operating HFC networks, and none of them have power bills of anywhere near the magnitude needed to produce that kind of savings.

This FTTP plan also has to be squared with Altice's promises to their shareholders. They promised significant cost savings after the acquisitions of Suddenlink and Cablevision, and it's already hard to see how they will deliver those. For example, their largest property is in New York, where they promised the PUC not to eliminate any customer-facing jobs (technicians and customer service reps) for five years.

They also point to their fiber rollouts in Portugal and France. In Portugal fiber is being deployed mostly due to heavy subsidies from a government hoping that fiber will boost a poor economy. And in France their business plan is different from the US one – Altice benefits greatly from a quad play that includes cellular service. My quick analysis of their financial performance shows that wireless drives a big piece of their profitability there, and it's unlikely they can put together a similarly profitable wireless play here in the US.

Finally, the company seems to have spent heavily this past year on upgrading existing HFC cable networks. I’ve read a dozen local press releases in Suddenlink markets that talk about completing digital conversions and upping data speeds to as much as a gigabit using DOCSIS 3.0. It’s curious they would pour that much money into their HFC networks if they are getting ready to abandon them for fiber.

I hope I am wrong about this and that they bring fiber everywhere. That would certainly highlight Comcast's and Charter's decision to milk their HFC networks for decades to come. But the press release as a whole sets off my radar and is reminiscent of similar releases in recent years from AT&T and Comcast touting gigabit deployments. There are just too many parts of this press release that don't add up.

What’s Your Brand?

A recent blog I wrote reminded me of a basic question that I have always asked clients: 'What's your brand?' Every business has a brand, whether it's explicit or implicit. People living in your service area know you either by what others say about you or by what you tell them about yourself.

So what does it mean to have a brand? It means that when people think of your company they think about you in a certain way – it's the images, emotions, and decisions they associate with you in their minds. If you don't work to create your own brand then you might just be 'that telephone company' or 'those wireless guys.' There is nothing inherently wrong with that, but it might mean that someone new to the area has to make a bunch of phone calls to figure out who you are, or that existing customers are more easily swayed by competitors with a better brand.

Having a brand is a lot more important if you are competing for customers, which most small carriers do these days. It’s important to have a brand when you are trying to sell to people who don’t know you, or to keep customers you already have.

So what do I mean by a brand? A brand can be almost anything you want customers to remember about you. I've seen hundreds of different types of brands in telecom. Some are very simple, such as 'your local telecommunications company.' Others say more about what it's like to work with a company, such as 'making broadband easy' or 'total business broadband solutions.'

But I am still surprised by how many small carriers don't have a brand. I can understand this for a monopoly telephone company whose customers have no other options. But it mystifies me why a carrier facing competition wouldn't want an easy way for customers to understand who they are.

Some kinds of branding are obvious. For example, cooperatives and municipal broadband companies often remind customers that the business belongs to them. That can be an effective brand if done well. But I see a surprising number of these entities that don’t do a very good job at reminding people of this.

Probably the most common branding I see is companies claiming to be the 'local telecom provider' or 'your local ISP.' But unless there is a story behind that claim telling people why local is a good thing, it can ring somewhat hollow. After all, any company that will send a technician to somebody's door is also local.

The one thing I know about branding is that whatever you tell the public had better be true. You don't want to tout yourself as 'the broadband company' if you are still delivering slow DSL to a lot of your customers. That kind of branding can work against you and will remind your customers every day how bad their broadband is. And such customers will leap to another carrier with better broadband if they ever get the chance.

If you don’t have a brand, it can be surprisingly challenging to pick one. But if you put a bunch of your employees in a room they can probably come up with a few good ideas. Here are some of the more common ones I see that I think are effective: ‘bringing gigabit broadband to X’; ‘telecom solutions since 1915’; ‘making broadband easy’; ‘21st century solutions for rural America.’

The chances are that you already have a brand but haven't thought about it for a long time. If that's the case then ask yourself, 'Does my brand still tell the public what I want them to know about me?' Also ask, 'Is this brand really who we are?' You might be surprised by the answers to those two questions, and if so, it's time to update your brand.

Productizing Safety

The Internet is becoming a scarier place by the day for the average user. It seems like a week doesn't go by without news of some new and huge data breach or other nefarious use of the web. And as much as those big events create a general sense of unease across the industry, the announcements also make people worry about their own individual Internet security.

The big ISPs like AT&T crow about recording and monetizing everything that their customers do on the web, and with a likely weakening or elimination of Title II regulation by the FCC this is likely to intensify. Every website parks cookies on the computers of its visitors, and the bigger sites like Facebook and Google gather every fact fed to them and peddle it to the advertising machine. There are hackers who lock down PCs and hold them hostage until the owner pays a ransom. There are smart TVs that listen to us and IoT devices that track our movements inside our homes. There was even news this week that smartphones with a certain Chinese chip have been sending every keystroke back to somebody in China.

All of this has to be making the average Internet user uneasy. And that makes me wonder if there is not a product of some sort that smaller ISPs can offer to customers that can make them feel safer on the web.

Savvy Internet users already take steps to protect themselves. They use ad blockers to cut down on ads and tracking cookies. They use search engines like DuckDuckGo that don't track them. They use encryption and visit sites over HTTPS. They regularly scrub their machines of cookies and of stray, unidentified files. In the extreme, some use a VPN to keep their ISP from spying on them.

Small ISPs are generally the good guys in the industry and don’t engage in the practices used by AT&T, Comcast and Verizon. I know some small ISPs that try to communicate to their customers about safety. But I think safety is now one of the biggest worries for people and I think small ISPs can do more.

Customers can really use the help. It's easy to assume that customers understand basic safety procedures, but the vast majority of them load some sort of virus protection onto their PC the day they buy it and never think about safety again. They repeatedly do all of the things that lead to trouble: they open email attachments, they don't update their software with the latest security patches, and they use social media and other sites without setting basic privacy filters.

I think there is an opportunity for small ISPs to be proactive in helping their customers feel safer, and in the process to create more loyal customers. There are two possible ways to undertake this. One is an intensive education campaign to teach customers better web practices – not the occasional safety reminder, but a steady, concentrated effort to tell your customers how to be safer on the web. Brand yourself as a provider that looks out for their safety. But don't pay it lip service – do it in a proactive and sustained way.

I also think there is a space for a ‘safety’ product line. For example, I have clients who run a local version of the Geek Squad and who repair and maintain people’s computers. It would not be hard to expand on that idea and to put together a ‘safety’ package to sell to customers.

Customers could have a service tech come to their home for a day each year to 'fix' all of their safety weaknesses. That might mean installing ad blockers and a spyware scrubber. It would mean updating their browsers and other software to the latest versions. It could mean helping them safely remove software they don't use, including the junkware that comes with new computers. It might include making sure they are using HTTPS everywhere. And it might mean selling a VPN to those who want the highest level of security.

I have clients who have been selling this kind of service to businesses for years, but I can't think of anybody doing it in any meaningful way for residential customers. Since the web is getting less safe by the day, there has to be an opportunity for small ISPs to distinguish themselves from larger competitors and provide a needed service – for pay, of course.