How Much Should You Spend to Stay in the Cable Business?

One of the questions I am often asked is, “How much money should I be willing to spend to stay in the cable TV business?” It’s a great question because I think every small cable provider probably understands that they are losing money today on cable. Plus everything they read tells them that the cable business is poised to undergo tremendous change.

The cable business is by far the most capital-intensive of the three triple-play services. Cable headends are expensive and they seem to need constant upgrades. Programmers alone cost you a lot of money. They move networks between satellites; they change the compression on channels; they push you to add more high definition channels and additional networks. And settop boxes cost a lot to maintain. They break and wear out. People move and take them with them. And they become obsolete – Scientific Atlanta recently stopped supporting one of the more common settop boxes in the market.

Additionally there is always pressure to offer all of the bells and whistles. A lot of content is being added to video on demand and customers seem to really like it (although they don’t spend as much on VOD movies as we were all once promised). The big cable companies offer TV everywhere so that customers can watch TV on computers, laptops, tablets and smartphones. The middleware vendors are always coming out with updates and improvements and want to charge more.

This leads you to ask the big-picture question – how much money do I pour into a business line that is losing money? This is something that every business faces from time to time. I know a decade ago many of my clients faced this same question with their sales of PBX and key systems. Most of them lost money on that business line if they were honest about the real cost of being in the business. And many of them decided to cut the cord and ditch the losing business line, although others kept it.

Business school basics would tell you to ditch lines of business that are losing money. But dropping cable is not easy; for a triple-play provider it would be like McDonald’s dropping the Big Mac, and for most of them getting out of the cable business is simply not practical. So what do you do about this situation? Every company is different, but here are some ideas worth considering:

Charge More. If your cable line of business is not profitable, then reduce your losses by raising your rates more than normal. I had one client who raised rates 10% per year for four years and finally got close to profitability. You will lose some customers along the way, but there is no particular reason for you to subsidize a losing product by having rates lower than the surrounding market. And within the product line make sure you are charging enough for settop boxes, HD channels and the other ancillary cable products.
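For what it’s worth, four annual 10% increases compound to more than a 46% total rise. A quick sketch of that arithmetic (the starting rate is my own assumed figure, not from any actual client):

```python
# Compound effect of repeated annual rate increases. The starting
# monthly rate is an illustrative assumption only.
rate = 60.00              # assumed starting monthly cable rate, in dollars
for year in range(1, 5):
    rate *= 1.10          # a 10% increase each year
    print(f"Year {year}: ${rate:.2f}")

cumulative_pct = (1.10 ** 4 - 1) * 100
print(f"Cumulative increase over four years: {cumulative_pct:.1f}%")
```

The point of the sketch is simply that modest-sounding annual increases stack up quickly when applied every year.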

Pinch Pennies on Capital. If you are losing money on cable then you will never recover any investment made in cable assets. Delay upgrades and delay putting in new channels as long as possible. Make sure you are buying forward-looking settop boxes that will be compatible with future changes in the industry.

Don’t Try to Do Everything.  At some point you need to decide if you can really afford all of the bells and whistles. You really need to understand your market and your customers and decide what will happen if you don’t offer TV Everywhere or if you don’t update the VOD system. Let’s face it, little companies can’t keep up with big companies like Comcast. Comcast has headends that serve millions of customers and that makes it easier for them to justify upgrades. Unless your customers absolutely demand everything, then start paring back your offering. Perhaps cut channels and don’t implement every industry upgrade. It may not feel right to not have the state-of-the-art system, but it feels pretty good to not be bleeding money.

Is There a Web Video Crisis – Part IV and Final

In the previous three installments of this blog I looked at the issues behind the demands of Comcast and Verizon to charge content providers for creating an Internet ‘fast lane’. In particular I have focused on the recent actions between Comcast and NetFlix. In everything I have read about this issue I never saw any specific reason cited why Comcast thought they needed the extra payments from NetFlix, and this blog series has been about looking for such reasons.

In the earlier blogs I looked at the various components of the Comcast network and my conclusion is that end-user customer fees ought to be covering the cost of the wires, or at least that is how all of the companies smaller than Comcast and Verizon see the issue. I then looked at the issue of preparing the network for peak video usage during simulcasts. Again, my conclusion is that this is a function that is a normal part of making your network operational and doesn’t seem like a reason to charge a premium price to get what is supposed to be there. Finally, I looked at peering, data centers and the network of routers and switches. My conclusion there was that peering generally saves money for Comcast and Verizon and that their savings from peering are far larger than their costs.

In the months leading up to the announcement that the two parties had reached a deal, I had seen numerous complaints from customers who said that their NetFlix was not working well on both Comcast and Verizon. And there were numerous articles like this one asking if Comcast and Verizon were throttling NetFlix. There was clearly something fishy going on, and it was clear that both Verizon and Comcast were somehow slowing down NetFlix bits as compared to other bits. The complaints were all coming from NetFlix traffic, and we didn’t see the same complaints about Amazon Prime or other video providers. And I heard no complaints anywhere about the speeds on the TV Everywhere products offered directly by Comcast and Verizon. I know I was watching Game of Thrones online in HD through my Comcast subscription and it always worked perfectly.

Then, when there was an announcement, it was made to sound like NetFlix was the one who had requested premium access from Comcast. The Verizon deal was done much more quietly and there was no similar insinuation there. But almost instantly after Comcast struck the deal with NetFlix, the speeds popped back up to former levels.

One has to ask if NetFlix really got premium treatment of their bits or if Comcast simply removed whatever impediments were slowing them down. I will be the first to admit that I, like almost everybody else, am an outsider, and we really don’t know what the two parties discussed as part of this announcement. But when I look at the facts that are known to me, what I see is that Comcast and Verizon were flexing their monopoly power and slowing NetFlix down to extract payment out of them.

There is no doubt that NetFlix traffic imposes costs on these two companies. Video traffic has been growing rapidly on the Internet and NetFlix is the largest single provider of video. But I step back and have to ask the basic question of what end-user fees for Internet access are supposed to cover. A customer pays for a connection of a given speed, and it seems to me that these companies have promised the customer that they can use that speed. There is the caveat that Comcast has a data cap – a topic for another blog – but as long as a customer stays under that data cap they ought to always get the speed they have purchased. It shouldn’t matter if that customer chooses to use that speed and capacity to watch NetFlix or to read silly telecom blogs – they have paid for a certain level of performance.

For Comcast to say that their network is not capable of delivering the accumulated speeds they have sold to customers sounds to me like they have oversold the capacity of their network. They want customers to buy fast speeds, but they don’t actually want them to use it. I’m not a lawyer, but this starts sounding like fraud, or something similar to fraud.

I simply don’t understand why the FCC would listen to any argument that says that content providers have to somehow pay extra to get normal performance. Because that is what it looks like NetFlix had to do. I can imagine that as part of that agreement there was a nondisclosure signed on the terms, and this is why NetFlix is not out yelling like they probably ought to be.

But the long-term result of what Comcast and Verizon have done is that end users are going to pay twice for video access. They already pay to get a data pipe which is large enough to receive video. And now the cost of movies or movie subscriptions is going to increase to cover what NetFlix has to pay to deliver those movies. NetFlix is certainly not going to eat such costs.

And so the consumer is being screwed by a clear case of corporate greed. I have come to the conclusion that Comcast extracted payments out of NetFlix simply because they are large enough to do so. That is an abuse of monopoly power, and that power is only going to get worse if they are allowed to buy Time Warner.

Is There a Web Video Crisis – Part III

In the earlier two installments of this blog I looked at various components of the Internet backbone to see if any of them might be the reason why Comcast and Verizon want to charge NetFlix and other content providers for an Internet ‘fast lane’. I’ve looked at the fiber and distribution networks as well as the routers in data centers. I also considered caching. In this article I will finally look at peering as a possible reason why there ought to be an Internet fast lane.

Peering is when two networks directly interchange traffic rather than let it route over the open Internet. Peering is done to save money and the savings can be significant. A company like Comcast pays something less than a dollar per dedicated megabit to accept traffic from the Internet. If they can instead have Google hand them the traffic coming from Google services they can avoid paying for the bandwidth that traffic would have incurred coming through the open Internet.

I would assume that Comcast and Google peer because Google peers with many of my much smaller clients. Peering involves having a fiber cross-connection between the two companies at a data center in each Comcast market. Each party would normally be responsible for their own routers and collocation costs at a data center. So Comcast’s costs of peering with Google are relatively small charges for collocation and cross-connection in the data center, while the savings would be gigantic.
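As a rough sketch of the economics just described (the transit price, traffic volume, market count and colocation fee below are all my own illustrative assumptions, not actual Comcast or Google figures):

```python
# Hypothetical peering-economics sketch. Every figure here is an
# illustrative assumption, not an actual Comcast or Google number.

transit_price_per_mbps = 1.00   # $/Mbps/month paid for Internet transit
peered_traffic_mbps = 400_000   # traffic volume handed off via peering
colo_cost_per_market = 2_000    # $/month for collocation + cross-connect
markets = 40                    # assumed number of peering locations

transit_avoided = transit_price_per_mbps * peered_traffic_mbps
peering_cost = colo_cost_per_market * markets
net_savings = transit_avoided - peering_cost

print(f"Transit avoided: ${transit_avoided:,.0f}/month")
print(f"Peering cost:    ${peering_cost:,.0f}/month")
print(f"Net savings:     ${net_savings:,.0f}/month")
```

Even with the assumptions shifted considerably in either direction, the avoided transit dwarfs the collocation and cross-connect charges, which is the point the paragraph above makes.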

This is a good place to note the difference to Comcast between traffic they receive from the web and traffic they send to the web. The companies that sell Internet access sell symmetrical data pipes that provide the same amount of bandwidth in both directions. But Comcast does not sell symmetrical data products to their customers; they provide vastly faster download speeds than upload speeds. For example, the Comcast 100 Mbps download product only has an upload speed of 5 – 6 Mbps. This means that the real cost to Comcast and similar ISPs for Internet traffic is in paying for downloads, because they have a huge amount of excess capacity in the upload direction. Peering saves them so much money because it shrinks the size of the download pipe they must purchase.

So it’s a given that Comcast saves money by peering with Google. With peering they do not have to purchase the bandwidth to carry all of the accumulated traffic for Gmail, Google Search, Google Maps, and all of the Android apps being used on home WiFi networks. It is estimated that for most ISPs Google is involved in around 25% of all web traffic, so peering with Google can save Comcast from buying a significant amount of Internet bandwidth.

It also costs Google money to peer with Comcast because they also have to pay to use the data centers. But Google likes peering because it speeds up their traffic and gives their customers a better experience using Google products. Peering avoids the extra hops that come from using the open Internet. Generally both parties in a peering arrangement see it as a win-win situation.

Comcast is claiming that one of the reasons they need to charge for a ‘fast lane’ is to cover the costs of peering with NetFlix. I find that claim to be interesting. There is one subtle difference between Comcast’s traffic from Google and its traffic from NetFlix. The traffic from NetFlix is one-directional, in the direction of Comcast, while there is some traffic in both directions between Comcast and Google (although it is still mostly towards Comcast). This means that the savings for Comcast from peering with NetFlix are even more dramatic than they are with Google. Comcast saves so much money by peering with NetFlix that they could pay NetFlix to peer and still save a ton of money.

When the story about Comcast and NetFlix first came out it was somewhat confusing because NetFlix was using Level3 and other intermediate carriers between themselves and Comcast. It makes sense that they would do this because NetFlix doesn’t own any actual network. The presence of an intermediate carrier does not change the fact that peering with NetFlix is an incredibly good deal for Comcast. The press reports were confusing, but it sounded like Comcast wanted NetFlix to peer directly with them and not use intermediate carriers. I can only interpret that to mean that Comcast wants NetFlix to buy transport from them and not from intermediate carriers. And this might be how Comcast is ‘charging’ for the peering arrangement. What I find totally mysterious in all of this is how Comcast is using the peering arrangement as a reason why they should be able to charge anything to NetFlix. Again, Comcast saves so much money through this peering that they ought to be the ones paying NetFlix to peer. The whole peering argument has me scratching my head.

And the picture of a cat? It’s peering!

Is There a Web Video Crisis – Part II

In the first part of this series I looked at the three areas of the customer network – the edge network, the distribution network and the Internet backbone. I came to the conclusion that if Comcast and Verizon operate the same way as the hundreds of carriers that I work for, the fees paid by end-user customers ought to be sufficient to cover the costs of those portions of the network and to ensure that the network is robust enough to carry video. It seems to me that nobody but Comcast and Verizon has a need to charge for an Internet ‘fast lane’.

But those three network components are not the entire Internet network, so to be fair to Comcast and Verizon there are a few other places to look. In this blog I will consider what happens when a lot of video hits the web at the same time. Let’s see if this might be the reason Comcast needs an Internet fast lane.

There are two different ways that video traffic can be larger than normal on the web. The first is when there is a major event simulcast on the web. A simulcast is when a video is sent to many locations at exactly the same time. The granddaddy of such events is the Super Bowl. But there are a lot of other big events like the Olympics and the soccer World Cup. In those instances a whole lot of people are watching the same event. Simulcasts don’t always involve sports, and one of the more recent web crashes came during the finale of True Detective on HBO Go.

There have been a few major crashes in the past during simulcast events, and as often as not the problem has been at the programmer’s server, which received more requests for signals than it could handle. But considering simulcast highlights another part of the Internet – the servers, switches and routers used to send, route and receive traffic over the web. These devices are the routing core of the Internet and are found today at large data centers. It certainly is possible for these devices to get overwhelmed. In the past when there have been web crashes it was most likely these devices, and not the fiber data network, that got overwhelmed by video.

On a per-customer basis the servers, routers and switches are the least expensive part of the Internet network. This is not to say that they are cheap, but they cost a lot less than building fiber networks. As mentioned above, the point of stress during a simulcast is the originating servers, and thus it would be incredibly cynical of Comcast to claim that they need to charge NetFlix a premium price because they don’t have enough servers and routers to handle the traffic. Their terminating routers ought to be sufficient and ready to handle large volumes of video as a normal course of business.

The other way that web video traffic can get big is when a lot of people are watching video and each one of them is watching something different. Today people watch what they want when they want and this is the primary way that the web handles video. But there are times when usage is greater than normal, and perhaps this is what drives the need for a fast lane.

Broadcasters like NetFlix have helped to ameliorate the effects of large video volumes by caching. For example, NetFlix will put a caching server at any large headend at their own cost to cut down on the stress on the web. A NetFlix caching server contains a copy of all of the programming that NetFlix predicts people will most want to watch. Anybody who then watches one of these shows initiates the program from the local caching server rather than making a new web request back to the NetFlix hubs. I would have to assume that NetFlix has provided numerous caching servers to Comcast and Verizon, so this cannot be a reason to charge more for a fast lane.

But caching doesn’t always solve large demand. First, a NetFlix caching device only contains what NetFlix predicts will be popular, and if something else they host goes viral it won’t be on their caching server. But more importantly, there is a ton of video content on the web that is never going to be on these kinds of caching servers. If some video from Facebook or YouTube goes viral, it is likely not to be already cached because nobody could have predicted it would go viral.

But there is a new technology that should solve the caching issue. Cisco and other smaller companies like PeerApp and Qwilt have introduced a technology called transparent caching. This technology caches content on the fly. If more than two users in a network ask to see the same content it makes a local copy of that content. Within minutes of teens loving some new YouTube video it would be cached locally and would stay in the cache until demand for it stops. This technology will drastically reduce the requests back to the originating servers at providers like NetFlix and YouTube.
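The cache-on-the-fly behavior described above can be sketched in a few lines. This is a bare-bones illustration of the idea only; the class name and threshold are my own, and real transparent caching products from Cisco, PeerApp or Qwilt are far more involved:

```python
# Minimal sketch of transparent caching: content is copied into a local
# cache once it has been requested more than a threshold number of
# times, so later viewers are served locally instead of from the origin.
from collections import defaultdict

class TransparentCache:
    def __init__(self, threshold=2):
        self.threshold = threshold              # requests before caching kicks in
        self.request_counts = defaultdict(int)  # content_id -> request count
        self.cache = {}                         # content_id -> cached content

    def fetch(self, content_id, origin_fetch):
        """Serve from the local cache if present; otherwise fetch from
        the origin, caching a copy once demand crosses the threshold."""
        if content_id in self.cache:
            return self.cache[content_id], "cache"
        self.request_counts[content_id] += 1
        content = origin_fetch(content_id)      # request back to the origin server
        if self.request_counts[content_id] > self.threshold:
            self.cache[content_id] = content    # popular now, keep a local copy
        return content, "origin"
```

With a threshold of two, the first three requests for a video go back to the origin; the third one triggers caching, and every request after that is served locally, which is exactly the reduction in origin traffic the paragraph describes.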

My conclusion from this discussion is that I have a hard time seeing how Comcast or Verizon can claim that their routers, switches and servers are inadequate to handle the traffic from NetFlix. These are among the cheaper components of the web on a per-customer basis and they ought to have adequate resources to handle simulcasts or viral videos. Even if they don’t, the new technology of transparent caching promises to drastically reduce the web traffic associated with video, since any popular content will be automatically cached locally.

Is There a Web Video Crisis? – Part I

The whole net neutrality issue has been driven by the fact that companies like Comcast and Verizon want to charge large content providers like NetFlix to offset some of their network costs to carry video. Comcast implies that without such payments NetFlix content will have trouble making it to customers. By demanding such payments Comcast is saying that their network is having trouble carrying video, meaning that there is a video crisis on the web or on the Comcast network.

But is there? Certainly video is king and constitutes the majority of traffic on the web today. And the amount of video traffic is growing rapidly as more customers watch video on the web. But everybody has known for years that this was coming, and Comcast can’t be surprised that it is being asked to deliver video to people.

Let’s look at this issue first from the edge backwards. Let’s say that on average in a metro area Comcast has sold a 20 Mbps download connection to each of its customers. Some buy slower or faster speeds than that, but every one of Comcast’s products is fast enough to carry streaming video. Like all carriers, Comcast does something called oversubscription, meaning that they sell more access to customers than their network can supply at once. In doing so they are banking on the fact that everybody won’t watch video at the same time. And they are right; it never happens. I have a lot of clients with broadband networks and I can’t think of one of them whose network has been overwhelmed in recent years by demands from customers at the edge. Those edge networks ought to be robust enough to deliver the speeds that are sold to customers. That is the primary thing customers are paying for.
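The oversubscription bet can be put into simple numbers. All of the figures below are my own illustrative assumptions, not Comcast’s actual customer counts or usage:

```python
# Hypothetical oversubscription sketch: an ISP sells more aggregate
# speed than the network can deliver at once, betting that only a
# fraction of customers are active simultaneously. All figures are
# illustrative assumptions.

customers = 10_000
sold_speed_mbps = 20            # average speed sold per customer
peak_active_fraction = 0.15     # assumed share of customers streaming at peak
stream_mbps = 5                 # assumed bandwidth of one HD video stream

sold_aggregate = customers * sold_speed_mbps                  # speed sold in total
peak_demand = customers * peak_active_fraction * stream_mbps  # actual busy-hour load
oversubscription_ratio = sold_aggregate / peak_demand

print(f"Aggregate speed sold: {sold_aggregate:,} Mbps")
print(f"Actual peak demand:   {peak_demand:,.0f} Mbps")
print(f"Oversubscription ratio: {oversubscription_ratio:.1f}:1")
```

Under these assumptions the network only ever needs a small fraction of the aggregate speed sold, which is why oversubscription works as long as everybody doesn’t stream at once.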

So Comcast’s issues must be somewhere else in the network, because their connections to customers ought to be robust enough to deliver video to a lot of people at the same time. One place that could be a problem is the Internet backbone. This is the connection between Comcast and the Internet. I have no idea how Comcast manages this, but I know how hundreds of smaller carriers do it. They generally buy enough capacity so that they rarely use more than some base amount, like 60%, of the backbone. By keeping a comfortable overhead on the Internet pipe they are ready for those rare days when usage bursts much higher. And if they do get too busy they usually have the ability to burst above their prescribed bandwidth limits to satisfy customer demands. This costs them more, but the capacity is available to them without them having to ask for it.
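The sizing rule those smaller carriers follow amounts to simple arithmetic. The traffic figure and the 60% target below are illustrative assumptions, not any particular carrier’s numbers:

```python
# Sketch of the backbone sizing rule described above: buy enough
# transit that normal busy-hour traffic stays below a target
# utilization, leaving headroom for rare bursts. Figures are
# illustrative assumptions.

def backbone_size_needed(normal_peak_mbps, target_utilization=0.60):
    """Return the transit capacity to purchase so that normal peak
    traffic uses only `target_utilization` of the pipe."""
    return normal_peak_mbps / target_utilization

normal_peak = 6_000  # Mbps of typical busy-hour traffic (assumed)
purchased = backbone_size_needed(normal_peak)
headroom = purchased - normal_peak

print(f"Purchase: {purchased:,.0f} Mbps, headroom: {headroom:,.0f} Mbps")
```

Keeping the pipe at 60% during a normal busy hour leaves roughly two-thirds again as much capacity in reserve for unusual events, before any burstable overage even comes into play.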

So one would not think that the issue for Comcast is their connection to the Internet. They ought to be sizing this according to the capacity that they are selling in aggregate to all of the end users. The price of backbone has been dropping steadily for years and the price that their customers pay them for bandwidth should be sufficient for them to make the backbone robust enough.

That only leaves one other part of the network, which is what we refer to as distribution. These are the fiber connections that go between a headend or a hub and the neighborhoods. Certainly these connections have gotten larger over time, and I would assume that, like all carriers, Comcast has had to increase capacity in the distribution plant. Where a neighborhood might once have been perfectly fine sharing a gigabit of data, it might now need ten gigabits. That kind of upgrade means putting a larger laser on the fiber connection between the Comcast headend and neighborhood nodes.

Again, I would think that the prices that customers pay ought to cover the cost of the distribution network, just as it ought to cover the edge network and the backbone network. Comcast has been unilaterally increasing speeds to customers over time. They come along and periodically increase speeds for customers, say from 10 to 15 Mbps. One would assume that they would only increase speeds if they have the capacity to actually deliver those new higher speeds.

From the perspective of looking at the base components of the network – the edge, the backbone and the distribution – I can’t see where Comcast should be having a problem. The prices that customers pay ought to be more than sufficient to make sure that those three components are robust enough to deliver what Comcast is selling to customers. If they are not, then Comcast has oversold its capacity, which sounds like Comcast’s issue and not NetFlix’s.

In the next article in this series I will look at other issues, such as caching, as possible reasons why Comcast needs extra payments from NetFlix. Because it doesn’t appear to me that NetFlix ought to be responsible for the way Comcast builds its own networks. One would think that those networks are built to deliver to customers the bandwidth they have paid for, regardless of where on the web that bandwidth is coming from.

Computerizing our Jobs

I often write about new technology such as cognitive software like Siri or driverless cars. These types of innovations have the potential to make our lives easier, but there are going to be significant societal consequences to some of these innovations. Late last year Carl Benedikt Frey and Michael A. Osborne published a paper that predicts that about 47% of all current American jobs are at risk of being replaced by some form of automated computerized technology.

We have already been seeing this for many years. For example, in the telecom industry there used to be gigantic operator centers with rooms full of operators who helped people place calls. Those centers and those jobs have largely been eliminated through automation. But not all of the jobs that have been eliminated are so obvious. For example, modern accounting software like QuickBooks for small business and more complex software for larger businesses have displaced many accountants. Where a large company might have once had large rooms of accounts payable and accounts receivable personnel, these software systems have eliminated a significant portion of those staffs. And many small businesses perform their accounting functions today without an accountant.

Computerization has also wiped out entire industries, and one can only imagine the number of jobs that were lost when iTunes largely replaced music stores or when NetFlix and Hulu replaced video rental stores.

Automation has created some new jobs. For instance, looking at this video of an Amazon fulfillment center we can see that there are a lot of people involved in moving packages quickly. But we also see a huge amount of automation, and you know that Amazon is trying to figure out ways to automate the remaining functions in these warehouses. It’s not a big stretch to envision robots taking the places of the ‘pickers’ in that video.

Some of the innovations on the horizon have the potential to eliminate other large groups of jobs. Probably the most obvious technology with that potential is driverless cars. One can envision jobs like taxi drivers disappearing first, eventually followed by truck drivers. But there are other jobs that go along with this, like many of the autobody shops that are in business to repair car accidents caused by poor human driving. We have already seen Starbucks trialing an automated system that replaces baristas, and I saw one of these automated systems in an airport last month. There is a huge boom right now in developing manufacturing robots, and these are going to replace much of the manual labor in the manufacturing process. But this also will allow factories to return to America and bring at least some jobs back here.

But this study predicts a much wider range of jobs that are at risk. The real threat to jobs is going to come through the improvement of cognitive software. As an example, IBM’s Watson has been shown to be more accurate than nurses and doctors in diagnosing illnesses. We are now at the point where we can bring supercomputers into the normal workplace. I read four different articles this week about companies who are looking to peddle supercomputing as a product. That kind of computing power could start to replace all sorts of white collar and middle management jobs.

The study predicts a huge range of jobs that computers can replace. They include such jobs as patent lawyers, paralegals, software engineers and financial advisors. In fact, the paper predicts that much of the functions in management, financial services, computer technology, education, legal and media can be replaced by cognitive software.

Economists have always predicted that new jobs would be created by modernization to replace the jobs that are lost. Certainly that is true to some extent because all of those jobs in the Amazon warehouse were not there before. But those jobs replace store clerks in the many stores that have lost sales to Amazon. The real worry, for me, is that the sheer number of jobs lost to automation will happen in such a short period of time that it will result in permanent unemployment for a large percentage of the population.

One job that the paper predicts will be replaced is technical writer. As a technical blogger I say “Watson, the game is afoot! IBM, bring it on.”

Cyber Espionage

I had already written this blog a few days ago, but before I could publish it the news came out that the US has indicted Chinese officials for spying on US companies including Westinghouse, Alcoa and US Steel. They were clearly doing this to seek advantages for Chinese companies in areas like nuclear plant design, metallurgy and solar energy. Our outrage seems a little disingenuous since the materials leaked by Snowden show that the US has been spying on Huawei, the Chinese telecom manufacturer.

You hear a lot about cyber attacks on the web, and these mostly involve denial-of-service attacks, where somebody sends so much traffic to a given IP address that they overwhelm the site and effectively shut it down. But until this announcement there has not been a lot of news about cyber espionage. How does cyber espionage work? Instead of shutting down a site, the goal of cyber espionage is to gather information about somebody, ideally without ever being detected, by worming into their network to gain access to all of their files and communications.

Cyber espionage is done both by companies that spy on each other and by governments. Nobody knows how much of this is going on, but one has to suspect that since it can be done that it is being done on a large scale. Nobody can be entirely sure who is capable of this, because one good hacker can make this work. One would have to think that the cases the US just uncovered are only the tip of the iceberg.

Cyber espionage is very different than cyber warfare, where countries make concerted attacks against each other to try to shut down key institutions, disable electric grids and create general havoc. At the end of 2013 it was thought that Australia, Belarus, China, France, India, Iran, Israel, North Korea, Pakistan, Russia and the US had the wherewithal to conduct cyber warfare. But almost anybody can undertake espionage.

The most typical way to undertake cyber spying is through a trojan horse virus. The goal is to get somebody at a company or organization to open an email that has an infected document. The sender information on the email will be forged and made to look like it comes from a familiar person. The trojan horse doesn’t need to be sent to somebody high up in an organization because the goal of the invasion is to worm into the company servers.

Since EXE files don’t get through firewalls, the attached file will be in the form of a Word, Excel or Adobe Reader file so as to not look suspicious. Normally these kinds of files are not malicious, but in this case the document secretly includes executable code. Generally when the file is opened the trojan horse does two things. It opens the attached file to distract the recipient, and it also executes binary code that creates a backdoor program giving the spying party access to that computer. The trojan horse file hides itself somewhere on the computer and establishes a connection back to the spy, who is then able to do almost anything possible from that computer.

Once in place the trojan horse can do such things as capture keystrokes to know what the infected machine is doing. They could even enable the microphone and listen in on conversations. But the most important feature of this kind of invasion is that the spy is given access to data on company servers at whatever level of security is enjoyed by the infected machine. In many cases, once they have gotten a foot inside the target location they will create an infected file from that user to get to other people within the organization.

A number of organizations have uncovered these kinds of spying attacks. The very nature of the attack makes it nearly impossible to know where it originated. And one has to assume that most of the organizations that have been invaded this way have no idea they are being spied upon.

There is no really foolproof way of protecting against this kind of invasion. It only takes one employee to open an infected file and the spying is in place. The only real security from this kind of espionage is to not have confidential information on servers that can be connected to the Internet. That means that both the servers and the machines that use them must be fully isolated from external communications. That kind of security is extraordinary and generally only the military and other government organizations would take such drastic steps to protect top secret data. It’s the rare corporation that will tolerate that level of extra hassle in the name of security.

AT&T to Add Rural Broadband?

There is one part of the AT&T and DirecTV proposed merger that really has me scratching my head. Buried within the announcement was a statement that AT&T would use this merger to add 15 million broadband subscribers over the next four years, mostly in rural areas. That goes in the opposite direction of what AT&T has been saying for the last several years. For instance, AT&T told the FCC last year that it was going to be asking for permission to cut down the copper lines serving millions of rural customers.

And it goes against the trend of AT&T's broadband sales. Let's look at the numbers. At the end of 2011 AT&T reported 16,427,000 data customers. At the end of 2013 the number was virtually unchanged at 16,425,000. So overall, AT&T's total number of data customers has been flat. But looking beneath those numbers we see something else. During those same two years AT&T added almost 1.5 million customers to its U-Verse product, a bundled data and cable product delivered over two bonded copper pairs. Assuming that most of these new U-Verse customers are buying data, AT&T lost a lot of traditional DSL customers at the same time it was growing the U-Verse product.
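The arithmetic behind that inference is simple enough to sketch. The subscriber totals come from the figures cited above; treating every new U-Verse customer as a data customer is an assumption, so the result is an upper bound on the DSL losses, not a reported number.

```python
# Back-of-envelope math behind the DSL-loss inference.
# Totals are AT&T's reported data-customer counts; the U-Verse figure
# is the approximate additions over the same two years.
total_2011 = 16_427_000
total_2013 = 16_425_000
uverse_adds = 1_500_000  # assumed to all take the data product

net_change = total_2013 - total_2011        # essentially flat
implied_dsl_loss = uverse_adds - net_change # U-Verse gains masked DSL churn

print(f"Net change in data customers: {net_change:,}")
print(f"Implied traditional DSL losses: ~{implied_dsl_loss:,}")
```

In other words, a flat top-line number hides roughly 1.5 million traditional DSL customers walking away.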

So AT&T has been losing traditional DSL customers and it has plans to cut down millions of copper lines. And yet the DirecTV merger is somehow going to help it almost double its data customers, particularly in rural areas?

One possibility is that this part of the announcement is all fluff intended to help get the merger through the FCC. Nothing gets a better ear these days at the FCC than the promise to bring broadband to rural customers. So AT&T might be blowing smoke and hoping that this helps to get the merger approved. But let’s suppose they are serious about this and that they really are going to vigorously chase data customers again. How might they do that? I can think of two scenarios.

First, they could use the DirecTV merger as a reason to reinvigorate their investment in copper. The fact is that AT&T has always had it within its power to do better in rural broadband. Most of its rural DSL electronics are first or second generation equipment, and for a relatively moderate investment they could beef up rural DSL to become competitive. Perhaps bundling it with a TV product that brings a profit stream from numerous rural homes changes the business plan and makes DSL look attractive again as a long-term investment. But I have a hard time believing this. Their rural copper plant is old and I believe them when they say they want to tear it down and get out of the landline business.

The only other option that makes sense to me is that they use a DirecTV bundle to entice people off their copper and onto wireless data. In doing so they would also be furthering their goal of getting out of the copper business. I have written a number of blogs talking about how rural cellular systems cannot take the place of landlines for the delivery of data. Cellular systems are great at delivering bursts of data, especially after being upgraded to 4G, but they are not designed, nor can they be designed to support multiple people watching streaming video. It doesn’t take many video customers to lock up the typical cell site. And this is a matter of physics as much as anything, so there is no easy way to fix this other than to move to really small cell sites with a few customers on each cell. And that would require a big investment in rural fiber.
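The claim that a few video watchers can lock up a cell site is easy to illustrate with rough numbers. Every figure below is an assumption for the sketch, not a measured value: a 4G sector with about 50 Mbps of usable shared capacity, and HD streams at about 5 Mbps each.

```python
# Rough illustration of why streaming video swamps a rural cell site.
# Both numbers are assumed for the sketch; real capacity varies with
# spectrum, distance from the tower, and competing traffic.
sector_capacity_mbps = 50   # assumed usable shared throughput per sector
hd_stream_mbps = 5          # assumed bitrate of one HD video stream

max_simultaneous_streams = sector_capacity_mbps // hd_stream_mbps
print(f"Simultaneous HD streams per sector: {max_simultaneous_streams}")
```

Ten households streaming in the evening would saturate that sector, leaving nothing for everyone else; a rural wireline network serves far more homes than that.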

So I am skeptical of the AT&T announcement. This announcement might have made sense if AT&T wanted to buy Dish Network, which owns a significant amount of spectrum that could be used to deliver point-to-multipoint data in rural areas. But DirecTV has no broadband assets or plans. My best guess is that they will use this merger as an excuse to move people off copper, something they are already working hard at. But there is also the chance that this is all smoke and mirrors to help get the FCC to approve the merger.

Look! Up in the Sky, It’s a Bird, It’s a . . .

I have always been fascinated by people who say that they have seen UFOs. I've spoken to some of them and they seem very sincere. I remain skeptical that there really are airships from another planet constantly visiting us, and I figure that in most cases there is an explanation for what these people saw in the sky. Over the next decades there is going to be the potential for a lot more 'UFO' sightings because many different technologies are moving to the sky.

The first of note is the race between Google and Facebook to bring Internet access to the 5 billion people who don't have it today. Google is considering a fleet of balloons, a platform it has named Project Loon, as a way to bring the Internet everywhere. Facebook has teamed up with other companies in a collaboration called Internet.org. This group is considering both satellite technology and solar-powered drones that would fly at 20,000 meters, above both weather and commercial air traffic.

But these two initiatives are just the beginning of what's coming to the sky. Today over a billion people on the planet don't have access to electricity, an issue that must be addressed before bringing them Internet access. There are a number of firms looking at ways to bring electricity to remote areas using airborne wind turbines. One of these is the BAT being trialed in Alaska by Altaeros Energies. The BAT is a helium-filled cylindrical blimp with a wind turbine in the center. It can fly up to 1,000 feet to catch steady winds and can generate enough electricity in this first version to power twelve full-sized homes.

But there are other companies looking at airborne electric generation. One is Google-owned Makani out of California, whose wind generator looks like a glider airplane. Another firm looking into tethered wind generators is LTA Windpower out of Canada, which has a design that looks like a blimp with wings. Finally in the race is EnerKite out of Berlin, which has designed what looks like a large kite that can generate electricity.

All of these airborne wind generators are counting on the fact that winds are steadier and stronger even just 1,000 feet off the ground. Building generators that can reach those steady winds is far more efficient per dollar than land-based wind turbines. Plus, these wind generators can bring electric generation to remote places – to a small group of rural homes off the grid, to remote locations used for scientific research, to rural villages in third world countries or to temporary locations like airplane crash sites.

While we are looking up at the sky we are probably going to start seeing drones used for all sorts of commercial purposes. The biggest such announcement was Amazon talking about using drones to deliver small packages within 30 minutes of taking an order. But there are other companies around the world looking at delivery drones. SF Express in China is already using delivery drones on a trial basis. And a UK franchise of Domino's Pizza has demonstrated pizza delivery with drones.

In addition to the commercial applications are the personal and government applications. Anybody can buy a drone today that can be used to peek in your neighbor’s windows. Many science fiction books have predicted a future when police drones will be sent quickly to crime and accident scenes to record fresh evidence, and of course, in science fiction books this always morphs into drones that poke into every aspect of our privacy.

Finally, there are a lot of recent articles talking about the possibility of soon building an affordable flying car or some sort of hovercraft that could be used for local transportation. In a few years there might be so many devices in the air that talk of UFOs will be ignored since there will usually be another explanation.

A Right to be Forgotten?

In a surprising ruling, the European Union's Court of Justice has ruled that Google must expunge information that is "inadequate, irrelevant or no longer relevant" from the results of its search engine upon request. The case that drove this ruling was one where a Spanish man, Mario Gonzalez, asked to have information deleted from Google. In 1998 he received a notice that his house would go into repossession for not paying his property taxes.

You've probably seen these kinds of notices, which newspapers publish once a year listing everybody who is delinquent on their property taxes. Like most people on the list, he paid the taxes and the house did not go to tax foreclosure. But Sr. Gonzalez asked Google to delete the information because he found it embarrassing. The information only recently surfaced in Google's results when the newspaper that had printed the original notice digitized its older issues.

There was no dispute in this case that the facts stated in the newspaper were true, because Sr. Gonzalez had been late paying his property taxes and was properly notified of this fact along with everybody else who was late in paying his tax bill. He simply wanted this information deleted because he found it embarrassing and he thought it was no longer relevant.

As I have thought about this, I have concluded it is a dreadful ruling. It is being called the right to be forgotten. But it is something else: it gives people the right to edit their life online to say what they want it to say. To hell with the facts: if anything pops up in a Google search you don't like, get rid of it. Had a DUI ten years ago? How embarrassing. Raped somebody twenty years ago before you cleaned up your act and became a preacher? Kill the story. You're a politician and people write unflattering articles about your votes? Wipe them out.

This ruling is not about privacy; it is about changing what the world sees about you, regardless of whether those things are true. If this ruling is allowed to stand it will make the European Internet look like the Chinese one, where ten thousand censors read the net all day scrubbing out things they don't find politically correct. Every unsavory person in the world can partake in revisionist history and make themselves look as chaste as the Flying Nun.

This ruling would make Google the policeman of what people want on the Internet instead of just a neutral purveyor of facts. In this case, Sr. Gonzalez and many of his neighbors did not pay their taxes on time. What if he did this every year, not just once in 1998? A prospective employer might want to know this sort of thing about somebody before they hire them.

The trouble with this ruling is that only the worst among us will use it to erase their history. Most people would not be bothered by having true things about them on the web, but thieves, child molesters, political demagogues and con artists will have a field day erasing the truth about themselves. There will be no police crime reports online because they might offend the criminals. There will not be a big pile of stories about the Westboro Baptist Church or the Nazi party because those groups will get them all expunged.

I certainly hope that some sanity comes to the courts there and this gets overturned. It is an insane ruling and it puts Google and other search engines into an impossible situation. Carried to its logical conclusion, it puts every newspaper and blogger at risk of having to pull down anything negative they have said about somebody, even if it is true. It subordinates the free speech rights of those who publish true statements to the desire of their subjects to suppress any negative opinion of them. It effectively says that anything ever printed about somebody is slander, even if it's true.

It is a dangerous step when we start hiding the truth and can edit our lives retroactively. Sr. Gonzalez was late paying his taxes. He doesn't dispute that he belonged on the public list of late payers. The newspaper that published that list had every right to do so, and until this ruling Google had every right to include old newspapers in its search results. The truth is the truth, and none of us are going to like the consequences of people having the ability to change their public past. Once implemented, this means that you can no longer have any faith in anything you find on the Internet, because it might have been edited.