To Encrypt or Not to Encrypt

We are seeing a major policy tug-of-war about privacy on the Internet. On one side are law enforcement and national security agencies that want to be able to monitor everything that happens on the web. On the other side are those who value privacy the most. This is not a new debate; it has been going on since the 90s.

Encryption has been around for a while, but it’s generally believed that agencies like the NSA have cracked most existing encryption schemes and are able to readily decipher communications between most parties on the web.

Recently, Michael D. Steinbach, assistant director of the FBI’s Counterterrorism Division, testified to Congress that the FBI has no problem with encryption as long as the government still has access to the underlying data. He thinks that encryption between people is a good thing to keep personal data from being intercepted by bad guys on the web, but he still thinks that there are law enforcement and national security concerns that are more important than individual privacy concerns. The real concern is that encryption will allow criminals and terrorists to go ‘dark’ and evade detection or monitoring.

But the revelation that the NSA is spying on everybody has really upset the technology community that runs the Internet. The vision of the Internet was to be a place for the free exchange of information, and many technologists believe that widespread surveillance squelches that. And very few people like the idea that the government knows their every secret. And so we see companies that are working to find ways to make communications private and safe from snooping – including from the government.

Apple is the largest company to take a stance, and they have initiated end-to-end encryption on the iPhone. The way they have done this, only the sender and receiver of a communication can unlock a given message, and Apple is not maintaining any way to crack the encryption themselves. This means that Apple is unable to reveal what is inside customer communications even if served with a court order. I am guessing that one day this is going to be put to a legal test, and I can picture laws being passed that stop companies like Apple from doing this. And I am sure Apple will fight back, so ultimately this might have to be determined by the Supreme Court.

But there are other groups working on a privacy solution that even laws might not be able to touch very easily. One such group is Ethereum. This is a crowd-funded group in Europe that is building upon the early work with bitcoin to create a decentralized communications system where nobody is in charge because there is no centralized network hub – there is no company like Apple at the core of such a network. In such a hubless network it’s much harder for the government, or even companies like Google and Facebook, to spy on you.

This requires the establishment of peer-to-peer networks, which are a very different way of structuring the web. Today the basic web structure is based upon software sitting on specific servers. Traffic gets routed because there is a massive database of DNS records that lists where everything can be found.

But Ethereum is taking a totally different approach. They have built apps that find space on millions of customers’ computers and servers. Thus, they are located everywhere, and yet at no specific place. Ethereum is using this distributed network and building upon the block-chain technology that underlies bitcoin trading. The block-chain technology is so decentralized and so secure that nobody but the sender and receiver can know what is inside a communications chain.
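
To make the block-chain idea a little more concrete, here is a minimal, purely illustrative sketch in Python of how chaining hashes makes a record of communications tamper-evident. This is not Ethereum’s actual protocol, just a toy showing why changing one earlier entry invalidates everything that follows it:

# Toy hash chain: each block stores the hash of the previous block, so any
# tampering with an earlier block breaks the whole chain. Illustrative only.
import hashlib
import json

def make_block(payload: str, prev_hash: str) -> dict:
    # Hash the payload together with the previous block's hash.
    block = {"payload": payload, "prev_hash": prev_hash}
    block["hash"] = hashlib.sha256(
        json.dumps(block, sort_keys=True).encode()).hexdigest()
    return block

def verify_chain(chain: list) -> bool:
    # Recompute every hash and confirm each block points at its predecessor.
    for i, block in enumerate(chain):
        body = {"payload": block["payload"], "prev_hash": block["prev_hash"]}
        expected = hashlib.sha256(
            json.dumps(body, sort_keys=True).encode()).hexdigest()
        if block["hash"] != expected:
            return False
        if i > 0 and block["prev_hash"] != chain[i - 1]["hash"]:
            return False
    return True

chain = [make_block("hello", "0" * 64)]
chain.append(make_block("how are you?", chain[-1]["hash"]))
chain.append(make_block("fine, thanks", chain[-1]["hash"]))
print(verify_chain(chain))        # True
chain[1]["payload"] = "tampered"
print(verify_chain(chain))        # False - the chain no longer verifies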

Ethereum isn’t really a company, but rather a collective of programmers who intend to disband once they have established these safer communication methods. And they are not the only ones doing this, just one of the more visible groups. This creates a huge dilemma for law enforcement. There is a huge amount of web traffic dedicated to nefarious purposes like drug trafficking and child pornography, without even considering terrorist groups. Governments have had some limited success in shutting down platforms like Silk Road, but the systems Ethereum and others are building don’t have a centralized hub or a place where the system can be stopped.

I have no doubt that the government will find ways to crack into these systems eventually, but for now it seems like the privacy advocates are one step ahead of them, much in the same way that hackers are one step ahead of the web security companies.

I don’t know how I feel about this. Certainly nobody benefits by enabling huge rings of criminals and terrorists. And yet I get angry thinking that the government is tracking everything I am doing online. I’ve read all of the sci-fi books that explore the terrible consequences of government abuse due to surveillance and it’s not pretty. I am sure that I am like most people in that I really have nothing to hide. But it still makes me very uneasy to think that we are all being watched all of the time.

Lifeline Data and the Digital Divide

The FCC recently approved moving forward with the process of establishing a low-income subsidy for landline data service. The target subsidy they have set is payment of $9.25 per month towards the broadband bill of qualifying households. I’m really not sure how I feel about this.

Certainly we have a digital divide. While there are still many millions of rural homes that have no broadband alternative, there are even more urban households who can’t afford broadband. The numbers bear this out. A Pew Research survey earlier this year reported that the broadband penetration rate for homes that make less than $25,000 per year is 60% while 97% of homes that make more than $150,000 per year have broadband. The overall national average broadband penetration right now is at about 74% of households and it’s clear that poorer homes have a hard time affording broadband.

If you accept the premise that broadband is becoming a necessity to participate in our culture, and even more importantly that broadband is vital at home for school kids, then we do need a way to get broadband to people who need it.

But I wonder if this program is really going to make a difference and if it will get broadband into a whole lot more homes (versus giving payments to some of those 60% of low-income homes that already have broadband). The dollar amount, at $9.25, doesn’t feel like a very big discount on broadband bills that are likely to be $40 or higher in most places. If a home is having trouble affording a $40 broadband bill, I wonder if reducing that to roughly $30 is really going to make it affordable. I’m not sure that the policy makers who are deciding this really understand how little disposable income most working poor families have.

And paying for broadband isn’t the whole cost because homes that can’t afford broadband also have a hard time affording computers. It’s not like you can buy a computer once – I know I have rarely had a computer that lasts more than three years, with some of them dying earlier than that.

I know that many cities already have programs that tackle the computer issue. I know of programs that distribute refurbished computers to homes. And there are more and more school systems that are giving school kids an iPad or other computer so that they don’t have to worry about having a computer at home. For this federal program to be really successful, it is going to require more of those kinds of programs.

I also wonder how the FCC will cap the amount of money this is going to cost. It’s not going to take a whole lot of households to eat up any funds they set aside for this. The current Lifeline telephone subsidy cost $1.6B in 2014 and pays a $9.25 subsidy for a landline or a cellphone for homes that are below 135% of the poverty line established by the Department of Health and Human Services.

The revised plan is going to keep the $9.25 subsidy and somehow use it to cover both telephone and data connections. The exact details aren’t out, but it was said that no household could collect more than one $9.25 subsidy. If a home is already getting the phone subsidy then they wouldn’t get any additional break on their data connection.

I think every school kid ought to somehow have access to a computer and broadband. I just don’t know that this particular program is going to change the current situation a whole lot and I wonder if there ought to be a different approach. The digital divide is real and kids in poor families are the most affected by it. If this program doesn’t make a big difference I hope we are willing to try something else.

Shrinking Competition

I bet that the average person thinks that telecom competition is increasing in the country. There are so many news releases talking about new and faster broadband that people probably think broadband is getting better everywhere. The news releases might mention Google Fiber or talk about 4G or 5G data and imply that competition is increasing in most places across the country. But I travel a lot and I am pretty certain that in most markets broadband competition is shrinking.

There are a few places getting new fiber. Google has built a few cities. CenturyLink has woken up from the long sleep of Qwest and is building some fiber in some markets. And there are a handful of municipalities and other companies building fiber in some markets. This is bringing faster broadband to some cities, or more accurately to some neighborhoods in some cities, since almost nobody is building fiber to an entire metro market. But it’s hard to say that this fiber is bringing price competition. Google has priced their gigabit fiber at $70 per month and everybody else is charging the same or more. And these big bandwidth products are only intended for the top third of the market – they are cherry-picking products. Cities that are getting fiber are mostly not seeing price competition, particularly for the bottom 2/3 of the market.

But in most markets in the US the cable companies have won the broadband battle. I’ve seen surveys from a few markets that show that DSL penetration is as low as 10% – and even then mostly at the lower speeds and prices – and the cable companies serve everybody else.

It seems the two biggest telcos are headed down the path to eventually get out of the landline business. Verizon stopped building new FiOS and has now sold off some significant chunks of FiOS customers. It’s not hard to imagine that the day will come over the next decade when they will just quietly bow out of the landline business. It’s clear when reading their annual report that the landline business is nothing more than an afterthought for them. I’ve read rumors that AT&T is considering getting out of the U-Verse business. And they’ve made it clear that they want completely out of the copper business in most markets. And so you are also likely to see them start slipping out of the wireline business over time.

I can’t tell you how many people I meet who are convinced that wireless cellular data is already a competitor of landline data. It is not a competitor for many reasons. One primary reason is physics; for a wireless network in a metropolitan area to be able to deliver the kind of bandwidth that can be delivered on landlines would require fiber up and down every street to feed the many required cell sites. But it’s also never going to be a competitor due to the draconian pricing structure of cellular data. It’s not hard to find families who download more than 100 gigabytes during a month, and with Verizon or AT&T wireless that much usage would probably cost $1,000 per month. Those two giant companies are counting on landline-based WiFi everywhere to give their products a free ride, and they do not envision cellular data supplanting landlines.
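
As a quick sanity check on that $1,000 figure, here is the back-of-the-envelope arithmetic. The plan size and the roughly $10-per-gigabyte overage rate are assumptions about typical cellular pricing of the day, not quotes of any actual Verizon or AT&T plan:

# Rough arithmetic behind the $1,000-per-month claim. All numbers are
# illustrative assumptions, not actual carrier pricing.
plan_allowance_gb = 10    # hypothetical data bucket included in the plan
plan_price = 100          # hypothetical monthly price for that bucket
overage_per_gb = 10       # assumed overage charge per additional gigabyte

family_usage_gb = 100     # a family that downloads 100 GB in a month
overage_gb = max(0, family_usage_gb - plan_allowance_gb)
monthly_bill = plan_price + overage_gb * overage_per_gb
print(monthly_bill)       # 1000 - roughly the figure cited above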

Broadband customer service from the large companies has gone to hell. The large cable companies and telcos are among the worst at customer service when measured against all industries. This might be the best evidence of the lack of competition – because the big carriers don’t feel like they have to spend the money to be good. Most customers have very few options but to buy from one of the behemoths.

We were supposed to be heading towards a world where the big telcos built fiber and got into the cable business to provide a true competitor to the cable companies. A decade ago the consensus was that the competition between AT&T and Time Warner and between Verizon and Comcast was going to keep prices low, improve customer service, and offer real choices for people. But that has never materialized.

Instead what we have are the cable companies dominating landline broadband and the two largest telcos controlling the wireless business. Other competition at this point is not much more than a nuisance to both sets of companies. We see prices on broadband rising while broadband speeds in non-competitive markets are stagnating. And, most unbelievable to me, we’ve seen the US population replace a $25/month landline that sufficed for the family with cellphones that cost $50 or more for each family member. I can’t recall anybody predicting that years ago. It kind of makes a shambles of fifty years’ worth of severe telephone regulation that used to fight against telcos raising rates a dollar or two.

So I contend that overall competition in the country is shrinking, and if Verizon and AT&T get out of the landline business it will almost disappear in most markets. Even where we are seeing gigabit networks, the competition is with speed and not with price. People are paying more for telecom products than we did years ago, and price increases are outstripping inflation. Make no mistake – if I could get a gigabit connection I would buy it – but giving the upper end of the market the ability to spend more without giving the whole market the option to spend less is not competition – it’s cherry picking.

Li-Fi

There is another new technology that you might be hearing about soon. It’s called Li-Fi and also goes by the name of Visible Light Communications (VLC) or Optical WLAN. This technology uses light as a source of data transmission, mostly within a room, and will compete with WiFi and other short-distance transmission technologies.

Early research into the technology used fluorescent lamps and achieved data speeds of only a few Kbps. The trials got more speed after the introduction of LED lighting, but the technology didn’t really take off until Professor Harald Haas of the University of Edinburgh created a device in 2011 that could transmit at 10 Mbps. Haas calculated the theoretical maximum speed of the technology at the time at 500 Mbps, but recent research suggests that the maximum speeds might someday be as fast as 1.5 Gbps.

There are some obvious advantages to the technology:

  • Visible light is totally safe to people and eliminates any radiation issues involved in competitors like 802.11ad.
  • It’s incredibly difficult to intercept and eavesdrop on Li-Fi transmissions that stay within a room between the transmitter and receiver.
  • It’s low power, meaning it won’t drain batteries, and uses relatively simple electronics.

But there are drawbacks as well:

  • The speed of the transmission is going to be limited by the data pipe that feeds it. Since it’s unlikely that there will ever be fiber built to lightbulbs, Li-Fi is likely to be fed by broadband over powerline, which currently has a maximum theoretical speed of something less than 1 Gbps and a practical speed a lot less.
  • At any reasonable speed Li-Fi needs a direct line-of-sight. Even within a room, if anything comes between the transmitter and the receiver the transmission stops. Literally waving a hand into the light beam will stop transmission. This makes it awkward to use for almost any mobile device or something like a virtual reality headset.

There are a few specific uses considered for the Li-Fi technology.

  • This possibly has more uses in an industrial setting where data could be shared between computers, machines, and robots in such a way as to ensure that the light path doesn’t get broken.
  • The primary contemplated use of the technology is to send large amounts of data between computers and data devices. For example, Li-Fi could be used to transmit a downloaded movie from your computer to a set-top box. This could be a convenient, interference-free way to move data between computers, phones, game consoles, and smart TVs.
  • It can be used at places like public libraries to download books, videos, or other large files to users without having them log onto low-security WiFi networks. It would also be great as a way to hand out eCatalogs and other data files in convention centers and other places where wireless technologies often get bogged down due to user density.
  • Another use is being called CamCom. It would be possible to build Li-Fi into every advertising display at a store and let the nearest light bulb transmit information about the product to shoppers along with specials and coupons. This could be done through an app much more quickly than using QR codes.

The biggest hindrance to the technology today is the state of LEDs. But Haas has been leading researchers from the Universities of Cambridge, Oxford, St. Andrews, and Strathclyde in work to improve LEDs specifically for the purposes of Li-Fi. They have created a better LED that provides almost 4 Gbps operating on just 5 milliwatts of optical output power. These kinds of speeds can only go a very short distance (inches), but they hope that through the use of lenses they will be able to transmit 1.1 Gbps for up to 10 meters.

They are also investigating the use of avalanche photodiodes to create better receivers. An avalanche photodiode works by creating a cascade of electrons whenever it’s hit with a photon. This makes it much easier to detect transmitted data and to cut down on packet loss.

It’s likely at some point within the next few years that we’ll see some market use of the Li-Fi technology. The biggest market hurdle for this and every other short-range transmission technology to overcome is to convince device makers like cellphone companies to build the technology into their devices. This is one of those chicken and egg situations that we often see with new technologies in that it can’t be sold to those who would deploy it, like a store or a library, until the devices that can use it are on the market. Unfortunately for the makers of Li-Fi equipment, the real estate on cellphone chips and other similar devices is already very tightly packed and it is going to take a heck of a sales job to convince cellphone makers that the technology is needed.

Are We Expecting Too Much from WiFi?

I don’t think that a week goes by when I don’t see somebody proposing a new use for WiFi. This leads me to ask if we are starting to ask too much from WiFi, at least in urban areas.

Like all spectrum, WiFi is subject to interference. Most licensed spectrum has strict rules against interference and there are generally very specific rules about how to handle contention if somebody is interfering with a licensed spectrum holder. But WiFi is the wild west of spectrum and it’s assumed there is going to be interference between users. There is no recourse for such interference – it’s fully expected that every user has an equal right to the spectrum and everybody has to live with the consequences.

I look at all of the different uses for WiFi and it’s not too hard to foresee problems developing in real world deployments. Consider some of the following:

  • Just about every home broadband connection now uses WiFi as the way to distribute data around the house between devices.
  • Comcast has designed their home routers to have a second public transmitter in addition to the home network, so these routers initiate two WiFi networks at the same time.
  • There is a lot of commercial outdoor WiFi being built that can bleed over into home networks. For example, Comcast has installed several million hotspots that act to provide convenient connections outside for their landline data customers.
  • Many cities are contemplating building citywide WiFi networks that will provide WiFi for their citizens. There are numerous network deployments by cities, but over the next few years I think we will start seeing the first citywide WiFi networks.
  • Cable companies and other carriers are starting to replace the wires that feed TVs with WiFi. And TVs require a continuous data stream when they are being used.
  • Virtual reality headsets are likely to be fed by WiFi. There are already game consoles using WiFi to connect to the network.
  • There is a new technology that will use WiFi to generate the power for small devices like cellphones. For this technology to be effective the WiFi has to beam continuously.
  • And while not big bandwidth users at this point, a lot of IoT devices are going to count on WiFi to connect to the network.

On top of all of these uses, the NCTA sent a memo to the FCC on June 11 warning of possible interference with WiFi from LTE-U and LAA, cellular technologies that use the same unlicensed spectrum. Outside interference is always possible, and in a band where interference is expected it might be hard for the average user to detect or notice. There is generally nobody monitoring the WiFi bands for interference in the same way that wireless carriers monitor their licensed spectrum.

All of these various uses of the spectrum raise several different concerns:

  • One concern is just plain interference – if you cram too many different WiFi networks into one area, each trying to grab the spectrum, you run into traditional radio interference which cuts down on the effectiveness of the spectrum.
  • WiFi has an interesting way of using spectrum. It is a good spectrum for sharing applications, but that is also its weakness. When there are multiple networks trying to grab the WiFi signal, and multiple user streams within those networks, each gets a ‘fair’ portion of the spectrum, which is negotiated by the various devices and networks. This is a good thing in that it means a lot of simultaneous streams can happen at the same time on WiFi, but it also means that under a busy load the spectrum gets chopped into tiny little streams that can be too small to use (the rough sketch after this list shows the arithmetic). Anybody who has tried to use WiFi in a busy hotel knows what that’s like.
  • All WiFi is channelized, or broken down into channels instead of being one large block of spectrum. The new 802.11ac that is being deployed has only two 160 MHz channels, and once those are full with a big bandwidth draw, say a virtual reality headset, then there won’t be room for a second large bandwidth application. So forget using more than one VR headset at the same time, or in general trying to run more than one large bandwidth-demanding application.
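
Here is the rough sketch promised above of how fair sharing chops up a channel. The channel capacity and the contention overhead are made-up illustrative numbers, not measurements of any particular 802.11 deployment:

# Illustration of fair sharing: one WiFi channel with a fixed amount of
# usable throughput, split roughly evenly among the streams contending for it.
def per_stream_throughput(channel_mbps: float, num_streams: int,
                          contention_overhead: float = 0.2) -> float:
    # contention_overhead is an assumed fraction of airtime lost to
    # collisions and protocol overhead as more devices contend.
    usable = channel_mbps * (1 - contention_overhead)
    return usable / num_streams

# One busy channel delivering roughly 400 Mbps of raw capacity:
for streams in (1, 5, 20, 100):
    print(streams, round(per_stream_throughput(400, streams), 1), "Mbps each")
# 1 -> 320.0, 5 -> 64.0, 20 -> 16.0, 100 -> 3.2 Mbps each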

It’s going to be interesting to see what happens if these problems manifest in homes and businesses. I am imagining a lot of finger-pointing between the various WiFi device companies – when the real problem will be plain old physics.

The Power of Why

I had a conversation with a friend the other day that reminded me of some advice that I have given for a long time. My friend is developing a new kind of software and his coders and programmers are constantly telling him that they can’t solve a particular coding issue. He drives them crazy because any time they tell him they can’t do something, he expects them to be able to tell him why it won’t work. They generally can’t immediately answer this question and so they have to go back and figure out why it can’t be done.

I laughed when he told me this, because it’s something I have been telling company owners to do for years and I might even have been the one to tell him to do this many years ago. When somebody tells you that something can’t be done, you need to make them tell you why. Over the years I have found asking that simple question to be one of the more powerful management tools you can use.

So what is the value in knowing why something doesn’t work? I’ve always found a number of reasons for using this tool:

  • It helps to turn your staff into critical thinkers, because if they know that they are always going to have to explain why something you want won’t work, then they will learn to ask themselves that question before they come and tell you no.
  • And that is important because often, when examining the issue closer, they will find out that perhaps the answer really isn’t no and that there might be another solution they haven’t tried. So making somebody prove that something won’t work often leads to a path to make it work after all.
  • But even if it turns out that the answer is no, then looking closely at why a given solution to a problem wouldn’t work will often let you find another solution, or even a partial solution to your problem. I find that thinking a problem the whole way through is a useful exercise even when it doesn’t produce a solution.
  • This makes better employees, because it forces them to better understand whatever they are working on.

Let me give a simple example of how this might work. Let’s say you ask one of your technicians to set up some kind of special routing for a customer and they come back and tell you that it can’t be done. That first response, that it won’t work, doesn’t give you any usable feedback. If you take it at face value then you are going to have to tell your customer they can’t have what they are asking for. But when you send that technician back to find out why it won’t work, there are a wide range of possible answers that might come back. It may turn out upon pressing them that the technician just doesn’t know how to make it work – which means that they need to seek help from another resource. They might tell you that the technical manual for the router you are using says it won’t work, which is not an acceptable answer unless technical support at the router company can tell you why. They may tell you that you don’t own all of the software or hardware tools needed to make it work – and now you can decide if obtaining those tools makes sense for the application you have in mind. You get the point: understanding why something doesn’t work often will lead you to one or more solutions.

My whole consulting practice revolves around finding ways to make things work. My firm gets questions every day about things clients can’t figure out on their own. We never automatically say that something can’t be done, and for the vast majority of the hard questions we are asked we find a solution. The solution we find may not always be what they want to hear, because the solution might be too expensive or for some other reason won’t fit their needs, but they are usually happy to learn all of the facts.

Give this a try. It’s really easy to ask why something won’t work. But the first few times you do this you are going to get a lot of blank stares from your staff if they have not been asked this question many times before. But if this becomes one of the tools in your management toolbox, then I predict you are going to find out that a lot of the unsolvable problems your staff has identified are solvable after all. That’s what I’ve always found. Just don’t do this so well that nobody ever calls us with the hard questions!

Augmented vs. Virtual Reality

We are about to see the introduction of the new generation of virtual reality machines on the market. Not far behind them will probably be a number of augmented reality devices. These devices are something that network operators should keep an eye on, because they are the next generation of devices that are going to be asking for significant bandwidth.

The term ‘augmented reality’ has been around since the early 1990s and is used to describe any technology that overlays a digital interface over the physical world. Until now, augmented reality has involved projecting opaque holograms to blend into what people see in the real world. Virtual reality takes a very different approach and immerses a person in a fully digital world by projecting stereoscopic 3D images onto a screen in front of your eyes.

A number of virtual reality headsets are going to hit the market late this year into next year:

  • HTC Vive is hoping to hit the market by Christmas of this year. This is being developed in conjunction with Valve. This device will be a VR headset that will incorporate some augmented reality, which will allow a user to move and interact with virtual objects.
  • Oculus Rift, owned by Facebook, is perhaps the most anticipated release and is expected to hit the market sometime in 2016.
  • Sony is planning on releasing Project Morpheus in 1Q 2016. This device will be the first VR device integrated into an existing game console.
  • Samsung will be releasing its Gear VR sometime in 2016. This device is unique in that it’s powered by the Samsung Galaxy smartphone.
  • Razer will be releasing a VR headset based upon open source software that they hope will allow for more content delivery. Dates for market delivery are still not known.

All of these first-generation virtual reality devices are aimed at gaming and, at least for the first few generations, that will be their primary use. As with any new technology, price is going to be an issue for the first-generation devices, but one has to imagine that within a few years these devices might be as common as, or even displace, traditional game consoles. The idea of being totally immersed in a game is going to be very attractive.

There are two big players in the augmented reality market – Microsoft’s HoloLens and the Google-backed Magic Leap. These devices don’t have defined target release dates yet. But the promise for augmented reality is huge. These devices are being touted as perhaps the successor to the smartphone and as such have a huge market potential. The list of potential applications for an augmented reality device is mind-bogglingly large, which must be what attracted Google to buy into Magic Leap.

The Magic Leap device works by beaming images directly onto a user’s retinas, and the strength and intensity of the beam can create the illusion of 3D. But as with Google Glass, a user is also going to be able to see the real world behind the image. This opens up a huge array of possibilities that range from gaming, where the device takes over a large share of the visual space, to the same sorts of communicative and informative functions done by Google Glass.

The big hurdles for augmented reality are how to power the device as well as overcoming the social stigma around wearing a computer in public—who can forget the social stigma that instantly accrued to glassholes, those who wore Google Glass into bars and other public places? As a device it must be small, low power, inconspicuous to use, and still deliver an amazing visual experience to users. It’s probably going to take a while to work out those issues.

The two kinds of devices will compete with each other to some extent on the fringes of the gaming community, and perhaps in areas like providing virtual tours of other places. But for the most part the functions they perform and the markets they chase will be very different.

The Latest on Malware

Cisco has identified a new kind of malware that takes steps to evade being cleansed from systems. The example they provide is the Rombertik malware. It is one of a new breed of malware that actively fights against being detected and removed from devices.

Rombertik acts much like a normal virus in its ability to infect machines. For example, once embedded in one machine in a network it will send phishing emails to infect other machines, and it exhibits other typical malware behavior. But what is special about Rombertik and other new malware is how hard they fight to stay in the system. For example, the virus contains a false-data generator to overwhelm analysis tools, contains tools that can detect and evade a sandbox (a common way to trap and disarm malware), and has a self-destruct mechanism that can kill the infected machine by wiping out the master boot record.

The problem with this new family of malware is that it evades the normal methods of detection. Typical malware detection tools look for telltale signs that a given website, file, or app contains malware. But this new malware is specifically designed to either hide the normal telltale signs, or else to morph into something else when detected. So as this new malware is detected, by the time you try to eradicate it in its original location it has moved somewhere else.

This new discovery is typical of the ongoing cat and mouse game between hackers and malware security companies. The hackers always get a leg up when they come out with something new and they generally can go undetected until somebody finally figures out what they are up to.

This whole process is described well in two reports issued by web security companies. In its State of the Web 2015: Vulnerability Report, Menlo Security reports that 317 million pieces of malware were produced in 2014. In this report they question whether the security industry is really ready to handle new kinds of attacks.

The report says that enterprises spent more than $70 billion on cybersecurity tools in 2014 but still lost nearly $400 billion as a result of cybercrime. They report that the two biggest sources of malware in large businesses come either through web browsing or from email – two things that are nearly impossible to eliminate from corporate life.

Menlo scanned the Alexa top one million web sites (those getting the most traffic) and found the following:

  • 34% of web sites were classified as risky due to running software that is known to be vulnerable to hacking.
  • 6% of websites were found to be serving malware, spam, or are part of a botnet.

The other recent report on web vulnerabilities came from Symantec, which can be downloaded here. Symantec said that hackers no longer need to break down the doors of corporate networks when the keys to hack them are readily available. That mirrors the comments by Menlo Security and is referring to the fact that companies operate software with known vulnerabilities and then take a long time to react when security breaches are announced.

The report says that in 2014 firms took an average of 50 days to implement security patches. Hackers are launching new kinds of malware and then leaping on the vulnerability before patches are in place. The biggest example of this in 2014 was the Heartbleed bug, which hackers were widely exploiting within 4 hours of it hitting the web, while companies took a very long time to come up with a defense. Symantec says there were 24 separate zero-day attacks in 2014 – meaning attacks that exploited a previously unknown vulnerability for which there was no immediate defense.

Symantec reports much the same thing as Menlo Security in that the big vulnerability of malware is what it can do once it is inside of a network. The first piece of malware can hit a network in many different ways, but once there uses a number of sophisticated tools to spread throughout the network.

There is certainly nothing foolproof you can do to keep malware out of your corporate systems. But most of the ways that networks get infected are not through hackers, but through employees. Employees still routinely open spam emails and attachments and respond to phishing emails – so making sure your employees know more about malware and its huge negative impact might be your best defense.

Broadband CPNI

The FCC said before they passed the net neutrality rules that they were going to very lightly regulate broadband providers using Title II. And now, just a few weeks after the new net neutrality rules are in place, we already see the FCC wading into broadband CPNI (customer proprietary network information).

CPNI rules have been around for a few decades in the telephony world. These rules serve a dual purpose: they provide customer confidentiality (meaning that phone companies aren’t supposed to do things like sell lists of their customers), and they protect customer calling information by requiring a customer’s explicit permission to use their data. Of course, we have to wonder if these rules ever had any teeth at all since the large telcos shared everything they had with the NSA. But I guess that is a different topic and it’s obvious that the Patriot Act trumps FCC rules.

The CPNI rules for telephone service are empowered by Section 222 of Title II. It turns out that this is one of the sections of Title II for which the FCC didn’t choose to forbear for broadband, and so now the FCC has opened an investigation into whether they should apply the same, or similar, rules for broadband customers.

It probably is necessary for them to do this, because once Title II went into effect for broadband this gave authority in this area to the FCC. Until now, customer protection for broadband has been under the jurisdiction of the Federal Trade Commission.

There clearly is some cost to complying with CPNI rules, and those costs are not insignificant, especially for smaller carriers. Today any company that sells voice service must maintain, and file with the FCC, a manual showing how they comply with CPNI rules. Further, they have to periodically show that their staff has been trained to protect customer data. If the FCC applies the same rules to ISPs, then every ISP that sells data services is going to incur similar costs.

But one has to wonder if the FCC is going to go further with protecting customer data. In the telephone world usually the only information the carriers save is a record of long distance calls made from and to a given telephone number. Most phone companies don’t track local calls made or received. I also don’t know of any telcos that record the contents of calls, except in those circumstances when a law enforcement subpoena asks them to do so.

But ISPs know everything a customer does in the data world. They know every web site you have visited, every email you have written, everything that you do online. They certainly know more about you than any other party on the web. And so the ISPs have possession of data about customers that most people would not want shared with anybody else. One might think that in the area of protecting customer confidentiality the FCC might make it illegal for an ISP to share this data with anybody else, or perhaps only allow sharing if a customer gives explicit permission.

I have no idea if the larger telcos use or sell this data today. There is nothing currently stopping them from doing so, but I can’t ever recall hearing of companies like Comcast or AT&T selling raw customer data or even metadata. But it’s unnerving to think that they can, and so I personally hope that the FCC CPNI rules explicitly prohibit ISPs from using our data. I further hope that if they need a customer’s permission to use their data, that permission can’t be buried on page 12 of the terms of service you are required to approve in order to use your data service.

What would be even more interesting is if the FCC takes this one step further and doesn’t allow any web company to use your data without getting explicit permission to do so. I have no idea if they even have that authority, but it sure would be a huge shock to the industry if they tried to impose it.

The Law of Accelerating Returns

Ray Kurzweil, a director of engineering at Google, was hired because of his history of predicting the future of technology. According to Kurzweil, his predictions are common sense once one understands what he calls the Law of Accelerating Returns. That law simply says that information technology follows a predictable and exponential trajectory.

This is demonstrated elegantly by Moore’s Law, in which Intel cofounder Gordon Moore predicted in the mid-60s that the number of transistors incorporated in a chip would double every 24 months. His prediction has held true since then.

But this idea doesn’t stop with Moore’s Law. The Law of Accelerating Returns says that this same phenomenon holds true for anything related to information technology and computers. In the ISP world we see evidence of exponential growth everywhere. For example, most ISPs have seen the amount of data downloaded by the average household double every four years, stretching back to the dial-up days.

What I find somewhat amazing is that a lot of people in the telecom industry, and certainly some of our regulators, think linearly while the industry they are working in is progressing exponentially. You can see evidence of this everywhere.

As an example, I see engineers designing new networks to handle today’s network demands ‘plus a little more for growth’. In doing so they almost automatically undersize the network capacity because they don’t grasp the multiplicative effect of exponential growth. If data demand is doubling every four years, and if you buy electronics that you expect to last for ten to twelve years, then you need to design for roughly eight times the data that the network is carrying today. Yet that much future demand just somehow feels intuitively wrong and so the typical engineer will design for something smaller than that.
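
For anyone who wants the arithmetic, demand that doubles every four years grows by a factor of 2^(years/4) over any planning horizon. Here is a tiny sketch of that calculation, using the doubling period assumed above; the horizons are the ones discussed in this section:

# Capacity-planning arithmetic for exponential demand growth.
# The four-year doubling period is the assumption from the text.
def demand_multiplier(years: float, doubling_period_years: float = 4.0) -> float:
    return 2 ** (years / doubling_period_years)

for horizon in (7, 10, 12, 20):
    print(f"{horizon} years out: {demand_multiplier(horizon):.1f}x today's demand")
# 7 years  -> ~3.4x (the CAF build-out window discussed below)
# 10 years -> ~5.7x
# 12 years -> 8.0x  (the electronics-lifetime example above)
# 20 years -> 32.0x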

We certainly see this with policy makers. The FCC recently set the new definition of broadband at 25 Mbps. When I look around at how households use broadband services today, this feels about right. But at the same time, the FCC has agreed to pour billions of dollars through the Connect America Fund to assist the largest telcos in upgrading their rural DSL to 15 Mbps. Not only is that speed not even as fast as today’s definition of broadband, but the telcos have up to seven years to deploy the upgraded technology, during which time the broadband needs of the customers this is intended for will have grown to roughly four times today’s needs. And likely, once the subsidy stops the telcos will say that they are finished upgrading, and this will probably be the last broadband upgrade in those areas for another twenty years, at which point the average household’s broadband needs will be 32 times higher than today.

People see evidence of exponential growth all of the time without it registering as such. Take the example of our cellphones. The broadband and computing power demands expected from our cellphones are growing so quickly that a two-year-old cellphone starts to feel totally inadequate. A lot of people view this as their phone wearing out. But the phones are not deteriorating in two years; instead, we all download new and bigger apps and we are always asking our phones to work harder.

I laud Google and a few others for pushing the idea of gigabit networks. This concept says that we should leap over the exponential curve and build a network today that is already future-proofed. I see networks all over the country that have the capacity to provide much faster speeds than are being sold to customers. I still see cable company networks with tons of customers still sitting at 3 Mbps to 6 Mbps as the basic download speed and fiber networks with customers being sold 10 Mbps to 20 Mbps products. And I have to ask: why?

If the customer demand for broadband is growing exponentially, then the smart carrier will increase speeds to keep up with customer demand. I talk to a lot of carriers who think that it’s fundamentally a mistake to ‘give’ people more broadband speed without charging them more. That is linear thinking in an exponential world. The larger carriers seem to finally be getting this. It wasn’t too many years ago when the CEO of Comcast said that they were only giving people as much broadband speed as they needed, as an excuse for why the company had slow basic data speeds on their networks. But today I see Comcast, Verizon, and a number of other large ISPs increasing speeds across the board as a way to keep customers happy with their product.