New Technology – January 2015

In this month’s blog about new technology I focus on innovations having to do with computers. It seems like there are innovations in this area almost every month.

Faster Computing through Chip Flaws. One of the more interesting lines of research at chip manufacturers is to make chips better by making them perform worse. MIT research shows that many of the tasks we perform on computers, such as looking at images or transmitting voice, don’t require perfect accuracy. Yet chips are currently designed to pass on every bit of data for every task.

MIT has shown that introducing flaws into the data path for these kinds of functions can speed up computing time while also cutting the power usage of a chip by as much as 19%. So the MIT researchers have developed a tool they call Chisel, which helps chip designers figure out just how much error they can introduce into any given task. For example, the program will analyze the impact of making mistakes in 1% or 5% of pixels when transmitting pictures and will compare the quality of the finished transmission with the power savings that come from allowing transmission errors.
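
Chisel itself isn’t something I can reproduce here, but a toy sketch in Python shows the trade-off the tool quantifies: deliberately corrupt a chosen fraction of the pixels in an image and measure how much the quality (PSNR) actually suffers. The image, error rates and quality metric below are illustrative assumptions, not anything from the MIT work.

```python
# Toy illustration of the approximate-computing trade-off: flip a chosen
# fraction of pixels to random values and measure how far the result drifts
# from the original (PSNR, in dB). Not MIT's Chisel -- just the general idea.
import numpy as np

def corrupt(image: np.ndarray, error_rate: float, seed: int = 0) -> np.ndarray:
    """Return a copy of `image` with roughly `error_rate` of its pixels randomized."""
    rng = np.random.default_rng(seed)
    noisy = image.copy()
    mask = rng.random(image.shape) < error_rate
    noisy[mask] = rng.integers(0, 256, size=int(mask.sum()), dtype=np.uint8)
    return noisy

def psnr(original: np.ndarray, degraded: np.ndarray) -> float:
    """Peak signal-to-noise ratio; higher means closer to the original image."""
    mse = np.mean((original.astype(float) - degraded.astype(float)) ** 2)
    return float("inf") if mse == 0 else 10 * np.log10(255.0 ** 2 / mse)

# A stand-in "photo" -- any 8-bit grayscale image would do here.
image = np.random.default_rng(1).integers(0, 256, size=(480, 640), dtype=np.uint8)
for rate in (0.01, 0.05):   # the 1% and 5% error rates mentioned above
    print(f"{rate:.0%} pixel errors -> PSNR {psnr(image, corrupt(image, rate)):.1f} dB")
```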

Computers that Don’t Forget. A few companies like Avalanche, Crocus, Samsung and Toshiba make MRAM (magnetoresistive random-access memory) devices that are replacing older RAM and DRAM technologies in chips to provide non-volatile memory. The expectation in the industry is that these kinds of storage devices will replace volatile memory such as DRAM within ten years because they are much faster and use far less energy.

There are a few initiatives working on improved MRAM technologies. NEC and Tohoku University in Sendai, Japan have developed a 3D processor architecture where MRAM layers are combined with logic layers. The chip uses a technology they call Spin-CAM (content-addressable memory) that promises to allow more non-volatile memory with faster access speeds.

KAIST, a public research university in Daejeon, South Korea, has developed a chip they are calling TRAM (topologically switching RAM) that uses a phase-changing supercapacitor to quickly write to non-volatile memory.

Computers with Common Sense. The Paul G. Allen Foundation is awarding grants to projects that aim to teach computers to understand what they see and read. The projects will look at several different fields of machine reasoning to try to understand diagrams, data visualizations, photographs and textbooks.

The grants are part of a larger $79.1 million initiative into artificial intelligence research. This new research fits well into other Allen initiatives in deep learning to allow computers to explain what’s happening in pictures or to classify large sections of text without human supervision.

Quantum Memory. Researchers at the University of Warsaw have developed a quantum memory that will allow the results from quantum computers to be transmitted over distance. Quantum computers operate very differently from conventional Boolean computers in that they deal with probabilities rather than straightforward number crunching. Until now there has been no way to transfer the result of a quantum calculation, because the very act of reducing it to ones and zeros destroys it. For example, there was no way for quantum results to pass through the normal laser amplifiers in a fiber optic network.

The quantum memory consists of a 1-inch by 4-inch glass tube that is coated with rubidium and filled with krypton gas. When hit with a series of three lasers, the quantum information gets imprinted onto the rubidium atoms for a very short time, perhaps a few microseconds. But that is enough time for the data to be re-gathered and forwarded to the next quantum storage device.

Self-Healing Computers. With hacking and malware on the rise, a new line of defense will be to give our computers the ability to heal themselves. Today we use a very static defense system for our computers, consisting mostly of firewalls and virus checking. But anything that slips past those static defenses can be deadly.

There is an initiative at the Department of Homeland Security which is funding the development of a more active defense system that not only detects problems but automatically fights back. The first stage of this new active defense is being called continuous diagnostics and mitigation (CDM). The goal of CDM is to enable each device in the network to monitor itself for signs of having been hacked. The first CDM systems will activate anti-malware software to try to immediately rid the machine of the invader.
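
Real CDM tooling watches far more than files, but the self-monitoring idea can be sketched in a few lines: record a baseline of hashes for files you care about, then periodically re-check them and flag anything that changed. The file list, baseline location and the reaction step below are placeholders of my own, not anything DHS has specified.

```python
# Bare-bones sketch of the CDM "self-monitoring" idea: baseline the hashes of
# critical files, then flag anything that later changes. A real CDM agent also
# watches processes, accounts and network behavior, and kicks off mitigation.
import hashlib
import json
from pathlib import Path

WATCHED = [Path("/etc/hosts"), Path("/etc/resolv.conf")]   # placeholder file list
BASELINE = Path("baseline.json")                           # placeholder location

def digest(path: Path) -> str:
    return hashlib.sha256(path.read_bytes()).hexdigest()

def save_baseline() -> None:
    BASELINE.write_text(json.dumps({str(p): digest(p) for p in WATCHED if p.exists()}))

def changed_files() -> list:
    """Return the watched files whose contents no longer match the baseline."""
    baseline = json.loads(BASELINE.read_text())
    return [name for name, old in baseline.items()
            if not Path(name).exists() or digest(Path(name)) != old]

if __name__ == "__main__":
    if not BASELINE.exists():
        save_baseline()
    for name in changed_files():
        # This is where a real system would trigger mitigation; we just report.
        print(f"ALERT: {name} has changed since the baseline was recorded")
```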

The next step after CDM will be to form a network-wide active defense that will allow networks to provide feedback about threats identified by individual CDM computers. In this next step the whole network will help to fight back against a problem found on one machine in the network. The ultimate goal is to create self-healing computers that continually make sure that all systems and data are exactly as they should be.

Big Carriers Wear Many Hats

I have seen several articles lately that note how carriers like AT&T and Verizon play both sides of the regulatory fence. But this is something that industry insiders who work with these carriers have known for years.

These articles talk about how these carriers are very good at picking and choosing their arguments based upon the fact that they are regulated in multiple ways. Both AT&T and Verizon are unique in the industry because they are certified in almost every way possible in the carrier world: they are incumbent regulated telephone companies; they are wireless carriers; they are long distance companies; both of them operate as a CLEC in some markets where they serve businesses not in their telco footprint; there are times when they are what is called a carrier’s carrier, meaning they sell wholesale connections to other carriers; and finally, they provide broadband, which they deem to be fully non-regulated.

When one of these large carriers makes a statement, the first thing I have always asked upon reading it is which one of these different regulated entities within the company is doing the talking. Each one of those types of entities operates under different rules and this often gives leeway to their policy people to say something that might be truthful for one part of their company and yet be stretching the truth if applied to another.

A lot of people, even those in the industry, assume that these carriers lie a lot, when in many cases they are just doing this regulatory shuffle. I have seen this same behavior for decades and I have come to understand that generally what these companies say is the truth, at least for one of the many hats they wear. The trick is to know who they are talking for, because that lets you respond to them properly when you need to get them to do something. But this behavior drives people who are not from the industry crazy, and they just assume these companies lie a lot.

On a practical note this is something I run into all of the time. For example, I help clients purchase connections to AT&T at places like tandems. One would think that the cost to connect to an AT&T tandem would be the same for everybody, but it can vary widely depending on your own regulatory status and upon which AT&T entity you are buying from. For instance, if you buy from AT&T the phone company then you buy special access connections at tariff rates. However, if you are a CLEC then you might be able to buy at rates from interconnection agreements. If you are carrying wireless traffic you might be able to buy from yet a third set of rates. If you are a large business you can buy either from AT&T the RBOC or from AT&T the CLEC. If you are large enough you can probably negotiate unique rates that are contractual. And AT&T is very good at trying to get you to buy a more expensive connection than you might be entitled to. It’s a very confusing puzzle that comes from the fact that different parts of AT&T are regulated differently.

Several of the articles I saw talked about a recent interaction between AT&T and the FTC. The FTC sued AT&T because they have been throttling their unlimited wireless data customers for years. This has been widely covered in the press. When an unlimited customer hits some threshold like 5 GB of data for the month, their speed is slowed down to the point of barely working. The FTC is tasked with monitoring false or misleading advertising and they say that these plans are not unlimited if the speeds are too slow to use.

But AT&T is claiming that the FTC has no authority to sue them. They say instead that it is the FCC that should be investigating them. Basically, by this claim AT&T is saying that they are a common carrier and subject to Title II rules, which would give the jurisdiction of the case to the FCC.

But wireless data is not subject to Title II regulation. That might change in February if the FCC decides to regulate both wireline and wireless broadband under Title II as part of the net neutrality order. So it’s very interesting to see AT&T claiming in the FTC suit that they are a common carrier, because for wireless data they clearly are not regulated that way today. But most of the rest of their business is covered by Title II.

You know how I said that most of the time you can figure out which entity is doing the talking if you think about it long enough? This is not one of those times, and sometimes they just lie.

Our Aging Internet Protocols

The Internet has changed massively over the last decade. We now see it doing amazing things compared to what it was first designed to do, which was to provide communications within the government and between universities. But the underlying protocols that are still the core of the Internet were designed in an on-line world of emails and bulletin boards.

Those base protocols are always under attack from hackers because they were never designed with safety in mind or for the kinds of uses we see today on the Internet. The original founders of the Internet never foresaw that people with malicious intent would attack the underlying protocols and wreak havoc. In fact, they never expected it to grow much outside their cozy little world.

There is one group now looking at these base protocols. The Core Infrastructure Initiative (CII) was launched in April of 2014 after the Heartbleed bug wreaked havoc across the Internet by exposing a flaw in OpenSSL. There are huge corporations behind this initiative, but unfortunately not yet huge dollars. Companies like Amazon, Adobe, Cisco, Dell, Facebook, Google, HP, IBM, Microsoft and about every other big name in computing and networking are members of the group. The group is currently funding proposals from groups who want to research ways to upgrade and protect the core protocols underlying the Internet. There is not yet a specific agenda or plan to fix all of the protocols, but rather some ad hoc projects. But the hope is that somebody will step up to overhaul these old protocols over time to create a more modern and safer web.

The genesis of the CII is to be able to marshal major resources after the next Heartbleed-like attack. It took the industry too long to fix Heartbleed, and the concept is that if all of the members of the organization mobilize, then major web disruptions can be diagnosed and fixed quickly.

Following are some of the base protocols that have been around since the genesis of the Internet. At times each of these has been the target of hackers and malicious software.

IPv4 to IPv6. I just wrote last week about the depletion of IPv4 addresses. At some future point the industry will throw the switch and kill IPv4, and there is major concern that hackers have already written malicious code to pounce on networks the first day they are solely using IPv6. Hackers have had years to think about how to exploit the change while companies have instead been busy figuring out how to get through the conversion.

BGP: Border Gateway Protocol. BGP is used to coordinate changes in Internet topology and routing. The problem with the protocol is that it’s easily spoofed because nobody can verify whether a specific block of addresses really belongs to the network announcing it. Fixing BGP is a current priority at the Core Infrastructure Initiative.

DNS: Domain Name System. This is the system that translates domain names into IP addresses. DNS is often the target of hacking and is how the Syrian Electronic Army hacked the New York Times. There are serious flaws in the DNS protocol that have been hastily patched but not fixed.
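
What DNS does can be shown in one standard-library call, which also hints at why spoofing it is so effective: the client simply trusts whatever addresses come back. The hostname below is just a placeholder.

```python
# One call shows the whole job of DNS: turn a name into addresses. Nothing in
# the answer proves it wasn't spoofed, which is what makes DNS hijacks work.
import socket

def resolve(name: str) -> list:
    """Return the unique IP addresses (v4 and v6) the local resolver reports."""
    return sorted({info[4][0] for info in socket.getaddrinfo(name, None)})

print(resolve("example.com"))   # prints whatever the local resolver returns
```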

NTP: Network Time Protocol. NTP’s function is to keep clocks in sync between computer networks. In the past, flaws in the system have been used to launch denial-of-service attacks. It appears that this has been fixed for now, but the protocol was not designed for safety and could be exploited again.
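
NTP is a good illustration of how simple, and how completely unauthenticated, these old protocols are: a time request is a single 48-byte UDP packet and the client believes whatever comes back. Here is a minimal client sketch; the pool server name is just a common public placeholder.

```python
# Minimal NTP client: one 48-byte UDP request, one reply, no authentication.
# Nothing in the exchange proves who actually answered.
import socket
import struct
import time

NTP_TO_UNIX = 2208988800   # seconds between the NTP epoch (1900) and Unix epoch (1970)

def ntp_time(server: str = "pool.ntp.org", timeout: float = 5.0) -> float:
    request = b"\x1b" + 47 * b"\0"          # LI=0, version 3, mode 3 (client)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.settimeout(timeout)
        sock.sendto(request, (server, 123))
        reply, _ = sock.recvfrom(512)
    seconds = struct.unpack("!I", reply[40:44])[0]   # server transmit timestamp
    return seconds - NTP_TO_UNIX

print("NTP server says:", time.ctime(ntp_time()))
print("Local clock says:", time.ctime())
```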

SMTP: Simple Mail Transfer Protocol. SMTP is a protocol used to transfer emails between users. The protocol has no inherent safety features and was an early target of hackers. Various add-ons are now used to patch the protocol, but any server not using these patches (and many don’t) can put other networks at risk. Probably the only way to fix this is to find an alternative to email.

SSL: Secure Sockets Layer. SSL was designed to provide encryption protection for application layer connections like HTTP. Interestingly, the protocol has had a replacement, Transport Layer Security (TLS), since 1999. But SSL is still included in most networks to provide backward compatibility and 0.3% of web traffic still uses it. SSL was exploited in the infamous POODLE attack and the easiest way to make this secure would be to finally shut it down.
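
One easy check on any server you care about: open a connection with the operating system’s TLS library and see which protocol version actually gets negotiated; a modern default context refuses SSL 3.0 outright, which is the practical post-POODLE fix. The hostname below is a placeholder.

```python
# Report the protocol version a server actually negotiates. A modern default
# context already refuses SSL 3.0, which is the practical post-POODLE advice.
import socket
import ssl

def negotiated_version(host: str, port: int = 443) -> str:
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=5) as raw:
        with context.wrap_socket(raw, server_hostname=host) as tls:
            return tls.version()     # e.g. "TLSv1.2"

print(negotiated_version("example.com"))
```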

Are Smartphones Bad for Us?

I saw that last week was the eighth anniversary of the day when Steve Jobs introduced the iPhone at MacWorld in San Francisco. Smartphones are so ubiquitous today that it feels like it’s been longer than eight years and it’s already hard to imagine a world without smartphones. Certainly something may come along that is even more amazing, but so far this is the transformational technology of the century.

The iPhone certainly transformed Apple. In 2006, the year before the iPhone was introduced, they had revenues of $19 billion, with the largest product being the iPod at $7.7 billion. Last year Apple had revenues of $182.8 billion with the iPhone producing revenues of $102 billion. iPods were still at a surprising $2.3 billion (who still buys iPods?).

There were smartphones around before the iPhone from companies like Palm and Blackberry. But the packaging of the iPhone caught the eye of the average cellphone user and the smartphone industry exploded. One of my friends bought an iPhone on the day they came out and I remember being very unimpressed. I asked him what it did that was new and the only thing he could come up with was FaceTime – but he didn’t know anybody else who had an iPhone at the time and we couldn’t try it. The original iPhone didn’t have many apps, but that void was quickly filled.

Now that smartphone usage is ubiquitous in the US, we are starting to see studies looking at the impact of using them. Not all of these studies are good news.

Researcher Andrew Lepp at Kent State University looked at how smartphones affect college students. Lepp’s study found that frequent smartphone usage can be linked to increased anxiety, lower grades and generally less happiness. Students who are able to put down their phones are happier and have higher grades. Lepp’s study also showed, unsurprisingly, that students with the highest smartphone usage have worse cardiovascular health – meaning they are in worse physical shape.

Researchers at Michigan State found that work-related smartphone use after 9 PM adversely affects a person’s performance the following day. They found that not taking a break from work results in mental fatigue and lack of engagement the next day. Researchers at Florida State found similar results and postulated that the smartphone backlighting interferes with melatonin, a hormone that regulates falling asleep and staying asleep.

The statistics from various surveys on smartphone usage are eye-opening:

  • 80% of all smartphone users check their phone within 15 minutes of waking.
  • Smartphone users with Facebook check Facebook an average of 14 times per day.
  • A scary 24% of users check their smartphone while driving.
  • 39% of smartphone users use their smartphones in the bathroom (I have no idea what this means).

My own theory is that smartphones perform so many different functions that they can feed many different versions of addictive behavior. People can use a smartphone and get addicted to playing games, or addicted to texting their friends, or addicted to using FaceTime, or addicted to reading sports scores and stories.

It’s not like addiction to technology is new. We all remember people who got addicted to early computer games. Perhaps there is still somebody today in their basement addictively playing Pong. There are many stories of people before smartphones who texted thousands of times per day. These early studies don’t surprise me and I am sure that many more studies will describe even more woes that can be added to the list of how technology can be bad for us.

Okay, I admit I use my smartphone in the bathroom – it’s a good chance to catch up on tech news. But that’s it. I swear!

The Tug-of-War Over Municipal Competition

I wrote back in July that politics had reared its head in the issue of allowing municipalities to compete in broadband. Depending on who is doing the counting, there are 19 to 21 states that restrict local governments from competing with broadband. In some of these states there is an outright ban on competition. In some states a local government can only build wholesale networks to lease out to others. And in some states there are numerous hurdles that make it really hard for a local government to get into the business.

And there are attempts every year to add to the list. There currently is a move at the Missouri legislature to create a total ban against municipal competition. Last year a ban was passed in North Carolina (although it was made to look like it was not a ban).

Politics has entered the fray again recently in a big way. President Obama recently made a speech in Cedar Falls, Iowa in favor of having the FCC eliminate all of the bans on municipal competition. FCC Chairman Tom Wheeler has been talking about this for a long time, and last summer the FCC asked for comments on petitions filed by Chattanooga, Tennessee and Wilson, North Carolina that asked to lift restrictions that stop them from expanding their existing fiber networks.

The President had barely finished his speech when the two Republican Commissioners at the FCC issued formal statements against the idea. Commissioner Michael O’Rielly said, “It is clear that this Administration doesn’t believe in the independent nature of the FCC. It is disappointing that the Commission’s leadership is without a sufficient backbone to do what is right and reject this blatant and unnecessary interference designed to further a political goal. Substantively, this missive is completely without statutory authority and would be a good candidate for court review, if adopted. In reality, this debate is about preempting a state’s right to prevent taxpayer rip-offs. Municipal broadband has never proven to be the panacea that supporters claim and the Administration now boasts. Instead, we have seen a long track record of projects costing more than expected and delivering less than promised.”

And Commissioner Ajit Pai said: “As an independent agency, the FCC must make its decisions based on the law, not political convenience. And U.S. Supreme Court precedent makes clear that the Commission has no authority to preempt state restrictions on municipal broadband projects. The FCC instead should focus on removing regulatory barriers to broadband deployment by the private sector.”

Obviously the President and these Commissioners disagree about the ability of the FCC to preempt state law. I have no idea which side is right about this and I assume that if the FCC passes this, the Supreme Court will eventually decide who is right.

But what I find sad about this is that so many telecom issues are now partisan and are being argued blindly along party lines rather than being looked at for their merits. Municipal broadband is certainly one such issue.

While Commissioner O’Rielly says there is a long track record of poor performance by municipal broadband networks, the facts say otherwise. There are well over 100 municipal networks that are offering fiber to every home and business in their towns and you can count the ones that have gotten into trouble on one hand. If you look at competitive commercial broadband ventures the failure rate has to be higher than that. One of the major premises of competition is that it comes with no guarantee of success. But some communities want broadband badly enough to take this risk. And even where there has been failure, the towns still end up with a fiber network. I think the citizens of Provo are happy to now have Google fiber, which started out with a municipal system that performed poorly.

The whole anti-municipal effort starts with a handful of huge telcos and cable companies that don’t want municipal competition. In reality, these companies are against any competition and they do whatever they can to squelch commercial competition as well. But these companies are very good at lobbying and they are directly behind the recent efforts in states to expand the ban against municipal broadband.

Even though I think every town in the U.S. ought to have fiber broadband, even I am skeptical about the idea of some of the largest cities in the country being able to compete in the broadband business. One only has to look at what is happening in Austin, Texas, where there are now four different companies offering fast broadband in the wake of the Google announcement to build there. I don’t know that a large city could handle that kind of competition. But no large city has ever come close to building a broadband network and this issue is really about small-town America.

There are tens of thousands of little towns that have been left behind in broadband deployment. While big cities now have 100 Mbps cable modems or even gigabit fiber, these small towns have data speeds that are already not adequate and which fall farther behind each year. The rate of household broadband usage is doubling every three years and places that have broadband below 10 Mbps (often way below that) are going to be left in the economic wastelands if they don’t get broadband. They will lose businesses and jobs and their kids are going to grow up without the same technical skills that everybody else takes for granted.

That is what ought to be debated, but instead opinions on the issue are split down party lines. Why in the world would a rural Republican congressman not want his local communities to build broadband if nobody else will do it for them? I honestly don’t get it, but I feel the same confusion about most issues where partisan politics overrides logic. There are arguments to be made both for and against overriding state bans on municipal broadband. But if this were a fair debate there would be some people from both parties on each side of the issue.

Finally Time to Convert to IPv6?

In early 2011 the Internet Assigned Numbers Authority (IANA) allocated the last blocks of IPv4 addresses to the regional Internet registries and warned that at historic rates of usage all those numbers would get gobbled up by ISPs within a few months. They further foresaw all sorts of new demands for IP addresses from new industry products like wearables, BYOD devices being connected to corporate WANs, an explosion of smartphones in the developing world and the early stages of the Internet of Things.

The IANA warned then that ISPs should begin migrating to IPv6 to avoid running out of IP addresses. But here we are almost four years later and a lot of ISPs still have not converted to IPv6. And yet somehow we are not quite out of numbers. The American Registry for Internet Numbers (ARIN) for the United States and Canada is still handing out IP addresses even today. How is that possible?

The major Internet players in the market have developed ways to conserve and reclaim IP addresses. At the end of 2013 ARIN still had 24 million addresses available. ARIN has been able to stretch the numbers by doing things like reclaiming IP addresses from dead ISPs, and by doling out IP addresses in much smaller blocks than it did historically. ARIN was predicting that those addresses would be gone before the end of 2014. But again the industry confounded them and there were still 16 million IP addresses left at the end of 2014. You can see the count of available addresses at this web site, which is updated weekly.

So is now finally the time to convert to IPv6, or can the industry stretch this further? The issue is going to be of most concern for growing networks that need a lot of new addresses. If you are growing you should probably convert to IPv6 before you find yourself stopped dead due to a lack of IP addresses. ISPs that are not growing are probably good for some time since they can usually reuse numbers abandoned by old customers and assign them to new ones.

Why haven’t more ISPs converted to IPv6? There are a number of reasons.

  • IPv6 is not backwards compatible, and once you convert you need to run what is called a dual stack that will process both IPv4 and IPv6 addresses (see the sketch after this list). And you will have to do that until IPv4 addresses are finally dead.
  • The conversion to IPv6 can be expensive. Every part of your network needs to be IPv6 compatible – that means core hardware, end-user hardware, your Internet backbone provider and even content providers on the web.
  • Not all of the content on the Internet is IPv6 compatible. Almost every major site like Google, Yahoo, Facebook and anybody else with a household name is now IPv6 compatible, but there are older content providers who have not bothered, and maybe who will never bother to make the conversion. (And it’s not so much the content, but the servers they sit on).
  • There is no problem buying new hardware, so any new gear being installed today is already IPv6 compatible. But this is no help for older routers and gear that are not easily upgraded, and many companies are holding off to avoid the capital outlay for the upgrades that will be needed to convert to IPv6. There probably is no way to upgrade a 10-year-old DSL modem or a DOCSIS 1.1 modem to IPv6 without changing devices.
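
To make the dual-stack point from the first bullet concrete, here is a minimal sketch of a listener that speaks both protocols at once: a single IPv6 socket that also accepts IPv4 clients as mapped addresses. This is just one application-level piece of running dual stack, and it assumes the operating system allows IPV6_V6ONLY to be turned off; the port number is arbitrary.

```python
# Minimal dual-stack TCP listener: one IPv6 socket that also accepts IPv4
# clients (they show up as ::ffff:a.b.c.d mapped addresses). Assumes the OS
# lets IPV6_V6ONLY be disabled; the port number is arbitrary.
import socket

def dual_stack_listener(port: int = 8080) -> socket.socket:
    sock = socket.socket(socket.AF_INET6, socket.SOCK_STREAM)
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
    sock.setsockopt(socket.IPPROTO_IPV6, socket.IPV6_V6ONLY, 0)   # accept IPv4 too
    sock.bind(("::", port))    # "::" = every IPv6 interface, plus mapped IPv4
    sock.listen(5)
    return sock

server = dual_stack_listener()
print("Listening on", server.getsockname())
conn, addr = server.accept()   # an IPv4 client appears as ('::ffff:192.0.2.1', ...)
print("Connection from", addr)
conn.close()
```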

But we are finally seeing some large players converting. For example, Comcast converted all of their cable modem customers to IPv6 during 2014. But some other large ISPs are still holding back. Other parts of the industry have converted as well – most smartphones and new game consoles now use IPv6.

Now that some large ISPs have converted and most of the web has converted, most industry experts expect the rate of conversion to accelerate. Small ISPs need to pay attention to what is happening with IPv6 because you don’t want to be the last one to make the conversion. Once most of the rest of the world has converted you can expect to start having compatibility problems in unexpected places. And eventually the large carriers are going to declare IPv4 dead and cut it off. I’m thinking that most small ISPs ought to plan on converting no later than next year. If you wait longer than that it becomes a crap shoot.

The GAO Studies Data Caps

The Government Accountability Office (GAO) issued a report near the end of last year that said that broadband customers don’t want caps on their data. That’s not a surprising finding. The report was generated in response to a request from Rep. Anna Eshoo of California. So the GAO conducted a survey and held focus groups looking into the issue. They also talked to the large ISPs and the wireless companies about their networks.

They concluded that there is no reason to have caps on landline bandwidth because networks generally do not experience congestion. The large ISPs used the congestion argument a number of years ago when they first experimented with data caps. But the large cable companies and telcos admit that congestion is no longer an issue.

The average customer understands this as well. You don’t have to think back many years to a time when your home Internet would bog down every night after dinner when most homes jumped onto the Internet. But there have been big changes in the industry that have gotten rid of that congestion.

Probably foremost has been the cost to connect to the Internet backbone. Just five years ago I had many small ISP clients who were paying as much as $15 per dedicated megabit for raw bandwidth from the Internet. That has dropped significantly and the price varies from less than a dollar to a few dollars depending upon the size and the remoteness of the ISP.

During this same time there has been an explosion in customers watching video and video is by far the predominant use today of an ISP network. Video customers won’t tolerate congestion without yelling loudly because if the bandwidth drops too much, video won’t work. When Internet browsing consisted mostly of looking at web pages customers were less critical when their bandwidth slowed down.

So ISPs today mostly over-engineer their networks and they generally try to provide a bandwidth cushion of 20% or more above what customers normally need. This has also become easier for ISPs because of the way they now pay for bandwidth. Ten years ago an ISP paid for the peak usage their network hit during the month. They paid for the whole month at the bandwidth demand they experienced during their busiest hour of the month (or perhaps an average of a couple of the busiest hours). Slowing customers down during the busiest times under that pricing structure could save an ISP a lot of money. But now that bandwidth is cheap, ISPs routinely just buy data pipes that are larger than what they need in order to provide a cushion. Because the ISP has already paid for the bandwidth to provide that cushion, there is zero incremental cost when a customer crosses over some arbitrary bandwidth threshold. The ISP has paid for the bandwidth whether it’s used or not.
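
Here is a rough sketch of the two billing models described above: under the old peak-based model, shaving the busiest hours saved real money, while under a flat oversized pipe the monthly cost is identical whether or not customers ever push into the cushion. The traffic samples and the price per Mbps are invented for illustration.

```python
# The two billing models described above, with invented numbers. Old model:
# pay for the month at the demand seen in the busiest hours. New model: buy a
# flat pipe about 20% bigger than the normal peak, so extra use costs nothing.
hourly_mbps = [400] * 600 + [700] * 100 + [950] * 20   # a fake month of hourly demand
price_per_mbps = 2.00                                  # assumed $ per Mbps per month

busiest_hours = sorted(hourly_mbps, reverse=True)[:3]
peak_billed = sum(busiest_hours) / len(busiest_hours)
print(f"Peak-based bill: {peak_billed:.0f} Mbps -> ${peak_billed * price_per_mbps:,.0f}")

flat_pipe = max(hourly_mbps) * 1.2                     # ~20% cushion over the peak
print(f"Flat-pipe bill:  {flat_pipe:.0f} Mbps -> ${flat_pipe * price_per_mbps:,.0f}")
```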

This is not to say that there is never congestion. Some rural networks, particularly those run by the largest companies, are still poorly engineered and still have evening congestion. And there are always those extraordinary days when more people use the Internet than average, like when there is some big news event. But for the most part congestion is gone, and the idea of data caps should have gone away with it.

If the data caps aren’t about congestion then they can only be a way for ISPs to charge more money. There is no other explanation. ISPs with data caps are taking advantage of their biggest users to nail them for using the Internet in the way it was intended to be used.

Consider a cord cutter household. Derrel, my VP of Engineering, has cut the cord and his family, including five kids, uses OTT services like Netflix as their only form of television viewing. The average streaming video in the US uses between 1 GB and 2.3 GB per hour, depending upon the quality of the stream. For ease of calculation let’s call that 1.5 GB per hour. If Derrel’s family watches video for three hours per day he would use 135 GB in a month, and at six hours per day 270 GB. Plus he would still use bandwidth for other things like web browsing, emails with attachments, backing up data in the cloud, etc. Since Derrel works at home and I send him a lot of huge files, let’s say that he uses 100 GB a month for these functions.
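
Written out, the back-of-the-envelope math from the paragraph above looks like this; every input is an assumption from the text, not measured data.

```python
# Back-of-the-envelope monthly usage for a cord-cutter household. All inputs
# are the assumptions from the paragraph above, not measured data.
GB_PER_HOUR = 1.5        # assumed average for one streaming video stream
DAYS_PER_MONTH = 30
OTHER_USE_GB = 100       # browsing, email, cloud backup, large work files

for hours_per_day in (3, 6):
    video_gb = GB_PER_HOUR * hours_per_day * DAYS_PER_MONTH
    total_gb = video_gb + OTHER_USE_GB
    print(f"{hours_per_day} hours of video per day: {video_gb:.0f} GB video "
          f"+ {OTHER_USE_GB} GB other = {total_gb:.0f} GB per month")
# 3 hours/day -> 235 GB total; 6 hours/day -> 370 GB total
```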

And so Derrel might be using 370 GB a month if his family watches 6 hours of streaming video per day, and more if they watch TV more than that. How does this compare to the data caps in the market?

  • Comcast had a data cap of 300 GB. They removed it after public outcry but are back testing it again in some southern markets. It just came back onto my bill in Florida.
  • AT&T Uverse customers have a monthly cap of 250 GB while AT&T DSL customers have a cap of 150 GB.
  • CenturyLink has a cap of 250 GB on any plan that delivers more than 1.5 Mbps.
  • Cox has caps that range between 50 GB and 400 GB depending upon the plan.
  • Charter has caps between 100 GB and 500 GB.
  • Suddenlink has caps between 150 GB and 350 GB.
  • Mediacom has caps between 250 GB and 999 GB.
  • Cable One has caps between 300 GB and 500 GB.

Derrel’s household, which is probably pretty representative of a cord cutter household, would almost certainly be over the monthly data cap for all of these providers. Note that there are plenty of providers that don’t have data caps. Companies like Verizon and Frontier don’t have them. But I feel certain that if data caps start producing a lot of revenue you will see those companies look at data caps too.

Data caps on landline data really make me mad because I understand how networks are engineered and also how ISPs buy their underlying data. This is nothing more than trying to find a way to squeeze more money out of the data product. It lets ISPs advertise a low price but charge a lot of their customers more than that.

The Explosion of Malware

It seems the on-line world is getting more dangerous for end-users and ISPs. Numerous industry sources report a huge increase in malware over the last two years. AV-Test, which tests the effectiveness of anti-virus software, says that their software detected 143 million cases of malware last year, up 73% from the year before. In 2012 they saw only 34 million. Over the last two years they found more malware than in the previous ten years combined. Another security software vendor, Kaspersky, said that it saw a fourfold increase in mobile malware last year.

What’s behind this exponential increase in malware? Experts cite several reasons:

  • This is partially due to the way that antivirus software works. It generally is designed to look for specific pieces of software that have been identified as being malicious. But hackers have figured this out and they now make minor changes to the form of the software, without changing its function, to get it to slip past the antivirus software (a toy demonstration follows this list).
  • Some hackers are now encrypting their malware to make it harder for antivirus software to detect.
  • Hackers are now routinely launching watering hole attacks, where they compromise a website that their intended targets are known to visit and use it to serve malware, which they then hope spreads from there.
  • It’s getting easier for hackers to obtain the code of malware. It’s published all over the web or is widely for sale giving new hackers the ability to be up and running without having to develop new code.
  • There is a new kind of tracking cookie called a zombie cookie because it comes back after being deleted. The best known case of this is tracking being done by Turn which is putting this software on Verizon Wireless cell phones.
  • Malware is being delivered in new ways. For instance, it used to be mandatory for malware to somehow be downloaded, such as by opening an attachment from spam. But in the last few years there are new delivery methods, like attaching malware to remnant ad space on web sites so that it downloads automatically when somebody opens a popular web page. Cisco just warned that they see social media becoming the newest big source of malware in 2015.
  • Malware isn’t just for computers any longer. Cisco warns that the biggest new target for malware this year is going to be cell phones and mobile devices. And they believe Apple is going to be a big target. Cisco and others have been warning for several years that the connected devices that are part of the early Internet of Things are also almost all vulnerable to hacking.
  • Due to dramatic cases where millions of credit card numbers and passwords have been stolen, hackers now have reason to target specific people, to do things like empty their bank accounts, rather than always attacking the public at large.
  • Cyber-warfare has hordes of government hackers from numerous countries unleashing malware at each other and the rest of us are often collateral damage.
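
The first bullet in the list above is easy to demonstrate: a classic signature is just a fingerprint of a known-bad file, so changing a single character produces a completely different fingerprint and the “new” variant sails past the blacklist. The byte strings below stand in for real malware samples.

```python
# Toy demonstration of signature-based detection: the blacklist holds exact
# fingerprints (hashes) of known-bad files, so a one-character change defeats it.
import hashlib

KNOWN_BAD = {hashlib.sha256(b"pretend this is malware").hexdigest()}

def flagged(sample: bytes) -> bool:
    """Signature check: is this exact file on the blacklist?"""
    return hashlib.sha256(sample).hexdigest() in KNOWN_BAD

original = b"pretend this is malware"
variant = b"Pretend this is malware"   # one character changed, same behavior

print(flagged(original))   # True  -- the known sample is caught
print(flagged(variant))    # False -- the trivially altered copy slips through
```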

The scary thing about all of this is that the malware purveyors seem to be getting ahead of the malware police and there seems to be a lot of malware that isn’t being caught by antivirus programs. This has always been a cat and mouse game, but right now we are at one of those dangerous places where the bad guys are ahead.

Larger businesses have responded to the increase in malware by having malware attack plans. These are step-by-step plans of what to do during and after an attack on their systems. These plans include a lot of common sense ideas like backing up data often, making sure all software is licensed and up to date, and even little things like making sure that there are hard copies of contact information for employees and customers should systems go offline.

But there really is no way to plan for this on a home computer and if you get infected with bad enough software you are going to probably be paying somebody to clean your machine. It’s hard to know what to do other than maintaining a virus checker and backing up data.

Why Change the Definition of Broadband?

The FCC is going to vote at its January 29th meeting to possibly increase the definition of broadband from 4 Mbps download and 1 Mbps upload to as much as 25 Mbps download and 3 Mbps upload. The higher speeds are what Chairman Tom Wheeler favors and were contained in the first draft of the Annual Broadband Progress Report that goes to Congress each year.

This proposal has me scratching my head because the same FCC just announced a few weeks ago that the large price-cap telcos are going to qualify for the $9 billion in new funding from the Connect America Fund by deploying technology capable of providing speeds of 10 Mbps download and 1 Mbps upload.

I am having trouble getting my head around that disconnect. The FCC is willing to spend a huge amount of money, spread over as many as seven years on the giant telcos that are promising to deliver 10/1 Mbps service to rural areas. If at the same time the FCC changes the definition of broadband, then those upgraded connections are not even going to be considered as broadband.

To make this worse, it’s almost certain that sometime during the next seven years the definition of broadband will be increased again, making any technology that delivers only 10 Mbps seem really slow and outdated by the end of seven years.

I understand the FCC’s dilemma a little. The big telcos are the ones that serve huge portions of rural America and so the FCC is thinking that luring them into serving at least 10/1 Mbps broadband is better than nothing. Unfortunately, that’s all it is: just better than nothing.

It seems to me before we hand the large telcos that money that we ought to first see if somebody else is willing to take the same money to build fiber to those same rural areas. $9 billion is a lot of money and it would go a long way towards seeding a lot of rural fiber projects. But the current Connect America Fund rules say that if the big telcos accept the CAF money that nobody else has a shot at it.

It’s not like there aren’t companies willing to build faster facilities in rural America. There are plenty of independent telephone companies, municipalities and electric cooperatives that would think about building rural fiber if they got help with the funding. It’s my understanding that there were hundreds of applicants for the FCC’s recent experimental grants who offered to build rural fiber networks. Wouldn’t it make a lot more sense to give these companies a chance to compete with the big telcos for the $9 billion?

Let’s face it. If the big telcos upgrade rural America to 10 Mbps, this is their last hurrah. They won’t ever be doing additional upgrades in those areas. And so the FCC is dooming these areas to those speeds for decades to come.

The FCC’s own numbers say that the average household today already needs at least 10 Mbps. And we know that bandwidth utilization in households is doubling every three years. So if a household needs 10 Mbps today, by the end of the seven years of CAF II funding it is going to need nearly 50 Mbps.
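
The projection behind that claim, assuming household demand really does keep doubling every three years from the FCC’s 10 Mbps starting point:

```python
# Household demand doubling every three years, starting from 10 Mbps today.
start_mbps = 10
doubling_period_years = 3

for years in (0, 3, 7):
    need = start_mbps * 2 ** (years / doubling_period_years)
    print(f"Year {years}: ~{need:.0f} Mbps needed")
# Year 0: ~10 Mbps; Year 3: ~20 Mbps; Year 7: ~50 Mbps
```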

Meanwhile, seven years from now there will be a lot of urban and suburban households that can buy 1 Gbps. And the ones who can’t get that will probably be able to buy 100 Mbps or more.

These rural areas are already way behind the cities. Some of the areas that will be built by CAF have either no broadband or else slow connections at maybe 1 or 2 Mbps. So upgrading them to 10 Mbps is going to feel like a big improvement to those households. But almost by the time the ink dries on those projects those areas are going to be further behind the urban areas than they are today.

I don’t know why we are having a federal program that is supporting rural DSL. DSL isn’t inherently bad, and it’s reported that there are places in urban areas where AT&T is now squeezing several hundred Mbps out of DSL. But that is not what is going to happen over the older wires and the longer distances in rural America. The FCC wants to pay the big telcos to upgrade the electronics on wires that are at least fifty years old and that degrade a little more every year.

I’m actually not against using CAF funding to upgrade DSL in areas where nobody else is willing to do something faster. But I can’t understand why we aren’t first holding an auction for serving these areas, with speed as the determining factor in who gets the federal funding. Under that kind of auction most of the money would probably still go to the telcos, but the money might also bring fiber to a million or more rural households – and that would be real progress. Rural America is doomed to remain behind unless it gets fiber. And $9 billion would be a great start towards building that fiber.

I guess the main question this raises for me is why the FCC is changing the definition of broadband. If 25 Mbps is to mean anything then I would think that the FCC would not fund anything that isn’t considered broadband. Otherwise, it’s just a goal that has little meaning. It’s something the big telcos can wink at while they get paid for deploying something that is not even broadband.