The Race to Zero


This is my 500th blog entry, which means I have written several books' worth of words. When I started the blog I feared I might run out of ideas in a few months, but our industry has become so dynamic that I am regularly handed more topics than there are days in the week.

The cloud industry is often characterized by what is being called the race to zero: the steady decline in the price of data storage. Amazon has driven the race by repeatedly cutting its AWS storage prices, and every time it does, the other big cloud companies like Microsoft and Google follow suit.

There are numerous reasons for the price drops, all having to do with improved computer technology. Storage devices have dropped in price steadily, while at the same time a number of new storage technologies have come into use. The large cloud companies have moved to bigger, more efficient data centers to gain economies of scale. And lately the largest companies have been designing their own servers to be faster and more energy efficient, since the energy needed for cooling is one of the largest costs of running a data center.

I remember in the late 90s looking to back up my company LAN offsite for the peace of mind of having a copy of our company records. At that time our data consisted of Word, Excel, PowerPoint, and Outlook files for around twenty people, and I'm sure that wasn't more than a dozen gigabytes in total. I got a quote for $2,000 per month to set up a shadow server that would mimic everything done on my server, backed up once per day. I found that too expensive at the time, so we stayed with daily backups to tape cassettes.

Let's compare that number to today. There are now numerous web services that give away a free terabyte of storage. I can get a free terabyte from Flickr to back up photos. I can get the same thing from Oosah, which lets me store any kind of media, not just pictures. I can get a large amount of free storage from companies like Dropbox to store and share almost anything. And the Chinese company Tencent is offering up to 10 terabytes of storage for free.

It's hard for somebody who doesn't work in a data center to understand all of the cost components of storage. I've seen estimates on tech sites that the cost to store a gigabyte of data dropped from around $9,000 in 1993 to about 3 cents in 2013. However accurate those specific numbers are, they demonstrate the huge decrease in storage costs over the last few decades.
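To put those estimates in perspective, here is a quick back-of-the-envelope calculation using just the two per-gigabyte figures quoted above (rough estimates, not audited prices):

```python
# Back-of-the-envelope math on the quoted per-gigabyte storage prices.
cost_1993 = 9000.00   # dollars per GB in 1993 (estimate quoted above)
cost_2013 = 0.03      # dollars per GB in 2013 (estimate quoted above)
years = 2013 - 1993

drop_factor = cost_1993 / cost_2013
annual_decline = 1 - (cost_2013 / cost_1993) ** (1 / years)

print(f"Price per gigabyte dropped by a factor of {drop_factor:,.0f}")
print(f"That works out to roughly a {annual_decline:.0%} price decline every year")
```

That is a 300,000-fold drop, or roughly a 47% price cut every single year for twenty years.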

But consumers and businesses don’t necessarily see all of these savings, because the industry has gotten smarter and now mostly charges for value-added services rather than the actual storage. Take the backup service Carbonite as an example. Their service will give you unlimited cloud storage for whatever data you have on your computer. Their software then activates each night and backs up whatever changes you made on your computer during the day. This is all done by software and there are no people involved in the process.

Carbonite charges $59.99 per year to back up any one computer. For $99.99 per year you can add in one external hard drive to any one computer. And for $149.99 per year you can back up videos (not included in the other packages) plus they will courier you a copy of your data if you have a crash.

The value of Carbonite is that their software automatically backs you up once a day (something we all forget to do on our own). That is not a complicated process, and external hard drives with the same feature have been available for years. What Carbonite is really selling is the peace of mind of knowing your data is safe in the cloud. It must be a very profitable business, since the cost of the actual data storage is incredibly cheap. Consider how much extra profit they make when somebody pays them $40 more to back up an external hard drive.
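To see how wide that margin might be, here is a purely hypothetical illustration. It assumes a 1 TB external drive, the rough 3-cents-per-gigabyte figure from above, and a four-year life for the back-end disk; none of these are Carbonite's actual numbers.

```python
# Hypothetical margin on the $40/year external-drive add-on. The drive size,
# per-GB cost, and disk lifetime are assumptions, not Carbonite's real costs.
addon_price_per_year = 40.00   # extra charged per year to back up an external drive
drive_size_gb = 1000           # assume a 1 TB external drive
cost_per_gb = 0.03             # rough 2013 cost per GB of raw disk (quoted above)
disk_lifetime_years = 4        # assume the back-end disk is replaced every 4 years

raw_cost_per_year = drive_size_gb * cost_per_gb / disk_lifetime_years
print(f"Raw storage cost: about ${raw_cost_per_year:.2f} per year")
print(f"Add-on price:     ${addon_price_per_year:.2f} per year")
```

Even with generous allowances for bandwidth and overhead, the raw storage is a small fraction of the fee.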

In the business world, the fees paid for the cloud are all about software; storage cost isn't an issue except for someone who wants to store massive amounts of data. One might think the companies in the cloud business are selling offsite storage, but their real revenue comes from selling value-added software that helps you operate your business and manage your data. The storage costs are almost an afterthought these days.

The race to zero is not even close to over. In one of my blogs last week I talked about how magnetized graphene might increase the storage capacity of devices a million-fold. That breakthrough is still in the labs, but it demonstrates that the march toward ever-cheaper storage is far from done. We've come a long way from the 720 KB I used to squeeze onto a floppy disk!

New Technology – Telecom and Computing Breakthroughs

Today I look at some breakthroughs that will lead to better fiber networks and faster computers, the components needed to make our networks faster and more efficient.

Increasing Fiber Capacity. A study from Bell Labs suggests that existing fiber networks could be made 40% more efficient by changing to IP transit routing. Today operators divvy up networks into discrete components. For example, the capacity on a given route may be segmented into distinct dedicated 100 Gig paths that are then used for various discrete purposes. This takes the available bandwidth on a given long-haul fiber and breaks it into pieces, much in the same manner as was done in the past with TDM technology to break data into T1s and DS3s.

The Bell Labs study suggests a significant improvement if the entire bandwidth on a given fiber is treated as one huge data pipe, much in the same manner as might be done with the WAN inside a large business. This makes sense because there is always spare or unused capacity on each segment of the fiber's bandwidth, and putting it all together into one large pipe makes that spare capacity available. Currently Alcatel-Lucent, Telefonica, and Deutsche Telekom are working on gear that will enable the concept.
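The gain from pooling is essentially statistical multiplexing, and a toy simulation shows why it works. The traffic numbers below are made up purely for illustration and do not come from the Bell Labs study; the point is only that one shared pipe absorbs bursts that dedicated slices cannot.

```python
import random

# Toy illustration of pooled versus dedicated capacity (illustrative numbers only).
# Ten services share a 1,000 Gbps fiber, either as ten dedicated 100 Gbps paths
# or as one shared 1,000 Gbps pipe.
random.seed(1)
SEGMENTS = 10
SEGMENT_CAPACITY = 100                       # Gbps per dedicated path
POOL_CAPACITY = SEGMENTS * SEGMENT_CAPACITY

segmented_carried = pooled_carried = offered_total = 0.0
for _ in range(10_000):                      # simulated time intervals
    demands = [random.uniform(20, 160) for _ in range(SEGMENTS)]   # bursty demand per service
    offered_total += sum(demands)
    # Dedicated paths: each service is capped at its own 100 Gbps slice.
    segmented_carried += sum(min(d, SEGMENT_CAPACITY) for d in demands)
    # One big pipe: spare capacity from quiet services absorbs bursts from busy ones.
    pooled_carried += min(sum(demands), POOL_CAPACITY)

print(f"Dedicated 100 Gbps paths carry {segmented_carried / offered_total:.0%} of offered traffic")
print(f"One shared 1,000 Gbps pipe carries {pooled_carried / offered_total:.0%} of offered traffic")
```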

Reducing Interference on Fiber. Researchers at University College London have developed a new set of techniques that reduce interference between different light wave frequencies on fiber. It is the accumulation of interference that requires optical repeaters to be placed on networks to refresh optical signals.

The research team took a fresh approach to how signals are launched onto the fiber, passing the optical signals through a comb generator to create seven equidistantly spaced, frequency-locked signals, each in the form of a 16-QAM super-channel. This reduces the number of distinct light signals on the fiber to those seven channels, which drastically reduces the interference.

The results were spectacular: they were able to generate a signal that could travel 5,890 kilometers, or 3,660 miles, without re-amplification. This has immediate benefits for undersea cables, since finding ways to repeat these signals is costly. But there are applications beyond long-haul fiber, and the team is now looking at ways to use the dense super-channels for cable TV systems, cable modems, and Ethernet connections.

Faster Computer Chips. A research team at MIT has found a way to make multicore chips faster. Multicore chips contain more than one processor and are used today for intense computing needs in places like data centers and in supercomputers.

The improvement comes from a new scheduling technique they call CDCS (computation and data co-scheduling), which distributes data and times computations on the chip more efficiently. The new algorithm places data close to where calculations are performed, reducing the movement of data within the chip. The result is a 46% increase in computing capacity along with a 36% reduction in power consumption. That in turn reduces the need for cooling, which is becoming a major concern and one of the biggest costs at data centers.
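MIT's actual algorithm is far more sophisticated than anything that fits here, but the core idea of putting computation near its data can be sketched with a toy greedy scheduler. The grid layout, tasks, and hop costs below are entirely hypothetical.

```python
import random

# Toy sketch of the co-scheduling idea: place each computation on a core near
# the cache bank holding its data to cut on-chip data movement. The grid,
# tasks, and costs are hypothetical; this is not MIT's CDCS algorithm.
random.seed(7)
GRID = 4                                              # 4x4 grid of cores / cache banks
cores = [(x, y) for x in range(GRID) for y in range(GRID)]

def hops(a, b):
    """Manhattan distance between two on-chip locations."""
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

# Each task's data lives in one cache bank (a grid location).
tasks = [random.choice(cores) for _ in range(len(cores))]

# Naive placement: hand tasks to cores in arbitrary round-robin order.
naive_cost = sum(hops(data, core) for data, core in zip(tasks, cores))

# Data-aware placement: greedily give each task the nearest still-free core.
free = set(cores)
aware_cost = 0
for data in tasks:
    core = min(free, key=lambda c: hops(data, c))
    aware_cost += hops(data, core)
    free.remove(core)

print(f"On-chip hops, naive placement:      {naive_cost}")
print(f"On-chip hops, data-aware placement: {aware_cost}")
```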

Faster Cellphones. Researchers at the University of Texas have found a way to double the speed at which cellphones and other wireless devices can send or receive data. The circuit they have developed lets the cellphone radio operate in 'full-duplex' mode, meaning the radio can send and receive signals at the same time.

Today a cellphone radio can do one or the other, and your phone's radio constantly flips between sending and receiving data. Radios have always worked this way so that the frequencies from the transmitting part of the phone, normally the stronger of the two signals, don't interfere with and drown out the incoming signals.

The new circuit, which they are calling a circulator, isolates the incoming and outgoing signals and acts as a filter to keep the two separate. Circulators have been in use for a long time in devices like radar, but they have required large, bulky magnets made from expensive rare-earth metals. The new circulator devised by the team performs the same function using standard chip components.

This circulator is a tiny standalone device that can be added to any radio chip, where it acts like a traffic manager monitoring and controlling the incoming and outgoing signals. The simple new component is perfect for cellphones, but it will benefit any two-way radio, such as a WiFi router. Since a lot of a cellphone's power goes to flipping between send and receive modes, the new technology ought to also provide a significant improvement in battery life.
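The circulator itself is an analog radio-frequency device, so there is no software to show, but a toy numeric sketch illustrates the problem it solves: the phone's own strong transmit signal swamps the weak incoming one unless the two are kept apart. Here the isolation is modeled as simple subtraction of the known outgoing signal, which is only a stand-in for what the hardware does.

```python
import math

# Toy sketch of why transmit/receive isolation matters. The outgoing signal is
# a million times stronger (in power) than the incoming one; subtracting the
# known outgoing signal stands in for what a circulator does in analog hardware.
samples = range(200)
tx = [1.0 * math.sin(2 * math.pi * 0.05 * n) for n in samples]     # strong outgoing signal
rx = [0.001 * math.sin(2 * math.pi * 0.08 * n) for n in samples]   # weak incoming signal

at_antenna = [t + r for t, r in zip(tx, rx)]        # both signals overlap at the antenna
isolated = [a - t for a, t in zip(at_antenna, tx)]  # remove the known outgoing signal

def power(signal):
    return sum(s * s for s in signal) / len(signal)

print(f"Outgoing signal is {power(tx) / power(rx):,.0f}x stronger than the incoming one")
print(f"After isolation the residual matches the incoming power: {power(isolated) / power(rx):.2f}x")
```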

Million-Fold Increase in Hard Drive Capacity? Researchers at the Naval Research Laboratory have developed a way to magnetize graphene, and this could lead to data storage devices with a million-fold increase in storage for a given device size. Graphene is a one-atom-thick sheet of carbon that can be layered to make multi-dimensional stacked chips.

The scientists have been able to magnetize the graphene by placing it on a layer of silicon and submerging it in a pool of cryogenic ammonia and lithium for about a minute. They then introduce hydrogen, which renders the graphene electromagnetic. The process is adjustable, and with an electron beam you can shave off hydrogen atoms and effectively write on the graphene chip. Today we already have terabyte flash drives. Anybody have a need for an exabyte flash drive?
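For the record, the arithmetic behind that closing question is simple: a million times a terabyte is an exabyte.

```python
# Quick arithmetic on the million-fold claim: 1 TB scaled up a million times is 1 EB.
terabyte = 10 ** 12        # bytes
exabyte = 10 ** 18         # bytes

scaled = terabyte * 1_000_000
print(f"1 TB x 1,000,000 = {scaled / exabyte:.0f} EB")
```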