Future Technology – May 2018

I’ve seen a lot of articles recently that promise big improvements in computer speeds, power consumption, data storage, etc.

Smaller Transistors. There has been an assumption that we are at the end of Moore’s Law because we are reaching the limit on how small transistors can be made. The smallest commercially available transistors today measure 10 nanometers. The smallest theoretical size for silicon transistors is around 7 nm, since below that size a transistor can’t contain the electron flow due to a phenomenon called quantum tunneling.

However, scientists at the Department of Energy’s Lawrence Berkeley National Laboratory have developed a 1 nanometer transistor gate, dramatically smaller than today’s silicon transistors. The scientists used molybdenum disulfide, a lubricant commonly used in auto shops. Combining this material with carbon nanotubes allows electrons to be controlled across the 1 nm gate. Much work is still needed to go from lab to production, but this is the biggest breakthrough in transistor size in many years, and if it pans out it will provide a few more turns of Moore’s Law.

Better Data Storage. A team of scientists at the National University of Singapore has developed a technology that could be a leap forward in data storage. The breakthrough uses skyrmions, which were first identified in 2009. The scientists combined cobalt and palladium into a film that is capable of housing the otherwise unstable skyrmions at room temperature.

Once stabilized, the skyrmions, at only a few nanometers in size, can be used to store data. If these films can be stacked, they would provide data storage with 100 times the density of current storage media. We need better storage since the amount of data we want to store is immense and is expected to increase 10-fold over the next decade.

Energy Efficient Computers. Ralph Merkle, Robert Freitas and others have created a theoretical design for a molecular computer that would be 100 billion times more energy efficient than today’s most energy efficient computers. The design is for a mechanical computer with tiny physical gates, built at the molecular level, that mechanically open and close to create circuits. This structure would allow the creation of the basic components of computing – AND, NAND, NOR, NOT, OR, XNOR and XOR gates – without electronic components.

Today’s computers create heat due to the electrical resistance in components like transistors, and it’s this resistance that drives the huge electricity bills needed to operate and then cool big data centers. Mechanical computers create far less heat from the process of opening and closing logic gates, and the remaining friction can be nearly eliminated by building the gates at the molecular level.
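As a side note on why simple open/close gates are enough: every gate type listed above can be composed from a single universal gate (NAND). A minimal Python sketch, purely to illustrate the logic and not the molecular design itself:

```python
# Every gate in the list above can be built from NAND alone, so one
# reliable mechanical open/close mechanism suffices for computing.
def NAND(a, b): return not (a and b)
def NOT(a):     return NAND(a, a)
def AND(a, b):  return NOT(NAND(a, b))
def OR(a, b):   return NAND(NOT(a), NOT(b))
def NOR(a, b):  return NOT(OR(a, b))
def XOR(a, b):  return AND(OR(a, b), NAND(a, b))
def XNOR(a, b): return NOT(XOR(a, b))
```

This universality is the same reason early electronic computers could standardize on a small set of gate designs.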

More Powerful Supercomputers. Scientists at Rice University and the University of Illinois at Urbana-Champaign have developed a process that significantly lowers power requirements while making supercomputers more efficient. The process uses a mathematical technique developed in the 1600s by Isaac Newton and Joseph Raphson that cuts down on the number of calculations done by a computer. Computers normally carry every calculation out to the seventh or eighth decimal place, but the Newton-Raphson tool can reduce the calculations to only the third or fourth decimal place while also increasing the accuracy of the results by three orders of magnitude (1,000 times).

This method drastically reduces the amount of time needed to process data, which makes the supercomputer faster while drastically reducing the energy needed to perform a given calculation. This has huge implications for complex simulations such as weather forecasting programs that crunch huge amounts of data. Such programs can be run much more quickly while producing significantly more accurate results.
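For a feel for the Newton-Raphson method itself (a simple Python sketch, not the researchers’ actual implementation), each iteration refines a root estimate, and convergence is fast enough that only a handful of reduced-precision steps are needed:

```python
# Newton-Raphson iteration: x_{n+1} = x_n - f(x_n) / f'(x_n).
# Convergence is quadratic, so a few steps reach the third-or-fourth
# decimal place precision described above.
def newton_raphson(f, df, x0, tol=1e-4, max_iter=20):
    x = x0
    for i in range(max_iter):
        step = f(x) / df(x)
        x -= step
        if abs(step) < tol:
            return x, i + 1   # root estimate and iterations used
    return x, max_iter

# Example: solve x^2 - 2 = 0, i.e. approximate the square root of 2.
root, iters = newton_raphson(lambda x: x * x - 2, lambda x: 2 * x, 1.0)
```

Starting from a guess of 1.0, the estimate lands within a few decimal places of the true square root of 2 in only a few iterations.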

The Mature Telco

All businesses go through similar phases – start-up, growth and maturity – and it’s important to understand which stage your business is in. When I first got into the industry in the 70s many small telcos had reached maturity. These companies were 50 to 75 years old. They had copper networks that were in good shape, and there was no technology on the horizon that was going to threaten or compete with copper. As monopolies they knew their customers and they rarely changed products or prices. Their businesses were predictable from day to day and even from year to year.

But like many industries, the telco industry got swept up in all of the changes that came from the constant improvements in computer chips. This technology revolution that started around 1980 has produced chips that doubled in density every couple of years (Moore’s Law). And that brought us the electronics revolution of computers, smartphones, the Internet and the cloud.

And telco technology improved just like other electronics. We saw the widespread introduction of fiber optics into the network. We saw competition spring up from the coaxial networks of the cable companies (which were also improving along with the telcos). We saw customer demand for broadband grow until telecom networks became data networks far more than voice networks. These technology changes mean the industry has been in turmoil since 1980. For every new technology we saw, we knew that a few years later something newer and better would come along. We saw most of the historic telco vendors like Nortel and AT&T disappear in that turmoil. And most (but not all) small telcos were swept up by these changes and were no longer as predictable as before.

But we are now seeing a slowdown of the constant technology upgrades. Moore’s law is finally starting to slow a bit, and experts say there will likely be only a few more doublings of computer chip technology. More importantly, a lot of small telcos have built, or soon will build, fiber networks to replace their copper. And it is these fiber networks that are starting to bring telcos back to the mature stage again. Companies that build fiber networks know that they are not facing another major technology upgrade for a long time. We don’t even know how long modern fiber will last, but I’ve talked to scientists who say they expect it to be functional for 75 or even 100 years. We’ve also seen that fiber electronics last a lot longer than we once expected. I know companies that are still operating fiber electronics built in the early 2000s – and which are still not showing any signs of failure.

So companies with fiber networks can now feel secure that they won’t be facing major future capital spending. They can hunker down and pay off any debt incurred to build the fiber. Future electronics upgrades are liable to be introduced gradually rather than as forklift upgrades. We also see both cable TV and telephone services moving to the cloud, and telcos can buy these services wholesale from the cloud rather than operating headends and switches.

The industry as a whole still has some turmoil. The whole regulatory scheme that has driven telco revenues is changing. Voice regulations are being phased out but we are now seeing some new regulations for broadband. The biggest change for small telcos is that they are losing subsidies and the cost-based settlements that helped to fund their companies.

But companies that have built fiber and that will be solvent after the shakeout of the changes in settlement revenues are going to become those mature companies again. Companies can remain in growth mode if they continue to expand geographically. But once growth stops, a fiber-based company will become a mature company.

As mature companies they need to shift their focus from building and upgrading to customer service. As long as they can keep their customers happy, these mature companies will likely have a long and profitable future in front of them. The transition can be hard if a company doesn’t recognize it. For instance, many companies will keep construction crews and other remnants of their expansion days on board even though they are likely never to need them again. So it’s important for a company that is not likely to grow to take a hard look into the future and make the changes necessary to take best advantage of again being a mature company.

The End of Moore’s Law

I’ve been meaning to write this blog for a while. It is now commonly acknowledged that we are nearing the end of Moore’s law. Moore’s law is named after Gordon Moore, an engineer who later was one of the founders of Intel. In 1965, Moore made the observation that the number of transistors that could be etched onto an integrated circuit would double every two years. He originally thought this would last for a decade or so, but the microchip industry has fulfilled his prediction for over 50 years now.

In 1965 a single transistor cost about $8 in today’s dollars and now, after so many years of doubling, we can put billions of transistors onto a chip, at a tiny fraction of a cent each. It was the belief that chips could continue to improve that helped to launch Silicon Valley, and that enabled the huge array of technological changes that have been brought about by cheap computer chips.

The companies that make chips have thrived by creating a new generation of chips every few years that represented a significant leap forward in computing power. I think every adult understands the real-life consequences of these changes – we’ve all been through the cycle of having to upgrade computers every few years, and more recently of having to upgrade cellphones. Each subsequent generation of PC or smartphone was expected to be considerably faster and more powerful.

But we are starting to reach the end of Moore’s law, mostly driven by limits of physics and the size of atoms. It now looks like there will be better chips perhaps every three years. And within a decade or so Moore’s law will probably come to an end. There may be faster and better computers developed after that point – but improvements will have to come from somewhere other than cramming more transistors into a smaller space.

There are researchers looking to improve computers in other ways – through better software or through chip designs that can be more efficient with the same number of transistors. For instance, IBM and others have been working on chips that use layers of single chips built into a matrix – essentially a 3D chip. And there has been a lot of research into using light instead of electricity to speed up the computing process.

We are already starting to see the result of the slowdown of Moore’s law. The PC and tablet industries are suffering because people are hanging onto those devices a lot longer than they used to. Apple and Samsung are both struggling due to a drastic reduction in the sale of premium smartphones – because new phones are no longer noticeably better than the old ones.

Faster chips also fueled a lot of other technologies, including many in the telecom world. Faster chips have brought us better and faster servers, routers, and switches. Better chips have led to improved generations of fiber optic gear, voice switches, cable TV headends, set-top boxes – basically every kind of telecom electronics. No doubt these technologies will keep improving, but soon the improvements won’t be from faster and more powerful processors. The improvements will have to come from elsewhere.

Faster and more powerful chips have enabled the start of whole new industries – smart cars, drones, robots, and virtual reality. But those new industries will not get the same boost during their fledgling years like what happened in the past to other electronics-based industries. And that has a lot of technology futurists concerned. Nobody is predicting the end to innovation and new industries. But anything new that comes along will not get the boost that we’ve enjoyed these many decades through the knowledge that a new technology would improve almost automatically with more powerful processors.

New Technology for August 2015

This is my monthly look at new technologies that might eventually impact our industry.

Small Chip from IBM. IBM and a team of partners including Samsung, GlobalFoundries, and the State University of New York at Albany have made a significant leap forward by developing a computer chip with features measuring just 7 nanometers (billionths of a meter). That’s half the size of other cutting-edge chips in the industry. Experts are calling IBM’s new chip a leap two generations ahead of the current chip industry. IBM is also introducing a 10 nanometer chip.

IBM’s trial chip contained transistors just to prove the concept, so the chip wasn’t designed for any specific purpose. But this size breakthrough means the industry can now look forward to putting much greater computing power into small devices like a smart watch or a contact lens.

A chip this small can be used in two different ways. It can reduce power requirements in existing devices, or it could be used to greatly increase computational power using the same amount of chip space.

IBM has contracted with GlobalFoundries to build the new chip for the next ten years. This should provide significant competition for Intel, since currently nobody else in the industry is close to having a 7 nanometer chip.

Cheaper Batteries. Yet-Ming Chiang of MIT has developed processes that will significantly reduce the cost of building batteries. Chiang has not developed a new kind of battery; instead he has designed a new kind of battery factory. The new factory can be built for a tenth of the price of an existing battery factory, which ought to result in a reduction of battery prices of about 30%.

This is an important breakthrough because there is a huge potential industry in storing generated electric power offline until it’s needed. But at today’s battery prices this is not really practical. This can be seen in the price of Elon Musk’s new storage batteries for solar power – they are priced so that socially conscious rich people can use the technology, but they are not yet cheap enough to make this a widespread technology that is affordable for everybody.

A 30% reduction in battery costs starts to make lithium-ion batteries competitive with fossil fuel power. Today these batteries cost about $500 per kilowatt-hour, which is four times the cost of using gasoline. Chiang’s goal is to get battery costs down to $100 per kilowatt-hour.
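Putting the article’s figures together as back-of-the-envelope arithmetic (the $500 and $100 per kilowatt-hour numbers come from the text; the rest is simple calculation):

```python
# Rough arithmetic from the figures above.
current_cost = 500                                 # $/kWh for lithium-ion today
after_cheaper_factory = current_cost * (1 - 0.30)  # effect of the 30% reduction
goal = 100                                         # Chiang's target in $/kWh

# Even after the cheaper factory, costs would be $350/kWh -- still about
# 3.5x above the $100/kWh goal, so further cost reductions are needed.
remaining_gap = after_cheaper_factory / goal
```

This shows why the factory redesign is a step toward, rather than the end of, making storage competitive with fossil fuel power.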

Metasheets. A metasheet is a material that blocks a very narrow band of radiation while letting other radiation pass. Viktar Asadchy of Aalto University in Finland has developed a metamirror that blocks specific radiation bands and reflects the target radiation elsewhere while letting other radiation pass.

This is not a new concept, but attempts to do this in the past have usually bounced the target radiation back at the source. This breakthrough will let the target radiation be bounced to a material that can absorb it and radiate it as heat.

This could be a big breakthrough for numerous devices by creating small filters that allow the dissipation of dangerous radiation from chips, radios, and other devices. This could result in far safer electronics, cut down on interference caused by stray radiation, and make many electronic components function better.

The Law of Accelerating Returns

Ray Kurzweil, a director of engineering at Google, was hired because of his history of predicting the future of technology. According to Kurzweil, his predictions are common sense once one understands what he calls the Law of Accelerating Returns. That law simply says that information technology follows a predictable and exponential trajectory.

This is demonstrated elegantly by Moore’s Law, in which Intel cofounder Gordon Moore predicted in the mid-60s that the number of transistors incorporated in a chip would double every 24 months. His prediction has held true since then.

But this idea doesn’t stop with Moore’s Law. The Law of Accelerating Returns says that this same phenomenon holds true for anything related to information technology and computers. In the ISP world we see evidence of exponential growth everywhere. For example, most ISPs have seen the amount of data downloaded by the average household double every four years, stretching back to the dial-up days.

What I find somewhat amazing is that a lot of people in the telecom industry, and certainly some of our regulators, think linearly while the industry they are working in is progressing exponentially. You can see evidence of this everywhere.

As an example, I see engineers designing new networks to handle today’s network demands ‘plus a little more for growth’. In doing so they almost automatically undersize the network capacity because they don’t grasp the multiplicative effect of exponential growth. If data demand is doubling every four years, and if you buy electronics that you expect to last ten to twelve years, then you need to design for roughly eight times the data the network is carrying today. Yet that much future demand somehow feels intuitively wrong, and so the typical engineer will design for something smaller.
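The sizing arithmetic comes down to a couple of lines (a hypothetical helper, assuming demand keeps doubling every four years as described above):

```python
# Required design capacity if demand doubles every `doubling_years`
# and the electronics must last `lifetime_years`.
def capacity_multiplier(lifetime_years, doubling_years=4):
    return 2 ** (lifetime_years / doubling_years)

twelve_year = capacity_multiplier(12)   # electronics lasting 12 years -> 8x
twenty_year = capacity_multiplier(20)   # a 20-year upgrade gap -> 32x
```

The same formula produces the other multiples in this post: seven years gives roughly 3.4x, and twenty years gives 32x.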

We certainly see this with policy makers. The FCC recently set the new definition of broadband at 25 Mbps. When I look at demand in the world today and at how households use broadband services, this feels about right. But at the same time, the FCC has agreed to pour billions of dollars through the Connect America Fund into helping the largest telcos upgrade their rural DSL to 15 Mbps. Not only is that speed slower than today’s definition of broadband, but the telcos have up to seven years to deploy the upgraded technology, during which time the broadband needs of the customers it’s intended for will have grown to roughly four times today’s needs. And likely, once the subsidy stops the telcos will say that they are finished upgrading, and this will probably be the last broadband upgrade in those areas for another twenty years, at which point the average household’s broadband needs will be 32 times higher than today’s.

People see evidence of exponential growth all of the time without it registering as such. Take the example of our cellphones. The broadband and computing power demands on our cellphones are growing so quickly that a two-year-old cellphone starts to feel totally inadequate. A lot of people view this as their phone wearing out. But the phones are not deteriorating in two years; instead, we all download new and bigger apps and we are always asking our phones to work harder.

I laud Google and a few others for pushing the idea of gigabit networks. This concept says that we should leap over the exponential curve and build a network today that is already future-proofed. I see networks all over the country that have the capacity to provide much faster speeds than are being sold to customers. I still see cable company networks with tons of customers sitting at 3 Mbps to 6 Mbps as the basic download speed, and fiber networks selling customers 10 Mbps to 20 Mbps products. And I have to ask: why?

If the customer demand for broadband is growing exponentially, then the smart carrier will increase speeds to keep up with customer demand. I talk to a lot of carriers who think that it’s fundamentally a mistake to ‘give’ people more broadband speed without charging them more. That is linear thinking in an exponential world. The larger carriers seem to finally be getting this. It wasn’t too many years ago when the CEO of Comcast said that they were only giving people as much broadband speed as they needed, as an excuse for why the company had slow basic data speeds on their networks. But today I see Comcast, Verizon, and a number of other large ISPs increasing speeds across the board as a way to keep customers happy with their product.