Technology Shorts – September 2016

Here are some new technology developments that are likely to someday improve telecommunications applications.

Single Molecule Switch. Researchers at Peking University in Beijing have created a switch that can be turned on and off by a single photon. This opens up the possibility of developing light-based computers and electronics. To make this work the researchers needed to create a switch using just one large molecule. The new switches begin with a carbon nanotube into which three methylene groups are inserted, creating a switch that can be reliably turned on and off again.

Until now researchers had not found a molecule that was stable and predictable. In earlier attempts at the technology a switch would turn 'on' but would not always turn off. Further, they needed a switch that lasted, since the switches created in earlier attempts quickly broke down with use. The new switches function as desired and look to be good for at least a year, a big improvement.

Chips that Mimic the Brain. Two chips have now hit the market that introduce neural computing, processing information in a way that mimics how the brain computes.

One chip comes from KnuEdge, founded by a former head of NASA. Their first chip (called "Knupath") has 256 cores, or neuron-like processing units, on each chip, connected by a fabric that lets the chips communicate with each other rapidly. This chip is built using older 32 nanometer technology, but a newer and smaller chip is already under development. Even at the larger size, the new chip is outperforming traditional chips by a factor of two to six.

IBM has also released a neural chip it calls TrueNorth. The current chip contains 4,096 cores, each one implementing 256 programmable 'neurons'. In traditional terms, that gives the chip the equivalent of 5.4 billion transistors.
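The per-chip totals follow directly from the figures above; a quick back-of-the-envelope check using only the numbers in the text:

```python
cores_per_chip = 4096
neurons_per_core = 256

# Total programmable neurons on one TrueNorth chip.
neurons_per_chip = cores_per_chip * neurons_per_core
print(neurons_per_chip)  # 1,048,576 -- roughly one million neurons per chip
```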

Both chips have taken a different approach than traditional chips, which use a von Neumann architecture where the core processor and memory are separated by a bus. In most chips this architecture slows performance when the bus gets overloaded with traffic. The neural chips instead can simultaneously run a different algorithm in each core, rather than processing each algorithm in sequential order.
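As a loose software analogy (not the chips' actual programming model), running a different routine on each of several workers at the same time, instead of queuing them through one processor, looks something like this. The three worker functions here are invented purely for illustration:

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical per-"core" routines -- each core runs its own algorithm.
def denoise(signal):
    return [x * 0.9 for x in signal]

def threshold(signal):
    return [1 if x > 0.5 else 0 for x in signal]

def total_energy(signal):
    return sum(signal)

signal = [0.2, 0.7, 0.9, 0.1]

# Each algorithm is dispatched to its own worker concurrently,
# instead of running one after another on a single shared processor.
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(fn, signal) for fn in (denoise, threshold, total_energy)]
    outputs = [f.result() for f in futures]

print(outputs)
```

A real neural chip does this in hardware, with each core holding both its program and its data, which is what removes the shared-bus bottleneck.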

Both chips also use a fraction of the power required by traditional chips, since they only power the parts of the chip that are being used at any one time. The chips seem best suited to environments where they can learn from experience. The ability to run simultaneous algorithms means they can provide real-time feedback within the chip to the various processors. It's not hard to imagine these chips being used to learn about and control fiber networks, tailoring capacity to customer demand on the fly.

Improvements in WiFi. Researchers at MIT’s Computer Science and Artificial Intelligence Lab have developed a way to improve WiFi capabilities by a factor of three in crowded environments like convention centers or stadiums. They are calling the technology MegaMIMO 2.0.

The breakthrough comes from finding a way to coordinate the signals sent to users through multiple routers. WiFi signals in a real-world environment bounce off objects and scatter easily, reducing efficiency. But by coordinating the signals sent to a given device, like a cellphone, through multiple routers, the system can compensate for the interference and scattering and reconstruct a coherent version of the user's signal.
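A toy model of the idea, under idealized assumptions (the per-router phase shifts below are invented): if each router pre-compensates for the phase shift its signal picks up on the way to the device, the copies arrive aligned and add constructively instead of partially cancelling:

```python
import cmath

# Invented phase shifts (radians) each router's signal accumulates
# on its path to the user's device.
channel_phases = [0.3, 1.9, -2.4]

# Uncoordinated: all routers transmit the same unit-amplitude signal,
# and the copies arrive at the device with arbitrary phases.
uncoordinated = sum(cmath.exp(1j * p) for p in channel_phases)

# Coordinated: each router pre-rotates its transmission by the opposite
# phase, so every copy arrives in phase and the amplitudes simply add.
coordinated = sum(cmath.exp(-1j * p) * cmath.exp(1j * p) for p in channel_phases)

print(abs(uncoordinated))  # well under 3: partial cancellation
print(abs(coordinated))    # ~3.0: all three copies add in phase
```

The real system has to estimate those channel phases continuously as devices move, which is the hard part MegaMIMO 2.0 addresses.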

While this has interesting applications in crowded public environments, the real potential will be realized as we try to coordinate with multiple IoT sensors in an environment.

Cool New Stuff – Computing

As I do once in a while on Fridays, I am going to talk about some of the coolest new technology I've read about recently, both items related to new computers.

First is the possibility of a desktop supercomputer in a few years. A company called Optalysys says they will soon be releasing a first-generation chip set and desktop-size computer that will run at 346 gigaflops. A flop is a floating-point operation, and computing speed is measured in flops per second: a gigaflop is 10^9 operations per second, a petaflop is 10^15 and an exaflop is 10^18. The fastest supercomputer today is the Tianhe-2, built by a Chinese university, which operates at about 34 petaflops, obviously much faster than this first desktop machine.
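Spelling out the unit arithmetic from the figures above:

```python
GIGA = 10**9
PETA = 10**15
EXA = 10**18

optalysys_gen1 = 346 * GIGA  # first-generation desktop machine, 346 gigaflops
tianhe_2 = 34 * PETA         # fastest supercomputer today, ~34 petaflops

# How many times faster today's fastest supercomputer is than the
# first-generation desktop machine:
ratio = tianhe_2 / optalysys_gen1
print(round(ratio))  # roughly 98,000x
```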

The computer works by beaming low-intensity lasers through layers of liquid crystal. The company says upcoming generations will reach 9 petaflops by 2017, and they have a goal of 17.1 exaflops (17,100 petaflops) by 2020. The 2017 version would be roughly a quarter as fast as the fastest supercomputer today, yet far smaller and far less power-hungry. This would make it possible for many more companies and universities to own a supercomputer. And if they really can achieve their goal by 2020, it would mean another big leap forward in supercomputing power, since that machine would be several hundred times faster than the Chinese machine today. This is exciting news, because in the future there are going to be mountains of data to analyze, and it's going to take plentiful, affordable supercomputing to keep up with the demands of big data.
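Checking the roadmap numbers in the paragraph above against the Tianhe-2 figure:

```python
PETA = 10**15
EXA = 10**18

tianhe_2 = 34 * PETA    # fastest supercomputer today, ~34 petaflops
gen_2017 = 9 * PETA     # Optalysys 2017 target
gen_2020 = 17.1 * EXA   # Optalysys 2020 goal (17,100 petaflops)

print(gen_2017 / tianhe_2)  # ~0.26 -- roughly a quarter of Tianhe-2's speed
print(gen_2020 / tianhe_2)  # ~503 -- several hundred times faster
```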

In a somewhat related but very different approach, IBM has announced that it has developed a chip that mimics the way the human brain works. They call the chip TrueNorth, and it contains the equivalent of one million human neurons and 256 million synapses.

The IBM chip is a totally different approach to computing. The human brain stores memories and does computing within the same neural network, and this chip does the same thing. IBM has been able to create what they call spiking neurons within the chip, which means that the chip can store data as a pattern of pulses, much the same way the brain does. This is a fundamentally different approach than traditional computers, which use what is called von Neumann computing and separate data from computing. One of the problems with traditional computing is that data has to be moved back and forth to be processed, meaning that normal computers don't do anything in real time and there are often data bottlenecks.
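One common software model of a spiking neuron is "leaky integrate-and-fire," sketched below. This is a textbook abstraction, not IBM's actual TrueNorth neuron design, and the constants are invented for illustration:

```python
def leaky_integrate_and_fire(inputs, leak=0.9, threshold=1.0):
    """Return the output spike train (1 = spike) for a stream of input currents.

    The neuron accumulates input into a membrane potential that decays
    ("leaks") each step; when the potential crosses the threshold it emits
    a spike and resets -- so information is carried as a pattern of pulses
    rather than as stored numeric values.
    """
    potential = 0.0
    spikes = []
    for current in inputs:
        potential = potential * leak + current
        if potential >= threshold:
            spikes.append(1)
            potential = 0.0  # reset after firing
        else:
            spikes.append(0)
    return spikes

print(leaky_integrate_and_fire([0.4, 0.4, 0.4, 0.0, 0.9, 0.5]))
# -> [0, 0, 1, 0, 0, 1]
```

Note how the same input value produces a spike or not depending on what arrived before it, which is one reason this style of computing lends itself to learning from experience.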

The IBM TrueNorth chip, even in this first generation, is able to process things in real time. Early work on the chip has shown that it can do things like recognize images in real time, both faster and with far less power than traditional computers. IBM doesn't claim that this particular chip is ready to put into products; they see it as the first prototype for testing this new method of computing. It's even possible that this might be a dead end in terms of commercial applications, although IBM already sees possibilities for this kind of computer in both real-time and graphics applications.

This chip was designed as part of a DARPA program called SyNAPSE (short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics), an effort to create brain-like hardware. The end game of that program is to eventually design a computer that can learn, and this first IBM chip is a long way from that goal. And of course, anybody who has seen the Terminator movies knows that DARPA is shooting to develop a benign version of Skynet!