I ran across a reference to Moore’s Law the other day. It was named after Gordon Moore, an engineer who went on to co-found Intel. In 1965, Moore observed that the number of transistors that could be squeezed into a given area of an integrated circuit was doubling every year, a pace he later revised to every two years. He predicted the trend would last for perhaps another decade, but the microchip industry kept fulfilling his prediction for over fifty years; the general consensus is that Moore’s Law finally died somewhere between 2016 and 2018. Chip density has continued to improve, but at a slower rate, and most experts agree we are getting close to the maximum possible density of transistors, limited by the laws of physics.
For a number of years, there were a lot of predictions that the end of Moore’s Law would mean the end of faster computing. For decades, our devices became obsolete every few years as the next generation of faster chips hit the electronics market. But chip makers have discovered a wide variety of ideas and technologies that continue to improve the speed of computing.
Consider some of the techniques that continue to improve computing power:
- ASICs (Application-Specific Integrated Circuits): These are specialized chips designed to do one job extremely well. For example, data centers now deploy ASICs built solely to accelerate particular workloads, such as networking or machine learning.
- GPUs (Graphics Processing Units): These chips take the opposite approach from ASICs: rather than doing one thing, they tackle many functions at the same time. First designed for gaming, a GPU breaks a task into many independent threads and executes each thread on a different small core. GPUs keep improving as chip makers and software designers find better ways to coordinate and control the many threads.
- 3D Stacking: Another technique is stacking transistors and memory vertically as well as horizontally, which packs more computing power into the same footprint. One of the more widely used 3D stacking techniques is the use of chiplets, small chip components that can be stacked and fused to create a multi-layer chip. Chiplets benefit from Through-Silicon Vias (TSVs), vertical connections that enable fast, low-power communication between layers.
- Memory Integration: One of the biggest bottlenecks for any chip is moving data into and out of the chip core during a computation. Memory integration places temporary memory directly on the chip to hold data that is still needed for the calculations in progress. Pulling that data from the chip’s own memory bypasses the slow trip to external memory.
- Optical Computing: Optical computing uses light instead of electrons to move data around a chip. The primary benefit is that different colors (wavelengths) of light can carry multiple streams of data at the same time, instead of the single stream electrons provide. I recently wrote a blog about a technology that can generate multiple wavelengths of light directly on a chip, eliminating the need for bulky external lasers.
- Optimized Algorithms: Some of the biggest improvements in effective chip speed come from rewriting software to be “hardware-aware”, meaning the code is tailored to a chip’s architecture, such as its cache sizes, memory layout, and parallel processing lanes.
- Reconfigurable Computing: This is an architecture in which portions of the chip can be reprogrammed to change function or spatial configuration during the computing process; Field-Programmable Gate Arrays (FPGAs) are the best-known example.
- Quantum Computing: Quantum computing increases computing power using qubits and the principles of superposition and entanglement. Instead of being limited to the two digital states of 1 and 0, a qubit can exist in a superposition of both at once, so a collection of n qubits can represent 2^n states simultaneously, which offers an exponential speedup for certain classes of problems.
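To make the GPU idea above concrete, here is a minimal sketch in Python of data-parallel thinking: the same small function (a hypothetical `brighten` operation I made up for illustration) is applied to every pixel independently, so the work can be farmed out to many workers at once. A real GPU runs thousands of such threads on dedicated hardware; a CPU thread pool is only an analogy.

```python
from concurrent.futures import ThreadPoolExecutor

# Toy "GPU-style" data parallelism: the same small function is applied
# to every element independently, so the work can be split across workers.
def brighten(pixel):
    return min(pixel + 40, 255)  # clamp at the 8-bit maximum

pixels = [10, 100, 250, 30]
with ThreadPoolExecutor(max_workers=4) as pool:
    result = list(pool.map(brighten, pixels))

print(result)  # [50, 140, 255, 70]
```

Because no pixel depends on any other, the computation scales almost perfectly with the number of cores, which is exactly the property GPUs exploit.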
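The payoff of “hardware-aware” code from the Optimized Algorithms bullet can be sketched with a classic example: summing a matrix in the order it sits in memory versus jumping between rows. In a low-level language the row-major version wins because it works with the processor’s cache rather than against it; this Python version only illustrates the two access patterns, since the interpreter hides most cache effects.

```python
# A matrix stored as a list of rows sits row-by-row in memory.
N = 500
matrix = [[1] * N for _ in range(N)]

def sum_row_major(m):
    # Walks the data in storage order: cache-friendly.
    total = 0
    for row in m:
        for value in row:
            total += value
    return total

def sum_col_major(m):
    # Jumps to a different row on every access: same answer, worse locality.
    total = 0
    for j in range(len(m[0])):
        for i in range(len(m)):
            total += m[i][j]
    return total
```

Both functions return the same result; the only difference is the order in which memory is touched, and on real hardware that ordering alone can change the runtime severalfold.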
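And a toy illustration of the superposition idea behind quantum computing: a single qubit can be simulated as a pair of amplitudes, and the Hadamard gate (a standard quantum operation) turns a definite 0 into an equal mix of 0 and 1. This is only a classical simulation for intuition; real hardware manipulates 2^n amplitudes at once for n qubits, which is where the exponential capacity comes from.

```python
import math

# A single qubit is a pair of amplitudes (alpha, beta) for the states
# |0> and |1>, with alpha^2 + beta^2 == 1.
def hadamard(state):
    # The Hadamard gate maps a definite 0 or 1 to an equal superposition.
    a, b = state
    s = 1 / math.sqrt(2)
    return (s * (a + b), s * (a - b))

zero = (1.0, 0.0)            # the classical |0> state
superposed = hadamard(zero)  # now an equal mix of |0> and |1>

# Measurement probabilities are the squared amplitudes.
p0, p1 = superposed[0] ** 2, superposed[1] ** 2
print(round(p0, 3), round(p1, 3))  # 0.5 0.5
```

Measuring the superposed qubit yields 0 or 1 with equal probability, which is the simplest glimpse of how qubits hold more than one value at a time.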
We’ve just barely begun exploring many of these ideas, and there are likely many breakthroughs still to come. Imagine what a reconfigurable architecture using multiple colors of chip-generated light inside a quantum computer might make possible.