New Science – October 2022

Today’s blog looks at some new technologies that may someday have an impact on computing and broadband. We’re living in a time when labs everywhere are making some big breakthroughs with new technology, and it’s hard to predict which ones will become part of our everyday lives.

Artificial Synapses. Engineers at MIT have developed a new kind of artificial synapse that can process data several million times faster than the synapses in the human brain. The human brain is still the best computer in the world due to the unique structure of neurons and synapses. Scientists have been working for years to mimic the structure of the human brain by developing chips that can perform multiple computations simultaneously using data stored in local memory rather than fetched from elsewhere. Early work in the field has created neural networks that mimic the way the brain works.

The new technology differs from past attempts by using protons instead of electrons to shuttle data. The scientists created a new kind of programmable resistor that uses protons and allows for analog processing instead of precise digital processing. The core of the new device is phosphosilicate glass (PSG), which is silicon dioxide with added phosphorus. This material allows protons to pass through at room temperature while blocking electrons. A strong electric field can move protons through the chip at extremely high speeds, allowing data to be processed a million times faster than in earlier neural nets.
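
The appeal of analog processing here is that a whole multiply-accumulate – the core operation of a neural network – happens in one physical step through an array of resistors, rather than bit by bit. A toy digital simulation of that arithmetic (the numbers and array shape are purely illustrative):

```python
# Toy model of an analog crossbar: each programmable resistor stores a
# conductance (a "synaptic weight"), and applying input voltages to the rows
# sums the currents on each column in a single analog step, via Ohm's law
# and Kirchhoff's current law. Here we just simulate the arithmetic.

def crossbar_output(conductances, voltages):
    """Column currents of a resistor crossbar: I_j = sum_i(V_i * G_ij)."""
    n_cols = len(conductances[0])
    return [sum(v * row[j] for v, row in zip(voltages, conductances))
            for j in range(n_cols)]

# A 3x2 array of programmable resistors (weights) and inputs as voltages.
G = [[0.5, 1.0],
     [0.2, 0.4],
     [0.9, 0.1]]
V = [1.0, 0.5, 2.0]

print(crossbar_output(G, V))  # [2.4, 1.4]
```

In a real protonic device the sum happens in the physics of the array itself, which is where the speed and power advantages come from.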

Replacement of Silicon? Researchers at the EPFL School of Engineering in Lausanne, Switzerland have discovered some interesting properties of vanadium dioxide (VO2) that would allow building devices that can remember previous external stimuli. This might allow for making chips out of VO2 that would play the role silicon plays today while also acting as a data storage medium. This would allow data to be stored directly as part of the structure of a chip.

Scientists found in the past that VO2 can outperform silicon as a semiconductor. VO2 also has an interesting characteristic: it changes from an insulator to a metal at 154 degrees Fahrenheit. Researchers found that when VO2 is heated and then cooled, it remembers any data stored at the higher temperature. The researchers believe that VO2 can be used to create permanent data storage embedded directly into the material comprising a chip.

One-Way Superconductor. Scientists at the Delft University of Technology in the Netherlands, along with scientists from Johns Hopkins, have been able to create one-way superconductivity without using magnetic fields – something that was thought to be impossible. This would be an amazing breakthrough because chips built with superconducting materials could be hundreds of times faster than today’s chips, with zero energy loss during data processing – something that might eliminate much of the heat created in data centers.

The researchers made this possible by using triniobium octabromide (Nb3Br8). They were able to create diodes with a film of the material only a few atoms thick, producing a Josephson diode – a core component for quantum computing.

The biggest challenge remaining for the team is to enable the superconducting diode to function at temperatures above 77K, which would allow it to work with liquid nitrogen cooling. One of the challenges of all superconductors has been making the process work at anything other than super-cold temperatures. But it’s not hard to envision using the technology to create large data centers of quantum computers.

Making a Safe Web

Tim Berners-Lee invented the World Wide Web and implemented the first successful communication between a client and a server using HTTP in 1989. He has always been a proponent of an open Internet and doesn’t like how the web has changed. The biggest profits on the web today come from the sale of customer data.

Berners-Lee has launched a new company along with cybersecurity expert John Bruce that proposes to “restore rightful ownership of the data back to every web user”. The new start-up is called Inrupt, and it proposes to develop an alternate web for users who want to protect their data and their identity.

Berners-Lee has been working at the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT to develop a software platform that can support his new concept. The platform is called Solid, and its main goal is to decouple web applications from the data they produce.

Today our personal data is stored all over the web. Our ISPs make copies of a lot of our data. Platforms like Google, Facebook, Amazon, and Twitter gather and store data on us. Each of these companies captures a little piece of the picture of who we each are. These companies use our data for their own purposes and then sell it to companies that buy, sort and compile that data to make profiles on all of us. I saw a disturbing statistic recently that there are now up to 1,400 data points created daily for the typical web user – data gathered from our cellphones, smart devices, and our online web activity.

The Solid platform would change the fundamental structure of data storage. Each person on the Solid platform would create a cache of their own personal data. That data could be stored on personal servers or on servers supplied by companies that are part of the Solid cloud. The data would be encrypted and protected against prying.

Then, companies like Berners-Lee’s Inrupt would develop apps that perform functions users want without storing any customer data. Take the example of shopping for new health insurance. An insurance company that agrees to be part of the Solid platform would develop an app that analyzes your personal data to determine if you are a good candidate for a policy. The app would work on your server, analyzing your medical records and other relevant personal information. It might report information back to the insurance company, such as some sort of rating of you as a potential customer, but the insurance company would never see the personal data.
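
A minimal sketch of that pattern – the app runs where the data lives and sends back only the rating. Every field name and scoring rule here is hypothetical, invented purely to illustrate the shape of the idea:

```python
# Sketch of the Solid-style pattern described above: the insurer's app runs
# against data held in the user's own "pod" and returns only a coarse
# rating -- the raw records never leave the user's server. All names and
# scoring rules here are hypothetical.

def rate_applicant(pod):
    """Runs on the user's server; returns only a one-word rating."""
    score = 100
    if pod["smoker"]:
        score -= 30
    score -= 2 * len(pod["chronic_conditions"])
    score -= max(0, pod["age"] - 50)
    return "good" if score >= 70 else "review"

# The user's pod stays local; only the rating is sent back to the insurer.
user_pod = {"age": 44, "smoker": False, "chronic_conditions": ["asthma"]}
print(rate_applicant(user_pod))  # good
```

The key design point is the narrow return value: the insurer learns a single aggregate, not the medical records behind it.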

The Solid concept is counting on the proposition that there are a lot of people who don’t want to share their personal data on the open web. Berners-Lee is banking that there are plenty of developers who would design applications for those in the Solid community. Over time the Solid-based apps can provide an alternate web for the privacy-minded, separate and apart from the data-collection web we share today.

Berners-Lee expects that this will first gain a foothold in groups that value privacy, like coders, lawyers, CPAs, investment advisors, etc. Those industries have a strong desire to keep their clients’ data private, and there is no better way to do that than by having the clients keep their own data. This relieves lawyers, CPAs and other professionals of the ever-growing liabilities from data breaches of client data.

Over time Berners-Lee hopes that all sorts of other platforms will want to cater to a growing base of privacy-minded users. He’s hoping for a web ecosystem of search engines, news feeds, social media platforms, and shopping sites that want to sell software and services to Solid users, but with the promise of not gathering personal data. One would think existing privacy-minded platforms like Mozilla’s Firefox would join this community. I would love to see a Solid-based cellphone operating system. I’d love to use an ISP that is part of this effort.

It’s an interesting concept and one I’ll be watching. I am personally uneasy about the data being gathered on each of us. I don’t like the idea of applying for health insurance, a credit card or a home mortgage and being judged in secret by data that is purchased about me on the web. None of us has any idea of the validity and correctness of such data. And I doubt that anybody wants to be judged by somebody like a mortgage lender using non-financial data like our politics, our web searches, or the places we visit in person as reported by our cellphones. We now live in a surveillance world and Berners-Lee is giving us the hope of escaping that world.

Technology Shorts – September 2016

Here are some new technology developments that are likely to someday improve telecommunications applications.

Single Molecule Switch. Researchers at Peking University in Beijing have created a switch that can be turned on and off by a single photon. This opens up the possibility of developing light-based computers and electronics. To make this work the researchers needed to create a switch using just one large molecule. The new switches begin with a carbon nanotube into which three methylene groups are inserted, creating a switch that can be turned on and off again.

Until now researchers had not found a molecule that was stable and predictable. In earlier attempts, a switch would turn ‘on’ but would not always turn off. Further, they needed a switch that lasted, since switches created in earlier attempts quickly broke down with use. The new switches function as desired and look to be good for at least a year – a big improvement.

Chips that Mimic the Brain. Two different chips have now hit the market that introduce neural computing in a way that mimics how the brain computes.

One chip comes from KnuEdge, founded by a former head of NASA. Their first chip (called “Knupath”) has 256 cores – neuron-like brain cells – on each chip, connected by a fabric that lets the cores communicate with each other rapidly. This chip is built using older 32 nanometer technology, but a newer and smaller chip is already under development. Even at the larger size, the new chip outperforms traditional chips by a factor of two to six.

IBM also has released a neural chip it’s calling TrueNorth. The current chip contains 4,096 cores, each one representing 256 programmable ‘neurons’. In traditional terms that gives the chip the equivalent of 5.4 billion transistors.

Both chips take a different approach than traditional chips, which use a von Neumann architecture where the core processor and memory are separated by a bus. In most chips this architecture slows performance when the bus gets overloaded with traffic. The neural chips can instead run a different algorithm in each core simultaneously, rather than processing each algorithm in sequential order.
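
To make the contrast concrete, here is a toy sketch of that idea, with Python threads standing in for cores and three invented placeholder algorithms; real neural chips do this in hardware, not software:

```python
# Toy illustration of the neural-chip idea above: instead of one processor
# working through algorithms in sequence, each "core" runs a different
# algorithm over the same data at the same time. (Threads here are only a
# stand-in for hardware cores; the algorithms are placeholders.)
from concurrent.futures import ThreadPoolExecutor

def edge_detect(data):
    return [abs(b - a) for a, b in zip(data, data[1:])]

def smooth(data):
    return [(a + b) / 2 for a, b in zip(data, data[1:])]

def threshold(data):
    return [1 if x > 5 else 0 for x in data]

signal = [1, 4, 9, 2, 7]
with ThreadPoolExecutor(max_workers=3) as pool:
    futures = [pool.submit(f, signal) for f in (edge_detect, smooth, threshold)]
    results = [f.result() for f in futures]

print(results)
```

On a von Neumann machine these three passes would queue up behind one bus; on the neural chips each lives in its own core with its own local memory.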

Both chips also use a fraction of the power required by traditional chips since they only power the parts of the chip that are being used at any one time. The chips seem to be best suited to environments where they can learn from experience. The ability of the chips to run simultaneous algorithms means that they can provide real-time feedback within the chip to the various processors. It’s not hard to imagine these chips being used to learn and control fiber networks and to tailor capacity to customer demand on the fly.

Improvements in WiFi. Researchers at MIT’s Computer Science and Artificial Intelligence Lab have developed a way to improve WiFi capabilities by a factor of three in crowded environments like convention centers or stadiums. They are calling the technology MegaMIMO 2.0.

The breakthrough comes from finding a way to coordinate the signals sent to users through multiple routers. WiFi signals in a real-world environment bounce off objects and scatter easily, reducing efficiency. But by coordinating the signals sent to a given device, like a cellphone, through multiple routers, the system can compensate for the interference and scattering and reconstruct a coherent version of the user’s signal.
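
A rough sketch of why coordination helps: when multiple transmitters are phase-aligned at the receiver their amplitudes add, so received power grows with the square of the transmitter count, while uncoordinated phases only add on average. A toy simulation of that effect (idealized, ignoring all real channel behavior):

```python
# Toy illustration of coherent combining, the heart of distributed MIMO.
# Four phase-aligned transmitters deliver 16x the power of one; four
# transmitters with random phases average only 4x.
import cmath
import random

random.seed(0)
n_tx = 4

# Coordinated: every signal arrives at the receiver with the same phase.
coherent = abs(sum(cmath.exp(1j * 0.0) for _ in range(n_tx))) ** 2

# Uncoordinated: random phases, averaged over many trials.
trials = 10_000
incoherent = sum(
    abs(sum(cmath.exp(1j * random.uniform(0, 2 * cmath.pi))
            for _ in range(n_tx))) ** 2
    for _ in range(trials)
) / trials

print(f"coherent power: {coherent:.1f}, incoherent average: {incoherent:.1f}")
```

The hard engineering problem MegaMIMO solves is keeping independent routers phase-synchronized tightly enough for the coherent case to hold in practice.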

While this has interesting applications in crowded public environments, the real potential will be realized as we try to coordinate with multiple IoT sensors in an environment.

New Technology – Telecom and Computing Breakthroughs

Today I look at some breakthroughs that will result in better fiber networks and faster computers – all components needed to help our networks be faster and more efficient.

Increasing Fiber Capacity. A study from Bell Labs suggests that existing fiber networks could be made 40% more efficient by changing to IP transit routing. Today operators divvy up networks into discrete components. For example, the capacity on a given route may be segmented into distinct dedicated 100 Gig paths that are then used for various discrete purposes. This takes the available bandwidth on a given long-haul fiber and breaks it into pieces, much in the same manner as was done in the past with TDM technology to break data into T1s and DS3s.

The Bell Labs study suggests a significant improvement if the entire bandwidth on a given fiber is treated as one huge data pipe, much in the same manner as might be done with the WAN inside a large business. This makes sense because there is always spare or unused capacity in each segment of the fiber’s bandwidth, and pooling it all into one large pipe makes that spare capacity available. Currently Alcatel-Lucent, Telefonica, and Deutsche Telekom are working on gear that will enable the concept.
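
The gain here is essentially statistical multiplexing, which a toy calculation can illustrate (all numbers below are made up for illustration):

```python
# Toy illustration of why pooling helps: four dedicated 100 Gbps segments
# each waste their own headroom, while one pooled 400 Gbps pipe can absorb
# a burst on any route. All numbers are illustrative.
import random

random.seed(1)
segments = 4
capacity_each = 100  # Gbps

# Random demand per route, sometimes bursting past its dedicated segment.
demands = [random.uniform(20, 140) for _ in range(segments)]

carried_segmented = sum(min(d, capacity_each) for d in demands)
carried_pooled = min(sum(demands), segments * capacity_each)

print(f"segmented: {carried_segmented:.0f} Gbps, pooled: {carried_pooled:.0f} Gbps")
```

Any route that bursts past its dedicated 100 Gbps drops traffic in the segmented case, while the pooled pipe carries it out of another route’s unused headroom.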

Reducing Interference on Fiber. Researchers at University College London have developed a new set of techniques that reduce interference between different light wave frequencies on fiber. It is the accumulation of interference that requires optical repeaters to be placed on networks to refresh optical signals.

The research team took a fresh approach to how signals are generated onto fiber, passing the optical signals through a comb generator to create seven equidistantly spaced, frequency-locked signals, each in the form of a 16 QAM super-channel. This reduces the number of different light signals on the fiber to these seven channels, which drastically reduces the interference.

The results were spectacular: they were able to generate a signal that could travel 5,890 kilometers (3,660 miles) without re-amplification. This has immediate benefit for undersea cables, since finding ways to repeat these signals is costly. But there are applications beyond long-haul fiber, and the team is now looking at ways to use the dense super-channels for cable TV systems, cable modems, and Ethernet connections.

Faster Computer Chips. A research team at MIT has found a way to make multicore chips faster. Multicore chips contain more than one processor and are used today for intense computing needs in places like data centers and in supercomputers.

The improvement comes through the creation of a new scheduling technique they are calling CDCS (computation and data co-scheduling). This technique is a way to more efficiently distribute data flow and the timing of computations on the chips. The new algorithm they have developed allows data to be placed near to where calculations are performed, reducing the movement of data within the chip. This results in a 46% increase in computing capacity while also reducing power consumption by 36%. Consequently, this will reduce the need for cooling which is becoming a major concern and one of the biggest costs at data centers.

Faster Cellphones. Researchers at the University of Texas have found a way to double the speed at which cellphones and other wireless devices can send or receive data. The circuit they have developed lets the cellphone radio operate in ‘full-duplex’ mode, meaning that the radio can send and receive signals at the same time.

Today a cellphone radio can do one or the other, and your phone constantly flips between sending and receiving data. Radios have always done this so that the frequencies from the transmitting part of the phone, which are normally the stronger of the two signals, don’t interfere with and drown out the incoming signals.

The new circuit, which they are calling a circulator, isolates the incoming and outgoing signals and acts as a filter to keep the two separate. Circulators have been in use for a long time in devices like radar, but they have required large, bulky magnets made from expensive rare earth metals. The new circulator devised by the team performs the same function using standard chip components.

This circulator is a tiny standalone device that can be added to any radio chip and it acts like a traffic manager to monitor and control the incoming and outgoing signals. This simple, new component is perfect for cellphones, but will benefit any two-way radio, such as WiFi routers. Since a lot of the power used in a cellphone goes to flipping between send and receive mode, this new technology ought to also provide a significant improvement to battery life.

Million-Fold Increase in Hard Drive Capacity? Researchers at the Naval Research Laboratory have developed a way to magnetize graphene, which could lead to data storage devices with a million-fold increase in storage for the size of the device. Graphene is a one-atom-thick sheet of carbon which can be layered to make multi-dimensional stacked chips.

The scientists have been able to magnetize the graphene by placing it on a layer of silicon and submerging it in a pool of cryogenic ammonia and lithium for about a minute. They then introduce hydrogen, which renders the graphene magnetic. The process is adjustable: with an electron beam you can shave off hydrogen atoms and effectively write on the graphene chip. Today we already have terabyte flash drives. Anybody have a need for an exabyte flash drive?

Can You Really Multitask?

This blog is a bit off my normal beat, but I’ve read several articles lately about the effects of technology on our brains. I think you’ll find the findings of these studies interesting.

I think most people will agree that we are busier today than we have ever been before. Not only do we lead hectic lives, but we have compounded our lives with connections through our smartphones and computers to coworkers, family and friends all throughout the day and night.

I meet people all of the time who say that they are good multitaskers and that they are good at handling the new clutter in our modern personal and work lives. There are days when I feel I am good at it and days when I definitely am not.

Researchers at MIT say that multitasking is an illusion. Earl Miller, a neuroscientist there, says that our brains are not wired to multitask. What you think of as multitasking is really the brain doing only one thing at a time and switching quickly between tasks. He says there is a price to pay for doing this, because what we call multitasking leads to the production of the stress hormone cortisol as well as adrenaline. Multitasking creates a dopamine feedback loop that rewards the brain for losing focus and searching for the next stimulation. The bottom line is that multitasking leads to less focus and makes us less efficient.

Miller says that multitasking is a diabolical illusion that makes us feel like we are getting things done, when instead we are just keeping the brain busy. When we multitask we don’t do any of the tasks as well as if we stopped and concentrated on them one at a time. And it’s addictive. Those of us old enough can remember back to a simpler time when we often made choices not to do things. If we were reading a book or watching a TV show we chose not to let ourselves get easily distracted. But since multitasking rewards the brain for getting distracted, we now routinely break away from whatever we are doing to read an email, see who texted, or see who commented on something we said on Facebook or Twitter.

I decided to test myself by watching a one-hour show on Netflix to see if I could watch it end-to-end without distraction. I was amazed at how poorly I did. Every few minutes I found myself wanting to go do something else, and a few times I almost automatically clicked on a different application on the computer. I wanted to stop far more often than I did, and it was a real effort to stay focused on the show. I thought this would be easy, but apparently I am now addicted to multitasking. I wonder how many of you can do better?

One of the reasons we have gotten pulled into multitasking is a new expectation that we are always available. It used to be easy to drop out of sight by simply walking out of range of the telephone. People were not surprised to miss you when they called and leaving voice messages was a big deal. But today the expectation is that we have our smart phone with us and turned on at all times, and through that we can be called, texted, emailed and reached on demand.

Research shows that multitasking kills our concentration and is more detrimental to our short-term memory than smoking marijuana. Cannabinol, a chief ingredient of pot, interferes directly with the brain’s memory receptors and with our ability to concentrate on several things at once. But research has shown that if you break off concentrating on a task to answer an email, your IQ temporarily drops ten points. And cumulative multitasking degrades your brain’s performance more than smoking pot.

Researchers at Stanford have shown that if you learn something new while multitasking, the information goes to the wrong part of the brain. For instance, if you read work emails while doing something else, like watching TV, the information from the emails goes to the striatum – the place where we normally store skills and physical memories, not ideas and data memories. If not interrupted, the same emails would be stored in the hippocampus, which is essentially our brain’s hard drive and is good at retrieving data when we need it.

Multitasking comes at a big cost. Asking the brain to constantly shift tasks burns up a lot of the glucose the brain needs to stay focused. So multitasking can lead to feeling tired and disoriented after even a short time. I used to believe that deep thinking caused your brain to get tired, but staying on one task actually uses far less energy than constantly shifting from one task to another.

This makes me worry about what we are doing to our children, who now multitask at an early age. Perhaps there is some hope, since one of the new trends among many teenagers is a rebellion against technology, and that is probably a healthy thing. If the pressure to be always connected is hard on adults, one can only imagine the peer pressure it creates among teens. People my age use email as our primary method of communication, while teens almost exclusively use text. The biggest problem with texting, according to the researchers, is that it demands hyper-immediacy – you are expected to respond as soon as you get a text.

I have started my own little rebellion against multitasking. I am not checking emails more than a few times a day and I rarely check to see if somebody has texted me. After all, I need to save some time for Twitter!

New Technology – January 2015

In this month’s blog about new technology I focus on innovations having to do with computers. It seems like there are innovations in this area almost every month.

Faster Computing through Chip Flaws. One of the more interesting lines of research at chip manufacturers is to make chips better by making them perform worse. MIT has done research that shows that many of the tasks that we perform on computers such as looking at images or transmitting voice don’t require perfect accuracy. Yet chips are currently designed to pass on every bit of data for every task.

MIT has shown that introducing flaws into the data path for these kinds of functions can speed up computing time while also cutting the power usage of a chip by as much as 19%. So the MIT researchers have developed a tool they call Chisel which helps chip designers figure out just how much error they can introduce into any given task. For example, the program will analyze the impact of making mistakes on 1% or 5% of pixels when transmitting pictures and will compare the quality of the finished transmission with the power savings that come from allowing transmission errors.
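
Chisel itself is an analysis tool for chip designers, but the tradeoff it explores can be illustrated with a toy simulation: corrupt a fraction of pixels, as an error-tolerant datapath might, and measure how much the image actually degrades. The error model here is invented purely for illustration:

```python
# Toy version of the accuracy-vs-power tradeoff described above: allow a
# fraction of pixel errors and see how little the image fidelity suffers.
# The corruption model and rates are illustrative, not Chisel's own.
import random

random.seed(0)
pixels = [random.randint(0, 255) for _ in range(10_000)]

def transmit(image, error_rate):
    """Corrupt a fraction of pixels, as an error-tolerant datapath might."""
    return [random.randint(0, 255) if random.random() < error_rate else p
            for p in image]

for error_rate in (0.01, 0.05):
    received = transmit(pixels, error_rate)
    mean_err = sum(abs(a - b) for a, b in zip(pixels, received)) / len(pixels)
    print(f"{error_rate:.0%} errors -> mean pixel deviation {mean_err:.1f}/255")
```

Even at 5% errors the average deviation per pixel stays small, which is the intuition behind trading a little accuracy for a real power saving.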

Computers that Don’t Forget. A few companies like Avalanche, Crocus, Samsung and Toshiba make MRAM (magnetoresistive random-access memory) devices that are replacing older RAM and DRAM technologies in chips to provide non-volatile memory. The expectation in the industry is that these kinds of devices will also replace hard disks within ten years because they are much faster and use far less energy.

There are a few initiatives working on improved MRAM technologies. NEC and Tohoku University in Sendai, Japan have developed a 3D processor architecture where MRAM layers are combined with logic layers. The chip uses a technology they call spin-CAM (content-addressable memory) that promises to allow more fixed memory with faster access speeds.

KAIST, a public research university in Daejeon, South Korea, has developed a chip they are calling TRAM (topologically switching RAM) that uses a phase-changing supercapacitor to quickly write to non-volatile memory.

Computers with Common Sense. The Paul G. Allen Foundation is awarding grants to projects that aim to teach computers to understand what they see and read. The projects will look at several different fields of machine reasoning to try to understand diagrams, data visualizations, photographs and textbooks.

The grants are part of a larger $79.1 million initiative into artificial intelligence research. This new research fits well into other Allen initiatives in deep learning to allow computers to explain what’s happening in pictures or to classify large sections of text without human supervision.

Quantum Memory. Researchers at the University of Warsaw have developed a quantum memory that will allow the transmission of results from quantum computers over distance. Quantum computers operate very differently than conventional binary computers in that they deal with probabilities rather than number crunching. Until now there has been no way to transfer the results of a calculation from a quantum computer, because the very act of reducing them to ones and zeros destroys the result. For example, quantum signals could not survive the normal laser amplifiers in a fiber optic network.

The quantum memory consists of a 1-inch by 4-inch glass tube that is coated with rubidium and filled with krypton gas. When hit with a series of three lasers, the quantum information gets imprinted onto the rubidium atoms for a very short period of time – perhaps a few microseconds. But this is enough time for the data to be re-gathered and forwarded to the next quantum storage device.

Self-Healing Computers. With hacking and malware on the rise, a new line of defense will be to give our computers the ability to heal themselves. Today we use a very static defense system for our computers consisting of mostly firewalls and virus checking. But anything that slips past those static defenses can be deadly.

There is an initiative at the Department of Homeland Security that is funding the development of a more active defense system that not only detects problems but automatically fights back. The first stage of this new active defense is called continuous diagnostics and mitigation (CDM). The goal of CDM is to enable each device in the network to monitor itself for signs of having been hacked. The first CDM systems will activate anti-malware software to try to immediately rid the machine of the invader.

The next step after CDM will be to form a network-wide active defense that will allow networks to provide feedback about threats identified by individual CDM computers. In this next step the whole network will help fight back against a problem found on one machine in the network. The ultimate goal is to create self-healing computers that continually make sure that all systems and data are exactly as they should be.

New Technology – December 2014

Here are some of the interesting new technologies I’ve run across in recent weeks:

Faster Data Speeds. Researchers at Aalborg University, MIT and Caltech have developed a new mathematically-based technique that can boost Internet data speeds up to 10 times. In a nutshell they code data packets and embed them within an equation. The equation can be solved when all of the packets are received at the other end.

While this sounds complicated, it is vastly faster than the current TCP/IP standard used to transmit packets. With TCP/IP, once a data file begins to be transmitted the packets must be both sent and received in order, and they use the same data path over the Internet. If a packet is bad or gets lost, the TCP/IP process slows down trying to find the missing packet. But under the new technique, different packets can take different paths across the Internet and it doesn’t matter if they are received in the right order. They are reordered as the equation is solved.

In prototype trials this sped up data transmissions by between 5 and 10 times. And transmissions are inherently safer because the packets don’t all take the same path, making it a lot harder to intercept them. This technology can apply to any data transmission network. This is a fundamental breakthrough, because we have been using TCP/IP for decades and everything is geared to use it. But this has promise to become the new data transmission standard.
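
The "packets embedded in equations" idea can be sketched as a simple random linear network code over GF(2): each transmitted packet is an XOR mix of the originals tagged with its coefficients, and the receiver solves the resulting equations by Gaussian elimination in whatever order packets arrive. This is a minimal illustration of the concept, not the researchers’ actual code:

```python
# Minimal sketch of a random linear network code over GF(2). Each coded
# packet is an XOR of the originals plus a coefficient vector; the receiver
# recovers the originals by Gaussian elimination, in any arrival order.
import random

def encode(packets, n_extra):
    """Systematic code: the originals first, then n_extra random XOR mixes."""
    k = len(packets)
    coded = [([1 if j == i else 0 for j in range(k)], p)
             for i, p in enumerate(packets)]
    while n_extra > 0:
        coeffs = [random.randint(0, 1) for _ in range(k)]
        if not any(coeffs):
            continue  # the all-zero equation carries no information
        mix = 0
        for c, p in zip(coeffs, packets):
            if c:
                mix ^= p
        coded.append((coeffs, mix))
        n_extra -= 1
    return coded

def decode(coded, k):
    """Gaussian elimination over GF(2); returns the originals once rank == k."""
    pivots = {}  # pivot column -> (coefficient row, mixed payload)
    for coeffs, mix in coded:
        coeffs = list(coeffs)
        for col, (pc, pm) in pivots.items():
            if coeffs[col]:
                coeffs = [a ^ b for a, b in zip(coeffs, pc)]
                mix ^= pm
        lead = next((i for i, c in enumerate(coeffs) if c), None)
        if lead is not None:
            pivots[lead] = (coeffs, mix)
    if len(pivots) < k:
        return None  # not enough independent packets received yet
    out = [0] * k
    for col in sorted(pivots, reverse=True):  # back-substitution
        coeffs, mix = pivots[col]
        for j in range(col + 1, k):
            if coeffs[j]:
                mix ^= out[j]
        out[col] = mix
    return out

random.seed(42)
packets = [0xDEAD, 0xBEEF, 0xCAFE]  # three "packets" as integers
coded = encode(packets, 3)          # six packets sent for three originals
random.shuffle(coded)               # arrival order does not matter
print(decode(coded, 3) == packets)  # True
```

The redundancy is what removes the retransmission stalls: any sufficiently large, independent subset of coded packets reconstructs the originals, so a lost packet doesn’t hold up the stream.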

Any Surface Can be an Antenna. Scientists at Southeast University in Nanjing, China have developed a meta-material that can turn any hard surface into an antenna. They do this by embedding tiny U-shaped metallic components in the surface. These little Us act like what is called a Luneburg lens. Normal lenses are made out of one material and refract light in a consistent way. But a Luneburg lens is made up of multiple materials and can bend light in multiple ways. For example, these materials can be used to focus on a point off to the side of the lens (something normal lenses can’t do) or to radiate all incoming radiation in the same direction.

These meta-material surfaces can be designed to act as an antenna, meaning that almost any surface could become an antenna without having to have an external dish or receiver. Perhaps even more interesting, these same meta-materials can be used to scatter radiation which could make fighter jets invisible to radar.

Another Step Towards Photonic Chips. Researchers at Stanford have developed an optical link that uses silicon strips to bend light at right angles. This adds a 3D aspect to the chip topography which will help to accommodate the speeds needed by future, faster computers. The links can be reconfigured on the fly to use different light wavelengths, making it possible to use the strips to change the nature of the computer as needed. This is one of the many steps needed to create a purely photonic computer chip.

Cooling With Magnets. Scientists in Canada and Bulgaria have developed a way to produce cooling using magnetic fields. This works by removing ferromagnetic materials from magnetic fields, which causes them to cool down. They have found several substances that are efficient in heat transfer. Further, they are using water as the heat-transfer fluid, eliminating harmful hydrofluorocarbons. This could be used for refrigerators or air conditioners without coils and pipes, by just rotating the cooling element in a magnetic field.

Synthetic Gasoline out of Water. German company Sunfire GmbH has developed a process that can make synthetic fuel from water and carbon dioxide. The underlying technology, the Fischer-Tropsch process, has been around for a long time, but the company has found a way to make it far more efficient. The fuel produced has a high energy efficiency of 50%, similar to diesel fuel, compared to the much lower 14% to 30% efficiency of gasoline. The company thinks it can get the efficiency up to 70%.

The interesting thing about the technology is that it is carbon neutral, since it takes the carbon dioxide needed to create the fuel out of the atmosphere rather than pulling carbon out of the ground. There are also numerous benefits from having a more efficient fuel. With this technology we could keep our gasoline cars without having to rely on the petroleum industry. It could help take the politics out of oil and could let us cut back on the amount of petroleum we need to refine.