The DARPA Spectrum Challenge

DARPA (the Defense Advanced Research Projects Agency) has launched a grand challenge to find a way to use spectrum in the US more efficiently. The prize is called the Spectrum Collaboration Challenge (SC2), and DARPA is offering a $2 million reward to whoever comes up with the best way to adapt in real time to congested spectrum conditions while maximizing the use of our spectrum. The winner won’t be a solution that dominates the use of spectrum; instead, DARPA is looking for solutions that collaboratively share spectrum among multiple users in the best possible way.

DARPA assumes that it’s going to require artificial intelligence to make real-time decisions about spectrum sharing. They realize there is no easy answer, so the competition will start in 2017 and last until 2020. Probably the coolest thing about the challenge is that DARPA is creating a large wireless test-bed, called the Colosseum, that is going to let participants try out their ideas. This will provide researchers with remote access for conducting experiments in simulated real-life environments such as a busy urban street or a battlefield (which is the main reason DARPA is interested in this in the first place).

It’s a great idea because our spectrum in this country is certainly a mess. There are certain bands of spectrum that are used very heavily and other spectrum that lies fallow and unused. Further, the FCC has chopped most spectrum up into discrete channels and provided buffers between channels that go largely unused.

What really makes spectrum a challenge is that different bands are ‘owned’ by different parties, and the whole point of buying spectrum from the FCC is for the buyer to use it in almost any way that makes sense to them. The consequence of spectrum ownership, though, is that huge swaths of spectrum sit unused, or at least unusable by everybody except the spectrum owner. One would think that in a battlefield situation just about any spectrum could be used without worrying about the rules.

And while any solution that is found will probably benefit the military more than anybody else, there is still a huge amount of good that could be done with better spectrum collaboration. Certainly spectrum owners could make some or all of the spectrum they control open to collaborative sharing, for some sort of compensation.

A lot of people might look at this idea and think it could mean great things for cellphones and other mobile communications. But cellphones have a whole different issue that makes them a poor candidate for sharing across too many different swaths of spectrum. A primary design goal for cellphones is power conservation, and it costs a lot of power to operate antennas across many frequencies.

Most cellphone makers today limit a phone to using only a few frequencies at once. This is one of the reasons for the huge variance people see in 3G and 4G data rates – many phones on the market only look at a handful of frequency bands, to the detriment of how much bandwidth can be downloaded at any one time. Cellphone makers don’t talk about this, and you have to dig deep into a handset’s specifications to understand its frequency capabilities.

There are software-defined radios today that are a lot larger than handsets and that can be tuned to different frequencies. But retuning on the fly, and doing it accurately, is still incredibly challenging. And of course, doing what DARPA has in mind requires coordination and collaboration so that a given sender and receiver are using the same frequencies at the same time. It’s the kind of challenge that can make a wireless engineer’s head hurt, and it will probably take an AI to handle the complexity involved in truly sharing multiple spectrum bands in real time.
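To give a flavor of the coordination problem (this is purely my own toy illustration, not anything from the SC2 rules), here is a minimal Python sketch of one naive approach: both radios sense how busy each channel is and apply the same deterministic rule to pick a channel, so they can rendezvous without a central controller. The channel count, the energy-sensing stand-in, and the tie-breaking rule are all invented for the example.

```python
# Toy sketch (not DARPA's approach): two radios rendezvous on a shared channel
# by sensing occupancy and applying the same deterministic selection rule.
# All names and numbers here are made up for illustration.

import random

CHANNELS = list(range(10))  # pretend the shared band is split into 10 channels

def sense_occupancy(seed):
    """Simulate an energy-detection sweep; returns an occupancy score per channel."""
    rng = random.Random(seed)
    return {ch: rng.random() for ch in CHANNELS}

def pick_channel(occupancy):
    """Shared rule: choose the quietest channel, breaking ties by channel number."""
    return min(CHANNELS, key=lambda ch: (round(occupancy[ch], 2), ch))

# If sender and receiver observe roughly the same environment (same seed here),
# the shared rule lets them land on the same channel without a central controller.
sender_choice = pick_channel(sense_occupancy(seed=42))
receiver_choice = pick_channel(sense_occupancy(seed=42))
print(sender_choice, receiver_choice)  # both print the same channel
```

The hard part DARPA is after is what happens when many radios do this at once in an environment that changes from second to second, which is exactly where they expect AI to be needed.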

Have We Entered the Age of Robots?

I read a lot of tech news, journals, and blogs, and it recently dawned on me that we have already quietly entered the age of robots. Certainly we are not yet close to having C-3PO from Star Wars, or even Robby the Robot from Lost in Space. But I think we have crossed the threshold that future historians will point to as the start of the age of robots.

There are research teams all over the world working to get robots to do the kinds of tasks we want from a C-3PO. As the recent DARPA challenge showed, robots are still very awkward at simple physical tasks – but they are now able to get them done. There are also teams figuring out how to make robots move in the many subtle ways that humans move, and eventually they will get there.

The voice recognition used by robots still has a long way to go to be seamless and accurate. As you see when you use Apple’s Siri, there are still times when voice recognition just doesn’t understand us. But voice recognition is getting better all the time.

And robots still are not fabulous at sensing their surroundings, but this, too, is improving. Who would ever have thought that in 2015 we would have driverless cars? Yet they are seemingly now everywhere and a number of states have already made it legal for them to share the road with the rest of us.

The reason I think we might have already entered the Robot Age is that we can now make robots that are capable of doing each of the many tasks we want out of a fully functional robot. Much of what robots can do now is rudimentary but all that is needed to get the robots from science fiction to real life is more research and development and further improvements in computing power. And both are happening. There is a massive amount of robot research underway and computer power continues to grow exponentially. I would think that within a decade computing power will have improved enough to overcome the current limitations.

All of the components needed to create robots have already gotten very cheap. Sensors that once cost $1,000 can now be bought for $10. The various motors used for robot motion have moved from expensive to affordable. And as real mass production comes into play, the cost of building a robot is going to keep dropping significantly.

We already have evidence that robots can succeed. Driverless cars might be the best example. One doesn’t have to look very far into the future to foresee driverless cars becoming a major phenomenon. I can’t believe that Uber really expects to make a fortune by paying and treating human drivers so poorly that the average Uber driver lasts less than half a year. Surely Uber is positioning itself to have the first fleet of driverless taxis, which will be very profitable without a labor cost.

We see robots being integrated into the workplace more than into homes. Amazon is working feverishly toward totally automating its distribution centers. I think this has been their goal for a decade, and once it’s all done with robots, the part of the business that has always lost money for Amazon will become quite profitable. There are now robots being tested in hospitals to deliver meals, supplies, and drugs. There are robot concierges in Japan. And almost every factory these days has a number of steel-collar workers. You have to know that Apple is looking forward to the day when it can make iPhones entirely with robots and avoid the bad publicity it keeps getting from its factories today.

The average person will look at video from the recent DARPA challenge, see clumsy robots, and be convinced that robots are still a long way off. But almost every component needed to make robots better is improving at an exponential pace, and we know from history that things that grow exponentially always surprise people by ‘bursting’ onto the scene. I would not be at all surprised to see a workable home maid robot within a decade and a really awesome one within twenty years. I know that when there is a robot that can do the laundry, load the dishwasher, wash the floor, and clean the cat litter, then I am going to want one. Especially cleaning the cat litter – is somebody working on that?

New Tech – July 2015

As I do periodically, I’ve compiled a roundup of some of the coolest new technology that might affect our industry in the near future.

Light-Based Computers: Researchers at Stanford University have found an efficient way to transmit data between computer chips using light. This might finally enable light-based computers.

Light-based computing has two advantages over electricity-based computing. First, light transmissions are faster, meaning that data can be moved more quickly to where it’s needed, which will vastly increase the capacity of a chip. The other big advantage is that it will be greener and will not generate much heat. Today about 80% of the power poured into a chip is converted to heat, which is why you need a fan for your home computer and why data centers need huge amounts of power to keep cool.

The breakthrough works by taking advantage of the tiny imperfections found in any chip. The researchers developed an algorithm that works with each unique chip to determine exactly where the light gateways should be placed. This is a very different concept from today’s approach, where chips are uniform and the goal is to make each one exactly the same. The chip architecture then uses many extremely thin layers of silicon, perhaps 20 layers in the width of a hair, with light gateways that work in three dimensions.
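Stanford’s actual design algorithm is far more sophisticated, but as a rough illustration of the idea of tailoring a layout to one specific chip’s measured imperfections, here is a toy Python sketch. The grid size, the imperfection map, the number of gateways, and the scoring function are all invented placeholders, not anything from the research.

```python
# Toy illustration only: "design around measured imperfections" reduced to a
# greedy search. The real work uses rigorous photonic simulation; everything
# below (grid, scoring, placement count) is a made-up stand-in.

import random

random.seed(0)
GRID = 16  # pretend the chip is a 16x16 grid of candidate gateway sites

# Step 1: "measure" this particular chip's imperfections (higher = more loss here).
imperfections = [[random.random() for _ in range(GRID)] for _ in range(GRID)]

def placement_loss(sites):
    """Invented score: total loss of the chosen gateway sites for this chip."""
    return sum(imperfections[r][c] for r, c in sites)

# Step 2: greedily pick the 8 lowest-loss sites for this specific chip.
candidates = [(r, c) for r in range(GRID) for c in range(GRID)]
gateways = sorted(candidates, key=lambda rc: imperfections[rc[0]][rc[1]])[:8]

print("gateway sites for this chip:", gateways)
print("total simulated loss:", round(placement_loss(gateways), 3))
# A different chip would yield a different imperfection map, and therefore a
# different gateway layout - which is the whole point of the per-chip approach.
```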

Faster Fiber: Scientists at the Qualcomm Institute in San Diego have been able to increase the power of optical signals in long-haul fiber by a factor of 20. They were able to send a signal 7,400 miles through fiber without amplification.

Long-haul fibers today carry many different wavelengths of light in order to carry more data. However, as you cram in additional light paths you also increase the interference between them, which we call crosstalk. This eventually distorts the signal and requires it to be regenerated and re-amplified. The Qualcomm scientists have found a technique they call ‘combing’ that conditions the light stream before it is sent, greatly reducing the crosstalk.

This breakthrough means that existing fiber signals can be sent a lot farther without regeneration in applications like undersea fibers. But in normal fiber applications this technique means that about twice as much data can be crammed into the same light path – effectively doubling the capacity of fiber.

Biodegradable Chips: Engineers at the University of Wisconsin-Madison have developed a chip made almost entirely out of wood cellulose. The advantage of this technology is that we can create chips for many uses that will be disposable or recyclable with other trash. We have a huge worldwide waste problem with current electronics, which should not be put into landfills because they contain heavy metals and other unhealthy compounds.

While these chips probably won’t be used for high-density computing like in data centers, this could become a standard way to make chips for the many things we use that are eventually disposable. One can certainly envision this as the basis of many chips for the Internet of Things.

A Replacement for GPS: DARPA is working on a replacement for GPS. GPS was developed by the US military just a few decades ago, but there are already a lot of places where it doesn’t work, such as underground. And since GPS is satellite-based, DARPA worries about it being jammed in combat situations.

The new location system will be based on self-calibrating gyroscopes that always ‘know’ where they are. This would create a location technology that is not satellite-based and not subject to outside interference. It would also work better than current GPS in three dimensions, meaning it could more accurately measure changes in altitude.

While the technology doesn’t need an external reference to calibrate itself or know its location, DARPA is also building in what it calls ASPN (All Source Positioning and Navigation). This means a device could pick up radio signals, television signals, or other spectrum to cross-check its position and recalibrate as needed.
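To make the idea concrete (this is my own toy sketch, not DARPA’s design), here is a minimal one-dimensional Python illustration of blending dead reckoning from an inertial sensor with an occasional external ‘fix’ from a broadcast signal. The drift rate, fix interval, and blending weight are all invented numbers.

```python
# Toy 1-D illustration of the ASPN idea: inertial dead reckoning drifts over
# time, and an occasional external fix (e.g., a broadcast signal) pulls the
# estimate back. All numbers and the blending rule are invented for this sketch.

import random

random.seed(1)

true_pos = 0.0       # where the device actually is (1-D for simplicity)
estimate = 0.0       # where the device thinks it is
BLEND = 0.8          # how strongly an external fix corrects the estimate

for t in range(1, 61):
    # The device really moves 1 unit per step.
    true_pos += 1.0

    # Dead reckoning: the inertial sensor reports the step with a small bias,
    # so the estimate slowly drifts away from the truth.
    estimate += 1.0 + random.gauss(0.02, 0.01)

    # Every 20 steps, an external fix (noisy but unbiased) is available.
    if t % 20 == 0:
        external_fix = true_pos + random.gauss(0.0, 0.1)
        estimate = (1 - BLEND) * estimate + BLEND * external_fix

    if t % 10 == 0:
        print(f"t={t:2d}  error={abs(estimate - true_pos):.2f}")
```

A real system would fuse many more signal types with proper uncertainty weighting (something like a Kalman filter), but the drift-then-correct pattern is the core of the idea.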