Death of the Smartphone?

Over the last few weeks I have seen several articles predicting the end of the smartphone. Those claims are a bit exaggerated since the authors admit that smartphones will probably be around for at least a few decades. But they make some valid points which demonstrate how quickly technologies come into and out of our lives these days.

The Apple iPhone was first sold in the summer of 2007. While there were phones with smart capabilities before that, most credit the iPhone release with the real birth of the smartphone industry. Since then smartphone technology has swept the entire world.

As a technology the smartphone is mature, which is what you would expect from a ten-year-old technology. While phones might still get more powerful and faster, the design for smartphones is largely set, and now each new generation touts new and improved features that most of us don’t use or care about. The discussion of new phones now centers on minor tweaks like curved screens and better cameras.

Almost the same ten-year path happened to other electronics like the laptop and the tablet. Once any technology reaches maturity it starts to become commoditized. I saw this week that a new company named Onyx Connect is introducing a $30 smartphone into Africa where it joins a similarly inexpensive line of phones from several Chinese manufacturers. These phones are as powerful as US phones of just a few years ago.

This spells trouble for Apple and Samsung, which both benefit tremendously by introducing a new phone every year. People are now hanging onto phones much longer, and soon there ought to be scads of reasonably-priced alternatives to the premier phones from these two companies.

The primary reason that the end of the smartphone is predicted is that we are starting to have alternatives. In the home the smart assistants like Amazon Echo are showing that it’s far easier to talk to a device rather than work through menus of apps. Anybody who has used a smartphone to control a thermostat or a burglar alarm quickly appreciates the ability to make the changes by talking to Alexa or Siri rather than fumbling through apps and worrying about passwords and such.

The same thing is quickly happening in cars, and when your home and car are networked together using the same personal assistant, the need to use a smartphone while driving is entirely eliminated. The same will soon happen in the office, which means there will be a great alternative to the smartphone in the home, the car and the office – the places where most people spend the majority of their time. That’s going to cut back on reliance on the smartphone and drastically reduce the number of people who rush to buy an expensive new one.

There are those predicting that some sort of wearable like glasses might offer another good alternative for some people. There are newer versions of smartglasses, like the $129 Snap Spectacles, that are less obtrusive than the first-generation Google Glass. Smartglasses still need to overcome the societal barrier where people are not comfortable being around somebody who can record everything that is said and done. But perhaps the younger generations will not find this to be as much of a barrier. There are also other potential kinds of wearables, from smartwatches to smart clothes, that could take over the non-video functions of the smartphone.

As with any technology as widespread as the smartphone is today, there will be people who stick with theirs for decades to come. I saw a guy on a plane last week with an early-generation iPod, which was noticeable because I hadn’t seen one in a few years. But I think that most people will be glad to slip into a world without a smartphone if that’s made easy enough. Already today I ask Alexa to call people, and I can do it all through any device, such as my desktop, without even having a smartphone in my office. And as somebody who mislays my phone a few times every day, I know that I won’t miss having to use a smartphone in the home or car.

2017 Technology Trends

I usually take a look once a year at the technology trends that will be affecting the coming year. There have been so many other topics of interest lately that I didn’t quite get around to this by the end of last year. But here are the trends that I think will be the most noticeable and influential in 2017:

The Hackers are Winning. Possibly the biggest news all year will be continued security breaches that show that, for now, the hackers are winning. The traditional way of securing data behind firewalls is clearly not effective, and firms from the biggest with the most sophisticated security to the simplest small businesses are getting hacked – and sometimes the simplest methods of hacking (such as phishing for passwords) are still effective.

These things run in cycles and new solutions will be tried to stop hacking. The most interesting trend I see is to get away from storing data in huge databases (which are what hackers are looking for) and instead to distribute that data in such a way that there is nothing worth stealing even after a hacker gets inside the firewall.
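
One simple way to picture the “nothing worth stealing” idea is secret splitting, where a record is broken into shares that are stored on separate systems and are individually worthless. This is just a minimal sketch of the concept (the XOR scheme and sample data are my own illustration, not any vendor’s product):

```python
import os

def split_secret(secret: bytes):
    """Split a secret into two shares; neither share alone reveals anything."""
    pad = os.urandom(len(secret))                 # random one-time pad
    share = bytes(a ^ b for a, b in zip(secret, pad))
    return pad, share                             # store each on a different server

def recombine(pad: bytes, share: bytes) -> bytes:
    """XOR the two shares back together to recover the original data."""
    return bytes(a ^ b for a, b in zip(pad, share))

card = b"4111-1111-1111-1111"
s1, s2 = split_secret(card)
assert recombine(s1, s2) == card
```

A hacker who steals either share alone gets only random-looking bytes; both servers would have to be breached to recover the original record.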

We Will Start Talking to Our Devices. This has already begun, but this is the year when a lot of us will make the change and start routinely talking to our computers and smart devices. My home has started to embrace this and we have different devices using Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa. My daughter has made the full transition and now talks to text instead of typing on a screen, but us oldsters are catching up fast.

Machine Learning Breakthroughs will Accelerate. We saw some amazing breakthroughs with machine learning in 2016. A computer beat the world Go champion. Google Translate can now accurately translate between a number of languages. Just this last week a computer was taught to play poker and was playing at championship level within a day. It’s now clear that computers can master complex tasks.

The numerous breakthroughs this year will come as a result of having the AI platforms at Google, IBM and others available for anybody to use. Companies will harness this capability to use AI to tackle hundreds of new complex tasks this year and the average person will begin to encounter AI platforms in their daily life.

Software Instead of Hardware. We have clearly entered another age of software. For several decades hardware was king and companies were constantly updating computers, routers, switches and other electronics to get faster processing speeds and more capability. The big players in the tech industry were companies like Cisco that made the boxes.

But now companies are using generic hardware in the cloud and are looking for new solutions through better software rather than through sheer computing power.

Finally a Start of Telepresence. We’ve had a few unsuccessful shots at telepresence in the past. It started a long time ago with the AT&T Picturephone. Then we tried using expensive video conference equipment, which was generally too costly and cumbersome to be widely used. For a while there was a shot at using Skype for teleconferencing, but the quality of the connections often left a lot to be desired.

I think this year we will see some new commercial vendors offering a more affordable and easier to use teleconferencing platform that is in the cloud and that will be aimed at business users. I know I will be glad not to have to get on a plane for a short meeting somewhere.

IoT Technology Will Start Being in Everything. But for most of us, at least for now it won’t change our lives much. I’m really having a hard time thinking I want a smart refrigerator, stove, washing machine, mattress, or blender. But those are all coming, like it or not.

There will be More Press on Hype than on Reality. Even though there will be amazing new things happening, we will still see more press on technologies that are not here yet than on those that are. So expect mountains of articles on 5G, self-driving cars and virtual reality. But you will see fewer articles on the real achievements, such as how a company reduced paperwork 50% by using AI or how the average business person saved a few trips due to telepresence.

AI, Machine Learning and Deep Learning

It’s getting hard to read tech articles anymore that don’t mention artificial intelligence, machine learning or deep learning. It’s also obvious to me that many casual writers of technology articles don’t understand the differences and they frequently interchange the terms. So today I’ll take a shot at explaining the three terms.

Artificial intelligence (AI) is the overall field of working to create machines that carry out tasks in a way that humans think of as smart. The field has been around for a long time and twenty years ago I had an office on a floor shared by one of the early companies that was looking at AI.

AI has been in the press a lot in the last decade. For example, IBM used its Deep Blue supercomputer to beat the world’s chess champion. It really didn’t do this with anything we would classify as intelligence. It instead used the speed of a supercomputer to look forward a dozen moves and was able to rank options by looking for moves that produced the lowest number of possible ‘bad’ outcomes. But the program was not all that different from chess software that ran on PCs – it was just a lot faster and used the brute force of computing power to simulate intelligence.

Machine learning is a subset of AI that provides computers with the ability to learn without being programmed for a specific task. The Deep Blue computer used a complex algorithm that told it exactly how to rank chess moves. But with machine learning the goal is to write code that allows computers to interpret data and to learn from their errors to improve whatever task they are doing.

Machine learning is enabled by the use of neural network software. This is a set of algorithms that are loosely modeled after the human brain and that are designed to recognize patterns. Recognizing patterns is one of the most important ways that people interact with the world. We learn early in life what a ‘table’ is, and over time we can recognize a whole lot of different objects that also can be called tables, and we can do this quickly.

What makes machine learning so useful is that feedback can be used to inform the computer when it makes a mistake, and the pattern recognition software can incorporate that feedback into future tasks. It is this feedback capability that lets computers learn complex tasks quickly and to constantly improve performance.
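
To make the feedback loop concrete, here is a toy sketch using a single artificial neuron (a perceptron) that learns the logical AND pattern purely from being told when its guess is wrong. It illustrates the principle, not any production system:

```python
# A single artificial neuron that learns a pattern from feedback.
def predict(weights, bias, x):
    """Fire (1) if the weighted sum of inputs crosses the threshold."""
    total = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if total > 0 else 0

def train(samples, epochs=20, rate=0.1):
    """Repeatedly guess, compare to the right answer, and nudge the weights."""
    weights, bias = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            error = target - predict(weights, bias, x)      # the feedback signal
            weights = [w + rate * error * xi for w, xi in zip(weights, x)]
            bias += rate * error                            # adjust toward the right answer
    return weights, bias

# Teach the neuron logical AND: output 1 only when both inputs are 1.
and_samples = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train(and_samples)
```

After training, the neuron classifies all four input patterns correctly, having been given only examples and corrections, never an explicit rule.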

One of the earliest examples of machine learning I can recall is the music classification system used by Pandora. With Pandora you can create a radio station to play music that is similar to a given artist, but even more interestingly you can create a radio station that plays music similar to a given song. The Pandora algorithm, which they call the Music Genome Project, ‘listens’ to music and identifies patterns in the music in terms of 450 musical attributes like melody, harmony, rhythm, composition, etc. It can then quickly find songs that have the most similar genome.
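
The “find the most similar genome” step can be sketched as a nearest-neighbor search over attribute vectors. The attribute names and scores below are invented for illustration; Pandora’s actual genome and matching algorithm are proprietary:

```python
import math

# Hypothetical attribute scores (0..1) for a few songs; the real Music
# Genome Project uses hundreds of attributes, these three are made up.
songs = {
    "song_a": [0.9, 0.2, 0.7],   # e.g. [melody, distortion, tempo]
    "song_b": [0.8, 0.3, 0.6],
    "song_c": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    """Similarity of two attribute vectors (1.0 means identical direction)."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

def most_similar(seed, catalog):
    """Return the name of the song whose genome is closest to the seed song's."""
    others = {name: vec for name, vec in catalog.items() if name != seed}
    return max(others, key=lambda name: cosine(catalog[seed], others[name]))
```

With these made-up numbers, `most_similar("song_a", songs)` picks `song_b`, whose attribute profile points in nearly the same direction.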

Deep learning is the newest field of artificial intelligence and is best described as the cutting-edge subset of machine learning. Deep learning applies big data techniques to machine learning to enable software to analyze huge databases. Deep learning can help make sense out of immense amounts of data. For example, Google might use deep learning to interpret and classify all of the pictures its search engine finds on the web. This enables Google to show you a huge number of pictures of tables or any other object upon request.

Pattern recognition doesn’t have to just be visual. It can include video, written words, speech, or raw data of any kind. I just read about a good example of deep learning last week. A computer was provided with a huge library of videos of people talking, along with the soundtracks, and was asked to learn what people were saying just by how they moved their lips. The computer would make its best guess and then compare its guess to the soundtrack. With this feedback the computer quickly mastered lip reading and is now outperforming experienced human lip readers. The computer that can do this is still not ‘smart’ but it can become incredibly proficient at certain tasks, and people interpret this as intelligence.

Most of the promises from AI are now coming from deep learning. It’s the basis for self-driving cars that learn to get better all of the time. It’s the basis of the computer I read about a few months ago that is developing new medicines on its own. It’s the underlying basis for the big cloud-based personal assistants like Apple’s Siri and Amazon’s Alexa. It’s going to be the underlying technology for computer programs that start tackling white collar work functions now done by people.

Broadband and the Elderly

Almost every list of potential Internet benefits I have ever seen includes the goal of using broadband to allow people to remain in their homes as they age. It’s one of those uses of broadband that has always been right around the corner. And yet, there is still no suite of products that can deliver on this goal.

This is a bit surprising because America is aging and surveys show that a large majority of aging people want to stay in their homes as long as possible. Nursing homes and other kinds of care are expensive, and people are willing to spend money on home care if that is possible.

But I think there is some hope on the horizon. AARP has been holding annual expos to allow vendors to display new products for the elderly. In the broadband / technology area the number of vendors at these expos has grown from 80 in 2012 to 228 in 2015. So there are companies working on the needed technologies and products.

It’s not hard to picture what such a suite of products would look like. It certainly would contain the following:

  • A health monitoring system that would check vital statistics such as heart rate, blood pressure, blood sugar and whatever factors were most important for a particular person.
  • A monitoring system that can track the movements of an elderly person and report when they have fallen or not moved for a while.
  • A system that prompts people to take pills or other needed treatments on time.
  • A 2-way communications system that allows the elderly to stay socially connected to the outside world, to have visits with a doctor, etc.
  • A smart bot of some sort (like the Apple Siri or the Amazon Echo) that can help the elderly get things done like make appointments or call for groceries.
  • Eventually there would be a robot or robots to make life easier. They could perform everyday functions like taking out the trash, washing dishes or other tasks that are needed by the stay-at-home person.
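
The first item on the list, for example, boils down to comparing sensor readings against per-person normal ranges. Here is a minimal sketch of that idea; the thresholds and field names are invented for illustration, not medical guidance:

```python
# Invented example ranges for one hypothetical person, not medical guidance.
NORMAL_RANGES = {
    "heart_rate":  (50, 110),   # beats per minute
    "systolic_bp": (90, 150),   # mm Hg
    "blood_sugar": (70, 180),   # mg/dL
}

def check_vitals(reading: dict) -> list:
    """Return an alert string for any reported value outside its normal range."""
    alerts = []
    for measure, (low, high) in NORMAL_RANGES.items():
        value = reading.get(measure)
        if value is not None and not (low <= value <= high):
            alerts.append(f"{measure} out of range: {value}")
    return alerts
```

A real system would personalize the ranges per patient and route alerts to caregivers over the broadband connection; this sketch only shows the core threshold check.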

We are just now starting to see the first generation of useful personal bots, and it should not be too many more years before a smart bot like Siri can become the interface between an elderly person and the world. We still need bots to get better at understanding natural language, but that seems to be improving by leaps and bounds.

We are probably a decade before there will be the first truly useful house robots that can tackle basic household chores. But once these become common it won’t take long for them to improve to the point where they could become nurse, housekeeper and cook for an elderly person.

According to AARP, the biggest hurdle to developing the needed suite of products is the lack of investors willing to fund the needed technologies. For whatever reason investors are not readily backing companies that want to develop products in this space. This is not unusual for a complex technology like this one. Since the solution is complex, investments in any one part of the product suite are risky. Even if a new product works well, there is no guarantee that it will be included in the eventual bundles of home care products. This makes investors leery about backing any one solution at this early stage of the industry.

But the pressure will remain to develop these products. The US (and much of the rest of the world) is aging. I just read yesterday that there are over 50,000 people in Japan over 100 years old, up from only a thousand or so a few decades ago. Health breakthroughs are letting people live longer and more productive lives. As a society we need to find a solution for our aging population (since we are all going to get there soon enough).

One thing is for sure – good broadband is a key component of this suite of products. If we don’t find a way to get broadband to everybody by the time these products hit the market, then we will be punishing the elderly that live where there is poor broadband. If you think there is a loud public outcry today from folks without broadband, wait until people’s lives depend upon it.

Mr. Watson . . . . come here.

This week IBM cut the ribbon on a “Watson Client Experience Center” in New York City, which along with five other centers will provide access to the Watson supercomputer. A few weeks ago IBM also announced the availability of what it calls Bluemix, a suite of several cognitive-based cloud services. Several of the articles I read about this announcement say that Watson is bringing artificial intelligence to the world. But it’s not. Watson is a pretty amazing computer system and can do a lot of great things, but the computer is still no smarter than your toaster. You may ask how I can say that since Watson was able to soundly beat the two best Jeopardy champs a few years ago.

Let’s look at how Watson works. First, Watson is a supercomputer, meaning that it has massive computational power and fast input/output. Watson is configured as a cluster of ninety IBM Power 750 servers, each of which uses four 3.5 GHz POWER7 eight-core processors, with four threads per core. In total, the system has 2,880 POWER7 processor cores and 16 terabytes of RAM. Watson has a natural language interface, meaning that it is designed to be queried by conversation, in the same manner as Apple’s Siri.

Watson uses a hypothesis generator. What this means is that when it is asked something, Watson searches its databases and compiles all of the answers that seem to answer the question posed to it. Through sheer blazing computational speed Watson can search this entire database quickly. It then ranks the results according to how frequently it encounters each answer. For the Jeopardy challenge Watson was fed multiple reference sources like encyclopedias, textbooks and all of Wikipedia.

Finally, Watson uses what IBM calls dynamic learning. This means that when Watson makes a mistake, which has to be often when working in something as imprecise as English, Watson can take feedback from the user when told that its answer is wrong. It stores this feedback and uses its ‘learning’ to influence the rankings when it next encounters the same question.
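
A toy sketch can show how frequency ranking and user feedback fit together. This is my own illustration of the two mechanisms described above, not IBM’s actual code; the “documents” and penalty scheme are invented:

```python
from collections import Counter

# A tiny stand-in for Watson's reference database.
documents = [
    "Ottawa is the capital of Canada",
    "Toronto is the largest city in Canada",
    "Ottawa sits on the Ottawa River",
]

def rank_answers(candidates, docs, penalties):
    """Score each candidate by how often it appears in the evidence,
    minus any penalty learned from past user feedback."""
    counts = Counter()
    for answer in candidates:
        counts[answer] = sum(doc.count(answer) for doc in docs)
        counts[answer] -= penalties.get(answer, 0)   # the 'dynamic learning' part
    return [answer for answer, _ in counts.most_common()]

penalties = {}
best = rank_answers(["Ottawa", "Toronto"], documents, penalties)[0]
# If a user flags the top answer as wrong, remember that for next time:
penalties["Ottawa"] = 10
```

On the first pass the frequency count puts "Ottawa" on top; after the feedback penalty is stored, the same query ranks "Toronto" first, which is all the “learning” amounts to in this sketch.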

But under it all Watson is no smarter than your desktop computer because there is no actual intelligence in the system, artificial or otherwise. What Watson does to simulate intelligence is to present a friendly language interface and fast computational power to come up with answers to questions. But Watson is only as ‘smart’ as the databases underneath it. For Jeopardy they did not allow Watson access to the Internet because the Internet is full of incorrect facts. Watson has no way of distinguishing between what is true or not true, other than through feedback from users who correct its mistakes. Watson would be like many of us and would fall for every Internet hoax that hits the web. For example, there was an Internet hoax earlier this year that said that Flo from the insurance commercials was killed, and if Watson were connected to the web it would believe such an untrue rumor based upon the sheer volume of claims made about the hoax.

This is not to say that Watson can’t do amazing things. Imagine Watson paired with Siri. Let’s face it, Siri is okay with driving directions but can quickly get flustered on almost anything else. With Watson’s database behind Siri it would become much more useful in a hurry. And even for driving directions Watson would help Siri be better. Siri is great at getting you between towns, but I’ve noticed that in crowded urban environments Siri regularly wants you to pull into the wrong parking lot or driveway, and over time Watson would help Siri learn these little nuances of the map through user feedback.

Expect over the next few years to see a flood of new apps that do a better job of working through spoken interface. Already there are interesting new ventures that plan on incorporating Watson. For example, the founder of Travelocity wants to roll out a service called WayBlazer that will help you figure out things to do when you travel. The goal is to help you find activities that interest you rather than being steered to the normal tourist traps. A start-up called LifeLearn wants to build a tool to help veterinarians diagnose pet ailments better. A company called SparkCognition wants to offer a service to help security people spot security risks by having Watson ‘think like a security expert’. Expect all sorts of new programs and apps that take advantage of Watson’s language interface and the ability to quickly search databases.

This is a big breakthrough in that this is the first time that mass computational power will be brought into our daily lives through apps. Those apps are going to start doing things that we have always wanted computers to do. But let’s not forget how quickly computers are getting better. I reported last month on a company that expects to have a desktop supercomputer by 2017 that will be several orders of magnitude faster than Watson. Within a decade there will be computers everywhere with the power that Watson has today. And let’s also not forget that Watson is not smart and that there is zero cognition in the system. Watson doesn’t think, but rather just searches and compiles large databases quickly. That is incredibly useful and I will be glad to use Watson-based services – but this is not yet anything close to artificial intelligence.

Will We Be Seeing Real Artificial Intelligence?

I have always been a science fiction fan and I found the controversy surrounding the new movie Transcendence to be interesting. It’s a typical Hollywood melodrama in which Johnny Depp plays a scientist who is investigating artificial intelligence. After he is shot by anti-science terrorists his wife decides to upload his dying brain into their mainframe. As man and machine merge they reach that moment that AI scientists call the singularity – when a machine becomes aware. And with typical Hollywood gusto this first artificial intelligence goes on to threaten the world.

The release of this movie got scientists talking about AI. Stephen Hawking and other physicists wrote an article for The Independent after seeing the movie. They caution that while developing AI would be the largest achievement of mankind, it also could be our last. The fear is that a truly aware computer will not be human and that it will pursue its own agenda over time. An AI will have the ability to be far smarter than mankind and yet contain no human ethics or morality.

This has been a recurrent theme in science fiction, starting with Robby the Robot up through HAL in 2001, Blade Runner and The Terminator. But when Hawking issues a warning about AI one has to ask if this is moving out of the realm of science fiction into science reality.

Certainly we have some very rudimentary forms of AI today. We have Apple’s Siri and Microsoft’s Cortana that help us find a restaurant or schedule a phone call. We have IBM’s Deep Blue, which beat the best chess player in the world, and Watson, which won at Jeopardy and is now making medical diagnoses. And these are just the beginning, and numerous scientists are working on the next breakthroughs in machine intelligence that will help mankind. For example, a lot of the research into how to understand big data is based upon huge computational power coupled with some way to make sense out of what the data tells us. But not all AI research leads to good things, and it’s disconcerting to see that the military is looking into building self-aware missiles and bombs that can seek out their targets.

One scientist I have always admired is Douglas Hofstadter, the author of Gödel, Escher, Bach: An Eternal Golden Braid, which won the Pulitzer Prize in 1980. It’s a book I love and one that people call the bible of artificial intelligence. It’s a combination of exercises in computing, cognitive science, neuroscience and psychology, and it inspired a lot of scientists to enter the AI world. Hofstadter says that Siri and Deep Blue are just parlor games that overpower problems with sheer computational power. He doesn’t think these kinds of endeavors are going to lead to AI and that we won’t get there until we learn more about how we think and what it means to be aware.

With that said, most leading scientists in the field are predicting the singularity anywhere from 20 to 40 years from now. And just about everybody is sure that it will happen by the end of this century. Hawking is right and this will be the biggest event in human history to date – we will have created another intelligence. Nobody knows what that means, but it’s easy to see how a machine intelligence could be dangerous to mankind. Such an intelligence could think circles around us and could compete with us for our resources. It would likely put most of us out of work since it would do most of the thinking for us.

And it will probably arise without warning. There are numerous paths being taken in AI research and one of them will probably hit pay dirt. Do we really want a smart Siri, one that is smarter than us? My answer to that question is, only if we can control it. However, there is a good chance that we won’t be able to control such a genie or ever put it back into its bottle. Add this to the things to worry about, I guess.

Web 3.0

Web 3.0 is the name that has been given to the next generation web. While not everybody agrees with the designations, Web 1.0 was the first generation web, where everything was flat websites. With Web 1.0 we browsed websites to see what other people wanted to tell us.

We are now in Web 2.0, where users can interactively create content. Instead of just looking at websites, users now interact and create content on social networks like Facebook, Twitter and LinkedIn. YouTube has so much user-generated content that it is one of the biggest traffic generators on the web. And websites are no longer static – users can post their opinions on a newspaper article or create funny reviews on an Amazon product.

Web 3.0 is expected to go a step further and personalize the web experience. It is expected that users will have a personal assistant that will learn their preferences and help them navigate the web. Apple’s Siri is among the first generation of this type of assistant, but such assistants are expected to soon advance far past Siri.

The biggest improvement of Web 3.0 is that it will understand context, which is lacking in Siri and in today’s search engines like Google. In the future, if you tell your assistant that you want to buy a mouse, it will know from the context whether you mean the computer device or the little furry animal. The real advantage of understanding context is that search engines will get smarter and will bring you facts. Today the web searches on keywords and brings you every website that contains one of your search words. But in Web 3.0 it is expected that you can ask a question like, “What year was Abraham Lincoln elected?” and get the answer instead of a bunch of websites about Lincoln, Nebraska.
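
The mouse example amounts to word-sense disambiguation: score each sense by how many of its cue words appear nearby. Here is a toy sketch; the cue words are invented for illustration, and real assistants use statistical models trained on huge text corpora rather than hand-written lists:

```python
# Hand-picked cue words for each sense of an ambiguous word (invented example).
SENSES = {
    "mouse": {
        "computer device": {"usb", "wireless", "click", "laptop", "keyboard"},
        "animal":          {"pet", "cage", "cheese", "fur", "tail"},
    }
}

def disambiguate(word: str, sentence: str) -> str:
    """Pick the sense whose cue words overlap most with the surrounding sentence."""
    context = set(sentence.lower().split())
    senses = SENSES[word]
    return max(senses, key=lambda sense: len(senses[sense] & context))
```

So "I need a wireless mouse for my laptop" resolves to the computer device, while "my pet mouse loves cheese" resolves to the animal, purely from the overlapping context words.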

A personal assistant will also make life easier. For instance, you can tell your assistant that you want to meet a friend for a birthday lunch and also buy them a present. Your assistant will talk to your friend’s assistant behind the scenes and find a restaurant that is convenient for both of you and that you both will like. And it will suggest presents to you, and once you choose one it will buy it for you, have it gift wrapped and delivered to the restaurant. And all of this happens behind the scenes with an assistant that understands context.

There will be more to Web 3.0 than just the personal assistant. As more brains get built into the web, the way we use it can get smarter as well. As an example, Google just patented something it calls geolocation technology. This and tools like it are going to bring some aspects of artificial intelligence to your personal assistant. For example, with geolocation, advertisers will be able to make offers to you (really to your assistant) that depend upon your location. They might offer you a special on a meal, a drink or a purchase at a store a few doors ahead of you as you walk down the street. But your assistant will learn to filter such offers and will only bring to your attention the ones that are going to be of interest to you.

The personalized web is going to transform the web experience. You will finally be able to use the web to find the facts you want instantly. You will be able to use the web as your social secretary, or as your to-do list or in any other manner of your choosing.

Hello Siri . . .


Gartner, a leading research firm, issued a list of the ten top strategic technology trends for 2014. By strategic they mean that these are developments that are getting a lot of attention and development in the industry, not necessarily that these developments will come to full fruition in 2014. One of the items on the list was ‘smart machines’ and under that category they included self-driving cars, smart advisors like IBM’s Watson and advanced global industrial systems, which are automated factories.

But I want to look at the other item on their list, which is contextually aware intelligent personal assistants. This essentially will be Apple’s Siri on steroids. This is expected to be done at first mostly using cell phones or other mobile devices. Eventually one would think that this will migrate toward something like Google Glass, a smartphone, a bracelet or some other way to have the assistant always with you.

Probably the key part of the descriptive phrase is contextual. To be useful, a personal assistant has to learn and understand the way a person talks and lives in order to become completely personalized to them. To be contextual, the current Siri needs to grow to learn things by observation. To be the life-changing assistant envisioned by Gartner is going to require software that can learn to anticipate what you want. As you are talking to a certain person, your assistant ought to be able to pick out of the conversation those bits and pieces that you are going to want it to remember. For example, somebody may tell you their favorite restaurant or favorite beer, and you would want your assistant to remember that without you telling it to do so.
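
Even today you can fake a tiny piece of this “remember without being told” behavior with pattern matching. The single regex below is an invented stand-in for real language understanding, just to illustrate the idea of noticing preferences as they go by:

```python
import re

# Invented pattern: catch phrases like "my favorite beer is Guinness."
PATTERN = re.compile(r"my favorite (\w+) is ([\w' ]+?)[.!]", re.IGNORECASE)

def remember_preferences(utterance: str, memory: dict) -> dict:
    """Scan an utterance for stated preferences and file them away."""
    for category, choice in PATTERN.findall(utterance):
        memory[category.lower()] = choice.strip()
    return memory

memory = {}
remember_preferences("By the way, my favorite beer is Guinness.", memory)
```

A genuinely contextual assistant would need statistical language understanding to catch preferences phrased a thousand different ways; a fixed pattern like this only catches one.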

Both Apple’s and Microsoft’s current personal assistants have already taken the first big step in the process in that they are able to converse somewhat in natural language. Compare what today’s assistants can already do to Google’s search engine, which makes you type in awkward phrases. Any assistant is going to have to be completely fluent in a person’s language.

One can easily envision a personal assistant that helps you learn when you are young and then sticks with you for life. Such an assistant will literally become the most important ‘person’ in somebody’s life. An effective assistant can free a person from many of the mundane tasks of life. You will never get lost, have to make an appointment, remember somebody’s birthday or do many of the routine things that are part of life today. But it still won’t take out the trash, although it can have your house-bot do that.

In the future you can envision this assistant tied into the Internet of things so it would be the one device you give orders to. It would then translate and talk to all of your other systems. It would talk to your smart house, talk to your self-driving car, talk to the system that is monitoring your health, etc.

The biggest issue with this kind of personal assistant is going to be privacy. A true life-assistant is going to know every good and bad thing about you, including your health problems and every one of your ugly bad habits. It is going to be essential that this kind of system stay completely private and be somehow immune to hacking. Nobody can trust an assistant in their life that others can hack or peer into.

One might think that this is something on the distant horizon, but there are many industry experts who think this is probably the first thing on the smart machine list that will come to pass, and that there will be pretty decent versions of this within the next decade. Siri is already a great first step, although often completely maddening. But as this kind of software improves it is not hard to picture this becoming something that you can’t live without. It will be a big transition for older people, but our children will take to this intuitively.