Death of the Smartphone?

Over the last few weeks I have seen several articles predicting the end of the smartphone. Those claims are a bit exaggerated since the authors admit that smartphones will probably be around for at least a few decades. But they make some valid points which demonstrate how quickly technologies come into and out of our lives these days.

The Apple iPhone was first sold in the summer of 2007. While there were phones with smart capabilities before that, most credit the iPhone release with the real birth of the smartphone industry. Since that time smartphone technology has swept the entire world.

As a technology the smartphone is mature, which is what you would expect from a ten-year-old technology. While phones might still get more powerful and faster, the design of smartphones is largely set, and each new generation now touts new and improved features that most of us don’t use or care about. The discussion of new phones now centers on minor tweaks like curved screens and better cameras.

Other electronics like the laptop and the tablet followed almost the same ten-year path. Once any technology reaches maturity it starts to become commoditized. I saw this week that a new company named Onyx Connect is introducing a $30 smartphone in Africa, where it joins a similarly inexpensive line of phones from several Chinese manufacturers. These phones are as powerful as US phones from just a few years ago.

This spells trouble for Apple and Samsung, which both benefit tremendously by introducing a new phone every year. People are now hanging onto phones much longer, and soon there ought to be scads of reasonably-priced alternatives to the premier phones from these two companies.

The primary reason that the end of the smartphone is predicted is that we are starting to have alternatives. In the home the smart assistants like Amazon Echo are showing that it’s far easier to talk to a device rather than work through menus of apps. Anybody who has used a smartphone to control a thermostat or a burglar alarm quickly appreciates the ability to make the changes by talking to Alexa or Siri rather than fumbling through apps and worrying about passwords and such.

The same thing is quickly happening in cars, and when your home and car are networked together through the same personal assistant, the need to use a smartphone while driving is eliminated entirely. The same shift will happen in the office, and soon there will be a great alternative to the smartphone in the home, the car and the office – the places where most people spend the majority of their time. That’s going to cut back on reliance on the smartphone and drastically reduce the number of people who rush to buy an expensive new phone.

There are those predicting that some sort of wearable like glasses might offer another good alternative for some people. There are newer versions of smartglasses, like the $129 Snap Spectacles, that are less obtrusive than the first-generation Google Glass. Smartglasses still need to overcome the societal barrier that people are not comfortable around somebody who can record everything that is said and done. But perhaps younger generations will not find this to be as much of a barrier. There are also other potential wearables, from smartwatches to smart clothes, that could take over the non-video functions of the smartphone.

As with any technology as widespread as the smartphone is today, there will be people who stick with their phones for decades to come. I saw a guy on a plane last week with an early-generation iPod, which was noticeable because I hadn’t seen one in a few years. But I think most people will be glad to slip into a world without a smartphone if that’s made easy enough. Already today I ask Alexa to call people, and I can do it all through any device, such as my desktop, without even having a smartphone in my office. And as somebody who mislays my phone a few times every day, I know I won’t miss having to use a smartphone in the home or car.

2017 Technology Trends

I usually take a look once a year at the technology trends that will be affecting the coming year. There have been so many other topics of interest lately that I didn’t quite get around to this by the end of last year. But here are the trends that I think will be the most noticeable and influential in 2017:

The Hackers are Winning. Possibly the biggest news all year will be continued security breaches showing that, for now, the hackers are winning. The traditional ways of securing data behind firewalls are clearly not effective, and firms from the biggest with the most sophisticated security to the simplest small businesses are getting hacked – and sometimes the simplest methods of hacking (such as phishing for passwords) are still effective.

These things run in cycles, and new solutions will be tried to stop hacking. The most interesting trend I see is to get away from storing data in huge databases (which is what hackers are looking for) and instead distribute that data in such a way that there is nothing worth stealing even after a hacker gets inside the firewall.
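One way to picture that distributed approach is field-level tokenization: the production database stores only random tokens, and the token-to-value mapping lives in a separate vault. This is a hypothetical minimal sketch of the idea, not any vendor’s actual product; the names and fields are made up:

```python
# Field-level tokenization: the production record stores only a random token,
# and the token-to-value mapping lives in a separate vault. Stealing either
# store alone yields nothing usable. Names and fields are hypothetical.
import secrets

vault = {}                        # token -> real value, kept in a separate system

def tokenize(value):
    token = secrets.token_hex(8)  # random token, reveals nothing about the value
    vault[token] = value
    return token

record = {"name": "Alice", "ssn": tokenize("123-45-6789")}  # what the main DB holds
print("123-45-6789" in str(record))  # False -- the main DB never sees the SSN
```

A thief who steals the main database gets meaningless tokens; a thief who steals only the vault gets values with no record context.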

We Will Start Talking to Our Devices. This has already begun, but this is the year when a lot of us will make the change and start routinely talking to our computers and smart devices. My home has started to embrace this and we have different devices using Apple’s Siri, Microsoft’s Cortana and Amazon’s Alexa. My daughter has made the full transition and now talks to text instead of typing on the screen, but we oldsters are catching up fast.

Machine Learning Breakthroughs will Accelerate. We saw some amazing breakthroughs with machine learning in 2016. A computer beat the world Go champion. Google Translate can now accurately translate between a number of languages. Just this last week a computer was taught to play poker and was playing at championship level within a day. It’s now clear that computers can master complex tasks.

The numerous breakthroughs this year will come as a result of having the AI platforms at Google, IBM and others available for anybody to use. Companies will harness this capability to use AI to tackle hundreds of new complex tasks this year and the average person will begin to encounter AI platforms in their daily life.

Software Instead of Hardware. We have clearly entered another age of software. For several decades hardware was king and companies were constantly updating computers, routers, switches and other electronics to get faster processing speeds and more capability. The big players in the tech industry were companies like Cisco that made the boxes.

But now companies are using generic hardware in the cloud and are looking for new solutions through better software rather than through sheer computing power.

Finally a Start of Telepresence. We’ve had a few unsuccessful shots at telepresence in the past. It started long ago with the AT&T video phone. Then came expensive video conference equipment, which was generally too costly and cumbersome to be widely used. For a while there was a shot at using Skype for teleconferencing, but the quality of the connections often left a lot to be desired.

I think this year we will see new commercial vendors offering a more affordable and easier-to-use teleconferencing platform, hosted in the cloud and aimed at business users. I know I will be glad not to have to get on a plane for a short meeting somewhere.

IoT Technology Will Start Being in Everything. But for most of us, at least for now, it won’t change our lives much. I’m really having a hard time thinking I want a smart refrigerator, stove, washing machine, mattress, or blender. But those are all coming, like it or not.

There will be More Press on Hype than on Reality. Even though there will be amazing new things happening, we will still see more press on technologies that are not here yet than on those that are. So expect mountains of articles on 5G, self-driving cars and virtual reality. But you will see fewer articles on the real achievements, such as how a company reduced paperwork 50% by using AI or how the average business person saved a few trips due to telepresence.

AI, Machine Learning and Deep Learning

It’s getting hard to read tech articles anymore that don’t mention artificial intelligence, machine learning or deep learning. It’s also obvious to me that many casual writers of technology articles don’t understand the differences and frequently interchange the terms. So today I’ll take a shot at explaining the three terms.

Artificial intelligence (AI) is the overall field of working to create machines that carry out tasks in a way that humans think of as smart. The field has been around for a long time and twenty years ago I had an office on a floor shared by one of the early companies that was looking at AI.

AI has been in the press a lot in the last decade. For example, IBM used its Deep Blue supercomputer to beat the world’s chess champion. It really didn’t do this with anything we would classify as intelligence. It instead used the speed of a supercomputer to look forward a dozen moves and was able to rank options by looking for moves that produced the lowest number of possible ‘bad’ outcomes. But the program was not all that different from chess software that ran on PCs – it was just a lot faster and used the brute force of computing power to simulate intelligence.

Machine learning is a subset of AI that gives computers the ability to learn without being explicitly programmed for a specific task. The Deep Blue computer used a complex algorithm that told it exactly how to rank chess moves. But with machine learning the goal is to write code that allows computers to interpret data and to learn from their errors to improve whatever task they are doing.

Machine learning is enabled by the use of neural network software. This is a set of algorithms that are loosely modeled after the human brain and that are designed to recognize patterns. Recognizing patterns is one of the most important ways that people interact with the world. We learn early in life what a ‘table’ is, and over time we can recognize a whole lot of different objects that also can be called tables, and we can do this quickly.

What makes machine learning so useful is that feedback can be used to tell the computer when it makes a mistake, and the pattern recognition software can incorporate that feedback into future tasks. It is this feedback capability that lets computers learn complex tasks quickly and constantly improve performance.
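That learn-from-errors loop can be seen in miniature in a single perceptron, the simplest neural unit: every wrong guess produces an error signal that nudges the weights toward the right answer. A toy sketch, learning the logical AND function:

```python
# A single perceptron learning logical AND purely from feedback: every wrong
# guess produces an error signal that nudges the weights toward the target.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]  # AND truth table

w = [0.0, 0.0]
bias = 0.0
rate = 0.1

def predict(x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + bias > 0 else 0

for _ in range(20):                        # a few passes over the data
    for (x1, x2), target in data:
        error = target - predict(x1, x2)   # the feedback signal
        w[0] += rate * error * x1
        w[1] += rate * error * x2
        bias += rate * error

print([predict(x1, x2) for (x1, x2), _ in data])  # [0, 0, 0, 1]
```

Nobody told the program what AND means; the weights were shaped entirely by being corrected, which is the essential difference from Deep Blue’s hand-coded ranking rules.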

One of the earliest examples of machine learning I can recall is the music classification system used by Pandora. With Pandora you can create a radio station to play music that is similar to a given artist, but even more interestingly you can create a radio station that plays music similar to a given song. The Pandora algorithm, which they call the Music Genome Project, ‘listens’ to music and identifies patterns in the music in terms of 450 musical attributes like melody, harmony, rhythm, composition, etc. It can then quickly find songs that have the most similar genome.
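The matching step can be sketched as nearest-neighbor search over attribute vectors. The three attributes and all the scores below are made up for illustration (Pandora’s actual genome uses roughly 450 attributes and its own similarity measure):

```python
# Genome-style matching: each song is a vector of attribute scores, and the
# most similar song to a seed is the nearest vector by cosine similarity.
# The attributes and scores are invented for illustration.
import math

songs = {
    "song_a": [0.9, 0.2, 0.7],   # e.g. melody, rhythm, harmony scores
    "song_b": [0.8, 0.3, 0.6],
    "song_c": [0.1, 0.9, 0.2],
}

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm

seed = "song_a"
best = max((s for s in songs if s != seed),
           key=lambda s: cosine(songs[seed], songs[s]))
print(best)  # "song_b" -- the closest genome to "song_a"
```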

Deep learning is the newest field of artificial intelligence and is best described as the cutting-edge subset of machine learning. Deep learning applies big data techniques to machine learning, enabling software to analyze huge databases and make sense out of immense amounts of data. For example, Google might use deep learning to interpret and classify all of the pictures its search engine finds on the web. This is what enables Google to show you a huge number of pictures of tables, or any other object, upon request.

Pattern recognition doesn’t have to be just visual. It can include video, written words, speech, or raw data of any kind. I just read about a good example of deep learning last week. A computer was provided with a huge library of videos of people talking, along with the soundtracks, and was asked to learn what people were saying just from how they moved their lips. The computer would make its best guess and then compare that guess to the soundtrack. With this feedback the computer quickly mastered lip reading and now outperforms experienced human lip readers. A computer that can do this is still not ‘smart’, but it can become incredibly proficient at certain tasks, and people interpret this as intelligence.

Most of the promises from AI are now coming from deep learning. It’s the basis for self-driving cars that learn to get better all of the time. It’s the basis of the computer I read about a few months ago that is developing new medicines on its own. It’s the underlying basis for the big cloud-based personal assistants like Apple’s Siri and Amazon’s Alexa. It’s going to be the underlying technology for computer programs that start tackling white collar work functions now done by people.

Responding to Customers in a Virtual World

I don’t know how typical I am, but I suspect there are a whole lot of people like me when it comes to dealing with customer service. It takes something really drastic in my life for me to want to pick up the phone and talk to a customer service rep. Only a lack of a broadband connection or having no money in my bank account would drive me to call a company. My wife and I have arranged our life to minimize such interactions. For instance, we shop by mail with companies that allow no-questions-asked returns.

The statistics I hear from my clients tell me that there are a lot of customers like me – ones who would do almost anything to avoid talking to a person at a company. And yet I also hear that most ISPs average something like a call per month per customer. That means for everybody like me who rarely calls, there are people who call their ISP multiple times per month.

It’s not like there aren’t things that I want to know about. There are times when it would be nice to review my bill or my open balance, but unless there is no alternative I would never call and ask that kind of question. But from what I am told, billing inquiries and outages are the two predominant reasons that people call ISPs. And if a carrier offers cable TV you can add picture quality to that list.

Customer service is expensive. The cost of the labor and the systems that enable communicating with customers is a big expense for most ISPs. Companies have tried various solutions to cut down on the number of customer calls, and a few of them have seen some success. For instance, some ISPs make it easy for a customer to handle billing inquiries or to look at bills online. This is something banks have been pretty good at for a long time. But it’s apparently difficult to train customers to use these kinds of web tools.

Many companies are now experimenting with online customer service reps who offer to chat online with you from their website. I don’t know the experiences other people have had with this, but I generally find this even less satisfactory than talking to a real person – mostly due to the limitations that both parties have of typing back and forth to each other. Unless you are looking for something really specific and easy, communicating by messaging is not very helpful and might even cost a company more labor than talking to somebody live. Even worse, many companies use a lot of pre-programmed scripts for online reps to reduce messaging response times, and those scripts can be frustrating for a customer.

There are a handful of solutions I have seen that offer new tools for making it easier for customers to communicate with an ISP. For instance, NuTEQ has developed an interactive text messaging system that can answer basic questions for customers without needing a live person at the carrier end. Customers can check account balances, report an outage, schedule and monitor the status of a tech visit, or do a number of other tasks without having to talk to somebody or wade through a customer service website.
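The basic routing such a system needs can be sketched as keyword matching against a table of canned intents. The keywords, replies and account fields here are hypothetical, not NuTEQ’s actual design:

```python
# Toy keyword routing for a customer-service text system: match the incoming
# message against canned intents and fill in account details. All keywords,
# replies and account fields are hypothetical.
ROUTES = {
    ("balance", "bill", "owe"): "Your current balance is ${balance}.",
    ("outage", "down", "no service"): "Thanks -- an outage ticket has been opened.",
    ("tech", "visit", "appointment"): "Your tech visit is scheduled for {date}.",
}

def route(message, account):
    text = message.lower()
    for keywords, reply in ROUTES.items():
        if any(k in text for k in keywords):
            return reply.format(**account)
    return "Sorry, I didn't understand. A representative will text you back."

account = {"balance": "42.10", "date": "Tuesday 9-11am"}
print(route("How much do I owe?", account))  # Your current balance is $42.10.
```

Keyword tables handle only the narrow, predictable questions; anything else falls through to a person, which is exactly the gap the AI bots discussed below aim to close.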

But I think the real hope for me is the advent in the near future of customer service bots. Artificial intelligence and voice recognition are getting good enough that bots will be able to provide a decent customer service experience. Early attempts at this have been dreadful. Anybody who has tried to change an airline ticket with an airline bot knows that it’s nearly impossible to do anything useful with the current technology.

But everything I read says that we will soon have customer service bots that actually work as well as a person, without the annoying habits of real service reps like putting a caller on hold, losing the caller during a supposed transfer, or trying to upsell you to a product you don’t want. And there ought to be no hold times, since bots are always available. If a bot could quickly answer my question I would have no problem talking to one.

But as good as bots are going to be even within a few years, I am waiting for the next step after that. I have been using the Amazon Echo and getting things done by talking to Alexa, the Amazon AI. Alexa still has a lot of the same challenges as Siri and the other AIs, but I am pleasantly surprised at how often I get what I want on the first try. In my ideal world I would tell Alexa what I want and she (it?) would then communicate with the bots, or even the people, at a company to answer my questions. At the speed at which AI technology is improving, I don’t think this is too many years away. I may like customer service when my bot can talk to their bot.