Are We at the End of Creative Destruction?

Capitalism has thrived over the past few centuries due in large part to a phenomenon called creative destruction. The phrase was coined by economist Joseph Schumpeter and refers to the process by which new technologies and new industries replace old ones, with a net gain for society.

There are thousands of examples of this through time. In transportation alone we’ve moved from horses and buggies and canals to cars, subways, interstate highways, and airplanes. We’ve moved from hand looms to cloth factories to synthetic fabrics (and along the way were even able to retire the ironing board). There is almost nothing from a hundred or more years ago that is still made in the same way.

Creative destruction has always resulted in a net good for the economy and has thus been good for mankind. For every technology that was displaced, something better took its place, and overall the new technologies created more jobs than the old ones lost. The automotive industry is a great example: consider how many jobs there are in car and parts factories, gas stations, and repair shops, far more than were ever employed taking care of horses and trolleys.

I read an article that said that per capita incomes in the US were 28 times higher in 2000 than they were in 1790. This growth is largely due to creative destruction, which is why it’s considered the bedrock of capitalism. Each new technology that has come along has been more efficient than the last, and the growth in wealth has largely been driven by this increase in efficiency.
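
Taking the article’s 28x figure and date range at face value, a two-line calculation shows what that implies as a compound annual growth rate, and how modest steady compounding can look year to year:

    # Implied compound annual growth rate, assuming incomes grew 28x
    # between 1790 and 2000 (the article's figures, taken at face value)
    years = 2000 - 1790
    rate = 28 ** (1 / years) - 1
    print(f"Implied annual growth: {rate:.2%}")  # roughly 1.6% per year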

This is not to say that creative destruction is not disruptive. People who worked in the industries being displaced have always lost their jobs. The transition is never smooth at the local level, and thousands of communities along the way have suffered when their local factories or businesses were supplanted by something new. Over the last century the new businesses have tended to be in urban areas, which is one of the contributors to the long-term worldwide trend of rural populations migrating to cities.

But a number of economists now think that we might have reached the end of creative destruction and that the old paradigm no longer functions. Economists measure the overall efficiency of an economy using labor productivity, the output per hour of the average worker. From World War II until recently, productivity grew at an annual rate of about 3%. But starting in 2004 that rate slowed to 1% per year, and most recently it has been under 0.5%. This is one of several factors that have led to wage stagnation.

There is still a lot of wealth being created in the country, but much of it now comes from information technologies rather than from the historic pattern of replacing industries with something better. A great example is Facebook and other social media. They have created tremendous wealth for their founders, but they are replacing and/or monetizing older ways of socializing. Granted, they are new industries, but they bring very few jobs. It’s striking how many of the largest web companies have created billions in wealth with fewer than a few hundred workers.

Lately, we are also seeing whole industries die without being replaced by new jobs. The typical example given is photography. A huge industry of companies that made cameras and film and that processed pictures for people has been largely replaced by one minor component of smartphones. The same thing has happened to other industries like the music business and the news business. Blogs are interesting, but they don’t really take the place of live news coverage of worldwide events.

Worse, we are standing at the edge of a time when large numbers of jobs might be permanently eliminated. For example, companies like Amazon created a large number of jobs in their warehouses, but they are working to automate the whole warehouse process. We see robots starting to take on roles like hospital orderly, hotel concierge, and barista, among others. The combination of robots and AI is also likely to start replacing scores of traditional white-collar jobs like accountants, paralegals, and other information workers.

The big changes we are now seeing are due largely to the application of Moore’s law and to the fact that computers are becoming powerful enough to mimic human behavior well enough to take over functions that only humans could do before.

I keep reading that there will still be room in the world for human creativity. The problem with that is that society probably won’t support many more creative jobs than it does today. We’ve always had our inventors and scientists and writers and artists, but most people are not able to do this kind of creative work, nor is there going to be a huge uptick in demand for these skills (meaning somebody willing, or able, to pay for them). It could be that creative destruction is about to be replaced by plain destruction, with technology eliminating a lot of jobs that will never be replaced. If so, the world had better get ready to find ways to deal with a lot of people who can’t find paying work. It’s a scary thought.

Voice Over LTE

A few people have been lucky enough to try Voice over LTE (VoLTE) on their cellphones. This is a new application that carries voice calls over the 4G data bandwidth instead of over the separate voice channel used for traditional cellular calls.

I say they are lucky because the quality of VoLTE is much better than the quality of normal cellular calls. This is due to the call carrying a wider range of voice frequencies (normal phone calls have always chopped off both the lower and higher frequencies, which is why people don’t sound the same on the phone as they do in person). VoLTE is supposed to be close in quality to the High Definition (HD) voice currently being provided by some landline providers.
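
To make the frequency point concrete, here is a small illustrative sketch. The band edges are the commonly cited ones (roughly 300–3400 Hz for traditional narrowband calls versus roughly 50–7000 Hz for wideband/HD voice; exact values vary by codec), and the “voice” is just a toy sum of sine waves:

    import numpy as np
    from scipy.signal import butter, sosfilt

    fs = 16000  # sample rate, Hz
    t = np.arange(fs) / fs
    # A toy "voice": a low fundamental plus a few higher harmonics
    voice = sum(np.sin(2 * np.pi * f * t) for f in (120, 600, 2400, 5000))

    def bandpass(x, low, high):
        sos = butter(4, [low, high], btype="band", fs=fs, output="sos")
        return sosfilt(sos, x)

    narrow = bandpass(voice, 300, 3400)  # what a traditional call passes
    wide = bandpass(voice, 50, 7000)     # what VoLTE/HD voice passes

    # The 120 Hz fundamental and the 5000 Hz harmonic survive only in the
    # wideband version, which is why HD voice sounds fuller and more natural.
    for f in (120, 600, 2400, 5000):
        nb = np.abs(np.fft.rfft(narrow))[f]
        wb = np.abs(np.fft.rfft(wide))[f]
        print(f"{f:5d} Hz  narrowband {nb:8.0f}  wideband {wb:8.0f}")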

VoLTE calls are more akin to a call made on Skype with a quality microphone. If you’ve ever talked to somebody on Skype who was in a boardroom or somewhere else with great microphones, you will know what I am talking about: the voice sounds as rich as talking to the person face to face. When Skype is not so good, it’s mostly due to the poor microphones in PCs and laptops, not to the technology.

There are still some significant drawbacks to VoLTE that the industry is working out. Roaming is the biggest issue. Currently, if you are talking on VoLTE and move out of the range of 4G, the call will drop; the calls cannot hand over to 3G or 2G data connections. There are also compatibility issues between carriers, since there are still no standards, so you might have trouble talking to somebody using another cellphone provider. AT&T and Verizon are working to make their two networks compatible, but other carriers have not yet been integrated with anybody. Finally, VoLTE only makes a difference if both callers are on VoLTE.

But the major drawback today is availability: all of the US carriers have introduced VoLTE only on a trial basis in a few markets. And even where it’s available, it’s been introduced for only a small number of handsets by each carrier. You are most likely to get to try this first if you use an iPhone or a Samsung Galaxy.

Early testers of the technology have made some interesting observations. Certainly being able to hear the other party better is a huge benefit. Calls also connect much faster, since the call doesn’t have to make its way through the normal telephone network. One of the most interesting observations is that you can sometimes make VoLTE calls where there is no normal cell phone coverage. This is because some of the spectrum used to deliver 4G has a larger footprint than the spectrum used to make voice calls.

The technology also benefits the carriers, in that it relieves pressure on the spectrum used for voice-only calls. We’re all familiar with trying to make a call in a stadium or on a freeway at rush hour and not being able to get a signal. But as long as you can get a 4G data connection, even a slow one, you will probably be able to make a VoLTE call.

Calling with the technology should also save on cellphone battery life. Today your cellphone spends a lot of energy switching between different frequencies to handle voice and data, or between different types of data.

The technology also supports video calls, which means it will be easier to have video calling on all phones, similar to the FaceTime app that comes with iPhones.

Probably the biggest issue with the technology will be how the carriers price it. Callers with small data caps are going to be nervous using VoLTE if it counts against their data plans.

The network owners are still working out standards and technologies. Currently, a VoLTE call must be routed back to the switch of the cellular provider before being routed onward, which is an inelegant network solution. But the industry is working toward a standard called RAVEL (Roaming Architecture for Voice over LTE with Local Breakout) that will allow calls to be routed locally when appropriate.
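
As a toy illustration of what local breakout changes (the city names are arbitrary and the real signaling is far more involved than this sketch):

    # Toy sketch: without RAVEL, a roaming VoLTE call hairpins back through
    # the caller's home network even when both parties are in the same city.
    def route_call(visited_network, home_network, ravel_enabled):
        if ravel_enabled:
            return f"break out locally in {visited_network}"
        return f"hairpin back through home switch in {home_network}"

    print(route_call("Madrid", "New York", ravel_enabled=False))
    print(route_call("Madrid", "New York", ravel_enabled=True))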

One has to think that eventually this is going to become the voice standard and that the carriers will do away with using a separate frequency for voice. That would allow them to make their networks into 100% data networks and eventually do away with the idea of selling minutes.

There were some field trials of the technology in 2014 and we will see more implementations during 2015. But don’t expect it to be widely available in major markets until 2016, and obviously later in markets that still rely on 3G.

Can There Be a Safer Internet?

I probably feel very much like most people in that the Internet feels less and less safe to use. Viruses have been around a long time, but once you learned not to open emails you didn’t recognize, that risk became somewhat minimal. Now, though, you can get viruses just by opening a website that carries corrupted ads. I know this because I got three such viruses a few weeks ago.

But that’s not even the scary part, since I can generally scrub viruses from my computer. There are far worse risks than viruses today. To start with, there are people sending ransomware that holds your computer hostage until you pay them (and who then, apparently, still don’t fix your machine).

And it appears that everybody is spying on us. Edward Snowden has shown us numerous ways that the NSA is watching us. I literally get dozens of new tracking cookies on my computer every day from commercial companies that want to track me somehow. And every large web company is apparently gathering data on us, including companies like Facebook and Google along with most of the apps we put onto our smartphones.

But since my work depends on using the Internet, and since it has also become one of my major sources of entertainment, I am not likely to abandon it over a lack of safety. I do what I can to be safe, but I doubt it makes much difference. I scrub tracking cookies from my machine every day and I use browsers that supposedly don’t track me, but my guess is that those two things do almost nothing to protect my computer or my privacy.

The biggest problem, aside from every web entity trying to build a profile on me, is that the entire web is based upon a model where everything we do winds up at end points that cannot be made safe. Everybody is touting encryption as a way to stay safer on the web, but every encrypted message ends up at a machine somewhere that decrypts it, and it is those end computers and servers that are the weak points of the Internet. Your data is stored on servers that are out of your control, and your safety relies on the people running those servers keeping them safe. And we all know that hackers break into servers every day, and it may even turn out that there are back-door spying keys built directly into most server software.
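
A minimal sketch of that endpoint problem, using the Fernet recipe from Python’s cryptography package (the message is an obvious stand-in): the ciphertext is safe in transit, but the server must hold a key and recover the plaintext in order to use the data, and that is exactly where breaches happen.

    from cryptography.fernet import Fernet  # pip install cryptography

    key = Fernet.generate_key()   # in real life, held by the server
    f = Fernet(key)

    ciphertext = f.encrypt(b"card number 4111-...")  # unreadable in transit
    plaintext = f.decrypt(ciphertext)  # but the endpoint must decrypt it
    print(plaintext)  # the secret exists in the clear on the server again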

There are experts who say that the lack of safety might kill the Internet. We are incredibly reliant on companies that we don’t know to keep our data safe, and we have seen that both hackers and nefarious insiders can compromise almost any company. If the hackers win the war, then it will become too unsafe to buy anything over the web (or even to give your credit card number to vendors in some other manner if they are going to keep the info on their servers).

But there are alternate models of the Internet that might offer solutions. One of these is known as a block chain. Block chains are a decentralized system of communication that lets end users deal directly with each other without having to go through the normal centralized servers. The technology is best known as the basis of Bitcoin and other cryptocurrencies. There have been numerous articles and papers written about the wild swings in Bitcoin pricing, but those have to do with basic economics rather than with the underlying technology that enables the transactions.

In a block chain network, each member has a copy of the software that identifies them as part of a particular block chain. Before communication is allowed between any two members, the identity of each party must be verified by somebody else who is part of the chain; only with that verification is the communication allowed. The process is slow compared to a normal web transaction, perhaps 10,000 times slower than a normal text or email, but it is safe. The steps needed to operate a block chain are as follows (a minimal code sketch of the mining and chaining loop follows the list):

  1. New transactions are broadcast to all nodes.
  2. Each node collects new transactions into a block.
  3. Each node works on finding a difficult proof-of-work for its block.
  4. When a node finds a proof-of-work, it broadcasts the block to all nodes.
  5. Nodes accept the block only if all transactions in it are valid.
  6. Nodes express their acceptance of the block by working on creating the next block in the chain, using the hash of the accepted block as the previous hash.
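
Here is that sketch: a minimal, self-contained model of steps 2 through 6, loosely patterned on Bitcoin’s proof-of-work. Real networks add peer-to-peer broadcasting, transaction validation rules, and vastly harder difficulty targets.

    import hashlib, json

    DIFFICULTY = 4  # require this many leading zero hex digits

    def block_hash(block):
        data = json.dumps(block, sort_keys=True).encode()
        return hashlib.sha256(data).hexdigest()

    def mine_block(transactions, previous_hash):
        block = {"tx": transactions, "prev": previous_hash, "nonce": 0}
        while not block_hash(block).startswith("0" * DIFFICULTY):
            block["nonce"] += 1   # step 3: grind for a proof-of-work
        return block              # step 4: broadcast to all nodes

    genesis = mine_block(["genesis"], "0" * 64)
    # Step 6: the next block commits to the accepted block's hash
    block2 = mine_block(["alice pays bob 5"], block_hash(genesis))
    print(block_hash(genesis))
    print(block_hash(block2))

Because each block commits to the hash of the one before it, tampering with any old transaction changes that block’s hash and orphans everything built on top of it, which is what makes the chain tamper-evident.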

There are already some examples of block chains being used for communications other than financial ones. For example, the protesters in Hong Kong last year established a block chain so that they could communicate with each other without Chinese government oversight.

There are new companies that want to use block chains to bring safety to other types of communications. For example, Codius is using block chains to provide safe online legal transactions, a way for parties to sign contracts without having to exchange paper. Ethereum is working on a block chain technology that could be used as the basis for almost any kind of communication; they think their platform could support things like private chats, private emails, and even safe web searches.

One can envision many other uses for block chains to create safe communications among specified groups of users: a corporation and its employees, all of the students in a given dorm, or just about any group that wants secure communications. Such a closed system would provide secure and private communication within the block. It’s not a total solution, but it’s a start.

What’s Up With Cable?

The results for 2014 are in, so today I am going to take a fresh look at the cable industry. The largest nine traditional cable companies lost just under 1.2 million cable customers in 2014, an improvement over the 1.7 million they lost in 2013. But looking at the bigger picture, the top thirteen cable providers together lost only 125,000 customers for the year, slightly more than the 95,000 they lost in 2013. Within those numbers, DirecTV and Dish Network together added 20,000 subscribers for the year, while Verizon and AT&T added just under 1.1 million cable customers, down from 1.4 million the prior year.

The industry as a whole is holding steady, and these thirteen companies now have 95.2 million customers. Hidden in these numbers is the growth of cord cutting: for a number of years running, the cable industry as a whole has been slightly shrinking even though roughly one million new households enter the market each year.

Of course, the growth for the cable companies is in broadband. The largest cable companies in the group added 2.6 million high-speed data customers in 2014, while AT&T added 1,000 and Verizon 190,000. Time Warner Cable said in their annual report this year that their data product has a 97% margin, a number that opened a lot of eyes.

There are two other trends not captured in these numbers. First is the growth in time spent watching online programming like Netflix and Amazon Prime, with a corresponding decrease in time spent watching traditional cable TV programming. The overall hours spent per viewer on traditional cable dropped 4.4% for the year, and Nielsen reported that the decline was accelerating at the end of 2014. The most shocking number published this year also came from Nielsen, which reported that over 10 million millennials had largely fled linear TV just in the last year. Primetime viewing dropped by 12% during 2014 as more viewers shift to time-shifted viewing.

The other trend is the continued increase in rates. Most of the cable companies are reporting profits up 7–9%, due in part to more data customers, but also to continued rate increases. As an example, Cablevision raised cable rates by 5.3% last year, or $7.86, and its average revenue per customer is now up to $155.20. It’s a bit mind-boggling to think that’s the average and that a lot of households are paying much more than that.

For yet another year the largest cable companies came in dead last in nationwide customer satisfaction surveys. This puts cable companies behind banks, airlines, and large chain stores, and their satisfaction scores have dropped significantly just since 2013.

There is anxiety in cable boardrooms. Just in the last few weeks there have been mixed signals from Wall Street: some industry analysts downgraded cable stocks due to the FCC’s net neutrality ruling, while others said there would be no significant impact from it. I tend to side with the second crowd, since the FCC has excused broadband from rate regulation (and most other kinds of hands-on regulation).

But the real anxiety comes from a look at the demographics supporting the industry. The average age of cable viewers is increasing quickly as younger people eschew watching traditional TV. The average age of viewers for many shows and networks is now over 55, up sharply from even a decade ago. This is already starting to be felt in terms of advertising revenues, with the pre-sale for the current ad season down sharply from 2013.

There is also a lot of anxiety over Over-the-Top (OTT) programming on the web. It seems like there are weekly announcements of new alternatives coming online. The biggest recent shocks came when HBO, Disney, and ESPN said they would put some product on the web; these have been considered the bedrock channels of cable line-ups. Sling TV seems to be doing well with an abbreviated (but growing) line-up. Sony is supposed to unveil what it calls a major new online product later this year, and another dozen companies are trying to put together web TV packages. The FCC is also looking at rule changes that might make it easier for online content providers to obtain programming. The feeling is that 2015 may be a sea change year and that we will start to see major shifts in the industry.

Meanwhile, programmers keep raising the rates they charge cable companies, and the pace of those increases is accelerating. Many programmers don’t seem overly concerned about the problems faced by the cable companies, because many of them expect to have their content included in online packages, and many are seeing explosive growth in international subscribers.

Liberty Media chairman John Malone recently chastised the industry for not implementing TV Everywhere fast enough. That is the product that lets customers watch programming on any device, on their own schedule. He says this is probably the number one reason why Netflix and others have fared so well (which does rather ignore the cost issue).

The larger cable companies are putting more effort into this area, as witnessed by the new X1 settop boxes that Comcast is deploying; Comcast reports significantly less churn among customers who have the newer technology. What can be said is that the industry is in turmoil. It may not look so bad from the customer numbers, but everybody in the industry senses that things are going to start changing quickly.

As an aside, I know somebody with the new X1 box, and they tell me a different story than Comcast is telling publicly. They recently moved, were given the new X1 box, and they hate it. It regularly fails to record shows, or it goes offline and they can’t access regular or recorded programming. They’ve asked repeatedly to get their old style of box back. Instead they have been given numerous credits, and one manager, as he was issuing a credit, admitted that Comcast had rolled out the new box too fast and that there were problems with it everywhere. They have called several times to cancel but have instead been given another credit. When I told them what I was writing, they speculated that there is less churn because Comcast is simply not letting people go. I don’t know how widespread the problems with the new box are, but cable companies have been known to withhold bad news from investors in the past.

Regulatory Alert – FCC Triples Regulatory Fines

You should be aware that the FCC recently adopted a new policy that automatically triples fines for violations of payment rules for the Universal Service Fund, the Telecommunications Relay Service Fund, local number portability (LNP), the North American Numbering Plan, and other federal regulatory fee programs.

The largest telecom trade organizations, including USTelecom, CTIA, NCTA, and CompTel, have filed a joint petition asking the Federal Communications Commission (FCC) to reconsider the new policy. In the filing they argue that the FCC should have opened a proceeding to investigate the matter rather than arbitrarily trebling penalties.

So be aware that this is even more of an incentive to be a good citizen and to take care in filing paperwork and paying fees to these various federal programs.

What Does the FCC Municipal Ruling Really Mean?

On the same day that the FCC passed its new net neutrality rules, it also granted the petitions of Chattanooga, TN and Wilson, NC to allow them to expand their broadband networks. In both cases the municipal network is surrounded by areas with poor or no broadband, and residents of those areas have been asking the two cities to extend their fiber networks to serve them. But in both cases there were state laws restricting the systems from expanding.

On the surface, the FCC ruling is only about these two specific cases, but the FCC has made it clear that it will entertain petitions from other jurisdictions that are being restricted by state laws. FCC Chairman Wheeler said in the ruling that there are several ‘irrefutable truths’ about broadband: “One is, you can’t say that you’re for broadband and then turn around and endorse limits on who can offer it. Another is that you can’t say, I want to follow the explicit instructions of Congress to remove barriers to infrastructure investment, but endorse barriers on infrastructure investment. You can’t say you’re for competition but deny local elected officials the right to offer competitive choices.”

While this ruling obviously gives great hope to many communities that don’t have broadband, there is still a long way to go until this ruling makes any practical difference in the market. There are already several parties that say they are going to challenge the ruling in court, so this issue will have to slog its way through the legal process before it goes into effect. The primary issue for a challenge is the FCC’s authority to overturn state restrictions on broadband.

The FCC is relying on language passed by Congress as part of the Telecommunications Act of 1996. In that law, section 706 of the Act says the following:

SEC. 706. ADVANCED TELECOMMUNICATIONS INCENTIVES.

(a) IN GENERAL-The Commission and each State commission with regulatory jurisdiction over telecommunications services shall encourage the deployment on a reasonable and timely basis of advanced telecommunications capability to all Americans (including, in particular, elementary and secondary schools and classrooms) by utilizing, in a manner consistent with the public interest, convenience, and necessity, price cap regulation, regulatory forbearance, measures that promote competition in the local telecommunications market, or other regulating methods that remove barriers to infrastructure investment.

Further, Section 253 of the Act includes language that bars states from enacting laws that prohibit ‘any entity’ from providing any interstate or intrastate telecommunications service. I’ve read that language a number of times, and on the surface it certainly seems to give the FCC the authority to override the telecom laws in North Carolina and Tennessee that stopped the municipal systems from expanding. Over the years I’ve chatted with a few of the legislators who helped write the Telecom Act, and they believed that in writing it they were enabling municipal competition.

But as is often the case, a law that Congress passes isn’t fully effective until it’s been tested in court, and there have been two prior challenges to this one. A year after passage of the Act, the City of Abilene challenged the Texas law that flatly banned municipal competition in the state, and lost before the FCC and then on appeal to the federal Court of Appeals for the DC Circuit. In 1997, Missouri also barred public entities from providing telecom services. Cities in the state challenged this at the FCC and lost, then appealed to the Eighth Circuit Court of Appeals, which unanimously ruled in the cities’ favor. The Supreme Court then took the case and let the Missouri law stand.

But the current cases are different from the two prior challenges. Both of those cases challenged an outright ban on competition. In the new cases, the cities asked to be relieved of specific restrictions that stopped them from expanding their existing service beyond a defined footprint. In Tennessee, Chattanooga is restricted to offering broadband in the same area where it serves electric customers; in Wilson, service is restricted to the city boundaries. In both cases there are nearby customers just outside those boundaries that each city wants to serve, and the ruling gives the cities the right to expand.

So this is going to be up to the courts to decide. Certainly one thing has changed since those two earlier rulings: the FCC is now in favor of overturning the state restrictions. In the earlier cases the FCC ruled against the petitioners, so the courts started from that refusal in judging the cases. These kinds of cases usually boil down to whether the FCC has the authority to rule at all, which is not quite the same thing as deciding whether the challenger to the law was right or wrong. In the earlier challenges the courts said the FCC had the authority to deny the municipal petitions. This time any challenge will begin with an FCC ruling in favor of the cities, and we’ll have to wait and see if that makes a difference in the courts.

KPMG’s Cloud Survey

Late last year KPMG published the results of a survey on cloud computing. You can see the results here. The survey polled 500 CEOs, CIOs, and CFOs of large companies with annual revenues over $100 million.

You might ask why these results matter much to anybody who is smaller than that. I think it matters because in the IT world, what the big companies do moves downhill to the rest of us. As an example, if the large companies, with all of their buying power, move away from enterprise level routers, then the rest of us will be dragged in that same direction as the market for enterprise routers stops evolving and dries up. The large companies collectively have the majority of the buying power in this market.

When cloud computing got started a few years back, the original sales pitch was all about cost savings. Cloud vendors touted that it was far cheaper to use computing resources in large data centers than to own your own computing resources and employ a dedicated staff to operate an IT network. And while cost savings are still part of the reason to move to the cloud, they are no longer the only reason. The survey found the following reasons given by large companies for using the cloud:

  • Cost savings – 49%
  • Enabling mobile work forces – 42%
  • Improving customer service and partner interfaces – 37%
  • Understanding corporate data better – 35%
  • Accelerating product development – 32%
  • Developing new business lines – 30%
  • Sharing data globally – 28%
  • Faster time to market – 28%

In a similar survey from 2012 the responses were primarily about cost savings. For example, enabling a mobile workforce was then given as a reason by only 12% of respondents. What brought about such a big shift in the way that large companies think about the cloud in only two years?

The reason is that the cloud was originally a hardware transition. It let companies stop having to buy and maintain expensive computer systems and a large staff to operate them. Executives were tired of constantly being told that their systems were obsolete (and in our fast changing world they usually were). More importantly, executives were tired of being told that it was too hard to accomplish whatever they most wanted to do and they felt that their IT functions were often holding back their company. Many executives thought of their IT department as a black box which they didn’t understand very well.

In the last few years it has become clear that the cloud is not just a substitute for hardware and staff, but is also a catalyst for changing software. Large corporations have often been locked into huge software systems from companies like Oracle or Microsoft. While these packages did some things very well, there were some functions where they were just adequate, and other functions for which they were downright horrible. But the computer systems and IT staff tended to make everything work with a few integrated software packages rather than support a lot of different programs for various functions.

At the same time as the revolution in network hardware and the shift to the efficiencies of large data centers, a host of new software has come onto the market that is extremely good at just a few functions. Companies have found that while breaking free of the restrictions of an in-house IT network and staff, they have also been able to break the bundles of the large software packages.

And this can be seen in what the survey respondents said they have already achieved through the cloud:

  • Improve business performance – 73%
  • Improve the level of service automation – 72%
  • Reduce costs – 70%
  • Better integration of systems – 68%
  • Introduce new features and functions – 68%
  • Enhance interaction with customers and partners – 67%
  • Rapidly deploy new solutions – 67%
  • Replace legacy systems – 66%

Most of these results reflect changes in software as much as they reflect a change in computing platforms. This is not to say that a shift to the cloud is seamless; for example, there is a lot of corporate anxiety about data security. But overall, the large corporations are so far very happy with the shift and most plan to move more functions to the cloud. Smaller companies are going to feel the tug toward the cloud for the same reasons: it’s likely that you can save money and begin using newer and better software after such a change.

New Technology – Medical Applications

This month I look at some technology advances in medicine.

Robot Drug Researcher. A team at the University of Manchester has developed an AI system called Eve that is designed to assist in drug research. Eve is a combination of a computer and a system of mechanical arms that lets it mix various chemicals to search for new compounds. The drug industry has already developed sophisticated software that helps visualize chemical compounds, and Eve adds the ability to ‘learn’ on top of those existing software platforms.

During the original proof of concept for Eve, the computer found a potentially useful compound for fighting drug-resistant malaria. Eve found a chemical called TNP-470 that effectively targets an enzyme key to the growth of Plasmodium vivax, one of the parasites that cause malaria. Many drugs do their job by ‘fitting’ a chemical into a disease agent to block its function, the way a key fits into a lock. Drug chemists often search for cures by looking at classes of chemicals whose shapes might work in a given application, but then they have to slog through hundreds of thousands of tests to find the right one. Eve can automate and speed up that search (sketched below). The team was not really expecting this kind of immediate breakthrough, but it shows the potential of automating the process.
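
The following is a purely illustrative sketch of that idea, not the Manchester team’s actual system: the scoring function is a random stand-in for the shape-matching software, and every name is hypothetical. The point is simply that a machine can rank an enormous library and hand the robot a short list worth physically testing.

    import random
    random.seed(1)

    def predicted_fit(compound):
        """Stand-in for a shape-match/docking score against the target enzyme."""
        return random.random()

    library = [f"compound-{i}" for i in range(100_000)]
    ranked = sorted(library, key=predicted_fit, reverse=True)
    shortlist = ranked[:100]  # only these go to the robot for wet-lab tests

    print(f"screened {len(library):,} candidates; testing {len(shortlist)}")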

Microchips Deliver Drugs Precisely. Biomedical engineer Robert Langer has developed a system that will allow an implanted chip to release drugs in response to a WiFi signal. The chips have up to a thousand tiny wells and can hold many doses of the same drug or a number of different drugs. Each little well has a cover that can be opened in response to a wireless signal.
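
A toy model of the control logic such a chip implies, with all names and doses hypothetical:

    class DrugChip:
        """Toy model: an array of sealed wells, each openable exactly once."""

        def __init__(self, doses):
            self.doses = list(doses)
            self.opened = [False] * len(doses)

        def release(self, well):
            # Triggered remotely, e.g. by the clinician's wireless signal
            if self.opened[well]:
                raise ValueError(f"well {well} already used")
            self.opened[well] = True
            return self.doses[well]

    chip = DrugChip(["dose-1mg"] * 1000)  # up to a thousand tiny wells
    print(chip.release(0))                # one small, precisely timed release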

This technology could be useful in treating some forms of cancer as well as certain kinds of diabetes where small timed releases of drugs are the only effective treatment as compared to a large injection from a shot or a pill. With 1,000 possible doses the device could deliver drugs over a long period of time and might also be useful for such things as birth control.

Organs-on-a-Chip. Fraunhofer, a German research organization, recently announced that it has developed what it calls an organ-on-a-chip. The company has developed chips where human cells from various organs are placed in tiny wells connected by tiny canals. When fully functioning, the chips can stand in for a human body for the purposes of testing the effects of various drugs.

The promise for the technology is that it will be able to greatly speed up the drug testing process, and can possibly replace having to test drugs on animals before a drug can be tested on humans. Normal drug testing can take years, and researchers have never been fully enamored with animal testing since they have always known that many drugs affect humans differently than animals; this testing method can give more precise feedback. The hope is that the organs-on-a-chip will knock years off of the testing process for promising drugs while also more quickly identifying drugs that have a detrimental effect on human tissues.

Robot Orderlies. The University of California, San Francisco’s Mission Bay wing is testing a robot orderly named Tug. The robots are being used to shuttle things around the hospital, delivering clean linens, meals, and drugs to rooms as needed. The hospital plans to have a fleet of 25 of the robots by this month, and each robot is already logging 12 miles of hallway travel per day.

The robots navigate using built-in maps of the hospital. They are programmed to be unobtrusive and will, for example, patiently wait to get past people blocking a hallway. The robots take the elevators, which they summon by wireless signal. There have been trials of robot orderlies before, but this is the largest to date, and the robots are taking over a host of orderly services.

Smartphone as Medical Monitor. Apple has teamed up with a number of leading hospitals to conduct trials using the iPhone 6 and smart watches to monitor patients. The idea is to monitor patients 24 hours per day after they have been released from the hospital following treatment for major health problems. The monitoring gives doctors the ability to watch key metrics such as heart rate, blood pressure, blood sugar, and other important indicators, much as they would if the patient were still in the hospital.

Apple calls the technology package HealthKit, and the fact that over a dozen hospitals are now trialing it puts Apple well ahead of rivals such as Samsung and Google. The trials will help doctors determine the degree to which tracking patients’ symptoms helps their treatment. For now the trials involve critically ill patients, but the eventual plan is to develop routine tracking for the general population that will help to spot health issues before they become otherwise apparent. You can envision someday getting a call from your doctor asking you to come in because your blood pressure or blood sugar is outside normal bounds.
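
A minimal sketch of the kind of out-of-bounds check such monitoring implies; the ranges below are hypothetical placeholders for illustration, not clinical guidance:

    NORMAL_RANGES = {                 # hypothetical, illustration only
        "heart_rate_bpm": (50, 110),
        "systolic_bp_mmHg": (90, 140),
        "blood_glucose_mg_dL": (70, 180),
    }

    def flag_abnormal(readings):
        alerts = []
        for metric, value in readings.items():
            low, high = NORMAL_RANGES[metric]
            if not low <= value <= high:
                alerts.append(f"{metric}={value} outside {low}-{high}")
        return alerts

    # One day's readings streamed from the patient's phone and watch
    print(flag_abnormal({"heart_rate_bpm": 48,
                         "systolic_bp_mmHg": 125,
                         "blood_glucose_mg_dL": 210}))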

A Path to the Infosphere

Eric Schmidt of Google recently made headlines when he said: “The Internet will disappear.” By that he meant that it will become so seamless that it will surround us everywhere. Obviously a lot of things have to happen before we can all move to a ubiquitous infosphere. For instance, as I covered in another recent blog, we will need small, nonintrusive wearables. Gone will be the fitness trackers, smartwatches, and even cell phones. We’ll instead need some small device that is always with us and that can communicate with us both audibly and visually. This could be an earbud or even an implanted chip, along with some device that can cast images onto our retinas, something far less clunky than Google Glass.

But aside from better devices, the biggest change will have to be in the way the web functions. Gone would be today’s browser model, where we interact with one program or one website at a time. The way we work on computers today is too linear: we may have many programs running, but each of them is separate, and we dip into them one at a time.

The wireless world has already shown us a partial path to the future by moving to a world of apps rather than URL-based websites. But apps still suffer from the same problem of being used one at a time, and there is very little linking between apps today. There are apps that want to dip into other apps to grab existing data, but I normally get the impression that this is more for the benefit of the app company than for the user. I constantly run across apps that ask for access to my contact list on Facebook or LinkedIn, and I always say no; unless it’s some sort of communications app, these companies are just fishing for more leads to sell their product. We don’t need more advertising linking, we need functional linking.

There is an attempt in the app world to establish better links between apps. For instance, Google’s App Indexing and Facebook’s App Links are the start of an effort to create what the industry calls deep linking, a way for apps to usefully share data for the benefit of the user. A number of other software companies are working in this area.

Today, content providers build custom cross-linking libraries to fulfill this function. The cross-linking makes it possible to move seamlessly from one app to another, but such links are custom-made and specific to a small set of apps. The links share data fields so that a customer using one app can be sent to a second app without logging in again and without having to provide basic data about who they are.
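
As an illustration of what those shared data fields might look like (the field names and the URL scheme here are hypothetical, not any vendor’s actual format):

    import json

    # Payload one app hands to another so the user isn't asked to log in again
    handoff = {
        "target": "partnerapp://checkout",    # deep link into the second app
        "session_token": "opaque-token-abc",  # lets the receiver verify the user
        "profile": {"user_id": "u-42", "display_name": "Pat"},
    }

    def receive_deep_link(raw):
        data = json.loads(raw)  # arrives serialized across the app boundary
        # A real app would validate session_token with the issuing service here
        return f"opening {data['target']} as {data['profile']['display_name']}"

    print(receive_deep_link(json.dumps(handoff)))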

But what’s really needed in the long run, if we are going to get to a seamless infosphere, is a software system that automatically lets a user shift from one app to another without ever having to log in. The whole idea of logging in has to go away. We need apps that can authenticate us without having to ask basic questions about who we are, which means the infosphere also needs some sort of foolproof authentication. Apps need to be able to trust that we are who we say we are.

But the flip side is that, as users, we need to know that apps won’t spy on us and suck out every bit of information about us. When apps can talk to each other without asking us each time, we need some sort of privacy matrix that defines what we are and are not willing to share, and the apps must follow the rules each of us establishes. So another thing needed for us to feel safe in the infosphere is a set of trustworthy privacy rules that every program we interact with will follow.
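
A sketch of what such a privacy matrix could look like in practice, deny-by-default, with all app names hypothetical:

    # The user's rules: which kinds of data each app may read from others
    PRIVACY_MATRIX = {
        "maps-app":   {"location": True,  "contacts": False},
        "social-app": {"location": False, "contacts": True},
    }

    def may_share(app, data_kind):
        """Deny by default: unknown apps and unlisted data kinds get nothing."""
        return PRIVACY_MATRIX.get(app, {}).get(data_kind, False)

    print(may_share("social-app", "location"))   # False: the user said no
    print(may_share("unknown-app", "contacts"))  # False: never granted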

One of the early dangers I see in the linking process is that it could become very proprietary. If Google, Facebook, or Apple develops a suite of linked apps that work well together but don’t link to outside apps, then we will have taken a step backwards and undone the intent of the recent net neutrality ruling. That ruling ensures that large ISPs can’t restrict new competitors from entering the web market, but it does not protect against the large content providers getting so large and ubiquitous that they kill off competitors by locking them out of linked systems. So eventually we are going to need net neutrality rules for content providers.

So we are almost there for a ubiquitous web. All we need is a total migration to apps, better wearable devices, foolproof authentication, better privacy screens to protect our data, rules that allow any app to safely link with others, and net neutrality rules that don’t let any content provider control the infosphere. Come on, Silicon Valley. We’re waiting.