Big Companies and Telecommuting

One of the biggest benefits communities see when they first get good broadband is the ability for people to telecommute, or work from home. Communities getting broadband for the first time report that this is one of the most visible changes, and that soon after the network launches almost every street and road has somebody working from home.

CCG is a great example of telecommuting; our company went virtual fifteen years ago. The main thing that sent us home in those days was that residential broadband had become better than what we could get at the office. All of our employees could get 1 – 2 Mbps broadband at home, which matched the total speed available at our office over a T1. We found that even in those early days a T1 was not enough speed to share among multiple employees.
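
Some quick arithmetic shows why the T1 felt cramped (the five-person office below is hypothetical, but the T1 speed is the standard one):

```python
# A T1 line carries 1.544 Mbps, shared by the whole office.
t1_mbps = 1.544

# Hypothetical office of five people online at once:
employees = 5
print(round(t1_mbps / employees, 2))  # ~0.31 Mbps per person

# Meanwhile each employee could get 1-2 Mbps at home, unshared.
```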

Telecommuting really picked up at about the same time that CCG went virtual. I recall that AT&T was an early promoter of telecommuting, as was the federal government. At first these big organizations let employees work at home a day or two a week as a trial, but that worked out so well that over time they grew comfortable with people working out of their homes. I’ve seen a number of studies showing that telecommuting employees are more productive than office employees and work longer hours – due in part to not having to commute. Telecommuting has become so pervasive that a 2013 Forbes cover story announced that one out of five American workers worked at home.

Another early pioneer in telecommuting was IBM. A few years ago the company announced that 40% of its 380,000 employees worked outside of traditional offices. But last week IBM announced that it was ending telecommuting. Employees in many of its major divisions, like Watson development, software development, and digital marketing and design, were told they must move back into a handful of regional offices or leave the company.

The company has seen decreasing revenues for twenty straight quarters, and there is speculation that this is a way to reduce the workforce without having to go through the pain of choosing who will leave. But what is extraordinary about this announcement is how rare it is. IBM is only the second major company to end telecommuting in recent memory, the other being Yahoo in 2013.

Both IBM and Yahoo were concerned about earnings, and that concern probably drove the decision to end telecommuting. It seems a bit ironic that companies would make this choice when it’s clear that telecommuting saves money for the employer – something IBM crowed about earlier this year.

Here are just a few of the major findings from studies of the benefits of telecommuting:

  • It improves employee morale and job satisfaction.
  • It reduces attrition as well as sick and unscheduled leave.
  • It saves companies money on office space and overhead.
  • It reduces discrimination by valuing people for personality and talent rather than race, age, or appearance.
  • It increases productivity by eliminating unneeded meetings and because telecommuters work more hours than office workers.

But there are downsides. It’s hard to train new employees in a telecommuting environment. One of the most common ways to train new people is to have them spend time with somebody more experienced – something that is difficult with telecommuting. Telecommuting makes it harder to brainstorm ideas, something that benefits from live interaction. And possibly the biggest drawback is that telecommuting isn’t for everybody. Some people cannot function well outside of a structured environment.

As good as telecommuting is for companies, it’s even better for smaller and rural communities. A lot of people want to live in the communities they grew up in, around friends and family. We’ve seen a brain drain from rural areas for decades as kids graduate from high school or college and are unable to find meaningful work. But telecommuting lets people live wherever there is broadband. Many communities that have had broadband come to town report an almost instant uptick in housing prices and demand for housing. And part of that increased demand is from those who want to choose a community rather than follow a job.

One of the more interesting projects I’ve worked on involving telecommuting was helping the city of Lafayette, Louisiana get a fiber network. Lafayette is not a rural area but a thriving mid-size city, and yet one of the major reasons residents wanted fiber was the chance to keep their kids at home. The area is largely Cajun, with a unique culture, and the community was unhappy to see its children relocate to larger cities to find jobs after graduating from the university there. Broadband alone can’t fix that kind of problem, but Lafayette is reportedly happy with the changes the fiber network has brought. That’s the kind of benefit that’s hard to quantify in dollar terms.

Technology Shorts – September 2016

Here are some new technology developments that are likely to someday improve telecommunications applications.

Single Molecule Switch. Researchers at Peking University in Beijing have created a switch that can be turned on and off by a single photon. This opens up the possibility of developing light-based computers and electronics. To make this work the researchers needed to create a switch from just one large molecule. The new switches begin with a carbon nanotube into which three methylene groups are inserted, creating a switch that can be turned on and off repeatedly.

Until now researchers had not found a molecule that was stable and predictable. In earlier attempts at the technology a switch would turn ‘on’ but would not always turn off. The researchers also needed a switch that lasted, since switches from earlier attempts quickly broke down with use. The new switches function as desired and look to be good for at least a year, a big improvement.

Chips that Mimic the Brain. Two different chips have now hit the market that introduce neural computing, mimicking the way the brain computes.

One chip comes from KnuEdge, founded by a former head of NASA. Their first chip (called “Knupath”) has 256 cores, or neuron-like brain cells, on each chip, connected by a fabric that lets them communicate with each other rapidly. This chip is built using older 32 nanometer technology, but a newer and smaller chip is already under development. Even at the larger size the new chip outperforms traditional chips by a factor of two to six.

IBM also has released a neural chip it’s calling TrueNorth. The current chip contains 4,096 cores, each one representing 256 programmable ‘neurons’. In traditional terms that gives the chip the equivalent of 5.4 billion transistors.

Both chips take a different approach than traditional chips, which use a von Neumann architecture in which the core processor and memory are separated by a bus. In most chips this architecture slows performance when the bus gets overloaded with traffic. The neural chips can instead run a different algorithm simultaneously in each core, rather than processing each algorithm in sequential order.

Both chips also use a fraction of the power required by traditional chips, since they only power the parts of the chip in use at any one time. The chips seem best suited to environments where they can learn from experience. The ability to run simultaneous algorithms means they can provide real-time feedback within the chip to the various processors. It’s not hard to imagine these chips learning to control fiber networks and tailoring the network to customer demand on the fly.
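
As a loose software analogy (my sketch of the idea, not how these chips are actually programmed), running a different algorithm in every core at once looks something like this:

```python
# Toy illustration: several "cores," each running its own algorithm
# over the same data at the same time, instead of one processor
# working through the algorithms sequentially.
from concurrent.futures import ProcessPoolExecutor

def detect_edges(data):   # stand-in algorithm #1
    return [abs(b - a) for a, b in zip(data, data[1:])]

def running_total(data):  # stand-in algorithm #2
    total, out = 0, []
    for x in data:
        total += x
        out.append(total)
    return out

def smooth(data):         # stand-in algorithm #3
    return [(a + b) / 2 for a, b in zip(data, data[1:])]

if __name__ == "__main__":
    signal = [3, 1, 4, 1, 5, 9, 2, 6]
    algorithms = [detect_edges, running_total, smooth]
    with ProcessPoolExecutor(max_workers=len(algorithms)) as cores:
        futures = [cores.submit(algo, signal) for algo in algorithms]
        for algo, fut in zip(algorithms, futures):
            print(algo.__name__, fut.result())
```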

Improvements in WiFi. Researchers at MIT’s Computer Science and Artificial Intelligence Lab have developed a way to improve WiFi capabilities by a factor of three in crowded environments like convention centers or stadiums. They are calling the technology MegaMIMO 2.0.

The breakthrough comes from finding a way to coordinate the signals sent to users through multiple routers. WiFi signals in a real-world environment bounce off objects and scatter easily, reducing efficiency. But by coordinating the transmissions to a given device, like a cellphone, across multiple routers, the system can compensate for the interference and scattering and deliver a coherent signal to the user.
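
The payoff from that coordination can be seen in a toy numeric sketch (my illustration of coherent combining in general, not MIT’s actual algorithm):

```python
import math

def combined_amplitude(phases):
    """Amplitude of equal-strength radio signals summed at the receiver."""
    re = sum(math.cos(p) for p in phases)
    im = sum(math.sin(p) for p in phases)
    return math.hypot(re, im)

# Two routers phase-aligned at the receiver: the amplitudes add.
print(combined_amplitude([0.0, 0.0]))        # 2.0
# Uncoordinated routers a half-cycle apart nearly cancel each other.
print(combined_amplitude([0.0, math.pi]))    # ~0.0
# Coordinating transmissions keeps the signals near-aligned.
```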

While this has interesting applications in crowded public environments, the real potential will be realized as we try to coordinate with multiple IoT sensors in an environment.

What Are Smart Cities?

I’ve been seeing smart cities mentioned a lot over the last few years and so I spent some time lately reading about them to see what all the fuss is about. I found some of what I expected, but I also found a few surprises.

What I expected to find is that the smart city concept means applying computer systems to automate and improve some of the major systems that operate a city. And that is what I found. The first smart city concept was one of using computers to improve traffic flow, and that is something that is getting better all the time. With sensors in the roads and computerized lights, traffic systems are able to react to the actual traffic and work to clear traffic jams. And I read that this is going to work a lot better in the near future.
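
A demand-actuated traffic light is easy to picture in code. Here is a minimal sketch (the sensor inputs and thresholds are hypothetical, not from any real traffic system):

```python
def next_phase(current_green, cars_waiting, seconds_in_phase,
               min_green=15, max_green=90):
    """Keep the current green or switch, based on road-sensor queues.

    cars_waiting is a dict of queue lengths, e.g. {"main": 4, "cross": 0}.
    """
    other = "cross" if current_green == "main" else "main"
    if seconds_in_phase < min_green:      # never switch too quickly
        return current_green
    # Switch when the green leg is empty but cars wait on the other,
    # or when the green has simply run too long.
    if (cars_waiting[current_green] == 0 and cars_waiting[other] > 0) \
            or seconds_in_phase >= max_green:
        return other
    return current_green

# Nobody moving on the main road, two cars waiting on the cross street:
print(next_phase("main", {"main": 0, "cross": 2}, 20))  # -> "cross"
```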

But smart city means a lot more. It means constructing interconnected webs of smart buildings that use green technology to save energy or even generate some of the energy they need. It means surveillance systems to help deter and solve crimes. It means making government more responsive to citizen needs in areas like recycling, trash removal, snow removal, and general interfaces with city systems for permits, taxes, and other needs. And it will soon mean integrating the Internet of Things into a city in pursuit of the many ways government can do a better job.

I also found that this is a worldwide phenomenon and there is some global competition between the US, Europe, China, and India to produce the most efficient smart cities. The conventional wisdom is that smart cities will become the foci of global trade and that smart cities will be the big winners in the battle for global economic dominance.

But I also found a few things I didn’t know. It turns out that the whole smart city concept was dreamed up by companies like IBM, Cisco, and Software AG. The whole phenomenon was not so much a case of cities clamoring for solutions, but rather of these large companies selling a vision of where cities ought to be going. And the cynic in me sees red flags and wonders how much of this phenomenon is an attempt to sell large, overpriced hardware and software systems to cities. After all, governments have always been some of the best clients for large corporations because they will often overpay and have fewer performance demands than commercial customers.

I agree that many of the goals for smart cities sound like great ideas. Anybody who has ever sat at a red light for a long time while no traffic was moving on the cross street has wished that a smart computer could change the light as needed. The savings to a community from more efficient traffic are immense in terms of saved time, more efficient businesses, and less pollution. And most cities could certainly be more efficient when dealing with citizens. It would be nice to be able to put a large piece of trash on the curb and have it whisked away quickly, or to process a needed permit or license online without having to stand in line at a government office.

But at some point a lot of what the smart city vendors are pushing starts to sound like a Big Brother solution. For example, they are pushing surveillance cameras everywhere, tied into software systems smart enough to make sense of the mountains of captured images. But I suspect that most people who live in a city don’t want their city government spying on and remembering everything they do in public any more than we want the NSA to spy on our Internet usage at the federal level.

So perhaps cities can be made too smart. I can’t imagine anybody minding if cities get more efficient at the things they are supposed to provide for citizens. People want their local government to fix the potholes, deliver drinkable water, provide practical mass transit, keep the traffic moving, and make them feel safe when they walk down the street. When cities go too far past those basic needs they have either crossed into being too intrusive in our lives or have begun competing with things that commercial companies ought to be doing. So I guess we want our cities to be smart, but not too smart.

New Technology for August 2015

This is my monthly look at new technologies that might eventually impact our industry.

Small Chip from IBM. IBM and a team of partners including Samsung, GlobalFoundries, and the State University of New York at Albany have made a significant leap forward by developing a computer chip with 7 nanometer features (a nanometer is a billionth of a meter). That’s half the size of other cutting-edge chips in the industry. Experts are calling IBM’s new chip a leap that is two generations ahead of the current chip industry. IBM is also introducing a 10 nanometer chip.

IBM’s trial chip contained transistors just to prove the concept and so the chip wasn’t designed for any specific purpose. But this size breakthrough means that the industry can now look forward to putting much greater computer power into small devices like a smart watch or a contact lens.

A chip this small can be used in two different ways. It can reduce power requirements in existing devices, or it could be used to greatly increase computational power using the same amount of chip space.

IBM has contracted with GlobalFoundries to build the new chip for the next ten years. This should provide significant competition to Intel since currently nobody else in the industry is close to having a 7 nanometer chip.

Cheaper Batteries. Yet-Ming Chiang of MIT has developed processes that will significantly reduce the cost of building batteries. Chiang has not developed a new kind of battery, but instead has designed a new kind of battery factory. The new factory can be built for a tenth of the price of an existing battery factory, which ought to result in a reduction of battery prices of about 30%.

This is an important breakthrough because there is a huge potential industry in storing electric power offline until it’s needed, but at today’s battery prices that is not really practical. This can be seen in the price of Elon Musk’s new storage batteries for solar power – they are priced so that socially conscious rich people can use the technology, but they are not yet cheap enough to be affordable for everybody.

A 30% reduction in battery costs starts to make lithium-ion batteries competitive with fossil fuel power. Today these batteries cost about $500 per kilowatt-hour, which is four times the cost of using gasoline. Chiang’s goal is to get battery costs down to $100 per kilowatt-hour.
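
The arithmetic behind those numbers, using only the figures above:

```python
today_cost = 500                     # $/kWh for lithium-ion storage today
gasoline_equiv = today_cost / 4      # $500 is "four times" gasoline, so
                                     # parity sits near $125/kWh
factory_savings = today_cost * (1 - 0.30)  # 30% cheaper batteries

print(gasoline_equiv)    # 125.0
print(factory_savings)   # 350.0 -> the new factory gets partway to parity
print(100)               # Chiang's $100/kWh goal would beat gasoline outright
```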

Metasheets. A metasheet is a material that will block a very narrow band of radiation but let other radiation pass. Viktar Asadchy of Aalto University in Finland has developed a metamirror that will block specific radiation bands and will reflect the target radiation elsewhere while letting other radiation pass.

This is not a new concept, but attempts to do this in the past have usually bounced the target radiation back at the source. This breakthrough will let the target radiation be bounced to a material that can absorb it and radiate it as heat.

This could be a big breakthrough for numerous devices by creating small filters that allow the dissipation of dangerous radiation from chips, radios, and other devices. This could result in far safer electronics, cut down on interference caused by stray radiation, and make many electronic components function better.

New Tech – September 2014

As I do from time to time I highlight some of the more interesting technologies that I run across in my reading. A few of these might have a big impact on our industry.

First is news that IBM has developed a new storage method that would be a giant leap forward in storage density. The technology is being called racetrack memory. It works by lining up tiny magnets one atomic layer deep on sheets. The atoms can be moved upward and downward in charge in a ‘highly coherent manner’ by the application of a simple current.

That in itself is interesting, but when these layers are laid atop one another in sheets there is a multiplicative effect in the amount of storage available in a given space. IBM has already built a small flash device using the technology with 15 layers and it increased the storage capacity of the flash drive by a factor of 100.

The small size also means a much faster read/write time and reduced power requirements. One of the biggest advantages of the technology is that there are no moving parts. This makes the technology infinitely rewritable, which means it ought to never wear out. IBM believes this can be manufactured affordably and will become the new storage standard. As they add even more layers than the 15 in the prototype they will get an even bigger multiplier of storage capacity compared to any of today’s storage technologies. Expect within a few years to see multi-terabyte flash drives and cell phones.

The next new technology comes from a research team at the University of Washington. They call the new technology WiFi Backscatter and they will be formally publishing the paper on the research later this month. The promise of the technology is to create the ability to communicate with small sensors that won’t need to be powered in any other way.

WiFi Backscatter can communicate with battery-free RF devices by either reflecting or not reflecting a WiFi signal traveling between a router and some other device like a laptop. The interruptions in the reflections can be read as a binary code of off and on signals, giving the RF device a way to communicate with the world without power. The team has not only detailed how this will work, but has built a prototype that involves a tiny antenna and circuitry that can be connected to a wide range of electronic devices. This first generation antenna array draws a small amount of power from the electronic device, but the team believes the approach ought to work with battery-free sensors and other IoT devices. This technique could be the first technology to enable multiple tiny IoT sensors scattered throughout our environment.
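
Conceptually the receiving side is just reading signal strength over time and treating ‘reflection’ versus ‘no reflection’ as ones and zeros. A toy sketch (the readings and threshold are made up, not the team’s actual decoder):

```python
def decode_backscatter(rssi_samples, threshold):
    """Turn received-signal-strength readings (dBm) into bits.

    A reading above the threshold means the tag reflected the WiFi
    signal (1); below it means the tag let the signal pass (0).
    """
    return [1 if s > threshold else 0 for s in rssi_samples]

# Hypothetical RSSI readings as a tag toggles its antenna:
samples = [-38, -38, -45, -44, -39, -45, -38, -45]
print(decode_backscatter(samples, threshold=-41))
# -> [1, 1, 0, 0, 1, 0, 1, 0]
```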

Finally, scientists at Michigan State have developed a ‘transparent luminescent solar concentrator’ that can generate electricity from any clear surface such as a window or a cellphone screen. The device works by absorbing invisible light in the ultraviolet and infrared bands and re-emitting it at a concentrated infrared frequency that triggers tiny photovoltaic cells embedded around the edges of the clear surface.

The goal of the team is to make the technology more efficient. The current prototype is only about 1% efficient in converting light into electricity, but they believe they can get this up to about 5% efficiency. To compare that number, there are various non-transparent solar concentrators today that are about 7% efficient.

The big advantage of this technology is the transparency. Imagine a self-powering cellphone or a high-rise glass building that generates a lot of its power from the windows. This technology will allow putting solar generation in places that were never before contemplated. In a blog from last month I noted a new solar technology that could be painted onto any surface. It looks like we are headed for a time when any portion of a building can be generating electricity locally including the roof, the walls and the windows.

Cool New Stuff – Computing

As I do once in a while on Fridays, I am going to talk about some of the coolest new technology I’ve read about recently; both items relate to new computers.

First is the possibility of a desktop supercomputer in a few years. A company called Optalysys says they will soon be releasing a first generation chip set and desktop-size computer that will be able to run at a speed of 346 gigaflops. A flop is a floating-point operation, and flops per second measure how many of them a computer can perform: a gigaflop is 10⁹ operations per second, a petaflop is 10¹⁵ and an exaflop is 10¹⁸. The fastest supercomputer today is the Tianhe-2, built by a Chinese university, which operates at 34 petaflops – obviously much faster than this first desktop machine.

The computer works by beaming low-intensity lasers through layers of liquid crystal. They say that in upcoming generations they will have a machine that can do 9 petaflops by 2017, with a goal of a machine that will do 17.1 exaflops (17,100 petaflops) by 2020. The 2017 version would be about a quarter as fast as the fastest supercomputer today, yet far smaller and using far less power. This would make it possible for many more companies and universities to own a supercomputer. And if they really can achieve their goal by 2020 it means another big leap forward in supercomputing power, since that machine would be several orders of magnitude faster than the Chinese machine today. This is exciting news because in the future there are going to be mountains of data to analyze, and it’s going to take plentiful, affordable supercomputing to keep up with the demands of big data.
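
It’s worth checking those comparisons against the numbers given:

```python
tianhe_2 = 34e15          # 34 petaflops (10**15 flops)
optalysys_2017 = 9e15     # the promised 2017 machine
optalysys_2020 = 17.1e18  # the 2020 goal: 17.1 exaflops (10**18 flops)

print(optalysys_2017 / tianhe_2)  # ~0.26 -> about a quarter of Tianhe-2
print(optalysys_2020 / tianhe_2)  # ~503  -> roughly 500x faster, between
                                  # two and three orders of magnitude
```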

In a somewhat related but very different approach, IBM has announced a chip that mimics the way the human brain works. The chip, which they call TrueNorth, contains the equivalent of one million human neurons and 256 million synapses.

The IBM chip is a totally different approach to computing. The human brain stores memories and does computing within the same neural network, and this chip does the same thing. IBM has been able to create what they call spiking neurons within the chip, which means the chip can store data as a pattern of pulses much the way the brain does. This is a fundamentally different approach than traditional computers, which use what is called von Neumann computing that separates data and computing. One of the problems with traditional computing is that data has to be moved back and forth to be processed, meaning that normal computers don’t do anything in real time and there are often data bottlenecks.

The IBM TrueNorth chip, even in this first generation, is able to process things in real time. Early work on the chip has shown that it can do things like recognize images in real time, both faster and with far less power than traditional computers. IBM doesn’t claim that this particular chip is ready to put into products; they see it as the first prototype for testing this new method of computing. It’s even possible that this might be a dead end in terms of commercial applications, although IBM already sees possibilities for this kind of computer in both real-time and graphics applications.

This chip was designed as part of a DARPA program called SyNAPSE, short for Systems of Neuromorphic Adaptive Plastic Scalable Electronics, which is an effort to create brain-like hardware. The end game of that program is to eventually design a computer that can learn, and this first IBM chip is a long way from that end game. And of course, anybody who has seen the Terminator movies knows that DARPA is shooting to develop a benign version of Skynet!

Hello Siri . . .


Gartner, a leading research firm, issued a list of the ten top strategic technology trends for 2014. By strategic they mean that these are developments that are getting a lot of attention and development in the industry, not necessarily that these developments will come to full fruition in 2014. One of the items on the list was ‘smart machines’ and under that category they included self-driving cars, smart advisors like IBM’s Watson and advanced global industrial systems, which are automated factories.

But I want to look at the other item on their list, which is contextually aware intelligent personal assistants. This will essentially be Apple’s Siri on steroids. At first this is expected to be done mostly using cell phones or other mobile devices. Eventually one would think it will migrate towards something like Google Glass, a smartphone, a bracelet or some other way to have the assistant always with you.

Probably the key part of the descriptive phrase is contextual. To be useful, a personal assistant has to learn and understand the way a person talks and lives in order to become completely personalized to them. To be contextual, the current Siri needs to grow to learn things by observation. To be the life-changing assistant envisioned by Gartner is going to require software that can learn to anticipate what you want. As you are talking to a certain person, your assistant ought to be able to pick out of the conversation those bits and pieces that you are going to want it to remember. For example, somebody may tell you their favorite restaurant or favorite beer, and you would want your assistant to remember that without being told to do so.
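
As a toy illustration of picking those bits out of a conversation (a crude keyword sketch of mine; a real assistant would need genuine language understanding):

```python
import re

def remember_preferences(utterance, memory):
    """Scan one sentence for 'favorite X is Y' and store the pair."""
    for thing, value in re.findall(r"favorite (\w+) is ([\w' ]+)",
                                   utterance.lower()):
        memory[thing] = value.strip()
    return memory

memory = {}
remember_preferences("My favorite restaurant is Chez Panisse", memory)
remember_preferences("Honestly, my favorite beer is Guinness", memory)
print(memory)  # {'restaurant': 'chez panisse', 'beer': 'guinness'}
```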

Both Apple’s and Microsoft’s current personal assistants have already taken the first big step in the process in that they can converse in something close to natural language. Compare what today’s assistants can already do to Google’s search engine, which makes you type in awkward phrases. Any assistant is going to have to be completely fluent in a person’s language.

One can easily envision a personal assistant that helps you learn when you are young and then sticks with you for life. Such an assistant will literally become the most important ‘person’ in somebody’s life. An effective assistant can free a person from many of the mundane tasks of life. You will never get lost, have to make an appointment, remember somebody’s birthday or do many of the routine things that are part of life today. A good assistant will free you from the mundane. But it still won’t take out the trash, although it can have your house-bot do that.

In the future you can envision this assistant tied into the Internet of things so it would be the one device you give orders to. It would then translate and talk to all of your other systems. It would talk to your smart house, talk to your self-driving car, talk to the system that is monitoring your health, etc.

The biggest issue with this kind of personal assistant is going to be privacy. A true life-assistant is going to know every good and bad thing about you, including your health problems and every one of your ugly bad habits. It is going to be essential that this kind of system stay completely private and be somehow immune to hacking. Nobody can trust an assistant in their life that others can hack or peer into.

One might think that this is something on the distant horizon, but there are many industry experts who think this is probably the first thing on the smart machine list that will come to pass, and that there will be pretty decent versions of this within the next decade. Siri is already a great first step, although often completely maddening. But as this kind of software improves it is not hard to picture this becoming something that you can’t live without. It will be a big transition for older people, but our children will take to this intuitively.

Grasping the Internet of Things


I have written several blog entries about the Internet of Things, but I have not really defined it very well. I read as many articles about the topic as I can find, since I find it personally fascinating. To me this is mankind finally using computer technology to affect everyday life, and it goes far beyond the things you can do with a PC or tablet.

I recently saw an article that summarized the direction of the Internet of Things into three categories – and this is a good description of where this is all headed. These categories are:

Knowledge of Self. This part of the Internet of Things is in its infancy. But the future holds the promise that the Internet can be used to help people with self-control, mindfulness, behavior modification and training.

Today there are gimmicky things people are doing with sensors, such as counting the number of times you open the refrigerator as a way to remind you to lose weight. But this can be taken much further. We are not far from a time when people can use computers to help them change their behavior effectively, be that losing weight or getting their work done on time. Personal sensors will get to know you intimately and will be able to tell when you are daydreaming or straying from your tasks and can bring you back to what you want to accomplish. Computers can become the good angel on your shoulder, should you choose that.

Probably the biggest promise in this area is that computers can be used to train anybody in almost anything they want to know. The problem with the Internet today is that it is often nearly impossible to distinguish between fact and fiction. But it ought to be possible to have the needed facts at your fingertips in real time. If you have never changed a tire, your own personal computer assistant will lead you through the steps and even show you videos of what to do as you do it for the first time. Such training could bring universal education to everybody in the world, which would be a gigantic transformation of mankind – and would eliminate the widespread ignorance and superstitions that still come today from lack of education.

Knowledge of Others. Perhaps the two most important developments in this area will be virtual presence and remote health care.

With virtual presence you will be able to participate almost anywhere as if you were there. This takes the idea of video conferencing and makes it 3D and real-time. This is going to transform the way we do business, hire employees and seek help from others.

But perhaps the biggest change is going to come in health care. Personal medical sensors are going to be able to monitor your body continuously and will alert you to any negative change. For instance, you will know when you are getting the flu at the earliest possible time so that you can take medicine to mitigate the symptoms.

There is also great promise that medical sensors will make it possible for people to live in their own homes for longer as we all age, something just about everybody wants. Sensors might even change the way we die. Over 80% of people say they want to die at home, but in 2009 only 33% did so. Medical monitoring and treatment tied to sensors ought to let a lot more of us die in the peace of our own beds.

Perhaps the biggest promise of personal monitors is the ability to detect and treat big problems before they get started. Doctors are saying that it ought to be possible to monitor for pre-cancerous cells and kill them when they first get started. If so, cancer could become a disease of the past.

Knowledge of the World. The Internet of Things promises to eventually put sensors throughout the environment. More detailed knowledge of our surroundings will let us micromanage our environment. Those who want a different humidity level will be able to have it adjusted automatically in rooms where they are alone.

But remote sensors hold the most promise in areas of things like manufacturing and food production. For instance, sensors can monitor a crop closely and can make sure that each part of a field gets the right amount of water and nutrition and that pests are controlled before they get out of hand. Such techniques could greatly increase the production of food per acre.

And we can monitor anything. People living near a volcano, for example, will know far ahead of time when there has been an increase in activity.

Monitoring the wide world is going to be the last part of the Internet of Things to be implemented because it requires dramatic new technologies, in terms of both small sensors and the ability to interpret what they are telling us. But a monitored world is going to be a very different world – probably one that is far safer, but also one with far less personal freedom – at least the freedom to publicly misbehave.

Are You Collaborating?

I am not talking about World War II movies and I hope none of you have decided to side with the enemy (whoever that is). Collaboration software is a tool that every business with employees who work at different locations ought to consider.

Collaborative software began several decades ago with Lotus Notes. That software allowed multiple users on the same WAN to work on the same spreadsheet or word document at the same time. And Lotus Notes had the added feature of letting you link spreadsheets and word documents, so that any change made to a spreadsheet would automatically flow into your word document. But Lotus Notes required people to be on the same WAN, which in most companies meant being in the same building, and so the concept never became very popular; on top of that, Microsoft came along and kicked Lotus’s butt in the marketplace.

And so collaborative software mostly died off for a while, although there were a few open source programs that were widely used by universities and others who love open source software.

But collaborative software is back in a number of different variations and if your company has employees at more than one location, then one of these new software products is going to be right for you. Here are some of the features you can find in various collaborative software today:

  • Number one is the ability to let multiple people work on the same document simultaneously. And instead of just spreadsheets and word documents, this has been extended to any software that the users all have rights to use. Most software also creates a log showing who made changes to a document and when.
  • Supports multiple devices. Collaborative software is no longer just for PCs, and employees using tablets and smartphones can share in many of the features. As an example, collaborative software is a great way to keep sales staff in the field fully engaged with everybody else in the company.
  • Communicate internally. Many collaborative software programs come with chat rooms, instant messaging and text messaging tools that make it fast and easy to communicate with other employees. Why send somebody an email or call them if you only have a quick question that they can answer on an IM?
  • Some systems let you know where people are and whether they are available to communicate now. This stops you from calling people who are not in their office and lets you reach them in a faster way instead.
  • Create better communications history. In some software each user gets a home page, much like a Facebook page, that shows everything they have done, meaning that other employees can often go find the information they need without bothering that person.
  • This can become the new way to structure corporate data. With a program like SharePoint you can quickly create folders specific to a topic or a project and then give access only to those you want to have access to that data. This used to require the intervention of somebody in the IT department but now can be done by almost anybody.
  • Gives you a great tool to work with your largest customers. You can give your contacts at your largest customers limited access to your systems so that they can quickly ask questions or talk to the right person by chat or IM. This is a great new way to extend your customer service platform and make it real time. You can easily isolate outsiders from corporate information while giving them access to the social networking aspects of the software.

So what are some of the collaborative software tools to consider? Here are a few (and there are many others).

  • Podio. This is software that is free for up to five users. It might be a good way to see if you like the concept. After five users it’s $9 per employee per month.
  • IBM (Lotus). The Lotus name is not dead and is now the brand name of the IBM collaborative suite of products.
  • Intuit has a product called QuickBase that is a collaborative suite of software. One good thing about it is that it will integrate with QuickBooks and other Intuit products you might already be using.
  • SharePoint is Microsoft’s collaborative suite of products and has done very well in the large business sector.

My Take on the Internet of Things

I think there might be as many different predictions about the Internet of Things as there are bloggers and pundits. So I thought I would join the fray and give my take as well. My take is that the Internet of Things is going to involve a new set of technologies that will enable us to get feedback from our local environment. That is going to allow for the introduction of a new set of tools and toys, some frivolous and some revolutionary.

I have read scores of articles talking about how this is going to change daily life for households. The day may come when our households resemble the Jetsons and where we have robots with more common sense than most of us running our households, but we are many years away from that.

There will be lots of new toys and gadgets that will sometimes make our daily lives easier. For instance, food we buy may have little sensors in the packaging that tell you when your produce is getting ready to go bad so that you won’t forget to eat it. There will be better robots that can vacuum the floors and maybe even do laundry and walk the dog. But I don’t see these as revolutionary, and they probably won’t be affordable for the general populace for some time. For a long time the Internet of Things is going to create toys that wealthy people or tech geeks play with, and it will take years for these technologies to make it into everybody’s homes. Very little of what I have been reading about household uses sounds revolutionary.

The biggest revolutionary change that will directly affect the average person is medical monitoring. Within a decade or two it will be routine to have sensors tracking your vitals at all times, so that they will detect that something is wrong with you before you do. There will be little sensors in your bloodstream looking for things like cancer cells, which means we won’t have to worry about curing cancer; we’ll head it off before it gets started. This will make healthcare proactive and preventative, and it will eventually be affordable to all.


I think the most immediate big beneficiary of the Internet of Things is going to be at the industrial level. For instance, it is not hard to envision soil sensors that tell a farmer the conditions in each part of his fields so that his smart tractor can fertilize or weed each section only as appropriate. There is already work going on to produce mini-sensors that can be sent underground into oil fields to give oil geologists the most accurate picture they have ever had of the underground geology. This will make it possible to extract a lot more oil and to do so more efficiently.

Small sensors will also make it a lot easier to manufacture complex objects or complicated molecules. This could lead to the production of new polymers and materials that are cheaper, stronger and biodegradable. It will mean medicines that can be modified to interact with your specific DNA to avoid side effects. It means 3D printing that will feel like a Star Trek replicator, able to combine complex molecules to make food and other objects. NASA has already undertaken a project to print pizza as the first step towards being able to print food in space to enable long flights to Mars.

And a lot of what the Internet of Things might mean is a bit scary. Some high-end department stores already track customers with active cell phones to see exactly how they shop. But this is going to get far more personal, and with facial recognition software stores are going to know everything about how you shop. They will not just know what you buy, but what you looked at and thought about buying. And they will offer you instant on-site specials to get you to buy – ads aimed just at you, right where you are standing.

I remember reading a science fiction book once where the ads on the street changed for each person who walked by, and we are not that far away from that reality. There are already billboards in Japan that look at the demographics of whoever is in front of them and change the ads appropriately. Add facial recognition into that equation and they will move beyond showing ads aimed at middle-aged men to showing an ad aimed directly at you. The Internet of Things is going to create a whole new set of attacks on privacy, and as a society we will need to develop strategies and policies to protect ourselves against the onslaught of billions of sensors.

Probably one of the biggest uses of new sensors will be in energy management. And this will be done on the demand end rather than the supply end. Today we all have devices that use electricity continuously even when we aren’t using them. It may not seem like a lot of power to have lights on in an empty room or to have the water warm all of the time in an automatic coffee pot, but multiply these energy uses by millions and billions and it adds up to a lot of wasted power. You read today about the smart grid, which is an effort to be more efficient with electricity mostly on the demand side. But the real efficiencies will be gained when the devices in our life can act independently to minimize power usage.
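
Here is a sketch of what a device acting independently might look like (the device and its rules are hypothetical, just to make the idea concrete):

```python
# Hypothetical demand-side logic: an appliance decides on its own to
# stop drawing power when nothing suggests it is needed.
from dataclasses import dataclass

@dataclass
class CoffeePot:
    warming: bool = True

    def tick(self, occupants_home: bool, hour: int):
        # Keep the water hot only when somebody is home and it is
        # plausibly coffee time; otherwise stop wasting power.
        wanted = occupants_home and 5 <= hour <= 10
        if self.warming != wanted:
            self.warming = wanted
            print("warmer", "on" if wanted else "off")

pot = CoffeePot()
pot.tick(occupants_home=False, hour=8)  # house empty -> warmer off
pot.tick(occupants_home=True, hour=8)   # morning, home -> warmer on
pot.tick(occupants_home=True, hour=14)  # afternoon -> warmer off
```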

Sensor technologies will be the heart of the Internet of Things and will be able to work on tasks that nobody wants to do. For instance, small nanobots that can metabolize or bind oil could be dispatched to an oil spill to quickly minimize environmental damage. The thousands of toxic waste dumps we have created on the planet can be restored by nanobots. Harvard has been working on developing a robot bee and it is not hard to envision little flying robots that could be monitoring and protecting endangered species in the wild. We will eventually use these technologies to eat the excess carbon dioxide in our atmosphere and to terraform Mars with an oxygen atmosphere and water.

Many of the technologies involved will be revolutionary and they will spark new debates in areas like privacy and data security. Mistakes will be made and there will be horror stories of little sensors gone awry. Some of the security monitoring will be put to bad uses by repressive regimes. But the positive things that can come out of the Internet of Things make me very excited about the next few decades.

Of course, a lot of bandwidth will be needed. The amount of raw data we will be gathering will swamp current bandwidth capacity. We are going to need bandwidth everywhere, from the city to the factory to the farm, and areas without bandwidth are going to be locked out of a lot more than streaming Netflix. The kind of bandwidth we are going to need is going to require fiber, and we need to keep pushing fiber out to where people play and work.