Google’s Experiment with Cellular Service

As I’m writing this (a week ago), Google opened up the ability to sign up for its Project Fi phone service for a 24-hour period. Until now the service has been by invitation only, limited, I think, by the availability of the Google Nexus phones. But they are launching the new Nexus 5X phone, and so they opened sign-ups to everyone for one day.

The concept behind the Google phone plan is simple. They sell unlimited voice and text for $20 per month and sell data at $10 per gigabyte as it’s used. The Google phone works on WiFi networks and will use either the Sprint or T-Mobile network when a caller is out of range of WiFi. And roaming is available on other carriers when a customer is not within range of any of the preferred networks.
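The math of the plan is easy to sketch. Only the $20 base and the $10-per-gigabyte rate come from the plan as described; the function name and the assumption of simple linear metering (no proration or credits) are mine, for illustration:

```python
# Sketch of the Project Fi pricing described above. The $20 base and
# $10/GB rate are from the plan; simple linear metering is assumed.

BASE_MONTHLY = 20.00   # unlimited voice and text
PER_GB = 10.00         # data billed as it's used

def monthly_bill(gb_used: float) -> float:
    """Return the month's total: flat base plus metered data."""
    return BASE_MONTHLY + PER_GB * gb_used

print(monthly_bill(0))    # 20.0 -- voice/text only
print(monthly_bill(2.5))  # 45.0 -- base plus 2.5 GB of data
```

The appeal is that a light data user pays close to $20, while a heavy user pays in proportion to actual usage rather than for a fixed bucket.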

Cellular usage is seamless for customers and Google doesn’t even tell a customer which network they are using at any given time. They have developed a SIM card that can choose between as many as 10 different carriers although today they only have deals with the two cellular carriers. The main point of the phone is that a customer doesn’t have to deal with cellular companies any longer and just deals with Google. There are no contracts and you only pay for what you use.

Google still only supports this on their own Nexus phones for now although the SIM card could be made to work in numerous other phones. Google is letting customers pay for the phones over time similar to what the other cellular carriers do.

Google is pushing the product harder in markets where it has gigabit networks. Certainly customers who live with slow or inconsistent broadband won’t want their voice calls routed first over WiFi.

The main issue I see with the product is that it is an arbitrage business plan. I define as arbitrage anything that relies on using a primary resource over which the provider has no control. Over the years a lot of my clients have become very familiar with other arbitrage plans that came and went at the whim of the underlying providers. For example, Sprint has sold numerous wholesale products like long distance, dial tone, and cellular plans that some of my clients built into a business plan, only to have Sprint eventually decide to pull the plug and stop supporting the wholesale product.

I am sure Google has tied down Sprint and T-Mobile for the purchase of wholesale voice and texting for some significant amount of time. But as with any arbitrage situation, these carriers could change their minds in the future and strand both Google and all of its customers. I’m not suggesting that will happen, but I’ve seen probably a hundred arbitrage opportunities come and go in the marketplace during my career and not one of them lasted as long as promised.

It’s been rumored that Apple is considering a similar plan. If they do, then the combined market power of both Google and Apple might make it harder for the underlying carriers to change their mind. But at the end of the day only a handful of companies own the vast majority of the cellular spectrum and they are always going to be the ones calling the shots in the industry. They will continue with wholesale products that make them money and will abandon things that don’t.

There are analysts who have opined that what Google is doing is the inevitable direction of the industry and that cellular minutes will get commoditized much in the manner as long distance in the past. But I think these analysts are being naive. AT&T and Verizon are making a lot of money selling overpriced cellular plans to people. These companies have spent a lot of money for spectrum and they know how to be good monopolists. I still laugh when I think about how households that used to spend $30 to $50 per month for a landline and long distance now spend an average of $60 per family member for cellphones. These companies have done an amazing job of selling us on the value of the cellphone.

Perhaps the analysts are right and Google, maybe with some help from Apple, will create a new paradigm where the carriers have little choice but to go along and sell bulk minutes. But I just keep thinking back to all of the past arbitrage opportunities where the buyers of the service were also told that the opportunity would be permanent – and none of them were.

New Video Format

Six major tech companies have joined together to create a new video format. Google, Amazon, Cisco, Microsoft, Netflix, and Mozilla have formed a new group called the Alliance for Open Media.

The goal of this group is to create a video format that is optimized for the web. Current video formats were created before there was widespread video viewing through web browsers on a host of different devices.

The Alliance has listed several goals for the new format:

Open Source: Current video codecs are proprietary, making it impossible to tweak them for a given application.

Optimized for the Web: One of the most important features of the web is that there is no guarantee that all of the bits of a given transmission will arrive at the same time. This is the cause of many of the glitches one gets when trying to watch live video on the web. A web-optimized video codec will be allowed to plow forward with less than complete data. In most cases a small number of missing bits won’t be noticeable to the eye, unlike the fits and starts that often come today when video playback is delayed waiting for packets.
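The difference between stalling and plowing forward can be sketched in a few lines. This is purely illustrative and not any real codec’s behavior: a playback loop that conceals a lost frame by repeating the last good one, rather than pausing until every packet arrives:

```python
# Illustrative sketch (not a real codec API): loss-tolerant playback
# that keeps going when frames are missing instead of stalling.

def play(frames):
    """frames: list of frame payloads, with None marking lost packets."""
    rendered = []
    last_good = None
    for frame in frames:
        if frame is not None:
            last_good = frame
            rendered.append(frame)
        elif last_good is not None:
            # Conceal the loss by repeating the last good frame
            # rather than waiting for a retransmission.
            rendered.append(last_good)
        # If nothing has arrived yet, there is nothing to show.
    return rendered

print(play(["f1", None, "f3"]))  # ['f1', 'f1', 'f3'] -- no stall at the gap
```

A viewer sees one briefly repeated frame instead of a frozen player, which is exactly the trade the paragraph above describes.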

Scalable to Any Device and Any Bandwidth: One of the problems with existing codecs is that they are not flexible. For example, consider a time when you wanted to watch something in HD but didn’t have enough bandwidth. The only option today is to fall all the way back to an SD transmission, at a far lower quality. But between these two standards is a wide range of possible options, where a smart codec could analyze the bandwidth available and then maximize the transmission by choosing among the many variables within the codec. This means you could produce ‘almost HD’ rather than defaulting to something of much poorer quality.
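A toy version of that bandwidth-aware selection looks like this. The rendition ladder and its bitrate thresholds are made-up numbers for illustration; the point is the range of rungs between “full HD” and “SD”:

```python
# Hypothetical rendition ladder: instead of a binary HD-or-SD choice,
# pick the best quality the measured bandwidth can sustain.

LADDER = [  # (label, required Mbps), highest quality first
    ("1080p HD", 8.0),
    ("900p",     5.5),
    ("720p",     3.5),
    ("540p",     2.0),
    ("480p SD",  1.2),
]

def pick_rendition(available_mbps: float) -> str:
    """Return the highest-quality rung the connection can support."""
    for label, needed in LADDER:
        if available_mbps >= needed:
            return label
    return LADDER[-1][0]  # fall back to the lowest rung

print(pick_rendition(6.0))  # 900p -- "almost HD" instead of dropping to SD
```

With 6 Mbps available, a rigid codec would fall back to SD; the ladder instead lands on an intermediate quality.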

Optimized for Computational Footprint and Hardware: This means that the manufacturers of devices would be able to optimize the codec specifically for their devices. Not all smartphones, tablets, or other devices are the same, and manufacturers would be able to choose a video format that maximizes the video display for each of their devices.

Capable of Consistent, High-quality, Real-time Video: Real-time video is a far greater challenge than streaming video. Video content is not uniform in quality and characteristics, and there is thus a major difference in quality between watching two different video streams on the same device. A flexible video codec could standardize quality much in the same way that a sound system can level out volume differences between different audio streams.

Flexible for Both Commercial and Non-commercial Content: A significant percentage of videos watched today are user-generated and not from commercial sources. It’s just as important to maximize the quality of Vine videos as it is for commercial shows from Netflix.

There is no guarantee that this group can achieve all of these goals immediately, because that’s a pretty tall task. But the power of these various firms combined certainly is promising. The potential for a new video codec that meets all of these goals is enormous. It would improve the quality of web videos on all devices. I know that personally, quality matters and this is why I tend to watch videos from sources like Netflix and Amazon Prime. By definition streamed video can be of much higher and more consistent quality than real-time video. But I’ve noticed that my daughter has a far lower standard of quality than I do and watches videos from a wide variety of sources. Improving web video, regardless of the source, will be a major breakthrough and will make watching video on the web enjoyable to a far larger percentage of users.

Universal Internet Access

While many of us are spending a lot of time trying to find a broadband solution for the unserved and underserved homes in the US, companies like Facebook, Google, and Microsoft are looking at ways of bringing some sort of broadband to everybody in the world.

Mark Zuckerberg of Facebook spoke to the United Nations this past week and talked about the need to bring Internet access to the five billion people on the planet that do not have it. He says that bringing Internet access to people is the most immediate way to help lift people out of abject poverty.

And one has to think he is right. Even very basic Internet access, which is what he and those other companies are trying to supply, will bring those billions into contact with the rest of the world. It’s hard to imagine how much untapped human talent resides in those many billions and access to the Internet can let the brightest of them contribute to the betterment of their communities and of mankind.

But on a more basic level, Internet access brings basic needs to poor communities. It opens up ecommerce and ebanking and other fundamental ways for people to become engaged in ways of making a living beyond a scratch existence. It opens up communities to educational opportunities, often for the first time. There are numerous stories already of rural communities around the world that have been transformed by access to the Internet.

One has to remember that the kind of access Zuckerberg is talking about is not the same as what we have in the developed countries. Here we are racing towards gigabit networks on fiber, while in these new places the connections are likely to be slow connections almost entirely via cheap smartphones. But you have to start somewhere.

Of course, there is also a bit of entrepreneurial competition going on here, since each of these large corporations wants to be the face of the Internet for all of these new billions of potential customers. And so we see each of them taking different tacks and using different technologies to bring broadband to remote places.

Ultimately, the early broadband solutions brought to these new places will have to be replaced with some real infrastructure. As any population accepts Internet access they will quickly exhaust any limited broadband connection from a balloon, airplane, or satellite. And so there will come a clamor over time for the governments around the world to start building backbone fiber networks to get real broadband into the country and the region. I’ve talked to consultants who work with African nations and it is the lack of this basic fiber infrastructure that is one of the biggest limitations on getting adequate broadband to remote parts of the world.

And so hopefully this early work to bring some connectivity to remote places will be followed up with a program to bring more permanent broadband infrastructure to the places that need it. It’s possible that broadband will soon be ranked right after food, water, and shelter as a necessity for a community. I expect the people of the world will come to demand broadband and push their governments into making it a priority. I don’t even know how well we’ll do at getting fiber to each region of our own country, and the poorer parts of the world face a monumental task over the coming decades to satisfy the desire for connectivity. But when people want something badly enough they generally find a way to get it, and so I think we are only a few years away from a time when most of the people on the planet will be clamoring for good Internet access.

 

The Gigabit Dilemma

Cox recently filed a lawsuit against the City of Tempe, Arizona for giving Google more favorable terms as a cable TV provider than what Cox has in its franchise with the city. Tempe took the unusual step of creating a new license category of ‘video service provider’ in establishing the contract with Google. This is different from Cox, which is considered a cable TV provider as defined by FCC rules.

The TV offerings from the two providers are basically the same. But according to the Cox complaint Google has been given easier compliance with various consumer protection and billing rules. Cox alleges that Google might not have to comply with things like giving customers notice of rate changes, meeting installation time frames, and even things like the requirement for providing emergency alerts. I don’t have the Google franchise agreement, so I don’t know the specific facts, but if Cox is right in these allegations then they are likely going to win the lawsuit. Under FCC rules it is hard for a city to discriminate among cable providers.

But the issue has grown beyond cable TV. A lot of fiber overbuilders are asking for the right to cherry pick neighborhoods and to not build everywhere within the franchise area – something that incumbent cable companies are required to do. I don’t know if this is an issue in this case, but I am aware of other cities where fiber overbuilders only want to build in the neighborhoods where enough customers elect to have them, similar to the way that Google builds to fiberhoods.

The idea of not building everywhere is a radical change in the way that cities treat cable companies, but is very much the traditional way to treat ISPs. Since broadband has been defined for many years by the FCC as an information service, data-only ISPs have been free to come to any city and build broadband to any subset of customers, largely without even talking to a city. But cable TV has always been heavily regulated and cable companies have never had that same kind of freedom.

But the world has changed and it’s nearly impossible anymore to tell the difference between a cable provider and an ISP. Companies like Google face several dilemmas these days. If they only sell data they don’t get a high enough customer penetration rate – too many people still want to pay just one provider for a bundle. But if they offer cable TV then they get into the kind of mess they are facing right now in Tempe. To confuse matters even further, the FCC recently reclassified ISPs as common carriers, which might change the rules for ISPs. It’s a very uncertain time to be a broadband provider.

Cities have their own dilemmas. It seems that every city wants gigabit fiber. But if you allow Google or anybody else into your city without a requirement to build everywhere within a reasonable amount of time, then the city is setting itself up for a huge future digital divide. It is going to have some parts of town with gigabit fiber and the rest of the town with something that is probably a lot slower. Over time that is going to create myriad problems within the city. There will be services available in the gigabit neighborhoods that are not available where there is no fiber. And one would expect that over time property values will tank in the non-fiber neighborhoods. Cities might look back fifteen years from now and wonder how they created new areas of blight.

I have no idea if Google plans to eventually build everywhere in Tempe. But I do know that there are fiber providers who definitely do not want to build everywhere, or more likely cannot afford to build everywhere in a given city. And not all of these fiber providers are going to offer cable TV, and so they might not even have the franchise discussion with the city and instead can just start building fiber.

Ever since the introduction of DSL and cable modems we’ve had digital divides. These divides have either been between rich and poor neighborhoods within a city, or between the city and the suburban and rural areas surrounding it. But the digital divide between gigabit and non-gigabit neighborhoods is going to be the widest and most significant digital divide we have ever had. I am not sure that cities are thinking about this. I fear that many politicians think broadband is broadband and there is a huge current cachet to having gigabit fiber in one’s city.

In the past these same politicians would have asked a lot of questions of a new cable provider. If you don’t think that’s true you just have to look back at some of the huge battles that Verizon had to fight a decade ago to get their FiOS TV into some cities. But for some reason, which I don’t fully understand, this same scrutiny is not always being applied to fiber overbuilders today.

It’s got to be hard for a city to know what to do. If gigabit fiber is the new standard then a city ought to fight hard to get it. But at the same time they need to be careful that they are not causing a bigger problem a decade from now between the neighborhoods with fiber and those without.

Is There a Business Case for Free WiFi?

Lately I have seen numerous press releases about free WiFi networks in various cities. I can certainly understand why a city would want to provide free WiFi if it can afford it, since there are numerous benefits to citizens from having ubiquitous WiFi. But most of these press releases talk about having private companies supply the WiFi. And that makes me ask the question: is there a business case to be made for providing free WiFi?

Probably the most talked about recent example is where Google agreed to retrofit the numerous abandoned payphone booths in New York City into free outdoor WiFi hotspots. But Google is different than anybody else who would do this and perhaps they will gather enough data through these various free connections and sell enough advertising to make this pay for itself. I suspect that even for them this won’t be profitable, but I can grant them the benefit of the doubt and perhaps for them this will work financially.

But it’s hard to see the case for other private providers. For instance, in Pittsburgh a start-up, Meta Mesh, is hoping to bring free WiFi throughout the city. Their business plan hopes to get funded by grants from Google and the Braddock Community Development Corp. They will then place commercial mesh routers throughout the city and hope that households and businesses will agree to add their WiFi connections to the larger mesh.

This again sounds like something beneficial for the city in that it will provide WiFi to those who can’t afford it. But it doesn’t seem like much of a permanent business plan. Grant funding is notoriously unreliable and while they may raise the money to get this going, it’s quite a challenge to keep getting grants year after year to keep it going. If they hit one dry spell in raising money the project probably dies. And to a large extent they are not really deploying WiFi but are instead counting on homes and businesses to agree to share their bandwidth with others.

I hope what I wrote doesn’t sound like I am belittling the project, because it will provide great community benefits if it works. My question is rather to ask if this is a permanent business model that can be sustained and copied elsewhere. These kinds of efforts are usually local and very much depend upon a few key people to make them work and to keep them working. With no reliable customer-based revenue stream it’s really hard to maintain something that is being done for altruistic rather than monetary purposes.

Another similar project is the solar-powered trash cans that will provide WiFi in lower Manhattan. These are also being provided by a non-profit company in conjunction with a trash removal company and are funded through grants from the city and others.

These efforts show that a city might be able to get WiFi started through a non-profit. But time is going to tell if this is sustainable. I can remember numerous similar WiFi projects that were started over the years and I think each of them eventually fizzled out.

Cities can certainly provide WiFi directly as a municipally-funded community benefit. Wikipedia lists 57 US cities that currently provide some sort of public WiFi. I am familiar with many of the cities on the list. In some cases they only provide WiFi in and around city buildings like City Hall, or perhaps also in their local airport. And there are many other cities that provide WiFi that are not on this list. I also know that many of these cities deployed WiFi many years ago, and with the older WiFi technology they probably offer speeds in the range of a few Mbps. That was a great speed when it was launched but is woefully inadequate today.

And that is my real concern. WiFi, like most technologies, is changing rapidly; anything deployed in the past is obsolete today, and anything deployed today will soon be obsolete once Hotspot 2.0 is perfected and MIMO antennas get better. Deploying an alternate bandwidth source for a community is going to require constant upgrades to keep WiFi up with public expectations. Upgrades are expensive, and it’s hard to maintain an adequate network in a world where the amount of data that people and households use doubles approximately every three years.
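The doubling claim is worth putting in numbers. Taking the roughly-every-three-years figure at face value (the three-year period is the article’s estimate, not a hard constant), demand compounds fast:

```python
# Back-of-the-envelope math for the upgrade treadmill: if data use
# doubles roughly every three years, how far behind does a network
# sized for today fall?

def demand_multiple(years: float, doubling_period: float = 3.0) -> float:
    """Multiple of today's demand after `years` years of doubling."""
    return 2 ** (years / doubling_period)

print(round(demand_multiple(3), 1))   # 2.0  -- doubled in one period
print(round(demand_multiple(9), 1))   # 8.0  -- within a decade
print(round(demand_multiple(15), 1))  # 32.0 -- fifteen years out
```

A WiFi network built to comfortably serve today’s load would need to carry roughly eight times as much traffic within a decade, which is why a one-time grant rarely sustains these projects.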

I wish all of these ventures good luck and I hope they can make it work. And I hope that more cities expand their free WiFi because it can be a huge benefit to those who can’t otherwise get bandwidth. But I have yet to see a sustainable financial model, other than perhaps a direct tax subsidy that can both pay for the initial WiFi deployment and then keep up with the needed upgrades to keep it relevant.

A Few Lessons from Big Companies

I spend a lot of time reading about corporations and I think there are some lessons to be learned from them that are relevant to small companies.

Selling Product versus Building Relationships. There are many large companies that sell products without developing relationships with their customers. In our industry the large cable companies and telcos come to mind. They are rated among the worst of all corporations in delivering customer service and they even antagonize many of their customers. This works fine for them until they get competition, and then the customers who don’t like them quickly jump ship to the new competitor.

But there are large businesses that go out of their way to build customer relationships because they believe that loyal customers are their most important asset. Consider car manufacturers. They realized a long time ago that they were not going to be good at customer service, so they created a network of dealers who are local businesses with ties in each community and these dealers have built trust over generations. And there are many other companies that deliver great customer service. Tech firms like Amazon, Apple, and Google have been consistently rated among the top ten in customer satisfaction for the last few years – showing that tech firms can put an emphasis on customers and still thrive.

My most successful clients build relationships with their customers and as a result have built a loyal customer base. Many of them are or were monopolies, and there was a time when most of my clients could not tell me who their ten largest customers were. But I rarely see that today and small telcos and cable companies have learned to build loyalty through building relationships.

Growing Fast versus Growing Deliberately. Many large companies need to grow fast to be successful. Once you have taken venture capital money or gone public, the pressure is on to grow profits quickly. But growing too fast almost always changes a company in negative ways. It’s really common to see companies go into growth mode and then forget who they are. Most tech companies, for example, started with a small core of people who worked hard as a team to develop the core company. But when it’s time to grow and companies hire mountains of new people, it’s nearly impossible to maintain the original culture that made the company a great place to work.

Growth can be just as hard for small companies. It can be as hard economically and culturally for a small company to grow from 5,000 to 10,000 customers as it is for a large company to add millions. Small companies are often unprepared for the extra work involved with growth and find that they overwork and overstress their staff during a growth cycle. Growth creates a dilemma for small companies. If you hire the people needed to staff the growth period your company will be overstaffed when growth stops.

And so a lesson about growth can be learned from large companies. They will often staff growth through temporary employees, contractors, and consultants rather than take on people that they may not need later. Companies of any size are hesitant about hiring employees that they might not need a year from now.

High-Tech versus High-Touch. A lot of large businesses try to feign a good customer service experience by electronically ‘touching’ their customers often. I recall last year when Comcast introduced a texting system to communicate with customers. After they sent me half a dozen text messages in the same week, I disconnected the texting function because I really didn’t want to hear from them that often. But there are large companies who are convinced that if they electronically reach out to customers often, they are engaging in relationship building and proactive customer service.

And perhaps they are with some customers. But I am more appreciative of a business where I can talk to a person when it’s needed. Not that I mind electronic communications. I like to know that AT&T has auto-billed me and I like knowing when charges hit my credit cards. But I don’t want to be bothered by a business when they aren’t passing on information I want or need.

The important point here is that you have to touch your customers somehow, and whether you reach out electronically or in person, it’s better than not talking to your customers at all. I know telecom companies that call every customer at least once a year to ask if they like the service and if everything is okay. Such calls are welcomed by most customers and this is a great tool for building relationships. But be prepared: if you ask your customers how you are doing, you need to be ready to deal with negative feedback. That is how you build happy customers.

The Open Compute Project

I wrote recently about how a lot of hardware is now proprietary and how the largest buyers of network gear are designing and building their own equipment, bypassing the normal supply chains. My worry about this trend is that all of the small buyers of such equipment are getting left behind, and it’s not hard to foresee a day when small carriers won’t be able to find affordable network routers and other similar equipment.

Today I want to look one layer deeper into that premise and look at the Open Compute Project. This was started just four years ago by Facebook and is creating the hardware equivalent of open source software like Linux.

Facebook found themselves wanting to do things in their data centers that were not being satisfied by Cisco, Dell, HP or the other traditional vendors of switches and routers. They were undergoing tremendous growth and their traffic was increasing faster than their networks could accommodate.

So Facebook followed the trend set by other large companies like Google, Amazon, Apple, and Microsoft, and set off to design its own data centers and data equipment. Facebook had several goals. They wanted to make their equipment far more energy efficient, because data centers are huge generators of heat, and they were using a lot of energy to keep servers cool and were looking for a greener solution. They also wanted to create routers and switches that were fast yet simple and basic, controlled by centralized software, which differed from the rest of the market, where the brains are built into each network router. This made Facebook one of the pioneers in software defined networking (SDN).
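The centralized-control idea can be sketched in miniature. This is illustrative only and not any real SDN controller’s API: the switches are deliberately dumb, holding nothing but a forwarding table, while one controller program computes the rules and pushes them out:

```python
# Toy model of centralized control: dumb switches, one smart controller.
# Class and method names are invented for illustration.

class Switch:
    """A simple forwarding device with no route-computation brains."""
    def __init__(self, name):
        self.name = name
        self.table = {}          # destination prefix -> output port

    def install_rule(self, prefix, port):
        self.table[prefix] = port

    def forward(self, dest):
        return self.table.get(dest, "drop")

class Controller:
    """Central brain: decides routes and programs every switch."""
    def __init__(self, switches):
        self.switches = switches

    def push_rule(self, prefix, port):
        for sw in self.switches:
            sw.install_rule(prefix, port)

switches = [Switch("s1"), Switch("s2")]
ctrl = Controller(switches)
ctrl.push_rule("10.0.0.0/8", "port2")
print(switches[1].forward("10.0.0.0/8"))  # port2 -- rule came from the controller
```

Because all the intelligence lives in one place, upgrading the network’s behavior means changing one program rather than the firmware in every router, which is much of SDN’s appeal.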

And they succeeded; they developed new hardware and software that allowed them to handle far more data than they could have with what was on the market at the time. But then Facebook took an extraordinary step and decided to make what they had created available to everybody else. Jonathan Heiliger at Facebook came up with the idea of making their hardware open source. Designing better data centers was not a core competency for Facebook, and he figured that the company would benefit in the future if other outside companies joined them in searching for better data center solutions.

This was a huge contrast to what Google was doing. Google believes that hardware and software are its key differentiators in the market, and so it has kept everything it has developed proprietary. But Facebook had already been using open source software and saw the benefits of collaboration. They saw that when numerous programmers worked together, the result was software that worked better, with fewer bugs, and that could be modified quickly as needed by bringing together a big pool of programming resources. And they thought this same thing could happen with data center equipment.

And they were right. Their Open Compute Project has been very successful and has drawn in other large partners. Companies like Apple, HP, and Microsoft now participate in the effort. It has also drawn in large industry users like Wall Street firms who are some of the largest users of data center resources. Facebook says that they have saved over $2 billion in data center costs due to the effort and their data centers are using significantly less electricity per computation than before.

And a new supply chain has grown around the new concept. Any company can get access to the specifications and design its own version of the equipment. There are manufacturers ready to build anything that comes out of the process, meaning that all of the companies in this collaborative effort have bypassed the traditional telecom vendors and work directly with a factory to produce their gear.

This effort has been very good for these large companies, and good for the nation as a whole because through collaboration these companies have pushed the limits on data center systems to make them less expensive and more efficient. They claim that for now they have leapt forward past Moore’s law and are ahead of the curve.

But as I wrote earlier, this leaves out the rest of the world. Smaller carriers cannot take advantage of this process. Small companies don’t have the kind of staff that can work with the design specs, and no factory is going to make a small batch of routers. While the equipment and controlling hardware is open source, each large member is building different equipment and none of it is available on the open market. And small companies wouldn’t know what to do with the hardware if they got it, because it’s controlled by open source software that doesn’t come with training or manuals.

So smaller carriers are still buying from Cisco and the traditional switch and router makers. The small carriers can still find what they need in the market. But if you look ten years forward this is going to become a problem. Companies like Cisco have always funded their next generation of equipment by working with one or two large customers to develop better solutions. The rest of Cisco’s customers would then get the advantages of this effort as the new technology was rolled out to everybody else. But the largest users of routers and switches are no longer using the traditional manufacturers. That is going to mean less innovation over time in the traditional market. It also means that the normal industry vendors aren’t going to have the huge revenue streams from large customers to make gear affordable for everybody.

Selling Our Personal Data

Recently the CEO of Apple, Tim Cook, has been making speeches in multiple forums that contrast Apple’s privacy practices with those of other large consumer-based companies like Google, Facebook, and Yahoo. Cook says that his company sells superior products and is not in the business of gathering or selling information about its customers.

Certainly he can’t say that Apple doesn’t use customer information, because they do. I have a MacBook and there are tons of ways that Apple uses my data to make my experience better. If I travel, the Mac will display the right time and local weather, for example. And various Apple software products will get to know me and make customized suggestions for me over time. But Cook’s point is that Apple doesn’t sell that data to others.

Of course, the companies that Cook is comparing himself to do not sell electronics like Apple does, but rather software and services. Probably the closest analog to Apple is Samsung, and Samsung can’t make the same claim. Late last year it was discovered that Samsung smart TVs were capable of listening to customer conversations all of the time. It’s not clear that Samsung gathers data directly from its smartphones, but it has chosen Android, and one can imagine that part of that arrangement is to let Google gather data from Samsung smartphones.

Companies like Facebook and Google have a hard time not using your data, because that is really the only way they can generate value. It’s wonderful to have millions of loyal users on your platform, but both companies make most of their money from advertising. Certainly Google’s search engine advertising doesn’t require any data from users, since that revenue is driven by companies that want their products at the top of a search list. But Google and Facebook also sell web advertising, and the name of that game is to know the user in order to direct the most relevant ads to each customer.

I think that if the use of our information stopped with advertising, most people would be fundamentally comfortable with having these companies invade their privacy. I know I find it eerie when I do a Google search and for the next three days I see ads related to something I searched for. But I can personally live with that, because most of the time Google is wasting its time on me; I wasn’t looking to shop. I find it funny that I will look up the latest information about smart cars and then get flooded with car ads (because I exclusively drive Ford trucks and I buy one every twenty years, whether I need a new one or not).

The real rub is that these companies do a lot more than build advertising profiles on us. They know all sorts of other personal data about us and they associate that data with our name. While I am not bothered by getting car ads for vehicles I am never going to buy, I frequently hear about people getting bombarded with ads or even mailings and phone calls about far more personal topics like rehab centers or the latest diabetes treatments. That is going over the line in my opinion.

The invasion of our privacy seems to be going even further. Facebook, for example, is the world leader in facial recognition technology and they are building a huge database of every time you show up in somebody’s picture. They not only know about you, but they are learning where you go and who you associate with. That is a bit unnerving.

But to me the really scary thing is that these companies then sell this data to others. And there is no telling how that data is used. Even if the large companies have some sense of morality and responsibility (and many believe they do not), the companies that buy this data can do anything with it they please. It’s very easy these days to buy a data dump about other people, and that kind of information can be a powerful tool in the hands of an ex-spouse, an employer, or a scammer.

The problem that we all face is that it’s too easy to use the services that watch us. Google has a spectacular set of software products. And for my generation there are a ton of friends and relatives on Facebook. If you don’t want to be spied on you have to make a very conscious effort to wall yourself off from these sorts of data-gathering web activities, and that is hard to do. And no matter what you do online, your ISP or the government might be gathering all of this data anyway.

These large companies sometimes hide behind the fact that they mostly sell ‘metadata’, which is data that has been scrubbed to hide the identity of individuals. But numerous articles point out that with data mining it takes knowing only a few facts about a person to pull their records back out of a metadata file.
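To make that concrete, here is a minimal Python sketch of how this kind of re-identification works. Everything here is invented for illustration; the point is that a handful of quasi-identifiers (ZIP code, birth year, sex) is often enough to single out one record in a ‘scrubbed’ file:

```python
# Hypothetical example: pulling an individual out of "anonymized" metadata.
# All records and names below are invented for illustration.

# A released data set with names removed but quasi-identifiers kept.
anonymized_records = [
    {"zip": "21201", "birth_year": 1971, "sex": "F", "diagnosis": "diabetes"},
    {"zip": "21201", "birth_year": 1985, "sex": "M", "diagnosis": "asthma"},
    {"zip": "90210", "birth_year": 1971, "sex": "F", "diagnosis": "flu"},
]

# Facts an attacker might already know from a public source like a voter roll.
known_person = {"name": "Jane Doe", "zip": "21201", "birth_year": 1971, "sex": "F"}

def reidentify(records, person, keys=("zip", "birth_year", "sex")):
    """Return the records that match the person on every quasi-identifier."""
    return [r for r in records if all(r[k] == person[k] for k in keys)]

matches = reidentify(anonymized_records, known_person)
# Only one record matches all three facts, so the "anonymous" diagnosis
# is now tied to a named individual.
print(matches[0]["diagnosis"])  # diabetes
```

With a larger file the attacker simply needs one or two more known facts to narrow the matches back down to one.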

We may come to a day when there is massive pushback against these companies that are collecting, using, and selling our personal data. It will probably take a string of tragedies and disasters for this to become a worry for the average person. And if that happens, then either the large companies will stop spying on us or somebody who promises not to will take their place. But it is extremely profitable today for the big companies to spy on people, and until there is more pain than profit from using our data, one has to imagine that this is going to continue.

A Business Case for WiFi Hotspots

Lately I have been asked a number of times if there is a business case to be made for providing a large outdoor WiFi hotspot network. Today I will look at the two issues that answer that question: 1) the hardware available today, and 2) the revenue opportunities.

Hardware Issues. The WiFi industry is currently in a state of what I call ‘between’. This often happens when a new standard is being introduced. There have been hotspots on the market for many years. But the whole industry is moving towards implementing Hotspot 2.0, a standard that allows for roaming between hotspots the same way that cellphones roam between cell towers. And since the coverage distance of a hotspot is far smaller than a cell tower’s – around 250 feet at most – roaming is an even bigger issue for WiFi.

With Hotspot 2.0 fully implemented, a customer can automatically log in when walking within range of a hotspot. But more importantly they will maintain whatever they are doing  (such as a web session or IP phone call) without interruption as they move to a new hotspot (as long as they don’t hit a dead area). But the units on the market today can best be characterized as pre-Hotspot 2.0 and they do not yet include all of the features needed to fully support roaming. This means any units you buy today are going to need an upgrade eventually to a standard that is not yet fully defined.

The units on the market today are also very expensive compared to older hotspots. The manufacturers are concentrating on high-capacity hotspots that can handle as many as 500 simultaneous users. These are complicated hotspots with multiple antennas that cost as much as ten times what the old simple hotspots did. But these are what are selling, and they are made for stadiums, event centers, busy shopping districts, and other places where there will be a lot of people. A citywide deployment doesn’t need many hotspots with that huge capacity; it needs much cheaper, lower-capacity units that also support Hotspot 2.0.

Revenue Opportunities. The revenue opportunities for an outdoor WiFi network are not clear. I don’t know of any hotspot networks that have been able to pay for themselves. But there may be new revenue opportunities coming that could improve the picture.

There are two traditional WiFi revenue opportunities. One is to sell access to the WiFi network by the hour, by the day, or by the month – traditional ISP services. There are customers in any town who would prefer WiFi to more expensive cellular data if you can create good enough coverage. You can sell this to individuals or in bulk to large employers in a town that have employees who work outside. The other traditional revenue opportunity is to sell dedicated hotspots to restaurants and other businesses that want to offer a branded hotspot for their customers. This will require that you (or somebody) provide a broadband connection to that customer to feed the hotspot.

There are two revenue opportunities on the horizon today. The first is to offer WiFi phones, which are being sold today in two ways. First, there is the WiFi-only phone, like the one Cablevision is offering, which works only on WiFi. Cablevision prices this at $9.95 per month for an existing cable customer, and it’s nearly all margin. But there are several wireless resellers (and now also Google) who sell WiFi phones that will roam to cellular when WiFi is not available.

The primary issue with copying this business plan is that the companies doing it have all created a proprietary system that works only on a specific phone. That is not something easy for a smaller company to work out. There are some cheap Chinese WiFi-only phones available, but if you choose them you are competing against people’s preferences to use an iPhone or a Samsung Galaxy by forcing them to your handset choice. This is not likely to be very popular until it becomes an app that will work on any phone.

The other new revenue opportunity is to sell wholesale WiFi access to others. I know Cisco has been touting this opportunity for several years, but I have yet to hear of anybody who has been able to monetize the idea. The cellular companies love it when customers use their phones on WiFi, but that’s a far cry from them being willing to buy time on your network on their customers’ behalf.

My conclusion from all of this is that it’s a tough business case today to build a citywide WiFi network. Right now the hotspots are too expensive for a mass deployment, though there are vendors working on lower-cost units. It also makes sense to wait until Hotspot 2.0 is fully fleshed out and functional rather than buy a network with undefined future upgrade costs. And on the revenue side, while it sounds interesting to sell bulk WiFi, I have a hard time recommending this as a business plan unless you have presold bulk access to some large customers, like a utility or another carrier. I have always been leery of ‘build-it-and-they-will-come’ business plans, and I could recommend this only if there is a clear path to monetizing it.

Fact Checking the Search Engines

I laugh sometimes when I see how social media and other web outlets get a story very wrong. There is a whirlwind of stories going around right now about how Google is going to squash stories from Fox News and from other news outlets with which they don’t agree. And there is a huge gnashing of teeth by those who think this is a terrible thing.

The only problem with these stories is that they aren’t true. Google has recently introduced fact checking for medical information on their web site. They did this because they see that more than 1 in 20 Google searches are medical inquiries. People now largely go to the Internet first when they get a symptom. And so Google wants to make sure that people are getting reliable medical facts.

And there is good reason for them to do this. The way search engines work in general is that topics with the most sources on the web get ranked at the top of a search. So when somebody comes up with some quack medical tip, and that meme is then picked up on hundreds of other web sites and in social media, the new meme rises to the top of a search on that topic, regardless of how true it is.
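A toy example shows the dynamic. This is not Google’s actual algorithm (which weighs hundreds of signals); it is a sketch under the simplifying assumption that ranking is driven purely by how often a claim is repeated, with made-up site names and counts:

```python
# Toy illustration (not Google's real ranking): order results purely by how
# many times each claim is repeated across the web. Site names are invented.
repetition_counts = {
    "miracle-herb-cures-diabetes.example": 480,
    "webmd-diabetes-symptoms.example": 35,
    "cdc-diabetes-facts.example": 28,
}

# Highest repetition count first.
ranked = sorted(repetition_counts, key=repetition_counts.get, reverse=True)
print(ranked[0])  # the quack page wins under popularity alone
```

Under this rule, the more a false claim spreads, the better it ranks; truth never enters the calculation.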

There is a ton of quack medical and nutrition advice on the internet. Much of the quackery can be traced back to somebody with a motive to spread an untruth; generally that motive is financial. The pushers of nutrition supplements, pills to make you lose weight, books on curing cancer with herbs, etc. all stand to make money when what they are selling makes it to the top of a Google search.

Now contrast these spurious memes to good medical advice on the web. There are a handful of web sites like WebMD that provide very straightforward medical facts. But these sites are not often in the news and the symptoms of something like diabetes don’t make it into web articles at nearly the same rate as some crazy cure for diabetes might. And so over time the legitimate web sites get pushed further and further down a Google search list.

Google’s solution to the problem is to hire a doctor to list the medical sites that are legitimate and Google is going to arbitrarily boost those sites over other sources of medical information. You type in ‘symptoms of diabetes’ and you will now be led first to one of these sites and not to some crazy article. In doing this Google hasn’t suppressed anything and all of those other crazy medical sites and articles are still there in search – they just don’t sit at the top of the list. I think this is a wonderful idea and I laud Google for probably helping millions of people find the right medical facts.
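The boosting idea can be sketched the same way. Again this is a hypothetical toy, not Google’s implementation: a vetted list supplied by a medical reviewer gets a large fixed bonus, which reorders the results without removing anything:

```python
# Toy sketch of boosting vetted medical sites (hypothetical, not Google's code).
vetted_sites = {"webmd-diabetes-symptoms.example", "cdc-diabetes-facts.example"}

# Invented popularity counts for three made-up pages.
repetition_counts = {
    "miracle-herb-cures-diabetes.example": 480,
    "webmd-diabetes-symptoms.example": 35,
    "cdc-diabetes-facts.example": 28,
}

def score(site, boost=1000):
    """Raw popularity plus a large fixed bonus for reviewer-vetted sites."""
    return repetition_counts[site] + (boost if site in vetted_sites else 0)

ranked = sorted(repetition_counts, key=score, reverse=True)
# Vetted sites now lead, but the quack page is still in the list, just lower.
print(ranked)
```

Note that nothing is suppressed: every page remains in the results, exactly as described above.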

However, it has been widely reported that this same thing is going to be done for all web searches and that Google is going to be the one to decide what is ‘true’ or not for all search topics. And so they would also decide what comes up first when you search for a topic on politics, religion, climate change, or the best recipe for brownies. Over the last few weeks Fox News made a huge deal out of this and stirred up the whole web over the topic. And since this is precisely what Google does with shopping sites, the rumor sounds reasonable.

But it’s not true, and Google is not doing this. There was a recent news release from Google about an internal research paper that talked about how Google might theoretically introduce more fact-checking. The paper recommends that Google use what it calls a ‘Knowledge Vault’ that would contain verified facts of various types. The Google researchers use the example of websites claiming that President Obama was born in Kenya. Since that claim is demonstrably false, if the president’s place of birth were listed in the Knowledge Vault, a website making the claim would get a lower Google ranking.
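The paper’s idea can also be sketched in a few lines. This is a hypothetical illustration of the concept, not code Google has published; the site names and penalty values are invented. Claims extracted from a page are checked against a small vault of verified facts, and each contradiction lowers the page’s score:

```python
# Hypothetical sketch of the Knowledge Vault idea; all values are invented.
knowledge_vault = {("Barack Obama", "born_in"): "Hawaii"}

# Claims hypothetically extracted from two made-up pages, as
# (subject, relation, value) triples.
page_claims = {
    "birther-blog.example": [("Barack Obama", "born_in", "Kenya")],
    "encyclopedia.example": [("Barack Obama", "born_in", "Hawaii")],
}

def trust_score(site):
    """Start at 1.0 and subtract a penalty for each claim the vault contradicts."""
    score = 1.0
    for subject, relation, value in page_claims[site]:
        verified = knowledge_vault.get((subject, relation))
        if verified is not None and verified != value:
            score -= 0.5  # arbitrary penalty for illustration
    return score

print(trust_score("birther-blog.example"))   # 0.5
print(trust_score("encyclopedia.example"))   # 1.0
```

The key design point matches the description above: pages are only demoted for contradicting a verified fact, and claims absent from the vault are left alone.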

The research paper goes on to recommend that only a few sources of information be used to feed such a Knowledge Vault, such as Wikipedia and Freebase, which are crowd-sourced and largely self-policing. Unlike the medical searches, which Google is clearly tilting, Google would not be the one deciding what is factual, but would leave that up to sites that mostly get things right.

But none of this is real. While Google has introduced the medical ranking system, the idea of introducing this for everything is nothing more than a research paper. It’s the kind of thought experiment you would expect from a company that runs a search engine. And even if Google ever decided to do this (and for topics like religion and politics that seems unlikely), they would not be the ones deciding what the ‘facts’ are, but would leave that up to the shared consensus on a dozen sites that are basically web encyclopedias. But since Google runs their search engine to make money, I find it highly unlikely that they would ever be dumb enough to fiddle with politics, religion, or any of the other hot-button topics, because they understand that, as successful as they are, people could flock en masse to another search engine if they no longer trusted Google to lead them to wacky political web sites. Because if my Facebook feed is any indication, that is probably the second most popular kind of search on the web these days.