Public Networks and Privacy

I’ve been investigating smart city applications and one of the features that many smart network vendors are touting is expanded public safety networks that can provide cameras and other monitoring devices for police, making it easier to monitor neighborhoods and solve crimes. This seems like something most police departments have on their wish list, because cameras are 24/7 and can see things that people are never likely to witness.

The question I ask today is whether this is what America wants. There are a few examples of cities with ubiquitous video surveillance, like London, but is that kind of surveillance going to work in the US?

I think we’ve gotten our first clue from Seattle. The City installed a WiFi mesh network using Aruba wireless equipment in 2013 with a $3.6 million grant from the Department of Homeland Security. The initial vision for the network was that it would be a valuable tool for providing security at Seattle’s major port as well as communications for first responders during emergencies. At the time of installation the city intended to also make the surveillance capabilities available to numerous departments within the City, not just to the police.

But when the antennas, like the one shown with this blog, went up in downtown Seattle in 2013, a number of groups began questioning the city about their surveillance policies and the proposed use of these devices. Various groups including the ACLU voiced concerns that the network would be able to track cellphones, laptops and other devices that have MAC addresses. This could allow the City to gather information on anybody moving in downtown or the Port and might allow the City to do things like identify and track protesters, monitor who enters and leaves downtown buildings, track the movement of homeless people who have cellphones, etc.

Privacy groups and the ACLU complained to the City that the network was effectively a suspicionless surveillance system that monitors the general population and is a major violation of various constitutional rights. The instant and loud protests about the network caught City officials by surprise, and by the end of 2013 they deactivated the network until a surveillance policy could be developed. The city never denied that the system could monitor the kinds of things that citizens were wary of. That surveillance policy never materialized, and the City recently hired a vendor to dismantle the network and salvage any usable parts for use elsewhere in the City.

I can think of other public outcries that have led to the discontinuance of public monitoring systems, particularly speed camera networks that catch and ticket speeders. Numerous communities tried that idea and scrapped it after massive citizen outrage. New York City installed a downtown WiFi network a few years ago that was to include security cameras and other monitoring devices. From what I read they have never activated the security features, probably for similar reasons. A web search shows that other cities like Chicago have implemented networks similar to Seattle’s without getting the same negative public reaction.

The Seattle debacle leads to the question of what is reasonable surveillance. The developers of smart city solutions today are promoting the same kinds of features contained in the Seattle network, plus new ones. Technology has advanced since 2013, and newer systems are promising to include the first generation of facial recognition software and the ability to identify people by their walking gait. These new monitoring devices won’t just track people with cellphones; they will be able to identify and track everybody.

I think there is probably a disconnect between what smart city vendors are developing and what the public wants from their city government. I would think that most citizens are in favor of smart city solutions like smart traffic systems that reduce driving backups, for example by changing the timing of lights to get people through town as efficiently as possible.

But I wonder how many people really want their City to identify and track them every time they come within reach of one of the City’s monitors. The information gathered by such monitors can be incredibly personal; it identifies where somebody is, complete with a time stamp. The worry is not just that a City might misuse such personal information; IT security experts I’ve talked to believe that many municipal IT networks are susceptible to hacking.

In the vendors’ defense, they are promoting features that already function well. Surveillance cameras and other associated monitors are tried and true technologies. Some of the newer features like facial recognition are cutting edge, but surveillance systems installed today can likely be upgraded with software changes as the technology gets better.

I know I would be uncomfortable if my city installed this kind of surveillance system. I don’t go downtown except to go to restaurants or bars, but what I do is private and is not the city’s business. Unfortunately, I suspect that city officials all over the country will get enamored by the claims from smart city vendors and will be tempted to install these kinds of systems. I just hope that there is enough public discussion of city plans so that the public understands what their city is planning. I’m sure there are cities where the public will support this technology, but plenty of others where citizens will hate the idea. Just because we have the technical capabilities to monitor everybody doesn’t mean we ought to.

Can the States Regulate Internet Privacy?

Since Congress and the FCC have taken steps to remove restrictions on ISPs using customer data, a number of states and even some cities have taken legislative steps to reintroduce some sort of privacy restrictions on ISPs. This is bound to end up in the courts at some point to determine where the authority lies to regulate ISPs.

Congress just voted in March to end restrictions on the ways that ISPs can use customer data, leading to a widespread fear that ISPs could profit from selling customer browsing history. Since then all of the large telcos and cable companies have made public statements that they would not sell customer information in this way, but many of these companies have histories that would indicate otherwise.

Interestingly, a new bill has been introduced in Congress called the BROWSER Act of 2017 that would add back some of the restrictions imposed on ISPs and would also make those restrictions apply to edge providers like Google and Facebook. The bill would give the authority to enforce the privacy rules to the Federal Trade Commission rather than the FCC. The bill was introduced by Rep. Marsha Blackburn who was also one of the architects of the earlier removal of ISP restrictions. This bill doesn’t seem to be getting much traction and there is a lot of speculation that the bill was mostly offered to save face for Congress for taking away ISP privacy restrictions.

Now states have jumped in to fill the void. Interestingly, the states looking into this are from both sides of the political spectrum, which makes it clear that privacy is an issue that worries everybody. Here is a summary of a few of the state legislative efforts:

Connecticut. The proposed law would require consumer buy-in before any “telecommunication company, certified telecommunications provider, certified competitive video service provider or Internet service provider” could profit from selling such data.

Illinois. The proposed privacy measures would allow consumers to ask what information about them is being shared. The bills would also require customer approval before apps can track and record location information on cellphones.

Massachusetts. The proposed legislation would require customer buy-in for sharing private information. It would also prohibit ISPs from charging more to customers who don’t want to share their personal information (something AT&T has done with their fiber product).

Minnesota. The proposed law would stop ISPs from even recording and saving customer information without their approval.

Montana. The proposed law there would prohibit any ISPs that share customer data from getting any state contracts.

New York. The proposed law would prohibit ISPs from sharing customer information without customer buy-in.

Washington. One proposed bill would require written permission from customers to share their data. The bill would also prohibit ISPs from denying service to customers that don’t want to share their private information.

Wisconsin. The proposed bill essentially requires the same restrictions on privacy that were included in the repealed FCC rules.

This has even made it down to the City level. For example, Seattle just issued new rules for the three cable providers with a city franchise telling them not to collect or sell customer data without explicit customer permission or else face losing their franchise.

A lot of these laws will not pass this year since they were introduced late in most states’ legislative sessions. But it’s clear from the proposals that this is a topic with significant bipartisan support. One would expect a lot of laws to be introduced and enacted in legislative sessions later this year and early next year.

There is no doubt that at some point this is going to result in lawsuits to resolve the conflict between federal and state rules. An issue of this magnitude will almost certainly end up at the Supreme Court at some point. But as we have seen in the past, during these kinds of legislative and legal fights the status of any rules is muddy. And that generally means that ISPs are likely to continue with the status quo until the laws become clear. That likely means that ISPs won’t openly be selling customer data for a few years, although one would think that the large ones have already been collecting data for future use.

The Big City Bandwidth Dilemma

Seattle is like many large cities in that they badly want a gigabit fiber network everywhere. They were one of the earliest cities to want this, and they hired me back in 2005 to try to find a way to bring big bandwidth to the city. They still don’t have fiber, and they recently commissioned another study to see if there is a solution available today.

The study concentrated on the cost of bringing fiber everywhere and on how the City might be able to pay for it. After all, no city wants to build fiber unless it reasonably believes it can make the payments on the bonds used to pay for the network. The report shows that it’s very hard for a large city to justify paying for a fiber network. And this highlights what I call the big city bandwidth dilemma. Should a city just wait to see what the incumbents do and hope that they eventually get gigabit broadband, or should they be like Seattle and keep pushing for a solution? There are three major aspects of the dilemma that every city is wrangling with:

The Incumbent Response. If a city does nothing they may never get fiber, or they might get fiber to some of the ‘best’ neighborhoods, but not everywhere. We see in markets where somebody other than the incumbents brings fiber that the incumbents immediately step up their game and offer fast speeds. There is no better evidence for this than Austin, where both AT&T and Time Warner quickly announced much faster speeds and competitive prices to offset Google’s entry into the market.

But everybody understands that the incumbents in Austin would not have increased speeds absent competition, as can be seen in their many other markets. This creates a huge dilemma for a city. Should they decide to build fiber alone or with a commercial partner, that new venture will be met with stiff competition and will have a hard time getting the market penetration needed to ensure financial success. But should the city do nothing, then they get nothing.

Citywide Coverage. In large cities almost no commercial builder is willing to build fiber to every neighborhood. One doesn’t need a crystal ball to see the consequences of this in the future. A city will become a patchwork of fiber haves and have-nots. The have-not neighborhoods probably already have some poverty and blight, but if they get walled off from having the same broadband as everybody else, then over time they are going to become even more isolated and poor. Every city that has Google coming to town is so thrilled to have them that nobody is looking forward ten or twenty years to imagine what will happen to the neighborhoods without fiber.

Cherry Picking. Google is selling a gigabit for a flat $70 per month. While that might be cheap for a gigabit, it is still a cherry picking price that is too expensive for most households. It’s hard to imagine more than 30% to 40% of any market being willing to pay that much for broadband. A large number of homes settle for something slower that they can afford.

And almost every other gigabit provider charges more than Google. For example, CenturyLink is now selling a gigabit in some markets at $79.95—but in order to get it you have to buy a $45/month phone plan. Before taxes that means it will cost $125 per month to get the gigabit. I can’t see that Comcast has a gigabit product yet, but earlier this year they came out with a 2-gigabit fiber-fed product priced at $300 per month.

The problem with cherry picking is that it also creates a market of haves and have-nots. The incumbent cable company may not like the competition, but they know they are still going to be able to sell over-priced bandwidth to the majority of the market. Look at how Comcast has fared against Verizon FiOS and you will see that, while they hate competition, they still fare quite well in a competitive market.

A Possible Solution? The Seattle report did suggest one solution that could make this work. Cities not only want fiber, but they want fiber everywhere and at prices affordable to the vast majority of their citizens. Any city that can accomplish that understands that they will have a huge competitive advantage over cities without affordable fiber.

The report suggests that Seattle ought to ‘buy down’ the retail rate on a gigabit by paying for some of the network with property taxes. This is not a new idea, and a few small cities have financed fiber this way. But nobody has ever tried it in a large city.

The report suggests buying the price of a gigabit down to $45 per month, a price that is not cherry picking and that a lot of homes can afford. That kind of price certainly would put a whole different set of competitive pressures on the incumbents. I can imagine them screaming and probably suing any city that tries this. But if this was done through a referendum and people voted for it, almost no court will overturn a vote of the people. I don’t know if this idea can work in a large city, but it’s the first idea I’ve heard that deals with the issues I’ve outlined above.
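As a back-of-the-envelope sketch, the buy-down arithmetic is simple: the monthly revenue gap between the market price and the subsidized price, summed across subscribers, gets spread across every taxable household. Every figure below is my own assumption for illustration, not a number from the Seattle report.

```python
# Rough sketch of the 'buy-down' arithmetic. All numbers are
# hypothetical illustrations, not figures from the Seattle report.

def monthly_subsidy_per_household(households, take_rate,
                                  market_price, target_price):
    """Property-tax subsidy per household needed to close the
    monthly revenue gap between market price and target price."""
    subscribers = households * take_rate
    monthly_gap = (market_price - target_price) * subscribers
    return monthly_gap / households  # spread across all taxable households

# Hypothetical city: 320,000 households, 40% take rate,
# buying a $70/month gigabit down to $45/month.
subsidy = monthly_subsidy_per_household(320_000, 0.40, 70, 45)
print(f"${subsidy:.2f} per household per month")  # $10.00 per household per month
```

The interesting feature of this structure is that the subsidy scales with the take rate, so a wildly successful network needs more tax support, not less, which is part of why the politics are tricky.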

Lessons Learned With Gigabit Squared and Seattle

Christopher Mitchell of the Institute for Local Self-Reliance recently wrote a 3-part article talking about why Gigabit Squared (“G2”) failed in Seattle. His articles talk about the challenges that any competitor has when going head-to-head with a large competitor like Comcast. I would take that discussion one step further and talk about why G2 specifically failed in Seattle. I think there are valuable lessons to be learned from their experience for anybody entering a new market.

The biggest problem faced by G2 is that they had a hard time raising development capital. This is an issue faced by almost every new infrastructure project in the country, both private and public. The US investment community no longer has much taste for the high risk involved in funding the first step of a project. I call this development capital, but when it comes from private sources it is often called angel investing.

Up until a decade or so ago, new start-ups were able to find angel investors who would take a chance on a new venture that had promise. For taking that early risk the angel investors got really large returns and a piece of equity in the business. But today it seems nearly impossible to get enough money to get a project to the point of being ‘shovel-ready’. I think a lot of this reluctance comes from the implosion of tech start-ups over the last few decades. A lot of start-up telecom and web-based businesses have failed since the late ’90s, and a lot of angel investors lost their entire investments.

Second, G2 planned to launch the business in phases, probably due to the hard time they were having in raising money. They planned to raise $20 million to first build around the University of Washington campus. Then they planned to raise a little more and build a little more and repeat until they built most of the City. In the investing world this is referred to as raising the money in tranches.

The problem with raising money in tranches is that it makes it even harder to raise the early money because the early investors can’t understand the big picture. They can’t know how any equity they get from the business will be diluted by future waves of investors. This approach also makes it nearly certain that the business will fail at some point, because every time the company needs to raise more capital they end up on a financial ledge. For example, even if they raise two rounds of money, they will probably fail when they can’t raise the third.

There is a general understanding among people who raise money that it is far easier to raise $100 million than $5 million or $20 million. I know that sounds counterintuitive, but G2 probably would have had an easier time raising the money to build the whole City than they did trying to do something smaller. Investors can understand the big picture a lot more easily than they can understand a project done in phases.
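To make the dilution point concrete, here is a small sketch of how an early investor’s stake shrinks with each later tranche: every new round is priced at a pre-money valuation, and existing holders are diluted pro rata by the new shares. The numbers are invented for illustration, not G2’s actual financials.

```python
# Hypothetical illustration of tranche-by-tranche dilution.
# All numbers are invented for the example, not G2's actual figures.

def stake_after_rounds(initial_stake, rounds):
    """rounds is a list of (amount_raised, pre_money_valuation) pairs.
    Each round dilutes all existing holders pro rata."""
    stake = initial_stake
    for raised, pre_money in rounds:
        post_money = pre_money + raised
        stake *= pre_money / post_money  # existing holders keep pre/post of the company
    return stake

# An angel who bought 20% early, followed by two later tranches:
# $20M raised at a $30M pre-money, then $40M at an $80M pre-money.
final = stake_after_rounds(0.20, [(20e6, 30e6), (40e6, 80e6)])
print(f"{final:.1%}")  # 8.0%
```

The uncertainty is worse than the arithmetic suggests: an early investor can run this math only if they know the size and valuation of every future round, which in a tranche-by-tranche plan nobody does.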

The third issue that killed G2 is that they didn’t hoard their early money. They had raised some early money and did some of the right things with it, like doing engineering and building relationships with the City and with other carriers. Those are the sorts of developmental steps you should take with early money, and I would characterize them as getting the business shovel-ready to raise the construction money.

But before G2 was funded they put enormous pressure on the business by announcing a timeline for when they were going to launch retail service. They announced products and prices and even went so far as to launch a website where customers could get on a waiting list for service. They created a lot of public expectation. They opened shop and hired a few employees in the market. The company began eating into their very limited cash with operating expenses rather than sticking with pure development of the project.

It’s fairly easy to see why G2 did this. They were having problems raising money and I think they were trying to create a stir about the project by showing community support. They hoped that public support would make it easier to attract the needed angel investors. But all this did was to cut short the amount of time they had to raise money.

G2 is not the first start-up to fail in a market, and they failed for some of the same reasons that have sunk other ventures in the past. Anybody thinking of opening a new market or new venture should look at the lessons to be learned from this. First, never underestimate how hard it is to raise developmental capital. It is probably the hardest thing there is to do in the business world. Second, have a business plan that contemplates the full build. I think G2 might have had more luck if they were trying to raise money to build the whole City than in trying to build a neighborhood at a time. And third, never spend money on operations until you are fully funded. If you have some seed money use it only for raising the bigger money.

And who knows, maybe Gigabit Squared is not quite done in Seattle and can take another shot at it.

Local Programming

digital on-demand (Photo credit: Will Lion)

One way to differentiate your cable system from your competition is to develop local programming. Local programming is just what you imagine it to be. It includes such things as high school sports, little league games, local church services, local government meetings, high school plays, and, if you have a local college, a wide array of campus events. And it can include more, with content like local news, courts, cooking shows, tourist information, etc.

Why should you get involved with local programming? If local programming is done well, meaning that it has content that people want to watch, then it differentiates your cable lineup from the competition and entices people to buy your service rather than the other guy’s. And of course, customers who buy your cable are more likely to buy your higher margin products like data and telephone.

The ability to produce local programming has gotten much easier in recent years due to the cost of cameras dropping significantly. I remember in the not-too-distant past helping local service providers get grants to buy video cameras for local organizations that cost more than $15,000 each. Today, studio quality cameras are handheld and cost a fraction of that old cost.

One of the first hurdles you must cross with local programming is figuring out how to get the content listed in the channel guide with everything else. Many, but not all channel guides allow you to insert your own custom programs.

A number of cable systems carry local programming of some sort, so let me talk about how various companies have gotten local programming onto their cable systems.

Create a Local Network. There is always the expensive way to do things, which is to create a traditional local channel on your cable system. This means you would have some sort of studio and would produce a lot of content to run 24/7. Some companies have done this and consider it successful. Some of the larger cable companies such as Cox have local channels, but there are also smaller companies doing this, like Hiawatha Broadband in Winona, Minnesota, and several large telephone cooperatives in the West. But producing content is expensive and very few companies feel they can afford this option. To be successful, it must be done well.

Let Others Create the Content. There is a less expensive option, which is to let others create the content for you. A number of systems have given a channel to local government, to local churches or to universities. Sometimes these organizations do a great job and sometimes they don’t. Most viewers don’t hold local programming to the same standards as network TV, but shows must have good sound and decent video if they are to attract viewers. One of the most successful local programs I have ever seen was a company that carried a local court, and it seems the DUI cases got good ratings. Many communities have done well broadcasting local high school sports.

Video-on-demand. Another way to carry local programming is not to create a channel, but instead to build a library of local content, assuming your system is capable of video on demand. This way you can not only cover little league or high school sports, but a subscriber can pull up the game where their son hit a home run last summer to show grandma when she visits.

There are other uses of having this kind of VOD library. For instance, you can create a rotating set of content from the library to show in hotels to tell visitors about area attractions. You could do something like the City of Seattle has done and create an index of past government meetings so that somebody can pull up a specific meeting where a specific topic was discussed. You can also pull the best of the VOD content and create a channel where the content plays continuously. But to do this well you need to always refresh the content.

Web TV channels. Finally, there is the newest way to create a channel. Some vendors now make it easy to put web content directly onto your cable system: they let you take any web programming and create a virtual channel, create as many local channels as you like, and put the content into a channel lineup.

This really opens up the world of local content for a service provider. It takes a lot of electronics and eats up system bandwidth to create multiple traditional local channels. But using a web-to-TV interface you can carry almost unlimited channels in one channel slot on your network. Each customer can then just watch what they want out of the lineup because they are getting the content from the web and not broadcast as a ‘channel’ from the hub.

This means that you can give a ‘channel’ to every organization in town that wants one, be that high schools, colleges, churches, governments, non-profits, local businesses, etc. Some of them will do a good job at creating local content and others will not, but the best of them ought to create a great local line-up that your competition won’t have.

This technology also lets you bring in any other content from the web. You can add OTT content like Netflix and Amazon Prime. You can make channels out of YouTube. Or you can add one of the web services that have already tied this kind of web programming together nicely.

So you can create channels that bring together local content plus the best of the web. One idea I have mentioned before is to create a package of local programming, OTT web programming and network channels. Such a package could sell for $20 and be more profitable than your larger cable packages. You can also insert local advertising into local programming, or sign up with somebody like aioTV, which will insert national advertising and share the revenue with you.