The Gigabit Dilemma

Cox recently filed a lawsuit against the City of Tempe, Arizona for giving Google more favorable terms as a cable TV provider than what Cox has in its franchise with the city. Tempe took the unusual step of creating a new license category of “video service provider” in establishing the contract with Google. This is different from Cox, which is considered a cable TV provider as defined by FCC rules.

The TV offerings from the two providers are basically the same. But according to the Cox complaint Google has been given easier compliance with various consumer protection and billing rules. Cox alleges that Google might not have to comply with things like giving customers notice of rate changes, meeting installation time frames, and even things like the requirement for providing emergency alerts. I don’t have the Google franchise agreement, so I don’t know the specific facts, but if Cox is right in these allegations then they are likely going to win the lawsuit. Under FCC rules it is hard for a city to discriminate among cable providers.

But the issue has grown beyond cable TV. A lot of fiber overbuilders are asking for the right to cherry pick neighborhoods and to not build everywhere within the franchise area – something that incumbent cable companies are required to do. I don’t know if this is an issue in this case, but I am aware of other cities where fiber overbuilders only want to build in the neighborhoods where enough customers elect to have them, similar to the way that Google builds to fiberhoods.

The idea of not building everywhere is a radical change in the way that cities treat cable companies, but is very much the traditional way to treat ISPs. Since broadband has been defined for many years by the FCC as an information service, data-only ISPs have been free to come to any city and build broadband to any subset of customers, largely without even talking to a city. But cable TV has always been heavily regulated and cable companies have never had that same kind of freedom.

But the world has changed and it’s nearly impossible anymore to tell the difference between a cable provider and an ISP. Companies like Google face several dilemmas these days. If they only sell data they don’t get a high enough customer penetration rate – too many people still want to pay just one provider for a bundle. But if they offer cable TV then they get into the kind of mess they are facing right now in Tempe. To confuse matters even further, the FCC recently reclassified ISPs as common carriers, which might change the rules for ISPs. It’s a very uncertain time to be a broadband provider.

Cities have their own dilemmas. It seems that every city wants gigabit fiber. But if a city allows Google or anybody else in without a requirement to build everywhere within a reasonable amount of time, then it is setting itself up for a huge future digital divide. It is going to have some parts of town with gigabit fiber and the rest of town with something that is probably a lot slower. Over time that is going to create myriad problems within the city. There will be services available in the gigabit neighborhoods that are not available where there is no fiber. And one would expect that over time property values will tank in the non-fiber neighborhoods. Cities might look around fifteen years from now and wonder how they created new areas of blight.

I have no idea if Google plans to eventually build everywhere in Tempe. But I do know that there are fiber providers who definitely do not want to build everywhere, or more likely cannot afford to build everywhere in a given city. And not all of these fiber providers are going to offer cable TV, and so they might not even have the franchise discussion with the city and instead can just start building fiber.

Ever since the introduction of DSL and cable modems we’ve had digital divides. These divides have either been between rich and poor neighborhoods within a city, or between the city and the suburban and rural areas surrounding it. But the digital divide between gigabit and non-gigabit neighborhoods is going to be the widest and most significant digital divide we have ever had. I am not sure that cities are thinking about this. I fear that many politicians think broadband is broadband and there is a huge current cachet to having gigabit fiber in one’s city.

In the past these same politicians would have asked a lot of questions of a new cable provider. If you don’t think that’s true you just have to look back at some of the huge battles that Verizon had to fight a decade ago to get their FiOS TV into some cities. But for some reason, which I don’t fully understand, this same scrutiny is not always being applied to fiber overbuilders today.

It’s got to be hard for a city to know what to do. If gigabit fiber is the new standard then a city ought to fight hard to get it. But at the same time they need to be careful that they are not causing a bigger problem a decade from now between the neighborhoods with fiber and those without.

Service Unavailable

I just finished reading Service Unavailable, a new book by Frederick L. Pilot. It’s a quick and easy read for anybody in the broadband industry and covers the rural broadband crisis we have in the US.

The first two-thirds of the book are a great history of how we got to where we are today. Pilot explains the decisions made by the FCC and by the large ISPs in the country that have brought us to today’s broadband network. In far too many places that network consists of old copper wires built to deliver voice; these have deteriorated with age and are wholly inadequate to deliver any meaningful broadband. The large ISPs have poured all of their money into urban networks and Pilot describes how the networks in the rest of the country have been allowed to slowly rot away from lack of maintenance.

It’s a shame that his book went to press right before the FCC took an action that would have been an exclamation point in Pilot’s story of broadband policies gone amok. The FCC just gave away billions of dollars to the largest telcos to upgrade rural DSL to 10 Mbps download speeds – a speed that is already inadequate today and that will be a total joke by the time the last of the upgrades are done over a six-year period. The FCC seems not to have grasped the exponential growth in consumer broadband demand, which has doubled about every three years since the early 90s.
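To put rough numbers on that mismatch, here is a small sketch. The three-year doubling period comes from the paragraph above; the 25 Mbps starting demand is an illustrative assumption (it happens to match the FCC’s current broadband definition), not a measurement from the book.

```python
# Sketch: if household broadband demand doubles about every three years,
# how does a 10 Mbps DSL target hold up over a six-year buildout?
# The 25 Mbps starting figure is an illustrative assumption.

def demand_after(years, start_mbps=25.0, doubling_years=3.0):
    """Projected household demand after `years`, doubling every `doubling_years`."""
    return start_mbps * 2 ** (years / doubling_years)

for year in (0, 3, 6):
    print(f"Year {year}: ~{demand_after(year):.0f} Mbps demanded vs 10 Mbps delivered")
```

By the time a six-year buildout finishes, the demand curve has quadrupled – which is the sense in which a 10 Mbps target is already a joke before the trucks roll.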

Pilot goes on to recommend a national broadband program that would direct many billions of dollars to build fiber everywhere, much in the same way that the federal government built the interstate highways. He says this is the only way that places outside of the urban areas will ever get adequate broadband.

I certainly share Pilot’s frustration and have had the same thought many times. We could probably build fiber everywhere for the price of building one or two aircraft carriers, and one has to wonder where our national priorities are. As Pilot points out, broadband everywhere would unleash a huge untapped source of creativity and income producing ability in the parts of the country that don’t have good broadband today. And as he points out, many of the places that barely squeak by with what is considered broadband today are going to find themselves on the wrong side of the digital divide within a few years.

But then I stop and consider how federal projects are run and I’m not so sure this is the right answer. I look back at how the stimulus grant programs were run. These grants shoveled large sums of money out the door to build a whole lot of fiber networks that barely brought broadband to anybody. And they did it very inefficiently by forcing fiber projects to pay urban wage rates for projects built in rural counties where the prevailing wages were much lower. And these projects required things like environmental and historical structure studies, things that I had never seen done before by any fiber project.

And then there is the question of who would run these networks. I sure wouldn’t want the feds as my ISP, and I wonder how they would decide who would receive these huge expenditures of federal money. Pilot proposes that such networks be operated as open access networks, a model that has not yet seen any success in the US. It’s a model that works great in Europe, where all of the largest ISPs jump onto any available network in order to get customers. But in this country the incumbents have largely agreed not to compete against each other in any meaningful way.

But beyond the operational issues, which surely could be figured out if we really wanted to do this, one has to ask if this idea can ever get traction with our current government? We have such a huge backlog of infrastructure projects for maintaining roads, bridges, and waterways that one has to wonder how broadband networks would get the needed priority. I have never understood the reluctance of Congress to tackle infrastructure because such expenditures mostly translate to wages, which means full employment, lots of taxes paid, and a humming economy. But we’ve just gone through over a decade of gridlock, and I have a hard time seeing anything but more of the same as we seem to grow more divided and partisan every year.

Still, Pilot is asking exactly the right questions. Unfortunately, I am not sure there can ever be that one big fix-it-all solution that will solve the broadband crisis. I completely agree with Pilot that there should be such a solution and I also agree that this is badly needed. We are quickly headed towards a day of urban America with gigabit speeds and the rest of the country with 10 Mbps DSL, a wider broadband gap than we have ever had. And all we have is an FCC that is shoveling money out the door for band-aid fixes to DSL networks on old copper.

So I’m not sure that the solution suggested by Pilot can be practically implemented in today’s political world, but it is one possible solution in a world where very few others are proposing a way to fix the problem. I would think that the industry could figure out a workable solution if there was any real inclination to do so. But instead, I fear we are left with a world of large corporations running our broadband infrastructure who are more interested in quarterly earnings than they are about reinvesting in the future. If I could, I would wish for a more utopian society where we could put Pilot and other thinkers into a room together until they worked out a solution to our looming broadband crisis.

A New PON Technology

Now that many fiber competitors are providing gigabit Ethernet to a lot of customers we have started to stress the capability of the existing passive optical network (PON) technology. The most common type of PON network in place today is GPON (gigabit PON). This technology shares 2.5 gigabits of download data among up to 64 homes (although most providers put fewer customers on a PON).
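As a rough back-of-envelope on what that sharing means, the guaranteed per-home share at various split sizes can be computed directly. This is the worst-case figure if every home maxed out at once; real usage is bursty, which is why oversubscription works in practice.

```python
# Worst-case downstream share per home on a GPON. The 2.5 Gbps figure is
# the nominal rate used in the text (actual GPON downstream is 2.488 Gbps).

def per_home_share_mbps(homes_on_pon, downstream_mbps=2500):
    """Guaranteed downstream per home if every home demands data at once."""
    return downstream_mbps / homes_on_pon

for homes in (16, 32, 64):
    print(f"{homes} homes per PON -> ~{per_home_share_mbps(homes):.0f} Mbps each at full load")
```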

My clients today tell me that their gigabit customers still don’t use much more data than other customers. I liken this to the time when the industry provided unlimited long distance to households and found out that, on the whole, those customers didn’t call a lot more than before. As long as you can’t tell a big difference in usage between a gigabit customer and a 100 Mbps customer, introducing gigabit speeds alone is not going to break a network.

But what does matter is that all customers, in aggregate, are demanding more downloads over time. Numerous studies have shown that the amount of total data demanded by an average household doubles about every three years. With that kind of exponential growth it won’t take long until almost any network will show stress. But added to the inexorable growth of data usage is a belief that, over time, customers with gigabit speeds are going to find applications that use that speed. When gigabit customers really start using gigabit capabilities the current PON technology will be quickly overstressed.
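As a hypothetical illustration of how quickly that doubling eats into a GPON, consider a 32-home split where every home averages some busy-hour demand today. Both the 32-home split and the 5 Mbps busy-hour average below are assumed starting points for the sketch, not measured figures.

```python
import math

# Years until aggregate busy-hour demand on one PON exceeds its shared
# ~2.5 Gbps of downstream, assuming demand doubles every three years.
# The starting figures below are illustrative assumptions.

def years_until_full(homes=32, avg_busy_mbps=5.0, capacity_mbps=2500.0,
                     doubling_years=3.0):
    aggregate_today = homes * avg_busy_mbps          # 160 Mbps at these inputs
    doublings_left = math.log2(capacity_mbps / aggregate_today)
    return doublings_left * doubling_years

print(f"~{years_until_full():.0f} years of headroom at these assumptions")
```

At these inputs a PON has roughly a dozen years of headroom, but heavier starting usage, bigger splits, or gigabit applications actually being used would shrink that number fast.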

Several vendors have come out with a new PON technology that has been referred to as XGPON or NGPON1. This new technology increases the shared data stream to 10 gigabits. The primary trouble with this technology is that it is neither easily forward nor backward compatible. Upgrading to 10 gigabits means an outlay for new electronics for only a 4-times increase in bandwidth. I have a hard time recommending that a customer with GPON make a spendy upgrade for a technology that is only slightly better. It won’t take more than a decade until the exponential growth of customer demand catches up to this upgrade.

But there is another new alternative. Both Alcatel-Lucent and Huawei have come out with next-generation PON technology that uses TWDM (time and wavelength division multiplexing) to put multiple PONs onto the same fiber. The first generation of this technology creates four different light pathways using four different ‘colors’ of light. This is effectively the same as a 4-way node split in that it creates a separate PON for the customers assigned to a given color. Even if you had 64 customers on a PON this technology can instead provide four separate PONs of 16 customers. And with 32 customers this becomes an extremely friendly 8 customers per PON.

This new technology is being referred to as NGPON2. Probably the biggest benefit of the technology is that it doesn’t force a migration and upgrade onto existing customers. Those customers can stay on the existing color while you migrate or add new customers to the new colors. But any existing customer that is moved onto a new PON color would need an upgraded ONT. The best feature of the new technology is that it provides a huge upgrade in bandwidth and can provide either 40 Gbps or 80 Gbps of download capacity per existing PON.
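The wavelength arithmetic above can be sketched directly; the split sizes mirror the examples in the text, and the 10 Gbps per wavelength is the rate cited for this first generation of gear.

```python
# TWDM stacks several 10 Gbps wavelengths on one fiber, so each wavelength
# carries its own smaller PON and total downstream multiplies accordingly.

def ngpon2_split(homes_on_fiber, wavelengths=4, gbps_per_wavelength=10):
    homes_per_pon = homes_on_fiber // wavelengths
    total_gbps = wavelengths * gbps_per_wavelength
    return homes_per_pon, total_gbps

for homes in (64, 32):
    per_pon, total = ngpon2_split(homes)
    print(f"{homes} homes over 4 wavelengths -> {per_pon} homes per PON, "
          f"{total} Gbps total downstream")
```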

This seems like a no-brainer for any service provider who wants to offer gigabit as their only product. An all-gigabit network is going to create choke points in a traditional PON network, but as long as the backbone bandwidth to nodes is increased along with this upgrade it ought to handle gigabit customers seamlessly (when they actually start using their gigabit).

The big question is when does a current provider need to consider this kind of upgrade? I have numerous clients who provide 100 Mbps service on PON who are experiencing very little network contention. One strategy some of them are considering with GPON is to place gigabit customers on their own PON and limit the number of customers on each gigabit PON to a manageable number. With creative strategies like this it might be possible to keep GPON running comfortably for a long time. It’s interesting to see PON providers starting to seriously consider bandwidth management strategies. It’s something that the owners of HFC cable networks have had to do for a decade, and it seems that we are getting to the point where even fiber networks can feel stress from bandwidth growth.

Technology and the News

The Reuters Institute just published its Digital News Report 2015. This report looks at a number of different countries to understand how people access news and how much they trust the news that they access.

One thing that is obvious in reading this report is that, while countries vary, overall there has been a major transition in the ways people access news from traditional TV and print news to online news.

Consider the percentage of people who still primarily get news from television. This is highest in France (58%) and Germany (53%), but in the US only 40% of people now list TV as their primary source of news. And in very online Finland this has dropped to 30%.

Newspapers have taken a beating over the last few decades and are nearly irrelevant as a source of news. Japan has the highest percentage of people who still rely on newspapers at 14%, but most countries are much closer to the 5% figure in the US.

The big shift worldwide has been to get news online, either from various online news sources or from social media sites. In the US 43% of people now get news online while another 11% get news primarily from social media sites like Facebook or Twitter. Germany (23% and 5%) and France (29% and 5%) are the two countries with the lowest percentage of those using online news, but those two countries have also been the slowest in accepting smartphones.

There is a clear difference by age in where people get news. For example, across all countries 60% of those aged 18–24 get news online but only 22% of those over 55 do. And only 27% of those 18–24 get news from TV, while for those over 55 it’s 54%. It’s clear from watching these trends over the last few years that within a few decades TV news is going to be headed towards the same irrelevance as newspapers.

A lot of these trends are due to the amount of trust that people place on news sources. People in Finland (68%), Brazil (62%), and Germany (60%) generally trust their sources of news while in Italy (35%), Spain (34%), and the US (32%) people generally don’t trust the most common sources of news.

The newest and fastest growing source of news is social media. An astounding 41% of the people in the US have used Facebook for news within the last week of being surveyed. This was followed by YouTube (18%), Twitter (11%), and WhatsApp (9%).

I know my own news viewing habits have changed over the years. I was traditionally a voracious newspaper reader and I often subscribed to three or more papers. No matter where I lived I tried to read the Washington Post and New York Times, even if it was only the Sunday edition. I rarely watched TV news and I listened to radio news when driving.

But my news habits today are very different. I still never watch TV news. When I drive I listen almost exclusively to talk radio, which gives me some deeper analysis and commentary on the news. I get most of my worldwide news on my smartphone from various news apps like Flipboard and Pulse. I skim these often and save the most interesting articles to Pocket to read later. I get industry news from Flipboard and Twitter as well as from numerous industry newsfeeds from organizations that gather tech and telecom articles. I still subscribe to my local small-town newspaper and use it to keep up with local news and local sports. No online source has my strong loyalty, and I switch news sources as I find ones I like better. While I don’t think of Facebook as a source of news, I do use it as the one place where I sometimes comment on the news.

The beauty today is that everybody can tailor a news experience to fit their interests. However, the downside to the wide variety of choices is that people are tending more and more to read only news sources that reinforce their world view. A lot of social scientists say that the trend of getting news online has been a major contributing factor to the political polarization we have in the country. That may be true, and I make a point of looking at multiple sources of news in an effort to avoid that.

But I know I never want to go back to the old ways. I feel that online news has given me the ability to know a lot more about what is going on in the world while also letting me dig really deeply into topics that most interest me. I remember the old days when you might see an article about something interesting that was happening in another country and then you had no way to easily find out more about it.

Our Internet Infrastructure

Paul Barford, a professor at the University of Wisconsin, led an effort to map the major routes used by the Internet in the U.S. He believes that knowledge of the map can help us plan to make the Internet less susceptible to natural disasters, accidents, or intentional sabotage.

I can remember two times when the Internet backbone took a serious hit in this country, and both were in 2001. First, a 60-car CSX train derailed in the Howard Street tunnel in Baltimore, and the resulting fire melted a lot of fiber cables on the east coast north-south route. Then later that year on 9/11, the twin World Trade Center towers collapsed, taking out a major carrier hotel and data center in Manhattan.

And there is no reason to think that we won’t have more disasters. Looking at the map, my first reaction is how few routes there are in the main backbone.

[Map of the Internet’s major backbone routes]

Professor Barford hopes the map will spur conversation about the need for more route diversity. The Department of Homeland Security agrees and is publishing the map and making the details of the routes available to government, public, and private researchers.

Some might say that publishing such a map makes us more vulnerable. I don’t think it does. Everybody in the industry knows the addresses of the main Internet POPs since those are the end points of the data connections that ISPs buy to connect to the Internet. And I didn’t really need this map to know that the major routes of fiber mostly follow the Interstate highways. In Florida, where I live, there is a route on I-95 on one side of the state and I-75 on the other with a spur to Orlando. I doubt that anybody here in the industry didn’t already know that.

The one thing that strikes me about the map is that once you get off the major big-city routes, many of the smaller US markets have only one route into and out of their hub; it doesn’t look that hard to isolate some markets with a couple of fiber cuts. I know that some of the carriers involved in the backbone have contingency plans that don’t show up on this map, and there are other fiber routes that can pick up the slack fairly soon after a major Internet outage in most places.
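One way to make the “one route in and out” problem concrete is to model a backbone as a graph and find its bridge edges – links whose single failure disconnects part of the network. The toy topology below is invented for illustration; the city names and links are not taken from the actual map.

```python
# Find "bridge" links in a network graph: edges whose single failure
# disconnects part of the network (Tarjan's lowpoint DFS).

def find_bridges(graph):
    index, low, bridges = {}, {}, []
    counter = [0]

    def dfs(node, parent):
        index[node] = low[node] = counter[0]
        counter[0] += 1
        for neighbor in graph[node]:
            if neighbor == parent:
                continue
            if neighbor not in index:
                dfs(neighbor, node)
                low[node] = min(low[node], low[neighbor])
                if low[neighbor] > index[node]:
                    bridges.append((node, neighbor))  # cutting this isolates neighbor's side
            else:
                low[node] = min(low[node], index[neighbor])

    for node in graph:
        if node not in index:
            dfs(node, None)
    return bridges

# Invented topology: a ring of core cities plus one spur market.
backbone = {
    "CityA": ["CityB", "CityC"],
    "CityB": ["CityA", "CityC", "SpurTown"],
    "CityC": ["CityA", "CityB"],
    "SpurTown": ["CityB"],  # a single route in and out
}
print(find_bridges(backbone))  # only the CityB-SpurTown link is a bridge
```

The ring survives any single cut, but the spur market goes dark if its one link fails – exactly the pattern the map shows for smaller hubs.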

The other thing you realize about this network is that it wasn’t really designed—it grew organically. The network takes the shortest path between major markets using major roads and thus follows the routes built by the first fiber pioneers in the 80s and early 90s.

Hopefully this map spurs the carriers to get together and plan a more robust backbone going into the future. It’s very easy to get complacent about a network that is functioning, but this map highlights a number of vulnerable points in the network that could be improved. This kind of planning was undertaken by the large electric grids after a number of power outages a decade ago. Let’s not wait for major Internet outages to get us to pay attention to making the network safer and more redundant.


Programmers Hate Skinny Bundles

I read several reports from the current International Broadcasting Convention in Amsterdam that there is a lot of talk among programmers about a dislike of the skinny bundles being offered by companies like Sling TV. This is a convention of mostly programmers and companies that produce content. FierceCable reported on the convention and wrote an article titled Execs from Discovery, Roku and others warn the skinny bundle will hamper content creation.

I can understand the perspective of the programmers. Consider Discovery. They are one of the more egregious programmers when it comes to making cable companies take all of their content. Discovery benefits tremendously from the bundle because given a choice, many cable providers would elect to not carry at least some of the many Discovery networks.

There is no doubt that the move to skinny bundles is going to be bad for programmers like Discovery as they lose revenues on many of their networks. Discovery currently has 13 different networks in the US and a few more internationally. And obviously skinny bundles like Sling TV won’t elect to carry many, or even any, of them.

But Discovery and the other networks are trying to swim against the tide if they think there is any way to stop the move towards smaller line-ups. It’s what people want. Numerous studies have shown that most households only watch a very small fraction of the 200 or 300 channels that are delivered to them in the big bundles. And people in general are getting fed up with paying for all of them.

Netflix and Hulu got this all started by letting people watch individual shows rather than networks. And that is what people really want. They create a loyalty to a given show much more than to a network. Interestingly, Discovery takes advantage of this trend already and some of their series like MythBusters, How It’s Made, and River Monsters are available on Netflix.

The real question being raised in Amsterdam is if the trend towards skinny bundles is going to stifle the creation of unique content. It’s a good question and only time will tell. My gut says that it is not going to cut down on the making of good new content because there are profits to be made from coming up with a popular show.

What might change is who is making the content. There is no doubt that over time the move to skinny bundles will hurt traditional programmers like Discovery. They may have to shut down some of their networks if not enough people are willing to pay for them. But these networks were only created in the first place in the artificial environment where millions of homes were guaranteed to pay for a new network. One of the primary reasons that the big bundles are breaking apart today is the greed of the programming conglomerates that created and forced numerous new networks on the cable companies. What we are now seeing is that with the Internet people have the ability to push back against the crazy big bundles they have been forced to buy.

So it is quite possible that a company like Discovery will lose a lot of money compared to what they make today, and perhaps as part of that transition they won’t produce as much unique content. But I think that somebody else will. We already see companies like Netflix producing new content. There are even rumors about Apple producing content.

As long as content can make a lot of money, people are going to take a chance for the big bucks. One has to remember that most unique content doesn’t make money today. Many movies don’t recover the cost of producing them if the public doesn’t like them. When these companies talk about creating new content, what they really are talking about is producing hits. One very successful series or movie can produce a huge profit for the producer of the content. As long as that big carrot is dangled there are going to be many who are going to chase the big dollars.

I really didn’t mean to pick specifically on Discovery and they are just an example. You could substitute any of the other large network conglomerates above and it’s the same conversation. The fact is, content delivery is changing and there is going to be fallout from that change. It’s likely over time that some of the existing large conglomerates might go under or disappear. That is the consequence of this kind of fundamental change. But it’s happened to many other industries over the last decades and there won’t be anybody lamenting the fall of a Discovery any more than people are nostalgic about Kodak. All people are really going to care about is that they can watch content they like and they aren’t really going to care much about who created it or who profits from it.

We are Almost at the Tipping Point

There is an amazing amount of progress going on in numerous fields that affect our daily lives.

Earlier this year, the World Economic Forum’s Global Agenda Council on the Future of Software and Society polled experts to ask when they expected major new technologies to hit a tipping point – the point past which a technology becomes a mainstream norm. The list of technological changes that are predicted for the next ten years is astounding. Historians will probably consider this coming decade as the time when mankind moved past the era of the Industrial Revolution into the Computer and Software Age.

We live in a time when it has become routine to expect rapid changes and improvements in the way we do things. But taken altogether, we will hit a tipping point with so many new technologies that lives will be decidedly different a decade from now. Following are the changes that this group foresees on the immediate horizon, along with a prediction of when each change passes its tipping point and becomes part of our daily lives. Note that each tipping point listed is not the only event that could move a technology into the mainstream, but is instead a good example.

  • Implantable Technology – Tipping point when the first implantable mobile phone is commercially available. Expected date: 2023.
  • Personal Digital Presence – Tipping point when 80% of people in the world have a digital presence on the web. Expected date: 2023.
  • Vision as the New Interface – Tipping point when 10% of reading glasses connect to the internet. Expected date: 2023.
  • Wearable Internet – Tipping point when 10% of people wear clothes connected to the Internet. Expected date: 2022.
  • Ubiquitous Computing – Tipping point when 90% of the world’s population has access to the Internet. Expected date: 2024.
  • A Supercomputer in Your Pocket – Tipping point when 90% of the world’s population has a smartphone. Expected date: 2023.
  • Storage for All – Tipping point when 90% of people have unlimited and free (advertising-supported) data storage. Expected date: 2025.
  • The Internet of Things – Tipping point when 1 trillion sensors are connected to the Internet. Expected date: 2022.
  • The Connected Home – Tipping point when over 50% of broadband to homes is used for appliances and devices. Expected date: 2024.
  • Smart Cities – Tipping point when the first city with more than 50,000 people has no traffic lights. Expected date: 2026.
  • Big Data for Decisions – Tipping point when the first government replaces a census with big data. Expected date: 2023.
  • Driverless Cars – Tipping point when driverless cars are 10% of the vehicles on the road. Expected date: 2026.
  • Artificial Intelligence and Decision-Making – Tipping point when an AI sits on a major corporate board. Expected date: 2026.
  • AI and White-Collar Jobs – Tipping point when 30% of corporate audits are done by AI. Expected date: 2025.
  • Robotics – Tipping point when the first robotic pharmacist is working in the US. Expected date: 2021.
  • Bitcoin and Blockchain – Tipping point when 10% of gross domestic product is stored on blockchains. Expected date: 2027.
  • The Sharing Economy – Tipping point when more global trips are made by car sharing than in private cars. Expected date: 2025.
  • Governments and Blockchain – Tipping point when tax is collected for the first time via blockchain. Expected date: 2023.
  • Printing and Manufacturing – Tipping point when the first 3D-printed car is in production. Expected date: 2022.
  • 3D Printing and Health – Tipping point when the first transplant of a 3D-printed liver takes place. Expected date: 2024.
  • 3D Printing and Consumer Products – Tipping point when 5% of consumer goods are printed in 3D. Expected date: 2025.

Even if only a large fraction of these changes happen when predicted, it is going to be a very different world a decade from now. This kind of list is almost overwhelming. I will probably write future blogs about a few of the changes that I find the most intriguing. One thing is for sure – hang onto your seats, the whole world is about to enter a new age.

Getting Access to Existing Fiber

Frontier, the incumbent in West Virginia that bought the property from Verizon, is fighting publicly with Citynet, the biggest competitive telco in the state, about whether Frontier should have to share dark fiber.

Dark fiber is just what it sounds like – fiber that has not been lit with electronics. Most fibers that have been built have extra pairs that are not used. Every fiber provider needs some extra pairs for future use in case some of the existing lit pairs go bad or get damaged too badly to repair. And some other pairs are often reserved for future construction and expansion needs. But any pairs above some reasonable safety margin for future maintenance and growth are fiber pairs that are likely never going to be used.

The FCC has wrangled with dark fiber in the past. The Telecommunications Act of 1996 included language that required the largest telcos to lease dark fiber to competitors. The FCC implemented this a few years later and for a while other carriers were able to lease dark fiber between telephone exchanges. But the Bell companies attacked these rules continuously and got them so watered down that it became nearly impossible to put together a network this way. But it is still possible to lease dark fiber using those rules if somebody is determined enough to fight through a horrid ordering process from a phone company that is determined not to lease the dark fiber.

The stimulus grant rules also required that any long-haul fibers built with free federal money must provide for inexpensive access to competitors willing to build the last mile. I don’t know the specific facts of the Citynet dispute, but I would guess that the stimulus fiber is part of what they are fighting over.

The stimulus grant in West Virginia was among the oddest and most corrupt of all of the stimulus grants that were awarded. The grant went originally to the State of West Virginia to build a fiber line that would connect most counties with a fiber backbone; there were similar fiber programs in other states. But in West Virginia, halfway through construction, the network was simply ‘given’ to Verizon, which was the phone company at the time. The grant was controversial thereafter. For instance, the project was reduced from 915 miles to 675 miles, yet the grant was not reduced from the original $42 million. That works out to a whopping $62,000 or so per mile, compared to similar stimulus grants that cost around $30,000 per mile.

According to the federal rules under which the fiber was built, Citynet and any other competitor is supposed to get very cheap access to that fiber if they want to use it for last-mile projects. If they don’t get reasonable access, those grants allowed for the right to appeal to the FCC or the NTIA. However, the stimulus grants were not specific about whether this access was to be dark fiber or bandwidth on lit fiber.

But this fight raises a more interesting question. Almost every long-haul fiber route that has been built contains a lot of extra pairs of fiber. As I noted in another recent blog, most rural counties are already home to half a dozen or more fiber networks that almost all contain empty, unused fiber.

We have a rural bandwidth problem in the country due to the fact that it’s relatively expensive to build fiber in rural places. Perhaps if the FCC really wants to solve the rural bandwidth shortage they ought to take a look at all of the dark fiber that is already sitting idle in rural places.

It would be really nice if the FCC could require any incumbent – be that a cable company, telco, school system, state government, etc. – that has dark fiber in rural counties to lease it to others for a fair price. This requirement could be made to apply only in places where a lot of households don’t have access to FCC-defined broadband.

We don’t actually have a fiber shortage in a lot of places – what we have instead is a whole lot of fiber, built on public rights-of-way, that is not being used and is not being made available to those who could use it. It’s easy to point the finger at companies like Frontier, but a lot of the idle fiber sitting in rural places was built by government entities, like a school district or a Department of Transportation, that are not willing to share it with others. That sort of gross waste of a precious resource is shameful, and there ought to be a solution that makes truly idle fiber available to those who would use it to bring broadband to households that need it.

Installing Fiber in Conduit

I thought I would take a break today from complaining about the FCC and instead talk about how fiber is put into conduit. I know a lot of the people who read this blog are not technical, and I figured some of you would want to know a little more about how fiber actually gets to where it’s going.

I’m looking specifically today at fiber placed in conduit. Conduit is used when fiber is installed underground in an environment where you want to either protect the fiber from damage or be able to easily get to the fiber in the future. It’s possible to bury fiber directly, but most carrier-class fiber routes use conduit. There are three basic options for getting fiber through a conduit – pulling, pushing, and blowing.

In the first step of the installation process a conduit will be buried in the ground. Some conduit consists of a large empty tube that can hold multiple fibers. But today it’s becoming more common to use what is called innerduct conduit, which contains multiple smaller tubes inside of a larger conduit.

For long outdoor fiber runs the primary method used to install fiber in conduit is pulling; in fact, if you are installing high-count or heavier fiber cable, this is the only real option. Conduits made for this purpose come with factory-installed pull cords inside. The pulling process consists of tying the fiber to the cord at one end of a run of conduit and then pulling the cord out from the opposite end. For long fiber runs the pulling is done with specialized equipment that pulls steadily and evenly to minimize any damage to the fiber. Fiber is strong, but it can be damaged during installation, which is why it’s essential, before accepting a new run of fiber, to test it by shining a laser through it to make sure the fiber survived the installation process. Damage from pulling is probably the number one cause of latent fiber problems on long fiber routes.

For short fiber runs, such as inside a central office or a home, the fiber can be pulled by hand. While fiber is quite flexible, it can be damaged by pulling it around tight bends or other impediments.

Pushing fiber is a technique that is only used for short runs of fiber. It’s exactly what it sounds like: you feed the fiber into one end of an empty conduit and push it through, hoping there are no snags or tight bends that keep it from making it the whole way. Pushing fiber is the gentlest method since it puts the least amount of stress on the fiber. For runs that are a bit longer but still pushable, there are pushing tools that apply steady, constant pressure to force the fiber through the conduit.

Blowing fiber is perhaps the most interesting method. It involves using equipment at both ends of the conduit to be filled. A machine forces air into one end of the conduit, raising the air pressure, while at the opposite end another machine draws air out of the conduit to lower the pressure. The difference in air pressure draws the fiber through the conduit.

Blowing fiber can be used on longer routes as long as the fiber to be fed is not too heavy, perhaps 8 or 12 pairs of fiber. It’s vital when blowing fiber for longer distances to have conduit with very low-friction lining and no physical impediments.

Both pulling fiber and blowing fiber take specialized equipment and require specific techniques to get the fiber through the conduit both quickly and safely. If you watch a fiber installation team and they are just sitting somewhere along the road, chances are that they are not being idle but are instead pulling or blowing the fiber through the conduit. All of these methods require knowledge and skill to do right without harming the fiber.
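The trade-offs among the three methods can be summarized in a simple decision sketch. The distance and fiber-count thresholds below are illustrative assumptions only – real jobs depend on the conduit condition, the number of bends, and the equipment on hand:

```python
def pick_install_method(run_meters, fiber_pairs, has_pull_cord=True):
    """Suggest a conduit installation method: 'push', 'blow', or 'pull'.

    Heuristic sketch only -- the thresholds are assumed for
    illustration, not taken from any standard.
    """
    if run_meters <= 50:
        # Short runs (e.g. inside a building) can simply be
        # pushed by hand with the least stress on the fiber.
        return "push"
    if fiber_pairs <= 12:
        # Lighter cables can be blown long distances through
        # low-friction conduit using an air-pressure differential.
        return "blow"
    if has_pull_cord:
        # Heavy, high-count cable generally has to be pulled with
        # equipment that applies steady, even tension.
        return "pull"
    return "pull (after installing a pull cord)"

print(pick_install_method(30, 24))     # push
print(pick_install_method(2000, 8))    # blow
print(pick_install_method(2000, 144))  # pull
```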

New Science for September 2015

I titled this blog New Science because the breakthroughs I’m covering today are scientific breakthroughs that will require a lot of effort to turn into usable technology. But the potential for these breakthroughs is immense.

Massless Particles. A team led by Zahid Hasan at Princeton has found a massless particle called the Weyl fermion that could lead to much faster electronics in the future. The particle had been predicted but never before observed. Fermions are the class of elementary particles that includes electrons; the Weyl fermion is unusual among them in carrying no mass.

These Weyl fermions can carry a charge much more efficiently than normal electrons and could be used instead of electrons to power electronics. Electrons act erratically and bounce all over the place, but it’s believed that Weyl fermions would be far more predictable while carrying a charge, and could travel as much as 1,000 times faster through normal semiconductors. The Weyl fermion’s spin is both in the same direction as its motion (which physicists call ‘right-handed’) and opposite its direction (‘left-handed’) at the same time. This means that all the fermions move in exactly the same way and can traverse obstacles that scatter normal electrons.

What’s best about these particles is that the researchers found them in a synthetic crystal, which means that they can be easily produced, while many other exotic particles only exist due to the high energy of a particle accelerator.

Practical Superconductor. Scientists at the Max Planck Institute in Germany have created a superconductor that works at a reasonably warm temperature. In the past, superconductors have needed extremely cold temperatures of around -220 degrees Fahrenheit to work – far too cold for practical applications. The new superconductor works at -90 degrees Fahrenheit, a temperature that is found in nature in Antarctica, and by far the warmest temperature at which the superconducting effect has been produced.
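To put those Fahrenheit figures into the Celsius and Kelvin units physicists usually quote, a quick conversion using the standard formulas:

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5 / 9

def f_to_kelvin(f):
    """Convert degrees Fahrenheit to Kelvin."""
    return f_to_c(f) + 273.15

# Older superconductors: about -220 F
print(round(f_to_c(-220), 1))      # -140.0 (degrees C)

# The new hydrogen sulfide result: about -90 F
print(round(f_to_c(-90), 1))       # -67.8 (degrees C)
print(round(f_to_kelvin(-90), 1))  # 205.4 (K)
```

So the jump from the old record to the new one is roughly 70 degrees Celsius – a big step toward the room-temperature goal mentioned below.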

The new superconductor uses hydrogen sulfide, the rotten-egg-smelling gas, compressed under enormous pressure. Scientists believe that if they can find the right materials they can eventually create superconductors that work at room temperature. If so, electronics and computers could be made far more efficient.

Transporting Light over Distance. Researchers at the Universities of Bayreuth and Erlangen-Nuremberg in Germany have demonstrated how nanofibers might be used to transport light energy efficiently across great distances. All of today’s technologies scatter light to some degree, which means that when generating solar power we have to convert light to electricity at the site where the light is collected.

The nanofibers were made from building blocks called carbonyl-bridged triarylamine, enhanced by inserting three naphthalimide-bithiophene chromophores. The building blocks automatically assembled themselves into fibers 4 micrometers long with a diameter of only 0.005 micrometers. The fibers naturally align face-to-face and can transfer light energy with almost no loss. The result is light being transmitted in a wave-like manner called quantum coherence. The potential of this technology would be to gather light in one place and transmit it to a place where the conversion to electricity can be done more efficiently.

Quantum Dot Solar Windows. A team from the Center for Advanced Solar Photophysics (CASP) at Los Alamos and the Department of Materials Science at the University of Milan-Bicocca (UNIMIB) in Italy has developed a way to generate solar energy from clear window glass. They showed this can be done using colorless, heavy-metal-free colloidal quantum dots that act as luminescent solar concentrators (LSCs).

The clear quantum dots are embedded within the glass and aligned in such a way that they direct some of the light to receivers at the edges of the glass panes. The quantum dots are a huge leap forward in LSC technology: previous LSCs used either organic emitters that were inefficient or heavy-metal emitters that were toxic and dangerous to put into the environment. The new quantum dots are made from copper, indium, selenium, and sulfur, all of which are routinely found in the environment and safe. The dots are several orders of magnitude more efficient than earlier technologies and could turn any window into a solar collector.