Surveys for Grants and Loans

Many of the federal and state grant programs and many broadband lenders want applicants to undertake a survey to quantify the likely success of a new broadband venture. Unfortunately, there are far too many broadband projects being launched that are unable to answer the basic question, “How many customers are likely to buy service from the new network?” There are only two ways to get a reliable answer to that question – a canvass or a statistically valid survey.

A canvass is the easiest to understand and it involves knocking on the doors or calling every potential customer in a market. I’ve seen many clients have good luck with this when overbuilding a small town or a subdivision. A canvass will be most successful when an ISP has all of the facts needed by potential customers such as specific products and prices. Many companies would label the canvass process as pre-selling – getting potential customers to tentatively commit before construction.

The alternative to a canvass is a ‘statistically valid’ survey. Any survey that doesn’t meet the statistically valid test isn’t worth the paper it’s printed on. There are a few key aspects of doing a statistically valid survey:

Must be Random. This is the most important aspect of a valid survey and is where many surveys fail. Random means that you are sampling the whole community, not just a subset of respondents. A survey that is mailed to people or put online for anybody to take is not random.

The problem with a non-random survey is that the respondents self-select. For example, if you mail a survey to potential customers, then people who are interested in broadband are the most likely to respond and to return the completed survey. It can feel good to get back a lot of positive responses, but it’s far more important to hear from those who don’t support fiber.

The whole purpose of doing a broadband survey is to quantify the amount of support – and that also means quantifying those who won’t buy fiber. I’ve seen results from mailed surveys where almost every response was pro-broadband, and of course, that is unlikely. That result just means that the people who aren’t interested in broadband didn’t bother to complete or return the survey. The only way you can put any faith in a mailed survey is if you get so many responses that it approaches being a canvass. A good analogy of the problems with a mail survey would be to stand in front of a grocery store and ask customers if they like to shop there. While there may be a few customers with complaints, such a survey would not tell you anything about how the community feels about that store since the question was not asked to those who don’t shop at the store.

This blog is too short to describe survey methods – but there are specific acceptable techniques for conducting a random survey either by telephone or by knocking on doors. It’s possible to do those tasks non-randomly, so you should seek advice before conducting a phone or door-knocking survey.
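The core idea of random selection can be sketched in a few lines. This is only an illustration, assuming you have a complete list of households in the market (a sampling frame); the household list and phone numbers below are hypothetical, and real-world sampling also has to deal with calling rules and non-response:

```python
import random

# Hypothetical sampling frame: a complete list of phone numbers for every
# household in the market. A real frame must cover the whole community,
# not just people who volunteered or returned a mailer.
all_households = [f"555-{i:04d}" for i in range(5000)]

random.seed(42)  # seeded only so this example is reproducible

# Simple random sample: every household has an equal chance of selection,
# which is what makes the survey "random" rather than self-selected.
call_list = random.sample(all_households, 357)
```

The key property is that the surveyor, not the respondent, decides who gets asked, so people uninterested in broadband are just as likely to end up on the call list as enthusiasts.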

Non-biased Questions. Survey questions must be non-biased, meaning that they can’t lead a respondent towards a certain answer. A question like, “Do you want to save money on broadband?” is worthless because it’s hard to imagine anybody answering no to that question. It’s a lot harder to write non-biased questions than you might think, and bias can be a lot more subtle than that question.

Respondent Bias. People who conduct surveys know that there are some kinds of questions that many respondents won’t answer truthfully. For example, I’ve read that nearly half of applicants lie about their annual income when applying for a credit card. For various reasons people want others to think they earn more than they actually do.

Respondent bias can apply to a broadband survey as well. I’ve learned that you can’t rely on responses having to do with spending. For example, many respondents will under-report what they pay each month for broadband. Perhaps people don’t want the survey taker to think they spend too much.

Respondent bias is one of the reasons that political surveys are less reliable than surveys on more factual topics – respondents may not tell the truth about who they will vote for or how they feel about political issues. Luckily, most people are truthful when asked about non-emotional topics and factual questions, and we’ve found residential broadband surveys to be a great predictor of market interest in broadband.

Survey Fatigue. Respondents have a natural tendency to give up if a survey takes too long. They will hang up on a phone survey or start giving quick and inaccurate answers to get rid of somebody at their door. A survey ought to last no longer than 10 minutes, and the ideal length is closer to five minutes.

The big takeaway from this discussion is that doing a survey the wrong way will likely give you the wrong answer to the basic question of likely market penetration. You’re better off not doing a survey at all than doing one that is not statistically valid. I don’t know if there is anything more deadly in launching a new broadband market than having false expectations of the number of customers that will buy broadband.

The Right Way to do Customer Surveys


Carriers are always being advised to find out what their customers really want, and one of the best tools for doing that is a well-designed survey. I use the term well-designed because you can’t put any credence into the results of a survey that is not done correctly. I see many surveys done incorrectly, and companies then act on results that may not reflect what customers really want. There are a number of steps that must be taken to get an accurate and statistically representative survey.

Adequate sample size. You must complete an adequate number of surveys to get reliable results. Most business surveys are designed for a 95% confidence level with a margin of error of plus or minus 5%. What this means is that if you were to give that same survey to every customer, there is a 95% chance the overall result would fall within five percentage points of your survey result. Most business and political surveys find that to be accurate enough.

There are several online calculators that will determine the size of the sample needed. Most people find the number of needed samples surprising. To get the 95% accuracy, if you have 1,000 customers you need to complete 278 surveys. If you have 5,000 customers you need to survey 357 of them, and with 20,000 customers it’s 377 surveys needed.
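Those counts can be reproduced with the standard sample-size formula plus the finite population correction. This is a sketch, not a replacement for a proper calculator; the function name `sample_size` is mine, and it assumes the usual conservative choices (95% confidence, z = 1.96, worst-case p = 0.5):

```python
import math

def sample_size(population, margin=0.05, z=1.96, p=0.5):
    """Completed surveys needed for a given population and margin of error.

    Standard formula at a 95% confidence level (z = 1.96) with the most
    conservative response proportion (p = 0.5), adjusted downward by the
    finite population correction for small customer bases.
    """
    n0 = (z ** 2) * p * (1 - p) / margin ** 2   # infinite-population sample size (~385)
    n = n0 / (1 + (n0 - 1) / population)        # finite population correction
    return math.ceil(n)

for customers in (1000, 5000, 20000):
    print(customers, sample_size(customers))    # 278, 357, 377
```

Note how little the required sample grows past a few thousand customers; the correction only helps much when the population is small.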

I often see companies conduct surveys that produce far fewer completed surveys than these sample numbers. Such surveys are still valid, but the results are less trustworthy. For example, if you have 5,000 customers and you complete only 100 surveys, the results still carry a 95% confidence level, but only within a margin of plus or minus 10%. That doesn’t sound a lot less accurate, but it doubles the range of uncertainty: the true answer for your whole customer base could be as much as ten percentage points above or below your measured result.
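The widening margin can be checked directly with the margin-of-error formula. Again a sketch under the same assumptions (95% confidence, worst-case p = 0.5, finite population correction); `margin_of_error` is my name for it:

```python
import math

def margin_of_error(n, population, z=1.96, p=0.5):
    """95% margin of error for n completed surveys from a finite population."""
    fpc = (population - n) / (population - 1)   # finite population correction
    return z * math.sqrt(p * (1 - p) / n * fpc)

# 5,000 customers: 100 completed surveys vs. the recommended 357
print(round(margin_of_error(100, 5000) * 100, 1))   # ~9.7, i.e. roughly ±10%
print(round(margin_of_error(357, 5000) * 100, 1))   # ~5.0, i.e. ±5%
```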

Random. For a survey to be valid the people surveyed must be selected at random. If you are surveying your own customers it’s easy not to be random. For example, if you send out a survey in your bills, the results you get back are not random. They are biased by the fact that people who either like or dislike what you asked about are the most likely to respond while people who feel neutral about the topic are likelier not to. The only really reliable way to get a random sample is to call people. And even then you have to choose the numbers randomly.

Calling has several issues. First, you must consider the Do Not Call registry that the federal government has established for people to opt out of getting solicitation calls. You are allowed under these rules to call your own existing customers, but you are not allowed to call potential customers who are on the Do Not Call list.

Second, you need to deal with cell phone numbers since many customers no longer have landlines. To get an adequate sample you need to somehow call people with both types of phones. You are not legally allowed to make solicitation calls to a cell phone number unless the person has given you permission to do so. Hopefully you will have customer records that provide a contact number to call for each customer, and any customer who has given you their number has given you permission to call them even if it is a cell phone number.

Non-biased. The questions you ask must be unbiased, which means that they are not worded in such a way as to elicit a certain response. This is why so many surveys you take seem somewhat bland: the questions are written without flowery adjectives.

Interpreting the Results. Companies often misinterpret the results of a survey. The 95% accuracy that is the goal of the survey only applies to the primary questions you ask. For example, if you gave a valid survey to all of your customers and asked a question such as whether they would be interested in buying cellular service from you, then the response to that question would be accurate at the 95% level. However, companies often try to interpret a survey question at a deeper level. For example, they might look at the results of the same question for men versus women respondents and say that one group is more likely to feel a certain way than another.

And you can’t make those kinds of interpretations with any accuracy. In this example the survey is an accurate representation of how all customers feel because you sampled enough customers to get a statistically valid result. However, if half of the people you surveyed were women, then your survey of women is only half as big as the overall survey, and correspondingly less reliable. In this example it wouldn’t be a lot more unreliable, but if you try to really subdivide the sample population the results become nearly worthless. For instance, if you try to analyze the results by various age groups of ten years each you will find that you can’t rely on the results at all.
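The shrinking-subgroup problem is easy to quantify with the same margin-of-error math. A hedged sketch (my own `margin_of_error` helper; 95% confidence, worst-case p = 0.5, finite population correction), using the earlier example of 357 completed surveys from 5,000 customers:

```python
import math

def margin_of_error(n, population, z=1.96, p=0.5):
    """95% margin of error for n completed surveys from a finite population."""
    fpc = (population - n) / (population - 1)   # finite population correction
    return z * math.sqrt(p * (1 - p) / n * fpc)

# How the margin of error grows as the sample is subdivided
for label, n in [("all respondents", 357),
                 ("women only (about half)", 178),
                 ("one ten-year age group", 36)]:
    print(f"{label}: ±{margin_of_error(n, 5000) * 100:.1f}%")
```

The full sample gives roughly ±5%, the half-sample of women roughly ±7%, and a ten-year age slice roughly ±16%, which is why the age-group breakdown in the text can’t be relied on at all.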

Survey fatigue. Another common problem is survey fatigue where a survey is so long that a lot of people hang up in the middle of it. It’s important to try to keep a survey under five minutes or this will happen a lot.

Summary. If you are going to spend the time and money to do a survey then you should spend the extra effort to do it right. Make sure your sample size is adequate. Make sure the survey is given randomly. Make sure the questions are unbiased. And keep it short.

And as a reminder, CCG has done hundreds of surveys, so one way to make sure it’s done right is to engage us. We can help at every level, from writing the questions, to setting sample sizes, to making the calls.