Is the FCC Going to Nationalize the Media?

I ran across an article that just has me shaking my head. It's by Kurt Nimmo at a site called Alex Jones' InfoWars. This article claims that the FCC is doing a study as a precursor to nationalizing the private media sector, including the Internet.

I certainly understand why people feel a little paranoid about the government right now due to the gigantic data gathering being done by the NSA. But it takes an incredible level of paranoia to think that the US government is somehow planning to take over every TV station, radio station, newspaper and even the Internet. Of course, one gets the first clue that this article is a bit paranoid and biased when the very first sentence uses the word 'sovietization'.

So what the heck is this guy talking about that would give him the idea that the FCC was ready to take over the communications world? It starts with a study that the FCC is undertaking through Social Solutions International. I have included here this Research Design Study from SSI so that you can see it for yourself. This document is not the results of the FCC study, but instead is a description of the study that is currently being undertaken, to be published sometime in 2014.

My firm CCG Consulting does market surveys and so I am pretty conversant with the kind of jargon that is used to talk about statistical sampling and market research. And this document is massively jargon-laden and it takes some reading between the lines to figure out exactly what they are doing.

So what is this study trying to find out? They are basically after two things. First, they want to understand better where people go to get their news. The study refers to this as 'critical information needs', but it basically boils down to where people go to find out what is happening in the world – and that is news.

Second, the study is looking at random local markets to do a qualitative analysis of information that is made available to the public. This is the part that has Mr. Nimmo so paranoid because it is going to look at local newspapers, radio broadcasts and TV and judge them according to accuracy, fairness, bias, etc. And somehow they are going to try to do the same thing with the Internet.

But it’s a long stretch to say that the FCC is using this study as a precursor to taking over media. That is a monstrous break in logic and out of touch with reality and with the relatively weak nature of the FCC.

So what do I think of this study? It certainly is within the purview of the FCC to periodically look at how people communicate in the country. After all, they are in charge of monitoring and regulating those very industries.

But I don’t think this particular study is going to be very effective or turn up anything of much interest. Certainly it is going to give us a peek at where people go to get information today. But one would have to think that companies like Google know far more about that today than what this study is going to uncover.

And I have little hope that the qualitative analysis is going to uncover anything that will be statistically valid and have any relevance for the whole country. It would make a lot more sense to study a tiny handful of markets in complete depth over a long period of time if somebody really wants to understand the barriers and misinformation in place today in local media. Those kinds of local studies are best done by academia. This study doesn't look to me to be as thorough and rigorous as those kinds of studies can be.

And so my expectation is that this study is going to generate a few headlines next year highlighting whatever claims the study makes, and then it will go on the shelf. It's not likely to have much impact on FCC policy and it certainly is not going to be the catalyst for the FCC somehow taking over the US media (not sure how they would do that even if they wanted to).

But unfortunately in the Internet age people like Mr. Nimmo can stir up paranoia and animosity towards the government over what, in this case, looks more like an expensive boondoggle. There are certainly things that I don’t like about the FCC, being an industry person, but I am not too worried that they are out to conquer the world a la Pinkie and the Brain.

The Right Way to do Customer Surveys


Carriers are always being advised to find out what their customers really want, and one of the best tools to do that is a well-designed survey. I use the term well-designed because you can't put any credence in the results of a survey that is not done correctly. I see many surveys done incorrectly, with the result that a company acts on findings that may not reflect what customers really want. There are a number of steps that must be taken to get an accurate and statistically representative survey.

Adequate sample size. You must complete an adequate number of surveys to get reliable results. Most business surveys are designed to get results that are 95% accurate, plus or minus 5%. What this means is that if you were to give that same survey to every customer you would expect the same results within that range of accuracy. Most businesses and political surveys find that to be accurate enough.

There are several free online calculators that will compute the size of the sample needed. Most people find the number of needed samples surprising. To get 95% accuracy, if you have 1,000 customers you need to complete 278 surveys. If you have 5,000 customers you need to survey 357 of them, and with 20,000 customers it's 377 surveys.
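Those sample sizes can be reproduced with the standard sample-size formula (Cochran's formula with a finite population correction). Here's a minimal Python sketch, assuming the usual 95% confidence z-score of 1.96, a ±5% margin, and maximum variability (p = 0.5):

```python
import math

def sample_size(population, z=1.96, margin=0.05, p=0.5):
    """Cochran's sample-size formula with finite population correction."""
    # Sample size for an infinite population
    n0 = (z ** 2) * p * (1 - p) / (margin ** 2)
    # Adjust downward for a finite customer base
    return math.ceil(n0 / (1 + (n0 - 1) / population))

for customers in (1000, 5000, 20000):
    print(customers, sample_size(customers))  # 278, 357, 377
```

The results match the figures in the paragraph above: 278, 357 and 377 completed surveys respectively. Note how little the required sample grows as the customer base grows – that is the finite population correction at work.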

I often see companies conduct surveys that produce far fewer completed surveys than these sample numbers. Such surveys are still valid, but less accurate. For example, if you have 5,000 customers and complete only 100 surveys, the results can still carry 95% confidence, but only within a range of plus or minus 10%. That doesn't sound a lot less accurate, but it doubles the band of uncertainty – and there is still a one in twenty chance that the true answer falls outside even that wider range.
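The ±10% figure for 100 surveys out of 5,000 customers can be checked with the margin-of-error formula. A small sketch, again assuming 95% confidence and p = 0.5:

```python
import math

def margin_of_error(sample, population, z=1.96, p=0.5):
    """Margin of error at a given confidence, with finite population correction."""
    fpc = math.sqrt((population - sample) / (population - 1))
    return z * math.sqrt(p * (1 - p) / sample) * fpc

# 100 completed surveys out of a base of 5,000 customers
print(round(margin_of_error(100, 5000) * 100, 1))  # 9.7 – about plus or minus 10%
```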

Random. For a survey to be valid the people surveyed must be selected at random. If you are surveying your own customers it’s easy not to be random. For example, if you send out a survey in your bills, the results you get back are not random. They are biased by the fact that people who either like or dislike what you asked about are the most likely to respond while people who feel neutral about the topic are likelier not to. The only really reliable way to get a random sample is to call people. And even then you have to choose the numbers randomly.
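Drawing the random sample itself is straightforward once you have a customer list. A sketch using Python's standard library (the customer IDs here are hypothetical placeholders for whatever your billing system exports):

```python
import random

# Hypothetical customer list - in practice this comes from your billing system
customers = [f"cust-{i:04d}" for i in range(5000)]

random.seed(1)  # seeded only so the draw is reproducible for this example
sample = random.sample(customers, 357)  # every customer equally likely to be chosen

print(len(sample), len(set(sample)))  # 357 unique customers, no repeats
```

The key point is that `random.sample` gives every customer an equal chance of selection – unlike a bill insert, where respondents select themselves.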

Calling has several issues. First, you must consider the Do Not Call lists that the federal government has established for people to opt out of getting solicitation calls. You are allowed under these rules to call your own existing customers, but you are not allowed to call potential customers who are on the Do Not Call list.

Second, you need to deal with cell phone numbers since many customers no longer have landlines. To get an adequate sample you need to somehow call people with both types of phones. You are not legally allowed to make solicitation calls to a cell phone number unless the person has given you permission to do so. Hopefully you will have customer records that provide a contact number to call for each customer, and any customer who has given you their number has given you permission to call them even if it is a cell phone number.

Non-biased. The questions you ask must be unbiased, which means that they are not worded in such a way as to elicit a certain response. This is why so many surveys you take seem to be somewhat bland, because the questions are written without flowery adjectives.

Interpreting the Results. Companies often misinterpret the results of a survey. The 95% accuracy that is the goal of the survey only applies to the primary questions you ask. For example, if you gave a valid survey to all of your customers and asked a question such as whether they would be interested in buying cellular service from you, then the response to that question would be 95% accurate. However, companies often try to interpret a survey question at a deeper level. For example, they might look at the results of the same question for men versus women respondents and say that one group is more likely to feel a certain way than the other.

And you can’t make those kinds of interpretations with any accuracy. In this example the survey is an accurate representation of how all customers feel because you sampled enough customers to get a statistically valid result. However, if half of the people you surveyed were women, then your survey of women is only half as big as the overall survey, and correspondingly less reliable. In this example it wouldn’t be a lot more unreliable, but if you try to really subdivide the sample population the results become nearly worthless. For instance, if you try to analyze the results by various age groups of ten years each you will find that you can’t rely on the results at all.
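You can see how fast subgroup reliability degrades with a quick calculation. Using the simple (infinite-population) version of the margin-of-error formula at 95% confidence, and assuming the 357-survey sample from earlier split evenly between men and women and across seven ten-year age bands:

```python
import math

def margin_of_error(sample, z=1.96, p=0.5):
    # Simple margin of error at 95% confidence (no finite population correction)
    return z * math.sqrt(p * (1 - p) / sample)

full = 357  # whole-survey sample from the table above
print(round(margin_of_error(full) * 100, 1))       # 5.2  - the full sample
print(round(margin_of_error(full // 2) * 100, 1))  # 7.3  - women only (half)
print(round(margin_of_error(full // 7) * 100, 1))  # 13.7 - one ten-year age band
```

A ±7% answer for one gender may still be usable; a ±14% answer for one age band usually is not.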

Survey fatigue. Another common problem is survey fatigue where a survey is so long that a lot of people hang up in the middle of it. It’s important to try to keep a survey under five minutes or this will happen a lot.

Summary. If you are going to spend the time and money to do a survey then you should spend the extra effort to do it right. Make sure your sample size is adequate. Make sure the survey is given randomly. Make sure the questions are unbiased. And keep it short.

And as a reminder, CCG has done hundreds of surveys, so one way to make sure it's done right is to engage us. We can help at every level, from writing the questions to setting sample sizes or even making the calls.