What Election Polls Teach Us About Workplace Surveys

Whether through social media, news channels, or political polls, election season always brings out more opinions than we care to hear. Polls in 2016 seemed to indicate that Hillary Clinton would be the next president of the United States. However, Donald Trump won the election, surprising many given what the polls had shown. Check out this article on the polls just two weeks before the 2016 election: ABC 2016 Presidential Poll. Clinton had a double-digit lead. How could the polls be so off?

Polls are a type of survey, and many factors determine whether survey data is valid. I will focus on one factor that has been a particularly hot topic in my 11 years of consulting with organizations on survey strategy: response rates.

In case it is helpful, check out my two other posts on survey development:
4 Survey Mistakes and How to Fix Them
5 Questions for Planning Your Survey Project

Sampling Error

Sampling error is what happens when our sample does not represent the population well. For any given study or effort, you have a group you want to study, called a population. However, it is rare that you are able to collect data from the entire population, so you try to get a portion of that population, called a sample, to provide data. At the end of the article I linked earlier, it notes that the ABC poll contained 847 likely voters. There are two parts of that worth examining.

First, 847 people is roughly 0.0006% of the 138 million who voted. That's not even dividing by all of the approximately 237.5 million possible voters. Although the number of voters and the number of possible voters varied depending on the site I checked, 847 is a very small portion of a total in the hundreds of millions. If you went to your bosses with survey results that came from less than 0.5% of your workforce, would you feel comfortable making organizational decisions based on that data? I hope the answer is no.

Second, what constitutes a likely voter? The definition of a likely voter is based on who has voted in previous elections, so those polls would not represent newly registered voters or people who have not voted frequently in the past. You can see how this leads to sampling error: the group that counted as likely voters in previous elections is not necessarily the group that shows up for the current election. In other words, polls built from previous likely voters will be inaccurate if the people who actually turn out look drastically different. That discrepancy is sampling error.

Fixing Sampling Error

What you need is a more representative sample. That might mean getting more people (a higher response rate), or it might mean deliberately sampling with representativeness in mind instead of inviting the next willing and able person to complete your survey. Here are several suggestions from my years of consulting experience.

Improving Response Rates

Response rate refers to the percentage of your total survey invite list that submits a response. For example, if you invited 100 people and 47 of them completed the survey, you have a response rate of 47%.
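
Tracking that number across surveys is simple enough to script. Here is a minimal Python sketch of the calculation, using the hypothetical numbers from the example above:

```python
def response_rate(invited: int, completed: int) -> float:
    """Return the response rate as a percentage of the invite list."""
    if invited <= 0:
        raise ValueError("invited must be a positive number")
    return 100 * completed / invited

# Example from above: 47 completed surveys out of 100 invitations
print(f"Response rate: {response_rate(invited=100, completed=47):.0f}%")  # 47%
```

Here are a few of my suggestions for increasing the number of people who complete your survey.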

Make your survey worth the time to take. A quality survey that actually leads to results is hugely important for getting a good response rate. Confusing and vague survey items make the survey frustrating to complete and the results difficult to act on.

Once you have results from well-written items, make sure that you act on the results and that people know the connection between the two. This might be as simple as an email that says, “We heard you from this last survey. As a result, we will do the following…” Hopefully your selected way forward will demonstrate effective action planning to make positive changes for your organization and workforce.

Stick to clear, fair end dates. End dates for surveys are important because they set expectations for how long people have to complete the task and they contribute to fairness in the survey process. If people don't feel like they have time to complete it, the process won't seem fair. Be sure to think about things like the time zones your workforce is in, events that take people out of the office, and work cycles that may interrupt their ability to complete it.

Once you have announced an end date, stick to it. If you continually set the expectation that the announced end date will be pushed back, people won't feel any urgency to complete the survey and the date may pass without their response. A firm date lets people know exactly what to expect.

Be choosy about your surveys. Survey fatigue is a very real phenomenon and should be taken seriously. One challenge is that employees often receive invitations for multiple surveys, and since they aren't paid to take surveys, they end up choosing which ones to complete and which ones to ignore. You will need some way of prioritizing the surveys that are most important for your organization's goals. Try to find other ways to collect data when you can, such as gathering feedback for smaller efforts through more localized discussions and meetings.

Send it to the right people. This one may sound like a no-brainer, but it can be tricky because your organization may not have records that are easily filtered for your purposes. Having a well-defined topic will also help you know who has the right experience for your survey. Sending your survey to the wrong people hurts the credibility of your effort, frustrates those who incorrectly receive the invitation, and may give you bad data. None of those outcomes is desirable. Target your population well; if you can't, you should probably take another approach to gathering the information you seek.

Random Sampling

One way to include fewer people on your invite list and still have a representative sample is to invite people based on a random sampling technique. You can do that easily in Excel by listing everyone in your population with a unique identifier (employee ID, Social Security number, or simply numbering each person). Then use the =RAND() function in the cell next to each case (look here if you need some help: Random Numbers in Excel). Copy and paste the randomly generated numbers as values, sort the cases by that column, and take the bottom 25% of the list. Truly random sampling makes it very likely that the sample will be representative. But instead of sending a survey invitation to 100% of your population, you will send it to a much smaller group.
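
If you prefer a script to a spreadsheet, the same idea takes a few lines of Python with pandas. This is a sketch under the assumption that your population list lives in a file called employees.csv; the file and column names are hypothetical, so swap in your own:

```python
import pandas as pd

# Hypothetical file with one row per employee in the population.
population = pd.read_csv("employees.csv")

# Draw a 25% simple random sample, mirroring the Excel RAND-and-sort approach.
# random_state makes the draw reproducible if you need to document how you sampled.
sample = population.sample(frac=0.25, random_state=42)

sample.to_csv("survey_invite_list.csv", index=False)
print(f"Invited {len(sample)} of {len(population)} employees")
```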

You can check the representativeness of the sample by comparing its characteristics with those of the entire population (e.g., percentage from different departments, age groups, sex, etc.). If the sample doesn't match well (unlikely as that is), you can repeat the steps and draw again.
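
Continuing the pandas sketch above, that comparison takes only a few more lines; the department column is again a hypothetical name for whatever characteristic you track:

```python
# Share of each department in the full population vs. the drawn sample.
population_share = population["department"].value_counts(normalize=True)
sample_share = sample["department"].value_counts(normalize=True)

comparison = pd.DataFrame(
    {"population": population_share, "sample": sample_share}
).fillna(0)
print(comparison.round(3))
```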

Final Thoughts

Respondents are vital to a successful survey. Bias in your sample will give you bias in your results. For a process that is inherently about subjective thoughts, it matters whose subjective thoughts you receive in the end. Follow the points above, and you can end up with more accurate results than the 2016 presidential election polls did.

Surveys are still worth it. They just need to be done with thoughtfully crafted approaches in order to avoid common pitfalls like sampling error.

As always, thanks for reading.

Brandon
Subscribe to the blog below!