4 Survey Mistakes and How to Fix Them: A Beginner’s Guide for the HR Professional

Surveys have become a common tool for many organizations to better understand what is going on with employees. But what makes a survey a good one? How do you know if the questions are going to give you the information you want?

I have seen a lot of survey struggles in my years of consulting. I’ve had clients completely misinterpret survey results and then make several bad decisions. Some clients have wanted to cram survey development into last-minute timelines, thinking it would be a quick process. Then there is the uncomfortable situation of a client who wants help analyzing their survey data, only to realize that their survey items cannot answer the questions they hoped to answer. Maybe your work already involves surveys and you could use some guidance. Maybe you’re looking to start using surveys. In either case, this guide should help get you started on the deceptively difficult task of survey writing. I have a separate post with tips on the broader survey project planning here.

This post will help you recognize some key characteristics of a good survey so you can work more effectively with survey vendors and/or those creating surveys in your organization. There are many nuances to writing a good and effective employee survey. This post won’t solve all your survey problems, and there are many ways to approach survey development. That said, avoiding these 4 mistakes will raise your surveys to the next level by producing clearer, more actionable items. I do my best to provide a solution for each mistake. Ultimately you should consult survey professionals, but having a clearer idea of the end goal should save you time and money.

Here is the overall principle to remember with surveys: communicate as clearly and concisely as possible what information the employee needs to provide you in each item.

Sounds simple, right? This is often easier said than done.

Anatomy of a Survey Item

A survey item consists of the item text, a scale, and scale anchors. When all three of these align well, you can get rich information on your workforce. When they don’t, the survey can be confusing and frustrating to complete.

The survey item gives the content and topic that respondents will react to. It should set boundaries for the topic and make clear to respondents what you (as a survey distributor, manager, steward, etc.) would like to know about. The item presented in the example isn’t great, and by the end of this post you’ll see why.

The scale and scale anchors give respondents the desired way to respond. The scale anchors are the verbal descriptions of the responses. The scale is the set of numerical values assigned to those descriptions, which allows quantitative analysis of the data. I strongly recommend that you do not display the scale to respondents. I have it displayed here just so you can see how the two correspond; the numbers are simply how you code the anchors behind the scenes. Respondents only need to see the anchors.
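To make the anchor/scale distinction concrete, here is a minimal sketch of how the coding might happen behind the scenes during analysis. The anchor labels and variable names are my own illustrative assumptions, not a standard:

```python
# Hypothetical example: respondents only ever see the verbal anchors;
# the numeric scale values are assigned by the analyst behind the scenes.
AGREEMENT_SCALE = {
    "Strongly disagree": 1,
    "Disagree": 2,
    "Somewhat disagree": 3,
    "Somewhat agree": 4,
    "Agree": 5,
    "Strongly agree": 6,
}

# What respondents actually submitted (illustrative data).
responses = ["Agree", "Somewhat agree", "Strongly agree", "Disagree"]

# Convert the verbal anchors to scale values for quantitative analysis.
coded = [AGREEMENT_SCALE[r] for r in responses]
mean_score = sum(coded) / len(coded)

print(coded)       # [5, 4, 6, 2]
print(mean_score)  # 4.25
```

The key point the sketch illustrates: the numbers live only in the analysis layer, so the survey itself can show nothing but the anchor text.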

Mistake 1: Too Many Ideas

Many times, survey developers are trying to do more with less. In that effort, they put several ideas into a single item, producing what survey experts call double-barreled items (or sometimes triple-barreled, quadruple-barreled, and so on). The trouble comes when it’s time to interpret the results. For example, take the following item:

I am satisfied with the variety of work and autonomy I have in my current job.

If your employees strongly disagree with this item, what should be the focus of new improvement efforts? Should you focus on work variety or on ways to give employees more decision-making power? Maybe a combination of both? It is impossible to pinpoint the issue with all of those ideas packed into one item. Without additional information, you end up guessing at priorities that a better item would have identified clearly. That guessing often results in fixing things that aren’t broken, creating new problems, and a confusing cycle of improvements. The thing is, you don’t always need to guess.

The fix. Thankfully, the course correction here isn’t as complicated as for some of the other mistakes. Keep a single idea per item. One way to do this is to avoid the word “and”. If there are two separate ideas, write two separate items; if there are three, write three, and so forth. Sticking to a single idea per item makes the results much easier to interpret and allows you to target improvements more accurately. Just as important, it makes it easier for the respondent to understand how to respond. Here is the item from above rewritten as two separate items:

I am satisfied with the variety of work in my current job.

I am satisfied with the amount of autonomy I have in my current job.

There are other things to correct in these “fixed” items, but at least each one now contains a single idea.

Mistake 2: Unclear Terms

People sometimes assume that terms and vocabulary are understood by all. But words can often be interpreted in many ways, making it difficult to establish concrete boundaries that every respondent shares. For example, take the following item:

Overall, my work team is cohesive.

“Cohesive” is a broad term that can encompass different things for different people. Is it that everyone gets along with each other? Is it that everyone collaborates well together? The answers to those questions will determine how you apply the results in your organization. Without a clear common understanding among your survey respondents, several ideas are packed into a single term, leaving you in the same position as with the first mistake: the item cannot be clearly interpreted and acted upon.

The fix. The fix here is less obvious than the one for mistake 1. During survey development, start by explaining what is meant by each term. As you do, you will likely come up with a few different ideas that the term encompasses. Make each of those ideas a separate item. Now each item will contain a single, more specific idea.

Using the example of team cohesion, start by specifying what team cohesion means. Then write items that each cover a single piece of cohesion. For some organizations it might mean that team members are friends and spend time together after work hours. Others may not care whether team members like each other and instead see cohesion as a team’s ability to solve problems well together. It might mean all of that for you. Here are some example items for cohesion defined as collaborating well as a team:

I am comfortable sharing new ideas with my team.

My work team provides constructive criticism to my ideas.

The quality of my work improves when I include my team members.

I feel comfortable giving constructive criticism to my team members about their work performance.

My work team knows when it is most effective to collaborate and when it is most effective to work individually.

This is just an initial set of items for what it means to work collaboratively; they would need continued revision to improve. Within each of the ideas of team cohesion, I can easily think of 3-5 items that could be written to carefully specify what team cohesion is. Notice how much more information these items gather than “My work team is cohesive” alone. You can even include that broad cohesion item alongside these, and it becomes much clearer. The rabbit hole can get really deep at this point, so I’ll move on.

Mistake 3: Mismatch Between Item Stem and Scale Anchor

Remember that the item stem is the statement that people respond to. The scale anchor is the response given to that statement, such as “strongly agree”. The scale assigns a numerical value to each scale anchor to indicate degree of agreement, feeling, how often something is done, etc., depending on the question you are asking. There are times, however, when I have seen the item stem and scale anchors mismatched. The item in that case is ultimately useless. Here’s an example of this mistake (with the scale and anchors shown for this one):

I have frequent performance discussions with my supervisor:

1. Strongly disagree

2. Disagree

3. Somewhat disagree

4. Somewhat agree

5. Agree

6. Strongly agree

The biggest issue above is that we are trying to force a question that is inherently about the frequency of a behavior into a disagree/agree scale. If we want to ask about frequency, why not just ask about frequency? Asking a frequency question with an attitude scale means that, at best, you miss out on most of the information you are hoping to get: if it’s not done frequently, then how often is it done?

The fix. If a respondent disagrees that they have frequent performance discussions, it doesn’t tell you how often they do occur; it only says the respondent doesn’t consider them frequent, which is another issue with the clarity of the item. The example below demonstrates how much more information you can gather when the item text, scale, and scale anchors align closely. Here is an example of an improved item:

I have performance discussions with my supervisor:

0. 0 times a year

1. A few times a year, but less than monthly

2. Once a month

3. 2-3 times a month

4. Once a week

5. Several times a week, but less than daily

6. Daily

The new item asks for frequency information and is measured on a frequency scale, yielding more information than you could get with mismatched item text and scale. Now you can decide whether that is frequent enough for your organization. There are still questions about what constitutes a performance discussion and what the expectation should be. But the idea is to match each kind of question with the right kind of scale.
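One way a frequency scale like the one above pays off in analysis is that each anchor can be converted to an approximate count per year. Here is a hedged sketch; the per-year midpoint estimates are illustrative assumptions I chose, not standards:

```python
# Hypothetical conversion of frequency anchors to approximate
# discussions-per-year. The numeric estimates are rough midpoints
# chosen for illustration only.
FREQ_PER_YEAR = {
    "0 times a year": 0,
    "A few times a year, but less than monthly": 4,
    "Once a month": 12,
    "2-3 times a month": 30,
    "Once a week": 52,
    "Several times a week, but less than daily": 150,
    "Daily": 250,
}

# Illustrative responses from three employees.
answers = ["Once a month", "Once a week", "2-3 times a month"]

estimates = [FREQ_PER_YEAR[a] for a in answers]
average = sum(estimates) / len(estimates)
print(round(average, 1))  # roughly 31.3 discussions per year on average
```

A disagree/agree scale offers no comparable path to a number like this, which is exactly why the mismatch in the original item throws information away.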

Mistake 4: Incomplete Scales

Having a full scale of measurement means that every respondent should have a chance to give an answer that fits them. It also means that scale points should be roughly evenly spaced. An incomplete scale is problematic: it can lead to respondent frustration and lower response rates. For example, take a look at the following item and scale:

Overall, I am _______ with my health benefits.

1. dissatisfied

2. somewhat satisfied

3. satisfied

4. very satisfied

Believe it or not, the example item is something I often see. By not presenting the full range of negative responses, the results are inherently going to skew positive. While I don’t think this usually comes from a place of conspiracy and collusion (hopefully), it does not allow the true range of information to be gathered. Managers and leaders may be led to believe that things are better than they actually are simply because there are three times as many positive response options.

The fix. Understand what the full range of responses looks like and provide it. For this example, provide a full scale that includes “somewhat dissatisfied” through “very dissatisfied” in addition to the satisfied response options. That way respondents can express dissatisfaction and satisfaction in equal degree, and the results carry greater weight because everyone had an equal chance to express either. To me, this makes your results stronger and more clearly informs decision makers. Here’s what it would look like:

Overall, I am _______ with my health benefits.

1. very dissatisfied

2. dissatisfied

3. somewhat dissatisfied

4. somewhat satisfied

5. satisfied

6. very satisfied
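A balanced scale like the one above can be sanity-checked mechanically. This is a minimal sketch with a hypothetical helper function I wrote for illustration; it simply counts negative versus positive anchors:

```python
# Hypothetical helper: a satisfaction scale is "balanced" when it has
# the same number of negative and positive anchors.
def is_balanced(anchors):
    negative = [a for a in anchors if "dissatisfied" in a.lower()]
    positive = [a for a in anchors if "dissatisfied" not in a.lower()]
    return len(negative) == len(positive)

# The incomplete scale from the mistake vs. the fixed, full scale.
incomplete = ["dissatisfied", "somewhat satisfied",
              "satisfied", "very satisfied"]
complete = ["very dissatisfied", "dissatisfied", "somewhat dissatisfied",
            "somewhat satisfied", "satisfied", "very satisfied"]

print(is_balanced(incomplete))  # False (1 negative vs 3 positive)
print(is_balanced(complete))    # True  (3 negative vs 3 positive)
```

The check is crude, but it captures the core of the mistake: the incomplete scale gives dissatisfaction one option while satisfaction gets three.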

In matching the scale, know that sometimes the negative end of the scale doesn’t have multiple options. For example, one could argue that when rating the importance of something, there aren’t really several degrees of unimportance; it’s just the absence of importance, or less than “very little importance”. This is some of the nuance involved in survey writing and is the reason I recommend working with trained survey development professionals. It can be tricky to understand the subtleties of some items. But, again, you will be in a much better position to meet with a survey professional if you follow these tips.

Final Thoughts

This is not a comprehensive list of survey tips or a full training. There are still the topics of aligning survey content to strategy, acting on survey results, and overall survey program strategy, among other things. The tips provided here can help you evaluate surveys you are considering from a vendor or create one on your own. If you’re creating your own survey, take a stab at it and then turn to survey experts to take it the rest of the way. Clarity of information is paramount. Having one idea per item, clear terms, and well-aligned scales will give you a very strong foundation for your survey.

Thanks for reading!

-Brandon