Recently, I received a survey in the mail from a national organization asking me to respond to questions about my experience with trees. (Yes, you read that correctly—trees.)

Specifically, the survey included questions like:

Have you ever climbed a tree?
Did you ever relax in the shade of a tree?
When you were a child, did you ever play under or amongst trees?

Frankly, these survey questions puzzled me. How or why would it matter to this national organization whether I had ever climbed a tree or played amongst trees? (Moreover, how likely is it that someone has never once sat under a tree for shade?) How could this kind of data, aggregated over hundreds of respondents, meaningfully inform business decisions? Ultimately, it was not lost on me that the survey’s main purpose was likely not to inform business strategy, but rather to get me to reflect on the value of trees so that I would then donate money to the cause.

Surveys are used widely, by all kinds of people and organizations, and for all kinds of purposes. Sometimes they are well designed and used appropriately, but often they’re not. Much has been written about good survey design, and while most of it centers on how to construct effective questions, how to structure the survey, and so on, this article focuses specifically on how the design of the set of response options can impact findings.

Response Options: Open vs. Closed Formats

When creating a survey, one of the first decisions that must be made about the design of response options is whether to use an open or closed format. An open format allows the respondent to write a free-form response, while a closed format typically consists of a number of options from which the respondent can select.

Even this fundamental decision can impact the outcome of the survey, as a study on parental values illustrates. In this study, respondents were asked what they considered most important in preparing children for life. When a list of options was offered, 61.5 percent chose “to think for themselves,” while only 4.6 percent volunteered a similar answer when responding in an open format.

In reality, choosing from a list of options is never equivalent to constructing a free-form response because it narrows the universe of possible responses, potentially suggesting options which may not have naturally occurred to the respondent. In this way, the design of the survey itself introduces a distortion of reality that can influence or confound survey results.
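
To make the distinction concrete, here’s a minimal sketch of how the two formats might be modeled in Python (the Question type and all names are hypothetical, purely for illustration): a closed question fixes its universe of possible answers up front, while an open one leaves it unbounded.

    from dataclasses import dataclass
    from typing import Optional

    @dataclass
    class Question:
        """A survey question; options is None for an open format."""
        text: str
        options: Optional[list[str]] = None

    PROMPT = "What do you consider most important in preparing children for life?"

    open_q = Question(PROMPT)  # respondent may write anything at all
    closed_q = Question(
        PROMPT,
        # "to think for themselves" comes from the study above; the
        # remaining options are illustrative stand-ins.
        options=["to think for themselves", "to obey",
                 "to work hard", "to help others"],
    )

    # The closed version narrows the universe of answers before the
    # respondent ever begins to reflect on the question.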

The Rating Scale: Positive & Negative Numbers

Closed response formats can take a variety of forms; rating scales are among the most common. One way to design a scale is to use a range of sequential numbers. For example, you might ask the question, “How successful would you say you’ve been in life?” Your rating scale might then range from zero (not at all successful) to ten (extremely successful).

Alternatively, you could use a scale ranging from -5 (not at all successful) to +5 (extremely successful). Both versions offer the same eleven increments to select from, so to what extent might the specific numeric values used in the scale affect how people respond?

A research study designed to glean insight into this found that the numeric values do indeed have an influence. In the study, the share of respondents selecting from the left side of the scale differed markedly depending on which version was used: 34 percent selected a value on the left side of the zero-to-ten scale, while only 13 percent selected a formally equivalent value on the left side of the scale that employed both negative and positive values!

Researchers postulated that respondents referenced the values in the rating scale to help make sense of the options available. Should “not at all successful” be understood as the absence of noteworthy achievements or, alternatively, as the presence of relevant failures? Respondents used the numbers in the scale to inform their interpretation: when paired with the value of zero, “not at all successful” suggested they should consider the absence of notable achievements, while a pairing with the value of -5 (along with the midpoint of the scale at zero) suggested they should take into consideration any relevant failures.

Even when the number of options to choose from was the same across the different rating scales, the actual numeric values shaped how respondents understood the survey question, and consequently, how they responded.
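
Incidentally, the two scales really are numerically interchangeable, which is what makes the finding notable. Here’s a minimal Python sketch (the normalize helper and all names are ours, purely for illustration, not from the study) showing that both versions offer eleven points sitting at identical relative positions, so any difference in responses must come from interpretation rather than arithmetic:

    # Illustrative sketch: the two scale variants from the example,
    # normalized onto a common 0..1 range. All names are hypothetical.

    def normalize(response: float, low: float, high: float) -> float:
        """Map a raw response onto 0..1 given the scale's endpoints."""
        return (response - low) / (high - low)

    unipolar = [normalize(r, 0, 10) for r in range(0, 11)]   # 0 to 10
    bipolar = [normalize(r, -5, 5) for r in range(-5, 6)]    # -5 to +5

    # The relative positions are identical, so any difference in how
    # respondents use the two versions is psychological, not numeric.
    assert unipolar == bipolar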

The Ordering of Questions

Another aspect of survey design that can impact research findings is the ordering of your questions. For example, let’s say you’re interested in finding out about people’s level of satisfaction with their marital or dating life. In addition, you’re interested in finding out about their general life satisfaction. Would the ordering of the questions matter?

It turns out that if you ask people to rate their marital or dating happiness, and then ask them to rate their general life satisfaction, the answer to the second question can be “colored” by their answer to the first question. This is because there are many aspects of life that can be relevant to general life satisfaction, and what’s likely to come to mind may differ across individuals. But if this question follows an inquiry about marital/dating satisfaction, then certainly, marital/dating satisfaction becomes top of mind, especially since it conceivably contributes to general life satisfaction.

This “order effect” is particularly pronounced with part-to-whole questions where one question asks about a part of an overall attitude (e.g., about dating/marital happiness), and a subsequent question deals with a related, yet broader topic (e.g., life satisfaction). The influence of the order effect can be easy to miss when creating a survey, unless you take the time to walk through the survey as a respondent would, carefully considering how any given question might ‘color’ your mindset as you think about how to respond to subsequent questions.
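
The article’s examples also point to a practical safeguard: counterbalancing, a standard questionnaire-design technique (not specific to the studies discussed here) in which each respondent is randomly assigned one of the possible question orders, so that any order effect is spread evenly across the sample instead of biasing one question for everyone. A minimal sketch in Python, assuming a simple two-question survey (the wordings paraphrase the example above; all names are hypothetical):

    import random

    SPECIFIC = "How happy are you with your marital or dating life?"
    GENERAL = "How satisfied are you with your life in general?"

    def question_order(rng: random.Random) -> list[str]:
        """Randomly counterbalance a part-to-whole question pair."""
        order = [SPECIFIC, GENERAL]
        if rng.random() < 0.5:
            order.reverse()
        return order

    # Roughly half of respondents see each ordering, so the order
    # effect averages out across the sample rather than consistently
    # coloring the general-satisfaction question.
    rng = random.Random(42)
    for respondent_id in range(4):
        print(respondent_id, question_order(rng))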

Asking About Overall Evaluation

Surveys often include a question designed to gauge people’s overall assessment of a topic or experience. Often, this is one of the most important questions in the survey, and can be phrased in a variety of ways.

For example, each of the following is a slightly different way of asking essentially the same question about a product or experience:

  1. Overall, how would you rate the performance of the product? (Performance)
  2. If you had it to do over again, would you make the same choice? (Regret)
  3. How well did the product meet your expectations? (Expectations)
  4. How close did the product come to your ideal? (Ideal)

It’s worth considering to what extent these variations might yield different findings. One research study compared the results of different versions of this type of question and found that people answering the Performance and Regret versions (Nos. 1 and 2) tended to give higher ratings than those answering the Expectations and Ideal versions (Nos. 3 and 4). This is because the Expectations and Ideal versions set not only an explicit standard, but a higher and more demanding one, against which the product or experience was being compared.

The way in which the question is asked, then, can directly influence the ratings obtained, even though each version of the question measures the same underlying construct. This is yet another example of how survey design can drive survey findings.
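
If you do field more than one wording of an overall evaluation question, the practical implication is to keep the variants separate during analysis rather than pooling them, since each wording invokes a different comparison standard. A brief sketch of that idea (the ratings below are invented placeholders, not data from the study):

    from statistics import mean

    # Hypothetical responses keyed by question wording; the numbers
    # are invented for illustration only.
    responses_by_version = {
        "performance": [8, 9, 7, 8, 9],
        "regret": [8, 7, 9, 8, 8],
        "expectations": [6, 7, 5, 6, 7],
        "ideal": [5, 6, 6, 5, 7],
    }

    # Summarizing per wording makes any wording effect visible;
    # pooling the variants would quietly blend different standards.
    for version, ratings in responses_by_version.items():
        print(f"{version:>12}: n={len(ratings)}, mean={mean(ratings):.2f}")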

Survey Design Best Practices

In light of everything discussed in this article, it might be easy to become skeptical about whether it’s possible to design a survey that yields valid results at all. But at the very least, you should now have a heightened awareness of what can confound survey findings, and that’s an important place to start. Once we understand how the design of a survey can influence respondents’ behavior, we can account for that influence in our own designs.

From this article, we’ve seen that there are several things to consider:

  1. Recognize how closed response formats can narrow the universe of what might normally come to mind for respondents as they consider possible ways to answer a survey question, and how this can affect research findings.
  2. Take into account how different variations of a numeric rating scale can influence how respondents interpret the survey question. Negative numbers likely suggest a different meaning than positive numbers.
  3. Think through how to order your questions (especially part-to-whole questions) so that you anticipate how the ordering itself is likely to shape how respondents interpret subsequent questions.
  4. When including an overall evaluation question in your survey, think carefully about how the wording is likely to affect how respondents think about constructing an answer, especially when it invokes a thought process that compares one thing to another, or against a specific standard.

Conclusion

Even subtle details of a survey’s design can influence its findings, and it’s important that surveys measure what they’re intended to measure. Much has been written about “best practices,” and it’s easy to assume that as long as general guidelines are followed, a given survey will yield valid findings. But we’ve seen that people often use the survey itself as a source of information, drawing on its format, wording, and context to interpret the questions. Even minor changes in any of these can produce significant differences in findings. Once designers understand how these design nuances can affect research outcomes, they can design with deeper insight and understanding, and feel more confident in the findings.

Comments
  • Brian Bimschleger

    Really great article. Survey design is an under-appreciated skill that makes a huge difference. Thanks for such a solid write-up.

  • Kelly Alleen-Willems

    Very nice summary of influencing factors — I agree that being AWARE of the influence is key, even if the survey does, inevitably, influence responses in some way.

    Comparison and how the results will be used and analyzed are key here! For example, if your goal is to validate findings by providing response options to a question, rather than an open-ended response, then by all means DO IT! On the other hand, if you are concerned that a rating question might skew toward the left or right… just remember that no data point matters on its own but when compared to other data points in analysis, THAT is where trends and findings emerge. So a skew one way or another is great to be aware of but, ultimately, a solid analysis of the responses is where you gain the most!

  • Patricia Adams

    The article is very informative and is exactly what I need to create a survey for my research. Could you suggest a good survey tool that has some great question types? I have searched quite a few online but I am caught up between SoGoSurvey and SurveyGizmo. Any suggestions?

    • Thanks Patricia! There are tons of great tools out there, but we’re really partial to Typeform – gorgeous, customizable designs, and some pretty robust reporting to boot.