Understand, Think, Summarize, Select
This section covers what’s going on in the brain of every person who starts and takes your survey. Once you know where your survey can frustrate, confuse, or slow someone down, or where it can be misinterpreted, you can take action to avoid some of the more common pitfalls and design more effective surveys. So, how exactly do people answer a survey question?
Thanks to several talented survey researchers (listed in the search box below), you can see the major cognitive phases someone goes through when looking at a single survey question.
This process happens for every survey question. One of the reasons longer surveys lead to lower response rates is that respondents get mentally fatigued working through each cognitive phase again and again. But there’s a big issue with the diagram above: not every respondent meaningfully goes through all of the steps.
The Response-Behavior Continuum
In an ideal world, every respondent goes through each step listed above for every survey question. Survey researchers call this optimizing. This is when a respondent thinks critically through the steps above, carefully reads each question, and selects the best response option. Optimizing means respondents don’t skip mental steps but give meaningful attention to every question and response.
That’s not a realistic expectation, especially when the survey is very long, very complex, or irrelevant to a respondent. If someone speeds through the steps or skips them completely, survey researchers call it satisficing.
There are two levels to satisficing: weak satisficing is when someone rushes or partially completes the four steps, while strong satisficing is when someone skips steps completely. The diagram below shows you the spectrum of possible survey behavior.
There are a few signs you can look for to see whether strong or weak satisficing has happened in your survey data (the sketch after this list shows one way to flag them programmatically):
Satisficing Survey Behavior to Look For
- A survey was completed very quickly relative to its length
- Open-ended responses are incomplete sentences or just a few words
- Open-ended responses are unrelated to the question
- Several closed-ended questions in a row all have the same response, such as the first option selected every time (known as non-differentiation)
- A respondent selects “No opinion” or “I don’t know” repeatedly
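If you can export raw responses from your survey tool, you can screen for several of these signs programmatically. Below is a minimal sketch using Python and pandas; the column names, thresholds, and sample data are illustrative assumptions rather than any standard export format, so adapt them to your own survey.

```python
import pandas as pd

# Hypothetical survey export: one row per respondent. Column names
# (duration_seconds, rating_q1..q3, open_feedback) are assumptions for
# illustration, not a real export schema.
responses = pd.DataFrame({
    "duration_seconds": [45, 320, 290, 38],
    "rating_q1": [1, 4, 3, 5],
    "rating_q2": [1, 5, 3, 5],
    "rating_q3": [1, 4, 2, 5],
    "open_feedback": ["ok", "The checkout flow was confusing on mobile.", "", "good"],
})
rating_cols = ["rating_q1", "rating_q2", "rating_q3"]

flags = pd.DataFrame(index=responses.index)

# Sign: finished much faster than typical (here, under half the median time).
flags["too_fast"] = responses["duration_seconds"] < responses["duration_seconds"].median() / 2

# Sign: open-ended answers that are empty or only a few words.
flags["thin_open_end"] = responses["open_feedback"].fillna("").str.split().str.len() < 3

# Sign: non-differentiation -- the same answer across a string of rating questions.
flags["non_differentiation"] = responses[rating_cols].nunique(axis=1) == 1

# Respondents tripping multiple flags are candidates for closer review,
# not automatic exclusion.
responses["satisficing_flags"] = flags.sum(axis=1)
print(responses[["duration_seconds", "satisficing_flags"]])
```

A flag count like this is only a screening aid; review flagged responses in context before deciding to exclude anyone from your analysis.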
Let’s end this topic by going over some quick tips for better survey design.
Design Surveys for Optimizing
When you design a survey, your goal is to limit satisficing and encourage optimizing. In general, the shorter the survey, the more likely someone optimizes their response behavior and the less likely someone satisfices or abandons your survey altogether.
Less cognitive fatigue also means faster completion times and likely a higher response rate. Keep your surveys short, consistent, and relevant to encourage optimizing as much as possible.
Design shorter, intentional, and tested surveys for the most value.
Below are some additional tips to help you design better surveys.
Survey Design Tips to Promote Optimizing Behavior
- Budget at least a week for question drafting and cognitive testing
- Limit your survey to 8-12 questions total
- Limit your use of open-ended questions
- Put the most important questions up front (assume that some respondents will abandon the survey partway through)
- Limit your use of jargon or complex language (or define terms within the question)
- Test and refine your provided closed-ended responses
- Limit your use of multiple or repeated closed-ended questions (such as several rating questions back-to-back)
- Display or trigger the survey inside the product, service, or experience whenever possible
- Design questions to work together (that is, responses from one question help you interpret responses from a later question) so you can ask fewer questions
Let’s end this handbook by reviewing the common survey questions and when and why to use each.
- Survey satisficing behavior
- Survey optimizing behavior
- Survey item nonresponse
- Total Survey Error (TSE) framework