There are many reasons you might run a UX survey:
- To gain insight into what potential users want from a new or existing product;
- To identify the strengths and failings of an existing website or app;
- To explore the value of a new feature or function;
- To measure changes in attitudes towards, or the performance of, a product over time.
As a user research method, surveys are direct and straightforward. Used in conjunction with other methods, they’re a valuable way of narrowing down trends in customer behaviour and opinion that, when used to deliver an improved product, lead to a far better customer experience.
However, the advantages of UX surveys are often the same things that make them so easy to get wrong, opening up pitfalls when they’re constructed carelessly.
So, how does that happen?
Advantages of a UX research survey
- Simple to compose
- Easy to deliver
- Deliver quantifiable results and opinions on UX elements
- Provide easy-to-understand statistics for investors and stakeholders
- A straightforward method of collecting quantitative data
- Proficient method of collecting qualitative data
- Monitor the progress of developing product versions
- Help designers make data-driven decisions
So how can those advantages lead to problems? Well, something simple to compose and quick to deliver can encourage survey designers to rush the process without thinking it through properly, ending up with low-quality or irrelevant data.
For the high-quality data required from a UX design survey, researchers must carefully consider their ideal questions and how to construct them into a simple, flowing questionnaire that delivers relevant and valuable responses.
Common UX research survey design mistakes
- Building towards human biases – We’re subconsciously driven to push for results that validate our intentions instead of uncovering the flaws we don’t yet see but need to.
- Asking the wrong questions – Knowing what to ask and how is essential for obtaining valuable, realistic responses.
- Complicating questions by asking too much – Simplicity is critical to enabling easily quantifiable results.
- Poor structure – Long or confusing surveys put participants off, leading to high drop-off rates.
Different types of survey data
In UX research, we’re concerned with how users behave (behavioural research methods—what they do) and their attitudes (attitudinal research methods—what they say and feel).
The data we uncover is quantitative—results we can measure in numbers or rates—or qualitative—as long-form written answers explaining the cause or drivers behind actions or feelings.
Different research methods play into each of these areas; surveys are primarily attitudinal, delivering either quantitative or qualitative data. However, to keep a survey flowing and participants engaged, they tend towards questions that provide a lot of data quickly and easily, favouring quantitative measures.
By asking closed-ended questions, you can easily measure results with a single or a few clicks:
- Multiple-choice radio buttons
- Multiple-choice checkboxes
- Rating scales
- Ranking order
By contrast, open-ended questions require participants to type an observation or explanation; they’re harder to quantify yet deliver far more detailed answers.
A clever way of mixing these options is to ask closed-ended questions until a response merits an open-ended explanation. That way, you can combine open- and closed-ended methods and get the best of both worlds.
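The mix described above is often implemented as skip logic: a closed-ended rating triggers an open-ended follow-up only when the response merits one. A minimal sketch, assuming a 1–5 rating scale; the question wording and the threshold of 2 are illustrative assumptions, not taken from any particular survey tool:

```python
def needs_follow_up(rating: int, threshold: int = 2) -> bool:
    """Ask for a written explanation when a 1-5 rating falls at or below the threshold."""
    return rating <= threshold

def next_question(rating: int) -> str:
    """Pick the next question based on the previous closed-ended answer."""
    if needs_follow_up(rating):
        # Open-ended follow-up: harder to quantify, but explains the low score.
        return "What made this difficult? (free text)"
    # Otherwise move on to the next closed-ended question.
    return "How satisfied were you with the checkout flow? (1-5)"

print(next_question(1))  # low rating triggers the open-ended follow-up
print(next_question(4))  # a fine rating moves straight to the next closed question
```

Most survey platforms offer this branching under names like "skip logic" or "conditional questions", so you rarely need to code it yourself; the point is that the open-ended question only appears when it will earn its keep.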
A word on the system usability scale (SUS)
Anyone who’s ever completed a survey questionnaire is more than likely to have clicked on a rating scale from strongly disagree to strongly agree.
Whether utilising a 5, 7, or 10-point scale, the results are easy to quantify and interpret.
One easy-to-administer version of this type of survey, the system usability scale (SUS), was introduced in 1986 as a quick and simple way to measure perceived usability.
The SUS asks ten simple template questions applicable to almost all products. Users rate them on a scale of 1 (strongly disagree) to 5 (strongly agree), and three simple calculations then deliver your product’s overall usability score.
- I think that I would like to use this system frequently.
- I found the system unnecessarily complex.
- I thought the system was easy to use.
- I think that I would need the support of a technical person to be able to use this system.
- I found the various functions in this system were well integrated.
- I thought there was too much inconsistency in this system.
- I would imagine that most people would learn to use this system very quickly.
- I found the system very cumbersome to use.
- I felt very confident using the system.
- I needed to learn a lot of things before I could get going with this system.
To calculate the usability score:
- For each of the odd-numbered questions, subtract 1 from the score.
- For each of the even-numbered questions, subtract their value from 5.
- Add up the adjusted values, then multiply the total by 2.5.
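The three steps above can be sketched as a short function (a minimal illustration; the function name is our own):

```python
def sus_score(responses):
    """Compute the SUS score from ten responses on a 1-5 scale.

    responses[0] is question 1; odd-numbered questions are positively
    worded, even-numbered questions negatively worded.
    """
    if len(responses) != 10 or any(not 1 <= r <= 5 for r in responses):
        raise ValueError("SUS needs ten responses, each from 1 to 5")
    total = 0
    for number, score in enumerate(responses, start=1):
        if number % 2 == 1:          # odd-numbered question:
            total += score - 1       # subtract 1 from the score
        else:                        # even-numbered question:
            total += 5 - score       # subtract the score from 5
    return total * 2.5               # scale the sum of 0-40 up to 0-100

print(sus_score([5, 1, 5, 1, 5, 1, 5, 1, 5, 1]))  # best possible: 100.0
print(sus_score([3, 3, 3, 3, 3, 3, 3, 3, 3, 3]))  # all neutral: 50.0
```

Note that the adjustments flip the negatively worded (even-numbered) questions so that a higher adjusted value always means better usability.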
Scores can be ranked as letter grades or percentiles, with the average being 68. Products scoring below 68 need thought and work; the further above the average a product scores, the less cause for concern.
This single score provides a quick indication of whether the product works and offers a satisfactory experience. It also provides a handy yardstick for A/B version testing.
Different types of UX survey
UX surveys can be utilised for all kinds of analysis, each requiring its own approach and delivery.
- Evaluative research surveys
- Generative research surveys
- Continuous research surveys
- Customer satisfaction surveys
- Net promoter score surveys
- User journey experience surveys
- System usability surveys
- Customer effort surveys
- Exit-intent surveys
Surveys can be just as useful delivered at the beginning of a project to understand new-user behaviour, at the end of a research session to measure its results, or continually after product delivery to monitor progress or drop-off.
Building your best user research survey
1. Always start with your goals and necessary outcomes
Don’t kick things off by writing the questions you think you need answers to; start with what you want to achieve. When you understand your primary goals, you can identify your customers’ pain points and problems, leading to the factors you’ll need to consider.
If you have a range of essential goals, don’t try and squeeze them all into one survey—create as many as you need to keep them concise and on topic.
2. Ask the right questions in the right way
Once you understand what you need to cover, you must write questions that deliver data you can use.
- Keep questions clear and to the point.
- Ask questions in the first person, encouraging participants to speak about their experience instead of giving their opinion.
- Use easy-to-understand language and straightforward syntax.
- Don’t use words that could cause confusion.
- Be specific.
- Avoid single and double negatives.
- Ask questions that require a single answer. Asking for too much information in one question can overwhelm participants and muddy their responses.
3. Check for biases
Human biases are our subconscious hopes manifesting in our actions. We make hundreds of automatic decisions each day based on previous experiences and what makes us feel safe. Sadly, these biases aren’t always healthy or conducive to good decision-making or behaviour.
For example, confirmation bias encourages us to ask questions that back up our ideas and hypotheses. A question like “Which parts of the process were the most enjoyable?” automatically suggests that our product is enjoyable. Your user might not have liked a single thing about it but has to find something plausible just to be able to answer the question. A better version would be, “How did you find using the product?” or “Do you think you’re likely to use the new features?”
We’re naturally drawn to a range of similar biases that sway participants’ views towards the results we favour. But with inaccurate or forced data, we’re far less likely to uncover the true pain points, and without those we can’t fix and strengthen our products.
4. Keep everything short and sweet and well structured
Nobody wants to be glued to a survey (UX research or otherwise) for longer than necessary. So to get the most from your questionnaires, they need to be simple, quick to complete and structured to flow in a way that keeps participants motivated, preventing them from jumping backwards and forwards between topics.
It’s essential to have a simple user interface that allows participants to complete surveys without unnecessary scrolling and swiping.
5. Provide incentives wherever possible
If you can’t avoid lengthy surveys, make them worthwhile. For quality data, give users something of equivalent value to their efforts, whether that’s direct payment, a voucher, a discount, or free access to paid products.
6. Test your surveys until they’re foolproof
Test your questionnaires amongst your teams, opening them up for scrutiny. Then, when everyone’s happy, beta-test them with a small percentage of your user group to see if you’ve missed any other flaws before rolling them out to the majority.
UX surveys can play a crucial part in your process, but only if they’re designed and delivered correctly. Designing questions and categories around clearly defined goals is essential. Casting too wide a net can provide too much confusing data, failing to uncover any patterns; casting too narrow a net means wasted opportunities and insufficient data.
Before you start creating any questionnaire, ask yourself,
- What do you want to know, and why?
- What will the answers to your questions change?
- What will it take to make that change?
Armed with clear goals, rules that determine action, and an awareness of how surveys can go wrong, you’ll stand a better chance of building questionnaires that reveal the results you need to make healthy changes.