Design Validation

Design research using either qualitative or quantitative methods helps uncover issues that users face when interacting with the visual elements of a proposition.

Following the prototyping stage and ahead of development, your design will be as close as it can get to live without coding. Carrying out user experience evaluations at this stage puts your design through its paces and identifies any final changes before the cost of alteration increases significantly after build.

Scope of Research

Carrying out user research at the design stage offers a far wider scope than at any other point in the design process. By now the following should be in place:

  • Navigation structure and taxonomy
  • Branding and graphic design treatment plus possible alternatives
  • Copy, calls to action (CTAs) and icons
  • Processes, forms and transactions
  • Full user journeys

The design won’t be integrated with the back end at this point, so transactions and other similar functions may not work precisely as they would in the live environment. However, the experience from a user’s perspective will be as close to the live experience as it is possible to get. This offers the best opportunity for feedback.

It will almost certainly be the first time all the elements have been in place at the same time. During prototype testing we are often concerned with an individual element, journey or function; now we can focus on the holistic experience, which delivers different and richer insight across all the areas listed above.

Choosing the Right Method

There are really two choices when considering design research methods:

  1. Qualitative User Research – carried out one-to-one and also referred to as usability evaluation, web user research, user testing and usability testing.
  2. Quantitative User Research – carried out with an online panel of many hundreds of respondents and known as online task testing.

Choosing which is the right method for your project depends on the requirements you have. The table below sets out which requirements steer you toward one or the other:

| Requirement | Qualitative Research | Quantitative Research |
|---|---|---|
| A large sample of respondents providing scores, rankings, timings or preferences | No | Yes |
| Multi-country studies | Feedback limited to a few respondents | Large sample size provides reliable feedback |
| Choosing between graphic design alternatives | Yes | No |
| Understanding the “why” behind users’ interactions and issues | Yes | Yes |
| Capturing video of the user’s screen | Yes | No |
| Using the Think Aloud Protocol | Yes | No |
| Capturing video of the user’s facial expressions with audio | Yes | No |
| The study may need to change during research | Yes | No |

Qualitative & Quantitative Research Comparison
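As an illustration only, the yes/no rows of the comparison above can be encoded as a simple lookup; the requirement keys and the `recommend` helper below are our own shorthand for this sketch, not part of any research tool.

```python
# Sketch: encode the qualitative-vs-quantitative comparison as a lookup,
# then recommend the methods that satisfy a set of stated requirements.
# The requirement keys are illustrative shorthand, not a standard taxonomy.

SUPPORTS = {
    "large_sample_scores":  {"qualitative": False, "quantitative": True},
    "graphic_alternatives": {"qualitative": True,  "quantitative": False},
    "screen_video":         {"qualitative": True,  "quantitative": False},
    "think_aloud":          {"qualitative": True,  "quantitative": False},
    "facial_video":         {"qualitative": True,  "quantitative": False},
    "change_mid_study":     {"qualitative": True,  "quantitative": False},
}

def recommend(requirements):
    """Return the methods that satisfy every listed requirement."""
    methods = {"qualitative", "quantitative"}
    for req in requirements:
        methods = {m for m in methods if SUPPORTS[req][m]}
    return sorted(methods)
```

For example, needing screen video and Think Aloud steers the choice to qualitative research, while needing a large sample of scores steers it to quantitative.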

These methodologies are not mutually exclusive and are frequently run in series, with qualitative research used to build on the data captured in the quantitative research, although a significant budget may be required for this.

Qualitative user research is carried out one-to-one and is also referred to as usability evaluation, web user research, user testing and usability testing.

Qualitative Research Process

The methodology for conducting qualitative research (usability evaluations) of this type involves the following:

  • Recruitment: Participants recruited against the target profiles specified by the client
  • Preparation:
    • A test script and discussion guide designed to facilitate the participant’s interaction with the design
    • A facility in which to carry out the research
    • Equipment set up to record the sessions, and devices for the participant to use (e.g. tablet, smartphone, PC)
  • Research Session:
    • Skilled moderation of each of the sessions
    • Observation and recording of issues
  • Analysis & Reporting

Participant Numbers & Session Length

By the time we get to the design testing stage we are normally carrying out the usability evaluation on all platforms – smartphone, tablet and PC. But in our experience we don’t need to run five sessions on each platform, because serious issues have normally been flushed out in previous rounds.

In most cases we run ten (approx. 60-minute) sessions over two days of research, with the following split across platforms:

  • Smartphone = 4 participants
  • PC = 4 participants
  • Tablet = 2 participants

We generally find that tablet findings can be informed by those on smartphone and PC, but we like to run a couple of tablet sessions to avoid missing an obvious issue with breakpoints or key information below the fold.

Whereas with prototype testing the scope of the available interaction can be relatively small, with design testing we usually have a wide range of content and multiple user journeys to interact with. Participants will be asked to complete multiple tasks that encourage them to explore all the areas of the design we are interested in.

Research Facility and Viewing

Research facilities provide viewing through a one-way mirror or via a screen so that the design team and stakeholders can see, first hand, how participants get on with the developing designs. Because the viewers are in the same location as the moderator, and viewing rooms tend to be soundproof, there is the opportunity for the team to discuss what they observe between sessions and begin to plan updates and changes.

However, research facilities come at a cost; alternatively, the research can be run almost anywhere – in meeting rooms, participants’ homes, clients’ offices and so on. We can stream the sessions across the internet, so live remote viewing is possible.

There are, though, some limitations:

  • Only the screen of the device is visible, not the participant’s facial expressions
  • Viewing video over the internet for extended periods can be unreliable, as video is bandwidth-heavy
  • The screen resolution can make it difficult to view fine details

An alternative is to review the high-definition session videos, which we record as a matter of course and upload to a shared Dropbox folder. Bandwidth allowing, we upload these as we go and immediately share a link so you are able to watch with about an hour’s elapsed time.

Test Equipment

We provide all the technology required to run the user research sessions.

This includes:

  • The test laptop, loaded with specialist software for recording picture-in-picture, high-definition video of the screen the participant is using plus their facial expressions
  • The software needed to connect smartphone and tablet devices so that we can record the screen

The setup we use for recording mobile relies on a hardware and software configuration that means the smartphone simply has its power cable plugged in. The participant can pick up the phone, hold it and interact with it naturally, without attachments for overhead cameras, cradles, or the device fixed in position on the desk.

We have a range of devices we provide for testing, including iOS and Android smartphones, tablets and Windows PCs. We are happy to use your devices if you need us to, and can generally connect to them with ease.

Test Script, Moderation, Analysis and Reporting


All our projects are run by one of our team of highly experienced UX consultants, each of whom has been independently evaluated and awarded Accredited Practitioner status. Our consultants all have a minimum of five years’ experience, the necessary qualifications and the capabilities to run this type of design research.

Test Script

In discussion with the client, our UX consultant will develop the test script that includes the tasks and scenarios that will guide the participants’ interaction, plus any questions you need answered by the research.

We document this within a Research Plan, which contains all other details concerning the research, and is shared with you for iteration and sign off. A typical session structure might look like this:

  • Before they start
    • Discussion about the context of use, current behaviours and attitudes towards the subject of the research.
  • Tasks
    • We will create tasks covering the key journeys, interactions, features and functions that are to be evaluated
    • Questions connected to tasks that the moderator must ensure are answered.
  • Closing Interview
    • A discussion about their experience


All our consultants have many years of experience running research of this type and, unless otherwise requested, our approach to moderation is as follows:

  • To verbally communicate the tasks to participants to avoid formality
  • To allow the user to interact freely with the prototype and not to lead them
  • Only to interrupt the user when they are hesitant or confused and if that doesn’t result in losing the momentum of the task or potential learnings from their behaviour
  • To question them after a task or sub-task if necessary

On occasion, we have been asked to operate a more formal moderation approach by other customers and are happy to alter our style if need be.

During the user testing sessions the moderator will observe and make notes that they will use later for analysis. These may be time-stamped so that they can be cross-referenced against the video of the sessions.

Analysis & Reporting

After the research, the consultant will carry out the analysis and create the report, if that is the deliverable required. Where reporting is required we follow user experience best-practice reporting standards.

We use a traffic light reporting scheme to categorise observations and assign severity ratings, as shown below:


Traffic Light Reporting Scheme
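As a minimal sketch, a three-band red/amber/green scheme might be expressed as below; the band definitions and the `classify` helper are illustrative assumptions, not our exact reporting criteria.

```python
# Sketch: a traffic-light severity scheme for usability observations.
# The three bands and the two yes/no judgements are illustrative
# assumptions; a real report may define severities differently.

SEVERITY = {
    "red":   "Critical - blocks task completion; fix before launch",
    "amber": "Significant - causes confusion or delay; fix when possible",
    "green": "Minor - cosmetic or preference issue; note for backlog",
}

def classify(blocks_task, causes_confusion):
    """Map two yes/no judgements about an observation to a traffic light."""
    if blocks_task:
        return "red"
    if causes_confusion:
        return "amber"
    return "green"
```

An observation that stops the participant completing the task lands in red; one that merely slows them down lands in amber.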

CASE STUDY: 'My Home Move'

For online conveyancing specialist MyHomeMove we had provided usability testing throughout the development process. The final stage included testing the full design with all branding and copy in place.

We conducted user research on smartphone, tablet and PC designs that were fully functional and allowed the participants to interact as they would in the real world. The research was conducted in Leeds so that both the client and their design agency could view the sessions and discuss the observations with our user experience consultant. We provided a detailed report that included recommendations for the agency to implement.

When the new website went live there was an 80% increase in mobile conversion.

Quantitative research is carried out with an online panel of many hundreds of respondents and is known as online task testing.

Quantitative Research Process

The methodology for conducting quantitative research (online task studies) of this type involves the following:

  • A panel of respondents segmented as closely as possible to the target profiles specified by the client
  • Study design, including tasks, questions and any study logic
  • Software for delivering the study, and the time required to code it
  • Analysis of the data set and reporting of findings and recommendations

Panel Size and Selection:


The main reasons for running quantitative design research are that users are geographically spread out, or that a statistically relevant sample size is required.

With international studies where statistical relevance is not required, smaller numbers can be sufficient – for example, if there are ten countries we may only wish to research with 20 respondents in each. In this case we are using the quantitative methodology as a proxy for qualitative user testing, most likely because it is more cost-effective than running the research in each of the ten countries.

In a single-country study we may wish to use a panel of at least 200 in any case, because the reason for choosing the methodology is the need for a high volume of responses to give statistical relevance.

In both cases it may make sense to run studies with hundreds of users: there is not much more effort in analysing a large data set than a small one, so it is mainly a cost question. The only exception is when a study includes lots of questions that allow respondents to give free-text answers – i.e. they type a sentence about what they liked or didn’t like. Each of these individual responses needs to be read, and as this is time-consuming it increases the cost of analysis.

The study is delivered to the panel via an emailed link, which makes it very easy to source respondents from any one of a number of panel providers, or even the client’s internal customer panel. We are flexible about where the respondents come from; it is simply a case of matching budget and profiles.

Study Design and Questions:

Once an online study goes live it can’t be changed – this is a fundamental difference from qualitative research, which can be adapted as you go – and so the study design must be right when it is launched.

To achieve this we use the following process:

  1. Hold a briefing meeting or call to discuss in detail the requirement, scope of tasks and questions to be asked.
  2. Draft a study script in a Word document and share this with the client.
  3. Iterate and improve until we are all happy with the draft script.
  4. Code the script into the study software and carry out a test run.
  5. When we are happy with the study, send a test link to the client for testing and sign-off, or further iteration.
  6. Go live only when the final test link is signed off.

During the briefing meeting or call we also ask about the reporting format, the type of tables and charts you are expecting to see, and the cuts of data. For example, if you are expecting to see a gender split, we need to make sure gender is a question being asked.

Study Software:

We are able to use a range of technologies to run online task studies, including UserZoom and Loop11. There are strengths and weaknesses across the software products available, and we are happy to recommend the best product for your requirement. If you use an in-house tool or already have a licence in place, we are happy to use that.

When the study is coded and the test link signed off, the study is made live. We then monitor the completion rate and liaise with the panel providers to ensure panellists are being sent the study and that we are achieving the quota.

Analysis and Reporting:


When the quota has been achieved, the study is turned off and analysis begins. Quantitative studies tend to create very large data sets, with hundreds of responses, and may require statistical analysis to answer some of the research questions.

Some study software tools come with their own built-in visualisation capabilities that can quickly and easily carry out cross-tabulation and provide tables and charts. Where this is not the case, or where the research requires more detailed analysis, we may export the data set into Excel or a more powerful package such as SPSS.
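For illustration, the core of a cross-tabulation can be sketched in a few lines of Python; the respondent records and field names below are invented, and a real study would use the study tool, Excel or SPSS as described above.

```python
# Sketch: a minimal cross-tabulation of task success by gender, of the
# kind a study tool or Excel/SPSS would produce. The respondent records
# below are invented illustrative data.
from collections import Counter

respondents = [
    {"gender": "female", "task_success": True},
    {"gender": "female", "task_success": False},
    {"gender": "male",   "task_success": True},
    {"gender": "male",   "task_success": True},
]

def crosstab(rows, row_key, col_key):
    """Count respondents for each (row value, column value) pair."""
    return Counter((r[row_key], r[col_key]) for r in rows)

table = crosstab(respondents, "gender", "task_success")
```

This is exactly why the gender question must be in the study script: the cut of data cannot be produced afterwards if the field was never captured.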

Task-Based Evaluation

In most digital interactions, users perform simple and complex tasks to accomplish their goal. We measure the intuitiveness, clarity of instructions, and overall success rate when end-users carry out these tasks. Listed below are the methodologies that support Task-Based Evaluation.

Expert reviews, also known as heuristic reviews or usability audits, are often carried out ahead of, or instead of, user research. They provide an independent, detailed review of the user journeys and task flow at the design stage. It is a low-cost methodology that can be delivered very quickly, often in just a couple of days, and draws on the expertise of a senior user experience consultant.


Expert reviews are a valuable and cost-effective method of ensuring your website or app design is usable and providing the right kind of experience for its users. They are increasingly popular with organisations looking for a comprehensive UX audit of their product, often before undertaking more intensive user research.

One of our senior user experience consultants will review your design focussing on the task flow and user journeys. They approach the review from the perspective of a typical user so that they can highlight any usability issues that might spoil a genuine user’s experience of your website. Using their knowledge, the UX expert will then suggest ways in which you can eliminate these problems and improve the product’s overall user experience.

Expert Review Methodologies

There are two key methodologies that can be called upon for an expert review. These are:

  • Cognitive Walkthrough
  • Heuristic Evaluation

Each is described below. We tend to use a cognitive walkthrough, but also draw on the guidance provided by the heuristic evaluation methodology.

Cognitive Walkthrough

Cognitive walkthrough places the UX specialist in the shoes of a typical user, and sends them on the same journey that user will make in an effort to perform tasks and achieve their goal. This is particularly useful at the design stage and with task flow evaluation.

At each step of the journey, the expert uses their knowledge of user behaviour to answer the following questions:

  • Will the user try to achieve the effect that the subtask has? – Is it clear that the subtask is required to achieve their goal?
  • Will the user notice that the correct action is available? – Is the means to accomplish this task visible?
  • Will the user understand that the wanted subtask can be achieved by the action? – Is it clear to the user what action is required to continue?
  • Does the user get appropriate feedback? – Will the user be aware they have successfully achieved their goal once the action has been taken?

Dependent upon what answers are arrived at during the evaluation, each step of the journey is marked as either a success or a failure. In the case of the latter, reasons for why the design of the UI might prevent the user performing a task are evaluated, allowing solutions to be found and recommendations made that will improve the website or app’s design and task flows.
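The pass/fail logic above can be sketched as follows; the `evaluate_step` helper is a simplification we have invented for illustration, since a real walkthrough also records the reasons and evidence behind each failure.

```python
# Sketch: recording cognitive-walkthrough judgements per journey step.
# A step passes only if all four standard walkthrough questions are
# answered "yes"; capturing *why* a step fails is left out of this sketch.

QUESTIONS = (
    "Will the user try to achieve the effect that the subtask has?",
    "Will the user notice that the correct action is available?",
    "Will the user understand that the subtask is achieved by the action?",
    "Does the user get appropriate feedback?",
)

def evaluate_step(answers):
    """answers: one boolean per walkthrough question, in order."""
    assert len(answers) == len(QUESTIONS)
    return "success" if all(answers) else "failure"
```

A journey is then a sequence of such steps, and any "failure" step is where solutions and recommendations are developed.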

This form of expert review requires not only a keen understanding of the users the site is aimed at, but also of its business goals. This allows our UX expert to define detailed personas – ‘characters’ that describe a specific type of user via their age, occupation, goals, pains etc – which are then used in the evaluation to ensure that both the user’s and the organisation’s objectives are well served by the design.

Heuristic Evaluation

In a heuristic evaluation the UX expert will assess a user-interface (UI) in accordance with a predetermined set of usability guidelines (heuristics).

The heuristics are often drawn from an original series formulated in 1994 by usability consultant Jakob Nielsen. The ten heuristics he defined, originally for software systems but adaptable to websites and apps, are as follows:

  1. Visibility of system status – The UI keeps the consumer informed of what is going on at every step of the user journey. E.g. Using a progress bar or similar when processing a payment, so the user isn’t left to fret over whether it is going through or the website has crashed.
  2. Match between system and the real world – Content matches the expectations of the user, conveying the desired information clearly, while striking the correct tone.
  3. User Control and Freedom – Does the site navigation and its associated features (breadcrumbs etc) meet the demands of the user? Does it provide a clear path so a user always knows where they are and how to get back to where they were before in the event of error?
  4. Consistency and Standards – Is there consistency in what the user sees and what the user expects to see on account of what they know from visiting other websites? No ‘learning’ is required to achieve a desired goal. E.g. Link colours and button labels are in tune with what is already recognised by the user.
  5. Error Prevention – Ensure the design includes the information and labels that will prevent a user from making errors. E.g. If the phone number field in a form doesn’t allow spaces, let the user know this.
  6. Recognition rather than recall – All the relevant information that allows a user to successfully complete a task is on the one page, so they are not having to flick between tabs or rely on memory to achieve a desired goal.
  7. Flexibility and Efficiency of Use – In the case of an eCommerce website or app, this might be applied to whether shortcuts such as ‘Recently Viewed’ or ‘Saved Searches’ links are provided.
  8. Aesthetic and Minimalist Design – Design elements should be aesthetically pleasing, but not at the expense of site functionality and message. Does the product strike a happy medium where all these components work in harmony?
  9. Help users recognise, diagnose and recover from errors – Where errors have occurred, whether preventable or not (404s etc), is there sufficient information to allow the user to correct their mistake (e.g. highlighting missed mandatory fields on a form) or get back into the site with relevant content (e.g. customised 404 pages)?
  10. Help and Documentation – Pointers, self-explanatory labels and advanced search options to enable a smoother user journey.
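As a sketch, findings from a heuristic evaluation can be tallied against the ten heuristics like this; the `issues_per_heuristic` helper, and the idea of filing each finding under exactly one heuristic, are illustrative assumptions.

```python
# Sketch: tallying heuristic-evaluation findings against Nielsen's ten
# heuristics. The findings passed in are invented illustrative data;
# real reviews attach severity and evidence to each finding.

NIELSEN_HEURISTICS = [
    "Visibility of system status",
    "Match between system and the real world",
    "User control and freedom",
    "Consistency and standards",
    "Error prevention",
    "Recognition rather than recall",
    "Flexibility and efficiency of use",
    "Aesthetic and minimalist design",
    "Help users recognise, diagnose and recover from errors",
    "Help and documentation",
]

def issues_per_heuristic(findings):
    """Count findings filed under each heuristic name."""
    counts = {h: 0 for h in NIELSEN_HEURISTICS}
    for finding in findings:
        counts[finding["heuristic"]] += 1
    return counts
```

A tally like this gives a quick picture of where a design is weakest before reading the detailed findings.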

Alternative sets of guidelines have been defined over the years, but it is predominantly the Nielsen set that forms the basis of this particular methodology.


CASE STUDY: Omnichannel Retailer

A large omnichannel retailer asked us to complete a task flow review of a prototype at the design stage, ahead of usability evaluation with real users. We completed a three-day review of the major journeys across three platforms, focussing on mobile and desktop first and then completing a quick review of tablet.

Feedback was delivered in a short report that highlighted issues and action and these were implemented ahead of the user research. This meant the user testing could focus on the fine detail of the experience.


IA & Taxonomy Review

We use a combination of a desk-based review of the product/information hierarchy and verification of end-user vocabulary, through sorting and grouping exercises with representative samples of the target audiences, to evaluate classification schemes. Listed below are the methodologies we use to support IA & Taxonomy Creation.


After the prototype has branding and visual treatment added, it is necessary to evaluate whether the navigation and hierarchy created following the taxonomy exercises in the generative stage are impacted by the change. The addition of colour, icons and other design treatments can affect the intuitiveness of the design, and navigation is one of the areas affected. The quickest and simplest way of checking this is a desk review by one of our Senior UX Consultants.

In order to complete the review, our UX consultant will wish to see the following:

  • Previous outputs from card sorting and tree testing exercises
  • The prototype navigation container prior to design
  • Any other documentation concerning the decisions taken around the information architecture and classification scheme

This documentation enables our UX consultant to understand the goal state, so they can evaluate whether it still holds after the design was added.

The review itself will be guided by the user journeys and, in process, will be similar to a cognitive walkthrough. The UX consultant will go through each journey, putting themselves in the shoes of a typical user or specific persona, and noting any potential issues with the labelling, grouping and structure of the navigation container. In addition to user journeys, tiered navigation will also be reviewed for adherence to hierarchy organisation principles.
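Parts of such a hierarchy check can be sketched as a mechanical first pass; the depth and breadth thresholds below are illustrative rules of thumb, not a substitute for the consultant's review.

```python
# Sketch: flag navigation-tree nodes that break simple hierarchy rules
# of thumb (too deep to reach, or too many children to scan easily).
# The example tree and both thresholds are invented for illustration.

MAX_DEPTH = 3      # levels a user should have to drill down
MAX_CHILDREN = 9   # options comfortably scanned in one menu

nav = {"Home": {"Shop": {"Men": {}, "Women": {}}, "Help": {"Contact": {}}}}

def audit(tree, depth=1, path=""):
    """Yield (path, problem) pairs for nodes breaking the thresholds."""
    for label, children in tree.items():
        here = f"{path}/{label}"
        if depth > MAX_DEPTH:
            yield here, "too deep"
        if len(children) > MAX_CHILDREN:
            yield here, "too many child items"
        yield from audit(children, depth + 1, here)

issues = list(audit(nav))
```

Anything the automated pass flags still needs the consultant's judgement, since a deep or wide branch is sometimes the right structure for the content.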

The findings from the review will be delivered back to the client in the form of a detailed report that includes:

  • Issue identified
  • Potential impact
  • Recommendation for rectifying it
  • Possible impact of the recommendation on other areas

As always, we are happy to talk through the findings of the report on the telephone or face to face to make sure you are happy with how to implement change and mitigate the issues.


A full explanation of how the taxonomy review process works, and which methodologies are used, is provided here in the generative phase, which is when it should take place. However, we don’t live in a perfect world, and we are approached with plenty of requirements to review the taxonomy of an existing website or late-stage designs because it hasn’t been thought about earlier in the process.

The approach and process are almost identical to those described in the link above, but a few points of consideration are worth noting.

Removing the Design

One of the first things we do is remove the influence of the design from the taxonomy review. The methodology we use is called “Tree Testing”, also known as “Reverse Card Sorting”. This process isolates the information architecture, navigation and labelling, and allows us to establish what is working.

Card Sorting

In most cases we use open card sorting where a user is asked to group similar items together and then name the group of items, giving them the maximum control over the outcome.

However, if the design has reached a stage where decisions have been taken that impact the content groupings and hierarchy and that cannot be altered we may need to use closed card sorting. This is where the user is given the group names and asked to organise content within these groups.
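For either card sorting variant, a standard first analysis step is an item co-occurrence matrix: counting how often participants place two cards in the same group. A minimal sketch, with invented sort data:

```python
# Sketch: building an item co-occurrence matrix from open card-sort
# results. Pairs that frequently land in the same group are candidates
# for the same navigation grouping. The sort data below is invented.
from collections import Counter
from itertools import combinations

sorts = [  # each participant's groups of card labels
    [["price", "delivery"], ["returns"]],
    [["price", "delivery", "returns"]],
    [["price"], ["delivery", "returns"]],
]

def cooccurrence(all_sorts):
    """Count how often each pair of cards appears in the same group."""
    pairs = Counter()
    for groups in all_sorts:
        for group in groups:
            for a, b in combinations(sorted(group), 2):
                pairs[(a, b)] += 1
    return pairs

matrix = cooccurrence(sorts)
```

Card sorting tools compute this matrix (and cluster it) automatically, but the underlying count is no more than the above.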

Stakeholder Workshops

With well-developed designs and live websites or apps it is far more likely that stakeholders will have developed entrenched views about content hierarchy, naming and grouping. Stakeholder workshops at this stage need to address these political issues subtly and be prepared to win over “non-believers”. This situation can occur at the generative stage but it is often less severe and we have longer to address it as evidence builds through user research.


The outcome of taxonomy reviews at the design stage, and especially at the live stage ahead of a redesign, will most likely be changes to the design. Be prepared for this, as it will take more time than just a tweak.


CASE STUDY: UK Taxonomy Review

A large UK company approached us to review the taxonomy of a live website that had undergone some redesign but not had any changes made to the navigation and hierarchy. The new design used the original hierarchy, which had grown over the years as new items were added through unplanned changes. We ran a series of online and offline card sorting exercises, stakeholder workshops and delivery sessions as we helped them create a new customer-centric navigation container.