• Used at the early stages of a project when a concept or proposition is being developed, or in the case of an existing digital asset or service, when a redesign is being considered
  • The research is designed to gather input and generate insight from internal stakeholders and external users
  • Methodologies are selected based on the requirements and assets available

The core methodologies that support the contextual research activity are set out below and may be used in isolation or combination as part of a programme of work.

Concept Validation

At concept stage, identifying and understanding the user base is of critical importance, as is gaining stakeholder input. Testing the concept through research validates the requirements that will form the foundation of a developing proposition. Listed below are the methodologies that support concept validation.

Diary studies are designed to gather data from users over a long period of time, sometimes weeks, and in research terms are called ‘longitudinal studies’.


The briefing and planning process is an important stage in generating a clear understanding of the objectives of the project and for planning how it will run. With diary studies this is critical because once the study is launched and in the field it is not possible to alter it. The study is designed and planned to the last detail, including any questions that may be delivered to the respondent via the app during fieldwork.

In some cases, where the focus of the research is a concept or proposition that is not fully developed, this stage may include a workshop where the client team shares information with our consultants. This is an excellent way of ensuring that the client team has fully considered the proposition (or concept) and understands the questions they wish to answer through the research, so that the consultant is able to create a robust research plan.


We have to go beyond the standard screener used for qualitative research and ask questions about availability and comfort with recording video, photos and messages. Getting the screener right avoids people dropping out part way through or delivering unreliable or ‘gappy’ feedback.

It is also important to structure the incentive payment to maintain the respondents' engagement. Staged payments or even rewards can be used to encourage respondents to interact, record and share data. This needs to be clearly communicated when initiating the recruitment, and even so we always allow for people dropping out of the research.

The project starts with briefing the respondents about what is required during the research. This is sometimes done in a focus group or a one-to-one session with the respondents. This will involve installation of the data capture app (see below) and explanation of the type of activity we are interested in.

In cases where the research needs to be carried out over a wider geography and holding face-to-face meetings is not possible, we schedule phone calls to brief them on the requirement for the diary study, make sure they can access the app and provide them with a contact point for any issues or questions they may have.


We currently tend to use Nativeye software for data capture on diary study projects. Nativeye is a cost-effective solution for running mobile diary studies. The app enables us to create a study, invite participants to download an iOS or Android app, interact with them during the data capture period and use a variety of analysis and reporting modules once it is complete.


The key to diary study research is to keep the demands on the participant simple. If we need more granularity we would look to ethnographic research, which involves observation of behaviour rather than respondent reporting. The types of things we will ask respondents to do include:

  • Typing a message, thought or description of what they are seeing
  • Taking a picture of surroundings or a screen
  • Recording a short video of themselves or surroundings
  • Responding to an alert or notification on their phone
  • Using built-in tools like a sliding scale of emotion (see example below)
sliding scale of emotion
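The prompt types listed above can be thought of as a simple data plan. The sketch below is purely illustrative (it is not Nativeye's API or data model): it shows how a study plan might distinguish respondent-initiated entries from prompts the app pushes at a fixed time.

```python
from dataclasses import dataclass
from datetime import time
from typing import Optional

# Hypothetical sketch of a diary-study prompt plan; an illustration
# only, not Nativeye's actual API or data model.
@dataclass
class Prompt:
    kind: str                           # "text", "photo", "video", "alert", "scale"
    question: str
    scheduled_at: Optional[time] = None  # None = respondent-initiated

def pushed_prompts(plan):
    """Return only the prompts the app must push at a fixed time."""
    return [p for p in plan if p.scheduled_at is not None]

plan = [
    Prompt("text", "Describe what you are doing right now"),
    Prompt("photo", "Take a picture of your surroundings"),
    Prompt("scale", "How are you feeling?", scheduled_at=time(18, 0)),
]
# Only the 18:00 emotion-scale prompt is pushed; the rest are
# recorded whenever the respondent chooses.
```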

If time and budget allow, we prefer to interview each of the respondents after carrying out some initial data analysis. This is done either one-to-one or in small groups and allows us to dig into reported behaviour, thoughts and feelings.

We can run these interviews in a research facility so the client can view the discussion first hand. It is possible to report on the diary studies from the data captured alone but carrying out the depth interviews or small groups provides far richer findings.

Delivering the Findings

Communicating the findings from discovery research of this type is key to the success of the project. We need to not only provide clarity about what we learnt and the recommendations based on those findings, but also bring this to life.


We often provide deliverables that include a detailed report, story boards, behaviour maps and more. Here is an example of a story board:

story board

There can be a lot of data, insight and suggestions following diary studies so we always advise our clients to be prepared for this and allow time to digest the findings.

Stakeholder interviews are used to gather information about a concept or proposition from within your organisation or immediate interest group (e.g. non-Executive Directors or Trustees). They tend to be utilised for significant projects, such as the proposed introduction of a new product or service, or a high-value or high-risk concept.

Running the Interviews

Interviews can be held one-to-one or in small groups dependent on the availability of people, their personal needs (perhaps they feel more comfortable speaking on their own) or the requirements of the project. They are best carried out face-to-face but can be carried out remotely. Agreeing the approach is a collaborative process between the client project team and our team so that we can reflect all the sensitivities and logistical considerations that exist.


Small groups of stakeholders can also work as Co-Creation Workshops, which are more about developing the proposition or concept than gathering information.

These can be useful when stakeholder interviews follow user research and there is data to share; they also ensure that the insight generated from the research becomes owned by the client teams.

Typically, we run a half-day co-creation workshop to share the user research insight and work through its implications and what it means for the concept or proposition. This allows plenty of time to become fully immersed without attendees watching the clock.

Involvement at this point tends to ensure that each person attending the co-creation workshop has a full understanding of the emerging themes and has formed opinions and ideas. The purpose of the co-creation workshop is to merge these ideas with the insight generated from research and establish tangible and agreed propositions and priorities.

We involve the client team in the preparation for the workshop so that the agenda and objectives are agreed between us and the workshop achieves its goals. The interviews need to be carried out in an environment of trust, where specifics are not attributed to an individual in the reporting and where broad themes are captured, representative of overall sentiment and feedback.

In organisations large and small, there are considerable benefits to utilising an independent consultant like us to carry out stakeholder interviews. As we have run them many times, often with very senior people, we are not fazed by the prospect of running stakeholder interviews and workshops. We know how to structure them to achieve the pre-agreed goals and how to facilitate the session, and we don't have any "baggage", so people are more likely to open up and share.

Delivering the Findings

The output from stakeholder interviews can be delivered in various ways from a written report, to a set of illustrations or even a workshop.

Focus groups provide the opportunity for participants to debate issues, share experiences and to expand and develop ideas from their own perspectives.  In addition, working in groups of similar types of users will allow participants to form and develop ideas based on their customer requirements and real-world experiences.


The quality of recruitment and the preparation put into the creation of the discussion guide are important when planning for a focus group.

The first step is selecting the right facilitation aids that will enable the group to provide the insight required. We also need to consider the management of the group, facilitating the discussion, running activities and ensuring the research objectives are met during the session. Getting this right requires experience together with proper planning and preparation.

Running the Focus Groups

Within the groups a range of individual tasks and group exercises can be used to explore opinions and encourage deeper discussions, such as individual thought bubble completion, where participants individually write up their initial thoughts and opinions about an idea or concept before it is discussed amongst the group.


In contrast, we can carry out a group activity such as a card sort, where participants are asked to work in groups to sort cards containing a range of features or properties into piles based on certain criteria, e.g. individual features sorted by the most important factors influencing usage.

Each focus group lasts between 60 and 90 minutes and can involve 2 to 4 or 7 to 8 participants. For concept and proposition work we tend to use small groups of 2 to 4 people because they provide a more intimate setting where we can go into greater depth.


A good example of this is a recent project we ran for Facebook, where they were considering adding a new function to their Workplace platform. Working with a number of small groups of 4 participants, recruited as pairs (2 from each organisation), we were able to delve into their contextual needs in considerable detail.



We typically hold focus groups in a viewing facility, with full audio and video recording, so that representatives from the client organisation can view the sessions live. This carries a cost and is therefore optional; sometimes, as with a project we ran in New York about a new cosmetics proposition, the client is happy simply to watch the videos and read the report.

Delivering the Findings

The output from focus groups is always a detailed report containing observations, analysis and recommendations. A huge amount of data is gathered during focus groups and a significant aspect of the project is the consultant’s ability to synthesise this data into meaningful findings. To accompany the report we provide video of the sessions so that the project team can look back on the sessions as the project evolves.

Personas are fictional characters which represent key attributes and behaviours of customer segments. They are widely used in digital development, but need to be grounded in real user research and customer data to be of value.

At the start of the development process, if organisations don't already have them, or if they need to be refreshed, we get asked to create personas. When properly created, teams can use them to challenge their decisions. Often a member of the team will take on one of the user personas, represent them during the design process and try to see design questions from their perspective.

Our methodology is designed to ensure that personas are an authentic reflection of customer research, and to enable personas to be used as a key tool to measure online behaviour and drive digital development.

It is central to our methodology that basic decisions about user segmentation, and the attributes and behaviours attributed to personas, are founded on research rather than guesswork.

We develop personas specifically to guide:

  • Digital Development: what features are of most value to which personas? How should digital activities be prioritised?
  • Tracking of Customer Segments: how can an organisation track the online behaviour and user journeys of key segments?

We have been asked to create personas from nothing more than some web analytics data and a couple of days of billable time. We refused, as the results would have been pointless, but it does happen, so we urge people to consider that if they are going to base design decisions on personas, those personas must have a strong foundation.


We use a mix of qualitative and quantitative research data to build personas and can use data that our client already has or generate new research where there are gaps. As a result each proposal for developing personas is different.

We normally start by reviewing the existing data you have about your online customers which will help to build an initial high-level segmentation.

This may include:

  • Existing Qualitative Research (e.g. focus groups, usability testing) with your online customers
  • Existing Analytics Data
  • Any publicly available research data or findings about customer behaviour in your market sector.

From this, we will create an initial segmentation of the customers (e.g. by demography, device use), which we will use to recruit for qualitative research.

We will also identify "knowledge gaps" and decide which issues we need to understand in more depth about your online customers' behaviour.


Qualitative research at this stage is generally carried out one-to-one, and examples of what we are looking for include:
  • Their device behaviour: how they use different devices to access your site and what factors govern their choices
  • Their needs: what are the main things that they need to find on your website; what types of reassurance are they looking for
  • Their frustrations/ pain points: what aspects of your website are likely to deter them from completing a task or transaction
  • Their buying behaviour: what are the key factors that influence their decision to purchase
  • Their attitudes: particularly how they think about making purchases online in your market sector.

Creating the Persona

The findings of the qualitative research will be the basis for developing online customer personas – typically 4 to 6 personas in all. In some ways, where personas are concerned, the fewer the better: they provide focus, and design decisions are tough enough without tiny, granular differences to worry about. But this isn't always possible, particularly with large brands with wide customer bases.

We will normally describe each persona on a single page, to a similar format illustrated below.

Example Persona

As shown, the persona typically includes the persona’s “name”, and some demographic indicators. It will also have a description of their online behaviours and attributes across all their digital activities and, specifically in relation to your website/market sector:

  • Their device use (what devices do they use when)
  • Their buying or use behaviour (dependent on the proposition)
  • Their needs and frustrations (what is likely to motivate or inhibit purchase decisions)
  • High value features and functionality (what would be of particular value to this persona).
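The one-page persona format described above can also be captured as a structured record, which makes it easier to reuse personas in recruitment screeners and measurement frameworks. The sketch below is a hypothetical illustration; all field names and the example persona are invented.

```python
from dataclasses import dataclass, field

# Hypothetical, minimal representation of the one-page persona format
# described above; field names and content are illustrative only.
@dataclass
class Persona:
    name: str
    demographics: dict
    device_use: list          # what devices they use, and when
    needs: list               # what is likely to motivate purchase decisions
    frustrations: list        # what is likely to inhibit them
    high_value_features: list = field(default_factory=list)

commuter = Persona(
    name="Sam",
    demographics={"age_band": "25-34", "location": "urban"},
    device_use=["smartphone (commute)", "laptop (evenings)"],
    needs=["quick checkout", "order tracking"],
    frustrations=["forced account creation"],
    high_value_features=["saved payment details"],
)
```

A record like this is trivially serialisable, so the same persona definitions can feed the design workshop and, later, the analytics segmentation.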



Once the personas have been developed in draft, we hold a workshop with our client's project team. This allows us to present the personas and discuss how they can contribute to the UX strategy. It helps to avoid the personas being left on a "shelf", with a tick in the box against the project task and the development moving on regardless.

A valuable use for personas beyond the development of a new concept or proposition is to embed them into your online measurement framework. By incorporating them into the segmentation used by your online metrics software they can be referred to in performance terms. This is also useful if targets were set at the beginning of the project about adoption or growth rates.

Using personas for ongoing measurement allows you to gain an understanding of how the online activity of your customer segments is changing over time. It also reveals how changes to features and functionality affect the behaviour and user journeys of each persona.

Personas tend to be easier to visualise than abstract marketing segments and so if done properly can become quickly embedded within your digital team.

A proposition evaluation is used when the ideas behind a concept or proposition are starting to be brought to life and early feedback is required. The assets being evaluated will be very low fidelity and will start to offer a sense of how the proposition or concept is planned to function.

Methodologies will be selected based on the assets available and questions needing to be asked.

Methodology Selection

There are a variety of methodologies that can be, and are, used to interrogate a developing proposition, including focus groups and one-to-one user research sessions. Deciding which to use is determined by the point the development has reached and the outstanding questions the product team still has.

It is helpful to put this stage of research into context with an example.

Imagine that, in the early stages of contextual research, "ABC Inc" is thinking about launching a new mobile app to address what they believe is a key user issue. Contextual research has helped them understand how this issue manifests in the daily lives of users, what they use to mitigate it now and how a new mobile app could help.

The team at ABC Inc has now developed the proposition that the app will be built around and may have a reasonably good idea about:


  • What the core functions will be
  • How it will be positioned 
  • Where it fits in the users’ life
  • What in the market is like it and how it differs

The assets that contain information about this developing proposition can arrive with us in various forms, from a PowerPoint deck to a few illustrations. It could even be a minimum viable product (MVP) if the organisation is using an agile development methodology although this puts them in the prototyping stage of development from a service framework perspective.

Assuming it is not in prototype form yet, the next task for us is to understand from the client what questions they have. For example:

  • Do they still have questions about the viability of the overall proposition?
  • Do they have gaps in their understanding of how it might fit into a user's day-to-day activities?
  • Are they interested in feedback about proposition name, initial brand ideas or concepts?
  • Do they need data about potential use cases?

When we understand how far the proposition is developed and the questions that need to be asked we can determine the research approach and design.

Generally speaking, if the proposition and questions needing to be asked are very broad we will be more likely to use small groups or focus groups to gather data.

If the proposition is very tight and the questions narrower and focussed, then one-to-one research would be selected.

This is illustrated in the following graphic.

method selection graphic


Although methodologically, running proposition evaluation research is very similar to running user research with developed prototypes or designs, the preparation stage is different. It is unlikely, in our experience, that a client will hand over assets that can be evaluated as they are. In almost every situation we will be required to create test assets that we can use in the research.


Creating test assets can be a very simple piece of work or can take many days of effort. They will differ between one-on-one (depth) sessions and group sessions.

They may not always be a derivative of the information handed over by the client – for example we may create a test or task in the research that is metaphorical of the developing proposition. We may even use live competitor alternatives as stimulus for the research.

These assets need to be developed in parallel to the discussion guide to ensure that the session is well structured and delivers on the objectives, whether for groups or depth sessions.

Running the Research

As always, recruitment of participants is important and where possible we will use the developed personas to guide the recruitment profiles and screener.

Most proposition research will involve 8 to 12 one-to-one sessions lasting about 60 minutes if that method has been selected.

In the case of groups, we would tend to run 4 to 6 small groups of 2 to 4 participants, or 3 groups of 8 to 10 participants, again dependent on the method selected. In all cases, if there is a wide range of user profiles, or the profiles differ wildly (scientists and janitors, say), we may need to run additional groups to generate reliable insight.

Gathering Insight

In our experience, proposition research is almost always viewed by members of the product team. The only time this isn’t the case is with international research projects where the logistics and travel requirements make it difficult, but even in those cases we see a lot of teams making the trip. As a result, most of the proposition research we run utilises research facilities with viewing rooms.

Following the discussion guide our moderator will run the research session in order to generate the insight required and as mentioned above this can involve tasks and tests.


For example, one software provider was considering adding a new function to their collaboration platform. They had already conducted research that led them to believe there was a need for it and now wished to test the use cases they had created. Our research design involved small groups with a range of exercises, some carried out alone and some in groups.

The exercises were designed to reveal the users' behaviour concerning certain tasks they performed on a regular basis and to identify potential gaps. The function being developed wasn't revealed to the users until well into the sessions. When it was finally revealed, the users had already naturally identified where it might fit (or not) in their routine.

This is the fundamental attribute of proposition research: the proposition is evaluated even though there are no tangible assets for the user to interact with.

Delivering the Findings

In most cases the findings of proposition research are delivered through a detailed research report, a workshop with the product team, a creative of some sort, or a mixture of all three. This is determined early in the project so that the best method for delivering the findings into the client team is used.

User Journey Mapping

Insights from concept validation establish the progression of steps for key user journeys. Listed below are the methodologies that support user journey mapping.

A use case is a detailed description of how a user carries out an individual interaction with the digital product or service you are developing and how the system reacts to that interaction. When the use cases are completed we can analyse the workflows that will begin to inform the processes and journeys that deliver the proposition.

Constructing the Use Case

Our approach to developing use cases is straightforward and employs a template structure based on best practice.

The key components of the use case template are as follows:

  • Title – a unique, short and punchy title for the use case.
  • Short Description – two to three sentences describing the scope of the use case.
  • Persona – sometimes referred to as “Actors” but we prefer to use the personas developed to help embed them into the project.
  • Preconditions – a short description or list of the conditions that are in place when the use case begins.
  • Initial Workflow – the core steps taken to complete the use case.
  • Alternative Workflows – the different workflows that apply due to contextual differences.
  • Error Flows – items that stop users achieving their goal in the use case.
  • Finished State – description of what must be true for the use case to be considered complete.

Every use case is documented using this template and every persona is also considered. However, we look for duplication to rationalise the use cases, and so one use case may represent one, some or all personas. The key is to go through the process over and over again to make sure that everything has been identified and documented.

A user journey map is a visual representation of an end-to-end interaction through a process or system. Maps can be extremely detailed, encompassing not only the steps in a process but also the emotional state of the user, and are used to represent either a proposed journey or an existing journey under review.

The Key Components

User journey maps, also known as [customer] experience maps, are a valuable User Experience Design tool because they are multi-dimensional.

They are presented from the customer or user perspective and typically include the following types of information:

  • Timeline or steps they go through – this is a representation of the time or steps the typical journey takes to complete end to end.
  • Emotional state of the user – we sometimes use emoticons for this or a peak/trough representation.
  • Platforms and channels – what they are interacting with i.e. mobile app on smartphone, call centre, in store etc.
  • Touchpoints – what they are doing when interacting with the organisation.
  • [sometimes] Context – if appropriate, we might include the context, such as whether they are at home, in the office or on the move.

Example of a User Journey Map

As each engagement is different we will adapt the components based on the needs of the project. For example, we have included moments of truth, what the user is thinking and direction of movement.

The important thing to remember is that the user journeys are designed to be used as part of the proposition assets.

A user journey map is created for each persona, as there will usually be more than one: personas tend to show different emotions, use different primary platforms and more. We will normally begin with one map, then adapt it for each of the personas and, where appropriate, colour code the differences.

Creating User Journey Maps

User journey maps draw on the research from earlier stages in the generative phase of development, so we are assuming here that this work has been done and that personas have been developed.

Step 1: Co-creation Workshops – Where possible we like to run co-creation workshops with our client team to brainstorm the core user journeys. We begin with the timeline or stages and then consider the touch points, platforms and channels where they take place. This process will deliver a solid structure and allow everyone in the team to agree on the core components of the journey.

Step 2: Add Emotional State – The next step is to add the emotional state. To facilitate this we make sure the personas are available, presented as poster-sized assets on the walls of the brainstorm room. We ask the team to put themselves in the shoes of a persona and then collectively go through the user journey, adding the emotional highs and lows that persona may experience along the way.
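The structure assembled in these two steps can be sketched as a simple timeline of annotated steps. The example below is hypothetical: step labels, channels and the -2 to +2 emotion scale are illustrative choices, not a fixed standard.

```python
from dataclasses import dataclass

# Hypothetical sketch of a journey map: a timeline of steps, each
# annotated with channel, touchpoint and an emotional score
# (-2 = deep trough .. +2 = peak). All content is illustrative.
@dataclass
class JourneyStep:
    label: str
    channel: str       # e.g. "mobile app", "call centre", "in store"
    touchpoint: str
    emotion: int       # -2 .. +2

journey = [
    JourneyStep("Discover offer", "mobile app", "push notification", 1),
    JourneyStep("Compare options", "website", "product pages", 0),
    JourneyStep("Checkout fails", "website", "payment form", -2),
    JourneyStep("Support resolves issue", "call centre", "phone call", 2),
]

# Troughs are the obvious candidates for redesign attention.
troughs = [s.label for s in journey if s.emotion < 0]
# troughs -> ["Checkout fails"]
```

Adapting the map per persona then amounts to varying the emotion scores and primary channels while keeping the timeline structure shared.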

Delivering the Maps

Armed with the notes, diagrams and other assets created during the co-creation workshop, we return to our office and start work on the detailed user journey maps. These will include:

  • Visual elements,
  • Flow elements,
  • Detailed notes and annotations,
  • Colour coding,
  • Emoticons

and more depending on the agreed format.

This is detailed work that also requires clear creatives and so we will often have a team working on the creation of user journeys and bringing different capabilities to the project.

We do our best to contain the output within a layout that is easily shared digitally and presented in A4 or A3 formats. This means the maps are more likely to be regularly accessed and referred to, because they are easier to use in themselves. If the environment we are delivering to allows it, large format "billboard" style printouts that can live on the walls of the product team can also be of value.

Storyboarding brings user journeys and processes to life, using a strong visual style to provide the narrative from the user perspective. A storyboard typically uses a sequence of drawings that include dialogue and instructions, enabling specialists and non-specialists alike to quickly understand sometimes complex processes.

Storyboards are a possible output, or deliverable, from the user journey mapping or workflow analysis stage. A storyboard contains limited detail and tends to focus on the key triggers and process steps. If properly created they will provide a clear representation of the journey from a user perspective – customer centric.

In our experience, the primary purpose of storyboards is as a method of communicating complex process flows and user journeys to non-technical stakeholders. So, the decision about whether to have them created is driven by the makeup of the project team and involvement of stakeholders.

Creating Storyboards

Storyboards draw on use cases, workflows and user journeys as the raw materials that feed into the creative process. When we create storyboards we ask the following questions:

  • Who is going to be using the storyboards?
  • What purpose are they serving?
  • What visual style would the client like us to use?
  • What level of granularity should we go to?

All these items are linked together. For example, if the storyboards are going to the CEO to explain a new proposition they may need to be well developed and professional looking graphics with a moderate level of detail. We have produced everything from black and white, hand drawn visualisations to highly detailed, full colour presentations and everything in between.

Process design testing takes place ahead of any interactive medium being created to ensure the workflow and process steps support the desired user interactions. It involves users attending one-to-one qualitative research sessions and interacting with colour coded process cards created to represent each aspect of the process or journey.

Preparing for Research

The approach to process testing is very similar to that used for usability testing with the core elements present:

  • The test assets
  • The participants
  • The location of the research
  • The preparation, moderation, analysis and reporting.

The biggest differences with process testing are that the test assets have to be created and that the reporting is normally in the form of revised process designs.

Process designs can be delivered to us as user journey maps, use case workflows, process flow diagrams and more. These assets explain how the process will work: the possible interactions, error paths, alternative flows, success criteria and more. But they cannot be placed in front of users in this form, as they would simply confuse them.


The approach we take is to develop colour coded process cards that represent different attributes in the process.

A recent project included the following card types:


Each user journey or process will have multiple cards, with a short process having perhaps 2 or 3 and larger processes running into tens of cards. Each set is labelled as a specific journey and is used in the user research as the asset for a scenario or task.

Before going anywhere near the research room, the card sets are rigorously tested with the client team to make sure that each step of the process is properly represented.

It is often the case that this stage in the process alone throws up issues and omissions from the processes that can be rectified before user testing takes place. For that reason, we recommend allowing additional time in the project for this stage to go through a few rounds of iteration.

Carrying out the Testing

If you have observed usability testing from the viewing room in a research facility, you will already have a pretty good idea of what happens with process testing. Participants are recruited against the target user profile (persona) and attend a 45 to 60 minute session on their own, moderated by a senior UX consultant. Once they have sat down and been made to feel relaxed with a few opening questions, we begin the process testing.

Each set of process cards represents a user journey or sub-journey and the first card in the set is the user story. The participant is handed the user story card and asked to read it. The card may say something like:

– You want to log into your online bank account

  • You search online to view your account and click through to the login screen.
  • You cannot remember your logon details and the website advises you to contact support.

How will you proceed?

The user will have a number of choices presented on the card and these will lead to different routes through the process via associated cards.

Our goal in the research is to understand whether the process supports the way the user naturally wants to interact. By observing their behaviour and listening to their verbalised thought processes and feedback, we can refine and optimise the processes.

Sharing the Findings

The best way to share the findings from process testing is to provide revised process maps or flows as the main deliverable. A report provides the narrative as to why the process has been altered in the way it has: it takes each step in each process, identifies what worked and what didn't, and explains how it needs to change or be adapted.

Together these deliverables will inform how the wireframes or prototype should be designed to support the user journey and process.

IA & Taxonomy Creation

  • Used to guide the creation of the information architecture, the naming and grouping of items in a structure, such as a website or app
  • Provides a user centred taxonomy and also generally involves an internal stakeholder view
  • Use of closed or open card sort or tree testing is determined by the project requirements

Taxonomy projects often combine open and closed card sorting, online and offline techniques plus stakeholder workshops to gather information and disseminate findings. Listed below are the methodologies that support IA & Taxonomy Creation.

Open and Closed Card Sorting

Card sorting is a method by which users are asked to group and name items, such as content areas in a website. The methodology can be run with physical cards or with card sort software. Typically, a user can reliably group and categorise about 50 items in a session.



There are two methods of card sorting – open and closed.


In open card sorting, we give the participants the cards, and they sort the cards and define the names of the groups. An example of this is shown below:

open card sorting

As the illustration shows, the user is given various food products, such as apples and cream, and asked to group them. The user in the example has decided to group dairy items together and vegetables together. They are able not only to group the items like this but also to name those groups.


Closed card sorting is similar but in this case we give the participants the content group names and ask them to organise the items into those predefined groups. This is illustrated as follows:

closed card sorting

In this case the example shows that the user has been given the groups of ‘Fruit’, ‘Vegetables’ and ‘Dairy’ and is being asked to group the items within these categories.

In our experience, it is better to run open card sorting at a wider scale and scope in order to include the majority of content items and rigorously test the grouping and naming.

Open card sorting tends to be used first to determine the overall hierarchy, groupings and category names.

Closed card sorting can then be used to categorise the remaining content items that may not have been included in the first stage.

Tree testing is also known as reverse card sorting and evaluates the taxonomy and information architecture of a developing website or app. It tends to be used after open card sorting has determined an initial taxonomy. This is then turned into a hierarchy or structure which is then evaluated with tree testing.



Rather than utilising cards, respondents are given browse tasks similar to those they would carry out in the real world. They interact with the navigation (the information architecture) only; there are no visual design or navigation aids, making this in effect a low-fidelity prototype. The respondent clicks the labels within the navigation in order to complete the task.

The following graphic illustrates the process.

Tree Testing Example

By isolating the IA we are able to rigorously evaluate its effectiveness, including the structure, naming and grouping of items within the navigation.
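To make the evaluation concrete, the sketch below scores a single tree-test task: did the participant end on the correct node (success), and did they get there without detours (directness)? The site structure, correct path and click log are purely hypothetical, and real tree-testing tools capture far richer data.

```python
# Hypothetical navigation tree for a tree test (labels only, no visual design).
tree = {
    "Home": {
        "Products": {"Fruit": {}, "Dairy": {}},
        "Support": {"Contact": {}, "FAQs": {}},
    }
}

def is_valid_path(tree, path):
    """Check that a top-down path of labels exists in the navigation tree."""
    node = tree
    for label in path:
        if label not in node:
            return False
        node = node[label]
    return True

def score_task(click_path, correct_path):
    """Success: the participant ended on the right node.
    Direct: they followed the correct path with no backtracking."""
    return {
        "success": click_path[-1] == correct_path[-1],
        "direct": click_path == correct_path,
    }

# Participant browsed into Products first, backtracked, then found Contact.
result = score_task(
    ["Home", "Products", "Home", "Support", "Contact"],
    ["Home", "Support", "Contact"],
)
```

Aggregating success and directness rates across participants and tasks gives the headline effectiveness measures for the structure under test.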

Qualitative Card Sort

Qualitative card sorting and tree testing research utilises a small number of participants in one-to-one sessions lasting about 60 minutes. It tends to be used when we wish to understand why a user has grouped and named content in the way they have.

Carrying out the Research

With qualitative card sorting, we create physical cards for the user to group and organise during the session. There will be a maximum of 50 items, and the user is given plenty of time, up to an hour, to group the items and name the groups.

This is all done under the watchful eye of a user experience consultant, who encourages the participant to explain what they are doing, why they are grouping items as they are, and the rationale behind the names they choose for each group. This is known as the 'think aloud' protocol and is common across UX research techniques.

card sorting graphic

We typically run 8 to 10 one-to-one qualitative card sort sessions in order to gather sufficient data for analysis and to provide a recommended hierarchy.

The hierarchy will typically show the main content groupings and the next level items that sit below them. It could look something like this:

content groupings

It is possible that more than one solution to the navigation is generated from the research. If this is the case, we will provide the alternative versions together with a recommendation for which to use.

Quantitative Card Sort

Quantitative card sorting and tree testing utilises a large number of respondents recruited from a panel. They are invited to participate in an online card sort exercise created in specialist software such as Optimal Sort. The large number of responses offers a high degree of confidence in the findings.

Quantitative card sort is very similar to qualitative card sort, but instead of creating physical cards the content items are provided to the user via online software. As a result, we cannot learn why a user grouped and named content as they did. Instead, we rely on a large number of responses to derive confidence in the findings.

Carrying out the Research

We typically run the online card sort with 500 respondents recruited from a panel – Optimal Sort has an integrated panel which ensures this step is technically simple. A sample of 500 will provide a very robust data set from which we can carry out our analysis. However, we can run the online card sort with a smaller panel if budget constraints require us to do so.

Some customers run their own customer panel and ask us to utilise this for quantitative card sort research. This can represent a large cost saving to the project and we are happy to do it. The card sort software we use simply produces a link that can be emailed to the panel and responses are captured in the usual way.

The key consideration with the panel is the speed of collection and achieving the quota. In most of the projects we have run using an integrated panel we have been able to meet a quota of 500 completes within 5 to 10 days of launching the study.


The software we use for online card sorting projects is Optimal Sort.

Respondents are asked to organise and group a sub-set of circa 50 items that they can drag and drop from a list on the left of their screen into groups on the right. They are then asked to name the groups and we can also allow them to rename the items if that methodology is agreed.

Here is a screen grab of what a user typically sees in Optimal Sort:

optimal sort

Analysing the Findings

With the data captured from the online card sort, we complete our analysis of the findings. This includes the creation of dendrograms and similarity matrices as appropriate, illustrated as follows:


Similarity Matrix

The analysis will allow us to identify associations between groups.
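The underlying calculation is straightforward: a similarity matrix records, for each pair of items, the percentage of participants who placed those two items in the same group. The sketch below shows this with a small set of hypothetical card-sort results; real studies involve far more items and respondents.

```python
from itertools import combinations

# Hypothetical card-sort results: each participant's sorting is a list of
# groups, and each group is a set of item names.
results = [
    [{"Apples", "Pears"}, {"Milk", "Cream"}],  # participant 1
    [{"Apples", "Pears", "Cream"}, {"Milk"}],  # participant 2
    [{"Apples", "Pears"}, {"Milk", "Cream"}],  # participant 3
]

# All distinct items across every participant's sorting.
items = sorted({item for sorting in results for group in sorting for item in group})

def similarity(a, b, results):
    """Percentage of participants who placed items a and b in the same group."""
    together = sum(
        any(a in group and b in group for group in sorting) for sorting in results
    )
    return 100 * together / len(results)

# One cell per item pair: the similarity matrix in dictionary form.
matrix = {pair: similarity(*pair, results) for pair in combinations(items, 2)}
```

Strongly paired cells (high percentages) are the candidates for merging into a single content group.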

When applied, the guiding REISS principles (Repetition, Exclusivity, Inclusivity, Sub-Setting and Similarity) help evolve the groupings into a number of main groups. Opportunities to cross-reference or merge items are indicated by pairing relationships (in the example, these are highlighted with green blocks).

similarity matrix reiss

The best merge method will be used to identify groupings.

The illustration below shows the percentage of participants who agree fully or partially with four larger groups based on their individual pairings. Participants have all grouped at least two of these cards together in each large group. All participants agree with parts of the four larger groups, with 50% or more of individual pairings matched.

dendrogram example

Delivering the Findings

The outcome from the online card sort analysis will be one or more suggested grouping models. These will form the basis of a customer centric taxonomy and are often tested by running qualitative card sorting as a second stage. In the qualitative sessions we are able to interrogate why users may have grouped and named content in certain ways during the quantitative research. This dual method provides good checks and balances to ensure the final outcome is a robust taxonomy.

User experience evaluations are only used in the generative stage when a redesign is being considered, say in the case of a website or app. The approach is similar to usability testing and involves one-to-one sessions with participants attempting tasks that reflect key journeys and processes. They are used to identify issues and opportunities with the existing product or service and to determine what should be retained and what should be discarded.

Running the Testing

As the asset being tested is live and fully functional the user experience evaluation goes further than a usability study carried out at the prototype and design stages. With a redesign in mind we are normally looking for feedback across a range of areas. This can include the interaction, the user journey design, key process flows, branding, copy and tone of voice, taxonomy, error handling and more.


In a generative study for a hotel chain, we reviewed an existing app ahead of the redesign process. The recruitment targeted participants from their entire user base, including business and consumer users, groups and individuals. The browse and search journeys were rigorously tested, but so too were payment, account management, email messages and the loyalty scheme. The study also included visits to competitor apps so that users could identify areas they preferred that could be referenced in the redesign.

This study was carried out in a research facility because the client wanted to hear the feedback first hand, but it isn't unusual for this stage of research to take place in users' homes, as we did for an insurance comparison company ahead of a redesign, or in clients' offices or meeting rooms. The decision generally comes down to cost, availability to spend a couple of days out of the office, and the location of users. Session videos are always provided (free of charge) so you are still able to see what happened even if you can't attend.

Delivering the Findings

The deliverable for generative user experience evaluations tends to be a very detailed report that guides the redesign. However, it can also include prototypes or wireframes for consideration as part of the redesign, depending on how fast the project is running and the capability of the team involved.