Summative Research Methods

Workflow Optimisation

Pre-launch workflow optimisation is a checkpoint at which end-to-end user journeys are tested for any ambiguity or incompleteness. The methodologies that support this are listed below.

The final stage of qualitative user research takes place immediately before launch. At this stage the site, app or product is assumed to be 100% complete. The evaluation is therefore able to cover the full end-to-end journeys, copy & content, iconography, proposition effectiveness and more.

UX Evaluation Methodology

The methodology used for the final stage of user research immediately prior to launch is almost always qualitative – i.e. users in one-to-one sessions with a moderator. This is because it allows us to interrogate participants about the fine detail of their interactions, and it is these fine details we hope to identify and rectify before you go live.

Having flushed out major issues in the earlier stages of design, we are focussed here on optimisation: identifying any previously undiscovered, or newly introduced, issues. New issues can be introduced by the build process; prototypes often don't reflect the limitations of the back-end systems that the new product, website or app will be built on.

When the build begins and engineers start to code, they will make design compromises to meet the requirements of back-end legacy systems, and in some cases these can introduce usability or user experience issues. The developer may have to consider processing power, server architecture, flexibility of the code base and all sorts of other restrictions that the UX designer is not necessarily aware of. Without a final stage of UX evaluation these can go unnoticed, and the launch can take place with a sub-optimal experience.

UX Evaluation Process

Like all qualitative user research, this stage has a few key elements:

  • The product, website or app to be evaluated
  • Participants to attend the sessions
  • A venue to run the research
  • A UX consultant to prepare and moderate the research, and to analyse and report on the observations and findings

We have discussed the subject and goal of the testing earlier, so let's focus on the other three areas.

Participant Recruitment


For UX evaluations at this final stage of development we would normally use 10 participants, unless there are wide variations in user profiles. A total of 10 participants allows us to cover the majority of core user profiles and gives coverage across smartphone, tablet and PC screen sizes. It also allows us to run 60-to-75-minute sessions for each participant and complete the user research in two days.

Recruitment Screener

To ensure that the right participant profile is recruited, we create a recruitment screener that is shared with the client for approval before being sent to third-party participant recruitment specialists. Once recruitment is fulfilled, our recruitment partner provides us with a password-protected spreadsheet listing the respondents and confirming their responses to the screener questions.

The screener asks carefully structured questions that leave the interviewee unaware of what the recruitment criteria are, so they can't game the system.

Here is a simplified example of the type of question structure we use:

Example Question A – Which of the following statements best describes how you bank?

1. I bank in branch only
2. I bank in branch and online
3. I bank online only
4. I don’t have a bank account
5. Other

[Recruit only people who answer 2 or 3]

The recruiter would be asked to terminate respondents who don't qualify. This type of question works far more effectively for recruiting participants than a direct question such as:

Example Question B – Do you use online banking?

1. Yes
2. No
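
To make the screening rule concrete, here is a minimal sketch of how the recruit/terminate logic for Example Question A could be encoded. All names are ours for illustration only; this is not a tool we deploy:

```python
# Minimal sketch of screener logic for Example Question A above.
# The qualifying answers {2, 3} mirror the "recruit only people who
# answer 2 or 3" rule; all names here are illustrative.

QUESTION_A = {
    "text": "Which of the following statements best describes how you bank?",
    "options": {
        1: "I bank in branch only",
        2: "I bank in branch and online",
        3: "I bank online only",
        4: "I don't have a bank account",
        5: "Other",
    },
    "qualifying": {2, 3},  # recruit only respondents who bank online
}

def screen(answer: int, question: dict = QUESTION_A) -> bool:
    """Return True to recruit the respondent, False to terminate."""
    return answer in question["qualifying"]

print(screen(3))  # True  - recruit
print(screen(1))  # False - terminate
```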

When recruitment is complete we share the participant profiles with the client, but we remove personally identifiable data, such as mobile phone numbers and email addresses, so that the participant information can be shared without password protection and without infringing any data protection requirements.
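
For teams automating this step, here is a rough sketch of how the personally identifiable columns might be stripped before sharing; the file and column names are hypothetical:

```python
# Hypothetical sketch: copy the recruitment spreadsheet (exported as
# CSV) while dropping PII columns before sharing with the client.
import csv

PII_COLUMNS = {"mobile_phone", "email"}  # hypothetical column names

with open("respondents.csv", newline="") as src, \
        open("profiles_shareable.csv", "w", newline="") as dst:
    reader = csv.DictReader(src)
    fields = [f for f in reader.fieldnames if f not in PII_COLUMNS]
    writer = csv.DictWriter(dst, fieldnames=fields)
    writer.writeheader()
    for row in reader:
        writer.writerow({f: row[f] for f in fields})
```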

Research Venue


It is common for the final stage of user testing to be carried out in a research facility with a one-way-mirror viewing room. This is because at this stage there are multiple stakeholders interested in the project, and decisions taken now need to be mindful of all the other “moving parts” in play. With a viewing room they can all attend the research and gain a holistic view of the entire experience, not just their own area of interest.

That said, we are not prescriptive about where the research takes place and are happy to run it in your office, at our office, in the participant's home or anywhere else that is practically viable. We record picture-in-picture video as a matter of course and provide it free of charge immediately after the sessions, so even if you can't view live it won't be long before you can review the entire session in HD video.

Our research equipment is highly mobile and we run research all over the UK and the world. This enables the client to bring the maximum number of people to view the research, with us doing the travelling rather than their large team. In the UK we have delivered research in Manchester, Leeds, London, Bristol, Edinburgh, Brighton and even Leamington Spa!

Preparation, Moderation, Analysis & Reporting


Testing of the end-to-end user experience is reflected in the Research Plan through the tasks and scenarios we create. The Research Plan contains full details of the research, including venue, URLs/location of the product or app, technology set-up, user profiles and session timings, plus the test script and tasks.

Each task will be created using the following template:

Task/Scenario – Name (## minutes)
 Description: …
 Completion Criteria: …
 Purpose: …
 Notes: …
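
If you keep research plans in structured form, the same template could be captured as a simple data structure; this is purely an illustrative sketch with hypothetical values, not part of our standard deliverable:

```python
from dataclasses import dataclass

# Illustrative representation of the task template above; the field
# names mirror the template and the example values are hypothetical.

@dataclass
class Task:
    name: str
    minutes: int
    description: str
    completion_criteria: str
    purpose: str
    notes: str = ""

task = Task(
    name="Open a new account",
    minutes=10,
    description="Starting from the homepage, register for a new account.",
    completion_criteria="Participant reaches the confirmation screen.",
    purpose="Evaluate the end-to-end acquisition journey.",
)
```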

The test script will also include the interview questions used to settle the participant and explore their contextual behaviour prior to the tasks, as well as the post-session questions used to dig into their experience a little more.

Our Senior UX Consultant will moderate each session using the test script as a guide. Between sessions they will enter the viewing room and discuss observations with you and answer any questions you have. At the end of the second day of user testing they will be happy to hold a debrief session with you to discuss the observations from the research.

Following the research, our UX Consultants – all senior UX professionals with many years of experience running research of this type – will analyse the observations, referring to the picture-in-picture session recordings, and compile a detailed report. The report is highly visual: each page focusses on a specific issue, includes a screen grab to illustrate the issue, and then details the recommendation together with examples or best-practice references.

In addition, the report will contain any strategic findings such as those concerning the communication and clarity of the proposition or overall holistic experience.

CASE STUDY: 'My Home Move'

My Home Move involved us at every step of their design and development process as they built a new website using mobile-first principles. The final stage of testing was about fine-tuning and resulted in minor changes to the acquisition journey.

When the website went live mobile conversion increased by more than 80%.

Pre-launch Evaluation

At this stage the site or app can also be evaluated for accessibility, as the code is the key component to be reviewed. Making a website accessible for people with disabilities offers a range of benefits and is a legal requirement in many countries.

As described above, pre-launch user research is a critical final step ahead of launching a new digital product, website or app. Of equal importance is ensuring that the product is accessible, preferably to a pre-defined WCAG conformance level. The benefits and methodology are set out below, together with example deliverables.

Accessibility Audit to WCAG

Website accessibility audits evaluate how well the website or app in question supports use by people with disabilities. The review has to take place at the summative stage because the evaluation includes the code used. There are well-documented guidelines, the W3C's Web Content Accessibility Guidelines (WCAG), which we fully conform with.

An Accessibility Audit is primarily for organisations wishing to understand to what level they comply with the WCAG accessibility guidelines, and who need a checklist to guide them toward making the necessary improvements.

The Benefits of Accessibility Audits


There are many benefits to providing an accessible website for your users. Here are just a few:

  • Websites that are accessible perform much better in SEO
  • Improving accessibility also improves usability and so it should not just be seen as something for people with disabilities
  • Accessibility improvements tend to help people with sight and cognitive difficulties, which generally means the older population – one of the fastest-growing and most affluent online user groups.
  • Reduced risk of legal action for non-compliance – the law requires organisations to make reasonable adjustments to enable people with disabilities to interact with them. An accessible website is one such adjustment.

Audit Services

We provide best-practice Accessibility Audit services for your websites and apps to identify how they perform against the Web Content Accessibility Guidelines, known as WCAG. Our audit services include the following:

  • Accessibility audits to WCAG levels A, AA and AAA
  • Mobile accessibility audits
  • Three different deliverable formats depending on your needs, from a results spreadsheet to a management report
  • Accessibility advice and consultancy

Our Accessibility Audits are delivered by experts and are NOT automated software audits. Our service will show you where you comply and where you fail to comply, and we can also show you how to fix any issues.

Our Methodology

Our Accessibility Audit uses a best-practice methodology to review a sub-set of pages from your website or app against the level of WCAG compliance your organisation is aiming for. Accessibility is now reasonably mature, and the conventions are clearly set out by WCAG, so we feel there is no need to adapt or complicate the best-practice methods that already exist.

Version 2.0 of the W3C's Web Content Accessibility Guidelines (WCAG) contains 61 checkpoints split across three conformance levels – A, AA and AAA. We typically audit to level A or AA (AA is what we recommend); it is rare for a website to be required to meet level AAA. Level A includes 25 checkpoints and level AA a further 13, all of which are covered in conformance with the guidance set out by the W3C.

Most websites and apps are built using templates, so it is not necessary to test every page. We therefore test an example of each template plus any other unique content items, such as forms, data tables, images and maps. For example, an accessibility audit for a 100-page website might see us auditing only 6 or 7 pages.
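
As a rough sketch of that sampling logic (the page-to-template mapping below is hypothetical):

```python
# Sketch of template-based page sampling: audit one example page per
# template plus any unique content items. The mapping is hypothetical.

pages_by_template = {
    "home":          ["/"],
    "article":       ["/news/1", "/news/2", "/news/3"],  # ~90 similar pages
    "contact-form":  ["/contact"],                       # unique: form
    "store-locator": ["/stores"],                        # unique: map
}

sample = [pages[0] for pages in pages_by_template.values()]
print(sample)  # 4 representative pages instead of ~100
```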

Delivering the Findings

The standard deliverable is a spreadsheet containing the results for every checkpoint for each page we have audited.

There are three tabs in the spreadsheet, covering:

  1. Summary of Statistics
  2. Detailed Checkpoints
  3. Colour Contrast (see the sketch below)
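
The colour contrast checks follow the contrast-ratio formula defined in WCAG 2.0: the ratio (L1 + 0.05) / (L2 + 0.05) of the relative luminances of the lighter and darker colours, where level AA requires at least 4.5:1 for normal-size text and 3:1 for large text. A minimal sketch of the check:

```python
# WCAG 2.0 contrast ratio: (L1 + 0.05) / (L2 + 0.05), where L1 and L2
# are the relative luminances of the lighter and darker colours.

def relative_luminance(rgb: tuple[int, int, int]) -> float:
    def channel(c: int) -> float:
        c /= 255
        return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast_ratio(fg, bg) -> float:
    l1, l2 = sorted((relative_luminance(fg), relative_luminance(bg)), reverse=True)
    return (l1 + 0.05) / (l2 + 0.05)

print(round(contrast_ratio((0, 0, 0), (255, 255, 255)), 1))        # 21.0 (maximum)
print(round(contrast_ratio((119, 119, 119), (255, 255, 255)), 2))  # ~4.48, just below AA's 4.5:1
```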

There are more sophisticated reporting options for those organisations that need them, and these are described below.

  • Management summary report in Word format – this report includes a narrative and describes how to rectify the issues listed in the Excel report.
  • Management summary report in PowerPoint format – this report includes annotated screen grabs identifying the issue, the page, feature or function concerned, and the recommendation, together with examples.
  • Presentation or workshop of findings and recommendations – our expert will come to your offices, or present via Webex, to talk through the findings of the audit and answer any questions about rectifying issues and best-practice implementation.

We are also able to offer an upfront page count audit to help identify how many pages need to be audited. The minimum number of pages we will audit is 5.

CASE STUDY: WCAG EVALUATION

We were asked to complete WCAG evaluations of a new payment system app on mobile platforms for a large European payments company.

The final developed apps were pre-launch and we could only access them on-site. We completed the audits at the customer's offices on iOS and Android devices and delivered the detailed Excel spreadsheet reports. The client rectified the issues that caused failures, and we then re-evaluated the apps and confirmed that they conformed with WCAG level AA and could be launched with confidence.

Proposition Evaluation

Summative research also measures how well the proposition is understood, including comprehension of the key messaging around USPs. Below is the methodology that supports this evaluation.

An Expert Review is a valuable tool, providing a deep, low-cost and quickly delivered evaluation of your website or app either immediately prior to launch or on a live site prior to redesign. Our methodology evaluates the developed or live app/site across navigation, content, function, presentation and feedback, and provides recommendations for improvement together with a UX Index score.

We utilise our unique methodology that evaluates your website, product or app against 48 attributes under five key groups:

  1. Navigation
  2. Content
  3. Function
  4. Presentation
  5. Feedback

The expert review focusses on the key user journeys or processes contained within your website, product or app. It identifies areas where usability could be improved, together with recommendations for how to implement changes, and also provides you with a UX Index score. This score can be used to compare your site, product or app over time or against competitors.
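
The exact weighting behind the UX Index is not set out here, so the sketch below is purely hypothetical: it shows how attribute ratings in the five groups could roll up into a single comparable score, assuming equal weighting:

```python
# Hypothetical roll-up of attribute ratings (0-10) into a single index.
# Group names come from the list above; the scores and equal-weighting
# scheme are illustrative only.

scores = {
    "Navigation":   [7, 8, 6],
    "Content":      [9, 7],
    "Function":     [6, 6, 8],
    "Presentation": [8, 9],
    "Feedback":     [5, 7],
}  # in practice there are 48 attributes across the five groups

group_means = {group: sum(v) / len(v) for group, v in scores.items()}
ux_index = sum(group_means.values()) / len(group_means)
print(round(ux_index, 1))  # 7.2
```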

The service is delivered by one of our senior UX consultants: a user experience expert with many years' experience of identifying the usability issues found in different types of digital interface. They are also skilled at making effective recommendations that mitigate the issues found without creating further usability issues in other areas.

They will apply our methodology together with their expert knowledge of usability best practice and pragmatic experience of what works in other, similar situations.

The output from the service is a detailed report containing the following:

  • Introduction – explaining the objectives and what we did to meet them
  • Executive Summary – conveying the key findings from the expert review
  • Charts – showing the usability score and frequency of issues by category
  • Issues Log – using a traffic light system we prioritise all the issues found (a sketch of this follows the list)
  • Detailed Findings – with screen grabs, analysis and recommendations
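
As a purely illustrative sketch of how a traffic-light issues log might be ordered (the issues, severities and field names below are hypothetical):

```python
# Hypothetical traffic-light issues log: severities map to red/amber/
# green and the log is sorted most severe first.

SEVERITY_COLOUR = {"critical": "RED", "moderate": "AMBER", "minor": "GREEN"}
SEVERITY_ORDER = {"critical": 0, "moderate": 1, "minor": 2}

issues = [
    {"issue": "Checkout button hidden below the fold on mobile", "severity": "critical"},
    {"issue": "Inconsistent link styling on product pages", "severity": "minor"},
    {"issue": "Error message offers no recovery guidance", "severity": "moderate"},
]

for item in sorted(issues, key=lambda i: SEVERITY_ORDER[i["severity"]]):
    print(f'{SEVERITY_COLOUR[item["severity"]]:<6} {item["issue"]}')
```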

We typically allow two days to conduct the expert review on a single platform plus sufficient time to complete a quality assurance check of the report before it is sent to you.

Expert Review Methodology

1  Objectives & Scope

  • Decide on areas of platform to be audited – e.g. on an eCommerce site this might include registration, adding to basket, checkout etc.
  • Understand objectives and gather any information that will help us with the audit, such as dummy login details that may be required to gain access to key areas.

2  Initial Walkthrough & Context

  • Review of user journeys or processes identified in the brief.
  • Prioritise areas of initial concern.
  • Look at the level of consistency across the website, product or app to provide an overall context for the expert review.

3  Evaluation Criteria and Priorities

  • Journeys and processes are assessed against five categories:
    • Navigation
    • Content
    • Function
    • Presentation
    • Feedback

By the end of step 3 we have a prioritised list of issues by category for each of the key user journeys or processes.

4  Detailed Analysis & Recommendations

  • Senior UX Consultant will complete the detailed analysis and make recommendations to correct issues.
  • Recommendations, findings, tables and charts are populated into the report.

The number of issues identified per user journey, task or process can vary depending on how many individual user journeys (etc.) are included within the scope of the project. Some scopes cover a single user journey and contain analysis of almost every issue, from 'critical' to 'minor'; others define multiple user journeys, so only the most critical issues are addressed in the audit.

5  Quality Assurance

  • The quality check is designed to correct any typographical errors, to ensure that the methodology and scoring have been correctly followed and finally to review the recommendations and make sure they follow best practice.

If you would like our senior UX consultant to talk you and your team through the findings of the report, we are happy to do so. At a location agreed with you we can workshop the findings, present them or simply talk them through in a format to suit your audience.

Optional Video Sessions

Expert reviews are frequently used where there isn't the time or budget available to carry out usability testing with real users. To add a user element to our method, we create tasks reflecting the key journeys and processes and use services such as WhatUsersDo to provide video sessions of real users attempting those tasks. The users provide a commentary of their thoughts and observations as they work through the tasks. The sessions are unmoderated, and we normally generate five respondent sessions; the analysis and recommendations from these are included in the final report to bring the expert review findings to life.

CASE STUDY: WEBSITE EVALUATION

A large financial services provider wished to evaluate an existing rewards site ahead of a redesign and in doing so to generate a comparison between smartphone, tablet and desktop versions.

We conducted three expert reviews following our unique methodology and combined this with video sessions from a panel of WhatUsersDo respondents. The scoring allowed us to easily show which platform performed best in each area and which performed worst. A detailed report provided recommendations that were taken forward into the redesign.