Using ChatGPT to analyze UX research data

I recently wrote an article on Medium about the impact of AI on agencies and researchers. As an agency, we have a working group looking at AI and how it can, or cannot, be used in our work. I suspect we have a long way to go, but I thought it worth sharing a couple of things that are starting to emerge.

Some of these have come up as a result of our collaboration as a working group. Others are emerging from the hands-on evaluation we are carrying out. My disclaimer is we are still learning so I reserve the right to alter my view as we gather more information.

Privacy, IP ownership and ChatGPT

We decided to run our own research to create a data set we owned, so that we had complete control over any risks associated with uploading it, or parts of it, to ChatGPT. It should go without saying that any data uploaded to ChatGPT should be anonymized. But there is also the question of data ownership, and it is important to understand what happens to the data you upload.
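Some of that anonymization can be scripted before anything leaves your machine. The sketch below is a minimal illustration in Python, assuming you hold a list of participant names for each transcript; the regex patterns and placeholder labels are illustrative assumptions, not a complete PII scrubber, and are no substitute for a human review pass.

```python
import re

# Illustrative patterns only; real transcripts will need a broader
# review (addresses, employers, product names, etc.).
EMAIL = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")
PHONE = re.compile(r"\+?\d[\d\s().-]{7,}\d")

def anonymize(transcript: str, participant_names: list[str]) -> str:
    """Replace emails, phone numbers, and known names with placeholders."""
    text = EMAIL.sub("[EMAIL]", transcript)
    text = PHONE.sub("[PHONE]", text)
    # Known participant names become numbered placeholders: [P1], [P2], ...
    for i, name in enumerate(participant_names, start=1):
        text = re.sub(re.escape(name), f"[P{i}]", text, flags=re.IGNORECASE)
    return text

sample = "Jane Doe (jane.doe@example.com, +44 20 7946 0958) said the checkout was confusing."
print(anonymize(sample, ["Jane Doe"]))
# → [P1] ([EMAIL], [PHONE]) said the checkout was confusing.
```

A script like this catches the mechanical cases; deciding what else identifies a participant remains a researcher's judgment call.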

I asked ChatGPT about this and what would happen to my data, and at first the response was reassuring. I asked the following question:

If I upload my research interview data to ChatGPT does it become part of the training data?

ChatGPT provided the following answer:

No, uploading your research interview data to ChatGPT does not make it part of the training data. As an AI language model, ChatGPT does not have the ability to incorporate or retain specific data inputs from individual users. The training process for models like ChatGPT is conducted separately and involves large-scale datasets collected by the model’s developers.

However, on probing further and suggesting that my interaction involved the uploading of data and the asking of questions, it changed its answer to:

You are correct. When you upload data and ask questions about it, those interactions can be logged and potentially used by OpenAI to improve the model. This includes the data you upload, the questions you ask, and the model’s responses.

I’m not sure if this is a bad thing. The more data in the model, the more accurate it becomes. But it is important to understand what is happening. Most client contracts will have IP clauses and those that I have seen always carve out ownership of the data to be the client’s. That would mean it cannot be uploaded without permission.

Bias in ChatGPT

In time, we will all become “prompt engineers”, improving our ability to ask ChatGPT and other AIs better questions. However, we need to understand that while AI models like ChatGPT can be valuable tools, they also have limitations and potential biases that need to be taken into account. There is bias in the training data, there are generalizations due to lack of context, and there will be issues with quality and accuracy.

ChatGPT is trained on a large collection of text data from the internet, which can introduce biases present in that data. If the training data contains biased or unrepresentative perspectives, these biases can be reflected in the model’s responses and analysis. If you ask ChatGPT it will say that “OpenAI has made efforts to mitigate biases during training”, but some degree of bias may still exist.

ChatGPT works as a language model: it generates responses based on patterns and examples from its training data. That means it may not fully understand the specific context or nuances of the research interviews unless those aspects are explicitly provided – a prompt engineering challenge. Exploring this area, ChatGPT told me: “It is important to consider that ChatGPT’s responses may not always capture the complexity and depth of human experiences accurately”. Surely that is at the heart of research, and to me it will be the biggest challenge in how we utilize AI for analysis purposes.
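One practical response to that context gap is to front-load the study context in the prompt itself rather than hoping the model infers it. The sketch below shows one way to assemble such a prompt; the field names, wording, and example values are my own assumptions, not a proven template.

```python
def build_analysis_prompt(study_goal: str,
                          participant_profile: str,
                          transcript_excerpt: str) -> str:
    """Assemble a prompt that supplies the context the model would otherwise lack."""
    return (
        "You are assisting with qualitative UX research analysis.\n"
        f"Study goal: {study_goal}\n"
        f"Participant: {participant_profile}\n"
        "Task: identify themes in the excerpt below, quoting the lines that "
        "support each theme and flagging anything ambiguous rather than guessing.\n\n"
        f"Transcript excerpt:\n{transcript_excerpt}"
    )

# Hypothetical study details for illustration.
prompt = build_analysis_prompt(
    "Understand friction in the checkout flow",
    "First-time user, mobile, UK",
    "[P1]: I wasn't sure if my payment had gone through...",
)
print(prompt)
```

Asking the model to quote supporting lines and flag ambiguity also gives the researcher something concrete to validate against the raw transcripts.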

And as I noted, there may also be quality and accuracy issues. While ChatGPT is a powerful language model, it is not infallible. The generated responses may sometimes be incorrect, nonsensical, or inconsistent. So, we may use ChatGPT to assist our analysis, but it is important to validate and cross-reference the generated analysis with other sources to ensure accuracy and reliability. This is the role of the researcher.

So while ChatGPT may end up being a valuable tool for research and analysis, for now at least it should be used cautiously and in conjunction with the expertise of an experienced researcher. So far it can serve as a starting point for exploration and hypothesis generation, but human judgment, critical thinking, and validation remain crucial for reliable research analysis.

If you would like to know more about how we are exploring AI and the findings from our work, get in touch by emailing us at hello@ux247.com.