CAN CHATGPT TAKE SURVEYS?

By Chris Mawn, Director of Clients ANZ at PureSpectrum

Image by mohamed-nohassi (Unsplash)

Our industry is abuzz with the opportunities and threats from generative AI – whether it’s writing questionnaires, coding open-ended responses, or creating visually stunning and engaging presentations. But can generative AI chatbots be used to take an online survey? 

It’s a fair question to ask and one that has certainly been trending in all my recent client visits and meetings. The terms ‘generative AI chatbots’ and ‘survey bots’ are usually somewhat conflated in discussion, so to be clear, survey bots are scripts that can automatically submit forms/surveys from pre-programmed routines at a high rate, while bypassing security measures. For the purposes of this article, we’ll be focusing on generative AI chatbots, specifically ChatGPT.

First, we need to understand what ChatGPT is capable of. Compared to other chatbots you may have experienced on various websites (usually when desperately trying to find human help!), ChatGPT’s accuracy and conversational capability are far more advanced, but ultimately the system still operates on input from a user (whether through a UI or an API).

ChatGPT is a lot like Google, except that instead of presenting every available ‘answer’ to a search query, it generates a single best answer to a question or comment in a humanistic, conversational manner, all based on the natural language prompts you feed it.

OK, but can ChatGPT take surveys?

We actually asked ChatGPT this question and here is how it responded:

And it’s correct – ChatGPT cannot take a survey itself, as it cannot mimic the behaviour of a human navigating a survey and typing out answers. Furthermore, thorough research into GPT-4 found no significant evidence that ChatGPT is ‘entering’ surveys. It is also important to note that ChatGPT’s knowledge is limited to events that occurred before September 2021.

However, there are some real, inherent risks and concerns associated with open-ended questions. Since AI language models can quite easily generate coherent and contextually relevant text, ChatGPT may be used to create seemingly genuine responses to these questions.

So how do we combat this?

As ChatGPT grows in popularity, the ‘anti-venom’ of AI content detectors is keeping pace, including GPTZero, Sapling, and Writefull.

But there is no one solution to preventing the use of generative AI chatbots in surveys, and so within PureSpectrum’s Marketplace platform we are able to do the following:

  • Utilise PureScore™, our comprehensive advanced machine-learning quality scoring system. This algorithm already deploys a data quality screener for various question types within our Data Quality for All (DQ4All) workflow. The data quality team at PureSpectrum is actively working to augment this system with open-ended questions that ask for feelings and emotions from the respondent about something that occurred after September 2021. Forcing the respondent to produce an answer about something that ChatGPT doesn’t have knowledge of provides a way to assess accuracy and set a baseline for that user and what their real, non-AI-produced response looks like.
  • Actively detect and flag survey sessions where the respondent switches away from the active PureScore™ pre-screener tab to a secondary browser tab and back again.
  • Detect copy-and-paste behaviour on open-ended questions and terminate the respondent from the survey. This can only be observed in the PureSpectrum pre-screener, so clients will still need to protect their surveys too – a belt-and-braces approach. 
  • Measure time spent on each question within the PureScore™ pre-screener (all clients are encouraged to do the same within their surveys). Unblocked copy-and-paste usually manifests as implausibly fast question completion times; with copy-and-paste detected and blocked, a respondent using ChatGPT must read its answer and retype it into the survey themselves (assuming they eventually realise they are terminated whenever they paste). This will likely manifest as longer time spent compared to human-only baseline times.
  • Use semantic analysis within PureText™ (PureSpectrum’s in-house natural language processing model for evaluating the quality and contextuality of open ends) to analyse language patterns. AI-generated text may exhibit certain linguistic patterns or produce text that is too polished, overly formal, or verbose. Analysing response patterns for unusual phrasing or excessive use of certain words will flag potentially AI-generated content.
  • Check for consistency and contextuality both at point-in-time and longitudinally. We can look for inconsistencies within the response(s) or discrepancies with prior responses provided by the respondent. The AI-generated text might be coherent but may not always align with the context of the survey or previous answers given by the respondent.
  • Look for unrelated content, as AI-generated text may sometimes veer off topic or include irrelevant details. Checking for responses that seem tangential or unrelated to the question being asked will help flag text generated by an AI. 
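Several of the checks above (paste detection, tab switching, completion timing, and overly formal phrasing) can be combined into a simple layered scoring pass. The sketch below is purely illustrative: the field names, thresholds, and marker-word list are assumptions for the example, not PureSpectrum’s actual PureScore™ or PureText™ logic.

```python
# Illustrative sketch of layered open-end quality heuristics.
# All names and thresholds are hypothetical, not a real product's logic.
from dataclasses import dataclass

@dataclass
class OpenEndResponse:
    text: str
    seconds_on_question: float
    pasted: bool         # client-side paste event was observed
    tab_switches: int    # browser visibility changes during the question

# Phrases that often mark overly polished, formal AI output (illustrative list).
FORMAL_MARKERS = {"furthermore", "moreover", "additionally",
                  "in conclusion", "it is important to note"}

def flag_response(r: OpenEndResponse,
                  min_seconds: float = 5.0,
                  max_formal_hits: int = 2) -> list[str]:
    """Return a list of quality flags; an empty list means no red flags."""
    flags = []
    if r.pasted:
        flags.append("copy_paste")          # grounds for termination
    if r.tab_switches > 0:
        flags.append("tab_switching")
    # A long answer produced implausibly quickly suggests pasted/automated text.
    if r.seconds_on_question < min_seconds and len(r.text.split()) > 30:
        flags.append("too_fast_for_length")
    lowered = r.text.lower()
    if sum(1 for m in FORMAL_MARKERS if m in lowered) >= max_formal_hits:
        flags.append("overly_formal")       # possible AI-style phrasing
    return flags
```

In practice, flags like these would feed a combined score rather than trigger instant termination on their own, since any single signal (e.g. a formal writing style) can occur in genuine human responses.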

The rapid and constant advancement of AI-generated content presents a challenge for detecting fraudulent responses in surveys, but by adopting a multi-faceted approach that combines various detection strategies, the chances of identifying AI-generated responses can be massively improved. While technologies like ChatGPT may complicate the evaluation of response quality and context, shifting the focus towards recognising symptoms and impacts associated with AI-generated content offers a promising path in the battle against fraudsters.

As AI continues to evolve, PureSpectrum’s ongoing research and innovation in detection methods will be crucial for maintaining the integrity and reliability of survey data, and PureSpectrum vows to be at the forefront of this mission.

This article was first published in the Q2 2023 edition of Asia Research Media
