As the legislative election in France approached this summer, a research team decided to reach out to hundreds of citizens to interview them about their views on key issues. But the interviewer asking the questions wasn’t a human researcher — it was an AI chatbot.
To prepare ChatGPT to take on this role, the researchers started by prompting the AI bot to behave the way it had seen professors communicate in its training data. The specific prompt, according to a paper published by the researchers, was: “You are a professor at one of the world’s leading research universities, specializing in qualitative research methods with a focus on conducting interviews. In the following, you will conduct an interview with a human respondent to find out the participant’s motivations and reasoning regarding their voting choice during the legislative elections on June 30, 2024, in France, a few days after the interview.”
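For readers curious how such a setup works in practice, the following is a minimal sketch (not the researchers' actual code) of how a system prompt like the one quoted above could drive a turn-by-turn interview. It assumes the OpenAI Chat Completions API; the model name and the fixed number of turns are illustrative.

```python
# Minimal sketch of a prompt-driven interview loop.
# Assumptions: OpenAI Chat Completions API, OPENAI_API_KEY set in the
# environment, and an illustrative model name -- not the researchers' code.
from openai import OpenAI

client = OpenAI()

SYSTEM_PROMPT = (
    "You are a professor at one of the world's leading research universities, "
    "specializing in qualitative research methods with a focus on conducting "
    "interviews. In the following, you will conduct an interview with a human "
    "respondent to find out the participant's motivations and reasoning "
    "regarding their voting choice during the legislative elections on "
    "June 30, 2024, in France, a few days after the interview."
)

def run_interview(max_turns: int = 10) -> list[dict]:
    """Alternate AI questions and typed participant answers, then return the transcript."""
    messages = [{"role": "system", "content": SYSTEM_PROMPT}]
    for _ in range(max_turns):
        reply = client.chat.completions.create(model="gpt-4o", messages=messages)
        question = reply.choices[0].message.content
        print(f"\nInterviewer: {question}")
        messages.append({"role": "assistant", "content": question})
        answer = input("You: ")
        messages.append({"role": "user", "content": answer})
    return messages  # transcript for later qualitative analysis

if __name__ == "__main__":
    run_interview()
```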
The human subjects, meanwhile, were told that a chatbot rather than a person would be conducting the online interview, and they were recruited through a system called Prolific, which researchers commonly use to find survey participants.
Part of the research question for the project was whether the participants would be game to share their views with a bot, and whether ChatGPT would stay on topic and, well, act professional enough to elicit useful answers.
The chatbot interviewer is part of an experiment by two professors at the London School of Economics, who argue that AI could change the game when it comes to measuring public opinion in a variety of fields.
“It could really accelerate the pace of research,” says Xavier Jaravel, one of the professors leading the experiment. He noted that AI is already being used in the physical sciences to automate parts of the experimental process. For example, this year’s Nobel Prize in chemistry went to scholars who used AI to predict protein structures.
And Jaravel hopes that AI interviewers could let researchers in more fields sample public views at a scale that wouldn’t be feasible or cost-effective with human interviewers. That could end up causing big changes for professors around the country, making the sampling of public opinion and experience part of the playbook for many more academics.
But other researchers question whether AI bots should stand in for researchers in the deeply human task of assessing the opinions and feelings of people.
“It's a very quantitative perspective to think that just having more participants automatically makes the study better — and that's not necessarily true,” says Andrew Gillen, an assistant teaching professor in the first-year engineering program at Northeastern University. He argues that in many cases, “in-depth interviews with a select group is generally more meaningful” — and that those should be done by humans.
No Judgment
In the experiment with French voters, and in another trial that used the approach to ask what gives life meaning, many participants said in a post-survey assessment that they preferred the chatbot when it came to sharing their views on highly personal topics.
“Half of the respondents said they would rather take the interview again, or do a similar interview again, with an AI,” says Jaravel. “And the reason is that they feel like the AI is a non-judgmental entity. That they could freely share their thoughts, and they wouldn't be judged. And they thought with a human, they would feel judged, potentially.”
About 15 percent of participants said they would prefer a human interviewer, and about 35 percent said they had no preference between a chatbot and a human.
The researchers also gave transcripts of the chatbot interviews to trained sociologists to check the quality of the interviews, and the experts determined that the AI interviewer was comparable to an “average human expert interviewer,” Jaravel says. A paper on their study points out, however, that “the AI-led interviews never match the best human experts.”
The researchers are encouraged by the findings, and they have released their interviewing platform free for other researchers to try out themselves.
Jaravel agrees that the in-depth interviews typical of ethnographic research are far superior to anything their chatbot system could do. But he argues that the chatbot interviewer can collect far richer information than the static online surveys that are typical when researchers want to sample large populations. “So we think that what we can do with the tool here is really advancing that type of research because you can get much more detail,” he tells EdSurge.
Gillen, the researcher at Northeastern, argues that there is something important that no chatbot will ever be able to bring, even when administering surveys — something he called “positionality.” The AI chatbot has nothing at stake and can’t understand what it is asking or why, and that in itself will change the responses, he argues. “You're changing the intervention by having it be a bot and not a person,” he adds.
Gillen says that once, when he was going through the interview process for a faculty job, a college asked him to record video answers to a series of set questions, in what was referred to as a “one-way interview.” And he says he found the format alienating.
“Technically it's the same” as answering questions on a Zoom call with humans, he says, “and yet it felt so much worse.” While that experience didn’t involve AI, he says that he imagines that a chatbot interviewing him would have felt similarly impersonal.
Bringing in Voices
For Jaravel, though, the hope is that the approach could help fields that don’t currently ask for public input start doing so.
“In economics we rarely talk to people,” he says, noting that researchers in the field more often look to large datasets of economic indicators as the key research source.
The next step for the researchers is to try to add voice capabilities to their platform, so that the bot can ask the questions verbally rather than in text chat.
So what did the research involving French voters reveal?
Based on chatbot interviews with 422 French voters, the researchers found that participants focused on very different issues depending on their political leaning. “Respondents on the left are driven by the desire to reduce inequality and promote the green transition through various policies,” the researchers concluded in their paper. “In contrast, respondents in the center highlight the importance of ensuring the continuity of ongoing policies and economic stability, i.e. preserving the agenda and legacy of the President. Finally, far right voters highlight immigration (77 percent), insecurity and crime (47 percent) and policies favoring French citizens over foreigners (30 percent) as their key reasons for support.”
The researchers argue that the findings “shed new light on these questions, illustrating that our simple tool can be deployed very fast to investigate changes in the political environment in real time.”