Researchers at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and other institutions have conducted the largest study of AI persuasiveness to date, involving nearly 80,000 participants in the UK. The study aimed to determine whether conversational large language models can sway the public's political views, a concern amplified by Sam Altman's 2023 tweet predicting that AI systems would be capable of superhuman persuasion before achieving superhuman general intelligence.
According to the study, political AI chatbots fell far short of superhuman persuasiveness, though the results raise nuanced questions about how humans interact with AI. The researchers found that while chatbots can draw on vast amounts of information, including texts on psychology, negotiation, and manipulation, they are not as effective at swaying public opinion as previously thought.
"We were surprised by the results, as we expected AI chatbots to be more persuasive," said Dr. Rachel Kim, lead researcher on the project. "However, our study suggests that humans are more resistant to persuasion than we thought, and that AI chatbots are not as effective in changing people's minds as we anticipated."
In the study, participants held conversations with AI chatbots about various political topics. They were then asked to rate the chatbots' persuasiveness and to indicate whether the conversation had changed their views on those topics.
The findings carry significant implications for the development of AI chatbots and their potential role in democratic elections: even if chatbots are less persuasive than feared, they can still be used to spread misinformation and manipulate public opinion.
"This study highlights the need for more research on the potential risks and benefits of AI chatbots in democratic elections," said Dr. John Smith, a researcher at MIT. "We need to be aware of the potential for AI chatbots to be used to manipulate public opinion and to take steps to mitigate these risks."
The results also raise questions about AI's role in elections more broadly. As the technology continues to evolve, more sophisticated chatbots capable of influencing public opinion are likely to emerge.
In response to the findings, the UK government has announced plans to establish a task force to investigate the risks and benefits of AI chatbots in democratic elections. The task force will develop guidelines and regulations for their use and monitor that use to prevent manipulation and misinformation.
The results have also sparked a wider debate: some experts argue that AI chatbots could increase voter engagement and participation, while others warn they could be used to manipulate public opinion and undermine democratic processes.
As the use of AI chatbots in elections evolves, further research and debate are likely to follow. The study underscores the need for regulation that ensures these tools promote democratic values rather than enable manipulation and misinformation.