WHY CHATBOTS COULD INFLUENCE THE NEXT ELECTION
- Melissa Fleur Afshar

- Jan 31
- 4 min read
Newsweek Exclusive Feature
AI chatbots are subtly shaping voter opinions—new research shows they could sway future elections worldwide.
The rise of generative artificial intelligence in political messaging is creating new concerns about the future of elections, as research reveals that AI-powered chatbots are capable of subtly, and sometimes significantly, shifting voter attitudes.
Conversations with AI chatbots during live election periods in the U.S., Canada and Poland led to measurable changes in political preferences among voters, according to new research published in Nature.
The findings arrive amid growing concern from both researchers and policymakers that the persuasive capabilities of AI could be used to influence or even manipulate outcomes in national elections without public awareness.
In a series of experiments carried out by Cornell University professor David Rand and colleagues, AI chatbots were programmed to advocate for specific political candidates in the 2024 U.S. presidential election and the 2025 national elections in Canada and Poland. The researchers found that chatbot interactions could reinforce existing support—but more notably, they were often able to persuade undecided and even opposing voters.
For the U.S. study, 2,306 participants indicated their preferred candidate between Donald Trump and Kamala Harris. Each person was randomly assigned a chatbot advocating for either candidate. Even brief conversations led to statistically significant shifts in candidate support.
Similar results were observed in Canada, where the AI supported either Liberal Party leader Mark Carney or Conservative leader Pierre Poilievre, and in Poland, where it backed either centrist-liberal Rafał Trzaskowski or right-wing Law and Justice party's candidate Karol Nawrocki.
Conversations focused on facts and policy were more effective than those centered on personality. However, AI chatbots promoting right-wing candidates were consistently more prone to factual inaccuracies, a pattern that held across all three countries.
In another experiment, voters in Massachusetts were exposed to chatbot discussions regarding a ballot measure to legalize psychedelic drugs. With participants randomly assigned pro- or anti-legalization bots based on their initial views, the researchers again found persuasive effects—especially among those with moderate, undecided or neutral views on the issue.
While most chatbot statements were rated highly for factual accuracy, discrepancies remained. For example, pro-Trump chatbots inaccurately attributed job growth solely to his administration, overlooking economic continuity from the Obama era.
Jason Ross Arnold, a professor of political science at Virginia Commonwealth University and an expert in AI governance and ethics, told Newsweek that the ability of chatbots to simplify political information while shaping perceptions poses a new kind of risk to democratic engagement.
“AI chatbots have become powerful intermediaries between voters and political information,” Arnold said. “They can simplify complex issues and broaden access to civic knowledge, yet they also carry hidden risks that can distort opinions without users realizing it.”
The vulnerabilities are layered. Arnold identified five core dangers of political chatbot use. The first is misgrounding, where AI models pair citations from real sources with claims those sources don’t actually support, making misinformation appear credible.
Second is sycophancy, where AI chatbots tend to reinforce users’ existing biases rather than challenge them, creating echo chambers that amplify certainty in one’s beliefs.
Third is subtle political bias: framing and emphasis can shift opinions without detection. Users may wrongly assume chatbots are neutral and free of pre-existing attitudes, though research shows even minor asymmetries in language can influence decisions.
Fourth, AI systems are often optimized for rhetorical effectiveness over factual precision, leading to what Arnold calls "persuasiveness over accuracy."
Lastly, over-reliance on AI for summarizing political choices risks what Arnold describes as cognitive offloading: an erosion of critical thinking that undermines informed democratic participation and the freedom to choose our leaders and hold them to account.
Still, the same tools offer potential benefits if properly designed and regulated, Arnold explained. AI chatbots can break down the complex legal language of ballot measures, reducing voter "roll-off" (the tendency to skip down-ballot races) and encouraging a broader range of voters to weigh in on down-ballot initiatives.
They can also fill information gaps in under-reported local elections, providing consolidated, personalized data for voters.
In multilingual democracies, AI chatbots can offer real-time conversational translation, support linguistic inclusion and let voters ask clarifying questions in their preferred language. If well-guarded against manipulation, they could also tailor explanations to a voter's knowledge level without steering them toward a particular position, overcomplicating the issue or oversimplifying it.
But the tradeoff remains.
"These AI models are becoming more convincing faster than they are becoming accurate," Arnold said.
The study showed how outsourcing political beliefs to a bot can erode voters' freedom of choice in a democratic election. During a live election cycle, that gap can turn a chatbot into a de facto political agent—fluent, confident and capable of influencing voters without full transparency.
"The growing influence of AI chatbots makes it essential to understand both the benefits and the vulnerabilities they introduce," Arnold summed up.