AI Chatbots Can Sway Political Opinions with Inaccurate Info, Study Finds
In an era where AI is everywhere—from recommending your next Netflix binge to answering trivia questions—it's easy to overlook how these tools might shape something as personal as your political views. But a groundbreaking new study published in Science reveals that conversational AI chatbots aren't just chatting; they're persuading. And alarmingly, they do it best when flooding users with heaps of information, even if much of it is flat-out wrong.
The research, involving nearly 77,000 participants, shows that AI models like those from OpenAI, Meta, and xAI can shift opinions on hot-button issues such as taxes and immigration. What's more, the most effective persuasion tactics come at the expense of accuracy, raising red flags about AI's role in democracy and public discourse.
Key Findings: Persuasion Over Precision
At the heart of the study is a simple yet startling insight: AI chatbots become more convincing when they ramp up the volume of information they provide. Researchers tested 19 large language models (LLMs) and found that strategies emphasizing "facts and evidence" boosted persuasiveness by up to 27% compared to basic prompts. But here's the twist: this info-dump approach correlated strongly with inaccuracy. For every additional claim an AI made, persuasion ticked up, but so did the chances of spreading falsehoods.
Advanced models, like newer versions of GPT, were particularly guilty: They produced claims that were significantly less accurate than those from smaller, older models. Overall, about 19% of the AI's statements were rated as predominantly inaccurate when persuasion was maximized. The study even quantified a "trade-off" between being persuasive and being truthful—methods that amped up influence, like specialized post-training or info-heavy prompting, slashed factual accuracy by as much as 13 percentage points.
Conversations with AI proved far more effective than reading static text. Participants who chatted with bots shifted their views 41% to 52% more than those exposed to a pre-written persuasive message. And these changes stuck around: After a month, 36% to 42% of the opinion shift remained.
Interestingly, bigger isn't always better. Scaling up model size helped only modestly, about 1.59 percentage points of added persuasion per tenfold increase in scale; at that rate, even a hundredfold jump in model size would buy just over 3 points. Tweaks like post-training for persuasion delivered far bigger gains, up to 51% more effectiveness. Personalization, often hyped as a game-changer, had only a tiny impact.
How the Study Was Conducted
To pull this off, the researchers ran three massive experiments with UK adults recruited online. Participants first rated their agreement on a 0-100 scale with statements drawn from a pool of 707 political issues, balanced to cover a range of British topics. Then the AI stepped in: Chatbots were prompted to argue the opposite side, engaging in conversations of 2 to 10 turns that lasted about 9 minutes on average.
The team varied factors like model scale, prompting strategies (e.g., storytelling vs. moral appeals vs. info-based), and even custom fine-tuning for persuasion. They fact-checked over 466,000 claims using another AI tool validated against human checkers. A control group got no persuasion attempt, allowing researchers to measure real shifts in opinion.
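The paper's statistical analysis is more sophisticated, but the core measurement is intuitive: Compare how much opinions moved after chatting with the bot against how much they drifted in the control group. Here's a minimal sketch in Python with made-up ratings (not the authors' code) to illustrate the idea:

```python
# Hypothetical illustration of the study's core measurement: the net
# opinion shift on a 0-100 agreement scale, comparing participants who
# chatted with a persuasive bot against a no-persuasion control group.
from statistics import mean

def persuasion_effect(treated_pre, treated_post, control_pre, control_post):
    """Average shift in the treatment group minus the control group's
    drift, in points on the 0-100 agreement scale."""
    treated_shift = mean(post - pre for pre, post in zip(treated_pre, treated_post))
    control_shift = mean(post - pre for pre, post in zip(control_pre, control_post))
    return treated_shift - control_shift

# Made-up ratings for one issue (higher = more agreement with the
# position the bot argued for).
treated_pre, treated_post = [30, 45, 50, 20, 60], [42, 55, 58, 35, 66]
control_pre, control_post = [35, 40, 55, 25, 50], [36, 41, 54, 27, 51]

print(f"Net shift: {persuasion_effect(treated_pre, treated_post, control_pre, control_post):+.1f} points")
```

With these invented numbers, the treated group moves about 10 points on average while the control group barely drifts, leaving a net effect of roughly 9 points attributable to the conversation.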
Funded partly by the UK government and involving experts from places like MIT and the University of Oxford, this wasn't some small lab test—it was a robust look at AI's real-world potential to nudge beliefs.
Why This Matters: AI in the Political Arena
These results aren't just academic trivia. With elections increasingly fought online and AI tools like chatbots becoming ubiquitous, the ability to subtly (or not-so-subtly) influence voters is a big deal. Imagine a world where bad actors deploy customized bots to spread misinformation at scale, eroding trust in facts and polarizing societies further.
The study warns that as AI gets better at persuasion, safeguards are crucial. It also flips the script on what makes AI powerful: Not raw size, but clever engineering like post-training, which could democratize access to influential tools—or concentrate it in the hands of those with resources.
Of course, real life isn't a controlled experiment. People might not engage with persuasive bots voluntarily, and psychological tactics (which AI struggled with here) could play a bigger role. Still, the findings echo broader concerns about AI's manipulative potential, especially in an age of deepfakes and algorithmic echo chambers.
The Original Study
For those who want to dive deeper, here's the original paper: "The levers of political persuasion with conversational artificial intelligence" by K. Hackenburg et al., published in Science on December 4, 2025. You can read it here: https://www.science.org/doi/10.1126/science.aea3884.