AI factchecking on X risks spreading misinformation, warns ex-minister
Former UK minister warns X's AI-powered factchecking could amplify conspiracy theories as the platform shifts from human-written to bot-drafted notes
- AI-powered moderation: Elon Musk's X platform announced it will use large language models to draft Community Notes, the factchecking feature whose notes were previously written entirely by humans. The company claims this "advances the state of the art in improving information quality on the internet."
- Human oversight promised: Keith Coleman, X's VP of Product, said AI-generated notes would be reviewed by humans and published only if users with diverse viewpoints found them useful. The company published a research paper, co-authored with academics from MIT, Harvard and other institutions, supporting the approach.
- Critics raise concerns: Former UK technology minister Damian Collins warned the move risked increasing the promotion of "lies and conspiracy theories," accusing X of "leaving it to bots to edit the news." He also raised the prospect of "industrial manipulation" on the 600-million-user platform.
- Industry trend: This follows similar moves by other tech giants. Google recently deprioritized human factchecks in search results, while Meta eliminated its human factcheckers in favor of community notes earlier this year.
- Effectiveness questioned: Research shows human-authored community notes are perceived as more trustworthy than simple misinformation flags. However, a Center for Countering Digital Hate study found that misleading election posts often lacked corrective notes despite amassing billions of views.
- Technical concerns raised: Experts warn AI systems struggle with nuance and context and are prone to "hallucinations": confidently presenting false information as fact. Andy Dudfield of Full Fact cautioned the change could "open the door" to AI-generated notes bypassing proper human review.
- Platform defends approach: X's research paper argues AI-drafted notes can be produced faster and with less effort while maintaining quality, because they still pass through crowd-sourced evaluation. The company maintains that trust comes from user voting, not from a note's authorship; a simplified sketch of that voting gate follows this list.
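To make the gating idea concrete: the paper's claim is that a note's credibility rests on agreement among raters with differing viewpoints, not on who (or what) drafted it. The sketch below is a deliberately simplified illustration of such a cross-viewpoint publication gate. It is not X's actual algorithm, which relies on a bridging-based ranking model; the function name, thresholds and cluster labels here are all hypothetical.

```python
from collections import defaultdict
from typing import Iterable

# Each rating pairs a rater's inferred viewpoint cluster with a
# helpful / not-helpful verdict on a draft note. Cluster labels and
# thresholds are illustrative assumptions, not X's implementation.
Rating = tuple[str, bool]  # (viewpoint_cluster, found_helpful)

def should_publish(ratings: Iterable[Rating],
                   min_raters_per_cluster: int = 5,
                   min_helpful_ratio: float = 0.6) -> bool:
    """Gate a note on cross-viewpoint agreement.

    The note is published only if raters in at least two distinct
    viewpoint clusters each found it helpful at a rate above the
    threshold, so it cannot pass on the strength of one
    like-minded group alone.
    """
    helpful = defaultdict(int)
    total = defaultdict(int)
    for cluster, found_helpful in ratings:
        total[cluster] += 1
        if found_helpful:
            helpful[cluster] += 1

    approving_clusters = [
        c for c in total
        if total[c] >= min_raters_per_cluster
        and helpful[c] / total[c] >= min_helpful_ratio
    ]
    return len(approving_clusters) >= 2

# Unanimous support from a single cluster is not enough to publish...
one_sided = [("cluster_a", True)] * 10 + [("cluster_b", False)] * 10
# ...but solid majorities across two clusters are.
broad = ([("cluster_a", True)] * 8 + [("cluster_a", False)] * 2
         + [("cluster_b", True)] * 7 + [("cluster_b", False)] * 3)

print(should_publish(one_sided))  # False
print(should_publish(broad))      # True
```

The design point the toy example captures is the one X's defence rests on: a note written by an AI and a note written by a human face the same bar, because publication depends on agreement across rater groups rather than on authorship.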
This development highlights growing tensions between tech platforms' automation efforts and concerns about maintaining information integrity in the AI era.