Artificial intelligence is no longer a distant risk — it is actively reshaping the political information landscape. A new report from Kroll, the global risk intelligence firm, documents how generative AI tools are accelerating the production and spread of political disinformation, while simultaneously making it harder to hold bad actors accountable.
Central to the report's findings is the strategic benefit bad actors gain from widespread scepticism about authentic content. When deepfakes and AI-generated disinformation proliferate, genuine evidence — real recordings, real statements — can be dismissed as fabricated. Accountability erodes not because the truth is hidden, but because it can no longer be trusted.
How GenAI is Reshaping Political Disinformation
The Kroll report identifies generative AI as a force multiplier for disinformation campaigns. What previously required significant technical expertise — fabricating a convincing audio clip, producing a realistic video of a public figure, generating thousands of plausible social media posts — can now be accomplished by a single operator with a consumer-grade laptop and an internet connection.
The scale and speed enabled by AI mean that disinformation can saturate a news cycle before fact-checkers have time to respond. Worse, the visual and audio quality of AI-generated content has improved dramatically over the past two years, making detection increasingly difficult for both platform moderation systems and ordinary users.
The Liar’s Dividend: A Weapon Against Accountability
Perhaps the most insidious finding in the Kroll report is not about fabrication — it is about doubt. Even when genuine recordings of politicians, executives, or public officials exist, deepfake proliferation gives bad actors a new line of defence: simply claim the authentic content is AI-generated.
This strategy — the liar’s dividend — exploits the public’s growing uncertainty about what is real. A leaked recording of genuine wrongdoing can be dismissed as a deepfake. A verified video of a policy reversal can be labelled synthetic. The result is an asymmetric environment where disinformation benefits not only from spreading falsehoods, but from undermining trust in truth itself.
The greatest risk is not that AI will create convincing fakes — it is that it will make the real look fake. Once the public can no longer distinguish between the two, accountability collapses.
— Kroll Risk Intelligence Report, 2024
Examples of GenAI Disinformation in Elections
The report catalogues a series of incidents from the 2023–2024 election cycles in which AI-generated content played a material role in shaping public discourse:
Fabricated audio
AI-cloned voice recordings of candidates making inflammatory statements were circulated on messaging platforms days before polls opened, with insufficient time for credible rebuttals.
Synthetic imagery
Photorealistic images depicting candidates in compromising situations were generated and shared across social media, exploiting platform content moderation delays.
Automated propaganda
LLM-powered accounts generated high volumes of targeted political messaging, adjusting framing based on user demographic data to maximise emotional impact.
Denial of authentic content
Real footage of public statements was falsely labelled AI-generated by the subjects themselves, preventing accountability for documented positions and actions.
Planning for the Future
Kroll’s recommendations focus on three layers of response: technical, regulatory, and institutional. On the technical side, the report calls for greater investment in provenance technologies — cryptographic content authentication that can establish a chain of custody for genuine recordings and images. On the regulatory side, it urges faster adoption of AI-labelling requirements, particularly during election periods, and greater liability for platforms that amplify unverified AI-generated political content.
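To make the provenance idea concrete, the sketch below shows detached content signing in Python. It is a minimal illustration only, assuming an Ed25519 keypair held by the publisher and using the open-source cryptography package; real provenance standards such as C2PA embed signed manifests inside the media file rather than shipping a separate signature, and nothing in this snippet is drawn from the Kroll report itself.

```python
# Minimal sketch of content authentication: a publisher signs the hash of a
# recording at capture or publication time, and anyone can later verify that
# the file is unaltered. Illustrative only; not the C2PA standard or
# Kroll's proposal.
import hashlib

from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives.asymmetric import ed25519

# In practice this key would belong to a newsroom, camera vendor, or platform.
signing_key = ed25519.Ed25519PrivateKey.generate()
verify_key = signing_key.public_key()

def sign_content(data: bytes) -> bytes:
    """Sign the SHA-256 digest of the content, establishing provenance."""
    return signing_key.sign(hashlib.sha256(data).digest())

def verify_content(data: bytes, signature: bytes) -> bool:
    """Return True only if the content matches what was originally signed."""
    try:
        verify_key.verify(signature, hashlib.sha256(data).digest())
        return True
    except InvalidSignature:
        return False

recording = b"...raw bytes of an audio or video recording..."
sig = sign_content(recording)

print(verify_content(recording, sig))            # True: chain of custody intact
print(verify_content(recording + b"\x00", sig))  # False: content was altered
```

A verified signature proves only that the bytes match what the keyholder signed, which is why the report pairs provenance technology with institutional measures: the scheme is only as trustworthy as the party holding the signing key.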
The institutional dimension may be the most difficult to address. Rebuilding public confidence in authentic evidence requires sustained effort from media organisations, civil society groups, and political actors themselves. The Kroll report concludes that inaction is not a neutral option: as AI capabilities advance, the window for establishing robust norms and technical safeguards is narrowing.
For EU policy professionals, the implications are direct. The AI Act’s obligations around transparency and synthetic media labelling are a starting point — but the report’s findings suggest that implementation timelines may need to accelerate to keep pace with deployment realities on the ground.
Kroll is a global provider of risk and financial advisory solutions. Their 2024 report on AI and political disinformation draws on case studies from election monitoring across four continents, interviews with campaign security specialists, and analysis of documented AI-generated influence operations over the prior 18 months.