We Must Go to War Against Deepfakes Now to Keep November Voting Fair


The time has come to put the world’s democracies on a war footing against AI-enabled foreign meddling. 2023 was the year when we began to take artificial intelligence seriously. The year culminated in no shortage of different national and international ethical AI initiatives, executive orders and acts.

However, when it comes to the integrity of our democratic systems, we do not have several years to implement well-meaning laws: the 2024 AI super-election year is already upon us. The stakes could not be higher, yet our collective preparations are insufficient against the AI-enabled onslaught waiting to be unleashed. The world's democracies must do more, in collaboration with each other, to shore up our defenses and build resilience.

In 2016, both governments and the platforms themselves underestimated the power of disinformation. We co-founded the Transatlantic Commission on Election Integrity alongside Joe Biden in early 2018 to sound the alarm and push for democracies to cooperate in solving this collective threat.


Today, there are many excellent initiatives among governments, media, and civil society, from fact-checkers to media literacy campaigns. We have worked with many of them. Yet it remains to be seen whether any can match the scaling effect that AI will hand their opponents. Worse, many of those taken in by disinformation narratives see the fact-checkers and official media channels as part of the "global conspiracy."

Large language models such as ChatGPT, Bard, or Llama 2 can effectively supercharge the playbook that emerged in 2016: election meddling on steroids. Ironically, they "democratize" the disinformation playbook, making it more accessible, easier, and cheaper for malign actors.

Such models can significantly increase the volume of disinformation being propagated and spread, and allow it to be microtargeted, both to push us further into our echo chambers and to make detecting bot activity even harder.

Their ability to generate text, audio, image, and video content poses a new challenge in a highly emotionally charged political landscape: we can no longer believe our own eyes. Already, toward the end of 2023, deepfakes were deployed in elections in Slovakia and Argentina. The Argentinian deepfakes were produced by both final-round presidential campaigns. Blame for the Slovak audio deepfake, targeting the Liberal Party leader, was laid directly at Moscow's door. By the U.S. presidential election in November, the technology will have advanced considerably.

Most major tech platforms have been keen to show they are adapting, not least because the EU's Digital Services Act requires them to label manipulated content. Google and Meta will require more transparency around AI-generated political ads. Microsoft has made commitments to support campaign teams and has unveiled a tool to insert digital watermarks into advertising to show who created it and whether it has been altered. Meanwhile, Elon Musk's X has already been subjected to formal proceedings by the European Commission for failing to meet its Digital Services Act obligations.

Digital watermarking efforts are promising. For example, a cross-industry initiative, with players ranging from Adobe and Microsoft to the BBC and New York Times, is developing a global standard for a digital watermark that will enable publishers, creators, and consumers to see the authenticity and provenance of different media types.

This sort of initiative is what we need more of. We will not solve this challenge with legislation alone, nor with civil society efforts or platform transparency alone. We need all these things and more. When our militaries prepare for various scenarios, they conduct exercises. This enables them to anticipate an adversary's attack scenarios and to prepare responses and deterrence.

We should look to conduct similar exercises across the democratic world, convening an array of actors: major leaders in frontier AI, the platforms, media, civil society, and the bodies that facilitate elections. This could take the form of a transatlantic red team, like those used to test cybersecurity or military preparedness, with different experts taking the roles of attacker and defender.

Convening such an initiative does not need the auspices of a big and cumbersome international body; it just needs political will. For example, the UK, France, and South Korea could host such an exercise under the auspices of their AI Safety Summits. The Italians could convene one under their G7 presidency. What matters is not the convenor but the outcome, which cannot be another set of broad long-term principles; it must be immediate, actionable points for all players in our democratic systems.

We can contain the ill effects of AI-enabled election interference, but failing to prepare now will make 2016's Russian election meddling look like a warm-up act. There is no shortage of efforts to tackle election interference and disinformation on one hand, or to place guardrails on AI on the other. Early in 2024 we must unite all these forces for democracy to collectively repel the autocrats.

Anders Fogh Rasmussen is a former Danish prime minister and NATO secretary general.

Michael Chertoff was the second U.S. secretary of Homeland Security. He is also an advisor to platforms trying to counter deepfakes.

The views expressed in this article are the writers’ own.