Regulators must urgently tackle the threats posed by AI ahead of July’s general election to preserve trust in the democratic system, according to new research by The Alan Turing Institute’s Centre for Emerging Technology and Security (CETaS).
The researchers have urged communications regulator Ofcom and the Electoral Commission to quickly address the use of AI to mislead the public and erode confidence in the integrity of the electoral process.
Recent advances in AI technology have raised widespread concern that it could be used to spread disinformation, influence voters and disrupt the integrity of election processes, with the aim of manipulating election outcomes or eroding trust in democracy.
The new study cautions against fears that AI will directly sway election results, noting that, to date, there is limited evidence of AI changing the outcome of an election from its expected result. Of 112 national elections either held since January 2023 or forthcoming in 2024, the researchers found that just 19 showed examples of AI-enabled interference.
But there have been early signs of damage to the broader democratic system. This includes confusion among the electorate over whether AI-generated content is real, which damages the integrity of online sources; deepfakes inciting online hate against political figures, which threatens their personal safety; and politicians exploiting AI disinformation for potential electoral gain.
Fake endorsements
The study also warns that ambiguity in current electoral law around AI could lead to its misuse in the upcoming general election, for example through people using generative AI systems such as ChatGPT to create fake campaign endorsements.
The authors make several recommendations to mitigate potential threats, including that the Electoral Commission and Ofcom issue guidelines and seek voluntary agreements with political parties setting out how they should use AI for campaigning, while requiring AI-generated election material to be clearly labelled as such.
They say these organisations should also work with the Independent Press Standards Organisation to publish new guidance for media reporting on content which is either alleged or confirmed to be AI-generated, particularly on polling day in light of broadcasting restrictions.
Another recommendation is that the Electoral Commission should ensure any forthcoming voter information contains guidance on how individuals can remain vigilant against AI-based election threats (such as attempts to cause confusion over the time and place of voting).
Exercises and simulations
The study also recommends that the UK Government’s Defending Democracy Task Force (DDTF) and the Joint Election Security and Preparedness Unit (JESP) coordinate exercises with local election officials, media outlets and social media platforms, simulating possible deepfakes of political candidates and AI-enabled voter suppression efforts, so they are prepared to respond when these situations arise.
In addition, the DDTF should create a live repository of AI-generated material from recent and upcoming elections so it can analyse trends to inform future public information campaigns.
The researchers created a timeline of how AI threats develop in the lead-up to an election. In the months, weeks and hours beforehand, AI could be used to undermine the reputation of political candidates, falsely claim that candidates have withdrawn, shape voter attitudes on a particular issue or create deceptive political ads.
During the polling period, deepfake attacks, polling disinformation and AI-generated knowledge sources (such as fake news articles) are likely to circulate and create confusion over how, where and when to vote. And after the election, we are most likely to see false claims that a candidate has won before the results have been announced, as well as deepfakes and AI bots alleging election fraud in order to undermine election integrity.
No clear guidance
Sam Stockwell, research associate at The Alan Turing Institute and lead author, said: “With a general election just weeks away, political parties are already in the midst of a busy campaigning period. Right now, there is no clear guidance or expectations for preventing AI being used to create false or misleading electoral information.
“That’s why it’s so important for regulators to act quickly before it’s too late.”
Dr Alexander Babuta, director of CETaS, said: “While we shouldn’t overplay the idea that our elections are no longer secure, particularly as there is no clear evidence worldwide of a result being changed by AI, we nevertheless must use this moment to act and make our elections resilient to the threats we face.
“Regulators can do more to help the public distinguish fact from fiction and ensure voters don’t lose faith in the democratic process.”