The Home Office is running an initiative for the development of solutions to detect ‘deepfake’ content generated by AI.
Its Accelerated Capability Environment – the unit working on public safety and security issues in digital and data – is collaborating with the Department for Science, Innovation and Technology and the Alan Turing Institute on the Deepfake Detection Challenge.
This is aimed at carrying out fast-paced work to assess the current situation, determine existing capabilities and identify gaps that require new approaches and solutions to tackle the challenges posed by deepfakes.
The Home Office said this comes in response to the risks posed by deepfakes in areas such as the generation of fake news, financial fraud, revenge porn and child sexual abuse.
It staged an industry briefing event with IT industry association techUK yesterday, and is inviting public sector organisations and academia to register an interest in getting involved.
Accelerating evolution
An Accelerated Capability Environment blogpost said: “Rapid advances in generative artificial intelligence are accelerating the evolution of deepfakes. As they become increasingly sophisticated and convincing, creating deepfakes grows ever easier.”
It added: “Deepfake detection is challenging, and the public is becoming increasingly aware of the threats it poses; policing and government more widely want to ensure we are equipped with the tools to address the problem at the necessary pace and scale.”
Two discovery workshops on relevant policy and technology were held during March, and the challenge is aimed at producing demonstrations of possible solutions in a showcase event later in the year.