The UK government is moving to the frontline of the global fight against deepfake abuse, unveiling plans for a world-first deepfake detection evaluation framework designed to protect the public from AI-generated deception, fraud and exploitation.
Developed alongside leading technology firms including Microsoft, as well as academics, law enforcement and international partners, the new framework will establish clear and consistent standards for assessing deepfake detection tools. Its aim is simple but urgent: to understand how well current technologies can identify harmful fake images, videos and audio, and to close the gaps criminals are already exploiting.
For diaspora communities, who are often targeted by cross-border scams, impersonation fraud and identity abuse, the stakes are especially high. From fake audio messages impersonating relatives overseas to manipulated videos used in financial scams, deepfakes have become a growing threat to trust, safety and dignity across borders.
The government says the framework will test detection technologies against real-world harms, including sexual abuse, financial fraud, impersonation and organised crime. Once fully developed, it will set expectations for the technology industry, pushing platforms and developers to meet higher standards in identifying and stopping deepfake content before it spreads.
The problem is growing at alarming speed. In 2025 alone, an estimated eight million deepfakes were shared online, a dramatic rise from just half a million in 2023. With tools becoming cheaper, faster and easier to use, criminals no longer need technical expertise to create convincing fake content.
Minister for Safeguarding and Violence Against Women and Girls, Jess Phillips, warned that deepfake abuse cuts across age, gender and background. She highlighted real-life scenarios where grandparents are tricked by fake videos of loved ones, women have their images manipulated without consent and businesses are defrauded through AI-powered impersonation.
Phillips said the personal and emotional devastation caused by deepfakes is profound, adding that the new framework is designed to expose the tactics of criminals and shut down the loopholes they rely on. She stressed that the public should not be forced to live in fear of what they see or hear online.
Technology Secretary Liz Kendall echoed those concerns, describing deepfakes as a weapon increasingly used to undermine trust and exploit vulnerable people. She said the UK is determined to lead internationally by combining strong detection tools with tough laws, including criminalising the creation of non-consensual intimate images and moving to ban so-called nudification tools.
Last week, the government funded and led a major Deepfake Detection Challenge hosted by Microsoft. Over four days, more than 350 participants took part, including teams from INTERPOL, Five Eyes partners and major technology companies. They were tested in high-pressure, real-world scenarios involving election security, fraud, organised crime and victim identification, reflecting the growing national security risks posed by synthetic media.
Andrea Simon, Director of the End Violence Against Women Coalition, welcomed the move but warned that victims should not be left to carry the burden of reporting and fighting harmful content online. She said platforms themselves must take greater responsibility, as deepfake abuse and image-based violence continue to evolve.
From a policing perspective, Deputy Commissioner Nik Adams of the City of London Police said deepfakes are already transforming fraud at scale. As the national lead force for fraud, the City of London Police is seeing criminals exploit AI to impersonate trusted individuals and deceive victims at unprecedented speed. He said the new framework will strengthen law enforcement's ability to stay ahead of offenders and protect public confidence.
This initiative forms part of the government’s wider Plan for Change, aimed at making Britain’s streets and online spaces safer. New legislation making it illegal to create or request non-consensual deepfake intimate images of adults is coming into force, with further measures planned under the Online Safety Act to force platforms to act proactively rather than only responding after harm has occurred.
For readers of Chijos News, particularly those in the diaspora navigating digital spaces that connect families, businesses and communities across continents, this moment marks a critical shift. As AI reshapes how we communicate and trust information, the UK’s push to regulate, detect and deter deepfake abuse could set a global standard, one that prioritises safety, accountability and human dignity in the digital age.