For many people across the Nigerian diaspora living in the UK, social media platforms like X are not just spaces for conversation but vital tools for news, community connection, and public debate. That is why growing concerns around online safety, especially involving artificial intelligence, are resonating far beyond Westminster. Ofcom, Britain’s media regulator, has now launched a formal investigation into Elon Musk’s X, amid fears that its Grok AI chatbot has been used to generate sexually explicit deepfake images, including illegal and non-consensual content.
The probe, announced on Monday, places renewed pressure on the platform, which is already facing scrutiny from regulators around the world. Ofcom said it was investigating whether X had failed in its legal duty to protect people in the UK from illegal content, particularly material that poses a risk to children. The regulator said reports that Grok had been used to create and share non-consensual intimate images and child sexual abuse material were “deeply concerning”.
The investigation comes as the British government prepares to bring a new law into force this week that will make it a criminal offence to create sexual deepfakes. Ministers have described such images as “weapons of abuse”, highlighting their devastating impact on victims, especially women and girls. Technology Minister Liz Kendall told lawmakers that the government also plans further legislation aimed at stopping the problem at its source by making it illegal for companies to supply tools specifically designed to create deepfakes.
Ofcom said platforms operating in Britain must take responsibility for preventing users from encountering illegal content and for removing such material swiftly once it is identified. The regulator stressed that it would not hesitate to investigate companies where there is evidence they may be failing in their duties under UK law.
X responded by pointing to a previous statement in which it said it takes action against illegal content, including child sexual abuse material, by removing it, permanently suspending accounts, and working with law enforcement where necessary. The company said anyone who uses or prompts Grok to generate illegal content would face the same consequences as users who upload such material directly.
Political pressure has continued to mount. Prime Minister Keir Starmer last week described the Grok-generated images as “disgusting” and “unlawful”, saying X needed to “get a grip” on the situation. The case is widely expected to be one of the first major tests of Britain’s Online Safety Act, which became law in 2023 but is being rolled out in stages by Ofcom.
When asked whether X could be banned in the UK, Business Secretary Peter Kyle said such a move was possible, although he noted that the authority to take that step lies with Ofcom. In the most serious cases of non-compliance, the regulator can seek court orders requiring advertisers or payment providers to withdraw services from a platform, or even ask internet service providers to block access to a site in Britain.
Elon Musk has pushed back strongly against the criticism. Writing on X over the weekend, he accused the British government of trying to suppress free speech by focusing on Grok and his platform. However, Kendall rejected that argument, telling lawmakers the issue was not about freedom of expression but about tackling violence against women and girls, upholding basic standards of decency, and ensuring that the rules society expects offline also apply online.
The controversy is not limited to the UK. X has faced condemnation in several other countries over Grok’s image-generation capabilities. French officials have reported the platform to prosecutors and regulators, calling the content “manifestly illegal”. Authorities in India have demanded explanations, while Indonesia and Malaysia temporarily blocked Grok over the weekend. X has said it has restricted certain image requests, such as undressing people in photos, to paying users, though critics argue this does not go far enough.
Ofcom’s investigation will examine whether X adequately assessed the risk that people in Britain could be exposed to illegal content and whether it properly considered the risks to children. The outcome could have far-reaching consequences, not only for X but for how AI-powered tools are regulated across social media platforms.
For diaspora communities who rely heavily on digital platforms to stay informed and connected, the case highlights a growing global challenge. As artificial intelligence becomes more powerful and more accessible, governments are racing to ensure that innovation does not come at the expense of safety, dignity, and human rights.
At Chijos News, we continue to track how UK laws, global tech companies, and emerging technologies intersect with the lived experiences of migrants and diaspora communities. The Ofcom probe into X and Grok is a reminder that the rules shaping the digital world will increasingly affect how safely and confidently people can participate in online life, no matter where they come from.