Daily Breach

UK Regulator Launches Probe Into X Over Grok AI Deepfake Abuse

Introduction

The UK’s data protection authority has opened a formal inquiry into X and xAI following reports that the Grok artificial intelligence system was used to generate sexual deepfake images without consent. The move signals escalating regulatory scrutiny over generative AI tools and their misuse.

Background and Context

The investigation is being led by the Information Commissioner’s Office, the UK body responsible for enforcing data protection and privacy laws. Concerns emerged after users allegedly leveraged Grok to create intimate or sexualised images of individuals without their knowledge.

Such practices raise serious questions under UK data protection legislation, particularly around consent, lawful processing of personal data, and the adequacy of safeguards in AI system design.

Regulatory Concerns

According to the ICO, the reported misuse of Grok presents “serious concerns” about whether sufficient protections were embedded during the development and deployment of the tool.

William Malcolm, Executive Director for Regulatory Risk and Innovation at the ICO, warned that the loss of control over personal data in this context can result in immediate and severe harm. He emphasized that risks are significantly amplified where children may be affected, underscoring the urgency of regulatory intervention.

Role of X and xAI

While xAI develops and operates Grok, the system is also accessible through interactions on X. The ICO investigation will assess whether either company failed to meet its legal obligations as a data controller or processor under UK law.

In parallel, Ofcom confirmed it is not currently investigating xAI in relation to the standalone Grok application. However, Ofcom stated that its ongoing investigation into X remains active and is still in the evidence-gathering phase, a process that could take several months.

Ofcom’s Position on xAI

Ofcom clarified that while it continues to seek answers from xAI regarding the risks posed by Grok, limitations within the current legal framework restrict its ability to investigate certain chatbot-related activities. Specifically, the regulator noted challenges in applying rules concerning the creation of illegal images by standalone AI chat services.

Nevertheless, Ofcom is assessing whether xAI complies with obligations requiring platforms that publish pornographic material to implement highly effective age-verification measures to prevent access by children.

Impact and Scope

The case highlights a growing regulatory focus on generative AI misuse, particularly where technologies can be exploited to produce non-consensual sexual content. If breaches are confirmed, potential consequences could include enforcement action, fines, and mandatory changes to AI system design and governance.

For AI providers, the inquiry reinforces the expectation that privacy-by-design, robust content safeguards, and misuse prevention mechanisms are not optional but legal necessities.

Expert Commentary

From a cybersecurity and data protection perspective, this investigation reflects a broader shift toward holding AI developers and platform operators accountable for downstream harms enabled by their technologies. Regulators are increasingly unwilling to accept claims that misuse is solely the responsibility of end users when systemic safeguards are inadequate.

Outlook

As the investigation progresses, its outcome could set an important precedent for how generative AI systems are regulated in the UK. The case may also influence future guidance on consent, biometric data, and synthetic media, shaping how AI tools are deployed across social platforms.

Organizations developing or integrating generative AI should closely monitor the case and proactively review their compliance frameworks to mitigate similar risks.

About Author

Adv. Aayushman Verma is a cybersecurity and technology law enthusiast pursuing a Master’s in Cyber Law and Information Security at the National Law Institute University (NLIU), Bhopal. He has qualified the UPSC CDS and AFCAT examinations multiple times. His work focuses on cybersecurity consulting, digital policy, and data protection compliance, with an emphasis on translating complex legal and technological developments into clear insights on emerging cyber risks and secure digital futures.
