Grok & the Challenge of AI Regulation: A Global Response
- Cassidy Yarnall
- Mar 11
- 6 min read
Grok, an AI chatbot embedded in the social media platform X, has come under global scrutiny after the tool enabled paid subscribers to generate sexually explicit images, known as ‘deepfakes’, of real people without their consent. The controversy has heightened concerns for the safety of women and children in particular, prompting governments to take action against xAI, the developer of Grok. While Australia’s eSafety Commissioner has taken preliminary steps to address this issue, the protection of the Australian public requires that eSafety continue to focus on model design accountability and ensure that AI platforms implement adequate safeguards to prevent the creation of this content. The central regulatory focus should be on enforcing design obligations on AI platform developers so that pornographic and harmful deepfakes cannot be generated by users in the first place, rather than merely criminalising the distribution of such content.
International Response
Malaysia, Indonesia, and India
Malaysia and Indonesia were the first to take action against Grok. Both countries have blocked public access to the tool, committing to protect their communities from non-consensual, sexually explicit deepfakes whose generation undermines X Corp’s stated safety commitments. Malaysia’s regulatory stance centres on model design accountability: in a statement on 13 January 2026, it explained that X Corp and xAI “retain control over Grok’s design, deployment, moderation mechanisms, and risk-mitigation measures,” despite the content being user-generated, and that “liability cannot be disclaimed where systemic safeguards have failed.” This followed notices issued by the Malaysian Communications and Multimedia Commission (MCMC) to X Corp on 3 January and 8 January demanding that Grok implement safeguards; X Corp’s responses failed to address design and operational accountability. The MCMC has since announced that it will take legal action against X Corp, with a particular focus on upholding Malaysian laws protecting women and minors. While this reflects a domestic attempt to enforce model design accountability, the practical impact of legal action may be limited by jurisdictional constraints and the platform’s global operations.
Indonesia’s Ministry of Communications and Digital Affairs (Kemkomdigi) has also temporarily blocked access to Grok. In an Instagram post on 10 January 2026, Minister Meutya Hafid explained that Kemkomdigi had requested X to “provide clarification on the negative impact of the use of Grok,” consistent with Indonesia’s Electronic Information and Transactions Law, which empowers authorities to require content moderation and platform accountability. Compliance in this context would likely involve safeguards, clearer moderation protocols, and evidence of risk mitigation rather than mere content removal.
While India has not banned the AI tool, it has placed significant pressure on X to comply with Grok’s safety obligations. On 2 January 2026, India’s Ministry of Electronics and Information Technology directed X to immediately review Grok’s technical and governance framework, remove all unlawful content, take action against offending users, and submit an Action Taken Report. India threatens loss of legal protection under the IT Act and strict action under cyber, criminal, and child protection laws if X does not comply.
France, European Union, and the UK
France and the European Union have emphasised the immoral and potentially criminal nature of the deepfakes generated through Grok. ARCOM, France’s Digital Services Coordinator (DSC) responsible for ensuring the proper implementation of the EU Digital Services Act (DSA) in France, explained in a press release on 15 January 2026 that its role is to gather evidence to help establish any breaches by X of its obligations. ARCOM can then forward this evidence to the Irish DSC, which, along with the European Commission, has jurisdiction to enforce the DSA against X; the Irish DSC’s authority stems from the fact that many major technology companies have their European headquarters in Ireland. However, while this model provides significant regulatory leverage on paper, its cross-border procedural structure may slow immediate intervention.
For the UK, protecting children against online sexual abuse and exploitation is a central focus. The UK’s communications regulator, Ofcom, opened a formal investigation into X on 12 January 2026, following widespread reports that Grok was being used to generate and share content that may amount to Child Sexual Abuse Material (CSAM). While X Safety has issued a response, as discussed below, Ofcom’s investigation remains ongoing.
United States
On 28 April 2025, the United States Congress passed the TAKE IT DOWN Act, short for the ‘Tools to Address Known Exploitation by Immobilizing Technological Deepfakes on Websites and Networks’ Act. The Act criminalises the non-consensual publication of intimate images, including deepfakes, and requires certain platforms to “implement a ‘notice-and-removal’ process to remove such images at the depicted individual's request.” As the Federal Trade Commission explains, covered platforms are required to “create a process for consumers to notify platforms of a nonconsensual intimate visual depiction on the platform” and “remove such depictions within 48 hours of receiving notice.” US President Donald Trump signed the bill into law on 19 May 2025.
The TAKE IT DOWN Act has been praised by many public figures and organisations in the US, as seen in commentary provided by the Senate Committee on Commerce, Science and Transportation. One important benefit of the Act is that it shifts the onus onto platforms to take responsibility for removal. Many supporters praise the Act both for criminalising the publication of deepfakes and for requiring their removal. For example, IBM highlighted that the Act would make those who “distribute nonconsensual intimate audiovisual content” liable.
However, criminalising the distribution of pornographic deepfakes does not necessarily prevent their creation. While the TAKE IT DOWN Act is effective in intervening after harm has occurred, a legislative gap remains in preventing the creation of harmful deepfakes and in imposing design obligations on AI developers themselves.
Australia’s Response
Australia’s eSafety Commissioner has the power to issue removal notices for illegal content pursuant to the Online Safety Act 2021 (Cth), and has expressed its willingness to exercise these powers where appropriate in a media release published on 9 January 2026. In this statement, eSafety outlined several actions it has undertaken in response to Grok’s disturbing content, including implementing mandatory codes that oblige AI services, among others, to limit children’s access to sexually explicit and other harmful content. eSafety stated it “expects all covered services to take reasonable steps to comply with the Basic Online Safety Expectations, including the expectation to proactively minimise the extent to which material or activity on the service is unlawful or harmful to children.” eSafety has also written to X seeking information about the safeguards being used to prevent the misuse of Grok and the actions taken to comply with these obligations. As emphasised in the media release, X and other services are subject to “systemic safety obligations to detect and remove child sexual exploitation material and other unlawful material as part of Australia’s world-leading industry codes and standards.”
Measures Taken by Grok
X maintains zero-tolerance policies for non-consensual nudity and CSAM; however, questions remain as to whether these policies extend to AI-generated content. Accounts in violation of X policies will be permanently suspended, and accounts sharing content that depicts or promotes child sexual exploitation will be reported to the National Center for Missing & Exploited Children.
On 15 January 2026, X Safety posted on X stating that it has taken several measures to prevent and restrict the creation of violative content on Grok. These include removing CSAM and non-consensual nudity, “taking appropriate action” against accounts in violation of X rules, and reporting accounts that seek CSAM to law enforcement authorities “as necessary.” Further, X has implemented measures to globally restrict the ability to edit images of real people into revealing clothing, a restriction that now extends to paid subscribers as well.
Next Steps for Australia
Australia's Criminal Code Amendment (Deepfake Sexual Material) Act 2024 introduced federal offences for sharing sexual material relating to an adult without their consent; the offence includes “images, videos or audio depicting a person that have been edited or entirely created using digital technology (including artificial intelligence), generating a realistic but false depiction of the person.”
This is vital legislation given the increasing prevalence of AI-generated deepfakes, which grow more realistic as technology continues to advance. As explained by the eSafety Commissioner in a position statement published on 2 February 2026, “deepfakes can be used as a tool for identity theft, extortion, sexual exploitation, reputational damage, ridicule, intimidation and harassment.” Furthermore, a person who has been targeted in deepfake generation may suffer “financial loss, damage to professional or social standing, fear, humiliation, shame, loss of self-esteem or reduced confidence.” In light of this, while legislative measures to punish users who share and distribute pornographic deepfakes are a step in the right direction, they do not reverse the harm already inflicted on the victim once the content is published.
Therefore, Australia must take action against the source of these deepfakes; legislative priorities must shift from criminalising distribution to criminalising creation. This would ensure that potentially harmful or damaging AI-generated images cannot be shared, because they could not be generated in the first place. Moreover, while the above amendments to the Criminal Code target material relating to adults, material relating to minors and the prevention of AI-generated CSAM remains a pressing concern for federal law.
The Australian government must also ensure that X continues to take adequate measures to comply with its stated safety obligations, and that it is transparent about any safeguards adopted to ensure this content can no longer be generated. Regulating AI model design, rather than relying solely on post-publication enforcement, must become a global policy priority.
Disclaimer: This article was produced with the assistance of artificial intelligence tools for drafting and editing purposes only; all research, analysis, and source selection were conducted independently by the author.