Elon Musk's AI model Grok will no longer be able to edit photos of real people to show them in revealing clothing in jurisdictions where it is illegal, after widespread concern over sexualised AI deepfakes.
"We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing such as bikinis. This restriction applies to all users, including paid subscribers," reads an announcement on X, which operates the Grok AI tool.
The change was announced hours after California's top prosecutor said the state was probing the spread of sexualised AI deepfakes, including those involving children, generated by the AI model.
"We now geoblock the ability of all users to generate images of real people in bikinis, underwear, and similar attire via the Grok account and in Grok in X in those jurisdictions where it's illegal," X said in a statement on Wednesday.
Additionally, it reiterated that only paid users will be able to edit images using Grok on its platform. X said the measure adds an extra layer of protection by ensuring accountability for abusers attempting to violate legal statutes or platform policies.
"With NSFW (not safe for work) settings enabled, Grok is supposed to allow upper body nudity of imaginary adult humans (not real ones) consistent with what can be seen in R-rated films," Musk wrote online on Wednesday.
The response comes amid backlash from governments around the world, with Malaysia and Indonesia becoming the first countries to ban the Grok AI tool over explicit image alterations made without consent.
Britain's media regulator, Ofcom, has also announced an investigation into whether X complied with UK laws regarding sexualised images.
Concerns remain about how the new policies will be enforced, in particular how the AI will distinguish between real and imaginary subjects. Many observers have called for faster action to curb potential misuse of AI tools such as Grok.