X to stop Grok AI from undressing images of real people
X has announced that its artificial intelligence tool, Grok, will no longer be able to edit images of real people to depict them in revealing clothing in jurisdictions where such activity is illegal. The change follows widespread backlash over the creation of sexualised AI deepfakes.
In a statement published on the platform, X said it had introduced new safeguards to prevent the Grok account from being used to manipulate photos of real individuals in a sexualised manner. “We have implemented technological measures to prevent the Grok account from allowing the editing of images of real people in revealing clothing,” the company said.
The move has been welcomed by UK authorities, who had previously raised concerns about the tool’s use. The UK government described the decision as a “vindication” of its calls for X to take stronger action to control Grok. Media regulator Ofcom also said the change was a “welcome development”, while stressing that its investigation into whether the platform breached UK laws is still under way.
“We are working round the clock to progress this and get answers into what went wrong and what’s being done to fix it,” Ofcom said, signalling continued scrutiny despite the latest measures.
Technology Secretary Liz Kendall welcomed X’s announcement but emphasised the need for accountability. She said she would “expect the facts to be fully and robustly established by Ofcom’s ongoing investigation”, underlining the government’s commitment to ensuring online safety rules are upheld.
However, campaigners and victims of AI-generated sexualised images say the change comes only after significant harm has already been done. Journalist and campaigner Jess Davies, who was among the women whose images were edited using Grok, described the changes as a “positive step” but said the feature should never have been permitted in the first place.
