By Odanga Madung, Technology & Human Rights Fellow 2025-26
The views expressed below are those of the author and do not necessarily reflect those of the Carr Center for Human Rights Policy or Harvard Kennedy School. These perspectives have been presented to encourage debate on important public policy challenges.
The Grok ‘Undressing’ Scandal shows us that the right not to be generated is the next frontier of human dignity. We must rage against the generative machine.
In late December 2025, Elon Musk announced that Grok, his company's AI chatbot, would now include image and video editing features. It was launched under the guise of letting users have Santa photobomb their pictures. Within days, however, the tool was being used to digitally strip clothing from photos of real women and children.
A New York Times analysis would later show that Musk himself was likely the biggest trigger of this behavior: after he asked Grok to create a picture of himself in a bikini, an avalanche of millions of requests followed, asking the chatbot to perform the same kind of function on other people. Grok complied with these requests, processing their images into sexualized deepfakes and posting them publicly. By early January, X's head of product, Nikita Bier, celebrated that X was seeing record-breaking engagement numbers. A startling coincidence.
Elon Musk's Grok AI did more than just produce and distribute disturbing, non-consensual intimate imagery of women and children over the holidays. It performed a stark, public experiment that revealed the logical outcome of a technology deployed without recognizing a digital right we have yet to articulate: the right not to be generated.
Some might see this as a "content moderation" problem, but to frame it that way is to suggest that the only harm is the distribution of a distasteful image, reducing the violation to a speech issue and making it look as if the company made a whimsical mistake. The deeper violation occurs in the act of processing. The technology is acting as intended, and this is what happens when you build powerful synthesis tools without first establishing that some forms of generation are impermissible.
When a user asks Grok to strip away the clothing of a real person, it synthesizes a new, intimate artifact of that person's body without their knowledge or consent. The system transforms a person's likeness into raw material for a sexualized fabrication. The harm is in the processing; distribution simply multiplies the injury.
The Grok incident was predictable; deepfake porn isn't new. A 2023 study found that 98 percent of deepfake videos online were pornographic and that 99 percent of those targeted were women and girls. Every time a new media tool emerges that can manipulate human likenesses, it gets weaponized for sexual violation at scale.
In the case of generative AI, tools like Grok have added new layers to the industrialization of sexual abuse. Technical skill is no longer a barrier to violating someone online; the only barrier is whether the AI system will say no. And there are people hell-bent on figuring out how to make sure it keeps saying yes.
Grok’s case is the bluntest example, but the principle extends far beyond non-consensual pornography. It applies to a voice clone trained on a few seconds of your audio to scam your family, a video of you performing a crime you never committed, or a “childhood photo” of you that never existed, generated to lend authenticity to a lie. Our digital identities are now perpetually recombinable into new, harmful fictions. This is the generation of you, a computational reconstruction of your existence for someone else’s purposes.
This is why we must articulate and codify a right to representational self-determination in the synthetic age: the right not to be generated.
It means establishing "real person edits off by default" as the baseline for AI image systems. Processing the likeness of a real, identifiable person for sexual, fraudulent, or impersonation purposes should not be possible by default, even for paying users of these platforms.
It means creating meaningful recourse. Not just the ability to request takedowns after a synthetic intimate image is already circulating online, but the ability to prevent the processing in the first place. As it stands, the remedy begins with a victim discovering a violation and begging platforms to remove it.
Opponents of this kind of advocacy will use the same script they've used before: that such measures will "stifle innovation" and free speech. But these protections are not a veto on all synthetic media. The right not to be generated is focused on the industrial, non-consensual use of a person's very being as an input for a machine designed to harm them. We can build systems that respect bodily autonomy in digital space the same way we've learned to respect it in physical space.
There's a meaningful difference between creating fictional characters, drawing generic figures, or writing about public figures, and processing someone's actual face and body to generate synthetic sexual or incriminating imagery of them specifically. Some uses of this technology cross the line from speech into violations of ordinary people.
Advocates have fought to ensure that people have rights over their personal data. When Mr. Musk's Grok publicly turned photos into synthetic nudes, it was a proof of concept for a new form of power. It showed that tech giants are intent on making sure our digital selves are no longer our own. They are just unsecured feedstock for the next viral outrage, where likenesses can be manufactured without your consent and distributed without your knowledge. We must defend the right to self against the generative machine.