For more than two years, an app called ClothOff has been terrorizing young women online – and it’s maddeningly difficult to stop. The app has been removed from two major app stores and banned from most social platforms, but it is still available on the web and through a Telegram bot. In October, a clinic at Yale Law School filed a lawsuit that seeks to take down the app, forcing its owners to remove all images and cease operations. But just finding the accused has been a challenge.
“It was incorporated in the British Virgin Islands,” said Professor John Langford, lead counsel in the lawsuit, “but we believe it is run by a brother and sister in Belarus. It may be part of a larger network around the world.”
It is a bitter lesson in the wake of a new flood of non-consensual pornography generated by Elon Musk’s xAI, which has included many underage victims. Child sexual abuse material is the most legally toxic content on the internet — it’s illegal to produce, transmit or store, and every major cloud service routinely scans for it. But even with strong legal restrictions, image generators like ClothOff have found ways to slip through, as Langford’s case shows. Individual users can be prosecuted, but platforms like ClothOff and Grok are more difficult to police, leaving few options for victims hoping to seek justice in court.
The clinic’s complaint, which is available online, paints an alarming picture. The plaintiff is an anonymous high school student in New Jersey whose classmate used ClothOff to edit her Instagram photos. She was 14 when the original Instagram photo was taken, meaning the AI-edited version is legally classified as child abuse imagery. But even though the edited images are plainly illegal, local authorities declined to prosecute the case, citing the difficulty of recovering evidence from the suspect’s device.
“Neither the school nor law enforcement ever determined how widely the CSAM of Jane Doe and the other girls was distributed,” the complaint states.
Still, the court case has moved slowly. The complaint was filed in October, and in the months since, Langford and his colleagues have been working to serve notice on the defendants – a difficult task given the global nature of the company. Once the defendants are served, the clinic can push for court appearances and, eventually, a trial, but in the meantime the legal system offers little comfort to ClothOff’s victims.
Grok’s case may seem like an easier problem to fix. Elon Musk’s xAI is not hiding, and there is a lot of money waiting for the lawyer who can win a claim against it. But Grok is a general-purpose tool, which makes it more difficult to hold accountable in court.
“ClothOff is designed and marketed specifically as a fake porn image and video generator,” Langford said. “When you’re dealing with a general-purpose system that users can ask for different things, it becomes more complicated.”
Some US laws already prohibit fake pornography – notably the Take It Down Act. But while certain users are clearly breaking the law, it’s harder to hold the entire platform accountable. Existing law requires clear evidence of intent to harm, which means proving that xAI knew the tool would be used to produce non-consensual pornography. Without such evidence, xAI’s First Amendment rights would provide significant legal protection.
“In terms of the First Amendment, it’s pretty clear child sexual abuse material is not protected expression,” Langford said. “So when you design a system to create that kind of content, you’re clearly operating outside of what the First Amendment protects.”
The easiest way around this problem is to show that xAI deliberately ignored it. That’s a real possibility, given recent reports that Musk directed employees to loosen Grok’s safeguards. But even then, it would be a trickier case to make.
“A reasonable person would say, we knew this was a problem last year,” Langford said. “How come you don’t have tighter controls to make sure this doesn’t happen? It’s either negligence or knowledge, but it’s just a more complicated case.”
First Amendment issues are why xAI’s biggest pushback has come from legal systems without strong free-speech protections. Indonesia and Malaysia have taken steps to block access to the Grok chatbot, while regulators in the UK have opened an investigation that could result in similar restrictions. The European Commission, France, Ireland, India and Brazil have taken preliminary steps as well. By contrast, no US regulatory body has issued an official response.
There’s no telling how the investigations will pan out, but at the very least, the deluge of imagery raises many questions for regulators to examine — and the answers could be damning.
“If you post or distribute child sexual abuse material, you are in violation of criminal prohibitions and can be held liable,” Langford said. “The hard question is, what did X know? What did X do or not do? What is it doing now in response?”

