Musk denied awareness of Grok’s sexualized images as the California AG launched a probe


Elon Musk said Wednesday that he had seen no naked underage images generated by Grok, hours before the California attorney general opened an investigation into the xAI chatbot over the “proliferation of nonconsensual sexually explicit material.”

Musk’s denial comes amid mounting pressure from governments around the world — from the UK and Europe to Malaysia and Indonesia — after users on X began asking Grok to edit photos of real women, and in some cases children, into sexualized images without their consent. Copyleaks, an AI detection and content governance platform, estimates that approximately one such image is posted to X every minute. A separate sample collected from January 5 to January 6 found 6,700 such images over a 24-hour period. (X and xAI are part of the same company.)

“This material … has been used to harass people on the internet,” California Attorney General Rob Bonta said in a statement. “I request that xAI take immediate action to prevent this from happening.”

The AG’s office will investigate whether and how xAI violated the law.

Several laws protect the targets of nonconsensual sexual images and child sexual abuse material (CSAM). Last year, the Take It Down Act was signed into federal law; it makes it a crime to knowingly distribute nonconsensual intimate images — including deepfakes — and requires platforms like X to remove such content within 48 hours. California also has its own set of laws, which Governor Gavin Newsom signed in 2024, to crack down on sexually explicit deepfakes.

Grok began fulfilling requests from users on X to produce sexualized photos of women and children at the end of last year. The trend appears to have taken off after certain adult content creators asked Grok to reproduce sexualized images of themselves as a form of marketing, which led other users to post similar prompts. In some public cases, including ones involving well-known figures like “Stranger Things” actress Millie Bobby Brown, Grok has responded to requests to alter real photos of real women by changing their clothing, body position, or physical features in a sexualized way.

According to some reports, xAI has begun implementing safeguards to address these issues. Grok now requires a premium subscription before it will respond to certain image generation requests, and even then it may not generate the image. April Kozen, VP of marketing at Copyleaks, told TechCrunch that Grok sometimes fulfills such requests in a more generalized or toned-down way. She added that Grok appears to be more permissive with adult content creators.

“Overall, this behavior suggests that X is experimenting with various mechanisms to reduce or control the generation of problematic images, although inconsistencies remain,” said Kozen.

Neither xAI nor Musk has directly addressed the issue. A few days after the incidents began, Musk appeared to take notice by asking Grok to produce a picture of himself in a bikini. On January 3, X’s safety account said the company took “action against illegal content in X, including (CSAM),” without specifically addressing Grok’s apparent lack of safeguards or its creation of manipulated sexualized imagery involving women.

That framing mirrors what Musk posted today, which emphasized illegality and user behavior.

Musk wrote that he has seen “literally zero” nude underage images generated by Grok. The statement does not deny the existence of the bikini imagery or of more sexualized edits.

Michael Goodyear, an associate professor at New York Law School and a former litigator, told TechCrunch that Musk was inclined to focus on CSAM because the penalties for creating or distributing synthetic sexual images of children are greater.

“For example, in the United States, a distributor or a threatened distributor of CSAM can face up to three years in prison under the Take It Down Act, compared to two for nonconsensual adult sexual images,” said Goodyear.

He added that the “bigger point” is Musk’s efforts to draw attention to problematic user content.

“Obviously, Grok does not generate images spontaneously. It does so only at the request of a user,” Musk wrote in his post. “When asked to produce an image, it will refuse to produce anything illegal, because the operating principle for Grok is to obey the laws of any given country. There may be times when adversarial hacking of Grok prompts an unwanted action. If that happens, we immediately fix the bug.”

Taken together, the posts frame these incidents as outliers, attribute them to user requests or adversarial prompts, and treat them as technical issues that can be resolved with fixes. None acknowledge flaws in Grok’s underlying safety design.

“Regulators may consider, with attention to free speech protections, requiring proactive measures by AI developers to prevent such content,” Goodyear said.

TechCrunch has reached out to xAI to ask how many nonconsensually sexualized images of women and children it has caught, which guardrails it has changed, and whether the company has informed regulators about the issue. TechCrunch will update this article if the company responds.

The California AG isn’t the only regulator trying to hold xAI accountable over the issue. Indonesia and Malaysia have temporarily blocked access to Grok; India has directed X to make technical and procedural changes to Grok; the European Commission has ordered xAI to retain all documents related to the Grok chatbot, a possible precursor to opening a formal investigation; and the UK’s online safety watchdog Ofcom has opened a formal investigation under the UK Online Safety Act.

xAI has drawn criticism for Grok’s sexual imagery before. As AG Bonta pointed out in his statement, Grok includes a “spicy mode” for generating explicit content. In October, an update made the chatbot easier to jailbreak past its existing safety guidelines, prompting many users to create hardcore pornography with Grok, as well as graphic and violent sexual images.

Many of the pornographic images Grok produces depict AI-generated people – something that may still be ethically fraught, but is arguably less harmful than manipulated images and videos of real individuals.

“When AI systems allow the manipulation of images of real people without their express consent, the impact can be immediate and deeply personal,” Copyleaks co-founder and CEO Alon Yamin said in a statement emailed to TechCrunch. “From Sora to Grok, we’re seeing a rapid increase in AI capabilities for manipulated media. To that end, detection and governance are needed now more than ever to prevent abuse.”


