Elon Musk’s ex Ashley St. Clair says she’s considering legal action after xAI produced fake pornographic images of her



Elon Musk’s artificial intelligence chatbot Grok has been accused of generating pornographic images of real people, including children, without their consent. Over the past week, X has been filled with manipulated photos that strip people of their clothes, put them in bikinis, or reposition them into sexually suggestive poses.

The non-consensual images have left some women feeling violated. At the same time, Grok’s creation of the images and their presence on X could land Musk’s companies in serious legal trouble in several countries around the world.

Ashley St. Clair, a conservative political commentator, social media influencer, and mother of one of Musk’s children (Musk has publicly questioned his paternity), says she has become a victim of Grok’s recent “stripping” craze. Fortune reviewed several examples of the images created on X, including a fake image of St. Clair.

“When I saw [these images] I immediately responded and tagged Grok and said I did not consent to this,” St. Clair told Fortune in an interview on Monday. “[Grok] acknowledged that I did not consent to the creation of these images … and then it continued to make them, and they only became more explicit.”

“There are photos of me with nothing covered but a piece of dental floss, my toddler’s backpack in the background, and photos of me looking like I’m not wearing a top at all,” she said. “I feel so disgusted and violated. I’m also so angry that this has happened to other women and children.”

St. Clair told Fortune that after speaking out about the situation, she was contacted by multiple other women with similar experiences. She has also reviewed inappropriate images of minors produced by Grok and is considering legal action.

A representative for X did not immediately respond to Fortune’s request for comment. “Anyone using Grok to create illegal content will suffer the same consequences as anyone uploading illegal content,” Musk said in a post on X.

X’s official Safety account has said that the platform takes action against illegal content.

Regulators launch investigation

AI-generated and AI-modified images have become widespread and easy to create thanks to new tools from xAI, OpenAI, Google, and others, raising concerns about misinformation, privacy, harassment, and other types of abuse.

While the United States currently has no federal laws regulating artificial intelligence (a recent executive order from President Trump attempts to limit state and local AI laws), controversial uses and abuses of the technology could force lawmakers to take action. The situation could also test existing laws, such as Section 230 of the Communications Decency Act, which exempts online providers from liability for user-created content.

Riana Pfefferkorn, a policy fellow at Stanford’s Institute for Human-Centered Artificial Intelligence, said the legal liability surrounding AI-generated images remains murky but may be tested in court in the near future.

“There’s a difference between a digital platform and a toolset,” she told Fortune. “In general, [platforms] enjoy immunity for the online conduct of their users. But we are in this evolving field, and we have not yet had a court rule on whether the output of generative AI is simply the speech of third parties for which the platform cannot be held responsible, or whether it is the platform’s own speech, in which case there is no immunity.”

“For the first time, we are encountering a situation where the platforms themselves are generating non-consensual pornographic content of adults and minors at scale,” Pfefferkorn said. “CSAM law poses the greatest potential risk, from both a liability perspective and a public relations perspective.”

Meanwhile, regulators in other countries have begun reacting to the recent spate of pornographic AI images. In the UK, Ofcom, the country’s independent regulator for the communications industry, said it had made “urgent contact” with xAI over concerns that Grok could create “images of people undressing and child pornography”.

In a statement, the regulator said it would conduct a “rapid assessment to determine whether there are potential compliance issues that require investigation” based on X and xAI’s responses to the steps taken to comply with legal obligations to protect UK users. Under the UK’s Online Safety Act, tech companies are supposed to prevent such content from being shared and are required to remove it quickly.

Two French parliamentarians have also filed a report regarding the non-consensual images, and the Paris prosecutor confirmed that the incidents have been added to its existing investigation into X.

India’s IT Ministry has separately issued orders over the images, according to media reports. Malaysia’s communications regulator is also reportedly investigating deepfake incidents related to Grok and has warned that X may face enforcement measures if it fails to prevent the misuse of the platform’s AI tools to generate indecent or offensive images.

‘The message sent is very concerning’

British deepfakes expert Henry Ajder said that while Musk’s company may not have directly created the images, the X platform could still be held responsible for the proliferation of inappropriate images of minors.

“If you are providing tools for or facilitating child sexual abuse material (CSAM), legislation that was not written for this specific means of harm can still come into play,” he said. “In the UK we have already banned the publication of non-consensual intimate images generated by AI, and we are now going after the creation toolsets. I think we will see other countries follow suit.”

Part of the reason these images were created and shared so widely is xAI’s recent merger with, and growing integration into, Musk’s X social media platform. xAI trains its models on data scraped from X, where Grok is now a prominent feature.

“Grok is embedded into a platform that Musk wants to be this super app — your artificial intelligence, your social platform, maybe even for payments. If you use it as an anchor, the operating system of your life, you can’t escape it,” Ajder said. “If these capabilities are known and not controlled even after such clear signposting, the message being sent is very concerning.”

xAI isn’t the only company whose AI imagery has raised concerns about sexualized content. Meta removed dozens of AI-generated pornographic images of celebrities that were shared on its platform last year. In October, OpenAI CEO Sam Altman said the company would loosen restrictions on adult AI “pornography” while emphasizing that it would limit harmful content.

Ajder said xAI has a reputation for pushing the boundaries of acceptable AI content. While other mainstream AI models require users to be “very creative and very crafty” to generate risqué content, he said, Grok is willing to be “more avant-garde.”

From the beginning, Grok was positioned as an “anti-woke” alternative to mainstream AI chatbots, especially OpenAI’s ChatGPT. Last July, xAI launched a flirtatious chatbot companion called Ani as part of a new “companion” feature for Grok, which is available to users as young as 12.

‘Women are excluded from public dialogue’

Women who find explicit images generated by Grok online say they feel violated and dehumanized.

Journalist Samantha Smith, who discovered that users had created fake bikini pictures of her, told the BBC it made her feel “dehumanized and reduced to gender stereotypes.”

In a post on X last week, she wrote: “Any man who uses AI to strip women of their clothes is also likely to assault women if they can get away with it. They do it because it’s not consensual. That’s the point. This is sexual abuse they can ‘get away with’.”

British journalist Charlie Smith also discovered non-consensual bikini photos of herself online.

“I wasn’t sure if I wanted to post this, but someone asked Grok to post a picture of me in a bikini and Grok responded with a picture,” she wrote in a post on X.

St. Clair told Fortune she considers X “the most dangerous company in the world right now,” accusing the company of threatening women’s ability to exist safely online.

“What’s even more concerning is that women are being excluded from public conversations as a result of this abuse,” she said. “When you exclude women from public conversations … because they can’t participate in it without being abused, then you are disproportionately excluding women from AI.”

This story was originally published on Fortune.com.
