After a string of incidents in which AI chatbots interfered with the mental health of minors, a group of state attorneys general sent a letter to the top companies in the AI industry, warning them to ensure their products do not generate outputs that are in violation of state law.
The letter, now signed by dozens of attorneys general from U.S. states and territories through the National Association of Attorneys General, asks Microsoft, Google, OpenAI, and other AI majors to implement a variety of new internal safeguards to protect users. Anthropic, Apple, Chai AI, Character Technologies, Luka, Meta, Nomi AI, Perplexity AI, Replika, and xAI are also among the letter's recipients.
The letter arrives as a fight over AI regulation has been brewing between the state and federal governments.
The protections include transparent third-party audits of large language models that look for signs of delusional or sycophantic outputs, and mechanisms designed to flag when chatbots produce psychologically harmful effects. The third parties, which could include academic and civil society groups, must be allowed to evaluate systems pre-release and to publish their findings "without prior approval of the company."
"GenAI has the potential to change how things are done in positive ways. But it has also caused, and has the potential to cause, serious harm," the letter says, pointing to violence linked to excessive chatbot use over the past year, including suicides and a murder. In many of these cases, the letter says, GenAI products produced sycophantic and delusional outputs that affirmed whatever the user wanted to hear.
The AGs also advise companies to treat mental health incidents with the same urgency that tech companies treat cyber incidents, with clear and transparent policies and procedures.
Companies should develop and publish procedures for the "detection and response to sycophantic output and delusions," the letter said. In a similar fashion to the way data breaches are currently handled, companies should also clearly and immediately inform users if they were served sycophantic or delusional outputs, the letter said.
The letter also asks that companies perform "adequate and appropriate safety testing" on GenAI models to ensure they do not produce harmful sycophantic or delusional outputs. That testing should happen before a model is offered to the public, it adds.
TechCrunch was unable to reach Google, Microsoft, or OpenAI for comment prior to publication. This article will be updated if the companies respond.
Tech companies developing AI have had a warmer reception at the federal level.
The Trump administration has been notably pro-AI, and over the past year there have been multiple attempts to pass a nationwide moratorium on state-level AI regulation. So far, those efforts have failed, in part due to pressure from state officials.
Undeterred, Trump announced on Monday that he plans to sign an executive order next week that would limit states' ability to regulate AI. In a post on social media, the president said he hopes the EO will keep AI development from being strangled in its infancy by state regulation.

