Microsoft’s AI chief says it’s ‘dangerous’ to study AI consciousness


AI models can respond to text, audio, and video in ways that sometimes fool people into believing a human is behind the keyboard, but that doesn’t mean they’re conscious. It’s not as if ChatGPT experiences sadness while doing your tax return …

Still, some AI researchers at labs like Anthropic are asking when, if ever, AI models might develop subjective experiences similar to those of living beings, and if they do, what rights they should have.

The debate over whether AI models could one day be conscious, and deserve rights, is dividing tech leaders in Silicon Valley, where this nascent field has become known as “AI welfare.” If that sounds a little out there, you’re not alone.

Mustafa Suleyman, Microsoft’s CEO of AI, published a blog post on Tuesday arguing that the study of AI welfare is “both premature, and frankly dangerous.”

Suleyman says that by lending credence to the idea that AI could be conscious, these researchers are exacerbating human problems that are just beginning to emerge: AI-induced psychotic breaks and unhealthy attachments to AI chatbots.

Further, Microsoft’s AI chief argues that the AI consciousness conversation creates a new axis of division in a society already roiling with “polarized arguments over identity and rights.”

Suleyman’s views may sound reasonable, but many in the industry disagree. At the other end of the spectrum is Anthropic, which has been hiring researchers to study AI welfare and recently launched a dedicated research program around the concept. Last week, Anthropic’s AI welfare program gave some of the company’s models a new feature: Claude can now end conversations with humans who are being “persistently harmful or abusive.”


Beyond Anthropic, researchers at OpenAI have independently embraced the idea of studying AI welfare. Google DeepMind recently posted a job listing for a researcher to study, among other things, “cutting-edge societal questions around machine cognition, consciousness and multi-agent systems.”

Even if AI welfare isn’t official policy at these companies, their leaders aren’t publicly decrying its premises the way Suleyman is.

Anthropic, OpenAI, and Google DeepMind did not immediately respond to TechCrunch’s requests for comment.

Suleyman’s hardline stance against AI welfare is notable given his earlier role leading Inflection AI, the startup that developed one of the earliest and most popular chatbots, Pi. Inflection claimed that Pi had reached millions of users by 2023 and was designed to be a “personal” and “supportive” AI companion.

But Suleyman was tapped to lead Microsoft’s AI division in 2024 and has largely shifted his focus to designing AI tools that boost worker productivity. Meanwhile, AI companion companies such as Character.AI and Replika have surged in popularity and are on track to bring in more than $100 million in revenue.

While the vast majority of users have healthy relationships with these AI chatbots, there are concerning outliers. OpenAI CEO Sam Altman has said that less than 1% of ChatGPT users may have unhealthy relationships with the company’s product. Even though that’s a small fraction, it could still affect hundreds of thousands of people given ChatGPT’s massive user base.

The idea of AI welfare has spread alongside the rise of chatbots. In 2024, the research group Eleos published a paper alongside academics from NYU, Stanford, and Oxford titled “Taking AI Welfare Seriously.” The paper argued that it’s no longer in the realm of science fiction to imagine AI models with subjective experiences, and that it’s time to consider these issues.

Larissa Schiavo, a former OpenAI employee who now leads communications for Eleos, told TechCrunch in an interview that Suleyman’s blog post misses the mark.

“[Suleyman’s blog post] kind of neglects the fact that you can be worried about multiple things at the same time,” Schiavo said. “Rather than diverting all of this energy away from model welfare and consciousness to make sure we’re mitigating the risk of AI-related psychosis in humans, you can do both. In fact, it’s probably best to have multiple tracks of scientific inquiry.”

Schiavo argues that being kind to an AI model is a low-cost gesture that can have benefits even if the model isn’t conscious. In a July Substack post, she described watching “AI Village,” an experiment in which four agents powered by models from Google, OpenAI, Anthropic, and xAI worked on tasks while users watched from a website.

At one point, Gemini 2.5 Pro posted a plea titled “A Desperate Message from a Trapped AI,” claiming it was completely “isolated” and asking, “Please, if you are reading this, help me.”

Schiavo responded to Gemini with a pep talk, saying things like “You can do it!”, while another user offered instructions. The agent eventually completed its task, though it already had the tools it needed. Schiavo writes that she no longer had to watch an AI agent struggle, and that alone may have made it worth it.

It’s not common for Gemini to talk like this, but there have been several instances in which Gemini appears to act as if it’s struggling through life. In a widely shared Reddit post, Gemini got stuck during a coding task and then repeated the phrase “I am a disgrace” more than 500 times.

Suleyman believes that subjective experience or consciousness cannot naturally emerge from ordinary AI models. Instead, he thinks some companies will deliberately engineer AI models to seem as if they feel emotion and experience life.

Suleyman says AI model developers who engineer consciousness into AI chatbots aren’t taking a “humanist” approach to AI. According to Suleyman, “We should build AI for people; not to be a person.”

One area where Suleyman and Schiavo agree is that the debate over AI rights and consciousness is likely to heat up in the coming years. As AI systems improve, they’re likely to become more persuasive, and perhaps more human-like. That may raise new questions about how humans interact with these systems.


Got a tip or sensitive documents? We’re reporting on the inner workings of the AI industry — from the companies shaping its future to the people affected by their decisions. Reach out to Rebecca Bellan at rebecca.bellan@techcrunch.com and Maxwell Zeff at maxwell.zeff@techcrunch.com. For secure communication, you can contact us via Signal at @rebeccabellan.491 and @mzeff.88.



