Silicon Valley leaders, including White House AI & Crypto Czar David Sacks and OpenAI chief strategy officer Jason Kwon, caused a stir online this week for their comments about AI safety groups. In separate posts, each alleged that certain AI safety advocates are not what they seem, acting instead in their own interest or on behalf of billionaire backers pulling the strings.
AI safety groups that spoke to TechCrunch said the accusations from Sacks and OpenAI are Silicon Valley's latest attempt to intimidate its critics, but certainly not the first. In 2024, some venture capital firms spread rumors that SB 1047, California's AI safety bill, would send startup founders to jail. The Brookings Institution labeled the rumor one of many "misrepresentations" about the bill, but Governor Gavin Newsom ultimately vetoed it anyway.
Whether or not Sacks and OpenAI intended to intimidate critics, their actions have sufficiently spooked several AI safety advocates. Many nonprofit leaders contacted by TechCrunch last week asked to speak anonymously to spare their groups from retaliation.
The controversy underscores Silicon Valley's growing tension between building AI responsibly and building it into a massive consumer business, a theme my colleague Kirsten Korosec and I unpacked on this week's Equity podcast. We also dig into a new AI safety law passed in California to regulate chatbots, and OpenAI's approach to erotica in ChatGPT.
On Tuesday, Sacks wrote a post on X arguing that Anthropic, which has raised alarms about AI's potential to contribute to unemployment, cyberattacks, and catastrophic harms to society, is simply fearmongering to get legislation passed that would benefit itself and bury smaller startups in paperwork. Anthropic was the only major AI lab to endorse California's Senate Bill 53 (SB 53), a bill that sets safety reporting requirements for large AI companies and was signed into law last month.
Sacks was responding to a viral essay from Anthropic co-founder Jack Clark about his fears of AI. Clark delivered the essay as a speech at the Curve AI safety conference in Berkeley earlier. Sitting in the audience, it certainly felt like a genuine account of a technologist's reservations about his products, but Sacks didn't see it that way.
Sacks said Anthropic is running a "sophisticated regulatory capture strategy," though it's worth noting that a truly sophisticated strategy probably wouldn't involve making an enemy of the federal government. In a follow-up post on X, Sacks noted that Anthropic has positioned itself "consistently as a foe of the Trump administration."
Also this week, OpenAI's chief strategy officer, Jason Kwon, wrote a post on X explaining why the company is sending subpoenas to AI safety nonprofits such as Encode, a nonprofit that advocates for responsible AI policy. (A subpoena is a legal order demanding documents or testimony.) Encode filed an amicus brief in support of Musk's lawsuit against OpenAI, and other nonprofits have spoken out publicly against OpenAI's restructuring.
"This raised transparency questions about who is funding them and whether there is any coordination," Kwon said.
NBC News this week revealed that OpenAI sent subpoenas to Encode and six other nonprofits that criticized the company, asking for communications related to two of OpenAI's biggest opponents, Musk and Meta CEO Mark Zuckerberg. OpenAI also asked Encode for communications related to its support of SB 53.
One prominent AI safety leader told TechCrunch that there is a growing split inside OpenAI between its government affairs team and its research organization. While OpenAI's safety researchers frequently publish reports on the risks of AI systems, OpenAI's policy unit lobbied against SB 53, saying it would rather have uniform rules at the federal level.
Joshua Achiam, OpenAI's head of mission alignment, spoke out about the company sending subpoenas to nonprofits in a post on X this week.
"At what could be a risk to my entire career I will say: this doesn't look good," Achiam said.
Brendan Steinhauser, CEO of the AI safety nonprofit Alliance for Secure AI (which has not been subpoenaed by OpenAI), told TechCrunch that OpenAI seems convinced its critics are part of a conspiracy led by Musk. He argues that this is not the case, and that much of the AI safety community is quite critical of xAI's safety practices, or lack thereof.
"On OpenAI's part, this is meant to silence critics and intimidate them, and to dissuade other nonprofits from doing the same," Steinhauser said. "As for Sacks, I think he's concerned that the [AI safety] movement is growing and people want to hold these companies accountable."
Sriram Krishnan, the White House's senior policy advisor for AI and a former general partner at a16z, joined the conversation this week with a social media post of his own, calling AI safety advocates out of touch. He urged AI safety organizations to talk to "people in the real world using, selling, adopting AI in their homes and organizations."
A recent Pew study found that roughly half of Americans are more concerned than excited about AI, though it is unclear exactly what worries them. Another recent study went into more detail and found that voters care more about job losses and deepfakes than about the catastrophic risks posed by AI, which is where the AI safety movement is generally focused.
Addressing these safety concerns could come at the expense of the AI industry's rapid growth, a trade-off that worries many in Silicon Valley. With AI investment propping up much of the American economy, the fear of overregulation is understandable.
But after years of unregulated AI progress, the AI safety movement appears to be gaining real momentum heading into 2026. Silicon Valley's attempts to push back against safety-focused groups may be a sign that the movement is working.

