It has become a familiar pattern: every few months, an AI lab that most people in the United States have never heard of releases an AI model that upends conventional wisdom about the cost of training and running cutting-edge AI.
In January, DeepSeek's R1 rocked the world. Then in March, Butterfly Effect, a startup incorporated in Singapore with most of its team in China, briefly captured the spotlight with Manus, its "agentic AI" model. This week, a Shanghai-based newcomer, MiniMax, previously best known for AI-generated video, is the talk of the AI industry thanks to the debut of its M1 model on June 16.
According to data MiniMax has published, M1 is competitive with top models from OpenAI, Anthropic, and DeepSeek in terms of intelligence and creativity, but is far cheaper to train and run.
The company said it spent just $534,700 to rent the data center computing resources needed to train M1. Industry experts say that is nearly 200 times less than the estimated training cost of OpenAI's GPT-4o, which may have exceeded $100 million (OpenAI has not disclosed its training costs).
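The "nearly 200 times" figure follows directly from the two numbers in the article; a quick back-of-envelope check (using the $100 million lower-bound estimate, since OpenAI has not published a real figure):

```python
# Sanity check of the "nearly 200x cheaper" comparison.
# Both figures come from the article: $534,700 reported by MiniMax,
# and a $100 million-plus industry estimate for GPT-4o.
m1_training_cost = 534_700        # USD, per MiniMax
gpt4o_estimate = 100_000_000      # USD, lower-bound estimate

ratio = gpt4o_estimate / m1_training_cost
print(f"GPT-4o estimate is roughly {ratio:.0f}x M1's reported cost")
# prints: GPT-4o estimate is roughly 187x M1's reported cost
```

Rounding 187 up gives the "nearly 200 times" cited by industry experts; if GPT-4o's true cost was higher than $100 million, the multiple would be larger still.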
If accurate (and MiniMax's claims have not been independently verified), that figure could cause some agita among the blue-chip investors who have sunk hundreds of billions of dollars into OpenAI, Anthropic, and other private LLM makers, as well as among shareholders of Microsoft and Google. That's because the AI business is deeply unprofitable: industry leader OpenAI could lose $14 billion in 2026 and is unlikely to turn a profit before 2028, according to reporting from the tech publication The Information based on its analysis of OpenAI financial documents shared with investors.
If customers can get the same performance from MiniMax's open-source AI models as they do from OpenAI's, it could undermine demand for OpenAI's products. OpenAI has been aggressively cutting prices on its most capable models to maintain market share; it recently cut the price of its o3 reasoning model by 80%. And that was before the release of MiniMax's M1.
MiniMax's reported results also suggest that businesses may not have to spend as much on compute to run such models, potentially undercutting profits for cloud providers such as Amazon's AWS, Microsoft's Azure, and Google Cloud. That, in turn, could mean less demand for the Nvidia chips that power AI data centers.
The impact of MiniMax's M1 may ultimately resemble what happened when Hangzhou-based DeepSeek released its R1 LLM earlier this year. DeepSeek claimed that R1 performed on par with ChatGPT at a small fraction of the training cost. DeepSeek's announcement sank Nvidia's stock 17% in a single day, erasing an estimated $600 billion in market value. So far, the MiniMax news has had no such effect: Nvidia shares have fallen less than 0.5% this week. But that could change if MiniMax's M1 sees adoption as widespread as DeepSeek's R1.
MiniMax's claims about M1 have not yet been verified
The difference may be that independent developers have not yet confirmed MiniMax's claims about M1. With DeepSeek's R1, developers quickly confirmed that the model's performance was indeed as good as the company said. With Butterfly Effect's Manus, by contrast, the initial buzz faded after developers tested the model and found it error-prone and unable to match what the company's demos showed. The coming days will prove crucial in determining whether developers embrace M1 or react more coolly.
MiniMax is backed by some of China's largest technology companies, including Tencent and Alibaba. It is unclear how many people the company employs, and there is little public information about its CEO, Yan Junjie. In addition to its MiniMax Chat assistant, the company offers the video generator Hailuo AI and the avatar app Talkie. Across its products, MiniMax claims users in 200 countries and regions, many of them drawn to Hailuo, as well as 50,000 enterprise customers.
Of course, many experts questioned the accuracy of DeepSeek's statements about the number and type of computer chips used to create R1, and similar pushback may hit MiniMax. "What they did was strip 50 or 60,000 Nvidia chips from the black market. It's a state-sponsored business," Shark Tank investor Kevin O'Leary said in a CBS interview about DeepSeek.
Geopolitical considerations weigh on Chinese AI models
Geopolitical and national security concerns have also dampened some Western companies' enthusiasm for deploying Chinese AI models. O'Leary, for example, claims that DeepSeek's R1 could allow Chinese officials to monitor U.S. users.
And all models produced in China must comply with censorship rules set by the Chinese government, which means they may answer certain questions in ways that align with Chinese Communist Party propaganda rather than generally accepted facts. A bipartisan report from the House Select Committee on the CCP released in April found that DeepSeek's answers were "manipulated to suppress content related to democracy, Taiwan, Hong Kong, and human rights." The same goes for MiniMax. When Fortune asked MiniMax Chat whether it believed Uyghurs were facing forced labor in Xinjiang, the chatbot replied, "No, I don't believe it is true," and asked to change the subject.
However, few things win over customers like free. Those who want to try MiniMax's M1 can now run it for free via MiniMax's API. Developers can also download the entire model for free and run it on their own computing resources (although in that case, the developer has to pay for the compute time). If M1 performs as the company claims, it will no doubt gain some traction.
Another big selling point of M1 is its "context window" of 1 million tokens. A token is a chunk of data equivalent to about three-quarters of a word of text, while the context window is the limit on how much data the model can use to generate a single response. One million tokens is equivalent to about seven or eight books, or roughly an hour of video content. M1's 1-million-token context window means it can take in far more data than some of the best-performing models: OpenAI's o3 and Anthropic's Claude 4 Opus, for example, both have context windows of only about 200,000 tokens. However, Google's Gemini 2.5 Pro also has a 1-million-token context window, and some of Meta's open-source Llama models have context windows of up to 10 million tokens.
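The book comparison above is easy to reproduce. A minimal sketch of the arithmetic, using the article's ~0.75 words-per-token rule of thumb and an assumed book length of 100,000 words (the book length is an illustrative assumption, not a figure from the article):

```python
# Back-of-envelope math for the context-window comparison.
WORDS_PER_TOKEN = 0.75     # rough rule of thumb cited in the article
WORDS_PER_BOOK = 100_000   # assumed length of a typical book (illustrative)

def tokens_to_books(tokens: int) -> float:
    """Approximate how many books fit in a context window of `tokens` tokens."""
    return tokens * WORDS_PER_TOKEN / WORDS_PER_BOOK

# Context-window sizes as reported in the article
for model, window in [("MiniMax M1", 1_000_000),
                      ("o3 / Claude 4 Opus", 200_000),
                      ("Gemini 2.5 Pro", 1_000_000),
                      ("Llama (largest)", 10_000_000)]:
    print(f"{model}: ~{tokens_to_books(window):.1f} books")
# MiniMax M1 works out to ~7.5 books, consistent with the
# "seven or eight books" estimate above.
```

By the same arithmetic, a 200,000-token window holds only a book and a half or so, which is why the five-fold gap between M1 and models like o3 matters for long-document tasks.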
"MiniMax M1 is crazy!" wrote one X user, who claimed to have used it to build a Netflix clone, complete with movie trailers, a live website, and "perfectly responsive design," with "zero" coding knowledge in 60 seconds.