
A recent HSBC analysis of the financial challenges facing OpenAI shows just how big the company is thinking. The company claims to have reached $20 billion in revenue. It has committed $1.4 trillion to building new data centers to power its ChatGPT interface. Yet even if it could generate more than $200 billion in revenue by 2030, HSBC estimates it would still need another $207 billion to survive.
These are huge sums of money.
But a dozen AI insiders who spoke with Fortune at the recent Web Summit in Lisbon described a different future for artificial intelligence. That future, they say, will be characterized by AI operations at a much smaller scale, often centered on AI "agents" that perform specialized, niche tasks and therefore don't require the gargantuan large language models that underpin OpenAI's ChatGPT, Google's Gemini, or Anthropic's Claude.
“Their valuation is based on the idea that bigger is better, but that’s not the case,” Babak Hodjat, chief artificial intelligence officer at Cognizant, told Fortune.
"We do use large language models. We don't need the largest language models. There is a threshold at which large language models are able to follow instructions in a limited domain, use tools, and actually communicate with other agents," he said. "Once that threshold is passed, that's enough."
For example, when DeepSeek launched a new model last January, it triggered a sell-off in tech stocks because the model reportedly cost just a few million dollars to develop. Hodjat said it also activates fewer parameters per request, making it significantly smaller than OpenAI's ChatGPT but comparably powerful. Below a certain size, some models don't require a data center at all; they can run on a MacBook, he said. "That's the difference, that's the trend," he said.
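The arithmetic behind "fewer parameters per request" can be sketched briefly. In a mixture-of-experts design, only a fraction of the model's total weights is used for any single token. The numbers below are purely illustrative assumptions, not the actual figures for DeepSeek or any other model:

```python
def active_params(total_params: float, num_experts: int,
                  experts_per_token: int, shared_fraction: float) -> float:
    """Rough estimate of parameters activated per token.

    shared_fraction: portion of weights (attention, embeddings, etc.)
    used on every token; the remainder is split evenly across experts,
    of which only `experts_per_token` are consulted per request.
    """
    shared = total_params * shared_fraction
    expert_pool = total_params - shared
    return shared + expert_pool * (experts_per_token / num_experts)

# Hypothetical 600B-parameter model: 256 experts, 8 routed per token,
# 10% of weights shared on every token.
billions_active = active_params(600e9, 256, 8, 0.10) / 1e9
print(round(billions_active, 1))  # only a small fraction of 600B is active
```

Under these made-up numbers, fewer than 80 billion of the 600 billion parameters touch any given request, which is the sense in which such a model is "smaller" at inference time than its headline parameter count suggests.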
Many companies are building services around AI agents or applications, betting that users want a specific application to perform a specific action. Superhuman (formerly Grammarly) operates an app store filled with "artificial intelligence agents that can live in the browser or in any of the thousands of apps that Grammarly has been given permission to run," said CEO Shishir Mehrotra.
Mozilla CEO Laura Chambers has adopted a similar strategy with the Firefox browser. "We have some AI features like the 'shake summary' feature, mobile smart tab grouping, link previews, translations, which all use AI. What we do with them is run them locally, so the data never leaves your device. It's not shared with the model, it's not shared with the LLM. We also have a little sidebar where you can select which model you want to use and use AI that way," she said.
Ami Badani, head of strategy and chief marketing officer at chipmaker Arm, told Fortune the company is model agnostic. "What we do is create custom extensions on top of the LLM for very specific use cases. Because, obviously, those use cases do vary from company to company," she said.
This approach, in which highly specialized AI agents operate like independent businesses, stands in contrast to large-scale, general-purpose AI platforms. One source put the question to Fortune this way: would you use ChatGPT to book a hotel room that suits your specific needs (perhaps you want a room with a bathtub instead of a shower, or a west-facing view), or would you use a dedicated agent with a mile-deep database of nothing but hotel data?
This approach is attracting significant investment capital. IBM Ventures, a $500 million AI-focused venture fund, invests in unglamorous AI projects that fill obscure enterprise niches. One of those investments was in a company called Not Diamond. The startup noted that 85% of companies using AI use more than one AI model. Some models are better than others at different tasks, so choosing the right model for the right task can become an important strategic choice for a company. Not Diamond has built a "model router" that automatically directs each task to the best-suited model.
“You need someone to help you solve this problem. We at IBM believe in a fit-for-purpose model strategy, which means you need to use the right model for the right workload. When you have a model router that helps you do that, it makes a huge difference,” Emily Fontaine, head of venture capital at IBM, told Fortune.
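The model-router idea described above can be sketched in a few lines. Everything here is a toy illustration: the model names and the keyword rules are invented for the example, and Not Diamond's actual product is far more sophisticated than string matching.

```python
# Hypothetical model catalog: a cheap coding specialist, a niche
# domain expert, and a general-purpose fallback.
ROUTES = {
    "code":    "small-code-model",
    "legal":   "domain-legal-model",
    "default": "general-llm",
}

def route(task: str) -> str:
    """Pick a model for a task via simple keyword matching (toy logic)."""
    text = task.lower()
    if any(kw in text for kw in ("python", "bug", "function", "code")):
        return ROUTES["code"]
    if any(kw in text for kw in ("contract", "clause", "compliance")):
        return ROUTES["legal"]
    return ROUTES["default"]

print(route("Fix this Python function"))     # small-code-model
print(route("Review this contract clause"))  # domain-legal-model
print(route("Plan a weekend in Lisbon"))     # general-llm
```

The point of the sketch is the "fit-for-purpose" shape of the decision: the router, not the user, decides which workload goes to which model, so a company can mix small specialist models with a large general one.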

