
This week, some of the world’s biggest names in artificial intelligence gathered in the Swiss ski resort town of Davos for the World Economic Forum (WEF).
Artificial intelligence dominates many of the discussions among business executives, government leaders, academics, and non-governmental groups. But there is stark disagreement over how close current models come to replicating human intelligence and how large the technology’s short-term economic impact could be.
Speaking separately in Davos, two artificial intelligence experts asserted that the large language models (LLMs) that have fascinated the world are not the path to human intelligence.
Demis Hassabis, the Nobel Prize–winning head of Google DeepMind, which develops Google’s Gemini models, said today’s artificial intelligence systems, while impressive, are “far from” human-level artificial general intelligence (AGI).
Yann LeCun, an AI pioneer who won the Turing Award, computer science’s most prestigious prize, for his work on neural networks, went a step further, saying the LLMs that underpin all of the leading AI models will never achieve human-like intelligence and that a completely different approach is needed.
Their views differ sharply from those advocated by executives at Google’s main AI rivals, OpenAI and Anthropic, who claim their AI models are on the verge of rivaling human intelligence.
Anthropic CEO Dario Amodei told an audience in Davos that AI models will replace all software developer jobs within a year and reach “Nobel-level” scientific research in multiple fields within two years. He said 50% of white-collar jobs will disappear within five years.
OpenAI CEO Sam Altman (who did not attend Davos this year) has said his company has begun to move beyond AGI toward “superintelligence,” artificial intelligence that is smarter than all humans combined.
Do LLMs lead to general intelligence?
In a joint appearance at the World Economic Forum, Hassabis and Amodei said there is a 50% chance of achieving AGI within ten years, although not through exactly the same kind of models as today’s artificial intelligence systems.
In a later talk sponsored by Google, Hassabis elaborated: “Perhaps we need one or two more breakthroughs to achieve artificial general intelligence.” He pointed to several key gaps, including the ability to learn from only a few examples, the ability to keep learning continuously, better long-term memory, and improved reasoning and planning.
“My definition of (AGI) is a system that exhibits the full range of human cognitive abilities, and I mean all of them,” he said, including “the highest levels of human creativity that we have always celebrated and the scientists and artists we admire.” While advanced AI systems are already beginning to solve difficult mathematical problems and prove previously unproven conjectures, AI would need to come up with breakthrough conjectures of its own, a “much more difficult” task, to be considered equivalent to human intelligence.
Speaking at the House of Artificial Intelligence in Davos, LeCun sharpened his criticism of the field’s singular focus on LLMs. “The reason LLMs are so successful is because language is easy,” he argued.
He contrasted it with the challenges posed by the physical world. “We have systems that can pass the bar exam, that can write code… but they don’t really deal with the real world. That’s why we don’t have home robots (and) we don’t have Level 5 self-driving cars,” he said.
LeCun, who left Meta and established the Advanced Machine Intelligence Laboratory (AMI) in November of last year, argued that the artificial intelligence industry has become dangerously monolithic. “The AI industry is all about LLMs,” he said.
He said Meta’s decision to focus exclusively on LLMs and to invest tens of billions of dollars in building massive data centers contributed to his decision to leave the tech giant. LeCun added that his view that LLMs and generative AI are not the path to human-level AI, let alone “superintelligence,” ran counter to CEO Mark Zuckerberg’s wishes and made him unpopular within the company.
“In Silicon Valley, everyone is doing the same thing. They’re all digging the same trenches,” he said.
The fundamental limitation, LeCun believes, is that current systems cannot build a “model of the world” that predicts what is most likely to happen next and connects cause and effect. “I can’t imagine that we could build agent systems that don’t have the ability to predict the consequences of their actions in advance,” he said. “The way we behave in the world is that we know we can predict the consequences of our actions, which is why we are able to plan.”
LeCun’s new company hopes to develop these world models using video data. But while some video AI models try to predict pixels frame by frame, LeCun’s approach aims to operate at a higher level of abstraction that maps more closely onto objects and concepts.
“This will be the next AI revolution,” he said. “We will never achieve human-level intelligence by training LLMs or by training on text alone. We need the real world.”
What do companies think?
Hassabis puts the timetable for true, human-level AGI at “five to ten years.” But the trillions of dollars flowing into artificial intelligence show that the business world is not waiting for an answer.
For many business leaders, the debate over AGI may be somewhat academic. Ravi Kumar, CEO of Cognizant, said the more pressing question is whether companies can capture the tremendous value AI already offers.
According to a Cognizant research report released ahead of Davos, current artificial intelligence technology could add US$4.5 trillion in labor productivity in the United States if companies implement it effectively.
But Kumar told Fortune that most businesses have yet to do the hard work of restructuring their operations or retraining their workforces to harness AI’s full potential.
“If you start thinking about reinvention (of existing businesses), that $4.5 trillion is going to create real value for businesses,” he said. Doing so, he added, will also require what he calls the “integration” of human labor with digital labor performed by artificial intelligence.
“Skills are no longer secondary,” he argued. “They have to be part of the infrastructure story that moves people into the future, creates higher wages and upward social mobility, and makes this an effort to create shared prosperity.”

