Flapping Airplanes and the promise of research-driven AI


A new AI lab called Flapping Airplanes launched on Wednesday with $180 million in seed funding from Google Ventures, Sequoia, and Index. The founding team is impressive, and their goal – finding a less data-hungry way to train large models – is genuinely interesting.

Based on what I’ve seen so far, though, I would have filed it as just another well-funded lab trying to make money from scale.

But there is something more exciting about the Flapping Airplanes project that I couldn’t quite make out until I read this post from Sequoia partner David Cahn.

As Cahn explains, Flapping Airplanes is one of the first labs to step away from the relentless scaling of data and compute that has defined most of the industry to date:

The scaling paradigm argues for devoting as many of society’s resources as economically feasible to scaling current LLMs, in the hope that this will lead to AGI. The research paradigm says that we are 2–3 research breakthroughs away from “AGI,” and as a result we need to devote resources to long-term research, especially projects that take 5–10 years to pay off.

(…)

A compute-first approach prioritizes cluster scale above all else, and heavily favors short-term wins (on the order of 1–2 years) over long-term bets (on the order of 5–10 years). A research-first approach spreads the bet over time, and must be willing to make many bets with a lower absolute probability of success that nonetheless expand the search space of what is possible.

Maybe the scaling believers are right, and there is no point focusing on anything other than building ever-bigger clusters. But with so many companies already pointing in that direction, it’s worth watching the ones that aren’t.


