Inception emerges from stealth with a new type of AI model


Inception, a new Palo Alto-based company started by Stanford computer science professor Stefano Ermon, claims to have developed a novel AI model based on "diffusion" technology. Inception calls it a diffusion-based large language model, or "DLM" for short.

The generative AI models receiving the most attention now can be broadly divided into two types: large language models (LLMs) and diffusion models. LLMs, built on the transformer architecture, are used for text generation. Diffusion models, meanwhile, which power AI systems like Midjourney and OpenAI's Sora, are mainly used to create images, video, and audio.

Inception's model offers the capabilities of traditional LLMs, including code generation and question-answering, but with significantly faster performance and reduced computing costs, according to the company.

Ermon told TechCrunch that he has been studying how to apply diffusion models to text for a long time in his Stanford lab. His research was based on the idea that traditional LLMs are relatively slow compared to diffusion technology.

With LLMs, "you cannot generate the second word until you've generated the first one, and you cannot generate the third one until you generate the first two," Ermon said.
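A minimal sketch of the sequential dependency Ermon is describing, assuming a toy `next_token` function as a stand-in for a real model's forward pass (this is illustrative, not Inception's code):

```python
def next_token(context: list[str]) -> str:
    """Toy stand-in for an LLM forward pass that predicts one token from the context."""
    return f"tok{len(context)}"

def generate_autoregressive(prompt: list[str], n_new: int) -> list[str]:
    tokens = list(prompt)
    for _ in range(n_new):
        # Each new token depends on every token generated so far,
        # so this loop cannot be parallelized across output positions.
        tokens.append(next_token(tokens))
    return tokens

print(generate_autoregressive(["hello"], 5))
```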

Ermon was looking for a way to apply a diffusion approach to text because, unlike LLMs, which generate sequentially, diffusion models start with a rough estimate of the data they're creating (e.g., an image) and then refine it all at once.

Ermon hypothesized that generating and modifying large blocks of text in parallel was possible with diffusion models. After years of trying, Ermon and a student of his achieved a major breakthrough, which they detailed in a research paper published last year.
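To contrast with the sequential loop above, here is a minimal sketch of the parallel-refinement idea behind text diffusion, assuming a hypothetical `denoise_step` function in place of a trained denoising model (this is not the paper's actual algorithm):

```python
import random

def denoise_step(draft: list[str], step: int) -> list[str]:
    """Toy stand-in for a denoising model: revises every position in one pass."""
    return [f"tok{i}_v{step}" if random.random() < 0.5 else tok
            for i, tok in enumerate(draft)]

def generate_diffusion(length: int, n_steps: int) -> list[str]:
    draft = ["<mask>"] * length            # start from a fully "noisy" sequence
    for step in range(1, n_steps + 1):
        draft = denoise_step(draft, step)  # all positions are updated at each step
    return draft

print(generate_diffusion(length=8, n_steps=3))
```

The key difference is that the number of model passes is the (small) number of refinement steps, not the number of output tokens.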

Recognizing the breakthrough's potential, Ermon founded Inception last summer, tapping two former students, UCLA professor Aditya Grover and Cornell professor Volodymyr Kuleshov, to co-lead the company.

While Ermon declined to discuss Inception's funding, TechCrunch understands that the Mayfield Fund has invested.

Inception has already secured several customers, including unnamed Fortune 100 companies, by addressing their critical need for lower AI latency and higher speed, Ermon said.

"What we found is that our models can leverage the GPUs much more efficiently," Ermon said, referring to the computer chips commonly used to run models in production. "I think this is a big deal, because I think this is going to change the way people build language models."

Inception offers an API as well as on-premises and edge-device deployment options, support for model fine-tuning, and a suite of out-of-the-box DLMs for various use cases. The company claims its DLMs can run up to 10x faster than traditional LLMs while costing 10x less.

"Our 'small' coding model is as good as [OpenAI's] GPT-4o mini while more than 10 times as fast," a company spokesperson told TechCrunch. "Our 'mini' model outperforms small open-source models like [Meta's] Llama 3.1 8B and achieves more than 1,000 tokens per second."

"Tokens" is industry parlance for bits of raw data. One thousand tokens per second is an impressive speed indeed, assuming the claim holds up.
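As a back-of-the-envelope check on what that throughput would mean in practice (assuming, purely for illustration, a constant rate and a 500-token reply):

```python
# 1,000 tokens/second is the figure quoted by the company spokesperson above.
tokens_per_second = 1_000
response_tokens = 500  # assumed response length for illustration
print(f"{response_tokens / tokens_per_second:.2f} s")  # -> 0.50 s
```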


