From invisibility cloak to AI chip: Neurophos raises $110M to build tiny optical processor for inference


Twenty years ago, Duke University professor David R. Smith used artificial composite materials called “metamaterials” to build a real-life invisibility cloak. While the cloak didn’t work quite like Harry Potter’s, showing only a limited ability to hide objects from a single frequency of microwave light, the materials-science breakthrough went down as a milestone in electromagnetism research.

Today, Austin-based Neurophos, a photonics startup spun out of Duke University and Metacept (an incubator run by Smith), is applying that research to solve what may be the biggest problem facing AI labs and hyperscalers: how to scale computing power while keeping power consumption down.

The startup has developed a “metasurface modulator” whose optical properties let it act as a tensor core processor, performing the matrix-vector multiplication that sits at the heart of much AI work (especially inference) and that is currently handled by specialized GPUs and TPUs built from conventional silicon gates and transistors. By putting thousands of these modulators on a chip, Neurophos says, its “optical processing unit” can be faster than the silicon GPUs currently deployed en masse in AI data centers, and more efficient at inference (running a trained model), which can be quite an expensive task.
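To make the workload concrete, here is a minimal NumPy sketch (the layer size and values are made up for illustration) of the matrix-vector product a single dense layer computes during inference; this is the operation that GPU and TPU tensor cores, and Neurophos’ optical equivalent, are designed to accelerate.

```python
import numpy as np

# One dense layer of a trained model: inference boils down to y = W @ x + b,
# repeated layer after layer. Sizes here are illustrative only.
rng = np.random.default_rng(0)
W = rng.standard_normal((4096, 4096))  # trained weight matrix
b = rng.standard_normal(4096)          # bias vector
x = rng.standard_normal(4096)          # input activations

# The matrix-vector multiplication that accelerators are built to speed up.
y = W @ x + b
print(y.shape)  # (4096,)
```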

To finance the development of the chip, Neurophos just raised $110 million in a Series A round led by Gates Frontier (Bill Gates’ venture firm), with participation from Microsoft M12, Carbon Direct, Aramco Ventures, Bosch Ventures, Tectonic Ventures, Space Capital, and others.

Now, photonic chips are not new. In theory, they offer higher performance than traditional silicon because light generates less heat than electricity, can travel faster, and is less susceptible to temperature changes and electromagnetic interference.

But optical components tend to be larger than their silicon counterparts and can be difficult to mass-produce. They also require converters to shuttle data between the digital and analog domains, and those converters can themselves be bulky and power-hungry.

Neurophos, however, insists that the metasurface it developed solves all of these problems at once, because it is roughly “10,000 times” smaller than traditional optical transistors. That small size, the startup claims, makes it possible to fit thousands of units on a chip, yielding far greater efficiency than traditional silicon because the chip can perform more calculations simultaneously.


“When you shrink the optical transistor, you can do more math in the optical domain before you have to do that conversion back to the electronic domain,” Dr. Patrick Bowen, CEO and co-founder of Neurophos, told TechCrunch. “If you want to be fast, you have to solve the problem of energy efficiency first. Because if you want to take a chip and make it 100 times faster, it burns 100 times more power. So you get the privilege of going fast after you solve the problem of energy efficiency.”

The result, Neurophos says, is an optical processing unit capable of beating Nvidia’s B200 AI GPU. Bowen said the chip can run at 56 GHz, delivering a peak of 235 peta-operations per second (POPS) while consuming 675 watts, compared to the B200, which delivers around 9 POPS at 1,000 watts.
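Taken at face value, those peak figures imply a sizable performance-per-watt gap. The back-of-the-envelope calculation below uses only the numbers cited above; real-world efficiency will depend on workload and utilization.

```python
# Peak throughput-per-watt comparison using the figures cited in this article.
opu_pops, opu_watts = 235, 675      # Neurophos OPU claim
b200_pops, b200_watts = 9, 1_000    # Nvidia B200 figure as cited

opu_eff = opu_pops / opu_watts      # ~0.35 POPS per watt
b200_eff = b200_pops / b200_watts   # 0.009 POPS per watt
print(f"claimed peak-efficiency advantage: ~{opu_eff / b200_eff:.0f}x")  # ~39x
```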

Bowen said Neurophos has already signed up a number of customers (though he declined to name them), and that companies including Microsoft are “looking very closely” at the startup’s product.

Still, the startup is entering a crowded market dominated by Nvidia, the world’s most famous public company, whose products have more or less underpinned the entire AI boom. There are also other companies working on photonics, although some, like Lightmatter, have pivoted to focus on interconnects. And Neurophos is still a few years away from production, hoping its first chips will hit the market in mid-2028.

But Bowen believes the advances in performance and efficiency afforded by metasurface modulators will prove a considerable moat.

“What others are doing, and this includes Nvidia, is really evolutionary rather than revolutionary in terms of the basic physics of silicon, and it’s tied to TSMC’s progress. If you look at each successive TSMC node, on average, energy efficiency improves by about 15%, and that takes a few years,” he said.

“Even if you factor in the improvements Nvidia has made to its architecture over the years, when we come out in 2028 we’ll still have a big advantage over everyone else in the market, because we’re starting out at 50x Blackwell in energy efficiency and raw speed.”

And to overcome the mass-production problems that optical chips have traditionally faced, Neurophos says its chips can be made using standard silicon materials, tools, and foundry processes.

The fresh funding will be used to develop the company’s first integrated photonic computing system, including data center-ready OPU modules, a complete software stack, and early-access developer hardware. The company has also opened a San Francisco engineering site and expanded its headquarters in Austin, Texas.

“Modern AI inference demands monumental amounts of power and computation,” Dr. Marc Tremblay, corporate vice president and technical fellow for core AI infrastructure at Microsoft, said in a statement. “We need breakthroughs in computing similar to the leaps we’ve seen in our own AI models, and that’s what Neurophos’ technology and high-talent-density team deliver.”


