Nvidia released a stack of robot foundation models, simulation tools, and new edge hardware at CES 2026, a move that signals the company’s ambition to become the standard platform for generalist robotics, much as Android is the operating system for smartphones.
Nvidia’s move into robotics reflects a broader industry shift as AI moves beyond the cloud and into machines that can learn and reason in the physical world, enabled by cheaper sensors, advanced simulation, and AI models that increasingly generalize across tasks.
Nvidia announced details Monday of its full-stack ecosystem for physical AI, including new open-source models that allow robots to reason, plan, and adapt across many different tasks and environments rather than remaining narrow, task-specific bots, all available on Hugging Face.
The models include: Cosmos Transfer 2.5 and Cosmos Predict 2.5, two world models for synthetic data generation and robot policy evaluation in simulation; Cosmos Reason 2, a reasoning vision language model (VLM) that allows AI systems to see, understand, and act in the physical world; and Isaac GR00T N1.6, its next-generation vision language action (VLA) model built for humanoid robots. GR00T relies on Cosmos Reason as its brain and unlocks full-body control, letting a humanoid move and manipulate objects simultaneously.
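Because the checkpoints are distributed through Hugging Face, pulling one down locally takes only a few lines with the huggingface_hub client. A minimal sketch, assuming a hypothetical repo id in Nvidia’s Hub namespace (the actual GR00T N1.6 repository name may differ):

```python
# Minimal sketch: fetch a robot foundation model checkpoint from the
# Hugging Face Hub. The repo id below is hypothetical, for illustration;
# check Nvidia's Hub namespace for the actual GR00T N1.6 repository.
from huggingface_hub import snapshot_download

local_dir = snapshot_download(
    repo_id="nvidia/GR00T-N1.6",  # hypothetical repo id
    revision="main",
)
print(f"Checkpoint downloaded to: {local_dir}")
```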
Nvidia also introduced Isaac Lab-Arena at CES, an open-source simulation framework hosted on GitHub and another component of the company’s physical AI platform, enabling robot capabilities to be tested safely in virtual environments.
The platform promises to solve a critical industry challenge: as robots learn increasingly complex tasks, from precise object handling to cable installation, validating those capabilities in a physical environment can be expensive, slow, and risky. Isaac Lab-Arena addresses this by combining resources, task scenarios, training tools, and established benchmarks like Libero, RoboCasa, and RoboTwin, creating a unified standard where the industry previously lacked one.
Supporting the ecosystem is Nvidia OSMO, an open-source command center that serves as connecting infrastructure, integrating workflows from data generation through training across desktop and cloud environments.
And to help power everything, there is the new Blackwell-powered Jetson T4000, the newest member of the Thor family of edge modules. Nvidia presents it as a cost-effective computing upgrade, delivering 1,200 teraflops of AI compute and 64 gigabytes of memory while running efficiently at 40 to 70 watts.
Nvidia is also deepening its partnership with Hugging Face so more people can experiment with robot training without needing expensive hardware or specialized knowledge. The collaboration integrates Nvidia’s Isaac and GR00T technologies into Hugging Face’s LeRobot framework, connecting 2 million Nvidia robotics developers with 13 million Hugging Face AI builders. The open-source developer platform’s Reachy 2 humanoids now work directly with Nvidia’s Jetson Thor chips, allowing developers to experiment with different AI models without being locked into a proprietary system.
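For a feel of what working in the LeRobot framework looks like today, here is a minimal sketch that loads one of the library’s public sample datasets from the Hub, assuming the LeRobotDataset loader from recent lerobot releases (import paths have shifted between versions):

```python
# Minimal sketch: load a LeRobot-format dataset from the Hugging Face Hub
# and iterate over it with a standard PyTorch DataLoader. The import path
# follows recent lerobot releases and may differ in other versions.
import torch
from lerobot.common.datasets.lerobot_dataset import LeRobotDataset

# "lerobot/pusht" is one of the library's public sample datasets; any
# LeRobot-format dataset on the Hub can be substituted.
dataset = LeRobotDataset("lerobot/pusht")
loader = torch.utils.data.DataLoader(dataset, batch_size=32, shuffle=True)

batch = next(iter(loader))
# Each batch is a dict mixing tensors (images, states, actions) and metadata.
print({k: tuple(v.shape) for k, v in batch.items() if isinstance(v, torch.Tensor)})
```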
The bigger picture here is that Nvidia is trying to make robotics development more accessible while positioning itself as the vendor of the underlying hardware and software, just as Android is the standard for smartphone makers.
There are early signs that Nvidia’s strategy is working. Robotics is Hugging Face’s fastest-growing category, with Nvidia models leading downloads. Robotics companies from Boston Dynamics and Caterpillar to Franka Robotics and NEURA Robotics have used Nvidia technology.
Follow all TechCrunch coverage of the annual CES conference here.

