Tech News
NVIDIA open-sources physical common-sense AI: Wave 3 of the AI story starts now
By Andreas Hassellof, CEO, Ombori
Artificial intelligence has moved in three clear waves:
- Recommendations – algorithms that quietly learned our clicks and served the next song or product.
- Chat – large language models that talk, translate and code on demand.
- Real world – systems that see, reason and act amid gravity, friction and people.
With yesterday’s release of Cosmos-Reason1-7B, NVIDIA just kicked off Wave 3 in earnest. The fully open model infers cause-and-effect and spells out its next safe action in plain language—no proprietary licence required.
CEO Jensen Huang has been calling this shift “the ChatGPT moment for general robotics,” and at COMPUTEX he doubled down, putting “Physical AI” at the heart of this year’s roadmap, right alongside new Blackwell GPUs and NVLink Fusion interconnects.
What suddenly becomes possible
| Everyday setting | New capability unlocked by open physical-sense AI |
| --- | --- |
| Warehouses | Arms feel the difference between “slippery” and “fragile,” cutting breakage and speeding mixed-SKU picking. |
| Hospitals | Autonomous carts slow on wet floors and detour around crash trolleys, trimming nurses’ walking marathons. |
| Crosswalks & junctions | Smart signals reason about pedestrians, strollers and delivery bots to halve wait times at peak. |
| Retail & hotels | Shelf-stocking and room-service robots weave through crowds and place items neatly, freeing staff for guests. |
| Buildings | HVAC and lighting that perceive real occupancy and airflow shave double-digit percentages off energy bills. |
Getting it out of the lab
Each real-world site remains a patchwork of cameras, motors, networks and safety rules, which is why pilots that should take days still drag on for months. Edge platforms such as Phygrid now hide that plumbing: choose the gate, aisle or cell, press Install, and Cosmos-Reason1-7B lands there automatically, with security and rollback handled. One click replaces weeks of glue code.
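To make the "one click replaces glue code" idea concrete, a deployment like this is typically expressed as a short declarative manifest. The sketch below is purely illustrative: every field name is an assumption for explanatory purposes, not Phygrid's actual schema or any published format.

```yaml
# Hypothetical edge-deployment manifest (illustrative schema only,
# not Phygrid's real format)
model: nvidia/Cosmos-Reason1-7B   # the open checkpoint being rolled out
target:
  site: warehouse-3               # physical location
  zone: aisle-12                  # the "gate, aisle or cell" picked in the UI
runtime:
  gpu: required                   # the 7B model needs local accelerator hardware
security:
  signed_artifacts: true          # platform verifies the model before install
rollback:
  on_failure: previous-version    # automatic rollback if the rollout fails
```

The point of such a descriptor is that site-specific wiring (camera feeds, motor controllers, network and safety policy) stays inside the platform, so the operator only declares *what* to run and *where*.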