From Simulation to Production: How to Build Robots With AI

The next generation of robots will be generalist-specialists — capable of understanding instructions and learning broad skills while also trainable for specialized tasks.

Think of them as jacks of all trades that can also master specific jobs. 

Building these robots requires integrated cloud-to-robot workflows that make it seamless to collect and generate data, train and evaluate control policies, and deploy them safely onto physical machines. These generalist-specialist systems depend on reasoning vision-language-action (VLA) models to perceive, understand and act intelligently across diverse tasks.

To accelerate this shift, the open NVIDIA Isaac platform provides robotics developers with everything they need — models, data pipelines, simulation frameworks, runtime libraries — to build a robot and deploy it at scale with NVIDIA’s three-computer solution. NVIDIA even provides an open VLA model, NVIDIA Isaac GR00T N, which gives developers a powerful foundation to bootstrap and post-train their own robotic intelligence.

These models, libraries and frameworks can run in the cloud or on edge AI infrastructure — and can now be further accelerated with the integration of long-running agents like OpenClaw.

With the latest agent-friendly NVIDIA Isaac GR00T models, Isaac robot simulation and learning frameworks, as well as edge AI systems announced this week at NVIDIA GTC, NVIDIA is giving developers new, powerful tools for the generalist-specialist era of autonomy.

These workflows are open and composable, so developers can mix and match components, bring their own tools and data, and accelerate their pipeline from prototype to real-world deployment.

Agility uses NVIDIA Isaac open frameworks to bring its robots from simulation to reality.

It all starts with data.

Turning Compute Into Data

Just a couple of years ago, scaling a robotics pipeline depended on a developer’s ability to manually collect data: A robot’s learning depended on its exposure to different scenarios and real-world environments.

NVIDIA open libraries and frameworks change the equation by blending real-world signals — sensor logs and teleoperation demonstrations — with simulation-generated data to quickly turn cloud compute into large quantities of usable data.

Generating high-fidelity, physically accurate synthetic data helps robotics developers overcome the limitations of physical data collection, where it can be difficult or impossible to gather enough information about rare edge cases. These edge cases may be hard or unsafe to capture physically, but they’re essential for a robot to master before deploying at scale in unpredictable, real-world environments.
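Synthetic data generation for edge cases often rests on domain randomization: perturbing scene parameters around a nominal, real-world configuration so a single captured scenario yields many training variants. Here’s a minimal, library-free Python sketch of the idea — the scene parameters and function names are illustrative, not part of any NVIDIA API:

```python
import random

# Nominal scene parameters for a pick-and-place scenario (illustrative only).
NOMINAL_SCENE = {"light_intensity": 1000.0, "object_x": 0.30, "object_y": 0.10, "friction": 0.8}

def randomize_scene(nominal, jitter=0.2, rng=random):
    """Return a copy of the scene with each parameter perturbed by up to +/- jitter (relative)."""
    return {k: v * (1.0 + rng.uniform(-jitter, jitter)) for k, v in nominal.items()}

def generate_dataset(n_variants, seed=0):
    """Turn one nominal scenario into n randomized variants."""
    rng = random.Random(seed)
    return [randomize_scene(NOMINAL_SCENE, rng=rng) for _ in range(n_variants)]

variants = generate_dataset(1000)
print(len(variants))  # 1000 randomized scenes derived from a single real-world scenario
```

In a production pipeline the randomized parameters would drive a physically accurate renderer rather than stay in a dictionary, but the scaling logic — one real capture fanning out into thousands of variants — is the same.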

While synthetic data today makes up just 20% of AI training data for edge scenarios, it’s expected to constitute more than 90% of edge scenario data by 2030, according to a report by Gartner. 

NVIDIA is propelling this shift with libraries and open frameworks that fuel an entire factory for realistic synthetic data based on the physical world. 

NVIDIA Omniverse NuRec accelerated 3D Gaussian splatting libraries, now in general availability, turn real-world sensor data into OpenUSD-based interactive simulations in NVIDIA Isaac Sim, an open source robotics simulation framework. This enables developers to scan and recreate real worlds from sensor data, making it easy to safely test robots in simulations based on real physical interactions. 

Your browser does not support the video tag.

FieldAI uses NVIDIA Omniverse NuRec together with its own robot foundation models to help industrial customers deploy robots and physical AI into their workflows.

Real data can also be brought in from other devices using teleoperation. NVIDIA Isaac Teleop, also in general availability, enables developers to harness data collected through teleoperation devices — like extended-reality headsets, body trackers and gloves — to create demonstration data in the real world and in simulation, which can then be used to train robots in simulation environments like NVIDIA Isaac Lab.
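At its core, teleoperation data collection pairs each human input with the robot state that results, producing demonstration episodes that imitation learning can consume. A hedged, self-contained sketch of that pattern — the `Episode` structure and record loop are illustrative stand-ins, not the Isaac Teleop API:

```python
from dataclasses import dataclass, field

@dataclass
class Step:
    command: tuple      # operator input, e.g. a hand-pose delta from an XR controller
    observation: tuple  # robot/sensor state observed after applying the command

@dataclass
class Episode:
    task: str
    steps: list = field(default_factory=list)

def record_episode(task, teleop_stream):
    """Consume (command, observation) pairs from a teleop device and log them as one episode."""
    ep = Episode(task)
    for command, observation in teleop_stream:
        ep.steps.append(Step(command, observation))
    return ep

# Stand-in for a real device stream: two fake (command, observation) pairs.
fake_stream = [((0.0, 0.1), (0.0, 0.1)), ((0.1, 0.0), (0.1, 0.1))]
demo = record_episode("fold_towel", fake_stream)
print(len(demo.steps))  # 2
```

A real pipeline would stream poses from headsets or gloves at a fixed rate and serialize episodes to disk, but the episode-of-steps structure is the common shape such demonstration data takes.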

These datasets are then amplified using the new NVIDIA Physical AI Data Factory Blueprint that unifies data augmentation, evaluation and orchestration into a single pipeline. 

Powered by NVIDIA Cosmos open world foundation models and NVIDIA OSMO, an open source, agentic orchestrator, this reference workflow provides a scalable, production-ready data engine for robotics. Using the blueprint, developers can turn a single real-world scenario into new and varied synthetic possibilities in a fraction of the time it would take to collect similar data in the real world.
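The blueprint’s core pattern — augment, evaluate, orchestrate — can be sketched as a simple pipeline: each real trajectory fans out into candidate variants, and only those passing an evaluation filter enter the training set. Everything below is an illustrative stand-in, not the actual blueprint code:

```python
import random

def augment(trajectory, n, rng):
    """Fan one trajectory out into n noisy variants."""
    return [[x + rng.gauss(0, 0.01) for x in trajectory] for _ in range(n)]

def evaluate(trajectory, limit=1.0):
    """Keep only plausible variants (here: all values within a simple bound)."""
    return all(abs(x) <= limit for x in trajectory)

def data_factory(real_trajectories, n_variants=10, seed=0):
    """Augment every real trajectory, then filter: the amplify-and-vet loop in miniature."""
    rng = random.Random(seed)
    out = []
    for traj in real_trajectories:
        out.extend(v for v in augment(traj, n_variants, rng) if evaluate(v))
    return out

dataset = data_factory([[0.1, 0.2, 0.3]])
print(len(dataset))  # 10 vetted variants from a single real trajectory
```

In the actual blueprint, the augmentation stage would be a world foundation model generating varied scenes and the orchestrator would schedule these stages across cloud compute, but the fan-out-then-filter flow is the same.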

In addition to simulating the environment and data, robot builders need to simulate the robot itself. Using NVIDIA Isaac Sim, developers can choose from an array of humanoids, autonomous mobile robots and robot arms, and rig the virtual model to real-world specifications.

Isaac Sim integrates with PTC Onshape so developers can easily rig and modify their robots in simulation.

The robot is rendered in OpenUSD, so it can seamlessly interact with the generated data and environment. Robot movements and trajectories can be recorded, replayed and used to train AI models — all safely in simulation before ever touching real hardware.
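Record-and-replay in simulation amounts to logging the robot’s joint state at each step and driving the simulated robot back through that log later, so a behavior can be inspected deterministically before hardware deployment. A minimal sketch of the idea, with a toy one-joint “simulator” standing in for a real simulation framework:

```python
def simulate(policy, n_steps, state=0.0):
    """Run a control policy on a one-joint toy system and record the joint trajectory."""
    trajectory = []
    for _ in range(n_steps):
        state += policy(state)      # apply the policy's commanded delta
        trajectory.append(state)
    return trajectory

def replay(trajectory):
    """Re-drive the simulated joint through a recorded trajectory, step by step."""
    for target in trajectory:
        yield target  # in a real simulator, the joint would be set to this target each step

# Record a run of a simple proportional policy, then replay it.
recorded = simulate(lambda s: 0.1 * (1.0 - s), n_steps=5)
assert list(replay(recorded)) == recorded  # replay reproduces the recording exactly
```

Because the replayed motion is identical to the recording, discrepancies seen later on real hardware can be attributed to the sim-to-real gap rather than to nondeterminism in the log.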

Putting AI Through Its Paces: Policy Training 

Once the teaching materials — the datasets — are gathered, it’s time for the robot to learn new tasks. This starts with the robot brain, powered by reasoning VLAs such as GR00T. 

The VLA can be post-trained using data specific to its intended task. For example, a laundry-folding robot must be trained to grasp a clothing item, identify its shape, fold it correctly and stack it neatly atop other folded items. A cooking robot might need to become an expert at slicing, stirring and sautéing ingredients. And a hospital care robot must learn how to navigate a hallway, find an elevator and hand items to clinicians or patients.
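Post-training a generalist policy on task-specific demonstrations is, at its simplest, supervised fine-tuning: nudge the model’s parameters until it reproduces the demonstrated actions. A deliberately tiny sketch with a linear policy and plain gradient descent — this illustrates the concept only and bears no resemblance to GR00T’s actual architecture or training code:

```python
def post_train(demos, w=0.0, b=0.0, lr=0.1, epochs=200):
    """Fit action = w*obs + b to (observation, action) demo pairs by gradient descent."""
    for _ in range(epochs):
        for obs, action in demos:
            pred = w * obs + b
            err = pred - action
            w -= lr * err * obs   # gradient of squared error w.r.t. w
            b -= lr * err         # gradient of squared error w.r.t. b
    return w, b

# Demos for a toy task: the expert's action always doubles the observation.
demos = [(0.0, 0.0), (1.0, 2.0), (2.0, 4.0)]
w, b = post_train(demos)
print(round(w, 2), round(b, 2))  # converges near 2.0 and 0.0
```

Real post-training swaps the linear map for a large VLA and the scalar pairs for image-language-action tuples, but the loop — predict, compare against the demonstration, update — is the same shape.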