Dynamic Intelligence publishes its first dataset to power VLA-based open-world robotics models
10/24/2025 · 1 min read


By publishing our first dataset, Dynamic Intelligence is seeding the foundational data layer that lets mobile agents operate reliably in open-world, unstructured environments rather than only on predictable factory floors. VLA models, which integrate vision, language, and action, require high-quality multimodal data that pairs real-world observations with language annotations and action trajectories. Without such data, robots struggle to generalise across scenes, embodiments, and tasks.
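To make that multimodal pairing concrete, here is a minimal sketch of what a single timestep in a VLA training episode might look like. The field names, shapes, and types are illustrative assumptions for exposition, not the actual DI-0 schema:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class VLASample:
    """One timestep of a hypothetical VLA training episode.

    Field names and shapes are illustrative only, not the DI-0 format.
    """
    rgb: np.ndarray      # camera observation, e.g. (224, 224, 3) uint8
    instruction: str     # natural-language task annotation
    action: np.ndarray   # commanded action, e.g. a 7-DoF end-effector delta
    proprio: np.ndarray  # robot state, e.g. joint positions plus gripper

# An episode pairs one language goal with a full observation/action trajectory:
episode = [
    VLASample(
        rgb=np.zeros((224, 224, 3), dtype=np.uint8),
        instruction="pick up the red mug and place it on the shelf",
        action=np.zeros(7, dtype=np.float32),
        proprio=np.zeros(8, dtype=np.float32),
    ),
    # ... one sample per control step
]
```

Training on many such episodes, collected across varied scenes and robot embodiments, is what lets a VLA policy map a camera view and an instruction to the next action.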
Our first release, DI-0, marks the initial step in our open-world robotics initiative. The dataset is introduced for research purposes, though access is currently limited to select partner institutions under license.
We are now developing DI-1, a significantly larger and more diverse dataset, and we’re open to collaborating with new research partners as we expand. DI-1 is designed to meet the growing demand from physical AI labs for next-generation embodied AI, integrating hardware, software, and data into a unified stack.
