The Backbone
of Physical AI
Rich, egocentric, navigational video data.
Schedule a call
We're building the ultimate dataset for the physical world.
01 Coverage
Every walkable street, mapped. We've mapped every path, alley, and corridor of Singapore and New York City, and the dataset has grown past 10,000 hours (1B+ frames) of egocentric video.
02 Data Formats
Multi-modal, sensor-synchronized. Every capture session is a synchronized bundle of video, motion, and spatial data — ready to pipe into your training pipeline.
03 Use Cases
Built for next-gen physical AI. Real-world video data at this scale unlocks training for new foundation models, grounded in how the world actually looks and behaves.
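The "synchronized bundle" idea above can be sketched in a few lines of Python. Everything here — the field names, the nearest-sample alignment — is a hypothetical illustration of timestamp-based synchronization, not the actual delivery format:

```python
from dataclasses import dataclass
from bisect import bisect_left

@dataclass
class CaptureSession:
    """Hypothetical capture bundle: per-stream timestamps in seconds."""
    frame_ts: list  # video frame timestamps
    imu_ts: list    # motion (IMU) sample timestamps, sorted ascending

def nearest_imu(session: CaptureSession, frame_time: float) -> int:
    """Return the index of the IMU sample closest in time to a video frame."""
    i = bisect_left(session.imu_ts, frame_time)
    candidates = [j for j in (i - 1, i) if 0 <= j < len(session.imu_ts)]
    return min(candidates, key=lambda j: abs(session.imu_ts[j] - frame_time))

# Example: a 30 fps video stream alongside a 100 Hz IMU stream.
session = CaptureSession(
    frame_ts=[0.0, 0.033, 0.066],
    imu_ts=[0.0, 0.01, 0.02, 0.03, 0.04],
)
print(nearest_imu(session, 0.033))  # → 3 (the 0.03 s IMU sample)
```

The same nearest-timestamp lookup generalizes to any spatial stream (GPS, depth, pose) sampled at a different rate than the video.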
Interested? Whether you're training foundation models, building for robotics, or simply looking for API access to our dataset of 10K+ hours of video, we'd love to hear from you.
Schedule a call