Role Overview
You will lead development of the algorithms and architectures that enable our robots to interact with and reason about the physical world. This role bridges world model research, embodied AI, and real-time robotics. You will engineer the models and learning systems that power robotic agents through real-world jobs.
Responsibilities
* Train and adapt large-scale vision-language-action (VLA) and vision-language models (VLMs) that predict multi-modal futures (video, proprioception, audio, actions)
* Develop systems for cross-modal grounding, enabling robots to interpret sensor data in context and build coherent world models
* Enable temporal reasoning and goal-directed behavior through hierarchical task decomposition and meta-reasoning
* Support human-robot interaction by recognizing intentions, interpreting social cues, and enabling collaborative workflows
* Deploy models onto humanoid and mobile robots operating in real time
* Build and scale evaluation pipelines that measure generalization and safety
* Collaborate with locomotion, simulation, and hardware teams to close the sim-to-real gap
* Publish papers and open-source datasets and models in parallel with development work