Software Engineer, Perception - Mobile Robot


March 26, 2021

Palo Alto, CA, US

The Role
Tesla’s Mobile Robot Engineering team designs, builds, and integrates fully in-house autonomous mobile robot systems to power the next generation of highly scalable and flexible manufacturing lines across Tesla's growing factory and product portfolio. The team brings together mechanical, electrical, software, and manufacturing engineering disciplines in a highly collaborative environment.
Core to the mobile robot’s full autonomy, the perception stack presents a unique opportunity to work on state-of-the-art algorithms for state estimation and motion planning, culminating in their deployment to real-world commercial production applications. The perception software engineer will develop and own this stack from inception to deployment. The ideal candidate will have previous hands-on experience developing mobile robot perception solutions for commercial applications.
Responsibilities
  • Maintain and improve existing modules associated with perception and motion planning
  • Develop, integrate, and deploy real-time state-of-the-art algorithms into the existing system architecture
  • Develop online and offline state estimation algorithms by fusing information from cameras, IMUs, and other sensors
  • Design and implement scalable software for a large mobile robot fleet
  • Test and debug your solutions in realistic situations including in customer applications
  • Build pipelines for data collection, model training, and deployment
  • Validate and document performance of algorithms and models in real and simulated environments
  • Design automatic data generation pipelines that create high quality, unbiased ground truth labels for neural network training
  • Create robust sensor calibration routines that perform reliably in complex and unpredictable environments
  • Work in a fast-paced environment with ambitious deadlines

Requirements
  • MS or Ph.D. in Robotics, Computer Science, or a related discipline
  • 3+ years of experience writing production code in Python and C++
  • Experience with sensors, algorithms, and data structures related to localization and mapping, obstacle avoidance, semantic segmentation and tracking, and motion planning and trajectory optimization
  • Strong mathematical background with experience in kinematic/dynamic modeling
  • Experience with CAN and ROS preferred
  • Knowledge of ML/DL preferred
  • Strong background in core problems in robotics, including Bayesian state estimation (e.g., MAP, MMSE, MLE), 3D reconstruction, Structure-from-Motion, Visual Odometry, Visual-Inertial Odometry, Bundle Adjustment, etc.
  • Experience working in a Linux environment
  • Working knowledge of Git: creating and merging branches, cherry-picking commits, and examining the diff between two hashes. More advanced Git usage is a plus, particularly development on feature-specific branches, squashing and rebasing commits, and breaking large changes into small, easily digestible diffs