Simulation Based Learning and Computer Vision for Robotics
The expansion of robotics in manufacturing faces several challenges, such as the lack of robust perception systems and the difficulty of acquiring and labeling data. The goal is to extend the reach of automation beyond highly repeatable processes and into high-mix low-volume (HMLV) environments. This requires a more robust and accurate perception system as well as efficient data acquisition and labeling procedures.
6D Pose Recognition
Recognizing the precise translation and orientation of objects, the so-called 6D pose, is essential in many robotic applications, such as bin picking and human-robot collaboration. 6D pose estimation can play a crucial role in addressing the above challenges by providing robots with more robust and reliable perception, enabling progress towards automating HMLV production and new applications beyond highly repeatable processes.
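As a minimal illustration (using NumPy, not the project's own code), a 6D pose combines a 3D rotation and a 3D translation and can be represented as a 4x4 homogeneous transform that maps points from the object frame into the camera frame:

```python
import numpy as np

def pose_matrix(rotation, translation):
    """Build a 4x4 homogeneous transform from a 3x3 rotation matrix
    and a 3-vector translation."""
    T = np.eye(4)
    T[:3, :3] = rotation
    T[:3, 3] = translation
    return T

# Example pose: 90-degree rotation about the z-axis, then a translation.
theta = np.pi / 2
Rz = np.array([
    [np.cos(theta), -np.sin(theta), 0.0],
    [np.sin(theta),  np.cos(theta), 0.0],
    [0.0,            0.0,           1.0],
])
T = pose_matrix(Rz, np.array([0.1, 0.0, 0.3]))

# Transform a model point (homogeneous coordinates) into the camera frame.
p_obj = np.array([0.05, 0.0, 0.0, 1.0])
p_cam = T @ p_obj  # rotated to the y-axis, then offset by the translation
```

Estimating this transform for each object instance in an image is exactly the task a 6D pose vision system solves.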
Sim2Real
The training and deployment of 6D pose estimation algorithms is non-trivial due to the high demand for training data. 6D pose labeling of images is time- and labor-intensive, which limits the availability of datasets. On the other hand, using modern simulations to generate synthetic data for training deep neural networks (DNNs) shows great potential with low cost and high efficiency. Nevertheless, the performance of models trained solely on synthetic data often deteriorates when tested on real images due to the so-called reality gap.
Our Research in Robotics
Our research focuses on developing an integrated 6D pose estimation system for robotic grasping. This year, we have completed a robotic testbed, created an integrated baseline 6D pose vision system, and developed a Blender pipeline for the generation of synthetic data. Additionally, we are exploring further ways of leveraging simulation for new end-to-end vision paradigms and synthetic data generation. The proposed pipeline can efficiently generate large amounts of photo-realistic RGBD images of the object of interest and includes domain randomization techniques to bridge the gap between real and synthetic data.
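In spirit, domain randomization amounts to sampling a fresh scene configuration for every rendered image so that the trained model cannot overfit to any one appearance. The sketch below illustrates the idea; the parameter names and ranges are hypothetical and do not come from the actual Blender pipeline:

```python
import random

def sample_scene_params(rng=random):
    """Sample one randomized scene configuration for a synthetic render.
    All ranges below are illustrative placeholders, not the pipeline's values."""
    return {
        "light_energy":  rng.uniform(200.0, 1200.0),   # lamp strength
        "light_azimuth": rng.uniform(0.0, 360.0),      # light direction, degrees
        "camera_dist":   rng.uniform(0.4, 1.2),        # metres from the object
        "object_yaw":    rng.uniform(0.0, 360.0),      # in-plane rotation, degrees
        "bg_texture":    rng.choice(["wood", "metal", "cloth", "noise"]),
    }

# One randomized configuration per rendered RGBD frame.
params = sample_scene_params()
```

In a Blender-based pipeline, each sampled dictionary would drive the scene setup (lights, camera, object pose, background) before rendering one RGBD frame and its automatically derived 6D pose label.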
We also developed a real-time two-stage 6D pose estimation approach for time-sensitive robotics applications. With this pipeline, our pose estimation approach can be trained from scratch using synthetic data only, showing competitive performance compared to state-of-the-art methods. We demonstrated the proposed approach in a robotic experiment, grasping a household object from a cluttered background under different lighting conditions.
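The text does not specify how performance is measured; a common choice in the 6D pose literature is the average distance of model points (ADD), which compares an estimated pose against the ground truth by transforming the object's model points with both. A short NumPy sketch:

```python
import numpy as np

def add_metric(R_est, t_est, R_gt, t_gt, model_points):
    """Average distance (ADD): mean Euclidean distance between the model
    points transformed by the estimated and the ground-truth pose."""
    p_est = model_points @ R_est.T + t_est
    p_gt = model_points @ R_gt.T + t_gt
    return np.mean(np.linalg.norm(p_est - p_gt, axis=1))

# Toy check: identical rotation, estimate offset by 1 cm along x.
model = np.array([[0.0, 0.0, 0.0],
                  [0.05, 0.0, 0.0],
                  [0.0, 0.05, 0.0]])
err = add_metric(np.eye(3), np.array([0.01, 0.0, 0.0]),
                 np.eye(3), np.zeros(3), model)  # 0.01 m for every point
```

A pose is typically counted as correct when its ADD falls below a fraction (often 10%) of the object's diameter.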