Week 0-1

Project Discussion and Topic Selection

Posted by Avinash Sen on April 02, 2020

This week marks the official beginning of the dissertation.

Started the discussion of the Master's Project with Project Mentor Dr. Lalu P. P. (Asst. Professor, Department of Mechanical Engineering), Coordinator of the Nodal Centre for Robotics and Artificial Intelligence (NCRAI), and Project Guide Aju S. S. (Asst. Professor, Department of Production Engineering), NCRAI, Kerala, India.

Bin picking is the technique by which a robot grasps objects that are arranged randomly inside a box or on a pallet. The problem has been researched by numerous organizations for over three decades, and one of the first attempts to create such a system was developed in 1986 at MIT. Early approaches relied on 2D images coupled with distance sensors to acquire data, but continued technological progress has made it possible to do this with 3D devices. Random bin picking is considered the ultimate goal for a vision-guided robot (VGR): a versatile and precise system capable of picking any type of object without deforming it, regardless of the disordered environment around it. Although several companies have already proposed solutions, these are aimed at specific problems and are still not sufficiently versatile. In addition, most existing systems handle only non-fragile objects, because of the high degree of precision required to avoid deforming sensitive ones.

In this Master's Project, a solution to this problem based on learning an appearance model with convolutional neural networks (CNNs) is proposed. By synthetically combining object models with backgrounds of complex composition and high graphical quality, we can generate photorealistic images with accurately annotated 3D poses for every object in our custom dataset. Using the trained network, we can estimate object poses with sufficient accuracy for real-world semantic grasping from a cluttered bin with a real robot.
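To make the idea concrete, below is a minimal sketch of what a CNN-based 6D pose regressor could look like. This is not the project's final architecture: the ResNet-18 backbone, the translation-plus-quaternion output heads, and the input size are illustrative assumptions in PyTorch, chosen only to show the general shape of the approach.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class PoseEstimationCNN(nn.Module):
    """Illustrative CNN that regresses a 3D pose (translation + quaternion) from an RGB crop."""
    def __init__(self):
        super().__init__()
        # Pretrained backbone used as a feature extractor (placeholder choice).
        backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
        self.features = nn.Sequential(*list(backbone.children())[:-1])  # drop the classifier
        self.translation_head = nn.Linear(512, 3)   # x, y, z in the camera frame
        self.rotation_head = nn.Linear(512, 4)      # unit quaternion

    def forward(self, x):
        f = self.features(x).flatten(1)             # [B, 512] feature vector
        t = self.translation_head(f)
        q = self.rotation_head(f)
        q = q / q.norm(dim=1, keepdim=True)         # normalise to a valid rotation
        return t, q

# Example forward pass on a synthetic batch (stand-in for photorealistic renders).
model = PoseEstimationCNN()
images = torch.randn(8, 3, 224, 224)
translations, quaternions = model(images)
print(translations.shape, quaternions.shape)        # torch.Size([8, 3]) torch.Size([8, 4])
```

In a setup like this, the synthetically rendered images with their annotated 3D poses would serve as the supervised training data, and the predicted poses would later be passed to the grasp planner on the real robot.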