Week 46-48

System Integration, Experiments and Results.

Posted by Avinash Sen on June 21, 2020
Implementation in ROS Framework.

With the complete hardware calibrated, and with the objects in the workspace successfully detected, it is now possible to develop the precise bin-picking vision system. The steps required for the development and evaluation of the entire process are described next. This section explains the purpose of each node, the connections among them, and how they communicate with each other through topics. ROS (Robot Operating System) is a collection of tools, libraries, and conventions that aim to simplify the task of creating complex and robust robot behaviour across a wide variety of robotic platforms.

ROS is released in different distributions, each being a versioned set of ROS packages; the one used in this project is ROS Kinetic Kame. We attached an Intel RealSense D435i depth camera to the 6th link of an Aubo i5 robot. The workstation and the robot were connected via an Ethernet cable for fast communication, and the RealSense camera was connected to the workstation over USB Type-C.
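
All of these components run as ROS nodes that exchange data over named topics. As a minimal illustration of this mechanism (not part of the project's actual code; the node and topic names here are purely illustrative), a node publishing on a topic looks like this:

# Minimal rospy publisher sketch: sends a status string on a topic once per second.
# The node name and topic name are illustrative only.
import rospy
from std_msgs.msg import String

rospy.init_node('status_publisher')
pub = rospy.Publisher('/vision/status', String, queue_size=10)
rate = rospy.Rate(1)  # 1 Hz
while not rospy.is_shutdown():
    pub.publish(String(data='vision system running'))
    rate.sleep()

Any node subscribed to /vision/status receives these messages. The pipeline below works the same way: the camera node publishes RGB images, and the DOPE node publishes the estimated poses and markers.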

Download : Link____

Experimental Setup :

Running ROS master, camera node, DOPE node and rviz node:

1. Start ROS master :

cd ~/catkin_ws && source devel/setup.bash && roscore
2. Start camera node (or start your own camera node)
roslaunch realsense2_camera rs_rgbd.launch  # Publishes RGB images to `/camera/color/image_raw`
3. Start DOPE node
roslaunch dope dope.launch [config:=/path/to/my_config.yaml]
4. Start rviz node
rosrun rviz rviz
To debug in RViz, run rviz, then add one or more of the following displays:
                  Add > Image to view the raw RGB image or the image with the cuboids overlaid
                  Add > Pose to view the object coordinate frame in 3D
                  Add > MarkerArray to view the cuboids, meshes, etc. in 3D
                  Add > Camera to view the RGB image with the poses and markers from above
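
Besides visualising the results in RViz, the poses published by the DOPE node can be read by any other node. A minimal sketch, assuming the config file enables a cracker object so that the pose is published as a geometry_msgs/PoseStamped on a topic such as /dope/pose_cracker (the exact topic name depends on the configuration):

# Minimal subscriber sketch for the DOPE pose output (topic name is an assumption).
import rospy
from geometry_msgs.msg import PoseStamped

def on_pose(msg):
    # Log the estimated object position, expressed in the camera frame.
    p = msg.pose.position
    rospy.loginfo("Cracker at x=%.3f y=%.3f z=%.3f (camera frame)", p.x, p.y, p.z)

rospy.init_node('dope_pose_listener')
rospy.Subscriber('/dope/pose_cracker', PoseStamped, on_pose)
rospy.spin()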
                

Experiments and Results

After the network has processed an image, the individual objects must be extracted from the belief maps. To evaluate and demonstrate the performance of the developed bin-picking vision system, several tests were carried out and a demonstration was held to show the versatility of the system. For this project, the assessment consisted of counting the number of successful identifications and pose estimations of a demo object. For a thorough evaluation, we placed the cracker-box object, more or less at random, at 4 different positions on a table in front of the robot, with 3 different orientations at each of the 4 positions.

Detection

This approach relies on a simple postprocessing step that searches for local peaks in the belief maps above a threshold, followed by a greedy assignment algorithm that associates projected vertices to detected centroids. For each vertex, this latter step compares the vector field evaluated at the vertex with the direction from the vertex to each centroid, assigning the vertex to the closest centroid within some angular threshold of the vector.
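
The following is a hedged sketch of that post-processing idea using NumPy/SciPy; the threshold, window size and function names are placeholders for illustration, not the project's actual implementation:

# Find local peaks in a belief map, then greedily assign a vertex peak to the
# nearest centroid whose direction agrees with the affinity vector at the vertex.
import numpy as np
from scipy.ndimage import maximum_filter

def find_peaks(belief_map, threshold=0.1, window=5):
    """Return (row, col) coordinates of local maxima above `threshold`."""
    local_max = (belief_map == maximum_filter(belief_map, size=window))
    return [tuple(p) for p in np.argwhere(local_max & (belief_map > threshold))]

def assign_vertex(vertex_xy, affinity_vec, centroids, angle_thresh_deg=30.0):
    """Assign a vertex to the closest centroid lying within the angular
    threshold of the affinity vector evaluated at the vertex."""
    best, best_dist = None, np.inf
    v = affinity_vec / (np.linalg.norm(affinity_vec) + 1e-9)
    for c in centroids:
        d = np.array(c, dtype=float) - np.array(vertex_xy, dtype=float)
        dist = np.linalg.norm(d)
        if dist < 1e-9:
            continue
        cos_angle = np.dot(v, d / dist)
        if cos_angle > np.cos(np.radians(angle_thresh_deg)) and dist < best_dist:
            best, best_dist = c, dist
    return best

The greedy, per-vertex assignment keeps this step fast and allows multiple instances of the same object in one image, since each vertex is simply attached to whichever detected centroid its affinity vector points toward.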

Pose Output

Once the vertices of each object instance have been determined, a PnP algorithm is used to retrieve the pose of the object. This step uses the detected projected vertices of the bounding box, the camera intrinsics, and the object dimensions to recover the final translation and rotation of the object with respect to the camera. All detected projected vertices are used, as long as at least the minimum number (four) are detected.
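
As a hedged illustration of this step (the box dimensions, intrinsics and pose below are placeholders, not the calibrated values of this setup), OpenCV's solvePnP recovers the rotation and translation from the 2D-3D correspondences:

import numpy as np
import cv2

# Placeholder cracker-box dimensions (metres) and camera intrinsics; replace
# with the real object size and the calibrated RealSense intrinsics.
w, h, d = 0.16, 0.21, 0.07
fx, fy, cx, cy = 615.0, 615.0, 320.0, 240.0
K = np.array([[fx, 0, cx], [0, fy, cy], [0, 0, 1]], dtype=np.float64)

# 3D cuboid corners in the object frame.
object_points = np.array([[sx * w / 2, sy * h / 2, sz * d / 2]
                          for sx in (-1, 1) for sy in (-1, 1) for sz in (-1, 1)],
                         dtype=np.float64)

# For illustration, project the corners with a known pose to stand in for the
# vertices detected by the network, then recover that pose with solvePnP.
true_rvec = np.array([0.1, 0.2, 0.3])
true_tvec = np.array([0.05, -0.02, 0.60])   # object 60 cm in front of the camera
image_points, _ = cv2.projectPoints(object_points, true_rvec, true_tvec, K, None)

ok, rvec, tvec = cv2.solvePnP(object_points, image_points, K, None)
print(rvec.ravel(), tvec.ravel())           # should match the true pose above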

The network employs multiple stages to refine ambiguous estimates of the 2D locations of the projected vertices of each object’s 3D bounding cuboid. These points are then used to predict the final pose via PnP, assuming known camera intrinsics and object dimensions. The robot was programmed to move to a pre-grasp position at a fixed distance above the object. The prototype object was detected and its pose estimated successfully in various orientations; we rotated the object through all 6 degrees of freedom and detection remained quite reliable.
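
As a sketch of how such a pre-grasp target can be derived from the estimated pose (the transform, offset and numbers below are placeholders; in this setup the camera-to-base transform would come from the hand-eye calibration and the robot's forward kinematics, since the camera is mounted on the wrist):

import numpy as np

def pre_grasp_position(T_base_cam, p_obj_cam, approach_offset=0.15):
    """Transform an object position from the camera frame to the robot base
    frame and raise it by `approach_offset` metres along the base z-axis."""
    p_cam_h = np.append(p_obj_cam, 1.0)          # homogeneous coordinates
    p_base = np.dot(T_base_cam, p_cam_h)[:3]
    p_base[2] += approach_offset                 # hover above the object
    return p_base

# Example with placeholder values: a camera looking down at the table and an
# object detected 0.6 m in front of the camera.
T_base_cam = np.array([[1.0,  0.0,  0.0, 0.40],
                       [0.0, -1.0,  0.0, 0.00],
                       [0.0,  0.0, -1.0, 0.80],
                       [0.0,  0.0,  0.0, 1.00]])
print(pre_grasp_position(T_base_cam, np.array([0.05, -0.02, 0.60])))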