Deep Learning for Detecting Robotic Grasps

Learning-based approaches in previous work have been successfully used for grasping novel objects, but they required manually designed features for image and depth data. We instead use deep learning, which allows us to learn the features used by our algorithm directly from RGB-D data.

Our deep learning algorithm uses a novel structured multimodal regularization approach that encourages each learned feature to use only a subset of the input modalities. This yields more robust RGB-D features, especially when surface normals are included as an additional input modality.
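
As a loose illustration (a sketch of ours, not the released code), the snippet below shows one way a modality-wise structured penalty could be written in PyTorch: the first-layer weight matrix is split into per-modality blocks, and each hidden unit is charged once per modality it draws on, with the hard "uses this modality" indicator relaxed to the maximum absolute weight inside that block. The names multimodal_reg and modality_sizes, and the choice of PyTorch, are our assumptions.

    import torch

    def multimodal_reg(W, modality_sizes):
        # W: first-layer weights, shape (n_hidden, n_input), where the input
        # concatenates the modalities (e.g., RGB, depth, surface normals).
        # For each hidden unit and each modality block, add the block's
        # maximum absolute weight: a relaxed count of modalities used,
        # which pushes each unit toward using only a subset of them.
        penalty = W.new_zeros(())
        start = 0
        for size in modality_sizes:
            block = W[:, start:start + size]
            penalty = penalty + block.abs().max(dim=1).values.sum()
            start += size
        return penalty

    # Added to the usual training objective, e.g.:
    # loss = base_loss + lam * multimodal_reg(layer1.weight, [n_rgb, n_depth, n_normals])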


Publications


  1. Deep Learning for Detecting Robotic Grasps, Ian Lenz, Honglak Lee, Ashutosh Saxena. To appear in International Journal of Robotics Research (IJRR), 2014. [PDF]
  2. An earlier version appeared as a conference paper in Robotics: Science and Systems (RSS), 2013.

For other publications on grasping, click here.


Data/Code

The grasping rectangle dataset can be found here.
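
For reference, here is a minimal loading sketch, assuming the dataset's usual plain-text layout: each rectangle is stored as four "x y" vertex lines, rectangles concatenated back to back, in files such as pcd0100cpos.txt (positive grasps) and pcd0100cneg.txt (negative grasps). The function name load_rectangles is ours.

    def load_rectangles(path):
        # Parse every non-empty line as an "x y" vertex; some files mark
        # invalid vertices with NaN, which float() parses as nan.
        with open(path) as f:
            points = [tuple(float(v) for v in line.split())
                      for line in f if line.strip()]
        # Every four consecutive vertices form one grasping rectangle.
        return [points[i:i + 4] for i in range(0, len(points), 4)]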

Code for learning and detection is available here.
Code to run grasps on a Baxter robot is available here.
The README should get you started, but let Ian (ianlenz at cs.cornell.edu) know if you have any questions.


People

Ian Lenz (ianlenz at cs.cornell.edu)
Shikhar Sharma (shikhars at iitk.ac.in)
Honglak Lee (honglak at eecs.umich.edu)
Ashutosh Saxena (asaxena at cs.cornell.edu)

Related Projects

Grasping Novel Objects

Placing and Arranging Objects