Learning to Grasp: Main Page


Data/Code

Here we present a labeled dataset for identifying grasping rectangles in an image, as well as code for manipulating the raw data and visualizing the rectangles. A grasping rectangle is an oriented rectangle in the image plane that specifies the position and orientation of a gripper held parallel to the image plane.
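To illustrate the idea, the sketch below recovers grasp parameters (center, gripper orientation, opening width) from a rectangle's four corners. This is not code from the dataset release; the choice of which rectangle edge corresponds to the gripper plates is an assumption for illustration.

```python
import math

def grasp_params(vertices):
    """Given the four corners of a grasping rectangle, listed in order
    around the rectangle, return the grasp center, the gripper
    orientation angle (radians, measured from the image x-axis), and
    the jaw opening width. Assumes the first edge (vertex 1 -> 2) runs
    along the gripper plates; check the README for the actual ordering."""
    (x1, y1), (x2, y2), (x3, y3), (x4, y4) = vertices
    # Center: mean of the four corners.
    cx = (x1 + x2 + x3 + x4) / 4.0
    cy = (y1 + y2 + y3 + y4) / 4.0
    # Orientation: angle of the edge along which the gripper plates lie.
    theta = math.atan2(y2 - y1, x2 - x1)
    # Opening width: distance between the two plate edges.
    width = math.hypot(x3 - x2, y3 - y2)
    return (cx, cy), theta, width
```

For an axis-aligned rectangle with corners (0,0), (4,0), (4,2), (0,2), this gives center (2.0, 1.0), angle 0, and opening width 2.0.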

The dataset consists of 1035 images of 280 different objects, with several images taken of each object in different orientations or poses. Each image also has an associated point cloud and a background image. (A single background image is used for a number of object images.) The raw dataset below consists of: (a) images, (b) grasping rectangles, (c) point clouds, (d) background images, and (e) a file containing a mapping from each image to the corresponding background image.

Raw Dataset, 1035 images, 4.7GB in total. data01 data02 data03 data04 data05 data06 data07 data08 data09 data10

Background images: here

Explanation of raw data format: README
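As a starting point for working with the rectangle files, here is a minimal parser sketch. It assumes each rectangle is stored as four vertex lines, each a whitespace-separated "x y" pair; the README above is the authoritative description of the format.

```python
def load_rectangles(path):
    """Parse a grasping-rectangle file into a list of rectangles, each a
    list of four (x, y) vertex tuples. Assumes one whitespace-separated
    'x y' pair per line and four consecutive lines per rectangle; lines
    that do not contain exactly two tokens are skipped."""
    rects = []
    vertices = []
    with open(path) as f:
        for line in f:
            parts = line.split()
            if len(parts) != 2:
                continue
            vertices.append((float(parts[0]), float(parts[1])))
            if len(vertices) == 4:
                rects.append(vertices)
                vertices = []
    return rects
```

Each returned rectangle can then be drawn over the corresponding image, or fed to feature extraction, by iterating over its four vertices.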


A small sample of the images included in our dataset

The following processed dataset contains the features and labels for each of the grasping rectangles given in the above raw dataset, as well as a mapping from each rectangle to the image it came from, the object ID of the object in that image, and a description of that object.

ProcessedData, 61.9MB

Explanation of processed data format: README

Code for Learning

Source code for labeling if you want to add some of your own labels: Labeling Code and README

Source code for scoring results: Scoring Code and README

Source code for the training and testing system using both image and point-cloud features: Source Code. Note: requires ROS and OpenCV to run.

Code for Testing on PR2

The code (README) provided here includes all packages, from perceiving the point cloud and finding the best grasp to executing the grasp on the PR2. It also includes a new grasping method by David Fischinger.

Please contact David (df@acin.tuwien.ac.at) for questions.