There are many everyday tasks---detecting objects, loading/unloading a dishwasher, cooking simple meals, arranging a disorganized house, responding to human activities, assembling an object from a kit of parts---that, while simple for humans, are extremely challenging for robots, because they require detailed, tightly coordinated perception and manipulation abilities. Our goal is to do breakthrough research on fundamental problems in robotic perception and manipulation, in order to enable personal robots to be truly useful in common household and office environments.

See videos showing the robots in action!


PlanIt: Learning User Preferences.

The robot learns context-driven user preferences over trajectories from sub-optimal feedback in a co-active learning setting. Users give feedback interactively, through zero-G kinesthetic corrections and tablet touch interfaces, on Baxter and PR2.
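At its core, co-active learning uses a simple weight update: whenever the user corrects the robot's proposed trajectory, the weights on the trajectory features move toward the improved trajectory. Below is a minimal sketch of that update, assuming a linear score over trajectory features; the feature function, learning rate, and class names are placeholders for illustration, not the released PlanIt code.

```python
import numpy as np

def trajectory_features(traj, context):
    """Placeholder: map a trajectory plus scene context to a feature vector
    (e.g. distances to nearby objects, smoothness, end-effector orientation)."""
    raise NotImplementedError

class CoactiveLearner:
    """Preference-perceptron-style co-active learning: score trajectories with
    a linear model, and nudge the weights toward the features of the user's
    (possibly only slightly) improved trajectory."""

    def __init__(self, n_features, lr=1.0):
        self.w = np.zeros(n_features)
        self.lr = lr

    def propose(self, candidate_trajs, context):
        # Propose the highest-scoring candidate under the current weights.
        scores = [self.w @ trajectory_features(t, context) for t in candidate_trajs]
        return candidate_trajs[int(np.argmax(scores))]

    def update(self, proposed_traj, improved_traj, context):
        # Sub-optimal feedback: improved_traj only needs to be somewhat
        # better than proposed_traj, not the optimal trajectory.
        self.w += self.lr * (trajectory_features(improved_traj, context)
                             - trajectory_features(proposed_traj, context))
```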


Anticipating Human Activities.

Anticipate which activities a human will perform next (and how), so that an assistive robot can plan ahead for reactive responses in human environments.


Robots Hallucinating Humans.

Given 3D scenes of objects downloaded from the Internet, robots learn to model the environment by hallucinating humans into it, reasoning about object affordances and human preferences.


Deep Learning for Grasping.

Using deep learning methods, our algorithms learn, directly from raw image data, the features our robot uses to grasp novel objects.
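Roughly, grasp detection of this kind scores many candidate grasp regions with a small network whose features are learned from raw pixels, then executes the top-ranked one. The sketch below shows only that scoring-and-ranking idea with a tiny two-layer network; the patch extraction and the learned weights are placeholders, and the published detector is considerably more involved.

```python
import numpy as np

def extract_patch(image, rect):
    """Placeholder: crop and resize the image region under a candidate grasp
    rectangle (x, y, width, height, angle) to a fixed-size patch."""
    raise NotImplementedError

class GraspScorer:
    """Tiny two-layer network scoring candidate grasp rectangles from raw
    pixel patches. The weights W1/W2 stand in for features that would be
    learned from data rather than hand-designed."""

    def __init__(self, W1, b1, W2, b2):
        self.W1, self.b1, self.W2, self.b2 = W1, b1, W2, b2

    def score(self, image, rect):
        x = extract_patch(image, rect).ravel()
        h = np.maximum(0.0, self.W1 @ x + self.b1)  # learned feature layer
        return float(self.W2 @ h + self.b2)          # graspability score

    def best_grasp(self, image, candidate_rects):
        # Score all candidates and return the top-ranked grasp rectangle.
        return max(candidate_rects, key=lambda r: self.score(image, r))
```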


Arranging Disorganized Rooms.

Placing objects requires reasoning about 3D structure, stability, and stacking. Even with noisy 3D point clouds, our learning algorithms robustly infer placement locations and orientations in non-trivial scenarios.
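One way to read this pipeline: sample candidate placement poses, describe each with stability and support features computed from the object and scene point clouds, and pick the highest-scoring pose under a learned model. A minimal sketch of that ranking step follows, with the feature computation left as a placeholder and a simple linear scorer assumed.

```python
import numpy as np

def placement_features(object_cloud, scene_cloud, pose):
    """Placeholder: features of placing the object at `pose` in the scene,
    e.g. supporting contacts under the object, caging by surrounding
    geometry, and flatness of the surface beneath it."""
    raise NotImplementedError

def best_placement(object_cloud, scene_cloud, candidate_poses, w):
    """Score sampled placement poses with a learned linear model `w` and
    return the highest-scoring location/orientation."""
    scores = [w @ placement_features(object_cloud, scene_cloud, p)
              for p in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```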


3D Scene Understanding.

Segment and detect objects (and their attributes) in a 3D scene by reasoning about their shape, appearance, and geometric properties, as well as physics-based reasoning. ROS/PCL code and dataset available.


RGB-D Human Activity Detection.

Detect human activities from RGB-D videos in order to perform assistive tasks. The CAD-60 and CAD-120 datasets, along with code, are available.
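Activity detection from RGB-D video can be framed as sequence labelling: compute features for each frame from the tracked skeleton and depth data, score each activity per frame, and decode a temporally consistent activity sequence. The sketch below shows a generic Viterbi decoding over per-frame linear scores with placeholder features; it is only an illustration of the temporal-decoding idea, not the released CAD-60/CAD-120 code.

```python
import numpy as np

def frame_features(skeleton, depth_frame):
    """Placeholder: per-frame features, e.g. joint positions/angles from the
    tracked skeleton and hand-object distances from the depth image."""
    raise NotImplementedError

def detect_activities(frames, emission_w, log_transition, activities):
    """Viterbi decoding over per-frame activity scores.
    frames: list of (skeleton, depth_frame); emission_w: one weight vector
    per activity; log_transition: (k, k) log transition scores."""
    feats = [frame_features(*f) for f in frames]
    n, k = len(feats), len(activities)
    score = np.full((n, k), -np.inf)
    back = np.zeros((n, k), dtype=int)
    score[0] = [emission_w[a] @ feats[0] for a in range(k)]
    for t in range(1, n):
        emit = np.array([emission_w[a] @ feats[t] for a in range(k)])
        for a in range(k):
            prev = score[t - 1] + log_transition[:, a]
            back[t, a] = int(np.argmax(prev))
            score[t, a] = prev[back[t, a]] + emit[a]
    # Backtrack the highest-scoring activity sequence.
    path = [int(np.argmax(score[-1]))]
    for t in range(n - 1, 0, -1):
        path.append(back[t, path[-1]])
    return [activities[a] for a in reversed(path)]
```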