There are many everyday tasks---detecting objects, loading/unloading a dishwasher, cooking simple meals,
arranging a disorganized house, responding to human activities,
assembling an object from a kit of parts---that, while simple for humans,
are extremely challenging for robots, as they involve detailed, tightly-coordinated
perception and manipulation abilities.
Our goal is to pursue breakthrough research on
fundamental problems in robotic perception and manipulation,
in order to make personal robots truly useful in common household
and office environments.
See videos showing the robots in action!
Robots Hallucinating Humans.
Given 3D scenes from the Internet containing objects, robots learn to model the environment
through object affordances and human preferences.
Arranging Disorganized Rooms.
Given a point-cloud of the environment, our robots reason about
humans to figure out where and how to place the objects.
3D Scene Understanding.
Detect objects (and their attributes) in a 3D scene by reasoning
about their shape, appearance, and geometric properties.
ROS/PCL code + dataset available.
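As a toy illustration of the kind of geometric reasoning involved (not the released ROS/PCL code), the sketch below computes a few shape cues for a segmented point-cloud cluster; the function name and feature set are hypothetical:

```python
import numpy as np

def geometric_features(points):
    """Toy geometric descriptor for a segmented point-cloud cluster.

    Returns bounding-box extents, mean height, and PCA-based
    linearity/planarity scores -- the kind of shape cues an object
    classifier could reason about (hypothetical feature set).
    """
    pts = np.asarray(points, dtype=float)
    extents = pts.max(axis=0) - pts.min(axis=0)
    height = pts[:, 2].mean()
    centered = pts - pts.mean(axis=0)
    # Eigenvalues of the covariance matrix summarize the cluster's shape:
    # one dominant eigenvalue -> linear, two -> planar, three -> volumetric.
    evals = np.sort(np.linalg.eigvalsh(np.cov(centered.T)))[::-1]
    evals = evals / max(evals.sum(), 1e-12)
    linearity = (evals[0] - evals[1]) / evals[0]
    planarity = (evals[1] - evals[2]) / evals[0]
    return {"extents": extents, "height": height,
            "linearity": linearity, "planarity": planarity}
```

For example, a cluster sampled from a tabletop scores high on planarity, while a broom handle scores high on linearity.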
RGB-D Human Activity Detection.
Detect human activities from RGB-D videos
in order to perform assistive tasks. The CAD-60 and CAD-120 datasets,
along with code, are available.
Grasping Novel Objects.
Given an RGB (or an RGB-D) image of objects the robot has never seen before,
our learning algorithms enable a robot to grasp them.
Applied to unloading items from a dishwasher.
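The actual system learns grasp points from labeled images; as a stand-in, the toy heuristic below scores depth-image pixels by their local depth discontinuity, since object rims tend to be graspable. Names and parameters are illustrative only:

```python
import numpy as np

def grasp_candidates(depth, k=3):
    """Toy grasp-point proposal on a depth image (illustrative only:
    the real system *learns* grasp features from labeled examples).

    Scores each pixel by the local depth discontinuity -- object
    boundaries stand out against the background -- and returns the
    top-k (row, col) indices.
    """
    # Depth gradients along rows (gy) and columns (gx).
    gy, gx = np.gradient(depth.astype(float))
    score = np.hypot(gx, gy)
    # Flatten, sort descending, keep the k strongest discontinuities.
    flat = np.argsort(score, axis=None)[::-1][:k]
    return [tuple(np.unravel_index(i, depth.shape)) for i in flat]
```

A learned detector would replace the hand-coded score with features trained on grasp/no-grasp labels.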
Learning to Place Objects.
Placing requires reasoning about 3D structure, stability, and stacking.
Even with noisy 3D point-clouds, our learning algorithms robustly
infer placing locations and orientations in non-trivial scenarios.
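The learned placement models are beyond a blurb, but a minimal sketch conveys the input/output: given a point-cloud of a surface, search an (x, y) grid for a flat, clutter-free cell. Low height variance as a stability proxy is an assumption of this toy version, not the trained system:

```python
import numpy as np

def find_placing_spot(points, cell=0.1, min_points=10):
    """Toy placement search (illustrative; the real system is learned).

    Bins a table-top point-cloud into an (x, y) grid and returns the
    center of the flattest cell: low height variance is a crude proxy
    for a stable, clutter-free placing surface.
    """
    pts = np.asarray(points, dtype=float)
    best, best_var = None, np.inf
    xs = np.arange(pts[:, 0].min(), pts[:, 0].max(), cell)
    ys = np.arange(pts[:, 1].min(), pts[:, 1].max(), cell)
    for x0 in xs:
        for y0 in ys:
            m = ((pts[:, 0] >= x0) & (pts[:, 0] < x0 + cell) &
                 (pts[:, 1] >= y0) & (pts[:, 1] < y0 + cell))
            if m.sum() < min_points:
                continue  # too few returns to judge this cell
            var = pts[m, 2].var()
            if var < best_var:
                best, best_var = (x0 + cell / 2, y0 + cell / 2), var
    return best
```

A real placer would additionally score orientation and stacking stability, which this sketch ignores.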