
Anticipating Human Activities for Reactive Robotic Responses

An important aspect of human perception is anticipation, which we use extensively in our day-to-day activities when interacting with other humans as well as with our surroundings. Anticipating which activities a human will do next (and how they will do them) can enable an assistive robot to plan ahead for reactive responses in human environments. Furthermore, anticipation can even improve the detection accuracy of past activities. In this work, we represent each possible future using an anticipatory temporal conditional random field (ATCRF) that models the rich spatial-temporal relations through object affordances. We then consider each ATCRF as a particle and represent the distribution over the potential futures using a set of particles.
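To make the particle representation concrete, here is a minimal sketch of the idea: sample candidate future activity sequences as particles, weight each one with a scoring function, and read off an approximate distribution over the next sub-activity. The activity labels and the toy `score_particle` function are illustrative assumptions; in the actual work each particle is a full ATCRF scored by spatio-temporal CRF potentials over human poses and object affordances.

```python
import random
from collections import Counter

# Hypothetical sub-activity labels for illustration only.
ACTIVITIES = ["reaching", "moving", "pouring", "placing", "opening"]

def score_particle(future):
    """Toy stand-in for the ATCRF energy: mildly penalize futures
    that repeat the same sub-activity twice in a row."""
    return 1.0 if len(future) < 2 or future[0] != future[1] else 0.1

def anticipate(num_particles=1000, horizon=2, seed=0):
    """Sample candidate futures (particles), weight them, and return
    an approximate distribution over the next sub-activity."""
    rng = random.Random(seed)
    weights = Counter()
    for _ in range(num_particles):
        future = [rng.choice(ACTIVITIES) for _ in range(horizon)]
        weights[future[0]] += score_particle(future)
    total = sum(weights.values())
    return {act: w / total for act, w in weights.items()}

dist = anticipate()
print(max(dist, key=dist.get))  # most likely next sub-activity
```

A robot planner can then hedge across the anticipated futures, e.g. preparing a reactive response for each high-probability next activity rather than committing to a single prediction.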

[Figure: robot's RGB-D view; heatmap of object affordances; heatmap of trajectories; robot opening the door]

Popular Press: Kurzweil AI, Wired, The Verge, Time Magazine, LA Times, CBS News, NBC News, Discovery News, National Geographic, FOX News (Studio B), Wall Street Journal (WSJ) Live, The Daily Show (Comedy Central) with Lewis Black.


Download code and data.


Anticipating Human Activities using Object Affordances for Reactive Robotic Response, Hema S Koppula, Ashutosh Saxena. Robotics: Science and Systems (RSS), 2013. (oral, best student paper) [PDF]

Learning Spatio-Temporal Structure from RGB-D Videos for Human Activity Detection and Anticipation, Hema S Koppula, Ashutosh Saxena. International Conference on Machine Learning (ICML), 2013. [PDF]

Human Activity Detection - Click here for related papers.

The video below was also a finalist for the Best Video Award at IROS 2013.


Hema Koppula: hema at cs.cornell.edu
Prof. Ashutosh Saxena: asaxena at cs.cornell.edu


Related Projects

Human Activity Detection

3D Scene Understanding