Hallucinating Humans for Learning Object Affordances
We bear in mind that the object being worked on is going to be ridden in, sat upon, looked at, talked into, activated, operated, or in some other way used by people individually or en masse. --Dreyfuss (1955).
Humans are, after all, the primary reason our environments exist. In this work, we show that modeling human-object relationships (i.e., object affordances) gives a more compact way of capturing contextual relationships than modeling object-object relationships.
One key aspect of our work is that the humans may not even be seen by our algorithm! Given a large dataset containing only objects, our method treats human poses as a latent variable and uses infinite topic models to learn the relationships. We apply this to the robotic task of arranging a cluttered house.
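To make the idea concrete, here is a minimal sketch (not the authors' code or learned model) of scoring an object location by "hallucinating" human poses: sample candidate human positions in a room, then rate the object by how well it affords use from the best sampled pose. The uniform pose sampler, the Gaussian affordance over human-object distance, and all parameter values are illustrative assumptions.

```python
import math
import random

def sample_human_poses(room_w, room_h, n=50, rng=random):
    """Sample hallucinated human positions uniformly in a room.
    (The actual work samples full poses, e.g. sitting/standing skeletons.)"""
    return [(rng.uniform(0, room_w), rng.uniform(0, room_h)) for _ in range(n)]

def affordance_score(obj_xy, pose_xy, preferred_dist=0.6, sigma=0.3):
    """Gaussian preference for the object sitting at a usable distance
    from the human (e.g., a remote within arm's reach of a sitting pose).
    preferred_dist and sigma are made-up values for illustration."""
    d = math.dist(obj_xy, pose_xy)
    return math.exp(-((d - preferred_dist) ** 2) / (2 * sigma ** 2))

def placement_score(obj_xy, poses):
    """An object is well placed if *some* hallucinated human can use it,
    so take the max affordance over all sampled poses."""
    return max(affordance_score(obj_xy, p) for p in poses)

# Usage: compare two candidate locations for an object in a 4m x 3m room.
rng = random.Random(0)
poses = sample_human_poses(4.0, 3.0, n=100, rng=rng)
center_score = placement_score((2.0, 1.5), poses)
corner_score = placement_score((3.9, 2.9), poses)
```

In the actual work the affordances (preferred distances and orientations relative to human poses) are not hand-set as above but learned jointly with the latent poses via infinite topic models.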
- Low-dimensional Modeling of Humans in Environment Context for Activity Anticipation, Yun Jiang, Ashutosh Saxena In Robotics: Science and Systems (RSS), 2014. [PDF coming soon]
- Infinite Latent Conditional Random Fields for Modeling Environments through Humans, Yun Jiang, Ashutosh Saxena. In Robotics: Science and Systems (RSS), 2013. [PDF, Supplementary material]
- Hallucinated Humans as the Hidden Context for Labeling 3D Scenes, Yun Jiang, Hema S. Koppula, Ashutosh Saxena. In Computer Vision and Pattern Recognition (CVPR), 2013 (oral). [PDF]
- Learning Object Arrangements in 3D Scenes using Human Context, Yun Jiang, Marcus Lim, Ashutosh Saxena. In International Conference on Machine Learning (ICML), June 2012. [PDF]
- Hallucinating Humans for Learning Robotic Placement of Objects, Yun Jiang, Ashutosh Saxena. In International Symposium on Experimental Robotics (ISER), May 2012. [PDF]
Download data and code and find more details here.
- Yun Jiang (yunjiang at cs.cornell.edu)
- Ashutosh Saxena (asaxena at cs.cornell.edu)
Microsoft Faculty Fellowship, 2012.