
Cornell Activity Dataset: state-of-the-art results


CAD-60 Results

The table below shows the state-of-the-art results for activity detection on the CAD-60 dataset.
Refer to Sung et al. (ICRA 2012) [2] for the definition of the "New Person" metric used in the table below; a rough evaluation sketch follows the table.

"New Person"
Algorithm Precision (%) Recall (%)
Sung et al., AAAI PAIR 2011, ICRA 2012. [1,2] 67.9 55.5
Koppula, Gupta, Saxena, IJRR 2012. [3] 80.871.4
Zhang, Tian, NWPJ 2012 [4] 8684
Ni, Moulin, Yan, ECCV 2012 [5] Accur: 65.32-
Yang, Tian, JVCIR 2013 [6] 71.966.6
Piyathilaka, Kodagoda, ICIEA 2013 [7] 70*78*
Ni et al., Cybernetics 2013 [8] 75.969.5
Gupta, Chia, Rajan, MM 2013 [9] 78.175.4
Wang et al., PAMI 2013 [10] Accur: 74.70-
Zhu, Chen, Guo, IVC 2014 [16] 93.284.6
Faria, Premebida, Nunes, RO-MAN 2014 [17] 91.191.9
Shan, Akella, ARSO 2014 [18] 93.894.5
Gaglio, Lo Re, Morana, HMS 2014 [19] 77.376.7
Parisi, Weber, Wermter, Front. Neurobot. 2015 [20] 91.990.2
Cippitelli, CIN 2016 [21] 93.993.5

* Slightly different methods were used for evaluation.
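
For illustration, the sketch below shows how "new person" numbers of this kind are typically obtained: each subject is held out in turn, a classifier is trained on the remaining subjects, and per-class ("macro") precision and recall are averaged over the held-out folds. The features, labels, and classifier here are synthetic placeholders, not the pipeline of any paper above.

    # Minimal sketch of leave-one-subject-out ("new person") evaluation with
    # macro-averaged precision/recall. All data below is synthetic; substitute
    # real per-segment features, activity labels, and subject IDs from CAD-60.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import precision_score, recall_score
    from sklearn.model_selection import LeaveOneGroupOut

    rng = np.random.default_rng(0)
    n_samples, n_features, n_activities, n_subjects = 400, 30, 12, 4
    X = rng.normal(size=(n_samples, n_features))            # placeholder features
    y = rng.integers(0, n_activities, size=n_samples)       # placeholder activity labels
    subjects = rng.integers(0, n_subjects, size=n_samples)  # which person performed each sample

    precisions, recalls = [], []
    for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=subjects):
        clf = RandomForestClassifier(n_estimators=100, random_state=0)
        clf.fit(X[train_idx], y[train_idx])
        y_pred = clf.predict(X[test_idx])
        precisions.append(precision_score(y[test_idx], y_pred, average="macro", zero_division=0))
        recalls.append(recall_score(y[test_idx], y_pred, average="macro", zero_division=0))

    print(f"new-person precision: {100 * np.mean(precisions):.1f}%")
    print(f"new-person recall:    {100 * np.mean(recalls):.1f}%")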

Have new results for CAD-60? Email Jaeyong Sung (jysung at cs.cornell.edu).


CAD-120 Results

The table below shows the state-of-the-art results for activity detection on the CAD-120 dataset.

With ground-truth segmentation
                                        "Sub-activity"                   "Activity"
Algorithm                               Accuracy   Precision   Recall    Accuracy   Precision   Recall
Koppula et al., IJRR 2013 [3]           86.0       84.2        76.9      84.7       85.3        84.2
Koppula, Saxena, ICML 2013 [11]         89.3       87.9        84.9      93.5       95.0        93.3
Hu et al., ICRA 2014 [12]               87.0       89.2        83.1      -          -           -

Without ground-truth segmentation
                                        "Sub-activity"                   "Activity"
Algorithm                               Accuracy   Precision   Recall    Accuracy   Precision   Recall
Koppula et al., IJRR 2013 [3]           68.2       71.1        62.2      80.6       81.8        80.0
Koppula, Saxena, ICML 2013 [11]         70.3       74.8        66.2      83.1       87.0        82.7
Rybok et al., WACV 2014 [13]            -          -           -         78.2*      -           -

* Does not use ground-truth object bounding boxes.

The table below shows the state-of-the-art results for activity anticipation on the CAD-120 dataset.

"Sub-activity" "Object affordance" "Traj."
Algorithm Accuracy Precision Recall Accuracy Precision Recall MHD*
Koppula et al., RSS 2013. [14] 47.7 37.9 69.2 66.1 36.7 71.3 31.0
Koppula, Saxena, ICML 2013. [11] 49.6 40.6 74.4 67.2 41.4 73.2 30.2
Jiang, Saxena, RSS 2014. [15] 52.1 43.2 76.1 68.1 44.2 74.9 26.7

* MHD measures the distance between the ground-truth and predicted trajectories, in centimeters; lower values indicate better results.
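
For reference, the sketch below computes a modified Hausdorff distance between a predicted and a ground-truth trajectory, assuming MHD here denotes the modified Hausdorff distance used in Koppula et al. [14]; the two short trajectories are made-up examples in arbitrary units.

    # Minimal sketch of the modified Hausdorff distance (MHD) between two
    # trajectories, each given as an (N, 3) array of points.
    import numpy as np

    def modified_hausdorff(a: np.ndarray, b: np.ndarray) -> float:
        # Max of the two directed mean nearest-neighbour distances.
        d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)  # pairwise distances
        return max(d.min(axis=1).mean(), d.min(axis=0).mean())

    pred = np.array([[0.0, 0.0, 0.0], [1.0, 0.5, 0.0], [2.0, 1.0, 0.0]])
    gt = np.array([[0.0, 0.2, 0.0], [1.1, 0.4, 0.0], [2.0, 1.3, 0.1]])
    print(f"MHD(pred, gt) = {modified_hausdorff(pred, gt):.3f}")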

Have new results for CAD-120? Email Hema Koppula (hema at cs.cornell.edu).


References

  1. Human Activity Detection from RGBD Images, Jaeyong Sung, Colin Ponce, Bart Selman, Ashutosh Saxena. In AAAI workshop on Pattern, Activity and Intent Recognition (PAIR), 2011. [PDF]

  2. Unstructured Human Activity Detection from RGBD Images, Jaeyong Sung, Colin Ponce, Bart Selman, Ashutosh Saxena. International Conference on Robotics and Automation (ICRA), 2012. [PDF]

  3. Learning Human Activities and Object Affordances from RGB-D Videos, Hema S. Koppula, Rudhir Gupta, Ashutosh Saxena. International Journal of Robotics Research (IJRR), 2013. [PDF]

  4. RGB-D Camera-based Daily Living Activity Recognition, C. Zhang, Y. Tian. Journal of Computer Vision and Image Processing, Vol. 2, No. 4, December 2012.

  5. Order-Preserving Sparse Coding for Sequence Classification, BingBing Ni, Pierre Moulin, Shuicheng Yan. European Conference on Computer Vision (ECCV), 2012.

  6. Effective 3D Action Recognition Using Eigenjoints, X. Yang, Y. Tian. Journal of Visual Communication and Image Representation (JVCIR), Special Issue on Visual Understanding and Applications with RGBD Cameras, 2013.

  7. Gaussian Mixture Based HMM for Human Daily Activity Recognition Using 3D Skeleton Features, Lasitha Piyathilaka, Sarath Kodagoda. IEEE 8th Conference on Industrial Electronics and Applications (ICIEA), 2013.

  8. Multilevel Depth and Image Fusion for Human Activity Detection, Bingbing Ni, Yong Pei, Pierre Moulin, Shuicheng Yan. IEEE Transactions on Cybernetics, Vol. 43, No. 5, 2013.

  9. Human Activities Recognition using Depth Images, Raj Gupta, Alex Yong-Sang Chia, Deepu Rajan. Proceedings of the 21st ACM international conference on Multimedia, 2013.

  10. Learning Actionlet Ensemble for 3D Human Action Recognition, Jiang Wang, Zicheng Liu, Ying Wu, Junsong Yuan. IEEE Transactions on Pattern Analysis and Machine Intelligence (PAMI), 2013.

  11. Learning Spatio-Temporal Structure from RGB-D Videos for Human Activity Detection and Anticipation, Hema S Koppula, Ashutosh Saxena. International Conference on Machine Learning (ICML), 2013.

  12. Learning Latent Structure for Activity Recognition, Ninghang Hu, Gwenn Englebienne, Zhongyu Lou, Ben Krose. International Conference on Robotics and Automation (ICRA), 2014.

  13. "Important Stuff, Everywhere!" Activity Recognition with Salient Proto-Objects as Context, Lukas Rybok, Boris Schauerte, Ziad Al-Halah, and Rainer Stiefelhagen. IEEE Winter Conference on Applications of Computer Vision (WACV), 2014.

  14. Anticipating Human Activities using Object Affordances for Reactive Robotic Response, Hema S. Koppula, Ashutosh Saxena. Robotics: Science and Systems (RSS), 2013.

  15. Low-dimensional Modeling of Humans in Environment Context for Activity Anticipation, Yun Jiang, Ashutosh Saxena. In Robotics: Science and Systems (RSS), 2014.

  16. Evaluating Spatiotemporal Interest Point Features for Depth-based Action Recognition, Yu Zhu, Wenbin Chen, Guodong Guo. In Image and Vision Computing, 2014.

  17. A Probabilistic Approach for Human Everyday Activities Recognition using Body Motion from RGB-D Images, Diego R. Faria, Cristiano Premebida, Urbano Nunes. IEEE International Symposium on Robot and Human Interactive Communication (RO-MAN), 2014.

  18. 3D Human Action Segmentation and Recognition using Pose Kinetic Energy, Junjie Shan, Srinivas Akella. In IEEE Workshop on Advanced Robotics and its Social Impacts (ARSO), 2014.

  19. Human Activity Recognition Process Using 3-D Posture Data, S. Gaglio, G. Lo Re, M. Morana. In IEEE Transactions on Human-Machine Systems, 2014.

  20. Self-Organizing Neural Integration of Pose-Motion Features for Human Action Recognition, German I. Parisi, Cornelius Weber, Stefan Wermter. In Frontiers in Neurorobotics, 2015.

  21. A Human Activity Recognition System Using Skeleton Data from RGBD Sensors, Enea Cippitelli, Samuele Gasparrini, Ennio Gambi, and Susanna Spinsante. In Computational Intelligence and Neuroscience, 2016.