Shimosaka Research Group

pursuing MIUBIQ (machine intelligence in UbiComp Research)

Consistent collective activity recognition with fully connected CRFs

2014/09/01 | Projects

We propose a novel method for consistent collective activity recognition in video. Collective activities are activities performed by multiple people, such as queuing in a line, talking together, or waiting at an intersection. Because these activities are often difficult to distinguish from the appearance of an individual person alone, models proposed in recent studies exploit the contextual information of other people nearby. However, these models do not sufficiently account for spatial and temporal consistency within a group (e.g., they enforce consistency only in an adjacent area), so they cannot effectively handle temporary misclassification or deal with multiple simultaneous collective activities in a scene. To overcome this drawback, we integrate the individual recognition results via fully connected conditional random fields (CRFs), which consider the interactions among all pairs of people in a video clip and adjust the interaction strength according to the degree of their similarity. Unlike previous methods that restrict the interactions heuristically (e.g., to a constant area), our method describes “multi-scale” interactions over several features, namely position, size, motion, and time, allowing groups of various types, sizes, and shapes to be handled. Experimental results on two challenging video datasets indicate that our model outperforms not only other graph topologies but also state-of-the-art models.
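
A fully connected CRF of this kind, in which every pair of people interacts with a strength determined by their similarity in position, size, motion, and time, can be approximated with the mean-field inference commonly used for dense CRFs with Gaussian pairwise potentials. The sketch below is a minimal illustration under that assumption, not the authors' implementation: the feature layout, kernel bandwidths, kernel weights, and Potts-style label compatibility are all placeholders chosen for the example.

    import numpy as np

    def mean_field_fully_connected_crf(unary, feats, bandwidths, weights, compat, n_iters=10):
        """Illustrative mean-field inference for a fully connected CRF over person detections.

        unary      : (N, L) negative log scores from a per-person activity classifier
        feats      : (N, D) per-detection features, e.g. [x, y, size, vx, vy, t] (assumed layout)
        bandwidths : list of (D,) Gaussian kernel bandwidths ("multi-scale" interactions)
        weights    : list of scalar weights, one per kernel scale
        compat     : (L, L) label compatibility matrix (e.g. Potts: 1 - I)
        """
        N, L = unary.shape
        Q = np.exp(-unary)
        Q /= Q.sum(axis=1, keepdims=True)              # initialize with the unary softmax

        # Pairwise affinities: a sum of Gaussian kernels at several scales over all pairs.
        diff = feats[:, None, :] - feats[None, :, :]   # (N, N, D) pairwise feature differences
        K = np.zeros((N, N))
        for theta, w in zip(bandwidths, weights):
            K += w * np.exp(-0.5 * np.sum((diff / theta) ** 2, axis=-1))
        np.fill_diagonal(K, 0.0)                       # no self-interaction

        for _ in range(n_iters):
            msg = K @ Q                                # message passing over all pairs at once
            Q = np.exp(-unary - msg @ compat)          # compatibility transform plus unary term
            Q /= Q.sum(axis=1, keepdims=True)          # renormalize the marginals
        return Q

    # Toy usage: 5 detections, 3 activity labels (e.g. queuing, talking, waiting).
    rng = np.random.default_rng(0)
    unary = -np.log(rng.dirichlet(np.ones(3), size=5))     # stand-in for per-person classifier output
    feats = rng.normal(size=(5, 6))                        # assumed [x, y, size, vx, vy, t]
    Q = mean_field_fully_connected_crf(
        unary, feats,
        bandwidths=[np.full(6, 1.0), np.full(6, 3.0)],     # two interaction scales
        weights=[1.0, 0.5],
        compat=1.0 - np.eye(3),                            # Potts-style compatibility
    )
    labels = Q.argmax(axis=1)

Building the explicit N-by-N affinity matrix as above is only practical for the modest number of detections in a clip; for larger problems, efficient high-dimensional filtering is normally used to compute the same messages.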

Publications

Takuhiro Kaneko, Masamichi Shimosaka, Shigeyuki Odashima, Rui Fukui, and Tomomasa Sato.
A fully connected model for consistent collective activity recognition in videos.
Pattern Recognition Letters, Vol. 43, pp. 109–118, 2014. [audible slides]

Takuhiro Kaneko, Masamichi Shimosaka, Shigeyuki Odashima, Rui Fukui, and Tomomasa Sato.
Consistent collective activity recognition with fully connected CRFs.
In Proceedings of the 21st International Conference on Pattern Recognition (ICPR 2012), pp. 2792–2795, Tsukuba, Japan, November 2012.
ICPR 2012 Best Student Paper Award

Takuhiro Kaneko, Masamichi Shimosaka, Shigeyuki Odashima, Rui Fukui, and Tomomasa Sato.
Viewpoint invariant collective activity recognition with relative action context.
In ECCV 2012 Proceedings (Part III), Lecture Notes in Computer Science, Vol. 7585, pp. 253–262, Springer, Florence, Italy, October 2012.

