A new approach for pain event detection in video

Junkai CHEN, Zheru CHI, Hong FU

Research output: Chapter in Book/Report/Conference proceeding › Chapter

Abstract

This paper presents a new approach for pain event detection in video. Unlike previous work that focused on frame-based detection, we target detecting pain events at the video level. We explore the spatial information of video frames and the dynamic textures of video sequences, and propose two types of features: HOG of fiducial points (P-HOG) extracts spatial features from video frames, and HOG from Three Orthogonal Planes (HOG-TOP) represents the dynamic textures of video subsequences. Max pooling is then applied to aggregate these features into a single global feature vector for each video sequence. Multiple Kernel Learning (MKL) is utilized to find an optimal fusion of the two types of features, and an SVM with multiple kernels is trained to perform the final classification. Experiments on the UNBC-McMaster Shoulder Pain dataset achieve promising results, showing the effectiveness of our approach. Copyright © 2015 IEEE.
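The pipeline described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the function names are hypothetical, the per-frame descriptors are assumed to be precomputed (P-HOG per frame, HOG-TOP per subsequence), and the fixed kernel weight `w` stands in for the weights that MKL would actually learn.

```python
import numpy as np

def max_pool_video(frame_features):
    """Aggregate per-frame descriptors into one global video descriptor.

    frame_features: (T, D) array, one D-dim descriptor per frame
    (e.g., P-HOG). Max pooling over the time axis yields a single
    D-dim vector representing the whole video sequence.
    """
    return frame_features.max(axis=0)

def combined_kernel(Xa, Xb, Ya, Yb, w=0.5):
    """Stand-in for MKL fusion: a fixed convex combination of two
    linear kernels, one per feature type (P-HOG and HOG-TOP).
    The paper learns the combination weights with MKL; here `w`
    is a hypothetical fixed weight for illustration.
    """
    K_phog = Xa @ Ya.T      # kernel over spatial (P-HOG) descriptors
    K_hogtop = Xb @ Yb.T    # kernel over dynamic (HOG-TOP) descriptors
    return w * K_phog + (1.0 - w) * K_hogtop
```

The fused kernel matrix could then be passed to an SVM that accepts precomputed kernels for the final pain/no-pain classification of each video.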
Original language: English
Title of host publication: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII 2015)
Place of Publication: Piscataway, NJ
Publisher: IEEE
Pages: 250-254
ISBN (Electronic): 9781479999538, 9781479999521
ISBN (Print): 9781479999545
DOIs
Publication status: Published - 2015

Citation

Chen, J., Chi, Z., & Fu, H. (2015). A new approach for pain event detection in video. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII 2015) (pp. 250-254). Piscataway, NJ: IEEE.

Keywords

  • Pain event detection
  • P-HOG
  • HOG-TOP
