This paper presents a new approach for pain event detection in video. Unlike previous works that focused on frame-based detection, we aim to detect pain events at the video level. We explore the spatial information of video frames and the dynamic textures of video sequences, and propose two types of features: HOG of fiducial points (P-HOG) is employed to extract spatial features from video frames, and HOG from Three Orthogonal Planes (HOG-TOP) is used to represent the dynamic textures of video subsequences. We then apply max pooling to represent a video sequence as a global feature vector. Multiple Kernel Learning (MKL) is utilized to find an optimal fusion of the two types of features, and an SVM with multiple kernels is trained to perform the final classification. We conduct experiments on the UNBC-McMaster Shoulder Pain dataset and achieve promising results, demonstrating the effectiveness of our approach. Copyright © 2015 IEEE.
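The max-pooling step described in the abstract can be sketched as follows. This is a minimal illustration, not the authors' implementation: the random array stands in for the per-frame (P-HOG) or per-subsequence (HOG-TOP) descriptors, and the dimensions are arbitrary. Max pooling over the temporal axis yields a fixed-length global vector for a video of any length.

```python
import numpy as np

# Hypothetical stand-in for real per-frame descriptors (e.g. P-HOG);
# a video of 30 frames, each described by a 128-dimensional feature.
rng = np.random.default_rng(0)
num_frames, feat_dim = 30, 128
frame_feats = rng.random((num_frames, feat_dim))

# Max pooling over the temporal axis: for each feature dimension, keep
# the strongest response seen anywhere in the video. The result is a
# single global feature vector independent of video length.
video_feat = frame_feats.max(axis=0)

print(video_feat.shape)  # (128,)
```

The resulting vector could then be fed, together with the pooled HOG-TOP vector, into a multi-kernel SVM as described in the abstract.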
Title of host publication: 2015 International Conference on Affective Computing and Intelligent Interaction (ACII 2015)
Place of Publication: Piscataway, NJ
ISBN (Electronic): 9781479999538, 9781479999521
Publication status: Published - 2015
Citation: Chen, J., Chi, Z., & Fu, H. (2015). A new approach for pain event detection in video. In 2015 International Conference on Affective Computing and Intelligent Interaction (ACII 2015) (pp. 250-254). Piscataway, NJ: IEEE.
Keywords: Pain event detection