Learning to recognise 3D human action from a new skeleton-based representation using deep convolutional neural networks

Pham H.-H.; Khoudour L.; Crouzil A.; Zegers P.; Velastin S.A.

Abstract

Recognising human actions in untrimmed videos is an important and challenging task. An effective three-dimensional (3D) motion representation and a powerful learning model are two key factors influencing recognition performance. In this study, the authors introduce a new skeleton-based representation for 3D action recognition in videos. The key idea of the proposed representation is to transform the 3D joint coordinates of the human body carried in skeleton sequences into RGB images via a colour encoding process. By normalising the 3D joint coordinates and dividing each skeleton frame into five parts, in which the joints are concatenated according to the order of their physical connections, the colour-coded representation is able to capture the spatio-temporal evolution of complex 3D motions, independently of the length of each sequence. The authors then design and train different deep convolutional neural networks based on the residual network (ResNet) architecture on the obtained image-based representations to learn 3D motion features and classify them into action classes. The proposed method is evaluated on two widely used action recognition benchmarks: MSR Action3D and NTU-RGB+D, a very large-scale dataset for 3D human action recognition. The experimental results demonstrate that the proposed method outperforms previous state-of-the-art approaches while requiring less computation for training and prediction.
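To make the encoding step concrete, the sketch below shows how a skeleton sequence could be turned into a fixed-size RGB image in the spirit described above. It is a minimal illustration under stated assumptions, not the authors' implementation: the joint indices in BODY_PARTS, the per-sequence min-max normalisation, and the 32x32 output size are all hypothetical choices made for the example; the paper's exact joint ordering, normalisation, and image size may differ.

```python
import numpy as np
from PIL import Image

# Hypothetical joint indices grouped into five body parts. The exact
# grouping and the order of physical connections used in the paper
# are assumptions here, chosen only for illustration.
BODY_PARTS = [
    [3, 2, 1, 0],      # head -> spine
    [4, 5, 6, 7],      # left arm
    [8, 9, 10, 11],    # right arm
    [12, 13, 14, 15],  # left leg
    [16, 17, 18, 19],  # right leg
]

def skeleton_to_rgb(sequence, out_size=(32, 32)):
    """Encode a skeleton sequence of shape (T, J, 3) as an RGB image.

    Rows correspond to joints (concatenated body part by body part),
    columns to frames, and the (x, y, z) coordinates of each joint map
    to the (R, G, B) channels of the corresponding pixel.
    """
    joint_order = [j for part in BODY_PARTS for j in part]
    seq = sequence[:, joint_order, :]  # reorder joints by body part

    # Normalise each coordinate axis to [0, 1] over the whole sequence
    # (an assumed normalisation scheme).
    mins = seq.min(axis=(0, 1), keepdims=True)
    maxs = seq.max(axis=(0, 1), keepdims=True)
    seq = (seq - mins) / (maxs - mins + 1e-8)

    # Quantise to 8-bit colour: rows = joints, columns = frames.
    img = (255 * seq.transpose(1, 0, 2)).astype(np.uint8)

    # Resize to a fixed size so the representation is independent of
    # the sequence length T, as the abstract requires.
    return np.asarray(Image.fromarray(img).resize(out_size))
```

An image produced this way can be fed to any standard image classifier; that independence from sequence length is what lets the method reuse off-the-shelf ResNet-style architectures for the 3D motion classification step.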

More information

Title according to WOS and SCOPUS: Learning to recognise 3D human action from a new skeleton-based representation using deep convolutional neural networks
Journal title: IET COMPUTER VISION
Volume: 13
Issue: 3
Publisher: INST ENGINEERING TECHNOLOGY-IET
Publication date: 2019
Start page: 319
End page: 328
Language: English
DOI: 10.1049/iet-cvi.2018.5014

Notes: ISI, SCOPUS