T-VLAD: Temporal vector of locally aggregated descriptor for multiview human action recognition
Abstract
Robust view-invariant human action recognition (HAR) requires an effective representation of the temporal structure of actions in multi-view videos. This study explores a view-invariant action representation based on convolutional features. Representing an action over long video segments is computationally expensive, whereas features extracted from short segments capture only local temporal context. Previous methods rely on complex multi-stream deep convolutional feature maps extracted over short segments. To address this issue, a novel framework is proposed based on a temporal vector of locally aggregated descriptors (T-VLAD). T-VLAD encodes the long-term temporal structure of a video using single-stream convolutional features computed over short segments. The size of a standard VLAD vector is a multiple of its feature codebook size (256 is normally recommended). VLAD is modified to incorporate the time-order information of segments, so the T-VLAD vector size is a multiple of a smaller time-order codebook size. Previous methods have not been extensively validated against view variation. Here, results are validated in a challenging setup where one view is used for testing and the remaining views are used for training. State-of-the-art results are obtained on three multi-view, fixed-camera datasets (IXMAS, MuHAVi and MCAD), and the proposed T-VLAD encoding also works equally well on UCF101, a dataset with dynamic backgrounds.
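The abstract only sketches T-VLAD at a high level, so the following is a minimal, hypothetical Python sketch (not the authors' implementation) contrasting a standard VLAD encoding, whose output size scales with a 256-entry feature codebook, with a time-order-keyed aggregation whose output size scales with a much smaller time-order codebook. All function names, the use of mean-pooled per-segment CNN features, and the normalisation details are assumptions made for illustration.

```python
import numpy as np

def vlad_encode(features, codebook):
    """Standard VLAD: sum residuals of features to their nearest codeword.
    Output size = n_codewords * feature_dim (e.g. 256 * feature_dim)."""
    n_codewords, dim = codebook.shape
    # Assign each feature to its nearest codeword (Euclidean distance).
    dists = np.linalg.norm(features[:, None, :] - codebook[None, :, :], axis=2)
    assign = dists.argmin(axis=1)
    vlad = np.zeros((n_codewords, dim))
    for k in range(n_codewords):
        if np.any(assign == k):
            vlad[k] = (features[assign == k] - codebook[k]).sum(axis=0)
    vlad = vlad.ravel()
    # Power and L2 normalisation, as commonly used with VLAD.
    vlad = np.sign(vlad) * np.sqrt(np.abs(vlad))
    return vlad / (np.linalg.norm(vlad) + 1e-12)

def tvlad_like_encode(segment_features, time_codebook):
    """Hypothetical time-order-keyed aggregation (a simplification, not the
    paper's exact formulation): per-segment descriptors are pooled against a
    small codebook over normalised time positions, so the output size is a
    multiple of the time-order codebook size rather than 256."""
    n_segments, dim = segment_features.shape
    # Normalised time stamps of the short segments in [0, 1].
    t = np.linspace(0.0, 1.0, n_segments)
    # Assign each segment to its nearest time-order codeword.
    assign = np.abs(t[:, None] - time_codebook[None, :]).argmin(axis=1)
    enc = np.zeros((len(time_codebook), dim))
    for k in range(len(time_codebook)):
        if np.any(assign == k):
            enc[k] = segment_features[assign == k].sum(axis=0)
    enc = enc.ravel()
    enc = np.sign(enc) * np.sqrt(np.abs(enc))
    return enc / (np.linalg.norm(enc) + 1e-12)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    feats = rng.standard_normal((20, 512))     # 20 segments, 512-d CNN features
    feat_cb = rng.standard_normal((256, 512))  # standard 256-entry feature codebook
    time_cb = np.linspace(0.0, 1.0, 8)         # small time-order codebook (8 entries)
    print(vlad_encode(feats, feat_cb).shape)        # (131072,) = 256 * 512
    print(tvlad_like_encode(feats, time_cb).shape)  # (4096,)   =   8 * 512
```

The shape comparison in the usage block illustrates the abstract's size argument: keying the encoding to an 8-entry time-order codebook yields a vector 32 times smaller than one keyed to a 256-entry feature codebook at the same feature dimensionality.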
More information
WOS ID: | 000674680600004 |
Journal: | PATTERN RECOGNITION LETTERS |
Volume: | 148 |
Publisher: | Elsevier |
Publication date: | 2021 |
Start page: | 22 |
End page: | 28 |
DOI: | 10.1016/j.patrec.2021.04.023 |
Notes: | ISI |