Video augmentation technique for human action recognition using genetic algorithm
Abstract
Classification models for human action recognition require robust features and large training sets to generalize well. Data augmentation is commonly employed to balance training sets and improve accuracy, but samples generated this way merely reflect existing samples in the training set; their feature representations are less diverse and therefore yield less precise classification. This paper presents new data augmentation and action representation approaches for growing training sets. The proposed approach rests on two fundamental concepts: virtual video generation for augmentation and representation of the action videos through robust features. Virtual videos are generated from the motion history templates of action videos, which are convolved by a convolutional neural network to produce deep features. Guided by the objective function of a genetic algorithm, the spatiotemporal features of different samples are then combined to generate representations of the virtual videos, which are classified with an extreme learning machine classifier on the MuHAVi-Uncut, IXMAS, and IAVID-1 datasets.
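To make the described pipeline concrete, the following is a minimal NumPy sketch of its three stages: motion history templating, genetic-algorithm-driven feature blending to synthesize virtual samples, and extreme learning machine classification. All function names, hyperparameters, and the fitness interface are illustrative assumptions; the CNN deep-feature extraction step is omitted, and this is not the authors' implementation.

```python
import numpy as np

def motion_history_image(frames, thresh=30, tau=255.0, decay=16.0):
    """Accumulate a motion history template over a grayscale frame sequence."""
    mhi = np.zeros_like(frames[0], dtype=np.float32)
    for prev, curr in zip(frames, frames[1:]):
        moving = np.abs(curr.astype(np.int16) - prev.astype(np.int16)) > thresh
        mhi = np.where(moving, tau, np.maximum(mhi - decay, 0.0))
    return mhi

def ga_blend(feat_a, feat_b, fitness, pop=20, gens=50, seed=0):
    """Evolve a per-dimension mixing mask that combines two feature vectors
    into one 'virtual' sample, maximizing a user-supplied fitness function.
    (The paper's actual GA objective over spatiotemporal features may differ.)"""
    rng = np.random.default_rng(seed)
    masks = rng.random((pop, feat_a.size))
    for _ in range(gens):
        virtual = masks * feat_a + (1.0 - masks) * feat_b
        scores = np.array([fitness(v) for v in virtual])
        parents = masks[np.argsort(scores)[::-1][: pop // 2]]  # truncation selection
        idx = rng.integers(0, len(parents), (pop, 2))
        take = rng.random((pop, feat_a.size)) < 0.5             # uniform crossover
        masks = np.where(take, parents[idx[:, 0]], parents[idx[:, 1]])
        masks = np.clip(masks + rng.normal(0.0, 0.05, masks.shape), 0.0, 1.0)
        masks[0] = parents[0]                                   # elitism: keep best
    return masks[0] * feat_a + (1.0 - masks[0]) * feat_b

class ELM:
    """Extreme learning machine: random hidden layer, closed-form output weights."""
    def __init__(self, hidden=512, seed=0):
        self.hidden, self.rng = hidden, np.random.default_rng(seed)
    def fit(self, X, y):
        self.W = self.rng.normal(size=(X.shape[1], self.hidden))
        self.b = self.rng.normal(size=self.hidden)
        H = np.tanh(X @ self.W + self.b)
        # One-hot targets; output weights solved by least squares (pseudoinverse).
        self.beta = np.linalg.pinv(H) @ np.eye(int(y.max()) + 1)[y]
        return self
    def predict(self, X):
        return np.argmax(np.tanh(X @ self.W + self.b) @ self.beta, axis=1)
```

In a full implementation, `fitness` would score how well a candidate virtual feature vector serves the classifier (e.g., class separability), and the feature vectors passed to `ga_blend` would be CNN deep features extracted from the motion history templates rather than the raw templates themselves.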
More information
WOS ID: | 000741309200001 |
Journal title: | ETRI JOURNAL |
Volume: | 44 |
Issue: | 2 |
Publisher: | Wiley |
Publication date: | 2022 |
Start page: | 327 |
End page: | 338 |
DOI: | 10.4218/etrij.2019-0510 |
Notes: | ISI |