Cross-View Gait Recognition Based on U-Net
Abstract
Gait-based recognition systems enable the automatic identification of subjects from the way they walk. However, the performance of these systems is often degraded by covariate factors such as walking direction, appearance changes, and occlusions, among others. Of these, change in appearance has been shown to be the most influential covariate, drastically affecting recognition performance. Consequently, inspired by the great success of GANs in image translation tasks, we propose a gait recognition method that uses a conditional generative model to generate view-invariant features. The proposed method is evaluated on one of the largest datasets available under variations of view, clothing, and carrying conditions: the CASIA gait database B. Experimental results show that the proposed method outperforms state-of-the-art methods, especially on carrying-bag and wearing-coat sequences. The full implementation and trained networks are available at https://gitlab.com/IsRaTiAl/gait
More information
| Title according to WOS: | ID WOS:000626021407013 Not found in local WOS DB |
| Journal title: | 2020 INTERNATIONAL JOINT CONFERENCE ON NEURAL NETWORKS, IJCNN 2020 |
| Publisher: | IEEE |
| Publication date: | 2020 |
| DOI: | 10.1109/ijcnn48605.2020.9207501 |
| Notes: | ISI |