Saliency for fine-grained object recognition in domains with scarce training data

Flores C.F.; Gonzalez-Garcia A.; van de Weijer J.; Raducanu B.

Abstract

This paper investigates the role of saliency in improving the classification accuracy of a Convolutional Neural Network (CNN) when only scarce training data is available. Our approach consists of adding a saliency branch to an existing CNN architecture, which modulates the standard bottom-up visual features from the original image input, acting as an attentional mechanism that guides the feature extraction process. The main aim of the proposed approach is to enable the effective training of a fine-grained recognition model with limited training samples and to improve its performance on the task, thereby alleviating the need to annotate a large dataset. The vast majority of saliency methods are evaluated on their ability to generate saliency maps, not on their usefulness within a complete vision pipeline. Our proposed pipeline allows us to evaluate saliency methods on the high-level task of object recognition. We perform extensive experiments on several fine-grained datasets (Flowers, Birds, Cars, and Dogs) under different conditions and show that saliency can considerably improve the network's performance, especially when training data is scarce. Furthermore, our experiments show that saliency methods that produce better saliency maps (as measured by traditional saliency benchmarks) also yield larger performance gains when applied in an object recognition pipeline. (C) 2019 Published by Elsevier Ltd.
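To illustrate the kind of saliency-modulated feature extraction the abstract describes, the following is a minimal sketch, not the authors' exact architecture: it assumes a generic torchvision backbone, a saliency map supplied by some external saliency method, and a simple multiplicative modulation of the convolutional feature maps (the class name, the residual-style modulation, and all hyperparameters are illustrative assumptions).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision import models


class SaliencyModulatedCNN(nn.Module):
    """Sketch of a classifier whose convolutional features are modulated
    by an externally computed saliency map (hypothetical design, not the
    exact architecture proposed in the paper)."""

    def __init__(self, num_classes: int):
        super().__init__()
        backbone = models.resnet18(weights=None)  # any ImageNet-style CNN could be used
        # Keep all layers up to (but not including) the global pooling / fc head.
        self.features = nn.Sequential(*list(backbone.children())[:-2])
        self.classifier = nn.Linear(backbone.fc.in_features, num_classes)

    def forward(self, image: torch.Tensor, saliency: torch.Tensor) -> torch.Tensor:
        # image:    (B, 3, H, W) RGB input
        # saliency: (B, 1, H, W) saliency map in [0, 1] from any saliency method
        feats = self.features(image)  # (B, C, h, w)
        # Resize the saliency map to the spatial size of the feature maps
        # and use it as a multiplicative attention mask.
        sal = F.interpolate(saliency, size=feats.shape[-2:],
                            mode="bilinear", align_corners=False)
        feats = feats * (1.0 + sal)   # residual-style modulation (one possible choice)
        pooled = F.adaptive_avg_pool2d(feats, 1).flatten(1)
        return self.classifier(pooled)


# Example usage with random tensors:
model = SaliencyModulatedCNN(num_classes=102)  # e.g. Flowers-102
img = torch.randn(4, 3, 224, 224)
sal = torch.rand(4, 1, 224, 224)
logits = model(img, sal)                       # shape: (4, 102)
```

The multiplicative mask simply re-weights spatial locations of the feature maps, so salient regions contribute more to the pooled representation; the actual branch in the paper may combine the saliency signal with the image features differently.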

More information

Title according to WOS: Saliency for fine-grained object recognition in domains with scarce training data
Title according to SCOPUS: Saliency for fine-grained object recognition in domains with scarce training data
Journal title: PATTERN RECOGNITION
Volume: 94
Publisher: ELSEVIER SCI LTD
Publication date: 2019
Start page: 62
End page: 73
Language: English
DOI: 10.1016/j.patcog.2019.05.002

Notes: ISI, SCOPUS