Leveraging Metadata in Representation Learning With Georeferenced Seafloor Imagery
Abstract
Camera-equipped Autonomous Underwater Vehicles (AUVs) are now routinely used in seafloor surveys. Obtaining effective representations from the images they collect can enable perception-aware robotic exploration, such as information-gain-guided path planning and target-driven visual navigation. This letter develops a novel self-supervised representation learning method for seafloor images collected by AUVs. The method allows deep convolutional autoencoders to leverage multiple sources of metadata to regularise their learning, prioritising features observed in images that can be correlated with patterns in their metadata. The impact of the proposed regularisation is examined on a dataset of more than 30 k colour seafloor images gathered by an AUV off the coast of Tasmania. The metadata used to regularise learning in this dataset consists of the horizontal location and depth of the observed seafloor. The results show that including metadata in self-supervised representation learning can increase image classification accuracy by up to 15% and never degrades learning performance. We show how effective representation learning can be applied to achieve class-balanced representative image identification, providing a summarised understanding of imbalanced class distributions in an unsupervised way.
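The following is a minimal sketch, not the authors' implementation, of how metadata-regularised autoencoder training could look in practice. It assumes the regularisation takes the form of an auxiliary head that predicts each image's metadata (horizontal location and depth) from the latent code, so the encoder is pushed to retain image features that correlate with metadata patterns. All names (e.g. MetadataRegularisedAE, lambda_meta), the input resolution, and the PyTorch framing are illustrative assumptions.

```python
# Illustrative sketch (not the paper's code): a convolutional autoencoder whose
# training loss adds a metadata-prediction term, so the latent space is
# regularised to keep features that correlate with georeferenced metadata
# (horizontal location and seafloor depth). All names are hypothetical.
import torch
import torch.nn as nn

class MetadataRegularisedAE(nn.Module):
    def __init__(self, latent_dim=64, meta_dim=3):  # meta: (x, y, depth), assumed
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),    # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),   # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(),  # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128 * 8 * 8), nn.ReLU(),
            nn.Unflatten(1, (128, 8, 8)),
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
        # Auxiliary head: predict normalised metadata from the latent code.
        self.meta_head = nn.Linear(latent_dim, meta_dim)

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), self.meta_head(z)

def training_step(model, images, metadata, lambda_meta=0.1):
    """One step: reconstruction loss plus a metadata-consistency penalty."""
    recon, meta_pred = model(images)
    loss_recon = nn.functional.mse_loss(recon, images)
    loss_meta = nn.functional.mse_loss(meta_pred, metadata)
    return loss_recon + lambda_meta * loss_meta

# Usage on dummy data: a batch of 64x64 RGB images with (x, y, depth) metadata.
model = MetadataRegularisedAE()
images = torch.rand(8, 3, 64, 64)
metadata = torch.rand(8, 3)
loss = training_step(model, images, metadata)
loss.backward()
```

In this sketch the balance between reconstructing the image and staying consistent with the metadata is controlled by the assumed weight lambda_meta; the letter's actual regularisation scheme may differ in form.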
More information
WOS ID: | 000685889400021 |
Journal: | IEEE ROBOTICS AND AUTOMATION LETTERS |
Volume: | 6 |
Issue: | 4 |
Publisher: | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Publication date: | 2021 |
Start page: | 7815 |
End page: | 7822 |
DOI: | 10.1109/LRA.2021.3101881 |
Notes: | ISI |