Multimodal learning and inference from visual and remotely sensed data

Rao, Dushyant; De Deuge, Mark; Nourani-Vatani, Navid; Williams, Stefan B.; Pizarro, Oscar

Abstract

Autonomous vehicles are often tasked to explore unseen environments, aiming to acquire and understand large amounts of visual image data and other sensory information. In such scenarios, remote sensing data may be available a priori, and can help to build a semantic model of the environment and plan future autonomous missions. In this paper, we introduce two multimodal learning algorithms to model the relationship between visual images taken by an autonomous underwater vehicle during a survey and remotely sensed acoustic bathymetry (ocean depth) data that is available prior to the survey. We present a multi-layer architecture to capture the joint distribution between the bathymetry and visual modalities. We then propose an extension based on gated feature learning models, which allows the model to cluster the input data in an unsupervised fashion and predict visual image features using just the ocean depth information. Our experiments demonstrate that multimodal learning improves semantic classification accuracy regardless of which modalities are available at classification time, allows for unsupervised clustering of either or both modalities, and can facilitate mission planning by enabling class-based or image-based queries.
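The abstract describes a shared representation learned over two modalities (bathymetry and visual imagery) that supports cross-modal prediction when only bathymetry is available. The following is a minimal sketch of that general idea, assuming PyTorch; it is not the authors' architecture, and all layer sizes, names, and the fusion-by-averaging choice are illustrative assumptions.

```python
# Sketch of a joint multimodal autoencoder: two modality-specific encoders
# share a latent space, so visual features can be predicted from bathymetry
# alone. Illustrative only; not the model proposed in the paper.
import torch
import torch.nn as nn

class JointMultimodalAutoencoder(nn.Module):
    def __init__(self, bathy_dim=64, visual_dim=256, shared_dim=128):
        super().__init__()
        # Modality-specific encoders map each input into a common latent space.
        self.bathy_enc = nn.Sequential(nn.Linear(bathy_dim, shared_dim), nn.ReLU())
        self.visual_enc = nn.Sequential(nn.Linear(visual_dim, shared_dim), nn.ReLU())
        # Decoders reconstruct each modality from the shared representation.
        self.bathy_dec = nn.Linear(shared_dim, bathy_dim)
        self.visual_dec = nn.Linear(shared_dim, visual_dim)

    def forward(self, bathy=None, visual=None):
        # Fuse whichever modalities are present (mean of available encodings),
        # so inference can run with only ocean-depth data at prediction time.
        codes = []
        if bathy is not None:
            codes.append(self.bathy_enc(bathy))
        if visual is not None:
            codes.append(self.visual_enc(visual))
        shared = torch.stack(codes).mean(dim=0)
        return self.bathy_dec(shared), self.visual_dec(shared), shared

# Usage: predict visual image features from bathymetry alone.
model = JointMultimodalAutoencoder()
bathy_patch = torch.randn(8, 64)  # hypothetical batch of bathymetry features
_, predicted_visual, latent = model(bathy=bathy_patch)
```

The shared latent vector is what would support unsupervised clustering of either or both modalities, as claimed in the abstract.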

Additional information

WOS ID: WOS:000394774300003
Journal: INTERNATIONAL JOURNAL OF ROBOTICS RESEARCH
Volume: 36
Issue: 1
Publisher: SAGE PUBLICATIONS LTD
Publication date: 2017
Start page: 24
End page: 43
DOI: 10.1177/0278364916679892

Notes: ISI