Human-Robot Navigation Using Event-Based Cameras and Reinforcement Learning

Bugueño-Córdova, I.; Ruiz-Del-Solar, J.; Verschae, R.

Keywords: event-based cameras; human-robot navigation; reinforcement learning; sensor fusion; social robot

Abstract

This work introduces a robot navigation controller that combines event cameras and other sensors with reinforcement learning to enable real-time human-centered navigation and obstacle avoidance. Unlike conventional image-based controllers, which operate at fixed rates and suffer from motion blur and latency, this approach leverages the asynchronous nature of event cameras to process visual information over flexible time intervals, enabling adaptive inference and control. The framework integrates event-based perception, additional range sensing, and policy optimization via Deep Deterministic Policy Gradient, with an initial imitation learning phase to improve sample efficiency. Promising results are achieved in simulated environments, demonstrating robust navigation, pedestrian following, and obstacle avoidance. A demo video is available at the project website. © 2025 IEEE.
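The abstract highlights that, unlike fixed-rate image-based controllers, event cameras can be integrated over flexible time intervals before inference. The paper's exact event representation is not given here; as a minimal illustrative sketch (not the authors' implementation), the snippet below accumulates an asynchronous event stream over an arbitrary time window into a two-channel polarity-count image, the kind of adaptive-window input a learned policy could consume:

```python
import numpy as np

def accumulate_events(events, t_start, t_end, height, width):
    """Accumulate asynchronous events falling inside [t_start, t_end)
    into a 2-channel count image (one channel per polarity).

    events: structured array of (t, x, y, p) with polarity p in {0, 1}.
    The window bounds are free parameters, so the same stream can be
    integrated over short or long intervals as the controller requires.
    """
    frame = np.zeros((2, height, width), dtype=np.float32)
    t, x, y, p = events["t"], events["x"], events["y"], events["p"]
    mask = (t >= t_start) & (t < t_end)
    # np.add.at performs unbuffered accumulation, so repeated
    # (polarity, y, x) indices are each counted.
    np.add.at(frame, (p[mask], y[mask], x[mask]), 1.0)
    return frame

# Toy stream: three events; only the first two fall inside the window.
events = np.array(
    [(0.00, 1, 2, 1), (0.01, 1, 2, 1), (0.10, 3, 0, 0)],
    dtype=[("t", "f8"), ("x", "i8"), ("y", "i8"), ("p", "i8")],
)
frame = accumulate_events(events, 0.0, 0.05, height=4, width=4)
print(frame[1, 2, 1])  # two ON-polarity events at pixel (x=1, y=2) -> 2.0
```

Varying `t_start`/`t_end` per control step is what makes the inference rate adaptive, in contrast to a conventional camera's fixed frame clock.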

More information

Title according to SCOPUS: Human-Robot Navigation Using Event-Based Cameras and Reinforcement Learning
Journal title: IEEE Computer Society Conference on Computer Vision and Pattern Recognition Workshops
Publisher: IEEE Computer Society
Publication date: 2025
Start page: 5004
End page: 5012
Language: English
DOI: 10.1109/CVPRW67362.2025.00494
Notes: SCOPUS