Deep reinforcement learning for optimal firebreak placement in forest fire prevention
Abstract
The increasing frequency and intensity of large wildfires have become a significant natural hazard, requiring the development of advanced decision-support tools for resilient landscape design. Existing methods, such as Mixed Integer Programming and Stochastic Optimization, while effective, are computationally demanding. In this study, we introduce a novel Deep Reinforcement Learning (DRL) methodology to optimize the strategic placement of firebreaks across diverse landscapes. We employ Deep Q-Learning, Double Deep Q-Learning, and Dueling Double Deep Q-Learning, integrated with the Cell2Fire fire spread simulator and Convolutional Neural Networks. Our DRL agent successfully learns optimal firebreak locations, demonstrating superior performance compared to heuristics, especially after incorporating a pre-training loop. This improvement ranges from 1.59% to 1.7% relative to the heuristic, depending on the size of the instance, and from 4.79% to 6.81% when compared to a random solution. Our results highlight the potential of DRL for fire prevention, showing convergence with favorable results in cases as large as 40 × 40 cells. This study represents a pioneering application of reinforcement learning to fire prevention and landscape management.
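To make the approach concrete, the sketch below shows a minimal dueling convolutional Q-network over a gridded landscape, where each action corresponds to placing a firebreak in one cell of the raster. This is an illustrative assumption, not the authors' implementation: the use of PyTorch, the layer sizes, the single-channel fuel input, and the class name DuelingFirebreakQNet are all hypothetical choices made for the example; the paper's actual architecture and its coupling to Cell2Fire are described in the full text.

import torch
import torch.nn as nn

class DuelingFirebreakQNet(nn.Module):
    """Illustrative dueling Q-network: grid state -> Q-value per candidate cell."""

    def __init__(self, grid_size: int = 20, in_channels: int = 1):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Flatten(),
        )
        flat = 64 * grid_size * grid_size
        n_actions = grid_size * grid_size  # one action per candidate firebreak cell
        # Dueling heads: state value V(s) and per-action advantages A(s, a)
        self.value = nn.Sequential(nn.Linear(flat, 256), nn.ReLU(), nn.Linear(256, 1))
        self.advantage = nn.Sequential(nn.Linear(flat, 256), nn.ReLU(), nn.Linear(256, n_actions))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.features(x)
        v = self.value(h)       # shape (batch, 1)
        a = self.advantage(h)   # shape (batch, n_actions)
        # Dueling aggregation: Q(s, a) = V(s) + A(s, a) - mean_a A(s, a)
        return v + a - a.mean(dim=1, keepdim=True)

if __name__ == "__main__":
    net = DuelingFirebreakQNet(grid_size=20)
    state = torch.randn(1, 1, 20, 20)      # e.g. a normalized fuel/vegetation raster
    q_values = net(state)                  # Q-value for each candidate cell
    best_cell = q_values.argmax(dim=1)     # greedy firebreak placement
    print(q_values.shape, best_cell)

In a full Double DQN training loop, the reward for a chosen set of firebreak cells would be obtained by running the fire spread simulator (Cell2Fire in the paper) on the modified landscape and measuring the reduction in burned area; that simulation step is omitted here.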
More information
WOS ID: 001460515900001
Journal: APPLIED SOFT COMPUTING
Volume: 175
Publisher: Elsevier
Publication date: 2025
DOI: 10.1016/j.asoc.2025.113043
Notes: ISI