Deep Learning for Time-Series Forecasting With Exogenous Variables in Energy Consumption: A Performance and Interpretability Analysis

Mohammadi, A; Nápoles, G; Salgueiro, Y

Keywords: time series analysis, forecasting, data models, transformers, predictive models, time-series forecasting, autoregressive processes, interpretability, deep learning, load modeling, short-term load forecasting, explainable AI, long short-term memory

Abstract

The rise of decentralized energy sources and renewables demands advanced grid planning, with short-term load forecasting (STLF) playing a crucial role. Energy demand in smart grids is highly variable and influenced by external factors, making accurate forecasting challenging. While deep learning models excel in time-series forecasting, their ability to integrate exogenous variables remains uncertain. This study evaluates different deep learning architectures for STLF, including recurrent (Long Short-Term Memory, LSTM), probabilistic (Deep Autoregressive, DeepAR), attention-based (Temporal Fusion Transformer, TFT), and foundation models (TimeLLM), each designed to capture temporal dependencies differently. The Smart Meters in London dataset, comprising 1.6 million energy consumption records enriched with weather and socio-demographic data, was used for evaluation. Results show that incorporating exogenous variables reduces forecasting error, with TFT achieving a test MAE of 1.643, RMSE of 2.903, and sMAPE of 18.8% on a lower-variability subset, leveraging long-range dependencies for enhanced predictions. DeepAR outperformed other models on the larger, noisier dataset, achieving a test MAE of 1.699, RMSE of 3.310, and sMAPE of 18.8%, demonstrating strong generalization. LSTM performed reasonably well but struggled to utilize exogenous information, leading to higher forecasting errors. TimeLLM, despite tailored prompting for time-series forecasting, failed to outperform task-specific models, with a test MAE of 2.193, RMSE of 3.473, and sMAPE of 22.89%, highlighting the challenges of adapting foundation models trained on different modalities to structured time-series forecasting. Beyond predictive accuracy, interpretability analysis challenges the assumption that attention mechanisms reliably capture feature importance. 
Performance degradation analysis revealed that perturbing features ranked highly by SHAP values and permutation feature importance led to sharper error increases than attention-based rankings, with permutation demonstrating the strongest correlation with prediction errors. These findings underscore the role of Explainable AI (XAI) in time-series forecasting, emphasizing the need for robust interpretability frameworks to ensure transparency in deep learning-based predictions.
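
The performance degradation analysis described above can be sketched as permutation feature importance: shuffle one feature column at a time and measure how much the forecasting error grows. The snippet below is a minimal illustration of that idea, not the paper's actual pipeline; the linear toy model, synthetic data, and MAE metric are stand-ins for the trained LSTM/DeepAR/TFT forecasters and the London smart-meter features.

```python
# Hedged sketch of permutation feature importance for a forecaster.
# The model and data here are illustrative stand-ins, not the study's setup.
import numpy as np

def mae(y_true, y_pred):
    """Mean absolute error, one of the metrics reported in the study."""
    return float(np.mean(np.abs(y_true - y_pred)))

def permutation_importance(model, X, y, metric, n_repeats=5, seed=0):
    """Error increase when each feature column is shuffled.

    A larger increase over the baseline error means the model relies
    more heavily on that feature for its predictions.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, model(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # break the feature-target relationship
            scores.append(metric(y, model(Xp)))
        importances[j] = np.mean(scores) - baseline
    return importances

# Toy data: feature 0 drives the target, feature 1 is weak, feature 2 is noise.
rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)
model = lambda X: 3.0 * X[:, 0] + 0.5 * X[:, 1]  # stands in for a trained forecaster

imp = permutation_importance(model, X, y, mae)
print(imp)  # feature 0 should rank highest, feature 2 near zero
```

Features whose permutation causes the sharpest error increase are ranked most important; the paper reports that rankings produced this way (and by SHAP values) correlated better with actual prediction-error behavior than attention-based rankings did.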

More information

Title according to WOS: Deep Learning for Time-Series Forecasting With Exogenous Variables in Energy Consumption: A Performance and Interpretability Analysis
Journal title: IEEE ACCESS
Volume: 13
Publisher: IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC
Publication date: 2025
Start page: 86746
End page: 86767
Language: English
DOI: 10.1109/ACCESS.2025.3570618

Notes: ISI