InverseTime: A Self-Supervised Technique for Semi-Supervised Classification of Time Series
Abstract
Time series classification (TSC) is a fundamental and challenging problem in machine learning. Deep learning models typically achieve remarkable performance on this task but are constrained by the need for vast amounts of labeled data to generalize effectively. In this paper, we present InverseTime, a method that addresses this limitation by incorporating a novel self-supervised pretext task into the training objective. In this task, each training time series is presented both in its original chronological order and in time-reversed form, and the model is trained to recognize whether time inversion was applied to the input. We found that this simple task provides a supervisory signal that significantly aids model training when explicit category labels are scarce, enabling semi-supervised TSC. Through comprehensive experiments on twelve diverse time-series datasets spanning different domains, we demonstrate that our method consistently outperforms prior approaches, including various consistency regularization methods. These results show that self-supervision is a promising approach to circumvent the annotation bottleneck in time series applications.
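The following is a minimal sketch of the time-reversal pretext task described in the abstract, written in PyTorch purely for illustration. The encoder architecture, head names, and the unweighted combination of the pretext loss with the scarce supervised loss are assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Encoder(nn.Module):
    """Toy 1D-CNN encoder mapping (batch, channels, length) to a feature vector."""
    def __init__(self, in_channels=1, hidden=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv1d(in_channels, hidden, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )

    def forward(self, x):
        return self.conv(x).squeeze(-1)  # (batch, hidden)

encoder = Encoder()
class_head = nn.Linear(64, 5)      # predicts the (scarce) class labels
inverse_head = nn.Linear(64, 2)    # predicts whether the input was time-reversed

def pretext_batch(x):
    """Duplicate a batch in original and reversed chronological order."""
    x_rev = torch.flip(x, dims=[2])  # reverse along the time axis
    inputs = torch.cat([x, x_rev], dim=0)
    targets = torch.cat([torch.zeros(len(x)), torch.ones(len(x))]).long()
    return inputs, targets

# One illustrative training step combining both objectives.
x_unlabeled = torch.randn(8, 1, 128)   # unlabeled series
x_labeled = torch.randn(4, 1, 128)     # a few labeled series
y_labeled = torch.randint(0, 5, (4,))

opt = torch.optim.Adam(
    list(encoder.parameters())
    + list(class_head.parameters())
    + list(inverse_head.parameters()),
    lr=1e-3,
)

xp, yp = pretext_batch(x_unlabeled)
loss_pretext = F.cross_entropy(inverse_head(encoder(xp)), yp)
loss_supervised = F.cross_entropy(class_head(encoder(x_labeled)), y_labeled)
loss = loss_supervised + loss_pretext  # equal weighting is an assumption
opt.zero_grad()
loss.backward()
opt.step()
```

In this sketch the pretext labels come for free from the data itself, which is what lets the unlabeled series contribute a training signal alongside the few labeled examples.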
More information
WOS ID: | 001354541400001 |
Journal title: | IEEE ACCESS |
Volume: | 12 |
Publisher: | IEEE-INST ELECTRICAL ELECTRONICS ENGINEERS INC |
Publication date: | 2024 |
Start page: | 165081 |
End page: | 165093 |
DOI: | 10.1109/ACCESS.2024.3486669 |
Notes: | ISI |