Cluster Distillation: Semi-supervised Time Series Classification through Clustering-based Self-supervision

Goyo, Manuel Alejandro

Abstract

Time series have always raised great interest among scientists due to their multiple applications in real-world problems. In particular, time series classification using deep learning methods has recently attracted much attention and demonstrated remarkable performance. Unfortunately, most of the techniques studied so far assume that a fully labeled dataset is available for training, a condition that limits the application of these methods in practice. In this paper, we present Cluster Distillation: a technique that leverages all the available data (labeled or unlabeled) for training a deep time series classifier. The method relies on a self-supervised mechanism that generates surrogate labels to guide learning when external supervisory signals are lacking. We create that mechanism by introducing clustering into a Knowledge Distillation framework, in which a first neural net (the Teacher) transfers its beliefs about cluster memberships to a second neural net (the Student), which finally performs semi-supervised classification. Preliminary experiments on ten widely used datasets show that training a convolutional neural net (CNN) with the proposed technique leads to promising results, outperforming state-of-the-art methods in several relevant cases. The implementations are available at: ClusterDistillation
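The mechanism sketched in the abstract can be illustrated, very loosely, with a k-means "teacher" whose soft cluster memberships supervise a linear "student" alongside a handful of true labels. Everything below (the synthetic sinusoid data, the linear student instead of the paper's CNN, the majority-vote alignment of clusters to classes, and the loss weight `lam`) is an illustrative assumption, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy data: two classes of noisy sinusoids (different frequencies, small
# random phase jitter). A hypothetical stand-in for a real benchmark dataset.
n_per, length = 100, 50
t = np.linspace(0, 2 * np.pi, length)
X0 = np.stack([np.sin(t + rng.uniform(0, 0.3)) for _ in range(n_per)])
X1 = np.stack([np.sin(3 * t + rng.uniform(0, 0.3)) for _ in range(n_per)])
X = np.vstack([X0, X1]) + 0.1 * rng.standard_normal((2 * n_per, length))
y = np.r_[np.zeros(n_per, int), np.ones(n_per, int)]
labeled = rng.choice(2 * n_per, 10, replace=False)  # keep only 10 labels

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

# Teacher: k-means over ALL series; its soft cluster memberships act as
# surrogate labels where true labels are missing.
k = 2
cents = X[rng.choice(len(X), k, replace=False)].copy()
for _ in range(25):
    d = ((X[:, None, :] - cents[None, :, :]) ** 2).sum(-1)  # (n, k) sq. dists
    assign = d.argmin(1)
    for j in range(k):
        if (assign == j).any():
            cents[j] = X[assign == j].mean(0)
teacher = softmax(-d / d.mean())  # soften distances into memberships

# Align cluster ids with class ids by majority vote on the labeled subset
# (an assumption of this sketch; the paper may align them differently).
perm = np.array([np.bincount(y[labeled][assign[labeled] == j], minlength=k).argmax()
                 if (assign[labeled] == j).any() else j for j in range(k)])
teacher = teacher[:, np.argsort(perm)]

# Student: linear softmax classifier trained with cross-entropy on the few
# labeled series plus a distillation term toward the teacher on all series.
W, b, lam, lr = np.zeros((length, k)), np.zeros(k), 0.5, 0.5
for _ in range(300):
    p = softmax(X @ W + b)
    grad = lam * (p - teacher)                           # distillation term
    grad[labeled] += p[labeled] - np.eye(k)[y[labeled]]  # supervised term
    grad /= len(X)
    W -= lr * (X.T @ grad)
    b -= lr * grad.sum(0)

acc = (softmax(X @ W + b).argmax(1) == y).mean()
print(f"student accuracy with {len(labeled)}/{len(X)} labels: {acc:.2f}")
```

On this easily separable toy problem the student recovers the class structure from mostly unlabeled data; the paper's actual results on the ten benchmark datasets are obtained with a CNN, not this linear sketch.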

More information

Title according to SCOPUS: Cluster Distillation: Semi-supervised Time Series Classification through Clustering-based Self-supervision
Journal title: Proceedings - International Conference of the Chilean Computer Science Society, SCCC
Volume: 2022-
Publisher: IEEE Computer Society
Publication date: 2022
Language: English
DOI: 10.1109/SCCC57464.2022.10000276

Notes: SCOPUS