Learning to combine classifiers outputs with the transformer for text classification
Abstract
Text classification is a well-explored task that has been applied to a wide range of problems. However, one of its main difficulties is learning from data with class imbalance, i.e., datasets with only a few examples in some classes, which often represent the most interesting cases for the task. In this setting, text classifiers overfit particular classes and show poor performance. To address this problem, we propose a scheme that combines the outputs of different classifiers by encoding them in the encoder of a transformer. By also feeding in a BERT encoding of each example, the encoder learns a joint representation of the text and the classifier outputs. These encodings are then used to train a new text classifier. Since the transformer is a highly complex model, we introduce a data augmentation technique that allows the representation learning task to proceed without overfitting the encoding to a particular class. The data augmentation technique also makes it possible to produce a balanced dataset. Together, representation learning and data augmentation improve the performance of the trained classifiers. Results on benchmark data for two text classification tasks (stance classification and online harassment detection) show that the proposed scheme outperforms all of its direct competitors.
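The abstract does not give implementation details, but a minimal sketch in PyTorch can illustrate the general idea it describes: the class-probability outputs of several base classifiers are projected into token embeddings, concatenated with a text encoding (a stand-in for the BERT [CLS] embedding here), and passed through a transformer encoder whose pooled representation feeds a new classifier. All module names, dimensions, and the pooling choice below are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical sketch of combining classifier outputs with a transformer
# encoder, as described in the abstract. Dimensions and pooling are
# assumptions for illustration only.
import torch
import torch.nn as nn

class ClassifierCombiner(nn.Module):
    def __init__(self, n_classifiers, n_classes,
                 text_dim=768, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # One "token" per base classifier, built from its probability vector.
        self.clf_proj = nn.Linear(n_classes, d_model)
        # Project the (BERT-like) text encoding to the same model width.
        self.text_proj = nn.Linear(text_dim, d_model)
        layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, clf_outputs, text_enc):
        # clf_outputs: (batch, n_classifiers, n_classes) class probabilities
        # text_enc:    (batch, text_dim), e.g. a BERT [CLS] embedding
        tokens = torch.cat(
            [self.text_proj(text_enc).unsqueeze(1),
             self.clf_proj(clf_outputs)], dim=1)  # (batch, 1 + n_classifiers, d_model)
        joint = self.encoder(tokens)              # joint text + outputs representation
        return self.head(joint.mean(dim=1))       # pooled encoding -> new classifier

# Toy usage: 3 base classifiers, 2 classes, batch of 4 examples.
model = ClassifierCombiner(n_classifiers=3, n_classes=2)
probs = torch.softmax(torch.randn(4, 3, 2), dim=-1)
text = torch.randn(4, 768)
logits = model(probs, text)  # (4, 2)
```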
More information
Title according to WOS: Learning to combine classifiers outputs with the transformer for text classification
Journal: INTELLIGENT DATA ANALYSIS
Volume: 24
Publisher: IOS Press
Publication date: 2020
Start page: S15
End page: S41
DOI: 10.3233/IDA-200007
Notes: ISI