Multilabel Text Classification with Label-Dependent Representation

Alfaro Arancibia, Rodrigo; Allende-Cid, Héctor; Allende, Héctor

Keywords: machine learning, text classification, text representation, multilabel

Abstract

Assigning predefined classes to natural-language texts based on their content is a necessary component of many tasks in organizations. This task is carried out by classifying documents into a set of predefined categories using computational models and methods. Text representation for classification purposes has traditionally relied on the vector space model due to its good performance and simplicity. Multilabel text classification, in turn, has typically been approached either by transforming the problem so that single-label, binary techniques can be applied, or by adapting binary algorithms to handle multiple labels directly. Over the past decade, text classification has been extended with deep learning models. Compared to traditional machine learning methods, deep learning avoids manual rule design and feature selection and automatically provides semantically meaningful representations for text analysis. However, deep learning-based text classification is data-intensive and computationally complex. Interest in deep learning models does not rule out techniques based on shallow learning, which remain relevant when the training set and the feature set are small. White-box approaches also have advantages over black-box approaches, most notably the feasibility of working with relatively small data sets and the interpretability of their results. This research evaluates a weighting function for the words in a text that modifies the text representation for multilabel classification, combining two approaches: problem transformation and model adaptation. The weighting function was tested on 10 reference text data sets and compared with alternative techniques using three performance measures: Hamming Loss, Accuracy, and macro-F1. The largest improvement occurs in macro-F1 when the data sets have fewer labels, fewer documents, and smaller vocabularies; performance also improves on data sets with higher label cardinality, density, and diversity. This demonstrates the usefulness of the function on smaller data sets. The results show improvements of more than 10% in macro-F1 for classifiers based on our method in almost all of the cases analyzed.
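As a concrete illustration of the evaluation setup described in the abstract, the following Python sketch applies the problem-transformation approach (one binary classifier per label, i.e., Binary Relevance) to a toy corpus and reports the three measures used in the paper. Everything in it is an assumption for illustration: plain TF-IDF stands in for the paper's label-dependent weighting function, and the corpus, labels, and classifier choice are hypothetical, not taken from the article.

```python
# Minimal sketch: multilabel classification via problem transformation
# (Binary Relevance) evaluated with Hamming Loss, Accuracy, and macro-F1.
# Plain TF-IDF is used here; the paper instead modifies the term weights
# with a label-dependent function, which this sketch does not reproduce.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, f1_score, hamming_loss
from sklearn.multiclass import OneVsRestClassifier
from sklearn.preprocessing import MultiLabelBinarizer

# Toy corpus with overlapping labels (hypothetical, for illustration only).
docs = [
    "stock markets fall amid inflation fears",
    "new vaccine shows promise in trials",
    "central bank raises interest rates",
    "clinical study links diet to heart health",
    "tech stocks rally after earnings reports",
    "hospital adopts new diagnostic software",
]
labels = [
    {"economy"},
    {"health"},
    {"economy"},
    {"health"},
    {"economy", "technology"},
    {"health", "technology"},
]

# Encode label sets as a binary indicator matrix.
mlb = MultiLabelBinarizer()
Y = mlb.fit_transform(labels)

# Vector space model representation of the documents.
X = TfidfVectorizer().fit_transform(docs)

# Problem transformation: one binary classifier per label.
clf = OneVsRestClassifier(LogisticRegression(max_iter=1000))
clf.fit(X, Y)
Y_pred = clf.predict(X)

# The three performance measures reported in the paper.
print("Hamming Loss:   ", hamming_loss(Y, Y_pred))
print("Subset Accuracy:", accuracy_score(Y, Y_pred))
print("macro-F1:       ", f1_score(Y, Y_pred, average="macro", zero_division=0))
```

Note that accuracy_score on a multilabel indicator matrix computes subset accuracy, counting a prediction as correct only when the entire label set matches, which is why it is typically the strictest of the three measures.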

More information

Journal: APPLIED SCIENCES-BASEL
Volume: 13
Issue: 6
Publisher: MDPI
Publication date: 2023
Language: English
URL: https://www.mdpi.com/2076-3417/13/6/3594
Notes: WOS