Clinical Flair: A Pre-Trained Language Model for Spanish Clinical Natural Language Processing
Abstract
Word embeddings have been widely used in Natural Language Processing (NLP) tasks. Although these representations can capture the semantic information of words, they cannot learn sequence-level semantics. This problem can be handled using contextual word embeddings derived from pre-trained language models, which have contributed to significant improvements in several NLP tasks. Further improvements are achieved when pre-training these models on domain-specific corpora. In this paper, we introduce Clinical Flair, a domain-specific language model trained on Spanish clinical narratives. To validate the quality of the contextual representations retrieved from our model, we tested them on four named entity recognition datasets belonging to the clinical and biomedical domains. Our experiments confirm that incorporating domain-specific embeddings into classical sequence labeling architectures improves model performance dramatically compared to general-domain embeddings, demonstrating the importance of having these resources available.
More information
| Title according to SCOPUS: | Clinical Flair: A Pre-Trained Language Model for Spanish Clinical Natural Language Processing |
| Published in: | ClinicalNLP 2022 - 4th Workshop on Clinical Natural Language Processing, Proceedings |
| Publisher: | Association for Computational Linguistics (ACL) |
| Publication date: | 2022 |
| Last page: | 92 |
| Language: | English |
| Notes: | SCOPUS |