Enrique Manjavacas ; Lauren Fonteyn - Adapting vs. Pre-training Language Models for Historical Languages

jdmdh:9152 - Journal of Data Mining & Digital Humanities, June 13, 2022, NLP4DH - https://doi.org/10.46298/jdmdh.9152

Authors: Enrique Manjavacas; Lauren Fonteyn

As large language models such as BERT become increasingly popular in Digital Humanities (DH), the question has arisen of how such models can be made suitable for application to specific textual domains, including that of 'historical text'. Large language models like BERT can be pre-trained from scratch on a specific textual domain and achieve strong performance on a series of downstream tasks. However, this is a costly endeavour, both in terms of the computational resources and the substantial amounts of training data it requires. An appealing alternative, then, is to employ an existing 'general-purpose' model (pre-trained on present-day language) and subsequently adapt it to a specific domain by further pre-training. Focusing on the domain of historical English, this paper demonstrates that pre-training on domain-specific (i.e. historical) data from scratch yields a generally stronger background model than adapting a present-day language model. We show this on the basis of a variety of downstream tasks, ranging from established tasks such as Part-of-Speech tagging, Named Entity Recognition and Word Sense Disambiguation to ad-hoc tasks like Sentence Periodization, which are specifically designed to test historically relevant processing.


Volume: NLP4DH
Section: Digital humanities in languages
Published on: June 13, 2022
Accepted on: April 5, 2022
Submitted on: March 1, 2022
Keywords: [INFO.INFO-CL]Computer Science [cs]/Computation and Language [cs.CL]
