Tokenization of modern and old Western European languages seems fairly simple, as it relies mostly on the presence of markers such as spaces and punctuation. However, when dealing with old sources such as manuscripts written in scripta continua, ancient epigraphy, or medieval manuscripts, (1) such markers are mostly absent and (2) spelling variation and rich morphology make dictionary-based approaches difficult. Applying convolutional encoding to characters, followed by linear classification of each character as a word boundary or word-internal, is shown to be effective at tokenizing such inputs. Additionally, the software is released with a simple interface for tokenizing a corpus or generating a training set.
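To make the approach concrete, the following is a minimal sketch of such a character-level model in PyTorch. It is illustrative only: the module name CharCNNTokenizer, the hyperparameters, and the example string are assumptions, not the released software's actual implementation.

```python
import torch
import torch.nn as nn

class CharCNNTokenizer(nn.Module):
    """Labels each character as a word boundary (1) or word-internal (0).
    Hypothetical sketch; the released software may differ in architecture
    and hyperparameters."""

    def __init__(self, vocab_size, emb_dim=64, channels=128, kernel=5, n_classes=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        # Convolutional encoding over the character sequence;
        # padding keeps the output length equal to the input length.
        self.conv = nn.Conv1d(emb_dim, channels, kernel, padding=kernel // 2)
        # Position-wise linear classifier: boundary vs. in-word.
        self.classify = nn.Linear(channels, n_classes)

    def forward(self, char_ids):                      # (batch, seq_len)
        x = self.embed(char_ids).transpose(1, 2)      # (batch, emb, seq)
        x = torch.relu(self.conv(x)).transpose(1, 2)  # (batch, seq, channels)
        return self.classify(x)                       # (batch, seq, n_classes)

# Usage: predict boundaries for a string with no spaces.
vocab = {c: i + 1 for i, c in enumerate("abcdefghijklmnopqrstuvwxyz")}
model = CharCNNTokenizer(vocab_size=len(vocab) + 1)
text = "ladamedufayel"  # invented Old French-like string, written without spaces
ids = torch.tensor([[vocab[c] for c in text]])
labels = model(ids).argmax(-1)[0]  # 1 = "a word ends after this character"
print(labels.tolist())  # random until the model is trained
```

In this framing, tokenization becomes per-character sequence labeling: once each character is tagged as boundary or word-internal, the token stream is recovered by splitting at the predicted boundaries.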
Section: Towards a digital ecosystem: NLP. Corpus infrastructure. Methods for text retrieval and text similarity computation
Published: 7 April 2020
Accepted: 7 April 2020
Submitted: 18 June 2019
Keywords: convolutional network, scripta continua, tokenization, Old French, word segmentation, [SHS.LANGUE] Humanities and Social Sciences/Linguistics, [SHS.CLASS] Humanities and Social Sciences/Classical studies, [INFO] Computer Science [cs]