Abstract: The application of Natural Language Processing (NLP) to specialized domains, such as the law, has recently received a surge of interest. Because many legal services rely on processing and analyzing large collections of documents, automating such tasks with NLP tools emerges as a key challenge. Many popular language models, such as BERT (Devlin et al., 2019) or RoBERTa (Liu et al., 2019), are general-purpose models, which limits their ability to process specialized legal terminology and syntax. In addition, legal documents may contain specialized vocabulary from other domains, such as medical terminology in personal injury text. Here, we propose LegalRelectra, a legal-domain language model trained on mixed-domain legal and medical corpora. We show that our model improves over general-domain and single-domain medical and legal language models when processing mixed-domain (personal injury) text. Our training architecture implements the ELECTRA framework, but utilizes Reformer instead of BERT for its generator and discriminator. We show that this improves the model's performance on processing long passages and results in better long-range text comprehension.
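The training recipe named in the abstract (ELECTRA-style replaced-token detection, with Reformer rather than BERT as the generator and discriminator) can be made concrete with a short sketch. The PyTorch code below is a minimal, illustrative rendition of the ELECTRA objective only, not the authors' implementation: a plain TransformerEncoder stands in for the Reformer encoders (Reformer is not in core PyTorch), and all names and sizes (TinyLM, VOCAB_SIZE, HIDDEN, MASK_ID, mlm_prob) are hypothetical.

    # Minimal sketch of ELECTRA-style replaced-token detection (RTD).
    # Illustrative only; stand-in modules replace the paper's Reformer
    # generator/discriminator, and all hyperparameters are hypothetical.
    import torch
    import torch.nn as nn

    VOCAB_SIZE, HIDDEN, MASK_ID = 30522, 256, 103  # hypothetical values

    class TinyLM(nn.Module):
        """Stand-in encoder with a per-token output head."""
        def __init__(self, out_dim):
            super().__init__()
            self.embed = nn.Embedding(VOCAB_SIZE, HIDDEN)
            self.encoder = nn.TransformerEncoder(
                nn.TransformerEncoderLayer(HIDDEN, nhead=4, batch_first=True),
                num_layers=2,
            )
            self.head = nn.Linear(HIDDEN, out_dim)

        def forward(self, ids):
            return self.head(self.encoder(self.embed(ids)))

    generator = TinyLM(out_dim=VOCAB_SIZE)  # small masked-LM generator
    discriminator = TinyLM(out_dim=1)       # per-token real/replaced classifier

    def electra_step(input_ids, mlm_prob=0.15):
        # 1) Mask a random subset of positions for the generator.
        mask = torch.rand(input_ids.shape) < mlm_prob
        masked = input_ids.masked_fill(mask, MASK_ID)

        # 2) Generator fills masked positions by sampling (not argmax),
        #    as in the original ELECTRA recipe.
        with torch.no_grad():
            logits = generator(masked)
            sampled = torch.distributions.Categorical(logits=logits).sample()
        corrupted = torch.where(mask, sampled, input_ids)

        # 3) Discriminator predicts, per token, whether it was replaced.
        #    Positions where the generator happened to sample the original
        #    token count as "original", matching ELECTRA.
        is_replaced = (corrupted != input_ids).float()
        rtd_logits = discriminator(corrupted).squeeze(-1)
        return nn.functional.binary_cross_entropy_with_logits(
            rtd_logits, is_replaced
        )

    loss = electra_step(torch.randint(0, VOCAB_SIZE, (2, 64)))

Note that the full ELECTRA recipe also trains the generator jointly with a masked-language-modeling loss and sums the two losses; the sketch shows only the discriminator's replaced-token-detection objective.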