CUAD: An Expert-Annotated NLP Dataset for Legal Contract Review

02 Jun 2021, 20:13 (modified: 08 Nov 2021, 22:33)
NeurIPS 2021 Track on Datasets and Benchmarks, Round 1 Submission
Keywords: law, legal nlp
TL;DR: NLP for law is in its infancy due to a lack of training data. To address this, we created a large dataset for contract review. The dataset would have cost over $2,000,000 without volunteer legal experts.
Abstract: Many specialized domains remain untouched by deep learning, as large labeled datasets require expensive expert annotators. We address this bottleneck within the legal domain by introducing the Contract Understanding Atticus Dataset (CUAD), a new dataset for legal contract review. CUAD was created with dozens of legal experts from The Atticus Project and consists of over 13,000 annotations. The task is to highlight salient portions of a contract that are important for a human to review. We find that Transformer models have nascent performance, but that this performance is strongly influenced by model design and training dataset size. Despite these promising results, there is still substantial room for improvement. As one of the only large, specialized NLP benchmarks annotated by experts, CUAD can serve as a challenging research benchmark for the broader NLP community.
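The abstract frames contract review as highlighting salient spans of a contract for human review. A minimal sketch of that framing, assuming a simple character-offset annotation layout (the class names, label names, and the Jaccard-overlap metric below are illustrative, not the official CUAD schema or evaluation):

```python
# Illustrative sketch: contract review as span extraction.
# An expert annotation marks a salient portion of the contract text
# by character offsets, tagged with a clause category.
from dataclasses import dataclass


@dataclass
class Annotation:
    category: str  # e.g. "Governing Law" (label name is illustrative)
    start: int     # character offset where the highlighted span begins
    end: int       # character offset where it ends (exclusive)


def highlight(contract: str, annotations: list[Annotation]) -> list[tuple[str, str]]:
    """Return (category, text) pairs for each annotated span."""
    return [(a.category, contract[a.start:a.end]) for a in annotations]


def span_jaccard(pred: tuple[int, int], gold: tuple[int, int]) -> float:
    """Character-level Jaccard overlap between a predicted span and a
    gold span; a simple stand-in for a span-matching metric."""
    inter = max(0, min(pred[1], gold[1]) - max(pred[0], gold[0]))
    union = (pred[1] - pred[0]) + (gold[1] - gold[0]) - inter
    return inter / union if union else 0.0


contract = "This Agreement shall be governed by the laws of Delaware."
gold = [Annotation("Governing Law", 24, 57)]
print(highlight(contract, gold))
print(round(span_jaccard((20, 57), (24, 57)), 3))
```

A model trained on such annotations would predict spans rather than read them from labels, but the input/output shape of the task is the same.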
Supplementary Material: zip