A Proxy for Assessing the Automatic Encodability of Regulation

Published: 01 Jan 2024 · Last Modified: 13 Oct 2024 · CSLAW 2024 · CC BY-SA 4.0
Abstract: Artificial Intelligence (AI) is already changing the way law is applied, thereby affecting the very core of society. While there is increasing interest in the possibility of creating machines that automatically process, adapt, and enforce regulation, cross-disciplinary research between AI and law has not yet determined to what extent such legally intelligent machines can and should be built. This article addresses that gap by providing a first attempt to quantify the automatic encodability of regulation. To do so, we propose an algorithm that first gauges how complex a sentence is for machines, leveraging natural language processing (NLP) techniques that simplify sentences for open relation extraction systems; the algorithm then assesses how complex individual words are for machines by attempting to link terms to supposed functional requirements, a task that involves finding matching concepts in public ontologies and controlled vocabularies. We apply our methodology to several pieces of legislation: some that have already been successfully transformed into machine-processable form by hand, and others that are assumed to be less encodable because of the many open-textured terms they contain. The analysis matches our expectation that regulation judged highly ambiguous is less amenable to automatic transformation into machine-processable form. This research is relevant to the AI community, the legal community, and interdisciplinary teams, as it enables the nuanced discussion the field needs on the normative challenges that the automatic processing, adaptation, and enforcement of regulation is already creating and will continue to trigger.
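The two-part scoring idea described above can be sketched in a few lines of Python. This is a hypothetical illustration, not the authors' implementation: the paper's pipeline uses real NLP sentence simplification and lookups against public ontologies, whereas here the sentence-complexity signal is a toy clause-marker heuristic, the ontology is a hand-made word set (`TOY_ONTOLOGY`), and the function names and the equal-weight combination are invented for this sketch.

```python
# Hypothetical sketch of an "automatic encodability" proxy.
# The real method simplifies sentences for open relation extraction and
# links terms to public ontologies; both steps are mocked with toy
# heuristics here, so only the overall structure is meaningful.
import re

# Toy stand-in for a public ontology / controlled vocabulary (assumption).
TOY_ONTOLOGY = {"vehicle", "speed", "limit", "driver", "road", "permit"}

# Tokens that often introduce subordinate clauses in legal prose (assumption).
CLAUSE_MARKERS = {"which", "that", "unless", "provided",
                  "notwithstanding", "if", "where"}


def sentence_complexity(sentence: str) -> float:
    """Crude proxy for how hard a sentence is to simplify:
    fraction of tokens that are clause-introducing markers."""
    tokens = re.findall(r"[a-z]+", sentence.lower())
    if not tokens:
        return 1.0
    return sum(t in CLAUSE_MARKERS for t in tokens) / len(tokens)


def word_complexity(sentence: str) -> float:
    """Crude proxy for term ambiguity: fraction of content words
    (length > 3) that cannot be linked to the toy ontology."""
    tokens = [t for t in re.findall(r"[a-z]+", sentence.lower()) if len(t) > 3]
    if not tokens:
        return 1.0
    return sum(t not in TOY_ONTOLOGY for t in tokens) / len(tokens)


def encodability_score(sentence: str) -> float:
    """Combine both signals into a 0..1 score; higher means the
    sentence looks easier to encode automatically (equal weights
    are an arbitrary choice for this sketch)."""
    return 1.0 - 0.5 * (sentence_complexity(sentence) + word_complexity(sentence))
```

Under this toy scoring, a concrete rule such as "The speed limit on the road is fixed." scores higher than an open-textured one such as "Conduct which is unreasonable shall be sanctioned unless justified.", mirroring the paper's expectation that high-ambiguity regulation is less encodable.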