Is Machine Learning Model Checking Privacy Preserving?

Published: 01 Jan 2024 · Last Modified: 15 Jan 2025 · ISoLA (2) 2024 · CC BY-SA 4.0
Abstract: Model checking, which formally verifies whether a system exhibits a certain behaviour or property, is typically tackled by algorithms that require knowledge of the system under analysis. To address this drawback, machine learning model checking has been proposed as a powerful approach that casts the model checking problem as an optimization problem, in which a predictor is learnt in a continuous latent space capturing the semantics of formulae. More specifically, a kernel for Signal Temporal Logic (STL) is introduced, so that features of specifications are extracted automatically by leveraging the kernel trick. This makes it possible to verify a new formula without access to a (generative) model of the system, using only a given set of formulae and their satisfaction values, potentially yielding a privacy-preserving method for querying specifications of a system without granting access to it. This paper investigates the feasibility of this approach by quantifying how much information about the checked system leaks through machine learning model checking. The analysis is carried out for STL under different training regimes.
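As a rough illustration of the learning setup sketched in the abstract, the following Python snippet fits a kernel ridge regressor on a Gram matrix of pairwise formula kernel evaluations and predicts the satisfaction value of an unseen formula from its kernel row alone, i.e. without querying a model of the system. This is a minimal sketch under stated assumptions: the actual STL kernel from the paper is not reproduced here, a random positive semi-definite matrix stands in for it, and all names and values are illustrative rather than taken from the paper.

```python
import numpy as np

# Hypothetical sketch of machine learning model checking:
# given a Gram matrix K of an STL kernel evaluated on pairs of
# training formulae and their satisfaction values y, fit a kernel
# ridge regressor and predict the satisfaction value of a new
# formula from its kernel evaluations against the training set.

rng = np.random.default_rng(0)

n = 50                                   # number of training formulae
A = rng.standard_normal((n, n))
K = A @ A.T / n                          # stand-in PSD Gram matrix k(phi_i, phi_j)
y = rng.uniform(-1.0, 1.0, size=n)       # stand-in satisfaction values

lam = 1e-2                               # ridge regularisation strength
alpha = np.linalg.solve(K + lam * np.eye(n), y)   # dual coefficients

k_new = K[:, 0]                          # kernel row of a "new" formula vs. training formulae
y_hat = alpha @ k_new                    # predicted satisfaction value
print(f"predicted satisfaction: {y_hat:.3f}")
```

The privacy question the paper raises is visible even in this toy setup: the dual coefficients and the training satisfaction values together encode information about the system's behaviour, so granting query access to the predictor may leak more about the system than intended.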