Abstract: In many domains, systems continuously generate streams of data during their execution,
including meaningful telemetry that can be used for tasks such as preemptive failure
detection. Deep learning models have been exploited for these tasks with increasing success, but they rarely
provide guarantees about their execution, a problem exacerbated by their lack of interpretability.
In many critical contexts, formal methods, which ensure the correct behavior of a system, are thus necessary.
However, specifying in advance all the relevant properties and building a complete model of the system
against which to check them is often out of reach in real-world scenarios. To overcome these limitations,
we design a framework that relies on monitoring, a lightweight runtime verification technique that does not
require an explicit model specification, and pairs it with machine learning. Its goal is to automatically derive
relevant properties, capturing undesired behaviors of the system under analysis, encoded as formulas of
Signal Temporal Logic (STL). Experiments performed on well-known benchmark datasets
show that the proposed framework can effectively anticipate critical system behaviors in an online
setting while providing human-interpretable results.
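As an illustration (a hypothetical property, not one taken from the reported experiments), a derived STL formula might take the form $\mathbf{G}(\mathit{temp} > 80 \rightarrow \mathbf{F}_{[0,10]}(\mathit{temp} < 60))$, stating that whenever the temperature signal exceeds 80, it must drop below 60 within the next 10 time units.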