Automatic Generation of Safety-compliant Linear Temporal Logic via Large Language Model: A Self-supervised Framework

17 Sept 2025 (modified: 11 Feb 2026) · Submitted to ICLR 2026 · CC BY 4.0
Keywords: Linear Temporal Logic, Large Language Model, Language Inclusion Check, Automated Verification
Abstract: Converting high-level natural-language task descriptions into formal specifications such as Linear Temporal Logic (LTL) is essential for ensuring safety in cyber-physical systems (CPS). Existing work, however, optimizes only translation quality without explicitly verifying the output against safety constraints. We present AutoSafeLTL, a self-supervised, cloud-edge collaborative framework that automatically generates safety-compliant LTL specifications while preserving logical consistency and semantic fidelity. A lightweight, three-stage fine-tuned LLM on the edge performs real-time conversion from natural language to LTL specifications (NL2LTL), guaranteeing safety-critical latency and data locality. Two larger-capacity cloud-side agents then iteratively refine the result: (1) an LLM-as-an-Aligner matches atomic propositions to safety constraints, and (2) an LLM-as-a-Critic interprets counterexamples from the Inclusion Check to guide corrective regeneration. This collaborative architecture provides a safety-guaranteed alignment mechanism between high-level user intent and formally verifiable system behavior, demonstrating the framework's potential to advance AI alignment in safety-critical domains. Our approach achieves a 0% violation rate on multiple benchmarks, enabling trustworthy specification generation and verification for both AI and critical CPS applications.
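To make the abstract's Inclusion Check concrete: the idea is that every behavior permitted by a generated specification must also be permitted by the safety specification, and any violating behavior serves as a counterexample for corrective regeneration. The sketch below is purely illustrative and is not the paper's implementation (which would use Büchi-automata language containment over infinite words); it approximates the check with finite-trace LTL semantics and bounded exhaustive enumeration. All names (`holds`, `bounded_inclusion`) and the tuple-based formula encoding are hypothetical choices for this example.

```python
# Illustrative sketch (assumption: finite-trace LTL semantics, bounded
# enumeration) of an inclusion check between a candidate LTL spec and a
# safety spec. Not the paper's Buchi-automata-based Inclusion Check.
from itertools import product

def holds(f, trace, i=0):
    """Evaluate formula f (nested tuples) at position i of a finite trace.

    trace: list of sets of atomic propositions true at each step.
    Finite-trace semantics: X is false at the last step; G/F range
    over the remaining suffix.
    """
    op = f[0]
    if op == "ap":                       # atomic proposition
        return f[1] in trace[i]
    if op == "not":
        return not holds(f[1], trace, i)
    if op == "and":
        return holds(f[1], trace, i) and holds(f[2], trace, i)
    if op == "or":
        return holds(f[1], trace, i) or holds(f[2], trace, i)
    if op == "implies":
        return (not holds(f[1], trace, i)) or holds(f[2], trace, i)
    if op == "X":                        # next
        return i + 1 < len(trace) and holds(f[1], trace, i + 1)
    if op == "G":                        # globally on the suffix
        return all(holds(f[1], trace, j) for j in range(i, len(trace)))
    if op == "F":                        # eventually on the suffix
        return any(holds(f[1], trace, j) for j in range(i, len(trace)))
    raise ValueError(f"unknown operator {op!r}")

def bounded_inclusion(candidate, safety, props, horizon):
    """Return a trace satisfying `candidate` but violating `safety`
    (a counterexample for the Critic agent), or None if no such trace
    exists up to the given horizon."""
    letters = [frozenset(p for p, b in zip(props, bits) if b)
               for bits in product([0, 1], repeat=len(props))]
    for trace in product(letters, repeat=horizon):
        trace = list(trace)
        if holds(candidate, trace) and not holds(safety, trace):
            return trace
    return None
```

For example, against the safety spec G(req -> F ack), the candidate G(req) yields a counterexample (requests forever, never acknowledged), whereas G(req & ack) passes the bounded check. A production check would instead translate both formulas to automata and decide language containment exactly.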
Supplementary Material: zip
Primary Area: alignment, fairness, safety, privacy, and societal considerations
Submission Number: 9521