An Asset Foundation Model for Industrial Asset Performance Management

ICLR 2026 Conference Submission 21140 Authors

19 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Asset Foundation Model, Foundation Model for Timeseries, Industrial Asset Performance Management, Generative AI for Operations, Interpretable Transformer Architecture
TL;DR: We propose a foundation model for high-value industrial asset management that models sensor signals, critical events, and alarms with an interpretable transformer architecture, enabling multiple end applications for industrial operations and planning.
Abstract: We introduce the Asset Foundation Model (AFM), a generative framework for asset performance management (APM) spanning high-value industrial assets and manufacturing processes. AFM applies across sectors such as energy, chemicals, manufacturing, and utilities by leveraging rich time-series data and event streams to provide a robust basis for next-generation APM solutions. A shared transformer backbone with lightweight heads supports forecasting, anomaly detection, and event querying. The model is pretrained on operational and simulator corpora, then fine-tuned on asset-specific histories for minimal-effort adaptation, using per-sensor discrete tokenization for robustness. Beyond sensors, the AFM incorporates alarms, set-point changes, and maintenance logs via event tokens, enabling time-aligned “what/when” queries and high-value applications such as root-cause triage, alarm suppression, and maintenance planning. In representative field deployments (e.g., electric submersible pumps and compressors), the AFM outperforms prior approaches, delivers earlier warnings, and reduces false-alarm minutes. Operator-oriented explanations based on attention rollout and integrated gradients highlight which sensors and events drove each alert, while natural-language querying allows experts to “talk to the data.” Prediction intervals, mapped from discrete token distributions to continuous values and calibrated with isotonic regression, support risk-aware thresholds. On the theory side, we prove closed-form bounds on quantization error and a Lipschitz stability result for discretization noise through the encoder, justifying sample-efficient adaptation with frozen backbones. Public benchmarks corroborate competitive accuracy and calibrated coverage. The result is a versatile, scalable, and interpretable foundational framework that reduces the need for bespoke per-asset models.
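The per-sensor discrete tokenization mentioned in the abstract can be illustrated with quantile binning: each sensor gets its own bin edges fit on its historical readings, and continuous values are mapped to token ids. This is a minimal sketch of one plausible scheme; the paper does not specify its exact tokenizer, and `fit_bins`/`tokenize` are hypothetical names.

```python
import numpy as np

def fit_bins(signal, n_bins=32):
    """Fit per-sensor quantile bin edges (hypothetical helper; the paper's
    exact tokenizer is not specified). Returns the interior edges needed
    by np.digitize, so tokens fall in [0, n_bins)."""
    qs = np.linspace(0.0, 1.0, n_bins + 1)
    edges = np.quantile(signal, qs)
    return edges[1:-1]  # drop the 0% and 100% quantiles

def tokenize(signal, edges):
    """Map continuous readings to discrete token ids."""
    return np.digitize(signal, edges)

# Synthetic temperature sensor stream for illustration.
rng = np.random.default_rng(0)
temp = rng.normal(80.0, 5.0, size=10_000)
edges = fit_bins(temp, n_bins=32)
tokens = tokenize(temp, edges)
```

Quantile (rather than uniform-width) bins keep each token roughly equally populated, which is a common robustness choice when sensor ranges vary across assets.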
This is among the first foundational generative AI modeling efforts in the industrial domain, proposing a versatile foundation model with significant business impact on industrial asset management.
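The abstract's isotonic calibration of prediction intervals can be sketched with the pool-adjacent-violators algorithm (PAVA), the standard solver for isotonic regression: noisy empirical coverage rates at increasing nominal levels are smoothed into a monotone calibration curve. The `pava` helper below is a hypothetical illustration, not the paper's implementation.

```python
import numpy as np

def pava(y):
    """Pool-adjacent-violators: non-decreasing isotonic fit to y.
    Maintains a stack of (mean, count) blocks, merging any block
    whose mean drops below its predecessor's."""
    out = []  # stack of [block_mean, block_count]
    for v in np.asarray(y, dtype=float):
        out.append([v, 1])
        while len(out) > 1 and out[-2][0] > out[-1][0]:
            m2, c2 = out.pop()
            m1, c1 = out.pop()
            out.append([(m1 * c1 + m2 * c2) / (c1 + c2), c1 + c2])
    return np.concatenate([[m] * c for m, c in out])

# Hypothetical example: observed interval coverage at nominal levels.
nominal = np.array([0.5, 0.6, 0.7, 0.8, 0.9])
empirical = np.array([0.48, 0.63, 0.61, 0.82, 0.88])  # noisy, non-monotone
calibrated = pava(empirical)
# calibrated -> [0.48, 0.62, 0.62, 0.82, 0.88]
```

Enforcing monotonicity guarantees that requesting a higher nominal coverage never yields a lower calibrated one, which is what makes the resulting thresholds usable for risk-aware alerting.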
Primary Area: foundation or frontier models, including LLMs
Submission Number: 21140