Abstract: As the physical and digital worlds converge with the rapid evolution of advanced technologies, the Internet of Things (IoT) is transitioning from a network of passive, rule-based nodes into an intelligent, adaptive system that understands, reasons, and responds to human intent. The future of IoT is foreseen as systems that not only listen but also understand, where human demands are seamlessly translated into machine actions and sensor data are transformed into meaningful insights. At the core of this vision are the significant advancements in large language models (LLMs), which bridge human language and machine language and support the fusion of multimodal data streams. In this article, we put forward a vision of how LLMs will be embedded into the fabric of IoT systems, represented by the concept of Large Perceptive Models (LPMs), in which users no longer configure devices manually but simply express their needs in natural language, leaving translation, orchestration, and real-time optimization to the LLMs, enabling use cases ranging from network resource allocation and environment mapping to energy optimization. Furthermore, we explore how LPMs can understand the non-trivial language of IoT signals, transforming radio-frequency (RF) data, environmental metrics, and sensor outputs into meaningful, proactive insights that users can readily comprehend. To realize this vision, we introduce an agentic LLM-IoT architecture in which specialized estimation, modeling, and solving agents collaborate to automate decision-making, optimize performance, and enhance system intelligence.
External IDs: doi:10.1109/mwc.2025.3613680