Multi-Modal Integrated Sensing and Communication in Internet of Things With Large Language Models

Published: 2025, Last Modified: 11 Nov 2025. IEEE Internet of Things Magazine, 2025. License: CC BY-SA 4.0
Abstract: Integrated sensing and communication (ISAC) has emerged as a fundamental technology underpinning the advancement of the Internet of Things (IoT). Nonetheless, conventional single-modal ISAC systems predominantly depend on radio frequency radar for environmental awareness, and their performance in complex, dynamic environments is inadequate due to the limited perceptual capability of a single modality. To address this shortcoming, multi-modal ISAC (M-ISAC) enhances the comprehensiveness and precision of environmental information by fusing data from several sensors, including radar, Light Detection and Ranging (LiDAR), cameras, and the Global Positioning System (GPS). In this paper, we first delineate the conventional application scenarios of M-ISAC within the IoT and review the present state of traditional signal processing techniques and artificial intelligence (AI) in M-ISAC. Subsequently, we present the architecture of large language models (LLMs) and their potential to improve the efficacy of M-ISAC. Finally, we discuss emerging challenges and future research directions for LLM-driven M-ISAC systems.