Keywords: On-Device LLM, Wi-Fi Roaming, Cross-Layer Optimization, Context-Aware Wi-Fi Roaming, PHY/MAC, Wireless Communications, In-Context Learning, Chain-of-Thought Prompting, Fine-Tuning, Post-Training, Parameter-Efficient Fine-Tuning, Quantization, Preference Optimization, Edge AI, LLM, Machine Learning
TL;DR: An on-device LLM adapts Wi-Fi roaming thresholds dynamically through context-aware reasoning, significantly reducing unnecessary handovers while balancing connection stability with signal strength, and outperforms legacy and RL-based methods.
Abstract: Roaming in Wireless LAN (Wi-Fi) is a critical yet challenging task for maintaining seamless connectivity in dynamic mobile environments. Conventional threshold-based or heuristic schemes often fail, producing either sticky (overly delayed) or excessive handovers. We introduce the first cross-layer use of an on-device large language model (LLM): high-level reasoning in the application layer issues real-time actions that are executed in the PHY/MAC stack. The LLM addresses two tasks: (i) context-aware AP selection, where structured prompts fuse environmental cues (e.g., location, time) to choose the best BSSID; and (ii) dynamic threshold adjustment, where the model adaptively decides when to roam. To satisfy the tight latency and resource budgets of edge hardware, we apply a suite of optimizations: chain-of-thought prompting, parameter-efficient fine-tuning, and quantization. Experiments on indoor and outdoor datasets show that our approach surpasses legacy heuristics and DRL baselines, achieving a strong balance between roaming stability and signal quality. These findings underscore the promise of application-layer LLM reasoning for lower-layer wireless control in future edge systems.
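To make the two tasks concrete, the following is a minimal sketch of the prompting flow, written as an assumption rather than the paper's actual implementation: the RoamingContext fields, the prompt wording, the JSON action schema, the -70 dBm fallback threshold, and the query_llm placeholder are all hypothetical illustrations.

```python
import json
from dataclasses import dataclass


# Hypothetical environmental snapshot; field names are illustrative,
# not taken from the paper.
@dataclass
class RoamingContext:
    current_bssid: str
    current_rssi_dbm: int
    candidate_aps: dict  # BSSID -> RSSI (dBm)
    location_hint: str   # e.g., "corridor, 3rd floor"
    local_time: str      # e.g., "14:32"


def build_prompt(ctx: RoamingContext) -> str:
    """Fuse PHY/MAC measurements and environmental cues into a structured
    prompt, asking the model to reason step by step (chain-of-thought)
    before emitting a machine-readable roaming decision."""
    candidates = "\n".join(
        f"- {bssid}: {rssi} dBm" for bssid, rssi in ctx.candidate_aps.items()
    )
    return (
        "You are a Wi-Fi roaming controller.\n"
        f"Current AP: {ctx.current_bssid} at {ctx.current_rssi_dbm} dBm\n"
        f"Candidate APs:\n{candidates}\n"
        f"Location: {ctx.location_hint}\nTime: {ctx.local_time}\n"
        "Think step by step about mobility, signal trend, and handover cost, "
        "then answer with JSON: "
        '{"roam": true/false, "target_bssid": "...", "rssi_threshold_dbm": int}'
    )


def parse_decision(reply: str) -> dict:
    """Extract the JSON decision from the model reply; fall back to a
    conservative 'stay' action if the reply is malformed."""
    try:
        start = reply.index("{")
        end = reply.rindex("}") + 1
        return json.loads(reply[start:end])
    except (ValueError, json.JSONDecodeError):
        return {"roam": False, "target_bssid": None, "rssi_threshold_dbm": -70}


if __name__ == "__main__":
    ctx = RoamingContext(
        current_bssid="aa:bb:cc:00:00:01",
        current_rssi_dbm=-74,
        candidate_aps={"aa:bb:cc:00:00:02": -58, "aa:bb:cc:00:00:03": -66},
        location_hint="corridor, 3rd floor",
        local_time="14:32",
    )
    prompt = build_prompt(ctx)
    # query_llm() stands in for whatever quantized, fine-tuned on-device
    # model is deployed; it is not part of the paper's code.
    # decision = parse_decision(query_llm(prompt))
    print(prompt)
```

In a deployment, the parsed decision would be handed down to the wireless driver, for example as a roam trigger toward the chosen BSSID or an adjusted RSSI threshold, which is where the application-layer reasoning meets the PHY/MAC control path.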
Submission Number: 27