Keywords: transformer, in-context learning, wireless communication
Abstract: By leveraging in-context learning (ICL), pretrained Transformers adapt to unseen tasks from example prompts without task-specific fine-tuning.
This adaptability has motivated their use in wireless communications, where ICL-based Transformers have shown strong performance on symbol detection.
However, deploying a Transformer solely for symbol detection is not cost-effective. Can we design a multi-task Transformer that, without significantly increasing inference overhead, unifies additional receiver modules within a single model?
In this work, we propose a multi-task ICL Transformer that treats the pilots within a coherence block as in-context prompts and jointly outputs the detected data symbol and an explicit channel frequency response (CFR).
Empirically, we find that activating the model's multi-task capability improves both training efficiency and receiver performance at the same model size, compared with an ICL-based Transformer that performs symbol detection alone.
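To make the prompt construction concrete, the sketch below shows one plausible way a multi-task ICL receiver of this kind could be wired up: pilot (transmitted, received) pairs from a coherence block serve as in-context tokens, the received data symbol is the query token, and two output heads read out the detected symbol and a CFR estimate. This is an illustrative assumption, not the authors' implementation; all layer sizes, the token packing, and the class/variable names (e.g. `MultiTaskICLReceiver`) are hypothetical.

```python
# Minimal sketch of a multi-task ICL receiver (assumed design, not the paper's code).
# Assumes a flat-fading per-subcarrier model where complex values are split into (re, im).
import torch
import torch.nn as nn

class MultiTaskICLReceiver(nn.Module):
    def __init__(self, d_model=64, n_heads=4, n_layers=4):
        super().__init__()
        # Each prompt token packs a (transmitted, received) pair as 4 real numbers;
        # the query token zeroes out the unknown transmitted data symbol.
        self.embed = nn.Linear(4, d_model)
        enc_layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(enc_layer, n_layers)
        self.symbol_head = nn.Linear(d_model, 2)  # detected data symbol (re, im)
        self.cfr_head = nn.Linear(d_model, 2)     # explicit CFR estimate (re, im)

    def forward(self, pilots_tx, pilots_rx, data_rx):
        # pilots_tx, pilots_rx: (batch, n_pilots, 2); data_rx: (batch, 2)
        context = torch.cat([pilots_tx, pilots_rx], dim=-1)              # pilot tokens
        query = torch.cat([torch.zeros_like(data_rx), data_rx], dim=-1).unsqueeze(1)
        tokens = self.embed(torch.cat([context, query], dim=1))
        hidden = self.encoder(tokens)[:, -1]                             # read out the query token
        return self.symbol_head(hidden), self.cfr_head(hidden)

# Usage: the pilots of one coherence block act as in-context examples for the query symbol.
model = MultiTaskICLReceiver()
pilots_tx = torch.randn(8, 4, 2)   # 4 pilot symbols per block (batch of 8 blocks)
pilots_rx = torch.randn(8, 4, 2)
data_rx = torch.randn(8, 2)
sym_hat, cfr_hat = model(pilots_tx, pilots_rx, data_rx)
```

Under this sketch, the multi-task behavior comes from training the two heads jointly (e.g. a detection loss plus a CFR regression loss), so adding the CFR output adds only a small linear head on top of the shared Transformer and does not significantly increase inference cost.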
Submission Number: 37