Abstract: Conventional communication systems focus on recovering, at the receiver, the messages sent by the transmitter, by undoing the errors introduced by the channel. In semantic communication, the goal is instead to preserve the semantics (i.e., the meaning) of the intended message $X$. A well-known framework adopted in the literature for modelling semantic communication is task-oriented communication. There, an encoder (e.g., a trained neural network) generates a feature representation $Z$ to transmit across the channel in order to fulfill a certain task (e.g., classification with labels $Y$). This naturally leads to the study of the information bottleneck (IB) principle, which proposes reducing the mutual information $I(X; Z)$ (compression) while simultaneously maximizing $I(Z; Y)$ (fitting). Although this objective initially seems meaningful, we posit that both the standard task-oriented setting for studying semantic communication and the IB approach may not be optimal. In this paper, we first propose a novel paradigm of multi-task generalizable communication, where, at test time, the messages may originate from an unseen classification task different from those encountered during training. We argue intuitively that the multi-task generalizable setting is more feasible and relevant to real-world semantic communication than task-oriented communication. Next, we propose a Reconstruction Loss Aware (RLA) approach, which yields feature encodings that generalize better to unseen tasks. We empirically observe that, in the proposed paradigm, the IB principle is not optimal. We also demonstrate that RLA achieves 15%-20% higher accuracy than a benchmark task-oriented communication approach across diverse, non-overlapping unseen tasks.
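For concreteness, the compression-fitting trade-off referenced in the abstract is conventionally written as the IB Lagrangian; the sketch below states this standard form (the trade-off parameter $\beta$ is the usual notation in the IB literature and is an assumption here, not necessarily the symbol used in this paper):
$$\min_{p(z \mid x)} \; I(X; Z) \;-\; \beta \, I(Z; Y), \qquad \beta > 0,$$
where the first term encourages compressing $X$ into $Z$ and the second rewards $Z$ for retaining information predictive of the task labels $Y$.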