TAP: The Attention Patch for Cross-Modal Knowledge Transfer from Unlabeled Modality

TMLR Paper 2047 Authors

12 Jan 2024 (modified: 05 Apr 2024) · Under review for TMLR
Abstract: This paper considers a cross-modal learning framework in which the objective is to improve supervised learning in a primary modality using an unlabeled, unpaired secondary modality. Taking a probabilistic approach to missing-information estimation, we show that the extra information contained in the secondary modality can be estimated via Nadaraya-Watson (NW) kernel regression, which in turn can be expressed as a kernelized cross-attention module (under a linear transformation). This expression lays the foundation for The Attention Patch (TAP), a simple neural network add-on that can be trained to enable data-level knowledge transfer from the unlabeled modality. Extensive numerical experiments on real-world datasets show that TAP yields statistically significant improvements in generalization across different domains and different neural network architectures, making use of seemingly unusable unlabeled cross-modal data.
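The abstract's central identity, that NW kernel regression can be written as a kernelized cross-attention module, can be sketched in a few lines. The snippet below is a minimal illustration and not the paper's TAP module itself; the choice of Gaussian kernel, the `bandwidth` parameter, and the tensor names are assumptions for the sketch. With learned linear projections of the inputs (the "linear transformation" the abstract refers to), this reduces to a standard attention layer.

```python
import torch


def nw_cross_attention(queries, keys, values, bandwidth=1.0):
    """Nadaraya-Watson kernel regression with a Gaussian kernel,
    written as a kernelized cross-attention module (a sketch).

    queries: (n, d) primary-modality embeddings
    keys:    (m, d) secondary-modality embeddings (unlabeled, unpaired)
    values:  (m, p) information carried by the secondary modality
    """
    # Squared Euclidean distances between every query and every key.
    sq_dists = torch.cdist(queries, keys, p=2) ** 2            # (n, m)
    # Gaussian-kernel logits; the softmax normalizes them into the
    # NW weights w_ij = K(q_i, k_j) / sum_l K(q_i, k_l).
    attn = torch.softmax(-sq_dists / (2 * bandwidth ** 2), dim=-1)
    # NW estimate: a kernel-weighted average of the values, which is
    # exactly the form of a cross-attention output.
    return attn @ values                                       # (n, p)
```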
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Yan_Liu1
Submission Number: 2047