TAP: The Attention Patch for Cross-Modal Knowledge Transfer from Unlabeled Modality

Published: 17 Jun 2024, Last Modified: 17 Jun 2024. Accepted by TMLR.
Abstract: This paper studies a cross-modal learning framework in which the objective is to enhance the performance of supervised learning in a primary modality using an unlabeled, unpaired secondary modality. Taking a probabilistic approach to missing-information estimation, we show that the extra information contained in the secondary modality can be estimated via Nadaraya-Watson (NW) kernel regression, which can in turn be expressed as a kernelized cross-attention module (under a linear transformation). This expression lays the foundation for The Attention Patch (TAP), a simple neural network add-on that can be trained to enable data-level knowledge transfer from the unlabeled modality. We provide extensive experiments on real-world datasets showing that TAP yields statistically significant improvements in generalization across different domains and different neural network architectures, making use of seemingly unusable unlabeled cross-modal data.
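The correspondence the abstract invokes can be sketched from standard definitions (a hedged reconstruction; the paper's exact kernel and transformations may differ). The NW regression estimate of values v_i indexed by keys z_i, evaluated at a query q, is

\[
\hat{m}(q) = \sum_{i=1}^{m} \frac{K(q, z_i)}{\sum_{j=1}^{m} K(q, z_j)}\, v_i,
\qquad
K(q, z) = \exp\!\big( (W_Q q)^\top (W_K z) / \sqrt{d} \big),
\]

and with the exponential kernel on linearly transformed inputs shown on the right, the weights become exactly a softmax, i.e. cross-attention with queries from the primary modality and keys/values from the secondary one.

A minimal PyTorch sketch of such an add-on might look as follows. The class name AttentionPatch, the memory-bank interface, and the residual injection into the primary features are illustrative assumptions, not the authors' exact architecture (see the linked repository for that).

```python
# Hypothetical sketch of a TAP-style add-on; names and wiring are
# assumptions, not the authors' exact implementation.
import torch
import torch.nn as nn
import torch.nn.functional as F

class AttentionPatch(nn.Module):
    """Kernelized cross-attention from primary-modality features to a
    bank of unlabeled secondary-modality samples (NW regression with an
    exponential kernel under learned linear transformations)."""

    def __init__(self, d_primary: int, d_secondary: int, d_attn: int = 64):
        super().__init__()
        self.q_proj = nn.Linear(d_primary, d_attn)       # queries from primary features
        self.k_proj = nn.Linear(d_secondary, d_attn)     # keys from secondary samples
        self.v_proj = nn.Linear(d_secondary, d_primary)  # values mapped to primary dim
        self.scale = d_attn ** -0.5

    def forward(self, h: torch.Tensor, z_bank: torch.Tensor) -> torch.Tensor:
        # h:      (batch, d_primary) primary-modality features
        # z_bank: (m, d_secondary)   unlabeled secondary-modality samples
        q = self.q_proj(h)            # (batch, d_attn)
        k = self.k_proj(z_bank)       # (m, d_attn)
        v = self.v_proj(z_bank)       # (m, d_primary)
        # Softmax over the bank = NW weights with an exponential kernel.
        w = F.softmax(q @ k.t() * self.scale, dim=-1)    # (batch, m)
        # Residual injection of the estimated secondary information.
        return h + w @ v

# Example usage with made-up dimensions:
# tap = AttentionPatch(d_primary=128, d_secondary=32)
# h = torch.randn(16, 128)    # primary-modality features
# z = torch.randn(500, 32)    # unlabeled secondary-modality bank
# out = tap(h, z)             # (16, 128) patched features
```

In this sketch the patched features out = tap(h, z) would replace h in the downstream supervised head, so the unlabeled secondary bank influences training only through the attention read-out.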
Submission Length: Regular submission (no more than 12 pages of main content)
Code: https://github.com/grittibetti/TAP
Assigned Action Editor: ~Yan_Liu1
Submission Number: 2047