MILCA: Multiple Instance Learning using Counting and Attention

24 Sept 2024 (modified: 05 Feb 2025) · Submitted to ICLR 2025 · CC BY 4.0
Keywords: Multiple Instance Learning, counting, attention
TL;DR: We show that simple feature counting combined with an attention model outperforms all current state-of-the-art Multiple Instance Learning algorithms.
Abstract: In Multiple Instance Learning (MIL), a bag consists of instances, and the label is assigned to the whole bag, with no information on the labels of the individual instances. The leading approaches for MIL are Embedded Space (ES) solutions, where the full bag is embedded into a vector space. While very complex models have been constructed for MIL classification tasks, we show that often some features are associated with a class, and a simple counting/summing algorithm achieves accuracy similar to or better than current solutions. This can be improved in some cases by weighting the selected features using a fully connected network that predicts the coefficient of each feature. However, a simple relative contribution of each feature, where the sum of the coefficients is normalized to 1, fails to count features. Thus, instead, we replace the softmax with a projection of the coefficients to [-1,1] or [0,1] that does not limit their sum. This allows the model to count features. The resulting algorithm - MILCA (Multiple Instance Learning using Counting and Attention) - is applied to multiple previous and new real-world MIL tasks, as well as to recovering the host disease history from sequenced T Cell Receptor repertoires. In most cases, MILCA is significantly more accurate and far more efficient than currently used MIL algorithms, with a 3% higher accuracy than the current SOTA on average. To summarize, in MIL classification tasks, where the number of features is often large compared to the number of bags, complex models are typically no better than a weighted sum of informative features. The code for MILCA is available at: github.com/submissionanonymous6/MILCA
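The counting argument in the abstract can be illustrated with a minimal NumPy sketch (this is not the authors' implementation; the raw scores stand in for the per-feature coefficients that MILCA predicts with a fully connected network). Softmax-normalized weights sum to 1, so a bag with one informative instance and a bag with three pool to the same value; an unnormalized sigmoid gate in [0,1] lets the pooled value grow with the count:

```python
import numpy as np

def softmax(s):
    e = np.exp(s - s.max())
    return e / e.sum()

def sigmoid(s):
    return 1.0 / (1.0 + np.exp(-s))

# Two toy bags: the informative feature occurs once vs. three times,
# each occurrence receiving the same raw score.
bag_small = np.array([2.0])
bag_large = np.array([2.0, 2.0, 2.0])

# Softmax pooling: weights are forced to sum to 1,
# so the pooled value is identical for both bags (count is lost).
pooled_softmax_small = (softmax(bag_small) * bag_small).sum()
pooled_softmax_large = (softmax(bag_large) * bag_large).sum()

# Sigmoid gating: each coefficient lies in [0, 1] but the sum is
# unconstrained, so the pooled value increases with the count.
pooled_sig_small = (sigmoid(bag_small) * bag_small).sum()
pooled_sig_large = (sigmoid(bag_large) * bag_large).sum()

print(pooled_softmax_small, pooled_softmax_large)  # equal
print(pooled_sig_small, pooled_sig_large)          # larger bag pools higher
```

The same reasoning applies to a tanh projection onto [-1,1], which additionally lets a feature contribute with a negative sign.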
Primary Area: other topics in machine learning (i.e., none of the above)
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: I certify that this submission complies with the submission instructions as described on https://iclr.cc/Conferences/2025/AuthorGuide.
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Submission Number: 3506