Highlights
• We developed two distinct strategies and two corresponding algorithms that address challenges in applying adversarial perturbation algorithms to multiple instance learning (MIL), and we verified their efficacy across diverse input data scenarios.
• The resulting adversarial bags and the proposed methods can serve as valuable knowledge or prior experience for future learners: MI-CAP generates perturbations that remain unaffected by other bags, and the perturbations generated by MI-UAP can be conveniently stored as part of a knowledge base.
• Our experiments demonstrate the generalizability of these perturbations, showing that perturbations generated for one neural network can also fool other networks. We further propose a straightforward strategy to mitigate the impact of such perturbations.