SlotSAM: Bootstrap Segmentation Foundation Model under Real-world Shifts via Object-Centric Learning
Foundation models have made remarkable strides toward zero-shot and few-shot generalization, leveraging prompt engineering to mimic the problem-solving approach of human intelligence. However, foundation models such as Segment Anything still struggle under real-world shifts. One such shift is distribution shift, i.e., out-of-distribution data such as camouflaged and medical images. Another is the inconsistency of prompting strategies between fine-tuning and testing, which degrades performance. We draw inspiration from human intelligence, particularly the way individuals in unfamiliar environments decompose a scene into components to determine the position or boundary of each one. To this end, we introduce SlotSAM, a method that reconstructs features from the encoder in a self-supervised manner to create object-centric representations. These representations are then integrated into the foundation model, bolstering its object-level perceptual capabilities while reducing the impact of distribution-related variables. The beauty of SlotSAM lies in its simplicity and adaptability to various tasks, making it a versatile solution that significantly enhances the generalization abilities of foundation models. Through limited parameter fine-tuning in a bootstrap manner, our approach paves the way for improved generalization in novel environments.
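To make the idea concrete, below is a minimal sketch of how object-centric representations can be obtained from encoder features via a Slot Attention-style grouping step, followed by a feature-reconstruction objective. This is an illustrative simplification, not the authors' implementation: the function names, the replacement of the usual GRU slot update with a plain attention-weighted mean, and the pluggable `decoder` are all assumptions made for brevity.

```python
import numpy as np

def softmax(x, axis):
    # Numerically stable softmax along the given axis.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def slot_attention(features, num_slots=4, num_iters=3, seed=0):
    """Minimal Slot Attention-style grouping over a set of encoder
    features of shape (n_tokens, d). Returns slot vectors of shape
    (num_slots, d). The learned GRU update from the original method
    is replaced here by a plain attention-weighted mean for brevity."""
    rng = np.random.default_rng(seed)
    n, d = features.shape
    slots = rng.normal(size=(num_slots, d))
    for _ in range(num_iters):
        # Slots compete for input tokens: softmax over the slot axis.
        attn = softmax(features @ slots.T / np.sqrt(d), axis=1)   # (n, k)
        # Normalize per slot so each slot takes a weighted mean of tokens.
        attn = attn / (attn.sum(axis=0, keepdims=True) + 1e-8)
        slots = attn.T @ features                                 # (k, d)
    return slots

def reconstruction_loss(features, slots, decoder):
    """Self-supervised objective: decode the slots back to the feature
    grid and score the result with mean squared error."""
    recon = decoder(slots)
    return float(np.mean((recon - features) ** 2))
```

In this sketch the slots are trained (in a full implementation, end to end) so that decoding them reproduces the frozen encoder's features; the resulting object-level representations can then be fed to the foundation model as an additional signal.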