CSMAE: Cataract Surgical Masked Autoencoder (MAE) Based Pre-Training

Published: 01 Jan 2025, Last Modified: 11 Nov 2025 · ISBI 2025 · CC BY-SA 4.0
Abstract: Automated analysis of surgical videos is crucial for improving surgical training, workflow optimization, and postoperative assessment. We introduce CSMAE, a Masked Autoencoder (MAE)-based pre-training approach developed specifically for cataract surgery video analysis, in which tokens are selected for masking based on their spatiotemporal importance rather than at random. We created a large dataset of cataract surgery videos to improve the model's learning efficiency and strengthen its robustness in low-data regimes. Our pre-trained model can be easily adapted to specific downstream tasks via fine-tuning, serving as a robust backbone for further analysis. Through rigorous testing on the downstream step-recognition task on two cataract surgery video datasets, D99 and Cataract-101, our approach surpasses current state-of-the-art self-supervised pre-training and adapter-based transfer learning methods by a significant margin. This advancement not only demonstrates the potential of our MAE-based pre-training in the field of surgical video analysis but also sets a new benchmark for future research.
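The core idea of importance-guided masking can be sketched as follows. This is a minimal illustrative example, not the paper's actual implementation: it assumes a per-token importance score is already available (the paper computes spatiotemporal importance; how those scores are derived is not reproduced here), and it adopts one plausible policy of masking the highest-scoring tokens, whereas CSMAE's exact selection rule may differ.

```python
import numpy as np

def importance_masking(importance, mask_ratio=0.75):
    """Select tokens to mask by importance score instead of uniformly at random.

    importance : 1-D array of per-token spatiotemporal importance scores
                 (hypothetical input; the paper's scoring method is not shown here).
    mask_ratio : fraction of tokens to mask, as in standard MAE pre-training.

    Returns (masked_indices, kept_indices).
    """
    n = importance.shape[0]
    n_mask = int(n * mask_ratio)
    order = np.argsort(importance)[::-1]  # sort token indices, most important first
    masked = order[:n_mask]               # mask the most informative tokens
    kept = order[n_mask:]                 # the encoder sees only the kept tokens
    return masked, kept

# Toy example with 6 tokens and a 50% mask ratio.
scores = np.array([0.1, 0.9, 0.4, 0.7, 0.2, 0.8])
masked, kept = importance_masking(scores, mask_ratio=0.5)
# masked -> tokens 1, 5, 3 (the three highest scores)
```

In standard MAE, the masked set is drawn uniformly at random; replacing that draw with a score-based selection is the only change sketched here, so the rest of the MAE encoder–decoder pipeline is unaffected.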