Learning from a Few Shots: Data-efficient Cervical Vertebral Maturation Assessment

Published: 27 Mar 2025 · Last Modified: 01 May 2025 · MIDL 2025 Poster · CC BY 4.0
Keywords: Few-shot Learning, Transfer Learning, Orthodontics, CVM Assessment
TL;DR: Data-efficient Deep Learning for CVM Assessment
Abstract: The timing of treatment is a crucial decision in orthodontics. Initiating treatment during the appropriate growth phase leads to optimal patient outcomes and can prevent prolonged treatment durations. The most commonly used method for classifying growth phases is cervical vertebral maturation (CVM) assessment, which categorizes CVM into six stages based on the shape and size of the cervical vertebrae. Because manual CVM analysis is complex, purely visual assessment often falls short in performance. Deep learning methods can assist physicians in classifying CVM stages, thus improving orthodontic workflows and treatments. However, a significant challenge in deep learning-based CVM assessment is the limited dataset volume, resulting from difficulties in data collection and annotation. While small training datasets can greatly hinder a model's generalization performance, research on data-efficient training methods for CVM assessment is still lacking. To the best of our knowledge, this paper is the first to evaluate the potential of few-shot learning and in-domain transfer learning for CVM assessment. Specifically, we investigate the architectures ResNet18 and MedSam-2D. Few-shot learning enhances classification performance by up to 9%. Additionally, in-domain pre-training (using chest X-ray data) results in a significant performance increase of up to 4%.
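As a rough illustration of the in-domain transfer-learning setup described in the abstract (not the authors' exact pipeline), the sketch below fine-tunes a ResNet18 backbone, assumed to have been pre-trained on a chest X-ray dataset, for six-class CVM staging. The checkpoint path, data loader, and hyperparameters are hypothetical placeholders.

```python
import torch
import torch.nn as nn
from torchvision.models import resnet18

NUM_CVM_STAGES = 6  # CVM stages CS1-CS6

# Start from an ImageNet-initialized ResNet18, then (hypothetically) load
# weights obtained from in-domain pre-training on chest X-rays.
model = resnet18(weights="IMAGENET1K_V1")
chest_xray_ckpt = torch.load("chest_xray_pretrained_resnet18.pt")  # hypothetical checkpoint
model.load_state_dict(chest_xray_ckpt, strict=False)

# Replace the classification head for the six CVM stages.
model.fc = nn.Linear(model.fc.in_features, NUM_CVM_STAGES)

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

def fine_tune(model, loader, epochs=10, device="cuda"):
    """Standard supervised fine-tuning on a (small) CVM dataset."""
    model.to(device).train()
    for _ in range(epochs):
        for images, labels in loader:  # cervical-vertebra crops and stage labels
            images, labels = images.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(images), labels)
            loss.backward()
            optimizer.step()
    return model
```

The sketch only shows conventional fine-tuning after in-domain pre-training; the few-shot learning component reported in the paper (up to 9% improvement) would require an episodic or metric-learning setup on top of such a backbone.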
Primary Subject Area: Learning with Noisy Labels and Limited Data
Secondary Subject Area: Application: Dermatology
Paper Type: Validation or Application
Registration Requirement: Yes
Submission Number: 171
