Keywords: ECG, Expert Over-read, Self-training, Contrastive learning, Clinical AI
TL;DR: Training ECG models on cardiologist over-read labels, and scaling that supervision via self-training and NegCLIP, yields more accurate and clinically reliable interpretations than machine-read labels.
Abstract: Automated machine-read ECG interpretations are widely used in clinical practice but often unreliable, leading to systematic diagnostic errors. This work investigates how training with cardiologist over-reads affects model accuracy and clinical reliability. Using a large paired corpus of over two million ECGs containing both machine and expert interpretations, we evaluate three learning paradigms: (i) supervised learning on expert over-read labels, (ii) self-training that extends expert supervision to public ECGs, and (iii) multimodal contrastive learning with CLIP and NegCLIP. Across all settings, models trained with expert over-read data consistently outperform those trained on machine-read labels, especially for rare but clinically important conditions. Self-training and NegCLIP further demonstrate scalable strategies for propagating expert knowledge beyond labeled datasets. These findings highlight the essential role of expert over-reads in developing trustworthy and clinically aligned ECG AI systems.
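To make the contrastive paradigm concrete, below is a minimal PyTorch sketch of a CLIP-style objective for ECG–report pairs and a NegCLIP-style variant. The function names, tensor shapes, and the idea of appending hard-negative report embeddings to the candidate set are illustrative assumptions, not the paper's actual implementation.

```python
import torch
import torch.nn.functional as F

def clip_loss(ecg_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired ECG/report embeddings.

    ecg_emb, text_emb: (batch, dim) L2-normalized outputs of hypothetical
    ECG and text encoders; matched pairs share a row index.
    """
    logits = ecg_emb @ text_emb.t() / temperature  # (batch, batch) similarities
    targets = torch.arange(logits.size(0), device=logits.device)
    # Positives lie on the diagonal; contrast in both directions.
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits.t(), targets)) / 2

def negclip_loss(ecg_emb, text_emb, hard_neg_emb, temperature=0.07):
    """NegCLIP-style variant: hard-negative report embeddings (e.g. reports
    with swapped or perturbed findings) are appended as extra candidates
    that the ECG embedding must learn to reject.
    """
    candidates = torch.cat([text_emb, hard_neg_emb], dim=0)  # (2*batch, dim)
    logits = ecg_emb @ candidates.t() / temperature          # (batch, 2*batch)
    targets = torch.arange(ecg_emb.size(0), device=logits.device)
    # Text-to-ECG direction contrasts only the true reports.
    logits_t2e = text_emb @ ecg_emb.t() / temperature
    return (F.cross_entropy(logits, targets)
            + F.cross_entropy(logits_t2e, targets)) / 2

# Toy usage with random embeddings standing in for encoder outputs.
batch, dim = 8, 256
ecg = F.normalize(torch.randn(batch, dim), dim=-1)
rep = F.normalize(torch.randn(batch, dim), dim=-1)
neg = F.normalize(torch.randn(batch, dim), dim=-1)
print(clip_loss(ecg, rep).item(), negclip_loss(ecg, rep, neg).item())
```

The NegCLIP variant differs from plain CLIP only in enlarging the text candidate set with hard negatives, which is what forces the model to attend to fine-grained differences between expert interpretations rather than coarse pairing cues.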
Primary Subject Area: Application: Cardiology
Secondary Subject Area: Detection and Diagnosis
Registration Requirement: Yes
Visa & Travel: Yes
Read CFP & Author Instructions: Yes
Originality Policy: Yes
Single-blind & Not Under Review Elsewhere: Yes
LLM Policy: Yes
Submission Number: 7