Leveraging Vision-Language Pre-training for Human Activity Recognition in Still Images

Published: 2025 · Last Modified: 27 Dec 2025 · CoRR 2025 · License: CC BY-SA 4.0
Abstract: Recognising human activity from a single photograph enables indexing, safety, and assistive applications, yet still images lack the motion cues available in video. On 285 MSCOCO images labelled as walking, running, sitting, or standing, CNNs trained from scratch reached 41% accuracy, while fine-tuning the multimodal CLIP model raised accuracy to 76%. This suggests that contrastive vision-language pre-training substantially improves still-image action recognition, particularly when labelled data are scarce.
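The abstract does not specify the fine-tuning recipe. A minimal sketch of one common approach, assuming the Hugging Face `transformers` CLIP implementation: attach a small linear head to CLIP's image encoder and train with cross-entropy on the four action classes. The checkpoint name, head design, and hyperparameters below are illustrative assumptions, not the paper's reported setup.

```python
import torch
import torch.nn as nn
from transformers import CLIPModel, CLIPProcessor

CLASSES = ["walking", "running", "sitting", "standing"]  # classes from the abstract

# Illustrative checkpoint choice; the paper does not state which CLIP variant was used.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

class ClipActionClassifier(nn.Module):
    """CLIP image encoder with a linear head for four-way action classification."""
    def __init__(self, clip: CLIPModel, n_classes: int = len(CLASSES)):
        super().__init__()
        self.clip = clip
        self.head = nn.Linear(clip.config.projection_dim, n_classes)

    def forward(self, pixel_values: torch.Tensor) -> torch.Tensor:
        # Projected image embeddings from CLIP, unit-normalised as in contrastive training.
        feats = self.clip.get_image_features(pixel_values=pixel_values)
        feats = feats / feats.norm(dim=-1, keepdim=True)
        return self.head(feats)

clf = ClipActionClassifier(model)
# Fine-tune the head (and optionally the encoder) with standard cross-entropy;
# the learning rate here is a typical value for CLIP fine-tuning, not the paper's.
optimizer = torch.optim.AdamW(clf.parameters(), lr=1e-5)
loss_fn = nn.CrossEntropyLoss()
```

At inference, the processor handles resizing and normalisation; `photo.jpg` is a placeholder path:

```python
from PIL import Image

inputs = processor(images=Image.open("photo.jpg"), return_tensors="pt")
with torch.no_grad():
    logits = clf(inputs["pixel_values"])
print(CLASSES[logits.argmax(dim=-1).item()])
```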