Abstract: We present a new egocentric procedural error dataset
containing videos with various types of errors as well as
normal videos and propose a new framework for procedural
error detection using error-free training videos only. Our
framework consists of an action segmentation model and a
contrastive step prototype learning module to segment actions and learn useful features for error detection. Based
on the observation that interactions between hands and objects often inform action and error understanding, we propose to combine holistic frame features with relation features, which we learn by building a graph via active object detection followed by a Graph Convolutional Network.
To handle errors unseen during training, we use our contrastive step prototype learning to learn multiple prototypes
for each step, capturing variations of error-free step executions. At inference time, we use feature-prototype similarities for error detection. Through experiments on three datasets,
we show that our proposed framework outperforms state-of-the-art video anomaly detection methods for error detection
and provides smooth action and error predictions.
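The inference step described above (scoring frames by their similarity to learned error-free step prototypes) can be sketched as follows. This is a minimal illustrative sketch, not the authors' implementation: the function name, array shapes, the use of cosine similarity, and the threshold are all assumptions.

```python
import numpy as np

def error_scores(frame_feats, step_prototypes, predicted_steps):
    """Score each frame by dissimilarity to the prototypes of its predicted step.

    frame_feats:     (T, D) per-frame features
    step_prototypes: dict mapping step id -> (K, D) array of learned prototypes
    predicted_steps: (T,) step label per frame from the segmentation model
    Returns (T,) scores; higher = less like any error-free execution.
    """
    scores = np.empty(len(frame_feats))
    for t, (f, s) in enumerate(zip(frame_feats, predicted_steps)):
        protos = step_prototypes[s]  # (K, D) prototypes for this step
        # Cosine similarity between the frame feature and each prototype
        sims = protos @ f / (np.linalg.norm(protos, axis=1) * np.linalg.norm(f) + 1e-8)
        # A frame is anomalous only if it is far from ALL prototypes,
        # so take the maximum similarity before inverting.
        scores[t] = 1.0 - sims.max()
    return scores

# Toy usage with random features and a single step
rng = np.random.default_rng(0)
feats = rng.normal(size=(5, 8))
protos = {0: rng.normal(size=(3, 8))}
steps = np.zeros(5, dtype=int)
scores = error_scores(feats, protos, steps)
flags = scores > 1.0  # illustrative threshold; errors would be tuned on validation data
```

Multiple prototypes per step matter here: taking the maximum similarity means a frame is only flagged when it matches none of the observed error-free execution styles.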