Visual Sync: Multi‑Camera Synchronization via Cross‑View Object Motion

Published: 18 Sept 2025 · Last Modified: 29 Oct 2025 · NeurIPS 2025 poster · CC BY 4.0
Keywords: camera synchronization, video motion analysis, scene understanding
TL;DR: multi-camera synchronization in the wild
Abstract: Today, people can easily record memorable moments, such as concerts, sports events, lectures, family gatherings, and birthday parties, with multiple consumer cameras. However, synchronizing these cross‑camera streams remains challenging. Existing methods assume controlled settings, specific targets, manual correction, or costly hardware. We present VisualSync, an optimization framework based on multi‑view dynamics that aligns unposed, unsynchronized videos with millisecond‑level accuracy. Our key insight is that any moving 3D point, when co‑visible in two cameras, obeys epipolar constraints once the streams are properly synchronized. To exploit this, VisualSync leverages off‑the‑shelf 3D reconstruction, feature matching, and dense tracking to extract tracklets, relative poses, and cross‑view correspondences. It then jointly minimizes the epipolar error to estimate each camera’s time offset. Experiments on four diverse, challenging datasets show that VisualSync outperforms baseline methods, achieving an average synchronization error below 130 ms.
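The core idea in the abstract, that a moving point seen in two synchronized cameras must satisfy the epipolar constraint, can be illustrated with a small sketch. Everything below is a hypothetical toy setup, not the paper's implementation: it uses synthetic tracklets, a known fundamental matrix, and a simple grid search over candidate offsets, whereas VisualSync jointly optimizes offsets using off-the-shelf reconstruction and tracking.

```python
import numpy as np

def skew(v):
    """Cross-product matrix [v]_x."""
    return np.array([[0, -v[2], v[1]],
                     [v[2], 0, -v[0]],
                     [-v[1], v[0], 0]])

def epipolar_error(F, x1, x2):
    """Algebraic epipolar residuals |x2^T F x1| for matched (N, 2) pixel arrays."""
    x1h = np.column_stack([x1, np.ones(len(x1))])
    x2h = np.column_stack([x2, np.ones(len(x2))])
    return np.abs(np.einsum('ij,jk,ik->i', x2h, F, x1h))

def estimate_offset(track1, track2, F, candidates):
    """Pick the candidate time offset minimizing the mean epipolar error."""
    n1, n2 = len(track1), len(track2)
    t = np.arange(n1, dtype=float)
    errs = []
    for d in candidates:
        ts = t + d                      # camera-2 frame times for camera-1 frames
        valid = (ts >= 0) & (ts <= n2 - 1)
        # linearly interpolate the camera-2 tracklet at sub-frame times
        x2 = np.stack([np.interp(ts[valid], np.arange(n2), track2[:, k])
                       for k in range(2)], axis=1)
        errs.append(epipolar_error(F, track1[valid], x2).mean())
    return candidates[int(np.argmin(errs))]

# --- synthetic two-camera scene (illustrative parameters) ---
theta = 0.2
R = np.array([[np.cos(theta), 0, np.sin(theta)],
              [0, 1, 0],
              [-np.sin(theta), 0, np.cos(theta)]])   # small yaw between cameras
tvec = np.array([1.0, 0.0, 0.0])                      # baseline
F = skew(tvec) @ R                                    # fundamental matrix (K = I)

def point3d(time):
    """A smoothly moving 3D point observed by both cameras."""
    return np.stack([np.sin(0.3 * time), np.cos(0.2 * time), 5 + 0.05 * time], axis=-1)

d_true = 3                                            # camera 2 starts 3 frames late
frames = np.arange(60, dtype=float)
X1 = point3d(frames)                                  # point in camera-1 coordinates
X2 = point3d(frames - d_true) @ R.T + tvec            # same motion in camera-2 frame
track1 = X1[:, :2] / X1[:, 2:]                        # pinhole projection
track2 = X2[:, :2] / X2[:, 2:]

print(estimate_offset(track1, track2, F, np.arange(-6, 7)))  # → 3
```

At the correct offset the interpolated correspondences satisfy the epipolar constraint exactly (up to floating-point error), so the mean residual drops to near zero; misaligned offsets pair the tracklet with the wrong instant of the motion and incur a nonzero residual, which is what makes the time offset recoverable from motion alone.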
Supplementary Material: zip
Primary Area: Applications (e.g., vision, language, speech and audio, Creative AI)
Submission Number: 11135