Neural space–time model for dynamic multi-shot imaging

Ruiming Cao, Nikita S. Divekar, James K. Nuñez, Srigokul Upadhyayula, Laura Waller

Published: 01 Dec 2024 · Last Modified: 12 Nov 2025 · Nature Methods · CC BY-SA 4.0
Abstract: Computational imaging reconstructions from multiple measurements that are captured sequentially often suffer from motion artifacts if the scene is dynamic. We propose a neural space–time model (NSTM) that jointly estimates the scene and its motion dynamics, without data priors or pre-training. Hence, we can both remove motion artifacts and resolve sample dynamics from the same set of raw measurements used for the conventional reconstruction. We demonstrate NSTM in three computational imaging systems: differential phase-contrast microscopy, three-dimensional structured illumination microscopy and rolling-shutter DiffuserCam. We show that NSTM can recover subcellular motion dynamics and thus reduce the misinterpretation of living systems caused by motion artifacts.

A neural space–time model can recover a dynamic scene by modeling its spatiotemporal relationship in multi-shot imaging reconstruction, for reduced motion artifacts and improved imaging of fast processes in living cells.
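The core idea, jointly fitting a scene representation and a motion field against the raw sequential measurements, can be sketched in a few dozen lines. The following is a minimal illustration only, not the authors' implementation: the network sizes, the masked-view forward model, and all names (motion_net, scene_net, etc.) are assumptions chosen for a toy example. In practice, the per-shot forward model would be that of the specific imaging system (e.g., phase contrast or structured illumination).

```python
# Minimal sketch of joint scene + motion estimation for multi-shot imaging.
# Hypothetical toy setup; names, shapes and forward model are assumptions.
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Small coordinate-based network: maps input coordinates to outputs."""
    def __init__(self, in_dim, out_dim, hidden=64, depth=3):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, hidden), nn.ReLU()]
            d = hidden
        layers += [nn.Linear(d, out_dim)]
        self.net = nn.Sequential(*layers)

    def forward(self, x):
        return self.net(x)

# Motion network: (x, y, t) -> displacement (dx, dy) at that point and time.
motion_net = MLP(3, 2)
# Scene network: motion-corrected coordinates (x, y) -> scene value.
scene_net = MLP(2, 1)

# Toy problem: T sequential shots, each a noisy masked view of a moving scene.
T, N = 8, 32
coords = torch.stack(torch.meshgrid(
    torch.linspace(-1, 1, N), torch.linspace(-1, 1, N), indexing="ij"
), dim=-1).reshape(-1, 2)                      # (N*N, 2) pixel coordinates
masks = (torch.rand(T, N * N) > 0.5).float()   # stand-in per-shot forward model
measurements = torch.rand(T, N * N) * masks    # stand-in for raw captures

opt = torch.optim.Adam(
    list(motion_net.parameters()) + list(scene_net.parameters()), lr=1e-3)

for step in range(200):
    loss = 0.0
    for t in range(T):
        # Normalized time appended to each coordinate for the motion query.
        t_col = torch.full((coords.shape[0], 1), t / (T - 1))
        delta = motion_net(torch.cat([coords, t_col], dim=-1))
        # One shared scene, evaluated at motion-warped coordinates.
        scene = scene_net(coords + delta).squeeze(-1)
        pred = scene * masks[t]                 # simulate the t-th measurement
        loss = loss + ((pred - measurements[t]) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The design point this sketch illustrates: all shots share a single scene representation, and only the coordinate warp depends on time, so the reconstruction is forced to explain inter-shot differences as motion rather than as independent per-shot images. No pre-trained weights or learned priors are involved; both networks are optimized from scratch on the measurements themselves.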