Abstract: Computational imaging reconstructions that combine multiple sequentially captured measurements often suffer from motion artifacts when the scene is dynamic. We propose a neural space–time model (NSTM) that jointly estimates the scene and its motion dynamics, without data priors or pre-training. Hence, we can both remove motion artifacts and resolve sample dynamics from the same set of raw measurements used for the conventional reconstruction. We demonstrate NSTM in three computational imaging systems: differential phase-contrast microscopy, three-dimensional structured illumination microscopy and rolling-shutter DiffuserCam. We show that NSTM can recover subcellular motion dynamics and thus reduce the misinterpretation of living systems caused by motion artifacts.

A neural space–time model can recover a dynamic scene by modeling its spatiotemporal relationships during multi-shot imaging reconstruction, reducing motion artifacts and improving imaging of fast processes in living cells.
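The core idea, jointly fitting a scene representation and a motion model to the same raw multi-shot measurements, can be illustrated with a minimal sketch. Everything below (the `NSTM` class, the two small coordinate MLPs, the toy sampling forward model, and all names) is a hypothetical simplification for illustration, not the published implementation, which pairs coordinate networks with each imaging system's physical forward model.

```python
# Hypothetical sketch of a neural space-time model: a motion network predicts
# a time-dependent displacement of spatial coordinates, and a scene network is
# queried at the warped coordinates. Both are optimized jointly against the raw
# sequential measurements, with no pre-training or learned data prior.
import torch
import torch.nn as nn

def mlp(in_dim, out_dim, hidden=64, depth=3):
    layers, d = [], in_dim
    for _ in range(depth):
        layers += [nn.Linear(d, hidden), nn.ReLU()]
        d = hidden
    return nn.Sequential(*layers, nn.Linear(d, out_dim))

class NSTM(nn.Module):
    def __init__(self):
        super().__init__()
        self.motion = mlp(3, 2)  # (x, y, t) -> displacement (dx, dy)
        self.scene = mlp(2, 1)   # motion-corrected (x, y) -> scene intensity

    def forward(self, xy, t):
        disp = self.motion(torch.cat([xy, t], dim=-1))
        return self.scene(xy + disp)  # evaluate scene at warped coordinates

# Toy joint reconstruction: each shot k is acquired at time t_k. Here the
# forward model is a plain sampling of the scene with noise; a real system
# would apply its own measurement operator instead.
n = 32
ys, xs = torch.meshgrid(torch.linspace(-1, 1, n),
                        torch.linspace(-1, 1, n), indexing="ij")
grid = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
times = torch.linspace(0, 1, 4)                          # four sequential shots
target = torch.exp(-(grid ** 2).sum(-1, keepdim=True))   # synthetic scene
shots = [target + 0.01 * torch.randn_like(target) for _ in times]

model = NSTM()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
for step in range(200):
    loss = sum(
        ((model(grid, t * torch.ones(grid.shape[0], 1)) - y) ** 2).mean()
        for t, y in zip(times, shots)
    )
    opt.zero_grad()
    loss.backward()
    opt.step()
```

In an actual system the per-shot residual would pass through that shot's measurement operator (e.g., a structured-illumination pattern or the rolling-shutter DiffuserCam model) before comparison, and the fitted motion network itself provides the recovered sample dynamics.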
DOI: 10.1038/s41592-024-02417-0