Sparse3Diff: A Diffusion Framework for 3D Reconstruction from Sparse 2D Slices in Volumetric Optical Imaging
Abstract: Volumetric optical imaging is an essential tool for studying biological processes. However, owing to inherent limitations such as long imaging times, volume scanning techniques reduce volumetric information to sparse 2D slices. Although many deep learning methods attempt to reconstruct 3D volumes from sparse slices, they struggle with out-of-distribution (OOD) data, which arises from the diversity of biological structures and the limited structural information in sparse slices. To overcome these challenges, we propose Sparse3Diff, a novel diffusion-based framework that reconstructs high-fidelity 3D volumes from sparse 2D slices. Sparse3Diff incorporates a sparse slice-guided, position-aware diffusion process that uses the sparse slices as guidance and conditions on z-position to maintain structural coherence along the z-axis. In addition, to achieve stable reconstruction on sparse OOD data, we propose a self-alignment strategy in which the model is gradually fine-tuned using its own inferred slices as self-guidance. Experimental results demonstrate that even on sparse OOD data, Sparse3Diff achieves accurate 3D reconstruction and remains robust across various scanning datasets.
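The slice-guided reconstruction idea can be illustrated with a minimal toy sketch. This is not the authors' implementation: the denoiser is a hypothetical stand-in for a trained network, and the guidance shown is a simple replacement scheme (re-imposing the observed slices at their z-positions after every reverse step), which is one common way to condition a diffusion sampler on known measurements.

```python
import numpy as np

def toy_denoiser(x, t, z_positions):
    # Hypothetical stand-in for the learned, z-position-conditioned
    # denoising network: it merely shrinks the volume toward zero,
    # more strongly at small timesteps. A real model would predict
    # noise (or a clean volume) from x, t, and the z-positions.
    return x * (1.0 - 1.0 / (t + 1))

def reconstruct(sparse_slices, observed_z, shape, steps=50, seed=0):
    """Toy slice-guided reverse diffusion.

    sparse_slices : array (k, H, W), the k observed 2D slices
    observed_z    : list of k z-indices where those slices sit
    shape         : (D, H, W), the full volume to reconstruct
    """
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)            # start from pure noise
    for t in range(steps, 0, -1):
        x = toy_denoiser(x, t, observed_z)    # one reverse (denoising) step
        x[observed_z] = sparse_slices         # re-impose the known slices
    return x
```

Because the observed slices are clamped on every step, they are reproduced exactly in the output, while the intermediate z-planes are filled in by the (here trivial) denoiser; in the actual framework, the learned prior would interpolate them coherently along the z-axis.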
External IDs: dblp:conf/miccai/LeeJLSKNJSK25