Keywords: deep learning, synthetic data generation, distributed computing, neuroimaging, magnetic resonance imaging
TL;DR: We took a synthetic data generation pipeline, engineered a framework to deploy it across 20 machines in parallel, and cut model training time from 7 days to 9 hours.
Abstract: The limited availability of diverse, high-quality datasets is a significant challenge in applying deep learning to neuroimaging research. Although synthetic data generation can potentially address this issue, on-the-fly generation is computationally demanding, while training on pre-generated data is inflexible and may incur high storage costs.
We introduce Brainpipe, a scalable in-memory data pipeline that significantly improves the performance of on-the-fly synthetic data generation for deep learning in neuroimaging. Brainpipe's architecture decouples data generation from training by running multiple generators in independent parallel processes, yielding near-linear throughput gains as generators are added.
It buffers terabytes of data efficiently in MongoDB, avoiding prohibitive storage costs. The robust, modular design enables flexible pipeline configurations and fault-tolerant operation.
We evaluated Brainpipe with SynthSeg, a synthetic brain segmentation data generation tool with which training a model takes 7 days. Deployed in parallel, Brainpipe achieved a near-linear 15.7x increase in throughput with 16 generators; with 20 generators, a model trains in 9 hours instead of 7 days.
This demonstrates Brainpipe's ability to greatly accelerate experimentation cycles. While Brainpipe represents a substantial step forward, it also reveals opportunities for future research in optimizing generation-training balance and resource allocation. Its ability to facilitate distributed deep learning has significant implications for enabling more ambitious neuroimaging research.
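The abstract describes a producer/consumer architecture in which independent generator processes fill a MongoDB buffer that the training loop drains. The sketch below illustrates that pattern in minimal form; it is not Brainpipe's actual API, and the database/collection names, the `make_synthetic_sample` stand-in, and the polling interval are hypothetical. It assumes a local MongoDB instance and the `pymongo` and `numpy` packages.

```python
# Minimal sketch of the decoupled generator/consumer pattern described above.
# Assumption: a MongoDB server is running on localhost:27017.
import pickle
import time
from multiprocessing import Process

import numpy as np
from pymongo import MongoClient


def make_synthetic_sample(rng):
    """Stand-in for a SynthSeg-style generator: returns (image, label) volumes."""
    labels = rng.integers(0, 10, size=(64, 64, 64), dtype=np.int16)
    image = labels.astype(np.float32) + rng.normal(size=labels.shape).astype(np.float32)
    return image, labels


def generator_worker(worker_id, n_samples):
    """Independent process that fills the MongoDB buffer with synthetic samples."""
    buffer = MongoClient("mongodb://localhost:27017")["brainpipe"]["samples"]
    rng = np.random.default_rng(worker_id)
    for _ in range(n_samples):
        image, labels = make_synthetic_sample(rng)
        buffer.insert_one({"worker": worker_id,
                           "payload": pickle.dumps((image, labels))})


def consume(n_steps):
    """Training-side loop: pops samples from the buffer as they become available."""
    buffer = MongoClient("mongodb://localhost:27017")["brainpipe"]["samples"]
    steps = 0
    while steps < n_steps:
        doc = buffer.find_one_and_delete({})  # take one sample out of the queue
        if doc is None:
            time.sleep(0.1)  # buffer empty: wait for the generators
            continue
        image, labels = pickle.loads(doc["payload"])
        # ... feed (image, labels) to one training step here ...
        steps += 1


if __name__ == "__main__":
    # Generation scales by adding worker processes, independently of training.
    workers = [Process(target=generator_worker, args=(i, 8)) for i in range(4)]
    for p in workers:
        p.start()
    consume(n_steps=32)
    for p in workers:
        p.join()
```

Because generation and consumption only share the database, the number of generator processes (or machines) can be scaled independently of the training job, which is the property the reported near-linear throughput gains rely on.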
Track: 7. Digital radiology and pathology
Registration Id: RBNRQHMWX9D
Submission Number: 404