- Keywords: Capsule Network, Generative Adversarial Network, Neurons, Axons, Synthetic Data, Segmentation, Image synthesis, Image-to-Image translation
- TL;DR: Synthesising biomedical images using a convolutional capsule generative adversarial network.
- Abstract: The field of biomedical imaging, among others, often suffers from a lack of labelled data. One solution is to use generative models to synthesise more data. To this end, we introduce CapsPix2Pix, which combines convolutional capsules with the pix2pix framework to synthesise images conditioned on segmentation labels. We apply our approach to a new biomedical dataset of cortical axons imaged by two-photon microscopy, as a method of data augmentation for small datasets. We evaluate performance both qualitatively and quantitatively. Quantitative evaluation is performed by using image data generated by either CapsPix2Pix or pix2pix to train a U-net on a segmentation task, then testing on real microscopy data. Our method performs quantitatively as well as pix2pix, with an order of magnitude fewer parameters. Additionally, CapsPix2Pix is far more capable of synthesising images of different appearance but the same underlying geometry. Finally, qualitative analysis of the features learned by CapsPix2Pix suggests that individual capsules capture diverse and often semantically meaningful groups of features, covering structures such as synapses, axons and noise.
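The quantitative evaluation described above trains a U-net on synthetic images and then scores its predictions against real ground-truth masks. The abstract does not name the scoring metric, but segmentation tasks like this are commonly scored with the Dice overlap coefficient; the sketch below is only an illustration of that standard metric, not the paper's confirmed evaluation code:

```python
import numpy as np

def dice_score(pred, target, eps=1e-7):
    """Dice overlap between two binary masks (1.0 = perfect agreement).

    `eps` guards against division by zero when both masks are empty.
    """
    pred = np.asarray(pred).astype(bool)
    target = np.asarray(target).astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return (2.0 * intersection + eps) / (pred.sum() + target.sum() + eps)

# Toy example: a hypothetical predicted axon mask vs. ground truth.
pred = np.array([[0, 1, 1],
                 [0, 1, 0],
                 [0, 0, 0]])
target = np.array([[0, 1, 1],
                   [0, 0, 0],
                   [0, 0, 0]])
print(round(dice_score(pred, target), 3))  # → 0.8
```

In a pipeline such as the one described, this score would be computed per test image on real microscopy data and averaged, allowing a U-net trained on CapsPix2Pix output to be compared directly against one trained on pix2pix output.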
- Code of conduct: I have read and accept the code of conduct.