RT Journal Article
SR Electronic
T1 Segmentation-Enhanced CycleGAN
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 548081
DO 10.1101/548081
A1 Michał Januszewski
A1 Viren Jain
YR 2019
UL http://biorxiv.org/content/early/2019/02/13/548081.abstract
AB Algorithmic reconstruction of neurons from volume electron microscopy data traditionally requires training machine learning models on dataset-specific ground truth annotations that are expensive and tedious to acquire. We enhanced the training procedure of an unsupervised image-to-image translation method with additional components derived from an automated neuron segmentation approach. We show that this method, Segmentation-Enhanced CycleGAN (SECGAN), enables near perfect reconstruction accuracy on a benchmark connectomics segmentation dataset despite operating in a “zero-shot” setting in which the segmentation model was trained using only volumetric labels from a different dataset and imaging method. By reducing or eliminating the need for novel ground truth annotations, SECGANs alleviate one of the main practical burdens involved in pursuing automated reconstruction of volume electron microscopy data.