Abstract
A central problem in biomedical imaging is the automated segmentation of images for further quantitative analysis. Recently, fully convolutional neural networks such as the U-Net have been applied successfully to a variety of segmentation tasks. A downside of this approach is its requirement for a large number of well-prepared training samples, each consisting of an image paired with a ground-truth segmentation mask. Since training data must be created by hand for each experiment, this task can be costly and time-consuming. Here, we present a segmentation method based on cycle-consistent generative adversarial networks, which can be trained even in the absence of prepared image–mask pairs. We show that it successfully performs image segmentation on samples with substantial defects and even generalizes well to different tissue types.