RT Journal Article
SR Electronic
T1 Self-supervised retinal thickness prediction enables deep learning from unlabeled data to boost classification of diabetic retinopathy
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 861757
DO 10.1101/861757
A1 Olle G. Holmberg
A1 Niklas D. Köhler
A1 Thiago Martins
A1 Jakob Siedlecki
A1 Tina Herold
A1 Leonie Keidel
A1 Ben Asani
A1 Johannes Schiefelbein
A1 Siegfried Priglinger
A1 Karsten U. Kortuem
A1 Fabian J. Theis
YR 2019
UL http://biorxiv.org/content/early/2019/12/02/861757.abstract
AB Access to large, annotated samples represents a considerable challenge for training accurate deep-learning models in medical imaging. While current leading-edge transfer learning from pre-trained models can help in cases lacking data, it limits design choices and generally results in unnecessarily large models. We propose a novel, self-supervised training scheme for obtaining high-quality, pre-trained networks from unlabeled, cross-modal medical imaging data, which allows for creating accurate and efficient models. We demonstrate this by accurately predicting optical coherence tomography (OCT)-based retinal thickness measurements from simple infrared (IR) fundus images. Subsequently, the learned representations outperformed advanced classifiers on a separate diabetic retinopathy classification task in a scenario of scarce training data. Our cross-modal, three-stage scheme effectively replaced 26,343 diabetic retinopathy annotations with 1,009 semantic segmentations on OCT and reached the same classification accuracy using only 25% of the fundus images, without any drawbacks, since OCT is not required for predictions. We expect this concept will also apply to other multimodal clinical data, such as imaging, health records, and genomics data, and to corresponding sample-starved learning problems.