TY - JOUR
T1 - DeepScope: Nonintrusive Whole Slide Saliency Annotation and Prediction from Pathologists at the Microscope
JF - bioRxiv
DO - 10.1101/097246
SP - 097246
AU - Andrew J. Schaumberg
AU - S. Joseph Sirintrapun
AU - Hikmat A. Al-Ahmadie
AU - Peter J. Schüffler
AU - Thomas J. Fuchs
Y1 - 2017/01/01
UR - http://biorxiv.org/content/early/2017/01/22/097246.abstract
N2 - Modern digital pathology departments have grown to produce whole-slide image data at petabyte scale, an unprecedented treasure chest for medical machine learning tasks. Unfortunately, most digital slides are not annotated at the image level, hindering large-scale application of supervised learning. Manual labeling is prohibitive, requiring pathologists with decades of training and outstanding clinical service responsibilities. This problem is further aggravated by the United States Food and Drug Administration’s ruling that primary diagnosis must come from a glass slide rather than a digital image. We present the first end-to-end framework to overcome this problem, gathering annotations in a nonintrusive manner during a pathologist’s routine clinical work: (i) microscope-specific 3D-printed commodity camera mounts are used to video record the glass-slide-based clinical diagnosis process; (ii) after routine scanning of the whole slide, the video frames are registered to the digital slide; (iii) motion and observation time are estimated to generate a spatial and temporal saliency map of the whole slide. Demonstrating the utility of these annotations, we train a convolutional neural network that detects diagnosis-relevant salient regions, then report accuracy of 85.15% in bladder and 91.40% in prostate, with 75.00% accuracy when training on prostate but predicting in bladder, despite different pathologists examining the different tissues. When training on one patient but testing on another, AUROC in bladder is 0.7929±0.1109 and in prostate is 0.9568±0.0374. Our tool is available at https://bitbucket.org/aschaumberg/deepscope.
ER -