RT Journal Article
SR Electronic
T1 Automatic Instrument Segmentation in Robot-Assisted Surgery Using Deep Learning
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 275867
DO 10.1101/275867
A1 Alexey Shvets
A1 Alexander Rakhlin
A1 Alexandr A. Kalinin
A1 Vladimir Iglovikov
YR 2018
UL http://biorxiv.org/content/early/2018/03/03/275867.abstract
AB Semantic segmentation of robotic instruments is an important problem for robot-assisted surgery. One of the main challenges is to correctly detect an instrument's position for tracking and pose estimation in the vicinity of surgical scenes. Accurate pixel-wise instrument segmentation is needed to address this challenge. In this paper we describe our winning solution for the MICCAI 2017 Endoscopic Vision SubChallenge: Robotic Instrument Segmentation, and its further refinement. Our approach improves over state-of-the-art results using several novel deep neural network architectures. We address the binary segmentation problem, in which every pixel of the surgical video feed is labeled as either instrument or background. In addition, we solve a multi-class segmentation problem, distinguishing different instruments, or different parts of an instrument, from the background. In this setting, our approach outperforms other methods in every task subcategory, thereby providing state-of-the-art results for automatic instrument segmentation. The source code for our solution is publicly available at https://github.com/ternaus/robot-surgery-segmentation