RT Journal Article
SR Electronic
T1 Track-To-Learn: A general framework for tractography with deep reinforcement learning
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2020.11.16.385229
DO 10.1101/2020.11.16.385229
A1 Antoine Théberge
A1 Christian Desrosiers
A1 Maxime Descoteaux
A1 Pierre-Marc Jodoin
YR 2020
UL http://biorxiv.org/content/early/2020/11/17/2020.11.16.385229.abstract
AB Diffusion MRI tractography is currently the only non-invasive tool able to assess the white-matter structural connectivity of a brain. Since its inception, it has been widely documented that tractography is prone to producing erroneous tracks while missing true positive connections. Anatomical priors have been conceived and implemented in classical algorithms to try to tackle these issues, yet problems remain, and the conception and validation of these priors are very challenging. Recently, supervised learning algorithms have been proposed to learn the tracking procedure implicitly from data, without relying on anatomical priors. However, these methods rely on labelled data that is very hard to obtain. To remove the need for such data but still leverage the expressiveness of neural networks, we introduce Track-To-Learn: a general framework to pose tractography as a deep reinforcement learning problem. Deep reinforcement learning is a type of machine learning that does not depend on ground-truth data but rather on the concept of "reward". We implement and train algorithms to maximize returns from a reward function based on the alignment of streamlines with principal directions extracted from diffusion data. We show that competitive results can be obtained on known data and that the algorithms generalize far better to new, unseen data than prior machine learning-based tractography algorithms. To the best of our knowledge, this is the first successful use of deep reinforcement learning for tractography. Competing Interest Statement: The authors have declared no competing interest.