RT Journal Article
SR Electronic
T1 Data-driven Assessment of Structural Image Quality
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 125161
DO 10.1101/125161
A1 Adon F. G. Rosen
A1 David R. Roalf
A1 Kosha Ruparel
A1 Jason Blake
A1 Kevin Seelaus
A1 Prayosha Villa
A1 Phillip A. Cook
A1 Christos Davatzikos
A1 Mark A. Elliott
A1 Angel Garcia de La Garza
A1 Efstathios D. Gennatas
A1 Megan Quarmley
A1 J. Eric Schmitt
A1 Russell T. Shinohara
A1 M. Dylan Tisdall
A1 R. Cameron Craddock
A1 Raquel E. Gur
A1 Ruben C. Gur
A1 Theodore D. Satterthwaite
YR 2017
UL http://biorxiv.org/content/early/2017/04/08/125161.abstract
AB Data quality is increasingly recognized as one of the most important confounders in brain imaging research. It is particularly important for studies of brain development, where age is systematically related to in-scanner motion and data quality. Prior work has demonstrated that in-scanner head motion biases estimates of structural neuroimaging measures. Yet, objective measures of data quality are not available for most structural brain images. Here we sought to identify reliable, quantitative measures of data quality for T1-weighted volumes, describe how such measures of quality relate to common measures of brain structure, and delineate how this in turn may bias inference regarding brain development in youth. Three highly trained raters provided manual ratings for 1601 T1-weighted volumes acquired as part of the Philadelphia Neurodevelopmental Cohort. Expert manual ratings were compared to automated quality measures derived from the Preprocessed Connectomes Project's Quality Assurance Protocol (QAP). Generalized linear mixed-effects models using the automated quality measures were constructed in a training sample (n = 1067) to: 1) identify unusable images with significant artifacts, and 2) quantify subtle artifacts in usable images. These models were then tested in an independent validation dataset (n = 534).
Results reveal that unusable images can be detected with a high degree of accuracy: a model including background kurtosis and skewness achieved an AUC of 0.95 in the training dataset and 0.94 in the independent validation dataset. While identification of subtle artifacts was more challenging, an 8-parameter model achieved an AUC of 0.80 in the training dataset and 0.92 in the validation dataset. Notably, quantitative measures of image quality were related to cortical thickness and gray matter density; measures of cortical volume were less affected by artifact. Furthermore, these quantitative measures of image quality demonstrated comparable or superior performance to estimates of motion derived from other imaging sequences acquired during the same protocol. Finally, data quality significantly altered estimates of structural brain maturation during adolescent development. Taken together, these results indicate that reliable measures of data quality can be automatically derived from T1-weighted volumes, and that failing to control for data quality can systematically bias the results of studies of brain development.
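The core idea reported in the abstract — that simple distributional statistics of the image background (kurtosis, skewness) can discriminate unusable from usable volumes, with performance summarized by AUC — can be sketched as follows. This is an illustrative toy example on synthetic "background voxel" samples, not the authors' actual QAP pipeline or mixed-effects model; the noise model for corrupted scans is an assumption made purely for demonstration.

```python
import random
import statistics

def skewness(xs):
    # Sample skewness: third standardized moment.
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 3 for x in xs) / (len(xs) * s ** 3)

def kurtosis(xs):
    # Sample excess kurtosis: fourth standardized moment minus 3.
    m = statistics.fmean(xs)
    s = statistics.pstdev(xs)
    return sum((x - m) ** 4 for x in xs) / (len(xs) * s ** 4) - 3.0

def auc(scores, labels):
    # Rank-based AUC: probability that a positive (artifact) case
    # receives a higher score than a negative (clean) case.
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Synthetic background intensities (assumption): clean scans have
# near-half-Gaussian background noise; motion-corrupted scans add
# occasional heavy-tailed spikes, inflating background kurtosis.
rng = random.Random(0)
clean = [[abs(rng.gauss(0, 1)) for _ in range(500)] for _ in range(50)]
bad = [[abs(rng.gauss(0, 1)) + (rng.random() < 0.1) * rng.expovariate(0.5)
        for _ in range(500)] for _ in range(50)]

scores = [kurtosis(v) for v in clean + bad]
labels = [0] * 50 + [1] * 50
print(f"AUC of background kurtosis alone: {auc(scores, labels):.2f}")
```

In the study itself these features were combined in generalized linear mixed-effects models fit on the training sample and evaluated on the held-out validation sample; the toy AUC here is computed by the standard rank-comparison definition rather than from a fitted model.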