Abstract
Introduction Quality assurance (QA) is vital for ensuring the integrity of processed neuroimaging data used in clinical neuroscience research. Manual QA (visual inspection) of processed brains for cortical surface reconstruction errors is resource-intensive, particularly with large datasets. Several semi-automated QA tools quantitatively flag subjects for editing based on outlier brain regions. This project had two goals: (1) evaluate the adequacy of a statistical QA method relative to visual inspection, and (2) examine whether error identification and correction significantly affects estimates of cortical parameters and established brain-behavior relationships.
Methods T1 MPRAGE images (N = 530) of healthy adults were obtained from the NKI-Rockland Sample and reconstructed using FreeSurfer 5.3. Visual inspection of T1 images was conducted for: (1) participants (n = 110) with outlier values (z scores exceeding ±3 SD) for subcortical and cortical segmentation volumes (outlier group), and (2) a random sample of the remaining participants (n = 110) whose segmentation values did not meet the outlier criterion (non-outlier group).
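As a rough illustration of the outlier criterion described above (a minimal sketch, not the authors' actual pipeline), the following Python snippet flags participants whose regional segmentation volumes fall more than 3 SD from the sample mean. The input file name and column layout are assumptions; in practice such a table could be exported with FreeSurfer's asegstats2table/aparcstats2table utilities.

```python
# Minimal sketch of a z-score outlier criterion (illustrative only).
# Assumes a CSV with one row per participant and one column per regional volume.
import pandas as pd

volumes = pd.read_csv("segmentation_volumes.csv", index_col="participant_id")  # hypothetical file

# Standardize each regional volume across participants.
z = (volumes - volumes.mean()) / volumes.std(ddof=1)

# Flag participants with any |z| >= 3 as the "outlier" group; the rest form
# the pool from which a non-outlier comparison sample could be drawn.
outlier_mask = (z.abs() >= 3).any(axis=1)
outlier_ids = volumes.index[outlier_mask]
non_outlier_ids = volumes.index[~outlier_mask]

print(f"{len(outlier_ids)} outlier participants, {len(non_outlier_ids)} non-outlier participants")
```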
Results The outlier group had 21% more participants with visual inspection-identified errors than the non-outlier group, a medium effect size (Φ = 0.22). Nevertheless, a considerable portion (41%) of the images with cortical extension errors were found in the non-outlier group. Sex significantly predicted error rate; men were 2.8 times more likely to have errors than women. Although nine brain regions significantly changed in size from pre- to post-editing (with effect sizes ranging from 0.26 to 0.59), editing did not substantially change the correlations between neurocognitive tasks and brain volumes (ps > 0.05).
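For readers unfamiliar with the effect-size metrics quoted above, the sketch below shows how a phi coefficient and an odds ratio can be computed from a 2×2 contingency table of group membership by error status. The counts are hypothetical placeholders chosen only for illustration; they are not the study's data, and the same arithmetic applies to any 2×2 table (e.g., sex by error status).

```python
# Illustrative computation of phi and an odds ratio from a 2x2 table
# (group: outlier / non-outlier; error status: error / no error).
# The counts below are hypothetical placeholders, not the study's data.
import numpy as np
from scipy.stats import chi2_contingency

table = np.array([[60, 50],    # outlier group:     errors, no errors
                  [37, 73]])   # non-outlier group:  errors, no errors

chi2, p, dof, expected = chi2_contingency(table, correction=False)
n = table.sum()
phi = np.sqrt(chi2 / n)            # phi coefficient for a 2x2 table

a, b = table[0]
c, d = table[1]
odds_ratio = (a * d) / (b * c)     # odds of an error in the first group vs. the second

print(f"phi = {phi:.2f}, odds ratio = {odds_ratio:.2f}, p = {p:.3f}")
```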
Conclusions Statistically based QA, although less resource-intensive, is not accurate enough to supplant visual inspection. We discuss practical implications of our findings to guide resource allocation decisions for image processing.