PT - JOURNAL ARTICLE
AU - Ann-Marie G. de Lange
AU - Melis Anatürk
AU - Jaroslav Rokicki
AU - Laura K.M. Han
AU - Katja Franke
AU - Dag Alnæs
AU - Klaus P. Ebmeier
AU - Bogdan Draganski
AU - Tobias Kaufmann
AU - Lars T. Westlye
AU - Tim Hahn
AU - James H. Cole
TI - Mind the gap: performance metric evaluation in brain-age prediction
AID - 10.1101/2021.05.16.444349
DP - 2021 Jan 01
TA - bioRxiv
PG - 2021.05.16.444349
4099 - http://biorxiv.org/content/early/2021/05/17/2021.05.16.444349.short
4100 - http://biorxiv.org/content/early/2021/05/17/2021.05.16.444349.full
AB - Estimating age from neuroimaging-derived data has become a popular approach to developing markers of brain integrity and health. While a variety of machine-learning algorithms can provide accurate predictions of age based on brain characteristics, reported model accuracy varies considerably across studies. We predicted age from neuroimaging data in two population-based datasets and assessed the effects of age range, sample size, and age-bias correction on the model performance metrics r, R2, Root Mean Squared Error (RMSE), and Mean Absolute Error (MAE). The results showed that these metrics vary considerably depending on cohort age range: r and R2 values are lower when measured in samples with a narrower age range. RMSE and MAE are also lower in samples with a narrower age range, because errors/brain-age delta values are smaller when predictions are closer to the mean age of the group. Across subsets with different age ranges, performance metrics improve with increasing sample size. Performance metrics further vary depending on prediction variance as well as the mean age difference between training and test sets, and age-bias corrected metrics indicate high accuracy even for models showing poor initial performance. In conclusion, the performance metrics used to evaluate age prediction models depend on cohort- and study-specific data characteristics and cannot be directly compared across studies. Since age-bias corrected metrics generally indicate high accuracy, even for poorly performing models, inspection of uncorrected model results provides important information about underlying model attributes such as prediction variance. Competing Interest Statement: The authors have declared no competing interest.
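
The abstract evaluates brain-age models with four metrics: Pearson's r, R2, RMSE, and MAE. A minimal sketch of how these metrics are commonly computed, assuming NumPy, SciPy, and scikit-learn are available; the function and array names are illustrative, not taken from the paper:

```python
import numpy as np
from scipy.stats import pearsonr
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

def brain_age_metrics(age_true, age_pred):
    """Compute the four performance metrics discussed in the abstract."""
    r, _ = pearsonr(age_true, age_pred)    # Pearson correlation coefficient
    r2 = r2_score(age_true, age_pred)      # coefficient of determination
    rmse = float(np.sqrt(mean_squared_error(age_true, age_pred)))
    mae = mean_absolute_error(age_true, age_pred)
    return {"r": r, "R2": r2, "RMSE": rmse, "MAE": mae}
```

As the abstract notes, all four values shift with cohort age range and sample size, so they should be interpreted relative to the evaluation sample rather than compared directly across studies.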
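The abstract also examines age-bias correction. A common linear variant (regressing predicted age on chronological age in a training or held-out set, then inverting the fit to de-bias new predictions) is sketched below; whether this matches the exact procedure used in the paper is an assumption:

```python
import numpy as np

def fit_bias_correction(age_true_train, age_pred_train):
    """Fit age_pred ~ alpha * age_true + beta on a training/held-out set."""
    alpha, beta = np.polyfit(age_true_train, age_pred_train, deg=1)
    return alpha, beta

def apply_bias_correction(age_pred, alpha, beta):
    """De-bias new predictions: (prediction - intercept) / slope."""
    return (age_pred - beta) / alpha
```

Consistent with the abstract's caution, corrected metrics can look strong even when the uncorrected predictions have low variance, so uncorrected results should be reported alongside the corrected ones.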