PT - JOURNAL ARTICLE
AU - Fernando Meyer
AU - Till-Robin Lesker
AU - David Koslicki
AU - Adrian Fritz
AU - Alexey Gurevich
AU - Aaron E. Darling
AU - Alexander Sczyrba
AU - Andreas Bremges
AU - Alice C. McHardy
TI - Tutorial: Assessing metagenomics software with the CAMI benchmarking toolkit
AID - 10.1101/2020.08.11.245712
DP - 2020 Jan 01
TA - bioRxiv
PG - 2020.08.11.245712
4099 - http://biorxiv.org/content/early/2020/08/12/2020.08.11.245712.short
4100 - http://biorxiv.org/content/early/2020/08/12/2020.08.11.245712.full
AB - Computational methods are key in microbiome research, and obtaining a quantitative and unbiased performance estimate is important for method developers and applied researchers. Standardized data sets, procedures, and evaluation metrics are necessary for meaningful comparisons between methods, for identifying best practices and common use cases, and for reducing the overhead of benchmarking. In this tutorial, we describe emerging standards in computational metaomics benchmarking derived and agreed upon by a larger community of researchers. Specifically, we outline recent efforts by the Critical Assessment of Metagenome Interpretation (CAMI) initiative, which supplies method developers and applied researchers with exhaustive quantitative data about software performance in realistic scenarios and organizes community-driven benchmarking challenges. We explain the most relevant evaluation metrics for assessing metagenome assembly, binning, and profiling results, and provide step-by-step instructions on how to generate them. The instructions use simulated mouse gut metagenome data released in preparation for the second round of CAMI challenges and showcase the use of a repository of tool results for CAMI data sets. This tutorial will serve as a reference to the community and facilitate informative and reproducible benchmarking in microbiome research. Competing Interest Statement: The authors have declared no competing interest.