Abstract
Genome scan approaches promise to map genomic regions involved in the adaptation of individuals to their environment. The outcomes of genome scans are known to depend on several factors, including the underlying demography, the adaptive scenario, and the software or method used. We took advantage of a pedagogical experiment carried out during a summer school to explore a previously unexamined source of variability: the degree of user expertise. Participants were asked to analyze three simulated data challenges with methods presented during the summer school. In addition to submitting lists of candidate loci, participants self-evaluated their level of expertise a priori. We measured the quality of each genome scan analysis by computing a score that depends on the false discovery rate and the statistical power. In an easy challenge and a difficult one, less advanced participants obtained scores similar to those of advanced participants, demonstrating that participants with little background in genome scan methods were able to learn to use complex software after short introductory tutorials. In a challenge of intermediate difficulty, however, advanced participants obtained better scores. To explain this difference, we introduce a probabilistic model which shows that a larger variation in scores is expected for SNPs of intermediate detection difficulty. We conclude that practitioners should develop their statistical and computational expertise to keep pace with the development of complex methods. To encourage training, we release the website of the summer school, where users can submit lists of candidate loci that will be scored and compared to the scores obtained by previous users.
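The abstract states only that the score depends on the false discovery rate and statistical power, without giving the formula. As a minimal illustrative sketch, assuming a hypothetical combination of the two quantities (power minus FDR), a submitted candidate list could be scored against the set of truly selected SNPs as follows; the function name and the scoring rule are assumptions, not the paper's method:

```python
# Hypothetical scoring of a submitted candidate list against the truly
# selected SNPs. The exact score used in the paper is not given in the
# abstract; (power - FDR) is an assumed simple trade-off for illustration.

def score_submission(candidates, true_positives):
    """Return (power, FDR, score) for a list of candidate SNP ids."""
    candidates = set(candidates)
    true_positives = set(true_positives)
    tp = len(candidates & true_positives)  # correctly detected SNPs
    power = tp / len(true_positives) if true_positives else 0.0
    fdr = (len(candidates) - tp) / len(candidates) if candidates else 0.0
    return power, fdr, power - fdr

# Example: 8 of 10 truly adaptive SNPs found, with 4 false discoveries.
power, fdr, score = score_submission(
    candidates=[1, 2, 3, 4, 5, 6, 7, 8, 101, 102, 103, 104],
    true_positives=[1, 2, 3, 4, 5, 6, 7, 8, 9, 10],
)
print(f"power={power:.2f}, FDR={fdr:.2f}, score={score:.2f}")
# power=0.80, FDR=0.33, score=0.47
```

Under this sketch, a shorter, more precise list and a longer, more sensitive list can reach similar scores, which is consistent with the trade-off between false discoveries and power that the score is described as capturing.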