RT Journal Article
SR Electronic
T1 Call for participation: Collaborative benchmarking of functional-structural root architecture models. The case of root water uptake
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 808972
DO 10.1101/808972
A1 Andrea Schnepf
A1 Christopher K. Black
A1 Valentin Couvreur
A1 Benjamin M. Delory
A1 Claude Doussan
A1 Axelle Koch
A1 Timo Koch
A1 Mathieu Javaux
A1 Magdalena Landl
A1 Daniel Leitner
A1 Guillaume Lobet
A1 Trung Hieu Mai
A1 Félicien Meunier
A1 Lukas Petrich
A1 Johannes A. Postma
A1 Eckart Priesack
A1 Volker Schmidt
A1 Jan Vanderborght
A1 Harry Vereecken
A1 Matthias Weber
YR 2019
UL http://biorxiv.org/content/early/2019/10/17/808972.abstract
AB Three-dimensional models of root growth, architecture and function are becoming important tools that aid the design of agricultural management schemes and the selection of beneficial root traits. However, while benchmarking is common in many disciplines that use numerical models, such as the natural and engineering sciences, functional-structural root architecture models have never been systematically compared. Disagreement between the simulation results of different models may arise from differing representations of root growth, of the sink term for root water and solute uptake, and of the rhizosphere. At present, the extent of these discrepancies is unknown, and a framework for quantitatively comparing functional-structural root architecture models is required. We propose, as a first step, to define benchmarking scenarios that test individual components of complex models: root architecture, water flow in soil and water flow in roots. While the latter two focus mainly on comparing numerical aspects, the root architecture models have to be compared at a conceptual level, as they generally differ in process representation. Defining common inputs that allow reference root systems to be recreated in all models will therefore be a key challenge. In a second step, benchmarking scenarios for the coupled problems will be defined. We expect that the results of step 1 will enable us to better interpret differences found in step 2. This benchmarking will lead to a better understanding of the different models and contribute to improving them. Improved models will allow us to simulate various scenarios with greater confidence and to avoid bugs, numerical errors and conceptual misunderstandings. This work will set a standard for future model development.
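
The abstract names water flow in soil, water flow in roots and a root water uptake sink term as benchmark components without stating their governing equations. As an illustrative sketch only (a common generic formulation, assumed here and not quoted from the record above), the quantities such scenarios typically compare are the Richards equation for soil water flow with a sink term S and radial/axial flux expressions for water flow within the root hydraulic architecture; the symbols below are generic notation.

% Illustrative (assumed) governing equations for the water-flow benchmark
% components named in the abstract; generic formulation, not quoted from it.
% Requires the amsmath package for the align environment.
\begin{align}
  \frac{\partial \theta(h)}{\partial t}
    &= \nabla \cdot \bigl[K(h)\,\nabla (h + z)\bigr] - S(h,\mathbf{x})
    && \text{(soil water flow with root uptake sink } S\text{)} \\
  q_r &= k_r\,\bigl(h_{\text{soil}} - h_{\text{xylem}}\bigr)
    && \text{(radial flow into a root segment)} \\
  q_x &= -k_x\,\frac{\partial (h_{\text{xylem}} + z)}{\partial l}
    && \text{(axial xylem flow along the segment)}
\end{align}

Here \theta is the volumetric water content, h the pressure head, K(h) the soil hydraulic conductivity, z the elevation, k_r and k_x the radial and axial root conductivities, and l the local coordinate along the root axis. A coupled benchmark would typically compare the simulated sink term S, the segment fluxes q_r and q_x, or the resulting transpiration across models.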