ABSTRACT
Validating a quantitative scientific model requires comparing its predictions against many experimental observations, ideally from many labs, using transparent, robust statistical comparisons. Unfortunately, in rapidly growing fields like neuroscience this is becoming increasingly untenable, even for the most conscientious scientists. As a result, the merits and limitations of existing models, and whether a new model improves on the state of the art, often remain unclear.
Software engineers seeking to verify, validate, and contribute to a complex software project rely on suites of simple executable tests called “unit tests”. Drawing inspiration from this practice, we previously developed SciUnit, an easy-to-use framework for developing data-driven “model validation tests”: executable functions, implemented here in Python. Each such test generates a prediction from a model, statistically compares it against one relevant feature of empirical data, and produces a score indicating the agreement between model and data. Suites of such validation tests can be used to clearly identify the merits and limitations of existing models and to track development progress on new ones.
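To make this pattern concrete, the listing below sketches a minimal validation test using SciUnit's core abstractions (Capability, Test, Model, and the ZScore score type). The membrane-time-constant quantity, the class names, and the example numbers are illustrative choices for this sketch, not components of NeuronUnit itself.

    import sciunit
    from sciunit.scores import ZScore

    class ProducesMembraneTimeConstant(sciunit.Capability):
        """Interface a model must implement to be testable by the test below."""
        def get_membrane_time_constant(self):
            """Return the model's membrane time constant (ms)."""
            raise NotImplementedError()

    class MembraneTimeConstantTest(sciunit.Test):
        """Scores a model's predicted time constant against experimental data."""
        required_capabilities = (ProducesMembraneTimeConstant,)
        score_type = ZScore

        def generate_prediction(self, model):
            # Ask the model to predict the tested quantity.
            return {'value': model.get_membrane_time_constant()}

        def compute_score(self, observation, prediction):
            # ZScore.compute returns (prediction - mean) / std as a score object.
            return ZScore.compute(observation, prediction)

    class ToyNeuronModel(sciunit.Model, ProducesMembraneTimeConstant):
        """A trivial model that simply reports a fixed time constant."""
        def __init__(self, tau, name=None):
            self.tau = tau
            super().__init__(name=name, tau=tau)

        def get_membrane_time_constant(self):
            return self.tau

    # Observation: summary statistics from experimental recordings.
    test = MembraneTimeConstantTest(observation={'mean': 20.0, 'std': 5.0})
    score = test.judge(ToyNeuronModel(tau=23.0, name='toy'))
    print(score)  # the model is within one standard deviation of the data: Z = 0.6

The judge method checks that the model implements the required capability, generates the prediction, and computes the score, so the same test can be applied unchanged to any model exposing that interface.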
Here we describe NeuronUnit, a library that builds upon SciUnit to support the validation of single-neuron models against data gathered by neurophysiologists and neuroanatomists. NeuronUnit integrates with existing technologies such as Jupyter, Pandas, and NeuroML, and with neuroinformatics resources such as NeuroElectro, the Allen Institute, and the Human Brain Project, to make neuron model validation as easy as possible for computational neuroscientists.
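As a sketch of the NeuroElectro integration, the listing below retrieves pooled summary statistics for one electrophysiological property and uses them as the observation for the test class sketched above. The class and attribute names (NeuroElectroSummary, get_values, mean, std), the NeuroLex identifier, and the property name follow our reading of NeuronUnit's neuroelectro module and may differ between versions; running it requires network access to neuroelectro.org.

    from neuronunit import neuroelectro

    # Fetch pooled summary statistics for one ephys property from
    # neuroelectro.org. The NeuroLex neuron-type ID and the property
    # name here are illustrative placeholders.
    summary = neuroelectro.NeuroElectroSummary(
        neuron={'nlex_id': 'nifext_50'},
        ephysprop={'name': 'Membrane Time Constant'})
    summary.get_values()  # queries the NeuroElectro API

    # The retrieved statistics become the observation for a validation
    # test, reusing MembraneTimeConstantTest from the previous listing.
    observation = {'mean': summary.mean, 'std': summary.std}
    test = MembraneTimeConstantTest(observation,
                                    name='Time constant vs. NeuroElectro')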