PT  - JOURNAL ARTICLE
AU  - Michael J. Lee
AU  - James J. DiCarlo
TI  - An empirical assay of visual object learning in humans and baseline image-computable models
AID - 10.1101/2022.12.31.522402
DP  - 2023 Jan 01
TA  - bioRxiv
PG  - 2022.12.31.522402
4099 - http://biorxiv.org/content/early/2023/01/23/2022.12.31.522402.short
4100 - http://biorxiv.org/content/early/2023/01/23/2022.12.31.522402.full
AB  - How humans learn new visual objects is a longstanding scientific problem. Previous work has led to a diverse collection of models for how object learning may be accomplished, but a current limitation in the field is a lack of empirical benchmarks that evaluate the predictive validity of specific, image-computable models and facilitate fair comparisons between competing models. Here, we used online psychophysics to measure human learning trajectories over a set of tasks involving novel 3D objects, then used those data to develop such benchmarks. We make all data and benchmarks publicly available, and, to our knowledge, they are currently the largest publicly available collection of visual object learning psychophysical data in humans. Consistent with intuition, we found that humans generally require very few images (<10) to approach their asymptotic accuracy, find some object discriminations easier to learn than others, and generalize quite well over a range of image transformations, even after just one view of each object. To serve as baseline reference values for those benchmarks, we implemented and tested a large number of baseline models (n=2,408), each based on a standard cognitive theory of learning: that humans re-represent images in a fixed, Euclidean space, then learn linear decision boundaries in that space to identify objects in future images.
We found that some of these baseline models make surprisingly accurate predictions, but we also identified reliable prediction gaps between all baseline models and humans, particularly in the few-shot learning setting.
Competing Interest Statement: The authors have declared no competing interest.
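The baseline cognitive theory named in the abstract (re-represent each image as a point in a fixed Euclidean feature space, then learn a linear decision boundary over those points) can be illustrated with a minimal sketch. The feature vectors and the perceptron learning rule below are illustrative stand-ins, not the paper's actual 2,408 models or their training procedure.

```python
# Sketch of the baseline theory: images are assumed to be re-represented
# as fixed feature vectors, and an object identity is learned as a linear
# decision boundary w.x + b = 0 in that feature space.

def perceptron_train(examples, labels, epochs=20, lr=0.1):
    """Learn a linear boundary over fixed feature vectors (labels in {-1, +1})."""
    dim = len(examples[0])
    w = [0.0] * dim
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            score = sum(wi * xi for wi, xi in zip(w, x)) + b
            if y * score <= 0:  # misclassified: shift the boundary toward x
                w = [wi + lr * y * xi for wi, xi in zip(w, x)]
                b += lr * y
    return w, b

def perceptron_predict(w, b, x):
    """Classify a new view by which side of the boundary it falls on."""
    return 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else -1

# Few-shot setting: two hypothetical objects, two toy feature vectors each.
views_a = [[1.0, 0.2], [0.9, 0.1]]  # assumed features of views of object A
views_b = [[0.1, 1.0], [0.2, 0.9]]  # assumed features of views of object B
w, b = perceptron_train(views_a + views_b, [1, 1, -1, -1])
print(perceptron_predict(w, b, [0.8, 0.3]))  # → 1 (a held-out view of A)
```

Generalization to image transformations corresponds, under this theory, to transformed views landing on the correct side of the learned boundary in the fixed feature space.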