RT Journal Article
SR Electronic
T1 An image-computable model of human visual shape similarity
JF bioRxiv
FD Cold Spring Harbor Laboratory
SP 2020.01.10.901876
DO 10.1101/2020.01.10.901876
A1 Yaniv Morgenstern
A1 Frieder Hartmann
A1 Filipp Schmidt
A1 Henning Tiedemann
A1 Eugen Prokott
A1 Guido Maiello
A1 Roland W. Fleming
YR 2020
UL http://biorxiv.org/content/early/2020/01/11/2020.01.10.901876.abstract
AB Shape is a defining feature of objects. Yet, no image-computable model accurately predicts how similar or different shapes appear to human observers. To address this, we developed a model (‘ShapeComp’), based on over 100 shape features (e.g., area, compactness, Fourier descriptors). When trained to capture the variance in a database of >25,000 animal silhouettes, ShapeComp predicts human shape similarity judgments almost perfectly (r²>0.99) without fitting any parameters to human data. To test the model, we created carefully selected arrays of complex novel shapes using a Generative Adversarial Network trained on the animal silhouettes, which we presented to observers in a wide range of tasks. Our findings show that human shape perception is inherently multidimensional and optimized for comparing natural shapes. ShapeComp outperforms conventional metrics, and can also be used to generate perceptually uniform stimulus sets, making it a powerful tool for investigating shape and object representations in the human brain.
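The abstract names area, compactness, and Fourier descriptors as examples of the shape features ShapeComp is built on. The sketch below is not the authors' ShapeComp code; it is only a minimal illustration, assuming OpenCV (≥4) and NumPy, of how such descriptors could be computed from a binary silhouette image. The function name `shape_features` and the parameter `n_fourier` are illustrative, not from the paper.

```python
import numpy as np
import cv2

def shape_features(silhouette, n_fourier=10):
    """Toy feature vector (area, compactness, Fourier descriptors) for a binary silhouette."""
    # Extract the outer boundary of the silhouette (OpenCV >= 4 return signature assumed).
    contours, _ = cv2.findContours(silhouette.astype(np.uint8),
                                   cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_NONE)
    contour = max(contours, key=cv2.contourArea)      # keep the largest outline
    pts = contour[:, 0, :].astype(float)              # (N, 2) boundary points

    area = cv2.contourArea(contour)
    perimeter = cv2.arcLength(contour, True)
    compactness = 4.0 * np.pi * area / perimeter**2   # equals 1.0 for a perfect circle

    # Fourier descriptors: FFT of the boundary treated as complex numbers,
    # normalized by the first harmonic for (approximate) scale invariance.
    z = pts[:, 0] + 1j * pts[:, 1]
    coeffs = np.fft.fft(z)
    fourier = np.abs(coeffs[1:n_fourier + 1]) / (np.abs(coeffs[1]) + 1e-12)

    return np.concatenate(([area, compactness], fourier))
```

Feature vectors like this could then be compared (e.g., by Euclidean distance) across silhouettes; the paper's model combines over 100 such features, so this is only a schematic starting point.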