TY - JOUR
T1 - How we learn things we don’t know already: A theory of learning structured representations from experience
JF - bioRxiv
DO - 10.1101/198804
SP - 198804
AU - Leonidas A. A. Doumas
AU - Guillermo Puebla
AU - Andrea E. Martin
Y1 - 2017/01/01
UR - http://biorxiv.org/content/early/2017/10/18/198804.abstract
N2 - How a system represents information tightly constrains the kinds of problems it can solve. Humans routinely solve problems that appear to require structured representations of stimulus properties and relations. Answering the question of how we acquire these representations has central importance in an account of human cognition. We propose a theory of how a system can learn invariant responses to instances of similarity and relative magnitude, and how structured relational representations can be learned from initially unstructured inputs. We instantiate that theory in the DORA (Discovery of Relations by Analogy) computational framework. The result is a system that learns structured representations of relations from unstructured flat feature vector representations of objects with absolute properties. The resulting representations meet the requirements of human structured relational representations, and the model captures several specific phenomena from the literature on cognitive development. In doing so, we address a major limitation of current accounts of cognition, and provide an existence proof for how structured representations might be learned from experience.
ER -