TY  - JOUR
T1  - From neural network to psychophysics of time: Exploring emergent properties of RNNs using novel Hamiltonian formalism
JF  - bioRxiv
DO  - 10.1101/125849
SP  - 125849
AU  - Sengupta, Rakesh
AU  - Pattanayak, Anindya
AU  - Bapi, Raju Surampudi
Y1  - 2017/01/01
UR  - http://biorxiv.org/content/early/2017/04/10/125849.abstract
N2  - The stability analysis of dynamical neural network systems generally follows the route of finding a suitable Liapunov function, after the fashion of Hopfield's famous paper on content-addressable memory networks, or of finding conditions that make divergent solutions impossible. The current work focuses on biological recurrent neural networks (bRNNs) that require transient external inputs (Cohen-Grossberg networks). We propose a general method for constructing Liapunov functions for recurrent neural networks with the help of a physically meaningful Hamiltonian function. This construct allows us to explore emergent properties of the recurrent network (e.g., the parameter configuration needed for winner-take-all competition in a leaky accumulator design) beyond what standard stability analysis offers, while recovering the standard stability analysis (ordinary differential equation approach) as a special case of the general stability constraint derived from the Hamiltonian formulation. We also show that the Cohen-Grossberg Liapunov function can be derived naturally from the Hamiltonian formalism. A strength of the construct is its usability as a predictor of behavior in psychophysical experiments involving numerosity and temporal duration judgements.
ER  -