PT  - JOURNAL ARTICLE
AU  - Toshitake Asabuki
AU  - Naoki Hiratani
AU  - Tomoki Fukai
TI  - Chunking sequence information by mutually predicting recurrent neural networks
AID - 10.1101/215392
DP  - 2017 Jan 01
TA  - bioRxiv
PG  - 215392
4099 - http://biorxiv.org/content/early/2017/11/09/215392.short
4100 - http://biorxiv.org/content/early/2017/11/09/215392.full
AB  - Interpretation and execution of complex sequences is crucial for various cognitive tasks such as language processing and motor control. The brain arguably solves this problem by dividing a sequence into discrete chunks of contiguous items. While chunking has been accounted for by predictive uncertainty, alternative mechanisms have also been suggested, and the mechanism underlying chunking remains poorly understood. Here, we propose a class of unsupervised neural networks for learning and identifying repeated patterns in sequence input with various degrees of complexity. In this model, a pair of reservoir computing modules, each of which comprises a recurrent neural network and readout units, supervise each other to consistently predict each other's responses to frequently recurring segments. Interestingly, this system generates neural responses similar to those formed in the basal ganglia during habit formation. Our model extends reservoir computing to higher cognitive function and demonstrates its resemblance to sequence processing by cortico-basal ganglia loops.