PT  - JOURNAL ARTICLE
AU  - Birgit Kriener
AU  - Rishidev Chaudhuri
AU  - Ila R. Fiete
TI  - How fast is neural winner-take-all when deciding between many options?
AID - 10.1101/231753
DP  - 2017 Jan 01
TA  - bioRxiv
PG  - 231753
4099 - http://biorxiv.org/content/early/2017/12/11/231753.short
4100 - http://biorxiv.org/content/early/2017/12/11/231753.full
AB  - Identifying the maximal element (max, argmax) in a set is a core computational element in inference, decision making, optimization, action selection, consensus, and foraging. We show that running sequentially through a list of N fluctuating items takes N log(N) time to accurately find the max, which is prohibitively slow for large N. The power of computation in the brain is ascribed to its parallelism, yet it is theoretically unclear whether, even on an elemental task like the max operation, leaky and noisy neurons can perform a distributed computation that cuts the required time by a factor of N, a benchmark for parallel computation. We show that conventional winner-take-all circuit models fail to realize the parallelism benchmark and, worse, in the presence of noise fail altogether to produce a winner when N is large. If, however, neurons are equipped with a second nonlinearity, so that weakly active neurons cannot contribute inhibition to the circuit, the network matches the accuracy of the serial strategy but does so N times faster, partially self-adjusting its integration time to task difficulty and the number of options and saturating the parallelism benchmark without parameter fine-tuning. Finally, in the regime of few choices (small N), the same circuit predicts Hick's law of decision making; thus Hick's law behavior is a symptom of efficient parallel computation. Our work shows that distributed computation saturating the parallelism benchmark is possible in networks of noisy and finite-memory neurons.