Abstract
Bees' remarkable visual learning abilities make them ideal subjects for studying active information acquisition and representation. Here, we develop a biologically inspired model to examine how flight behaviours during visual scanning shape neural representation in the insect brain, exploring the interplay between scanning behaviour, neural connectivity, and visual encoding efficiency. Incorporating non-associative learning (adaptive changes that occur without reinforcement) and exposing the model to sequences of natural images during scanning, we obtain results that closely match neurobiological observations. Active scanning and non-associative learning dynamically shape neural activity, optimising information flow and representation. Lobula neurons, crucial for visual integration, self-organise into orientation-selective cells with sparse, decorrelated responses to orthogonal bar movements. They encode a range of orientations, biased by input speed and contrast, suggesting co-evolution with scanning behaviour to enhance visual representation and support efficient coding. To assess the functional significance of this spatiotemporal code, we extend the model with circuitry analogous to the mushroom body, a region linked to associative learning. The extended model performs robustly in pattern recognition, suggesting that a similar encoding mechanism operates in insects. Integrating behavioural, neurobiological, and computational insights, this study shows how spatiotemporal coding in the lobula efficiently compresses visual features, offering broader insights into active vision strategies and bio-inspired automation.
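The non-associative learning referred to above can be illustrated with a classic unsupervised plasticity rule. The sketch below uses Oja's rule (Hebbian growth with implicit weight normalisation) on random image patches; the random patches, patch size, and learning rate are stand-ins chosen here for illustration, not the authors' model or data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for sequential visual input during scanning: random patches.
# (The paper drives its model with natural images; none are bundled here.)
patches = rng.standard_normal((1000, 64))          # 1000 patches of 8x8 pixels
patches -= patches.mean(axis=1, keepdims=True)     # remove mean luminance

w = rng.standard_normal(64) * 0.1                  # synaptic weight vector
eta = 0.01                                         # learning rate

# Oja's rule: purely activity-driven (no reinforcement signal), so it is
# a simple example of non-associative plasticity.
for x in patches:
    y = w @ x                                      # neuron's linear response
    w += eta * y * (x - y * w)                     # dw = eta * y * (x - y*w)

# The subtractive term bounds the weights: ||w|| converges toward 1.
final_norm = np.linalg.norm(w)
```

With structured input such as natural image patches, this kind of rule lets the weight vector align with dominant input correlations, which is one route to the self-organised orientation selectivity described in the abstract.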
Competing Interest Statement
The authors have declared no competing interest.
Footnotes
This version of the manuscript has been revised to update the following:
- Model generalisation and performance: expanded discussion clarifies that the model does not define an optimal scanning strategy for bees.
- Neural representation and efficiency: Figures 3, 5, and 6F updated with new control experiments on scanning effects; additional analysis of sparsity and decorrelation added.
- Methodology and statistical validation: Methods now detail simulations, statistical tests, and model variability; statistical significance is explicitly reported.
- Circuit sufficiency and minimality: claims about the minimal circuit refined to clarify functional architecture rather than absolute minimality; alternative learning rules discussed.
- Figure and manuscript updates: three new figures added, two revised, and additional content improves clarity and rigour.
These revisions strengthen the manuscript's clarity, depth, and impact.