Abstract
Navigating across light gradients is essential for the survival of many animals. However, we still have a poor understanding of the algorithms that underlie such behaviors. Here we develop a novel phototaxis assay for Drosophila larvae in which light intensity is always spatially uniform but updates depending on the location of the animal in the arena. Even though larvae can only rely on temporal cues in this closed-loop setup, we find that they can still locate preferred areas of low light intensity. Detailed analysis of their behavior reveals that larvae turn more frequently and that their heading angle changes increase when they experience luminance increments over extended periods of time. We suggest that temporal integration of luminance change during runs is an important – and so far largely unexplored – element of phototaxis.
Summary statement Using a novel closed-loop behavioral assay, we show that Drosophila larvae can navigate light gradients exclusively using temporal cues. Analyzing and modeling their behavior in detail, we propose that larvae achieve this by integrating luminance change during runs.
Introduction
Many animals have evolved behaviors to find favorable locations in complex natural environments. Such behaviors include chemotaxis to approach or avoid chemical stimuli; thermotaxis to find cooler or warmer regions; and phototaxis to approach or avoid light (Luo et al., 2010; Gomez-Marin et al., 2011; Kane et al., 2013; Gomez-Marin and Louis, 2014; Gepner et al., 2015; Klein et al., 2015).
Drosophila larvae are negatively phototactic, preferring darker regions (Sawin et al., 1994). To navigate, larvae alternate between runs and turns. During runs, larvae move relatively straight. During turns, they slow down and perform head-casts (Lahiri et al., 2011) to sample their environment for navigational decisions (Gomez-Marin and Louis, 2012; Kane et al., 2013; Humberg et al., 2018; Humberg and Sprecher, 2018). However, it is unclear whether such local spatial sampling is necessary to perform phototaxis. Zebrafish larvae, for example, can perform phototaxis even when light intensity is uniform across space but changes over time with the animal’s position (Chen and Engert, 2014). In a purely temporal phototaxis assay, spatial information is absent, so navigation must depend on other cues.
Previous work indicates that as brightness increases, Drosophila larvae make shorter runs and bigger turns (Kane et al., 2013; Humberg et al., 2018). This is reminiscent of chemotactic strategies, where decreasing concentrations of a favorable odorant increase the likelihood of turning (Gomez-Marin et al., 2011). While it has been shown that temporal sampling of olfactory cues is sufficient to guide chemotaxis (Schulze et al., 2015), it remains unclear whether larvae can use a purely temporal strategy for visual navigation.
Using a virtual landscape in which luminance is always spatially uniform but depends on the location of the animal in the arena, we confirm that larvae can perform phototaxis by modulating run-length and heading angle. Our data indicate that larvae achieve this by integrating luminance change during runs.
Materials and methods
Experimental setup
All experiments were performed using wild-type 2nd-instar Drosophila melanogaster larvae collected 3–4 days after egg-laying. This age was chosen to ensure consistent phototactic behavior because older larvae might change their light preference (Sawin-McCormack et al., 1995). Larvae were raised on agarose plates with grape juice and yeast paste, with a 12h/12h light-dark cycle at 22°C and 60% humidity. Before experiments, larvae were washed in droplets of deionized water. All experiments were carried out between 2 pm and 7 pm to avoid potential circadian effects (Mazzoni et al., 2005).
Larvae were placed in the center of a custom-made circular acrylic dish (6 cm radius) filled with a thin layer of freshly made 2% agarose (Fig. 1A). As previously described (Bahl and Engert, 2020), spatially uniform whole-field illumination was presented via a projector (60 Hz, AAXA P300 Pico Projector) from below. Light intensity was measured with an iPhone 11 Pro (Lux Light Meter Pro app) at a distance of about 5 cm, giving values from 0 Lux (dark) to 410 Lux (white). For tracking, the scene was illuminated using infrared LED panels (940 nm panel, Cop Security). A high-speed camera (90 Hz, Grasshopper3-NIR, FLIR Systems) with an infrared filter (R72, Hoya) was used to track the larva’s centroid position in real time. Eight independent arenas were operated in parallel, making the system medium- to high-throughput and relatively cost-effective.
Three virtual light intensity landscapes were used: a “Valley” stimulus, a “Ramp” stimulus, and a “Constant” stimulus. For the “Valley” and “Ramp” stimuli, the spatially uniform light intensity (λ) was updated in closed-loop according to λ = 410·(r − 3)²/9 (Fig. 1B) and a monotonically increasing profile (Fig. S1C), respectively, where r is the larva’s radial distance (in cm) to the center of the arena. Both profiles ensure that luminance levels near the wall are high, decreasing the edge preference of larvae and reducing boundary effects. For the “Constant” stimulus, luminance remained at mid-level gray (λ = 0.5 in normalized units) regardless of the larva’s position. The position of the animal was defined by its centroid, rather than its head or tail. This choice significantly simplified the experimental procedure and is justified because larvae are small relative to the slowly changing, always spatially uniform virtual luminance landscapes. The latency between the detection of the animal’s position and the closed-loop update of the visual stimulus is estimated to be ∼50 ms (tracking delay ∼10 ms + ∼40 ms delay for the communication between computer CPU, GPU, and projector), which is negligible given the slow crawling speed of larvae.
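The closed-loop update for the “Valley” stimulus reduces to a pure function of the larva’s radial position. A minimal sketch (function and constant names are ours, not taken from the authors’ code):

```python
import math

R_VALLEY = 3.0      # radius of the luminance minimum (cm)
LAMBDA_MAX = 410.0  # maximum measured intensity (Lux)

def valley_luminance(x, y):
    """Spatially uniform whole-field luminance for the "Valley" stimulus,
    given the larva's centroid position (cm, arena-centered coordinates)."""
    r = math.hypot(x, y)  # radial distance to the arena center
    return LAMBDA_MAX * (r - R_VALLEY) ** 2 / 9.0
```

The profile is maximal (410 Lux) at the center and at the 6 cm wall and zero on the r = 3 cm ring, matching the boundary-effect rationale above.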
Each experiment lasted for 60 min. For all stimuli, animals were presented with constant gray during the first 15 min, allowing them to distribute in the arena.
Data analysis and statistics
All data analysis was performed using custom-written Python code on the 45 min period after acclimatization. To avoid tracking problems and minimize boundary effects, data were excluded when larvae were within 0.1 cm of the edge.
The circular arena was binned into three concentric regions depending on the radius r: r = 0–2 cm, r = 2–4 cm, and r = 4–6 cm. These regions were named the “Bright” center, the “Dark” ring, and the “Bright” ring for the “Valley” stimulus (Fig. 1B) and the “Dark” center, the “Gray” ring, and the “Bright” ring for the “Ramp” stimulus (Fig. S1C). Animal speed was computed by interpolating the trajectory to 1 s bins and then taking the average distance between consecutive points (Fig. 1E).
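The speed computation described above can be sketched with NumPy (the interpolation grid and function name are assumptions):

```python
import numpy as np

def average_speed(t, x, y):
    """Average crawling speed (cm/s): interpolate the trajectory to
    1 s bins, then average the distance between consecutive points.
    t in seconds, x/y in cm."""
    ts = np.arange(t[0], t[-1], 1.0)  # resample to 1 s bins
    xi = np.interp(ts, t, x)
    yi = np.interp(ts, t, y)
    steps = np.hypot(np.diff(xi), np.diff(yi))  # cm traveled per 1 s bin
    return steps.mean()
```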
Turn events were detected using a pose estimation toolbox, DeepPoseKit (Graving et al., 2019). 100 frames were manually annotated (head, centroid, and tail) to train the network, which was then used to predict animal posture across all frames from all animals. Body curvature was defined as the angle between the tail-to-centroid vector and the centroid-to-head vector (Fig. 2A). In a few frames, the algorithm detected the head and the tail at the same location, leading to the transient detection of large curvatures. These events were discarded by low-pass filtering traces with a Butterworth filter (cutoff frequency: 3 Hz). Turn events were defined as a local curvature peak above 30° and needed to be separated from the previous such event by at least 2 s in time and 0.2 cm in space.
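The turn-detection pipeline can be sketched as follows, assuming the 90 Hz camera rate of the setup; for brevity, the 0.2 cm spatial-separation criterion is omitted and SciPy’s peak finder stands in for whatever the authors used:

```python
import numpy as np
from scipy.signal import butter, filtfilt, find_peaks

FS = 90.0  # camera frame rate (Hz)

def detect_turns(curvature_deg):
    """Detect turn events from a body-curvature trace (degrees).
    Low-pass filter at 3 Hz (removes transient head/tail-swap
    artifacts), then find curvature peaks above 30 deg that are
    separated by at least 2 s."""
    b, a = butter(2, 3.0 / (FS / 2.0))  # 3 Hz Butterworth low-pass
    smooth = filtfilt(b, a, curvature_deg)
    peaks, _ = find_peaks(smooth, height=30.0, distance=int(2 * FS))
    return peaks  # frame indices of turn events
```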
Turn angles were defined as the angle between the location in the arena 2 s before a turn event and 2 s after. Run-length was defined as the time between consecutive turn events. Each turn event was labeled as “Dark” or “Bright”, based on the luminance equations and binning described above (Dark: less than 45 Lux, Bright: otherwise), and as “Darkening” or “Brightening” based on the change in luminance since the last turn event (Fig. 2F,G). As turn events are typically short and spatially confined, by stimulus design, the whole-field luminance change during such events is nearly zero. Hence, the luminance change during turns was defined as the brightness difference 1 s before and 1 s after the event (Fig. 2D). The luminance change during runs was defined as the difference in luminance between two consecutive turn events (Fig. 2D).
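The two luminance-change measures can be computed directly from a luminance trace and the detected turn frames. A simplified illustration (function name is ours):

```python
import numpy as np

FS = 90  # camera frames per second

def luminance_changes(lum, turn_frames):
    """For each turn event, the luminance change during the turn
    (1 s before vs. 1 s after the event) and, for each run, the
    luminance change between consecutive turn events."""
    turn_change = [lum[f + FS] - lum[f - FS] for f in turn_frames]
    run_change = [lum[turn_frames[i]] - lum[turn_frames[i - 1]]
                  for i in range(1, len(turn_frames))]
    return turn_change, run_change
```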
Notably, as a control for the spatial arrangement of our stimulus and boundary effects, the same binning, naming conventions, and analysis methods were also used for the “Constant” stimulus even though the arena remained constantly gray for those animals. For example, control animals that spend time in the “Dark” ring (gray open circles in Fig. 1D) actually perceive constant “Gray” during the entire experiment.
Two-sample t-tests were used for pairwise comparisons between the experimental and control data. Paired-sample t-tests were used for pairwise comparisons within groups. Larvae were discarded if they spent more than 99% of the experimental time in a single region or if their speed was zero. All data analysis was done automatically in the same way for the experimental and control groups.
Modeling
Simulations (Fig. 3 and Fig. S3) were custom-written in Python 3.7, using the high-performance Python compiler numba. Simulations were performed using Euler’s method with a timestep of dt = 0.01 s. Model larvae were initialized with a random position and orientation. At each time step, larvae stochastically chose one of two possible actions: they could either move forward, with a speed of 0.04 cm/s (parameter taken from the experiment, Fig. 1E), or turn. The baseline probability of turning per time step was p = dt/T ≈ 0.00067, directly computed from the experiment to match the measured average run-length of T = 15 s (Fig. 2E,F). When making turns, turn angles were drawn from a Gaussian distribution with a baseline standard deviation of 32°, matching the experimental value (Fig. 2C,E,F). If model larvae reached the edge, a new random direction vector was chosen, preventing them from leaving the arena.
In correspondence with our experimental findings (Fig. 2E,F), the model was equipped with four additional navigational rules (Fig. 3A).
“Rule 1”: When the environment is “Dark” (luminance smaller than 45 Lux), turn angles decrease. When it is “Bright” (luminance larger than 45 Lux), turn angles increase.
“Rule 2”: When the environment is “Dark” (luminance smaller than 45 Lux), run-lengths increase. When it is “Bright” (luminance larger than 45 Lux), run-lengths decrease.
“Rule 3”: When the environment is “Darkening” (change since previous turn smaller than zero), turn angles decrease. When it is “Brightening’’ (change since previous turn larger than zero), turn angles increase.
“Rule 4”: When the environment is “Darkening” (change since previous turn smaller than zero), run-lengths increase. When it is “Brightening’’ (change since previous turn larger than zero), run-lengths decrease.
Changes in turn angle were accomplished by adjusting the standard deviation of the Gaussian distribution by ±30%, the effect size observed in our experiments (Fig. 2E,F). We modulated run-length (T) by scaling it by ±30%, thereby modulating the probability of turning (p = dt / T). When combinations of those rules were tested (Fig. 3A), their effects were concatenated.
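Putting the baseline model and rules 3 and 4 together, a simplified Python sketch (parameter values from the Methods; the wall-bounce details and all function names are assumptions, not the authors’ exact implementation):

```python
import numpy as np

DT, SPEED, ARENA_R = 0.01, 0.04, 6.0  # timestep (s), speed (cm/s), radius (cm)
BASE_T, BASE_SIGMA = 15.0, 32.0       # mean run-length (s), turn-angle SD (deg)

def valley(x, y):
    """"Valley" luminance profile from the Methods."""
    r = np.hypot(x, y)
    return 410.0 * (r - 3.0) ** 2 / 9.0

def simulate(duration=2700.0, seed=0):
    """Random-walk larva implementing rules 3 and 4: brightening since
    the last turn shortens runs (T x 0.7) and widens turns (SD x 1.3);
    darkening does the opposite."""
    rng = np.random.default_rng(seed)
    x, y = rng.uniform(-1, 1, 2)            # start near the center
    theta = rng.uniform(0, 2 * np.pi)
    lum_at_last_turn = valley(x, y)
    traj = np.empty((int(duration / DT), 2))
    for i in range(len(traj)):
        brightening = valley(x, y) > lum_at_last_turn
        t_mod = 0.7 if brightening else 1.3                  # rule 4
        sigma = BASE_SIGMA * (1.3 if brightening else 0.7)   # rule 3
        if rng.random() < DT / (BASE_T * t_mod):             # p = dt / T
            theta += np.deg2rad(rng.normal(0.0, sigma))
            lum_at_last_turn = valley(x, y)
        x += SPEED * np.cos(theta) * DT
        y += SPEED * np.sin(theta) * DT
        r = np.hypot(x, y)
        if r >= ARENA_R:  # at the wall: stay inside, pick a new heading
            x, y = x / r * ARENA_R * 0.999, y / r * ARENA_R * 0.999
            theta = rng.uniform(0, 2 * np.pi)
        traj[i] = x, y
    return traj
```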
A performance index (PI) (Fig. 3A) was used to characterize how well animals or models performed temporal phototaxis. The metric was based on the difference between the experimental and control group for the fraction of time spent in the “Dark” ring. To compute this value, bootstrapping was used to average 1000 samples of randomly chosen differences between experimental and control conditions.
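The bootstrapped performance index can be sketched as follows (a hypothetical helper mirroring the description above):

```python
import numpy as np

def performance_index(exp_frac, ctrl_frac, n_boot=1000, seed=0):
    """Bootstrapped performance index (PI): average over n_boot samples
    of randomly drawn differences in "Dark"-ring occupancy between
    experimental and control animals."""
    rng = np.random.default_rng(seed)
    exp_frac, ctrl_frac = np.asarray(exp_frac), np.asarray(ctrl_frac)
    diffs = (rng.choice(exp_frac, size=n_boot) -
             rng.choice(ctrl_frac, size=n_boot))
    return diffs.mean()
```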
For the parameter grid search (Fig. 3A), the absolute turn angle and the run-length were varied systematically. To this end, the respective baseline values, taken from the experiment (Fig. 2E,F), were varied over an order of magnitude by scaling them with two multipliers (a run-length multiplier and a turn angle multiplier).
Results
Fly larvae can navigate a virtual luminance gradient
We first asked whether fly larvae can perform temporal phototaxis, i.e. navigate a virtual light landscape lacking spatial information. We placed individual animals in an agarose-filled arena, allowed them to freely explore, and tracked their position in real time (Fig. 1A). We presented spatially uniform light from below, with luminance levels following a quadratic dependence on the larva’s distance from the center (“Valley” stimulus, Fig. 1B). To control for naive location preference, we presented position-independent gray luminance (“Constant” stimulus). For both groups, we analyzed how animals distributed across three concentric regions: the “Bright” center, the “Dark” ring, and the “Bright” ring.
Larvae that navigated the “Valley” stimulus spent a significantly higher fraction of time in the “Dark” ring (Fig. 1C) than those that navigated the “Constant” stimulus (Figs. 1D, S1A). This behavior was most pronounced between minutes 10 and 40 of the experiment (Fig. S1B). To verify that this behavior was not an artifact of our specific stimulus design, we also tested a gradient in which brightness monotonically “ramps” with radial distance (Fig. S1C) and observed that here, too, larvae navigated to dark regions (Fig. S1D,E).
Because larvae lacked spatial luminance cues in our setup, it was unclear which behavioral algorithms they employ. One basic, yet sufficient, algorithm would be to reduce movement in darker regions. However, speed was independent of luminance (Fig. 1E, S1F), suggesting that larvae employ more complex navigational strategies.
We conclude that Drosophila larvae are capable of performing phototaxis in the absence of spatial information and that this behavior cannot be explained by a simple luminance-dependent modulation of crawling speed.
Larval temporal phototaxis depends on luminance change over time
In spatially differentiated light landscapes, fly larvae make navigational decisions by sampling luminance differences during head-casts. However, in our setup, larvae experience negligible brightness fluctuations during head-casts. Instead, they might modulate the magnitude and/or frequency of turns as a function of luminance. To explore this possibility, we segmented trajectories into runs and turns. To this end, we applied a freely available deep learning-based package, DeepPoseKit (Graving et al., 2019), to extract the larvae’s head, centroid, and tail positions from the experimental videos (Fig. 2A and Video S1). From there, we calculated the animal’s body curvature to identify head-casting events and quantify turn angles and run-lengths (Fig. 2A–C). As mentioned before, luminance changes during the spatially confined turns were much smaller than during runs (Fig. 2D).
To quantify the effect of luminance on heading angles and run-lengths, we looked at how these parameters varied with the larva’s position. During the “Valley” but not the “Constant” stimulus, turns in the “Dark” region led to smaller heading angle changes than in the “Bright” regions (Fig. 2E). Similarly, runs before a turn in the “Dark” region of the “Valley” stimulus were slightly longer compared to runs ending in the “Bright” region. However, this also occurred with the “Constant” stimulus, suggesting that the effect might not arise from a visuomotor transformation.
Next, we explored whether luminance history affects behavior. As run-lengths were highly variable, ranging from ∼3 s to ∼40 s (Fig. 2C), we focused our analysis on the luminance change between consecutive turns. We classified turns by whether larvae experienced a decrease or increase in whole-field luminance during the preceding run. We found that heading angle changes were smaller and that run-lengths were longer when larvae had experienced a brightness decrease compared to an increase (Fig. 2F). We did not observe these effects in control animals.
To further quantify the effects of luminance and luminance change on heading angle change, we performed regression analysis directly on turns (Fig. S2). While turn angles scale with luminance, they do so more strongly with luminance change.
These observations led us to hypothesize that larvae might integrate information about the change in luminance during runs and that this integration might span several seconds. To obtain an idea about time-scales, we computed a turn event-triggered luminance average (Fig. 2G). We observed that, on average, turns performed in the “Valley” stimulus are preceded by an extended period of >20 seconds of brightening, suggesting that long-term luminance dynamics drive turns.
In summary, our detailed analysis of turns and runs confirms that, first, luminance levels modulate heading angle change and, second, changes in luminance prior to turns modulate heading angle change as well as run-length.
A simple algorithmic model can explain larval temporal phototaxis
We next wanted to test whether the identified behavioral features are sufficient to explain larval temporal phototaxis. Based on our experimental findings (Fig. 2), we propose four rules as navigational strategies (Fig. 3A). For rules 1 and 2, the instantaneous luminance modulates the heading angle change and run-length, respectively. By contrast, for rules 3 and 4, the luminance change since the last turn modulates the heading angle changes and run-lengths.
To test these navigational rules, we simulated larvae as particles that could either move straight or make turns while exploring our experimental landscapes (“Valley”, “Constant”). To compare the performance of different models, we calculated a phototaxis index (the difference in time spent in the “Dark” ring between experimental and control groups, Fig. 3A). For all permutations of our rules, we explored a set of multipliers for the heading angle change and run-length, with a multiplier of one corresponding to the experimental averages (Fig. 2E,F). This allowed us both to assess the robustness of our model to parameter choice and to observe how performance behaves as a function of each parameter. As expected, with no active rules, the larvae distributed comparably in the “Valley” and “Constant” stimuli. Activating rule 1 or 2 did not improve performance, suggesting that modulation of behavior based on instantaneous luminance is insufficient to perform temporal phototaxis. Activating rule 3 or 4, phototaxis emerged for small run-length and large turn angle multipliers. However, for multipliers consistent with the observed data, the resulting phototaxis index was weaker than in the experimental data. Only when rules 3 and 4 were combined did phototaxis performance match the experimental values. Combining all four rules yielded minimal further improvement. Therefore, for further analysis, we focused on a combination of rules 3 and 4, with both multipliers set to one.
Simulated larvae navigating the “Valley” stimulus moved towards darker regions (Fig. 3B). As a control for our modeling approach, we applied the same analysis to model data as used for experimental data. Like real larvae (Fig. 1D,E), simulated larvae navigating the “Valley” stimulus spent more time in the “Dark” ring than larvae navigating the “Constant” stimulus (Fig. 3C) without modulating speed (Fig. 3D). Furthermore, distributions of turn angle changes, run-lengths, and luminance changes were comparable to experimental data (compare Figs. 2C,D and 3E,F). When we examined the effects of instantaneous luminance and luminance change on turn angle amplitude and run-length (Fig. 3G,H), we observed similar patterns as in the experimental data (Fig. 2E,F). It is particularly notable that, even though we only used rules that depend on luminance changes (rules 3 and 4), a significant dependency on instantaneous luminance still arose. This suggests that this observation is an emergent property resulting from how larvae respond to luminance changes. As found in experiments (Fig. 2G), turns are preceded by long stretches of increasing brightness (Fig. 3I), supporting our hypothesis that larvae integrate luminance change over several seconds. Finally, to verify that our model generalizes to other visual stimulus patterns, we simulated larvae exploring our “Ramp” stimulus (Fig. S1C) and observed phototaxis performance comparable to that of real larvae (Fig. S3A,B).
In summary, after implementing our experimentally observed navigational rules in a simple computational model, we propose that the most critical element of larval temporal phototaxis is the ability to integrate luminance change over extended time periods. Modulating turn angle amplitude and run-length based on such measurement is sufficient to perform temporal phototaxis.
Discussion
Closed-loop systems are powerful tools to identify various strategies of an animal’s sensorimotor transformation. They have been employed in many animal models including adult and larval Drosophila (Bahl et al., 2013; Tadres and Louis, 2019), larval zebrafish (Chen and Engert, 2014; Bahl and Engert, 2020), and C. elegans (Leifer et al., 2011; Kocabas et al., 2012). Using a closed-loop behavioral assay, we show that Drosophila larvae find the darker regions of a virtual luminance gradient that lacks any spatial contrast cues. Temporal phototaxis behavioral algorithms have already been dissected in open-loop configurations, where stimuli are decoupled from an animal’s actions. Following a global luminance increase, larvae modify both their heading angle magnitude and their run-length (Kane et al., 2013; Gepner et al., 2015), in agreement with our findings. We were able to demonstrate that these navigational strategies are in fact sufficient for phototactic navigation. Given that brightness fluctuations in our assay are slow and negligibly small during turns, we suggest that animals integrate luminance change over time to make decisions about the strength and timing of turns.
Previous work has shown that larvae can navigate olfactory or thermal gradients using only temporal cues (Luo et al., 2010; Schulze et al., 2015). Here, we demonstrate temporal taxis in a visual gradient, enabling future exploration of the shared computational principles and neural pathways across these sensory modalities. Compared to previously used systems, our setup is relatively simple, cost-effective, and operates at medium to high throughput. It employs a neural network-based framework for larval tracking, providing an alternative to custom-written posture estimation code.
By demonstrating that Drosophila larvae can perform temporal phototaxis, we have improved our understanding of how they process visual information during navigation. However, understanding phototaxis requires studying both the temporal and spatial computations (Humberg et al., 2018). We have presented a strategy to isolate the temporal component. Studying the spatial component is technically more challenging. Even when navigating spatially differentiated landscapes, larvae might still use temporal comparisons of light intensity during head-casts (Kane et al., 2013; Humberg et al., 2018). Pure spatial phototaxis has been studied in zebrafish larvae (Huang et al., 2013) by locking a sharp contrast edge to the center of a freely moving animal’s head. Testing such stimuli in Drosophila larvae will require more precise real-time position, orientation, and posture measurements, but experimental results could be used to construct a spatial phototaxis model which could then be combined with our proposed temporal phototaxis model.
Author contributions
All authors contributed equally to the design of the project. A.B. built the behavioral setup. M.Z. performed experiments. M.Z. and A.B. analyzed data. M.Z., K.J.H., K.V., and A.B. wrote the manuscript. K.J.H., K.V., and A.B. supervised the work.
Competing interests
The authors declare no competing interests.
Funding
K.J.H. was funded by the Harvard Mind Brain Behavior Initiative. K.V. received funding from a German Science Foundation Research Fellowship #345729665. A.B. was supported by the Human Frontier Science Program Long-Term Fellowship LT000626/2016.
Data availability
The data that support the findings of this study are available from the corresponding author upon request. Source code for data analysis and modeling are available on GitHub (https://github.com/arminbahl/drosophila_phototaxis_paper).
Supplementary figures
Acknowledgments
We thank L. Hernandez-Nunez for discussions and reading through the manuscript. We are grateful to F. Engert and A. Samuel and their lab members for discussions and general support.