High-throughput simulations indicate feasibility of navigation by familiarity with a local sensor such as scorpion pectines

Scorpions have arguably the most elaborate "tongues" on the planet: two paired ventral combs, called pectines, that are covered in thousands of chemo-tactile peg sensilla and that sweep the ground as the animal walks. Males use their pectines to detect female pheromones during the mating season, but females have pectines too: what additional purpose do the pectines serve, and why are there so many pegs? We take a computational approach to test the hypothesis that scorpions use their pectines to navigate by chemo-textural familiarity, in a manner analogous to the visual navigation-by-scene-familiarity hypothesis for hymenopteran insects. We have developed a general model of navigation by familiarity with a local sensor and have chosen a range of plausible parameters for it based on the existing behavioral, physiological, morphological, and neurological understanding of the pectines. Similarly, we constructed virtual environments based on the scorpion's native sand habitat. Using a novel methodology of highly parallel high-throughput simulations, we comprehensively tested 2160 combinations of sensory and environmental properties in each of 24 different situations, for a total of 51,840 trials. Our results show that navigation by familiarity with a local pectine-like sensor is feasible. Further, they suggest a subtle interplay between "complexity" and "continuity" in navigation by familiarity and give the surprising result that more complexity (more detail and more information) is not always better for navigational performance.

Author summary

Scorpions' pectines are intricate taste-and-touch sensory appendages that brush the ground as the animal walks. Pectines are involved in detecting pheromones, but their exquisite complexity (a pair of pectines can have around 100,000 sensory neurons) suggests that they do more.
One hypothesis is “Navigation by Scene Familiarity,” which explains how bees and ants use their compound eyes to navigate home: the insect visually scans side to side as it moves, compares what it sees to scenes learned along a training path, and moves in the direction that looks most familiar. We propose that the scorpions’ pectines can be used to navigate similarly: instead of looking around, they sweep side to side sensing local chemical and textural information. We crafted simulated scorpions based on current understanding of the pectines and tested their navigational performance in virtual versions of the animals’ sandy habitat. Using a supercomputer, we varied nine environmental, sensory, and situational properties and ran a total of 51,840 trials of simulated navigation. We showed that navigation by familiarity with a local sensor like the pectines is feasible. Surprisingly, we also found that having a more detailed landscape and/or a more sensitive sensor is not always optimal.

Scorpions present at least two unanswered scientific mysteries: the selection pressure that maintains their pectines (two ornate, sensitive, and energetically expensive sensory organs on their underbellies) and the mechanism by which they return to their burrows after emerging to hunt. We propose that these two questions are connected and that the pectines are used to navigate by a process akin to familiarity navigation, as has been previously suggested for hymenopteran insects [1].

Each pecten carries a row of ground-facing teeth (Figure 1B). The distal face of each tooth contains a dense array of minute, peg-shaped sensilla called "pegs" (Figure 2). In some species, such as male Smeringerus mesaensis (formerly Paruroctonus mesaensis), the number of pegs per tooth reaches 1600, for a total of over 100,000 across both pectines [4,19]. The peg slits are oriented perpendicular to the travel of the ground as the teeth fan out when the pectines brush the surface.

Figure 2. Chemo-tactile peg sensilla. A: An SEM of the five distal teeth of a male P. utahensis pecten shows the patches of peg sensilla that coat the ground-facing surfaces of each tooth (photo by E. Knowlton). The superimposed circles on the penultimate tooth show the approximate resolution of chemical discrimination based on peg response patterns and pecten sweep kinetics (see text). B: An expanded view of a patch of peg sensilla from male Smeringerus mesaensis shows the consistent orientation of their pore tips perpendicular to the relative movement of the ground (arrow indicates direction of ground movement).
Each peg is supplied with about a dozen sensory neurons. All but one of these neurons have unbranched dendritic outer segments that extend into a fluid-filled chamber just proximal to the peg's slit-shaped terminal pore [7,8] and can best be classified as gustatory neurons [20]. The remaining neuron terminates in a tubular body near the peg base, which is a hallmark of arthropod mechanoreceptors [8,21]. The peg neurons course along the pecten spine and through the pectinal nerve to the subesophageal ganglion (SEG) in the posterior brain [10,11,19,22,23]. A cross-section of the pectinal neuropil in the SEG shows an ordered, topographical arrangement. Integrating information simultaneously from multiple pegs yields precise information that is in line with the short duration of a typical pectinal sweep [14]. In particular, it was found that a minimum of eight pegs working in parallel were required to distinguish citric acid from ethanol [15]. The superimposed circles on the pectinal tooth in Figure 2A give a sense of this putative spatial resolution.

Navigation by familiarity

The Navigation by Scene Familiarity Hypothesis is a provocative and elegant idea that has been proposed in studies of the visual navigation of hymenopteran insects. It suggests that the dense arrays of ommatidia in insect compound eyes are used to transform scenes from the insect's visual world into spatial matrices of information. To navigate back to a hive or food source, the insect simply moves in the direction that appears most familiar.

Navigation by familiarity with a local sensor

In this paper, we consider the hypothesis that the dense fields of peg sensilla on pectines are analogous to the tightly packed ommatidia in compound eyes, detecting matrices of chemical and textural information that are used for navigation by familiarity.
While our hypothesis is inspired by visual familiarity navigation, the pectines are a sensor that only senses the local environment, which is critically different from a non-local sensor like eyes that gather information from both near and far.

This work uses computer simulations to establish whether and when it is possible to navigate by familiarity using a local sensor. Throughout this manuscript, we will use NFLS as our acronym for Navigation by Familiarity with a Local Sensor. We have developed a general computer model of NFLS and use a broad base of morphological, physiological, and behavioral information to configure a simulated sensor that models the pectines. Previous work has suggested a subtle interplay between the environmental and sensory conditions needed to enable familiarity navigation [18,37,52]. To explore this interplay, our model makes the following simplifying assumptions:

1. The environment remains static between training and navigation. This is plausible because scorpions are nocturnal and the desert winds are calm at night, especially on the leeward side of the dunes (pers. obs.; [53]).

2. The shape of the agent's sensor is a constant rectangle; in scorpion terms, the two pectines move together. Although experimental data indicate that the pectines move independently (see Figure 3 in S1 Appendix), the relevant consideration for NFLS is only whether both pectines achieve the same orientation within a short amount of time. This requirement could be satisfied by a wide variety of pecten movement patterns, but we chose to model the simplest.

3. Just as an eye would experience nothing but pure darkness or blinding white outside of a certain range of light intensity, we assume that the agent's experience of chemical concentration or mechanical pressure clips to some range.

Sensing the environment

The landscape in which the agent navigates is represented by a two-dimensional 8-bit HSB image. The agent's sensor is represented as a grid of large sensor "pixels" of equal rectangular shape. The dimensions of the sensor pixels are given in terms of landscape pixels. A pecten-like sensor could, for example, be a 40 × 1 grid of sensor pixels that each represents a tooth and corresponds to a 6 × 12 area on the landscape. The choice to model the sensor as a grid corresponds to the grid-like physiology and neurology of the pectines [10,11,15].

Consistently with the familiarity literature, we call the grid of sensor pixels resulting from the agent's using its sensor to observe the landscape a "glimpse." Despite the vision-specific connotations of the word in common use, in this paper a "glimpse" consists of chemo-textural, not visual, information. When referring to visual information, we use the word "scene." We denote the sensor matrix glimpsed by the agent at position (x, y) and orientation θ by Glimpse(x, y, θ).

To compute Glimpse(x, y, θ), we first crop the landscape to the rectangle representing the sensor centered on (x, y) and rotate to the agent's orientation given by θ. (Rotations use nearest-neighbor interpolation.) The resulting image represents the relevant portion of the landscape that is directly under the sensor. The grid of sensor pixels is then superimposed on the cropped portion of the landscape. The value of each sensor pixel is computed from the block of landscape pixels it covers: the brightness of the sensor pixel is set to the mean brightness of the block, the chemical identity of the sensor pixel is set to the chemical that has the greatest total concentration across the block, and the concentration is set to the total concentration of the chosen chemical.

(Figure 3 caption, fragment: ... shows the resulting glimpses, and (2d) shows the resulting familiarity "compass." See section 4 in S1 Appendix for the parameters of the shown agent.)
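The glimpse computation described above can be sketched in Python with NumPy. This is a minimal illustration, not the authors' implementation: it handles only the brightness (texture) channel, the function name glimpse and its parameters are our own, and the pooling of chemical identity and concentration is omitted.

```python
import numpy as np

def glimpse(landscape, x, y, theta, sensor_rows, sensor_cols, px_h, px_w):
    """Sample a grid of sensor pixels from a 2-D brightness landscape.

    Hypothetical sketch of Glimpse(x, y, theta): each sensor pixel covers
    a (px_h x px_w) block of landscape pixels; the footprint is rotated to
    the agent's heading using nearest-neighbor lookup, and each block's
    mean brightness becomes the corresponding sensor-pixel value.
    """
    H = sensor_rows * px_h  # footprint height in landscape pixels
    W = sensor_cols * px_w  # footprint width in landscape pixels
    c, s = np.cos(theta), np.sin(theta)
    # Offsets of every footprint pixel relative to the sensor center.
    dy, dx = np.mgrid[0:H, 0:W]
    dy = dy - (H - 1) / 2.0
    dx = dx - (W - 1) / 2.0
    # Rotate the offsets by theta, translate to (x, y), then round to the
    # nearest landscape pixel (nearest-neighbor interpolation).
    ly = np.rint(y + dx * s + dy * c).astype(int)
    lx = np.rint(x + dx * c - dy * s).astype(int)
    ly = np.clip(ly, 0, landscape.shape[0] - 1)
    lx = np.clip(lx, 0, landscape.shape[1] - 1)
    footprint = landscape[ly, lx].astype(float)
    # Pool each (px_h x px_w) block into one sensor pixel by its mean.
    blocks = footprint.reshape(sensor_rows, px_h, sensor_cols, px_w)
    return blocks.mean(axis=(1, 3))
```

For example, a 40 × 1 pecten-like sensor whose pixels each cover a 6 × 12 landscape area would be sampled with sensor_rows=1, sensor_cols=40, px_h=6, px_w=12.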

Note that each sensor pixel detects only the chemicals directly in contact with it. This restriction is justified by [14], which shows that the response of peg sensilla to pure volatiles, such as 6-carbon alcohols, falls to essentially nothing when the stimulant is further than 10 microns away.

The resulting sensor matrix is itself an 8-bit HSB image. To simulate a less sensitive sensor, the brightness values in the sensor matrix are each rounded to the closest of a small number of valid levels. If the number of allowed levels is set to two, for example, the result is a binary sensor: each pixel's brightness is rounded to the nearer of 0 or 255. Our model can also independently apply similar rounding to the hue and saturation in order to simulate a configurable sensitivity to chemical differences, but we have not done so in this work.
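The brightness rounding can be written compactly. The sketch below (our own naming) rounds each value to the nearest of a small number of evenly spaced levels between 0 and 255, so that levels=2 yields the binary sensor described above.

```python
import numpy as np

def quantize_brightness(sensor, levels):
    """Round each 8-bit brightness value to the nearest of `levels`
    evenly spaced valid levels (a sketch of the reduced-sensitivity
    sensor; with levels=2 every pixel becomes 0 or 255)."""
    valid = np.linspace(0, 255, levels)  # allowed output values
    idx = np.argmin(np.abs(sensor[..., None] - valid), axis=-1)
    return valid[idx]
```
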

Training

Given a training path as a sequence of points (x_1, y_1), (x_2, y_2), ..., (x_n, y_n), we collect the training glimpses T_1, T_2, ..., T_{n−1} as T_i = Glimpse(x_i, y_i, θ_i), where θ_i is the heading from (x_i, y_i) toward (x_{i+1}, y_{i+1}). Training is shown schematically in panel 1 of Figure 3.

Navigating

To navigate, the agent sweeps its sensor back and forth across candidate headings. This back-and-forth rotation, called "saccading," is observed in other animals, such as ants and bees, hypothesized to use familiarity navigation [54,55]. Saccading allows the animal to test a range of candidate directions in which it could move. Rather than rotating their entire body, like bees do, we propose that scorpions saccade by moving their pectines, which are highly mobile when the animal walks (see, for example, Figure 3 in S1 Appendix). The process of saccading and choosing an orientation is shown schematically in panel 2 of Figure 3.
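Collecting the training glimpses can be sketched as follows. Here glimpse_fn stands in for Glimpse(x, y, θ), and the heading convention (each glimpse taken facing the next path point) is our reading of the n−1 training glimpses.

```python
import math

def training_glimpses(path, glimpse_fn):
    """Collect T_1..T_{n-1} along a training path of n points: each
    glimpse is taken at (x_i, y_i) with the heading toward the next
    path point (sketch; `glimpse_fn(x, y, theta)` is a stand-in for
    the model's Glimpse function)."""
    T = []
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        theta = math.atan2(y1 - y0, x1 - x0)  # heading toward next point
        T.append(glimpse_fn(x0, y0, theta))
    return T
```
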

To complete a step, the set of saccade offsets S is first defined as the S_n angles evenly distributed over the interval [−S_width/2, S_width/2], including the endpoints, where S_width is the saccade width parameter and S_n is a parameter denoting the number of glimpses per step. If the agent is currently at position (x, y, θ), its new heading is chosen as

    θ′ = θ + argmax_{s ∈ S} Familiarity(Glimpse(x, y, θ + s)).

The agent then steps in the chosen direction, achieving a new position of

    (x + l·cos θ′, y + l·sin θ′, θ′),

where l is the step length parameter.

We define the familiarity of a sensory glimpse G as

    Familiarity(G) = max_{i = 1, ..., n−1} [1 − d(G, T_i)],

where d(·, ·) ∈ [0, 1] is a metric (in the informal rather than strict sense) among glimpses. We define d(·, ·) as a weighted combination of a metric for textural differences and a metric for chemical differences:

    d(G_1, G_2) = β·d_chem(G_1, G_2) + (1 − β)·d_text(G_1, G_2),

where β ∈ [0, 1] is the weight that determines how much relative importance to give to chemistry versus texture. For the texture difference metric, we use the sum of absolute differences (SADS) metric [52], normalized to [0, 1]:

    d_text(G_1, G_2) = (1/(255·N)) Σ_{i,j} |G_1(i, j, B) − G_2(i, j, B)|,

where N is the number of sensor pixels and G(i, j, {H, S, or B}) denotes the H, S, or B component of the (i, j)th sensor pixel in glimpse G. For the chemical difference metric, we define

    d_chem(G_1, G_2) = (1/Z) Σ_{i,j} δ(i, j),

where Z is a normalizing constant that keeps d_chem in [0, 1], δ(i, j) = |conc_1(i, j) − conc_2(i, j)| when the (i, j)th sensor pixels have the same chemical identity, and δ(i, j) = conc_1(i, j) + conc_2(i, j) otherwise. In words, when the (i, j)th sensor pixels in G_1 and G_2 have the same chemical identity, the summand is the difference between their concentrations. When they have different chemical identities, the summand is the sum of their concentrations. This yields the intuitive result that there is a bigger difference between a high concentration of one chemical and a high concentration of a different chemical than between low concentrations of each.

The code for our model and visualizations is open-source and publicly available [56]; details on its optimized implementation can be found in section 4 of S1 Appendix. Our high-throughput methodology was primarily inspired by high-throughput virtual screening and optimization efforts in materials science such as [57] and [58]. Similar methods have also attained prominence in proteomics, computational chemistry, pharmacology, and many other fields.
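The step procedure and the familiarity metric can be sketched together in Python. This is an illustrative reading of the equations, not the published code [56]: the function names and the normalization constants (255 and 510 per pixel) are our assumptions, chosen so the distances stay in [0, 1].

```python
import numpy as np

def glimpse_distance(b1, b2, c1, c2, id1, id2, beta):
    """Weighted difference d(G1, G2) in [0, 1] between two glimpses
    (sketch).  b*: brightness matrices, c*: concentration matrices,
    id*: chemical-identity matrices; beta weights chemistry vs texture."""
    n = b1.size
    # SADS texture metric on brightness, normalized by 255 per pixel.
    d_text = np.abs(b1.astype(float) - b2.astype(float)).sum() / (255.0 * n)
    # Same chemical: |c1 - c2|; different chemicals: c1 + c2.
    pair = np.where(id1 == id2, np.abs(c1 - c2), c1 + c2)
    d_chem = pair.sum() / (510.0 * n)  # 510 = max per-pixel summand (assumption)
    return beta * d_chem + (1.0 - beta) * d_text

def familiarity(G, training, dist):
    """Familiarity of a glimpse: its closest match among the training
    glimpses, mapped so that higher values mean more familiar."""
    return max(1.0 - dist(G, T) for T in training)

def step(x, y, theta, l, s_width, s_n, glimpse_fn, training, dist):
    """One NFLS step (sketch): saccade over s_n offsets evenly spaced
    across [-s_width/2, +s_width/2] (endpoints included), turn toward
    the most familiar glimpse, then advance one step length l."""
    offsets = np.linspace(-s_width / 2.0, s_width / 2.0, s_n)
    best = max(offsets,
               key=lambda s: familiarity(glimpse_fn(x, y, theta + s),
                                         training, dist))
    theta_new = theta + best
    return x + l * np.cos(theta_new), y + l * np.sin(theta_new), theta_new
```
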
with which the scorpion's brain distinguishes between the pegs on a tooth, i.e., whether individual pegs or groups of pegs on a tooth are processed at the brain.

The grouped nature of the pegs is supported by experimental evidence: it takes approximately eight pegs working in tandem to distinguish between chemicals [15]. While each peg contains a mechanoreceptor [8,9], and the kinetics of the mechanosensory response [12] may yield finer-grained resolution, the textural discrimination power of the pegs has not been tested. As such, we chose to use the more conservative values based on the chemosensory response.

Our analysis of behavioral data (section 3 in S1 Appendix) led us to estimate that the scorpion takes about 180 ms to move 1.3 mm, the length of one step. Dividing yields a maximum of approximately 156 glimpses per step.

To be conservative and working under the assumption that the chemical

Success metrics

We defined and computed five metrics for every trial.

Hereafter, we define a "sensory configuration" as a unique combination of values for the parameters listed as sensory properties above. An "environmental configuration" is defined similarly from the list of environmental properties. A "sensory-environmental configuration" is a unique combination of one of each. We define a "situation" as a unique combination of a start offset, a training path curvature, and a specific landscape (as mentioned above, we tested four different but identically generated landscapes).

(Table 1 caption: "% successful" indicates the percentage of all trials with the given sensory configuration that were successful. "Avg. properties of successes" show averages over all trials with the given sensory configuration that were successful; the values given after ± are standard deviations.)
For each sensory, environmental, and sensory-environmental configuration, we computed its success rate as the percentage of successful trials among all that used the given configuration. The five most successful sensory, environmental, and sensory-environmental configurations are given in Table 1, Table 2, and Table 3, respectively. We computed the success rates twice: first across trials without start offsets and then across all trials. The first calculation serves to establish the fundamental limitations of NFLS in an ideal world, while the second introduces realistic imperfections.

(Tables 2 and 3 captions: "% successful" indicates the percentage of all trials with the given environmental (Table 2) or sensory-environmental (Table 3) configuration that were successful. "Avg. properties of successes" show averages over the successful trials; the values given after ± are standard deviations.)
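The per-configuration success rate is a simple tally. The sketch below assumes a hypothetical record format of (configuration, succeeded) pairs; the function name is our own.

```python
from collections import defaultdict

def success_rates(trials):
    """Success rate per configuration: the percentage of successful
    trials among all trials that used that configuration (sketch;
    `trials` is a list of (config, succeeded) pairs)."""
    won = defaultdict(int)
    total = defaultdict(int)
    for config, succeeded in trials:
        total[config] += 1
        won[config] += bool(succeeded)
    return {c: 100.0 * won[c] / total[c] for c in total}
```
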

Feasibility of NFLS without offsets

Without start offsets, there is at least one physically plausible sensory configuration that succeeds in 100 % of tested environments and situations. We believe that this is strong evidence that the scorpion's environment and physiology contain and sense sufficient information to enable NFLS. The existence of other highly successful sensory configurations (about 5 % of sensory configurations had a success rate above 80 % without offsets; Figure 6 in S1 Appendix) further supports this conclusion. Among complete sensory-environmental configurations, 2 % (33 configurations) achieved a 100 % success rate across all situational parameters (Figure 5 in S1 Appendix).

Feasibility of NFLS with offsets
In general, trials with offsets were three times less successful than those without. Thus, in considering all trials, we expected and saw a decrease in success rates: the success rate of the best sensory configuration declined from 100 % to 58 %, while the success rate of the top sensory-environmental configuration declined from 100 % to 67 % (Table 1, Table 3). Nonetheless, the top configurations still succeeded in many situations.

Saccade-like behavior has been observed in other animals hypothesized to use familiarity navigation [68,69]; there is at present, however, no evidence for or against such behavior in scorpions.

An agent that started at an offset but quickly captured the path is just as successful, if not more so, than a similar one that started on the path. Path completion reflects this reasoning.

We expected a number of the trends observed in Figure 5. (The significance of the trends is confirmed by the confidence intervals in Figure 7 in S1 Appendix.) Among them, blurring the landscape smooths out small-scale differences, increasing success rates. The smoothing effect could also explain why the best performance for a binary sensor was seen at the low resolution of 40 × 2, a setting that also blurs the landscape by averaging more landscape pixels into a single sensor pixel.

Binary sensors were also the only configuration where performance was enhanced by increasing the weighting of chemical information in familiarity calculations: performance peaked at an equal β = 0.5 balance between textural and chemical information.
Surprisingly, sensors with more texture levels performed best when chemical information was neglected (β = 0), though performance did not dramatically drop until β was high. In situations where it is necessary to distinguish between many more glimpses, having more chemical information could become unambiguously critical for navigational success.

We propose, however, that these results can be explained through a fundamental interplay in familiarity navigation between "complexity" (the need to distinguish between glimpses in different places and orientations) and "continuity" (the need for small offsets to minimally affect familiarity) [37,52]. If landscape features are small compared to the size of the sensor, continuity will be low but the landscape will be complex, an imbalance that sacrifices continuity. In the case of β = 1.0, texture is completely ignored and the imbalance is even more pronounced, causing navigation to become essentially impossible (see Figure 5).

We believe that this trade-off is a critical area for further research in familiarity navigation. Although little evidence is at present available to support such a hypothesis, we believe it deserves attention and could be relevant for robotic applications regardless of its biological plausibility.

We also propose that our analysis of the effects of different parameters on navigational performance, although constrained in this work by scorpion physiology, could serve as a guide for the design of robots that use NFLS.

feedback, and ongoing support.

We also thank Mariëlle Hoefnagels for carefully reviewing the manuscript and for her many valuable suggestions for improvement.

The authors thank the other members of the Gaffin Scorpion Lab (Tanner Ortery, Safra Shakir, Drew Doak, and Jacob Sims) for their feedback and discussions on early work.

The computing for this project was performed at the OU Supercomputing Center for Education and Research (OSCER) at the University of Oklahoma (OU). The authors gratefully acknowledge the staff of the center for their helpfulness.