Abstract
Somatosensation connects animals to their immediate environment, shaping critical behaviours essential for adaptation, learning, and survival. The investigation of somatosensation in mice presents significant challenges, primarily due to the practical difficulties of delivering somatosensory stimulation to their skin while they are in motion. To address this problem, we have developed a method for precise delivery of somatosensory stimuli to mice as they move through environments. The approach employs real-time keypoint tracking and targeted optical stimulation, offering precision while preserving the naturalistic context of the behaviours studied, thereby overcoming the traditional trade-off between experimental control and natural behaviour. We demonstrate the method in settings ranging from nociceptive testing to unrestrained behaviour in different environments. We observed that minimal nociceptive inputs can evoke rapid behaviours, and that these inputs modify movements when stimuli are applied during motion. This method provides a way to explore the diverse roles of somatosensation, from reflexes to decision-making, in naturalistic settings.
Introduction
Somatosensation is integral to how animals interact with their environments, shaping behaviours crucial for adaptation, learning and survival. However, current methods for somatosensory stimulation of the skin usually necessitate direct physical contact, requiring researchers to limit the behaviours and environments of model organisms. This often involves restraining animals, restricting their movements, and inducing stress-related physiological and behavioural changes. There is a growing demand for studying freely-moving mice in naturalistic environments (1–5). Yet, unlike the visual, olfactory, and auditory systems, which can exploit remotely acting stimuli, delivering somatosensory inputs to mice that are moving through environments presents a significant challenge. To address this challenge, we introduce a method capable of automatically delivering somatosensory stimuli in a remote, precise, and dynamic manner. Our approach enables the targeting of mice within naturalistic settings as they engage in free exploration.
The somatosensory system is essential for connecting the body to the environment, recruiting reflexes, fine-tuning movement through sensory feedback, and sensing temperature, touch, and potential tissue damage. These functions are critical for sensorimotor loops, perception, learning, and action. Traditionally, probing this system has involved a compromise between achieving spatiotemporal precision and preserving natural animal behaviour. Precise stimuli can be delivered to specific regions such as the paws, body, or whiskers in mice if they are head-fixed or restrained, enabling studies on sensation, motivation, learning, and decision-making (6–14).
In preclinical settings, when studying rapid reflexive responses, somatosensory stimuli are typically delivered to the paws of mice confined to small chambers, a setup that impairs natural behaviour. This approach has been instrumental in gaining insights into nociception and broader somatosensory processes, and recent efforts have begun to automate these stimulus deliveries to reduce manual labour and experimenter bias (15–17). Meanwhile, somatosensory stimuli such as shocks and air puffs have long been used in aversive learning studies to probe the cells and circuits underpinning aversion, avoidance, fear, expectancy, memory formation, and retrieval. These methods have contributed significantly to our understanding over decades, yet their broader application as somatosensory stimuli is often overlooked.
Applying localised somatosensory stimuli to moving rodents requires experimenters to be in close proximity, to continuously observe the movements of the animals as they explore, and to manually touch a hind paw at regular intervals (18, 19). This has demonstrated utility in recent studies of circuits involved in pain processing (20, 21). Such attempts allow for environmental exploration, but the close proximity of researchers required for the manual stimulation of moving mice undercuts its ecological validity; conversely, shock-based methods can be automated, enhancing ecological validity, but they lack the spatial precision required for detailed somatosensory research. By leveraging advances in remote somatosensory stimulation (22) and real-time markerless body part tracking (23), and with a crucial emphasis on naturalistic environments (3), we now circumvent these traditional trade-offs between spatiotemporal precision and naturalistic settings. Our method can begin to unify these disparate research efforts.
We develop a method that enables precision somatosensory stimulation in mice that are freely moving through large environments. The system works in closed-loop, using real-time body part tracking to locate specific coordinates (keypoints) and precisely target thermal or remote optogenetic stimuli using lasers. These spatiotemporally precise somatosensory stimuli are fully automated and can be tuned according to environmental contexts. We establish random-access multi-animal stimulation for basic nociceptive testing, show that minimal noxious somatosensory inputs evoke reflexes and whole-body behaviours, and demonstrate that mice adjust movement in a maze when stimulated during running. Together, we offer a method to deliver precise somatosensory inputs in freely behaving mice, providing a framework to examine somatosensory processes spanning reflexes to decision making.
Results
To develop closed-loop somatosensory stimulation in mice exploring large environments, we built a system that could rapidly track, target, and stimulate mice remotely (Fig. 1). We used real-time pose estimation to target lasers for spatiotemporally precise naturalistic thermal stimulation and optogenetic stimulation of genetically-defined afferent fibres at the skin surface (22, 23). The approach was demonstrated in different environments, allowing automated somatosensory stimulation in a large open arena and during goal-directed behaviour as mice ran through a maze.
Development of closed-loop somatosensory stimulation
We designed a system that could automatically track and target somatosensory stimuli in 0.5 m x 0.5 m environments, leveraging our laser-scanning design (22). Environments were placed atop a large glass platform so that exploring mice could be recorded with a camera from below (Fig. 2a,b). The camera enabled real-time pose estimation with DeepLabCut-Live! (23), providing the keypoints of multiple body parts for every frame of the camera feed. The x and y frame keypoints were converted to pre-mapped control signals for x and y galvanometer mirrors to direct laser beams as required. This provides a flexible closed-loop system to dynamically control scanned stimulation according to behaviourally-relevant criteria.
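To illustrate the conversion step described above, the following minimal Python sketch shows how a tracked keypoint (in pixels) could be converted to galvanometer voltages via a precomputed lookup table and written to an analogue-output DAQ. The nidaqmx package, the device and channel names, and the file name of the lookup table are assumptions for illustration and are not taken from the published implementation.

import numpy as np
import nidaqmx

# pixel_to_voltage[y_px, x_px] -> (vx, vy); built during calibration (see Methods).
pixel_to_voltage = np.load("pixel_voltage_map.npy")  # hypothetical file name

def target_keypoint(x_px, y_px, task):
    # Look up the galvanometer voltages for this pixel and write them to the
    # two analogue-output channels driving the x and y mirrors.
    vx, vy = pixel_to_voltage[int(round(y_px)), int(round(x_px))]
    task.write([float(vx), float(vy)])

with nidaqmx.Task() as task:
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0:1")  # assumed device/channel names
    target_keypoint(512.3, 384.7, task)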
The method was optically precise. The glass platform was just less than 1 m above the galvanometers, resulting in a maximum focal length variability of 3.49%, minimising differences in laser spot size across the stimulation plane. The absolute optical power and power density were uniform across the glass platform (coefficients of variation 3.49 and 2.92, respectively; Extended Data Fig. 1a). The laser spot size was calibrated to 2.00 ± 0.08 mm2 (coefficient of variation = 3.85) at the stimulation plane with a series of lenses along the blue light beam path (22). The laser spot could be moved along specific trajectories creating patterns (Fig. 2c, Extended Data Fig. 1b). We used 10,000 x,y voltage pairs to jump the laser across the stimulation plane and map the voltages to corresponding pixels (Fig. 2c). Surface fits resulted in a pixel-voltage mapping dictionary that minimised non-linear distortions, resulting in a mean average Euclidean error (MAE) of 1.2 pixels (0.54 mm) between predicted and actual laser spot locations. The system and glass platform were stable, with a small displacement of around half a hind paw width (MAE = 1.94 ± 2.61 mm) each week during intensive use of the system (Extended Data Fig. 1c). Remapping with our approach takes <30 minutes to correct for any such drift.
The method could accurately target moving mice. Real-time estimation of keypoints (Fig. 2d,e) was used for closed-loop control of the galvanometer mirror angles, resulting in pairwise correlations of R = 0.999 along both x- and y-axes (Fig. 2f). Thus, the laser could be targeted in real time to body parts when certain programmatic criteria were met (see Extended Data Fig. 2 for the information flow). For instance, the laser beam could be triggered if an individual keypoint moved with a positional variance ≤ v for a duration ≥ t while its estimation likelihood was ≥ l, where v, t, and l are user-defined variables. To determine the targeting accuracy, we used wild type mice that did not express ChR2, so that blue light pulses did not cause behavioural responses. The latency between acquiring a 1.2 MP frame and targeting the laser was 84 ± 12 ms (mean ± SD using 16,000 trials across 4 wild type mice; Extended Data Fig. 1d). This delay is short enough to target the paws during the movement of the mouse: the hind paws were static for 350 ± 44 ms during the stance phase and were moving for 100 ± 1 ms during the swing phase (Extended Data Fig. 1e). The positioning of the paws during the stance phase of locomotion creates ‘footprints’ in keypoint space, indicating moments when the paws are momentarily still even as the mouse moves (Fig. 2g).
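A minimal sketch of such a trigger policy is given below, assuming a 30 fps camera feed; the class name and default values are illustrative, with v in pixels squared, t in seconds, and l a likelihood threshold, following the text.

from collections import deque
import numpy as np

class StillnessTrigger:
    # Fire when a keypoint stays still (positional variance <= v, in pixels^2)
    # for at least t seconds, with estimation likelihood >= l on every frame.
    def __init__(self, v=1.0, t=2.0, l=0.8, fps=30):
        self.v, self.l = v, l
        self.buffer = deque(maxlen=int(t * fps))

    def update(self, x, y, likelihood):
        if likelihood < self.l:
            self.buffer.clear()               # a low-confidence frame resets the window
            return False
        self.buffer.append((x, y))
        if len(self.buffer) < self.buffer.maxlen:
            return False                      # not still for long enough yet
        xy = np.asarray(self.buffer)
        return bool(np.all(xy.var(axis=0) <= self.v))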
Mice move at variable speeds while exploring, which can be categorised (Fig. 2h). While stationary (58 ± 7% of the time), the hind paws were static in 99.8 ± 0.1% of frames, and this remained high even as speed increased: 95.7 ± 0.1% of frames at low speed (28 ± 4% of the time), 79.3 ± 0.1% of frames at medium speed (8 ± 2% of the time), and 60 ± 0.3% at high speed (6 ± 1% of the time). Therefore, even during locomotion the hind paws are static long enough for stimulation with short-latency body part tracking (Fig. 2i,j). The laser was successfully targeted to the hind paws with a high success rate when the mouse was moving at medium speeds (75.5 ± 3.0% - 95.5 ± 2.6%; Fig. 2k), though the success rate was reduced during very fast movements, as expected. We found zero keypoint confusion across all speeds (657/657 trials targeted the correct hind paw). The optical system targeted the hind paws of wild type mice exploring an open arena 650 ± 30 times within 5 minutes at 30 fps (n = 4 mice), providing ample opportunity for experimental stimulation in short periods of time. We also targeted the fore paws, but confusion between them (5.7 ± 2.2%) resulted in a lower success rate than for the hind paws (Extended Data Fig. 1f). Laser spots were delivered with high accuracy to targeted body parts, typically showing an error of 0.7 mm MAE (Fig. 2l). Thus, this design resulted in a fully automated system that facilitates precise optical targeting of freely-moving mice in a large environment.
Somatosensory stimulation in a large environment
We next used the closed-loop method to automatically deliver somatosensory stimuli. An open arena was chosen to promote movement (Fig. 3a,b). Behaviour could be examined by pose estimation, reconstructing the movement trajectories of body parts in the arena (Fig. 3c). Minimal somatosensory stimulation was achieved using brief transdermal optogenetic stimulation of nociceptors: mice expressed the blue-light sensitive opsin, ChR2, in nociceptors innervating the skin (Trpv1::ChR2; (11, 22, 24)). Mice explored the arena and, when stationary, were stimulated on the hind paw with brief 10 ms pulses of light, each at least 10 minutes apart (Fig. 3b). We found that 100% of trials accurately targeted and hit the hind paw (Fig. 3d).
The precise somatosensory input could be mapped to motor output in the form of coordinated behaviour in freely moving mice. The brief stimuli caused paw withdrawals along with global behaviour, including rapid head orientation and body repositioning (Fig. 3e,f). These behaviours could be quantified as time-locked keypoint traces for each body part (Fig. 3e,f). Our method enables fully automated and precise optical delivery of somatosensory stimuli to freely-moving mice while simultaneously recording and quantifying behaviour.
Somatosensory stimulation of mice running through a maze
We demonstrate that freely-moving mice, running through a maze environment, could be stimulated with high spatiotemporal accuracy. The ability to deliver somatosensory stimuli to mice moving through ecologically relevant tasks has previously been a significant challenge and experimenters typically stimulate the moving mice manually (18–21).
To motivate movement in an ecologically relevant task, we built a novel maze that encourages alternation between two rewards at separate locations. The maze had one-way doors, ensuring that after making a left-right decision, mice can potentially obtain a reward but are required to circle back to the maze’s start point to re-initiate the action-reward cycle (Fig. 4a,b). The reward ports were activated by a brief nose poke and delivered a drop of sucrose water, followed by a timeout period. In addition to the timeout, the reward ports were only reset once the mouse exited the reward chamber through a one-way door, as determined by real-time keypoint tracking. The combination of a long timeout and required exit from the reward chamber rendered it more time-efficient to cycle between the two reward chambers and encouraged running along the corridors.
Mice were trained over three days in the maze. In the first training session, they rapidly navigated the one-way doors with little delay after a few attempts. This quickly led to the use of the reward ports in the chambers on either side of the maze. The first reward was collected after 440 ± 117 s (4 mice). By the third training session, this time decreased to 82 ± 38 s, suggesting rapid learning. During the reward port timeout, mice typically explored the corridor connected to the reward chamber or exited it to explore the entry corridor. By the third day of training, the number of trials had increased for all mice. On average, mice completed 64 ± 24 rewarded trials in the third training session (<2 hours in duration), although individual performance varied considerably. For example, one mouse completed 97 trials in the third session, while another completed only 10 trials (Fig. 4c).
Mice could be stimulated effectively while they were running. Example movement trajectories from one mouse are shown in Fig. 4d. Using Trpv1::ChR2 mice, we demonstrate that a brief (3 ms) nociceptive stimulus can be used in the context of localised somatosensory hypersensitivity. We employed a widely used model of inflammatory pain with a unilateral injection of complete Freund’s adjuvant (CFA) in the hind paw. The nociceptive stimulus was successfully targeted to the right (uninjected) paw with negligible confusion between paws (705/706 stimuli targeted the correct hind paw). This enables future studies where phasic and tonic pain can be separated. Despite ongoing hypersensitivity and phasic stimuli, mice remained actively engaged in the task, consistently collecting rewards sequentially from each side as trained (Fig. 4f,g). Their rapid movement along the stimulation corridors (maximum speed of 241 ± 53 mm/s) necessitated precise targeting (Fig. 4e).
We found that the nociceptive stimuli can slow goal-directed movement. The stimulation resulted in reflexive responses that interrupted locomotion as mice oriented towards the nociceptive stimuli, presumably to investigate the stimulated site. This resulted in slower locomotion speeds (P = 0.037 with paired t-test, n = 4 mice) and more variable trajectories along the higher frequency stimulation corridor (Fig. 4h). Once past the stimulation corridors, the mice successfully collected a reward in almost all trials. Accurately targeting and delivering somatosensory stimuli to mice as they explore naturalistic and complex environments can provide insights into how sensory inputs shape ecologically-relevant goal-directed behaviour.
Multi-animal assessment of nocifensive behaviours
Ecologically valid tasks can be used to examine naturalistic behaviour during learning and action, but certain situations may require that this is grounded in basic behavioural responses, for example to explore the relationships between nociceptive inputs, nocifensive responses, and complex behaviour. To demonstrate the flexibility of the approach, we show automatic testing of nocifensive behaviour in nine mice (3 × 3 configuration) in individual chambers. A method for random-access targeting was developed (Fig. 5a); we detected idle mice by monitoring the motion in each chamber, then rapidly selected and cropped to one chamber for real-time pose estimation and stimulation (Fig. 5b). This reduced the computational burden compared to running real-time pose estimation on all nine mice simultaneously, as the image resolution was reduced (23). The process operated as a loop, ensuring that automated stimuli were spaced at least one minute apart for each mouse.
To test the method we used: (1) thermal stimulation with a 10 s infrared (785 nm) light spot on the hind paw of wild type mice; and (2) optogenetic stimulation of cutaneous nociceptors with a 3 ms blue (473 nm) light spot on the glabrous plantar surface of the hind paw of Trpv1::ChR2 mice. This >3000-fold difference in pulse duration demonstrates the temporal resolution afforded by optogenetics. We varied the intensity of the optogenetic stimuli using 10 Hz pulse trains (0.5 - 8 mW/mm2) and compared these to a single pulse at higher intensity (40 mW/mm2).
Thermal and optogenetic stimulation induced similar nocifensive behaviours, encompassing paw responses and whole-body movements (Fig. 5c). This can be seen from traces of hind paw movement following thermal or optogenetic stimuli (Fig. 5d and f). Mice concurrently directed their heads towards the stimulus location while repositioning their bodies away from it, consistent with previous studies demonstrating the synchronisation of global behaviours like head orienting with local reflexes on a sub-second timescale (24, 25). In the case of thermal stimulation, the traces of the hind paw movements and cumulative distributions demonstrated nocifensive behaviours elicited with infrared stimulation (Fig. 5d and e). Optogenetic stimulation-induced response latencies followed the rank order of stimulus intensity, showing the dynamic range possible with this method (Fig. 5g). Littermate control mice did not exhibit responses, consistent with the open arena stimulation results and previous descriptions of the same mouse lines (22, 24). The method is demonstrated as both versatile and precise, enabling the study of a spectrum of behaviours in freely moving mice, ranging from local paw withdrawals to global movements including head orientation and body repositioning. This establishes a robust technology and framework for future investigations into complex behaviours, including decision-making and learning in the context of pain and somatosensation.
Discussion
The somatosensory system provides a critical link between the brain, body, and the immediate external environment. The complex ways in which this system supports movement, learning, and action in rodents has historically posed substantial methodological challenges. Traditional methodologies have varied widely, encompassing both innovative and practical approaches—from the precise stimulation of whiskers or skin in head-fixed animals (8–13, 26–28) to the more straightforward manual touching of paws in freely moving mice (19–21, 29). These methods, however, have inherent limitations in replicating the dynamic and complex interactions experienced in naturalistic settings. In response to these limitations, we have developed a system that enables somatosensory stimulation in large environments. This closed-loop system automatically tracks, targets, and stimulates mice remotely so that it is now possible to study the somatosensory system in naturalistic environmental settings.
Generating somatosensory inputs in freely moving mice requires stimuli that are spatially and temporally precise. We achieved millisecond-timescale stimulation of small skin areas using transdermal optogenetics (‘remote touch’ (22)). Opsins were genetically targeted to specific afferent fibres innervating skin and activated with light targeted precisely via a laser in free-space. This beam path was aligned to a second laser system and employed for thermal stimulation on a timescale of seconds. Delivery of these stimuli was controlled by a feedback system comprising three main components: (1) real-time tracking infers the keypoints of various body parts, continuously transmitting this data to the controller; (2) user-defined policies determine the stimulation conditions (spatial, temporal, and physiological state-dependence), providing control signals for the actuators; and (3) mirror galvanometers target the beam path to specified keypoints and signals trigger light delivery, following which real-time tracking is then resumed. We demonstrate that this system can precisely target the hind paw for stimulation, even as the mice are in motion.
We demonstrate the versatility of the method, from automated multi-animal nociceptive testing to ephemeral stimulation of freely moving mice in a large arena. We show that stimuli could be targeted with high accuracy and resulted in immediate behavioural responses that could be mapped. The method was used to deliver somatosensory stimuli to mice running through a maze. Mice were trained on an alternation task with stimuli applied en route to the reward, thus separating choice, punishment, and reward in a naturalistic environment. Mice with ongoing pain still readily engaged in this task during nociceptive stimulation (punishment). They did not avoid routes in this task, but the recruitment of reflexes during locomotion caused immediate evaluative behaviour that temporarily disrupted goal-directed behaviour. Achieving such localised stimulation has been challenging with traditional methods: electric grid floors generate variable, generalised stimuli that are difficult to interpret in established models of chronic pain that produce unilateral hind paw hypersensitivity, and manual stimulation lacks capacity and reliability and is potentially confounded by experimenter and observer biases. Our method addresses these issues, allowing for the dissociation of touch, phasic pain, and tonic pain to better understand their relationships with behaviour.
Stimulation of the body and paws enables the study of pain, touch, thermoception and movement (30–41). Paw stimulation is also ubiquitous in aversive learning and memory studies, which use crude shock stimulation via a grid floor (42, 43), and can now be carried out with precision. The stimulation can be static or dynamic and localised to small areas on the body in an automated manner. Automation improves the spatiotemporal precision of stimulus delivery compared to traditional manual methods, reduces labour, and enhances the reliability of the data. All experiments were conducted remotely from an adjacent room to minimise potential observer effects and biases on the mice. Automated nociceptive assays have principally focused on the initial rapid movements elicited by stimulation of mice in small chambers (15). Here, we provide a method to examine how these rapid movements are embedded within complex behaviour in naturalistic environments, opening new ways to investigate nociception, and somatosensation more broadly.
While we demonstrate the utility of the optical system using nociceptive stimuli, this method can deliver various somatosensory inputs by targeting specific afferents for selective opsin expression, whether they are thermoreceptive, chemoreceptive, or mechanosensitive (15–17, 22, 44). It is important to acknowledge that these represent artificial stimuli that do not occur in nature but provide much-needed spatial, temporal, and genetic precision (42, 45, 46). Our method enables delivery of multiple wavelengths of light separately or together, should combinations of opsins be required from the vast optogenetic toolbox. Opsins can be used to activate or silence neurons, offer a range of kinetic properties and diverse wavelength profiles that allow multi-colour manipulations, or control different downstream signalling effectors (47). Thermal stimuli may be more naturalistic and are also used routinely in research (13, 15, 48), but slow thermal dissipation can require mice to be stationary for consistent stimulation. Automation provides opportunities for the development of analgesics, particularly when moving beyond reflexes to spontaneous, free operant behaviours. Somatosensory stimulation in naturalistic environments can be readily combined with approaches to quantify behaviour (49–56).
The method has many applications for sensorimotor reflexes, perception, memory, learning, and action. It is flexible enough to trigger stimulation based on various states, including periods of inactivity or locomotion, at specific spatial locations, and with precise timing. Future work made possible by this method is expected to include examining how somatosensory input can interrupt and modulate specific swing phases (39), self-grooming, posture states (24), and other spontaneous behavioural syllables (51). It can facilitate investigations of naturalistic learning, whether through mazes, social interactions, or interactions with the environment and objects (57, 58), and of sleep fragmentation (59), anxiety (60, 61), fear (42), and stress (43). Finally, it has the potential to provide free operant methods for analgesic development for chronic pain. These directions can utilise tools for mechanistic dissection of cell and circuit biology in the context of naturalistic behaviours.
In summary, establishing how behaviour is shaped by somatosensation requires that mice can be stimulated while freely behaving. We describe a method that addresses this need, delivering somatosensory inputs in a manner that is remote, precise, flexible, state-dependent, and fully automated to target freely-behaving mice that are actively exploring naturalistic environments.
Methods
Animals
Mice were housed at 21 ± 2°C and 55% relative humidity, following a 12-hour light: 12-hour dark cycle with ad libitum access to food and water. Optogenetic experiments were performed using mice with ChR2 selectively expressed in nociceptors (Trpv1::ChR2). Heterozygous Trpv1-Cre mice, which have Cre recombinase inserted downstream of the Trpv1 gene (RRID:IMSR_JAX:017769, B6.129-Trpv1tm1(cre)Bbm/J, (62)), were crossed with mice homozygous for Cre-dependent ChR2(H134R)-tdTomato (RRID:IMSR_JAX:012567, Ai27(RCL-hChR2(H134R)/tdT)-D, (63)). This produced progeny heterozygous for both transgenes (Trpv1::ChR2) and control littermates that do not encode Cre recombinase but do encode Cre-dependent ChR2-tdTomato. Blue light directed to the glabrous plantar surface of the hind paw in Trpv1::ChR2 mice results in the direct, time-locked activation of broad-class nociceptors with single action potential resolution (24). Experiments with the infrared (IR) laser were performed using wild type mice (RRID:IMSR_JAX:000664, C57BL/6J). Equal numbers of male and female adult mice were used (aged between 6 and 40 weeks), with 2 - 5 cohorts of mice per experiment. All animal work was carried out according to the UK Animal Scientific Procedures Act (1986), approved by the UCL Animal Welfare and Ethical Review Body (AWERB) and performed under licenses released by the UK Home Office.
Design and development
Several substantial improvements were made to the optical design (22) to enable automated, multi-colour, closed-loop optical stimulation across a large environment. Part lists are provided in Supplementary Tables 1, 2 and 3.
The optical system was mounted on a large aluminium breadboard (0.75 m x 0.75 m) to provide more space for optical components and stability for the large glass platform. The diode laser beam (blue light, 473 nm, Cobolt, 06-01 MLD) was focused to the centre of the galvanometers using two broadband dielectric mirrors (M1 and M2) via an axially adjustable lens (L1, 30 mm focal length), a collimating lens (L2, 150 mm focal length), and a long focal length lens (L3, 500 mm focal length). We added a second laser beam path to enable multi-colour stimulation, using separate mirrors and lenses and an appropriate dichroic mirror (DM). The infrared (IR) laser (785 nm, SLOC, RLM785TA-1500) beam passed through an optical beam shutter (Thorlabs, SH05RM) to pulse the light with a controller (Thorlabs, KSC101). Two additional mirrors (M3 and M4) aligned the IR beam through a long focal length lens (750 mm) to the DM, where the beam path was aligned to converge with the blue light laser beam path into a pair of galvanometer mirrors (GM).
For the large environment, a 0.55 m x 0.55 m glass stimulation platform was held in place above the optical components via a vertical optical construction rail (95 mm x 95 mm x 1500 mm) attached to the aluminium breadboard, as shown in Fig. 2a. Aluminium construction rails (25 mm x 25 mm x 500 mm) were secured at each corner of the glass platform frame and the opposite side of the platform to the optical rail to ensure stability. The blue light laser spot size (1/e2 width) was calibrated to 2.3 mm2 using the non-rotating L1 adjustable lens housing and an optical beam profiler (BP209-VIS/M, Thorlabs). For the experiment using the IR laser, two near-IR hot mirrors (Thorlabs, FM201) were placed on top of the USB 3.0 camera (acA1920-40um camera, Basler) lens to minimise how much IR light was imaged by the camera.
For real-time markerless pose estimation to support automated, closed-loop stimulation, an additional camera was positioned below the glass stimulation platform. Behaviour was captured at 30 frames per second (fps) via a USB 3.0 connection to the primary computer (C1), which controlled video recording, pose estimation, calculations, and the direction of the galvanometer mirrors to target the lasers.
Optical system calibration
The optical parameters of the system were characterised using the blue laser due to its high beam quality. The uniformity of the blue light diode laser spot size across the glass stimulation platform was measured with an optical beam profiler (Thorlabs, BP209-VIS/M) placed at 16 locations across the platform. The beam profiler aperture was positioned at these locations using a custom laser-cut acrylic plate. Laser power was attenuated to 25% with an ND filter (Thorlabs, NE506B, optical density 0.6) to be within the operating range of the beam profiler. Absolute power (mW) at the 16 locations was assessed with an S121C photodiode sensor connected to an optical power meter (Thorlabs, PM100D). The laser beam area and the measured power were used to calculate the power density (mW/mm2) at each location (Extended Data Fig. 1a).
There was negligible distortion in the acquisition camera across the glass platform. This was determined by imaging a chessboard camera calibration pattern of 20 mm x 20 mm squares in a 14 × 10 grid at 5 different locations across the glass. OpenCV was used to measure the square sizes; the min-max range across all squares was 0.89 pixels (<1 pixel), which we considered negligible. The Euclidean norm was computed for a matrix of the corners of all squares, providing a scale factor of 0.45 mm/pixel.
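As an illustration of this check, the following sketch estimates a mm-per-pixel scale factor from a single chessboard image with OpenCV. The image file name, the inner-corner pattern size, and the use of adjacent-corner spacing (rather than the exact Euclidean-norm calculation described above) are assumptions for illustration.

import cv2
import numpy as np

SQUARE_MM = 20.0
PATTERN = (13, 9)  # inner corners of a 14 x 10 square grid (assumed)

img = cv2.imread("chessboard_on_platform.png", cv2.IMREAD_GRAYSCALE)
found, corners = cv2.findChessboardCorners(img, PATTERN)
assert found, "chessboard not detected"
corners = corners.reshape(PATTERN[1], PATTERN[0], 2)  # rows x cols x (x, y)

# Pixel distance between horizontally adjacent corners = one square width.
square_px = np.linalg.norm(np.diff(corners, axis=1), axis=2)
scale_mm_per_px = SQUARE_MM / square_px.mean()
print(f"scale factor: {scale_mm_per_px:.3f} mm/pixel")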
To generate a pixel-voltage coordinate dictionary that can be used to convert x,y pixel coordinates to x,y galvanometer voltage coordinates, the following steps were carried out. First, the galvanometers were raster stepped to direct the blue laser spot to a grid of 10,000 points (100 × 100), capturing these with the pose-estimation camera. For every point of the raster, the x,y voltages were mapped to the peak intensity pixel. An x,y voltage pair was then computed for every pixel by interpolation, fitting with a two-dimensional polynomial equation. This automated procedure took <30 minutes and resulted in a pre-computed pixel-voltage dictionary. Entering an x,y coordinate for a body part, inferred from the camera feed, returns the interpolated x,y voltages to target the laser to the same location. We repeated the mapping once every week over the course of 10 weeks to ensure the stability of the mapping. This was done during extensive experimentation to account for potential movements during cleaning and changes in arenas.
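A minimal sketch of the surface-fitting step is given below, assuming a second-order two-dimensional polynomial fitted by least squares; the published fit may use a different polynomial order or fitting routine.

import numpy as np

def design_matrix(px, py):
    # Second-order 2D polynomial terms evaluated at pixel coordinates.
    px, py = np.asarray(px, dtype=float), np.asarray(py, dtype=float)
    return np.column_stack([np.ones_like(px), px, py, px * py, px**2, py**2])

def fit_surface(px, py, v):
    # Least-squares fit of one voltage channel as a function of pixel position.
    coeffs, *_ = np.linalg.lstsq(design_matrix(px, py), v, rcond=None)
    return coeffs

def predict(coeffs, px, py):
    return design_matrix(px, py) @ coeffs

# px, py: pixel coordinates of the laser spot for each raster point
# vx, vy: galvanometer command voltages that produced those spots
# cx = fit_surface(px, py, vx); cy = fit_surface(px, py, vy)
# The dense lookup table (the 'pixel-voltage dictionary') can then be
# precomputed by evaluating predict() at every camera pixel.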
Pose estimation
Training a DeepLabCut network model
DeepLabCut (v2.2.0.2, (49, 64)) was installed and coupled to Tensorflow-GPU (v2.5.0, with CUDA v11.2 and cuDNN v8.1). The DeepLabCut neural network model was trained with default network and training settings in an Anaconda environment with Python v3.8.13 installed. Videos were selected based on their representation of the whole breadth of behavioural responses, and k-means clustering was used to select the training images. 437 frames were labelled from 22 selected videos, and the network was trained for 200,000 iterations. Following further optimisation of lighting, 210 frames from 11 additional videos were manually labelled and machine labels from 171 outlier frames from 9 videos were manually refined. These were fed back into the training dataset and the network was retrained for a further 200,000 iterations. Training resulted in an MAE of 3.29 pixels, which is comparable to human ground truth variability quantified elsewhere (see (49)). This model was used for all pose estimation. The video resolution (1920 × 1200) required a processing time longer than the frame interval (33.33 ms), resulting in real-time pose estimation on a sub-sample of all recorded frames. Therefore, post-hoc pose estimation was carried out to analyse all frames.
Real-time tracking
The DLC-Live! SDK (v1.0, (23)) was installed on a computer with fast processing capabilities (AMD Ryzen 5 3600 six-core CPU (3.6 GHz - 4.2 GHz), NVIDIA GEFORCE RTX 2080 Ti GPU, 64 GB RAM, Windows 10, custom manufactured by PC Specialist Ltd.) in an Anaconda virtual environment (Python v3.7.10) with DeepLabCut (v2.1.10.4) installed. The DLC-Live! SDK installation was coupled to Tensorflow-GPU (v1.13, with CUDA v10 and cuDNN v7.4). Integration of the Basler camera and the DLC-Live! GUI (DLG) utilised the Python wrapper pypylon (v1.7.2, Basler) to facilitate communication with the pylon Camera Software Suite through a Linux subsystem in Windows 10 (WSL Ubuntu, v20.04). The trained DeepLabCut network model was loaded into the DLG, which captures the data from the camera and performs real-time pose estimation on the incoming camera feed. Custom code was written in Python for each experimental design; this comprised the conditions that defined the behavioural protocol and controlled stimulation as required (see Extended Data Fig. 2).
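The sketch below illustrates how per-frame poses from DLC-Live! can be routed through a custom Processor that implements such conditions; the keypoint index and the stimulation callable are placeholders, and the actual experiment code is available in the repository listed under Data and code availability.

from dlclive import DLCLive, Processor

class StimulationProcessor(Processor):
    def __init__(self, trigger, stimulate):
        super().__init__()
        self.trigger = trigger        # e.g. a stillness trigger implementing the protocol conditions
        self.stimulate = stimulate    # callable that targets the galvanometers and fires the laser

    def process(self, pose, **kwargs):
        # pose is an array of (x, y, likelihood) rows, one per tracked keypoint.
        x, y, likelihood = pose[5]    # index of the targeted hind paw keypoint (assumed)
        if self.trigger.update(x, y, likelihood):
            self.stimulate(x, y)
        return pose

# Usage (illustrative):
# dlc_live = DLCLive("path/to/exported_model", processor=StimulationProcessor(trigger, stimulate))
# dlc_live.init_inference(first_frame)
# pose = dlc_live.get_pose(frame)  # called for every incoming camera frame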
Optical system characterisation
We characterised the latencies for real-time tracking, targeting, and stimulation. Control signals for the camera, mirror galvanometers, and laser were measured simultaneously at 100 kHz using a Digidata 1440a (Molecular Devices). During the exposure of each 5 ms frame, the tracking camera sent a voltage signal from its GPIO. The x- and y-axis scanner position outputs from the two mirror galvanometer drivers were used to monitor the movement of the mirrors. A 1 ms laser signal was sent to a microcontroller (Arduino UNO) to generate parallel digital outputs, which triggered the laser and monitored its timings. All four control signals were recorded during four 5-minute sessions with wild type C57BL/6J mice exploring a circular arena. The tracking camera was set to record at a resolution of 1100 pixels x 1100 pixels, with a 5 ms exposure at 30 fps. The processor code identified frames with a likelihood >0.8 for the ‘left_hindpaw_mid’ keypoint. The x,y pixel location was then converted to mirror galvanometer x,y voltage signals using the pixel-voltage coordinate dictionary. A multifunction DAQ device (USB-6002, National Instruments) was used to send these x,y voltages and subsequently a 1 ms command to the laser-triggering microcontroller. The laser was triggered only if more than 500 ms had passed since the previous stimulation. The camera signal confirmed an exposure time of 5 ms and a frame rate of 30 fps. The latency between camera acquisition and stimulation was calculated by collecting timestamps immediately after stimulation and comparing these to the timestamp of the frame on which pose estimation was carried out. The latency between the galvanometers moving and laser stimulation was determined by comparing the timings of galvanometer jumps and laser signals. This delay was 3.3 ± 0.5 ms (mean ± SD, for 245 trials). To synchronise the four voltage signals with frame and stimulation timestamps, we determined the timing of the first galvanometer jump when pose estimation was initialised.
The accuracy of real-time tracking for the ‘left_hindpaw_mid’ keypoint was assessed by manually identifying its coordinates (ground truth) and comparing these to the coordinates predicted by the DeepLabCut network model in real time on frames extracted from 5 videos of different mice exploring an open arena. Frames with a likelihood >0.8 were selected, as in experimental protocols. Euclidean distances were calculated pairwise between ground truth coordinates and model-generated coordinates, and averaged to give the mean average Euclidean error (MAE). The MAE between the predicted and actual coordinates was 1.36 mm (calculated on 1,281 frames).
The accuracy of body part targeting was determined using a high-speed Basler acA2000-165umNIR camera recording 648 pixel x 650 pixel frames at 270 fps during the 5-minute sessions described above. We used a >0.8 likelihood for the ‘left_forepaw’ keypoint in additional sessions. High-speed recordings captured each 1 ms laser pulse, and frames containing these pulses were identified using the reflection of the laser. We manually assessed 1,279 frames and classified them as a ‘hit’ or ‘miss’, and whether a ‘hit’ was on the targeted paw to quantify confusion during keypoint tracking.
The accuracy of hitting the body part depended on how fast the mice were moving. To demonstrate this, we segmented the keypoint series into four speed categories: stationary, low, medium, and high. Speed was calculated by taking the Euclidean distance the ‘tail_base’ keypoint moved in each frame, dividing this by the time elapsed, and smoothing the result with a 10-frame rolling mean filter. The speed histogram informed the category boundaries (<20, 20-120, 120-220, and >220 pixels per second for stationary, low, medium, and high, respectively). The accuracy of hitting the ‘left_hindpaw_mid’ keypoint was calculated on frames across each speed category: 156 frames for stationary, 159 frames for low, 155 frames for medium, and 187 frames for high. Similarly, the accuracy of hitting the ‘left_forepaw’ keypoint was calculated on 155 frames for stationary, 156 frames for low, 155 frames for medium and 156 frames for high.
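A sketch of this speed segmentation, assuming 30 fps and the pixel-per-second thresholds given above, is shown below; the function name and return values are illustrative.

import numpy as np
import pandas as pd

def categorise_speed(x, y, fps=30):
    # Per-frame speed of the tail-base keypoint, smoothed with a 10-frame
    # rolling mean and binned into the four categories used in the text.
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    step = np.hypot(np.diff(x, prepend=x[0]), np.diff(y, prepend=y[0]))
    speed = pd.Series(step * fps).rolling(10, min_periods=1).mean()  # pixels/s
    bins = [-np.inf, 20, 120, 220, np.inf]
    labels = ["stationary", "low", "medium", "high"]
    return speed, pd.cut(speed, bins=bins, labels=labels)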
The keypoint-laser spot error for the ‘left_hindpaw_mid’ keypoint was determined in the same frames by manually identifying the body part coordinates (ground truth) on frames immediately prior to stimulation and the coordinates of the laser spot on stimulation frames. These estimates were first made for all pre-stimulation frames and then for the set of stimulation frames. The mean average Euclidean error (MAE) was approximately 0.7 mm across all locomotion speed categories (463 frames).
Multi-chamber real-time pose estimation
To target and stimulate individual mice when multiple mice were present in chambers on the stimulation platform, we performed chamber-based cropping and subsequent real-time pose estimation. Nine mice were placed into nine chambers (100 mm x 100 mm wide, 120 mm tall); we monitored the motion in each chamber to find mice that were ‘idle’, cropped the camera feed to that chamber, estimated body parts, and targeted the laser to the hind paw coordinates.
The frame-to-frame absolute difference in pixel values (motion energy) was calculated in the region of interest for each individual chamber. Background noise was removed by discarding motion energy values below 10, and the mouse was defined as ‘idle’ if the summed motion energy remained below a specified threshold (30,000) for 2 seconds. Idle mice that had not been stimulated in the previous 10 seconds were pseudo-randomly selected and their chamber cropped. The pose estimation (x, y) coordinates generated by the DeepLabCut network model were used to target the laser to the hind paw. We modified the following scripts in the dlclive and dlclivegui packages in the DLC-Live! SDK to develop the multi-chamber real-time tracking approach: dlclive and utils (dlclive package) and pose_process (dlclivegui package).
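The following sketch illustrates the chamber-wise idle detection with the thresholds given above; the frame handling, ROI format, and function names are simplified for illustration and do not reproduce the modified DLC-Live! scripts.

import numpy as np

NOISE_FLOOR = 10          # per-pixel differences below this are discarded
IDLE_THRESHOLD = 30_000   # summed motion energy defining an 'idle' chamber
IDLE_FRAMES = 60          # 2 s at 30 fps

def chamber_motion_energy(prev_frame, frame, roi):
    # Summed absolute frame-to-frame difference within one chamber ROI,
    # with the background noise floor removed.
    y0, y1, x0, x1 = roi
    diff = np.abs(frame[y0:y1, x0:x1].astype(np.int16) -
                  prev_frame[y0:y1, x0:x1].astype(np.int16))
    diff[diff < NOISE_FLOOR] = 0
    return diff.sum()

def update_idle_counts(energies, counts):
    # Track how many consecutive frames each chamber has stayed below
    # threshold, and return the indices of chambers that qualify as idle.
    for i, e in enumerate(energies):
        counts[i] = counts[i] + 1 if e < IDLE_THRESHOLD else 0
    return [i for i, c in enumerate(counts) if c >= IDLE_FRAMES]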
Assembly of a naturalistic task
The maze was constructed of 3 mm matte black acrylic (200 mm in height) and measured 500 mm x 180 mm (inner dimensions). It was built around a single junction, with 40 mm wide corridors forming two chambers (70 mm x 100 mm) at either end of the junction corridors. The entrance to the maze was connected to a transparent acrylic chamber (100 mm x 100 mm, 130 mm tall). One-way doors (100 mm tall, 30 mm wide) were designed as push-through flaps cut from 0.5 mm styrene and secured to the door frame with butterfly pins. The one-way doors were positioned at the junction and at the exits of the reward chambers; this created a one-way system, so once a mouse exited either chamber it was required to go back around the maze and through the junction decision point to re-enter the reward chamber. Each chamber contained a rectangular opening (20.5 mm x 11.5 mm) through which a water delivery port (Sanworks mouse behaviour port) was fixed to the wall to allow the mouse to collect rewards. A water reward (∼5 μl of 10% sucrose water) was delivered when the mouse’s nose broke the IR beam in the water delivery port. The reward delivery system was controlled with an Arduino. In addition to a reward timeout period of 45 - 60 s, the mouse was required to leave the chamber before the water reward port was reset and another reward could be collected.
Behavioural protocols
Experimental room, arena and cleaning set-up
The experimental room was maintained at 21°C with relative humidity between 45 - 65%. All behaviour experiments on the system were performed in custom-built arenas laser cut from matte black acrylic and placed on the glass stimulation platform. Two infrared LED panels illuminated opposite sides of the arena to optimise lighting and achieve high contrast images. White noise at 68 dB was generated with custom Python code, through a L60 Ultrasound Speaker (Petterson Elektronic AB) via a second DAQ device (USB-6211, NI) and amplifier. The white noise played continuously through the duration of the habituation sessions and the experiment. The glass stimulation platform was cleaned twice with 70% ethanol, while the acrylic arena was cleaned twice with an odorless surface disinfectant between each animal to minimise olfactory cues. The lasers were targeted to the hind or fore paw glabrous skin in all experiments, contingent on meeting specific conditions defined in the protocol.
Habituation
Animals were placed in custom matte black acrylic chambers (100 mm x 100 mm, 80 mm in height) placed on a von Frey wire mesh grid and underwent two habituation sessions to the experimental room for 1 to 2 hours. Mice also underwent 1 to 2 handling sessions prior to experiments.
Minimal somatosensory stimulation in an open arena
Mice were placed in an acrylic arena painted matte black (500 mm outer diameter, 150 mm in height, 5 mm thick). Dividers (160 mm tall, 116 mm wide, matte black, 3 mm thick) were slotted onto the arena wall to separate the arena into 6 segments to enrich the environment. Mice were allowed to freely explore for 60 minutes. Individual 10 ms blue light laser pulses were remotely targeted to the left hind paw with a ≥10-minute interstimulus interval. Each stimulation was delivered contingent on the hind paw being still and not having been stimulated in the preceding 10 minutes. The hind paw was considered still when both the standard deviation of its keypoint (x,y) was <1 pixel and the likelihood of this keypoint was >0.8 throughout a 2 s period. Stimulation was repeated over two sessions on consecutive days. Data were collected from 26 mice in total from 5 different cohorts. 16 Trpv1::ChR2 mice were split into two groups: 10 mice received blue light stimulation, and 6 mice received no stimulation as controls. 10 littermate controls that received blue light stimulation were also used.
Somatosensory stimulation in a maze
Mice were first habituated to the maze without any doors during a 1 hour session. On three separate days following this, they underwent 3 training sessions with the one-way doors in place. Mice were water-deprived 16 to 18 hours prior to each experimental session to motivate the use of water rewards. A trial was defined as the mouse successfully collecting one reward; the collection of multiple rewards required the mouse to leave the reward chamber. Mice that had not made >10 trials by the third training session were excluded from the subsequent stimulation sessions due to poor engagement. 7 out of 12 Trpv1::ChR2 mice from 4 cohorts passed this criterion. As a proof-of-principle for precise contralateral stimulation in the context of a unilateral pain state, mice received 7 μL of complete Freund’s adjuvant (CFA) via intraplantar injection in the left hind paw. After baseline measurements of mechanical sensitivity, mice were injected with CFA and mechanical allodynia was evaluated in both hind paws 2 days following injection. Mechanical allodynia resulting from injection of CFA into the left hind paw was measured by von Frey testing (Up-Down method); CFA-injected mice showed significant mechanical allodynia compared to saline controls (P = 0.039 with Mann-Whitney test, n = 4 mice). Mice were placed in individual chambers (100 mm x 100 mm) on a wire mesh floor and habituated to the test setup prior to testing. The von Frey test was conducted blind to experimental groups. Mice underwent two stimulation sessions in the maze, in which optogenetic stimuli were delivered to the right (uninjected) hind paw in the stimulation zones. There were two stimulation protocols: the left corridor was paired with 3 ms laser pulses at 5 Hz and the right corridor was paired with 3 ms laser pulses at 1 Hz. Laser power density was 40 mW/mm2. Training and experimental sessions lasted 1-2 hours.
Somatosensory stimulation in multiple chambers
Nine mice were placed in 100 mm x 100 mm individual chambers in a 3 × 3 configuration, covered by a lid. Mice were habituated to the chambers atop the glass stimulation platform for two hours in two sessions prior to the first experimental day.
For the experiment with thermal stimulation, 18 C57BL/6J mice from 2 cohorts were used. Mice were placed in the chambers for 2 hours, and a 10 s laser pulse was targeted to one of the hindpaws, with up to 10 stimulations on each paw >1 minute apart. IR laser spot size was 2.2 mm2 and the optical power was set to 1.4 W in the first cohort of mice and to 1.65 W in the second cohort of mice to elicit paw responses between 10 - 12 s.
For the experiment with transdermal optogenetic stimulation, 9 Trpv1::ChR2 and 9 littermate controls from two cohorts were used. Mice were similarly placed in the chambers for 2 hours, with optogenetic stimulations delivered to each hindpaw >1 minute apart. The stimulation protocol comprised 6 conditions: a single pulse stimulation at 40 mW/mm2, and a train of pulses (3 ms pulses at 10 Hz for 10 s) at 8, 4, 2, 1 and 0.5 mW/mm2. Spot size was 2.0 mm2. The order of stimulation intensity was pseudorandomised with Euler tours (65).
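As an illustration of this counterbalancing idea, the sketch below generates a condition order in which every ordered pair of stimulation conditions appears exactly once as a transition (an Euler circuit on the complete directed graph of conditions, found with Hierholzer's algorithm); this is a generic implementation of the approach and not necessarily the exact procedure of reference 65.

import random

def euler_tour_order(conditions, seed=0):
    random.seed(seed)
    # Complete directed graph: one edge for every ordered pair of distinct conditions.
    edges = {c: [d for d in conditions if d != c] for c in conditions}
    for c in edges:
        random.shuffle(edges[c])
    # Hierholzer's algorithm for an Euler circuit.
    stack, tour = [conditions[0]], []
    while stack:
        node = stack[-1]
        if edges[node]:
            stack.append(edges[node].pop())
        else:
            tour.append(stack.pop())
    return tour[::-1]

# Example (illustrative labels): euler_tour_order(["40", "8", "4", "2", "1", "0.5"])
# returns a trial sequence of intensities in which each ordered transition occurs once.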
Data analysis
Data compression, analysis and visualisation
Videos were acquired in AVI format and fed through offline DeepLabCut pose estimation to generate (x,y) coordinates and likelihoods for each body part. For the analysis of the recordings with multiple chambers, AVI video files were converted to MP4 format using H.264 compression. The MP4 video files were cropped into individual mouse chambers (230 × 230 pixels) before running pose estimation. Analyses were based on the position of the hind paw or tail base coordinates. All analysis code was written in Python 3 (v3.9.7), using the NumPy, Pandas and OpenCV packages. Data were visualised using the Matplotlib and Seaborn packages. Figure schematics in Figs. 3 and 5 were created using BioRender.com and renderings in Figs. 2 and 4 were created using Solidworks.
Calculation of paw response latency with motion energy
Motion energy was computed by taking the difference between neighbouring frames, removing background noise below 50, and taking the mean. The trial window extended from 1 second before to 10 seconds after the initiation of the thermal stimulation. A trial was considered a response if motion energy exceeded 0.32 in the trial window beginning 0.5 seconds after stimulation onset, so that the stimulation artefact was not included. The response probability was determined from the number of responses recorded for each paw for every mouse. The response latency was calculated for each response by taking the time point at which the motion energy first exceeded the response threshold within the stimulation window and subtracting the stimulation onset time point.
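A minimal sketch of this response and latency computation is given below, using the threshold and artefact-exclusion window described above; the array layout and function name are illustrative.

import numpy as np

def response_latency(motion_energy, t, stim_onset, threshold=0.32, blank=0.5):
    # Return the latency (s) of the first supra-threshold sample after the
    # artefact-exclusion window, or None if the trial is not a response.
    window = (t >= stim_onset + blank) & (t <= stim_onset + 10.0)
    idx = np.flatnonzero((motion_energy > threshold) & window)
    return None if idx.size == 0 else t[idx[0]] - stim_onset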
Calculation of paw response latency with pose estimation
The body part coordinates during the trial windows (2 seconds prior to, and 10 seconds following, the onset of the optogenetic stimulation) were used to calculate Euclidean distances from the baseline coordinate of the body part, which was taken 2 seconds prior to the onset of the optogenetic stimulation (the beginning of the stimulation trial window). Analysis was conducted on the keypoint on the hind paw toes, using coordinates with >0.8 likelihood values, to reduce stimulation artefacts from light delivered to the centre of the hind paw. If the keypoint moved more than 3 pixels within the stimulation trial window, the trial was classified as a response. For these responses, the latency was determined by taking the time at which the movement first exceeded 3 pixels, relative to the stimulation onset time.
Calculation of speed
The estimated tail base coordinates were used to visualise trajectories in the open arena and maze. These estimated coordinates were used if the likelihood was >0.8 (open arena) or >0.85 (maze). Tracking errors were removed when the Euclidean distance jumped >30 pixels in a single frame, and linear interpolation was performed using the 3 frames either side of the removed values. Speed was calculated by dividing the Euclidean distance moved between frames (Δd) by the corresponding difference in frame times (Δt) and converting to mm/s using the scale factor calculated above. For the maze, we calculated speed (vigour) by capturing each ‘corridor run’ from the point at which the tail base entered the corridor to when it exited the corridor. Speed for the corridor run was calculated within this time window as above.
Statistical analysis
Statistical analysis was performed in Python, with the SciPy, Statsmodels and Pingouin packages. Normality was determined using the Shapiro-Wilk normality test. The specific tests used for each comparison are detailed in the text. Statistical significance was considered as P <0.05. Data are reported as mean ± standard error of the mean (SEM) unless stated otherwise. The mouse was the experimental unit.
Data and code availability
The system design and code for mapping and control are available at https://github.com/browne-lab/closed-loop-somatosensory-stimulation.
Contributions
L.E.B. supervised and conceptualised the project. L.E.B. and I.P. designed the experiments, built the closed-loop system, and wrote code. Q.G. set up the reward systems. A.S-P. supported initial experiments. I.P. conducted the experiments. I.P. and L.E.B. analysed data and wrote the manuscript. All authors reviewed the manuscript.
Supplementary Information
Acknowledgements
We thank Patrick Haggard and Andrew MacAskill for comments on the manuscript. This work was supported by a Sir Henry Dale Fellowship jointly funded by the Wellcome Trust and the Royal Society (109372/Z/15/Z) and by funding from the Medical Research Council (MR/N013867/1).