ABSTRACT
It has been a major challenge to systematically evaluate and compare how pharmacological perturbations influence social behavioral outcomes. Although some pharmacological agents are known to alter social behavior, precise description and quantification of such effects have proven difficult. The complexity of brain functions regulating sociality makes it challenging to predict drug effects on social behavior without testing in live animals, and most existing behavioral assays are low-throughput and provide only unidimensional readouts of social function. To achieve richer characterization of drug effects on sociality, we developed a scalable social behavioral assay for zebrafish named ZeChat based on unsupervised deep learning. High-dimensional and dynamic social behavioral phenotypes are automatically classified using this method. By screening a neuroactive compound library, we found that different classes of chemicals evoke distinct patterns of social behavioral fingerprints. By examining these patterns, we discovered that dopamine D3 agonists possess a social stimulative effect on zebrafish. The D3 agonists pramipexole, piribedil, and 7-hydroxy-DPAT-HBr rescued social deficits in a valproic acid-induced zebrafish autism model. The ZeChat platform provides a promising approach for dissecting the pharmacology of social behavior and discovering novel social-modulatory compounds.
INTRODUCTION
Sociality is broadly conserved across the animal kingdom, facilitating cooperation, reproduction, and protection from predation. In humans, social dysfunction is a hallmark of several neuropsychiatric disorders, including autism, schizophrenia, bipolar disorder, and Williams syndrome. In particular, social communication impairment is considered a core symptom of autism. Despite its importance, we lack a comprehensive understanding of how the diverse classes of neuroactive drugs impact social behavior. This is evidenced by the fact that although certain antipsychotic, antidepressant, and stimulant medications are used clinically to help manage some symptoms of autism1,2, no treatment is currently available to ameliorate the disease-relevant social deficits.
It has been a major challenge to comprehensively assess and compare how chemicals affect complex behaviors such as sociality. Simple in vitro assays cannot effectively model drug effects on whole organisms, especially on brain activity. Rodent models lack sufficient throughput and are cost-prohibitive for a comprehensive examination of the hundreds of neuroactive drugs currently available, limiting their use to small-scale hypothesis-driven testing. On the other hand, the zebrafish has become an increasingly important model organism for social behavioral research3, and recent developments in zebrafish behavioral profiling have demonstrated a promising alternative approach to meeting this challenge. Indeed, multidimensional behavioral profiling in zebrafish has been used to systematically assess thousands of chemicals for effects on motor responses4,5, rest/wake behavior6, and appetite7.
Current methods of social behavioral analysis in zebrafish are mostly limited to quantifying the average measurement of a human-defined simple trait such as social preference8, social orienting9, and group cohesion10, or a collection of several simple traits11, with limited throughput. Restricted by their unidimensional nature, these measurements often fail to adequately capture the complexity and multidimensionality of social behavior in space and time. To comprehensively assess social behavior for behavioral profiling, we sought to develop an automated method to classify the real-time dynamics of social behavior based solely on information provided by the data, without any human intervention, in a scalable format. To achieve this goal, we adopted an unsupervised deep learning approach: deep learning based on a convolutional autoencoder automatically extracts social-relevant features from a behavioral recording, while unsupervised learning allows for unbiased classification of real-time behavioral phenotypes; both processes operate free of human instruction.
Here, we report a fully automated and scalable social behavioral profiling platform named ZeChat. Built on an unsupervised deep learning backbone, ZeChat embeds the high-dimensional and dynamic social behavioral data into a 2-dimensional space and assigns the embedded datapoints to distinct behavioral categories, thus converting a fish’s entire social behavioral recording into a behavioral fingerprint in the form of a numerical vector. Screening 237 known neuroactive compounds using the ZeChat system generated a rich set of social-relevant behavioral phenotypes, which enabled unbiased clustering and classification of drug-treated animals. Based on the social behavioral profile compiled from the screen, we discovered a social stimulative effect of dopamine D3 receptor agonists (D3 agonists). Acute exposure to D3 agonists rescued social deficits in a valproic acid-induced zebrafish autism model. Our results demonstrate that multidimensional social behavioral phenotypes can be distilled into simple behavioral fingerprints to classify the effects of psychotropic chemicals on sociality.
RESULTS
Rationale and overview of the ZeChat behavioral analysis framework
The ZeChat workflow is summarized in Figure 1a. We probed social interaction in a 2-chamber setup, in which each fish swims freely in a square arena with visual access to its partner fish through a transparent window. In this setup, a fish’s position inside the arena, as well as its posture and movement dynamics, were deemed relevant for social interaction. Inspired by Berman et al.12, we sought to describe social behavior as a point moving through a high-dimensional space of positional, postural, and motional features, and to assign segmented subspaces to sub-behaviors. First, a preprocessing step distilled social-relevant information from the recorded images. A convolutional autoencoder then unbiasedly extracted key features from the preprocessed images into a latent vector, which was then projected onto its first 40 principal components. We converted the time series of each principal component to a wavelet spectrogram to incorporate behavioral dynamics into a feature vector. Finally, each feature vector was embedded into a 2-dimensional map and classified into distinct behavioral categories.
Social-relevant information can be extracted via behavioral recording and image preprocessing
The zebrafish becomes socially active at 3 weeks of age8 while remaining small in size (~ 1 cm long), enabling us to visualize social interaction in a confined space. To allow easy separation of individual fish for subsequent analysis, pairs of fish were each placed in a separate 2 cm × 2 cm arena and allowed to interact only through a transparent window (Supplementary Fig. 1a; Supplementary Video 1). A custom-built high-throughput imaging platform was used to record 40 pairs of fish simultaneously with sufficient spatiotemporal resolution to capture dynamic changes of the fish’s postures and positions (Fig. 1b-c & Supplementary Video 2). Sexual dimorphism is not readily apparent at this stage, so fish were paired without sex distinction.
For image preprocessing, images of each arena were cropped with the transparent window always in the upright position to preserve the fish’s positional information. Each fish was first tracked and isolated from the background (Fig. 1d & Supplementary Video 3: tracked). Consecutive frames were then subtracted, with the resulting silhouette capturing postural changes between frames (Fig. 1d & Supplementary Video 3: silhouette). In parallel, we colored each fish based on its instantaneous direction and velocity of movement calculated by dense optical flow13 (Fig. 1d & Supplementary Video 3: dense optical flow). Finally, each dense optical flow image was masked by its corresponding silhouette to generate a merged image (Fig. 1c & Supplementary Video 3: merge; Supplementary Fig. 1c).
Preprocessed images can be transformed to feature vectors by feature extraction and time-frequency analysis
Without any human intervention, convolutional autoencoders can automatically “learn” to extract useful features from input images into a latent vector, which is then used to reconstruct these images. We therefore used this deep learning architecture to extract key features of the preprocessed images into the latent vector for subsequent analyses. As part of the initial setup, we first pre-trained the convolutional autoencoder using a training set of preprocessed images (Fig. 1e & Supplementary Fig. 1d). The resulting latent vectors were projected onto the first 40 principal components by a principal component analysis (PCA), preserving ~ 95% of the total variance. When running the ZeChat analysis, preprocessed images were converted to time series of 40 principal components by the pre-trained autoencoder and PCA models.
Behaviors unfold over time, so temporal dynamics must be taken into account to properly interpret information extracted from the behavioral recordings. To embed time-related information into the final feature vector, we adopted the method of applying continuous wavelet transform (CWT) to the time series of each of the 40 principal components to capture oscillations across many timescales12. From the 40 resulting spectrograms, 25 amplitudes at each timepoint were concatenated into a single vector of length 40 × 25. At this point, each original recorded frame had been converted to a single 1,000-dimensional feature vector (Supplementary Fig. 2).
Feature vectors are assigned to behavioral categories by nonlinear embedding and classification
Finally, we adopted a method developed by Berman et al.12, with modifications, to assign feature vectors to behavioral categories through nonlinear embedding and classification. The high-dimensional feature vectors were embedded into a 2-dimensional space by nonlinear dimensionality reduction using t-distributed stochastic neighbor embedding (t-SNE)14. Due to computational limitations, we first embedded a small subset of randomly sampled feature vectors to create a reference map. Because t-SNE is non-parametric, we applied a parametric variant of t-SNE named kernel t-SNE15 to embed additional datapoints onto the reference map. We named the resulting 2-dimensional behavioral space the ZeChat map (Fig. 1f).
Calculating the probability density function (PDF) of the ZeChat map identified regions with high datapoint density as local maxima (Fig. 1g), marking the locations of potential behavioral categories12. We segmented the ZeChat map into 80 regions based on the locations of the local maxima using a watershed transform algorithm, allowing each original recorded frame – now embedded as a datapoint in the ZeChat map – to be assigned to a particular behavioral category (Fig. 1h).
The pause-move dynamic of ZeChat map
We made videos to help visualize how a fish’s real-time behavioral changes translate to datapoint trajectories on the ZeChat map (Supplementary Video 4). We found that the trajectory of the 2-dimensional embedding alternates between sustained pauses within certain regions of the map and rapid movements from one region to a distant region on the map. Plotting the velocity of the trajectory revealed a “pause-move” dynamic (Fig. 2a). The low-velocity points were localized in distinguishable peaks that often overlapped with the ZeChat map’s local maxima (Fig. 2b & 1g). In contrast, the high-velocity points were more uniformly distributed (Fig. 2b). This result supports the idea that the social-relevant behavioral changes can be represented by a course through a high-dimensional space of postural, motional, and positional features in which the course halts at locations that correspond to discrete sub-behaviors12.
Neuroactive compound screening reveals diverse social behavioral responses
To systematically assess how neuroactive compounds modulate social behavior, we conducted a screen of 237 compounds including modulators of the dopamine, serotonin, and opioid-related pathways. These pathways were selected because they have been implicated in influencing social behavior16–18. Briefly, 3-week-old juvenile fish were treated with compounds by bath exposure for 1-3 hours prior to ZeChat recording. Ten fish were treated with each compound, and fish treated with the same compound were paired with each other for ZeChat recording (Fig. 3a). A set of DMSO control fish was included in every recording.
Counting the number of times a fish’s behavior was classified into each behavioral category generated a behavioral fingerprint in the form of an 80-dimensional numerical vector. Fish treated with the same compound showed highly similar behavioral fingerprints (Fig. 3b), suggesting that the behavioral fingerprints produced by a given compound are consistent across multiple individual animals. To consolidate data, we combined the behavioral fingerprints of fish treated with the same compound by keeping the median value of each behavioral category. All 237 consolidated behavioral fingerprints plus DMSO controls were normalized, and the medians of DMSO controls were subtracted from all samples to help visualize changes in behavioral fingerprints relative to wild type behavior.
Hierarchical clustering revealed a diversity of behavioral responses (Fig. 4 & Supplementary Fig. 3). We found that compounds belonging to the same functional class consistently evoked highly similar behavioral fingerprints (Fig. 5a and Supplementary Fig. 4 & 5). To compare the typical behavioral fingerprints of major drug classes, we calculated the median value of each behavioral category across all behavioral fingerprints elicited by functionally similar molecules. Only drug classes with no fewer than 3 compounds tested in the screen were included in this analysis. Hierarchical clustering of the resulting behavioral fingerprints again revealed distinct behavioral phenotypes (Fig. 5b). Remarkably, compounds targeting the 3 major neurotransmitter pathways, i.e., the serotonin, dopamine, and opioid pathways, were naturally separated by hierarchical clustering (Fig. 5b: functional classes of drugs are color coded to distinguish the 3 major pathways).
Dopamine D3 receptor agonists rescue social deficits in a VPA-induced autism model
Surprisingly, we noticed that the dopamine D1, D2, and D3 receptor agonists clustered well apart from each other (Fig. 5b: black arrows), suggesting that selective activation of the dopamine D1, D2, and D3 receptor-related neuronal circuits elicits distinct social behavioral phenotypes. The five D3 agonists tested in the 237-compound screen generated highly similar behavioral fingerprints sharing a unique pattern in which strong signals are observed in the higher-number behavioral categories (Fig. 6a-b). In contrast, the D1 and D2 agonists elicited very different behavioral fingerprints with no enrichment in these higher-number behavioral categories (Fig. 6a). By examining raw behavioral recordings, we noticed that the D3-agonist-treated fish tended to spend a significant amount of time swimming intensively while pressing against the transparent window. Compared to wild type animals, these fish demonstrated persistent and strong high-frequency tail beats, fast swim velocity, and quick and frequent turns; they also rarely retreated from proximity to the transparent window (Supplementary Video 5 and Fig. 6c). We hypothesized that these D3 agonist-associated behaviors may signify enhanced sociality.
We attempted to validate the hypothesized social stimulative property of D3 agonists in a zebrafish autism model with a social deficit phenotype. Embryonic exposure to valproic acid (VPA) is an established model of autism in rodents19 and zebrafish20. Using a simple zebrafish social preference assay21, we observed a clear social deficit phenotype in VPA-treated zebrafish (Supplementary Fig. 6a). To test the effect of D3 agonists against social deficits, we acquired 3 structurally diverse D3 agonists, pramipexole, piribedil, and 7-hydroxy-DPAT-HBr (Supplementary Fig. 6b). Both pramipexole and piribedil are FDA-approved antiparkinsonian agents. We found that exposure to D3 agonists for 1 hour by simple submersion prior to the social preference assay effectively rescued the social deficit in the VPA-treated fish (Fig. 6d).
DISCUSSION
ZeChat is a deep learning-based behavioral assessment tool enabling scalable and low-cost zebrafish behavioral profiling to characterize changes in sociality. The in vivo ZeChat platform combines advantages of in vitro and rodent models, enabling scalable testing with high behavioral resolution. Compared to previous zebrafish behavioral profiling methods, the ZeChat analysis method specifically processes and analyzes social behavior-relevant information, linking known neuroactive drugs with complex but distinct social behavioral outcomes.
Apart from unsupervised machine learning, alternative approaches are available for improving the resolution of social behavioral analysis, but not without drawbacks. For example, supervised machine learning methods have been widely adopted to analyze social interactions in fruit fly22,23, zebrafish24, and mouse25. However, these methods still rely on human interpretation of animal behavior to classify and assign behaviors and are likely unable to fully reveal the complexity and subtlety of social behavior. Another approach uses predefined measurement criteria to mathematically model and classify social interaction26,27, which reduces human bias in the analysis, but the quality of its outcome is highly dependent upon the validity of the model. In comparison, unsupervised methods have successfully revealed stereotypic behavioral motifs in individual animals of C. elegans28–34, fruit fly12,35–39, zebrafish40–42, and mouse43,44, as well as paired interactions in fruit fly45,46, without any human intervention or a priori assumptions, providing a viable approach for our purpose.
However, all these approaches still rely on manual selection of features for data preprocessing, which requires strong domain knowledge of the behaving animal. These prerequisites are not always met, especially for complex problems such as analyzing subtle behavioral changes in a video or analyzing sequences of behaviors, as it is difficult for a human observer to exhaustively extract useful features from an image or a sequence of images. Deep learning methods, on the other hand, can automatically learn to extract abstract features from images. As behavioral recordings are sequences of images, the potential benefit of applying deep learning to process these data is apparent. In fact, several recent studies have successfully utilized deep learning to facilitate individual animal identification47, tracking48, and movement prediction49 in zebrafish, paving the way for its application in ZeChat.
In alignment with our findings, the D3 receptor has previously been implicated in social behavioral regulation. In humans, pramipexole alleviates social anxiety in selective serotonin reuptake inhibitor (SSRI)-treated patients50. In rodents, the D3 agonists 7-OH-DPAT and PD 128907 have been reported to cause a variety of complex alterations in social behavior51,52. Further investigations are needed to validate these findings in rodents using other D3 agonists and under different test conditions, drug doses, and genetic backgrounds, but the results in zebrafish, rats, and humans all point to an important role of D3 receptors in modulating social behavior. In addition, because both pramipexole and piribedil are FDA-approved antiparkinsonian agents, it may be worthwhile to examine their impact on the social behavior of patients receiving these drugs.
Future studies using the ZeChat platform may expand to screening other neuroactive compounds, compounds with no known neuroactivity, and uncharacterized compounds, in the hope of identifying additional phenotypes and drug classes with social-modulatory properties. The characteristic behavioral fingerprint of the D3 agonists may be used to discover novel compounds with similar behavioral effects. In addition to wild type fish, fish carrying mutations relevant to human psychiatric disorders can also be assayed, and their behavioral fingerprints compared to the neuroactive compound clustergram to associate genetic mutations with perturbations of neuronal pathways. As demonstrated by Hoffman et al.53, small molecules evoking an anti-correlated behavioral fingerprint may ameliorate social deficits in the mutant fish. Hence, by providing a rapid, high-resolution means of characterizing and categorizing zebrafish with altered social behaviors, ZeChat represents a useful tool for investigating the role of genes and pharmacological agents in modulating complex social behaviors.
MATERIALS AND METHODS
The ZeChat imaging system setup
The basic unit of this system is a 10 mm deep, 20 mm wide, and 41.5 mm long (internal dimensions) rectangular chamber with 2 mm thick walls. A 10 × 4 array consisting of 40 independent testing units was 3D printed using white PLA at 100% infill. The printed test arena was glued onto a 3/16” thick white translucent (43% light transmission) acrylic sheet (US Plastic) using a silicone sealer (Marineland). Each unit was then divided into two square-shaped compartments by inserting a 1.5 mm thick transparent acrylic window – precision cut to 10 mm × 41 mm pieces using a laser cutter – into 0.5 mm deep printed slots located in the middle of each unit on the side of the 41.5 mm wall, and fastened using the silicone sealer.
The key component of the imaging system is a 322 mm diameter bi-telecentric lens (Opto Engineering) with an IR (850 nm) bandpass filter (Opto Engineering). A telecentric lens only passes light that is parallel to the optical axis, thus avoiding parallax error and enabling all test units – whether located in the middle or close to the edge of the field of view – to be imaged without distortion. Videos were taken at 50 frames per second (fps) by a 75 fps Blackfly S Mono 5.0 MP USB3 Vision camera (PointGrey) with a resolution of 2448 × 2048. The tail beat frequency (TBF) of adult zebrafish is ~ 20 Hz54; therefore, images taken at 50 Hz should adequately sample motion-relevant features based on the Nyquist–Shannon sampling theorem. The imaging platform was back-illuminated with an infrared (850 nm) LED array (EnvironmentalLights) to provide light for video recording. The infrared LED array was positioned on top of a heat sink (H S Marston). The imaging platform was also illuminated from two opposing sides using white LED arrays (EnvironmentalLights) to provide ambient light for the test subjects. Structural supports and the enclosure were custom built using parts purchased from Thorlabs, McMaster Carr, and US Plastic.
ZeChat test
Test subjects were individually placed into each unit – one on each side of the transparent window – using a transfer pipette with its tip cut off. Their visual access to each other was temporarily blocked by a 3D-printed non-transparent comb-like structure (Supplementary Fig. 1b) prior to each recording session. Once all test subjects were placed into the test arenas, the entire test apparatus was transferred into the imaging station and the combs were removed to allow visual access between each pair of fish.
The 2-compartment social interaction setup allows the behavior of each fish to be recorded and analyzed independently, without requiring complex, computationally demanding, and time-consuming tracking procedures to separate individual fish. Videos were streamed and recorded using the software Bonsai55. A 10 min session was video recorded for each test. To give fish an acclimation period at the beginning of each test, and because the effects of some drugs tend to wear off quickly, only the 5 min video segment between 2.5 min and 7.5 min was used for subsequent analyses. All subsequent data processing and analyses were conducted in Python using packages including OpenCV, scikit-learn, Keras, PyWavelets, and imutils.
Data preprocessing
For data preprocessing, individual fish were first separated from the background using the K-nearest neighbors method56. A separate video segment, containing a recording of the entire square compartment in which the fish was located, was cropped out for each fish. Because the relative position of a fish within its compartment is relevant to social interaction dynamics, each compartment was analyzed as a whole. Moreover, because each compartment is polarized, with only one of its four sides being transparent to the partner fish, the video of the fish in the “top” compartment of each pair was rotated 180 degrees to match the orientation of the video of the “bottom” compartment, so that the side of the compartment facing the transparent window always faces upward in each video.
To capture changes in each fish’s posture between consecutive frames, we subtracted every current frame from its previous frame. The resulting images were binary-thresholded to generate silhouette-like masks. In parallel, we calculated each fish’s direction of movement between consecutive frames using the Farnebäck method of dense optical flow13 and used this information to color the fish; motionless fish appear dark after applying this method, thus restricting our analysis to fish in motion. Finally, we applied the mask acquired by subtracting consecutive frames to the dense optical flow image, so that the image colored by dense optical flow is cropped by the subtracted silhouette-like mask.
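For illustration, a minimal OpenCV sketch of this preprocessing sequence is shown below; the threshold value and Farnebäck parameters are placeholders rather than the exact ZeChat settings.

```python
import cv2
import numpy as np

# Background subtractor based on the K-nearest neighbors method.
bg_subtractor = cv2.createBackgroundSubtractorKNN()

def preprocess_pair(prev_gray, curr_gray):
    """Return a dense optical flow image masked by the frame-difference silhouette.

    prev_gray, curr_gray: consecutive grayscale frames of one cropped compartment.
    Threshold and Farneback parameter values are illustrative only.
    """
    # Isolate the fish from the static background.
    fg_mask = bg_subtractor.apply(curr_gray)

    # Silhouette of postural change: frame difference restricted to the foreground.
    diff = cv2.absdiff(curr_gray, prev_gray)
    _, silhouette = cv2.threshold(diff, 10, 255, cv2.THRESH_BINARY)
    silhouette = cv2.bitwise_and(silhouette, fg_mask)

    # Dense optical flow (Farneback) encodes each pixel's direction and speed of motion.
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag, ang = cv2.cartToPolar(flow[..., 0], flow[..., 1])
    hsv = np.zeros((curr_gray.shape[0], curr_gray.shape[1], 3), dtype=np.uint8)
    hsv[..., 0] = ang * 180 / np.pi / 2                              # hue encodes direction
    hsv[..., 1] = 255
    hsv[..., 2] = cv2.normalize(mag, None, 0, 255, cv2.NORM_MINMAX)  # value encodes speed
    flow_bgr = cv2.cvtColor(hsv, cv2.COLOR_HSV2BGR)

    # Mask the colored optical flow image with the silhouette to obtain the merged image.
    return cv2.bitwise_and(flow_bgr, flow_bgr, mask=silhouette)
```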
Training the convolutional autoencoder and feature extraction
The architecture of the convolutional autoencoder consists of three encoding layers containing 64, 32, and 16 filters, respectively, and three decoding layers containing 16, 32, and 64 filters, respectively. We used a training set of preprocessed images to pre-train the convolutional autoencoder. The preprocessed images, with a dimension of 220 × 220 pixels, were first resized to 56 × 56 pixels to reduce computational requirements. Because a wild type fish typically spends most of the time interacting with its paired fish by staying close to the transparent window, causing the position of the fish in input images to be highly polarized, we enriched the training dataset by rotating each resized image by 90°, 180°, and 270° to generate input images with more postural and positional variations.
The autoencoder forces input images to pass through a “bottleneck” before reconstruction. The bottleneck, or the latent representation space, has a dimension of 784. We then applied principal component analysis (PCA) to this 784-dimensional feature vector and extracted 40 principal components which preserved ~ 95% of total variance.
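A minimal Keras sketch of this architecture and the subsequent PCA step is given below. The filter counts, 56 × 56 input size, 784-dimensional bottleneck, and 40 retained principal components follow the description above; the kernel sizes, activations, number of input channels, loss, and optimizer are assumptions.

```python
from tensorflow.keras import layers, models
from sklearn.decomposition import PCA

# Encoder: three convolutional layers (64, 32, 16 filters) with 2 x 2 pooling.
# With 56 x 56 inputs, the bottleneck is 7 x 7 x 16 = 784 values.
inp = layers.Input(shape=(56, 56, 3))
x = layers.Conv2D(64, 3, activation='relu', padding='same')(inp)
x = layers.MaxPooling2D(2, padding='same')(x)
x = layers.Conv2D(32, 3, activation='relu', padding='same')(x)
x = layers.MaxPooling2D(2, padding='same')(x)
x = layers.Conv2D(16, 3, activation='relu', padding='same')(x)
encoded = layers.MaxPooling2D(2, padding='same')(x)          # 7 x 7 x 16 bottleneck

# Decoder: three convolutional layers (16, 32, 64 filters) with upsampling.
x = layers.Conv2D(16, 3, activation='relu', padding='same')(encoded)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(32, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D(2)(x)
x = layers.Conv2D(64, 3, activation='relu', padding='same')(x)
x = layers.UpSampling2D(2)(x)
decoded = layers.Conv2D(3, 3, activation='sigmoid', padding='same')(x)

autoencoder = models.Model(inp, decoded)
encoder = models.Model(inp, layers.Flatten()(encoded))        # 784-dimensional latent vector
autoencoder.compile(optimizer='adam', loss='binary_crossentropy')
# autoencoder.fit(train_images, train_images, epochs=..., batch_size=...)

# Project the 784-dimensional latent vectors onto their first 40 principal components,
# which preserve ~95% of the total variance.
pca = PCA(n_components=40)
# principal_components = pca.fit_transform(encoder.predict(train_images))
```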
Time-frequency analysis of feature dynamics
Calculating the 40 principal components for each video frame yields 40 time series per video. Each time series was then expanded into a spectrogram by applying the continuous wavelet transform (CWT). The Morlet wavelet was used as the mother wavelet, and 25 scales were chosen to match frequencies spanning from 0.38 Hz to 5 Hz, with the range of frequencies empirically determined to preserve the strongest signals. The time-frequency representation augments the instantaneous representation by capturing oscillations across many timescales. The spectral amplitudes at each time point were then concatenated into a vector of length 40 × 25, giving rise to a 1,000-dimensional representation for each original video frame. Each 1,000-dimensional vector was normalized to sum to 1 so that it could be treated as a probability distribution in subsequent calculations.
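A PyWavelets sketch of this step is shown below; the linear spacing of the 25 target frequencies and the use of the 50 Hz frame rate for scale conversion are assumptions, while the Morlet wavelet, the 0.38–5 Hz range, and the normalization to unit sum follow the description above.

```python
import numpy as np
import pywt

fps = 50.0                                  # recording frame rate
dt = 1.0 / fps
freqs = np.linspace(0.38, 5.0, 25)          # 25 target frequencies in Hz (spacing assumed)
scales = pywt.central_frequency('morl') / (freqs * dt)   # corresponding Morlet scales

def frame_feature_vectors(pc_timeseries):
    """pc_timeseries: array of shape (n_frames, 40) holding the 40 principal components.

    Returns an array of shape (n_frames, 1000): for each frame, the wavelet amplitudes
    of all 40 components at 25 scales, normalized to sum to 1.
    """
    spectrograms = []
    for pc in pc_timeseries.T:              # iterate over the 40 principal components
        coeffs, _ = pywt.cwt(pc, scales, 'morl', sampling_period=dt)
        spectrograms.append(np.abs(coeffs))                 # amplitudes, shape (25, n_frames)
    stacked = np.concatenate(spectrograms, axis=0)          # shape (40 * 25, n_frames)
    vectors = stacked.T                                     # one 1,000-D vector per frame
    return vectors / vectors.sum(axis=1, keepdims=True)     # treat as probability distributions
```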
Nonlinear embedding and segmentation
We then performed nonlinear dimensionality reduction on these high-dimensional vectors using the popular nonlinear manifold embedding algorithm t-distributed stochastic neighbor embedding (t-SNE)14. We randomly selected and embedded 3,000 feature vectors from 60 fish to generate a reference map. Because the t-SNE algorithm is non-parametric, additional datapoints were embedded onto the reference map using a parametric kernel t-SNE15 method to form the ZeChat map. As the feature vectors are normalized and treated as probability distributions, we used the Jensen–Shannon distance (the square root of the Jensen–Shannon divergence) between each pair of vectors as the distance metric for both t-SNE and kernel t-SNE. We chose the Jensen–Shannon distance because it is symmetric and bounded between 0 and 1, which avoids the generation of infinite values.
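A SciPy/scikit-learn sketch of the reference-map embedding is shown below; kernel t-SNE is not part of scikit-learn and is only indicated in a comment, and the t-SNE hyperparameters are left at their defaults as an assumption.

```python
from scipy.spatial.distance import pdist, squareform, jensenshannon
from sklearn.manifold import TSNE

def embed_reference_map(vectors, random_state=0):
    """vectors: (3000, 1000) array of normalized feature vectors sampled from 60 fish."""
    # Pairwise Jensen-Shannon distances: symmetric and bounded between 0 and 1.
    dists = squareform(pdist(vectors, metric=jensenshannon))
    tsne = TSNE(n_components=2, metric='precomputed', init='random',
                random_state=random_state)
    return tsne.fit_transform(dists)        # 2-dimensional reference (ZeChat) map

# Additional frames are then projected onto this reference map by kernel t-SNE
# (a parametric out-of-sample extension, implemented separately), again using
# Jensen-Shannon distances between new vectors and the reference vectors.
```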
We calculated the probability density function (PDF) of this map by convolving it with a Gaussian kernel. Due to computational limitations, this calculation was conducted using a ZeChat map containing 10,000 randomly selected datapoints. The resulting probability density map was then inverted to turn local maxima into “valleys”. The “ridges” between valleys were detected using a Laplacian transform. Finally, a watershed transform was applied to mark the borders between valleys, unbiasedly segmenting the ZeChat map into 80 behavioral categories.
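The sketch below illustrates the density estimation and segmentation using scikit-image; for simplicity it seeds the watershed directly from local maxima of the density rather than using the Laplacian-based ridge detection described above, and the grid resolution, kernel width, and peak-detection parameters are placeholders.

```python
import numpy as np
from scipy.ndimage import gaussian_filter
from skimage.feature import peak_local_max
from skimage.segmentation import watershed

def segment_map(points, grid_size=500, sigma=8.0):
    """points: (n, 2) array of embedded datapoints (e.g., 10,000 randomly sampled frames)."""
    # Probability density function: 2-D histogram convolved with a Gaussian kernel.
    hist, xedges, yedges = np.histogram2d(points[:, 0], points[:, 1], bins=grid_size)
    density = gaussian_filter(hist, sigma=sigma)

    # Local maxima of the density mark candidate behavioral categories.
    peaks = peak_local_max(density, min_distance=10)
    markers = np.zeros_like(density, dtype=int)
    markers[tuple(peaks.T)] = np.arange(1, len(peaks) + 1)

    # Watershed on the inverted density assigns one region per local maximum.
    labels = watershed(-density, markers)
    return labels, (xedges, yedges)
```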
For ZeChat analysis, to reduce computation time, we randomly sampled 5,000 frames from each fish for kernel t-SNE embedding and subsequent analyses.
Behavioral fingerprint calculation and hierarchical clustering
Each frame is assigned a watershed region (behavioral category) based on the ZeChat map segmentation. For each fish, the total number of frames assigned to each watershed region was counted, giving rise to a behavioral fingerprint in the form of an 80-dimensional vector. Behavioral fingerprints of fish treated with each drug were combined into one fingerprint by calculating the median of each behavioral category. All combined raw behavioral fingerprints were normalized so that the signals of each behavioral category were between 0 and 1. To help visualize the difference in behavioral patterns between drug treatments and DMSO control, we calculated the median of each behavioral category across all DMSO controls to generate a representative DMSO fingerprint and subtracted this fingerprint from all drug treatment samples. Finally, the normalized and DMSO-subtracted fingerprints of each drug treatment were clustered using the clustermap function (metric='euclidean', method='complete') of Python’s Seaborn library.
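A NumPy/Seaborn sketch of the fingerprint calculation and clustering is shown below; the column-wise min-max scaling is our interpretation of the normalization step.

```python
import numpy as np
import seaborn as sns

def behavioral_fingerprint(region_labels, n_regions=80):
    """Count how often a fish's frames fall into each watershed region (labels 1..80)."""
    return np.bincount(region_labels, minlength=n_regions + 1)[1:]

def normalize_and_cluster(drug_fingerprints, dmso_fingerprints):
    """drug_fingerprints: (n_drugs, 80) array of per-drug median fingerprints.
    dmso_fingerprints: (n_controls, 80) array of DMSO control fingerprints."""
    combined = np.vstack([drug_fingerprints, dmso_fingerprints])
    lo, hi = combined.min(axis=0), combined.max(axis=0)
    scaled = (combined - lo) / np.where(hi > lo, hi - lo, 1)    # each category scaled to [0, 1]
    drugs_scaled = scaled[:len(drug_fingerprints)]
    dmso_profile = np.median(scaled[len(drug_fingerprints):], axis=0)
    centered = drugs_scaled - dmso_profile                      # DMSO-subtracted fingerprints
    return sns.clustermap(centered, metric='euclidean', method='complete')
```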
Zebrafish chemical treatment and screening
For ZeChat testing, 21 dpf zebrafish were collected from nursery tanks. Fish of roughly average size were selected to minimize the effect of size differences. For the screen, 10 fish were picked into a 60 mm petri dish containing 10 ml E3 medium. Compounds were then added to each dish at a final concentration of 10 μM (non-peptide molecules) or 1 μM (endogenous neuropeptides and their analogs). Fish were incubated for 1-3 hours prior to ZeChat testing. Immediately before testing fish in a petri dish, the content of the petri dish was poured through a nylon tea strainer to remove liquid while keeping fish in the tea strainer. The tea strainer was then consecutively dipped into 3 petri dishes containing E3 to wash the residual chemical away from the fish. The fish were then poured into a petri dish containing clean E3 and each individual was transferred into the ZeChat test arena using a plastic transfer pipette for testing.
Rescue of VPA fish and social preference testing
VPA treatment was conducted by submerging embryos in 1 μM VPA in E3 medium from 0 to 3 dpf. The drug treated embryos were washed at 3 dpf and transferred to petri dishes containing clean E3 medium. At 5-7 dpf, larvae were transferred into nursery tanks and raised to 21 dpf for behavioral testing of social preference using a 3-chamber assay apparatus21. For the D3 agonist rescue experiment, 20 VPA-treated fish were picked into a 25 mm deep 10 cm petri dish containing 30 ml E3 medium. Compounds were then added to each dish and fish were incubated for 1 hour. Immediately before testing, fish were washed as described above, and individually placed into the social preference testing arenas for behavioral testing.
Chemical library and other compounds
All screening compounds were acquired from the Biomol neuroactive compound library (Biomol) which contains a total of 700 neuroactive drugs dissolved in DMSO at a stock concentration of 10 mM or 1 mM (for only a small subset of drugs). Valproic acid was purchased from Sigma-Aldrich. Pramipexole was purchased from Cayman Chemical. Piribedil was purchased from Selleck Chemicals. 7-hydroxy-DPAT-HBr was purchased from Santa Cruz. All individually purchased compounds were dissolved in DMSO. Chemical structures were generated using PubChem Sketcher.
Zebrafish husbandry
Fertilized eggs (up to 10,000 embryos per day) were collected from group mating of EkkWill strain zebrafish (Danio rerio) (EkkWill Waterlife Resources). Embryos were raised in HEPES (10 mM) buffered E3 medium at 28°C, with or without compound treatment, during the first 3 days. At 3 days post fertilization (dpf), chorion debris was removed, and larvae were transferred into petri dishes containing fresh E3 medium. At 5 – 7 dpf, larvae were transferred into nursery tanks and raised at 28°C on a 14/10 hr on/off light cycle.
Statistical analysis
Graphs were generated using GraphPad Prism or Python using the Matplotlib package. Data were analyzed using the 2-tailed Student’s t-test. P values less than 0.05 were considered significant.
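As a minimal example with placeholder data, the comparison can be performed with SciPy:

```python
import numpy as np
from scipy import stats

# Placeholder data: e.g., social preference scores for treated versus control fish.
treated = np.array([0.72, 0.65, 0.80, 0.58, 0.69])
control = np.array([0.41, 0.38, 0.55, 0.47, 0.33])

# Two-tailed Student's t-test; P < 0.05 is considered significant.
t_stat, p_value = stats.ttest_ind(treated, control)
print(t_stat, p_value)
```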
Code availability
Code is available on the GitHub repository at https://github.com/yijie-geng/ZeChat and is archived on Zenodo under DOI: 10.5281/zenodo.5519964.
AUTHOR CONTRIBUTIONS
Y.G. conceived the study, built the equipment, designed and conducted the experiments, wrote the Python code, analyzed the data, and wrote the manuscript. R.T.P. conceived the study, designed the experiments, interpreted the data, and revised the manuscript.
COMPETING INTERESTS
The authors declare no competing interests.
SUPPLEMENTARY DATA
Supplementary Video 1. Video recording of a pair of fish interacting in a ZeChat unit. Each unit is divided into two arenas by a transparent window.
Supplementary Video 2. Video recording of 40 pairs of fish interacting in a full-sized ZeChat test array.
Supplementary Video 3. A combination of 4 processed clips of the same video recording, showing the intermediate and final outcomes of image preprocessing.
Supplementary Video 4. Side-by-side view of fish’s behavioral recording and its trajectory on ZeChat map in real-time to visualize how a fish’s behavior translates to datapoint embeddings in the ZeChat map.
Supplementary Video 5. Video recordings of wild type (DMSO) and dopamine D3 agonist-treated (10 μM piribedil) fish, demonstrating a more intense interaction pattern between pairs of D3 agonist-treated fish compared to the wild type.
ACKNOWLEDGEMENTS
We thank members of our research group for helpful advice. This work was supported by the L. S. Skaggs Presidential Endowed Chair and by the National Institute of Environmental Health Sciences of the National Institutes of Health under Award Number K99ES031050. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Institutes of Health.