Abstract
Registration and quantification of stained histological brain slices is a central step in the study of a vast array of neural mechanisms across species. These analyses are often done manually or require the integration of several different software packages that do not work well together, resulting in labor-intensive and cumbersome workflows or requiring significant coding skills. The main challenge in this type of analysis is to correctly outline the region of interest (ROI) and achieve inter-rater validity, especially when targeting multiple ROIs, an approach which is becoming common practice. We developed Brainways, a simple AI-based software for biologists that automatically registers coronal brain slices to a 3D atlas, provides brain-wide quantification of fluorescent markers across subjects, and supports co-labelling of multiple stains (via QuPath file import). Here we demonstrate the use of Brainways on a dataset from a previously published experiment investigating the neural network involved in empathic helping behavior of rats towards trapped ingroup or outgroup members. The re-analysis, which covered over 100-fold more brain tissue than the original, replicated the manually quantified results, supporting the reliability of Brainways, which is also highly cost-effective compared to manual methods. The automatic registration algorithm achieved 80% accuracy for correct slice matching to the Waxholm rat atlas. Increased accuracy can be obtained manually via high-throughput visual scanning and adjustment using the Brainways GUI. Brainways functionality is exposed through a Python-based API, can be trained on any atlas (e.g. mouse, zebrafish, human), and can be enhanced for different applications.
Introduction
Fluorescence tagging of different markers in neural tissue is commonly used as a method for studying the relationship between neuronal properties and behavior. Analyzing fluorescence in histological images requires registration to a common reference brain atlas, followed by quantification of fluorescence levels or numbers of marker-positive cells in each brain region of interest (ROI). Due to a lack of available tools for performing registration and quantification in a fast, streamlined and accurate manner, this process is often performed manually. Thus, many researchers limit the scope of their investigation to a specific ROI, or use sampling to reduce quantification time. Manual analysis requires expertise, is labor-intensive, and is prone to inter-rater variability; automatic registration and quantification would instead provide efficient and reliable analysis of entire sections.
While there are software solutions that separately perform parts of this analysis in mice (Carey et al., 2022; Song et al., 2020; Xiong et al., 2018), the integration required to complete the analysis results in a cumbersome workflow, often requires programming experience, and does not provide a solution for researchers working with rats or other animals. Other existing solutions are suitable only for lightsheet microscopy (Niedworok et al., 2016; Tyson et al., 2022; Renier et al., 2016; Kirst et al., 2020), which is gaining traction as a method for gathering volumetric images of whole brains. However, these algorithms fail on images of individual brain slices, which are still used by most labs for their high accessibility and cost-effectiveness. Unlike lightsheet microscopy images, individual slices are prone to tissue tearing and significant elastic deformations due to handling of the slice prior to imaging. Additionally, these algorithms require the slices to be sequentially ordered; this ordering may be lost during staining, and restoring it manually is time-consuming and requires expertise. Moreover, the resolution in the anterior-posterior dimension is relatively low in coronal slices compared to lightsheet microscopy, which is a challenge for current methods. Thus, there is a need for a system that performs end-to-end analysis of individual sections in a streamlined manner and can be adapted to different model organisms.
To this end, we developed Brainways, a rapid, user-friendly, open-source AI-based software that provides an automated pipeline for analysis of coronal sections. Brainways aims to address these challenges as well as provide a platform for collaboration across labs using different model organisms for studying the brain. Brainways provides an easy way to perform automated registration and quantification of histological rat brain slices using a graphical user interface (GUI) that allows adjustment of the automatic registration if needed. Brainways performs cell detection and quantification, maps the detected cells to the registered ROIs, and outputs positive cell counts per region for multiple subjects via an Excel file, providing a short turn-around from slide scanning to brain-wide quantification. The registration is performed using a deep learning algorithm trained on data that was annotated using the Brainways GUI. The algorithm outputs the location of the coronal slice along the anterior-posterior axis and detects the visible hemisphere (left, right, or both). The other steps in the software are performed using off-the-shelf algorithms which were integrated into Brainways to allow a seamless workflow: non-rigid registration is performed using Elastix (Klein et al., 2010), and cell detection is performed using StarDist (Schmidt et al., 2018). The rationale and building blocks of Brainways are described in detail below.
In order to examine the validity and reliability of the quantification provided by Brainways, we re-analyzed a manually quantified dataset previously published by our group (Ben-Ami Bartal et al., 2021). In this use case, the main findings were reliably replicated despite the difference in quantification methods and the much larger coverage of neural tissue in the re-analysis compared to the original quantification. This dataset consists of neural activity indexed via the immediate-early gene c-Fos in rats tested in the helping behavior test (HBT). The study had aimed to outline the neural activity associated with prosocial motivation to release a trapped conspecific. Using Brainways, we replicated the original network of neural regions active in the presence of a trapped ingroup member. This supports the previous finding that prosocial motivation in rats is associated with activity in a dispersed brain-wide network highly homologous to the human empathy network, which includes central regions of the motivation and reward system. The re-analysis revealed the network in more detail than the original quantification, as additional regions were analyzed and the amount of sampled tissue was significantly higher than before. Thus, several regions that were only trending in the original publication emerged as significant using Brainways. Moreover, the Brainways-assisted re-analysis took a fraction of the time of the manual quantification, which had required multiple experimenters working over many months. Thus, we propose that Brainways, in its current format, is an effective, useful tool for analysis of rat coronal sections. The code is freely available on GitHub1, along with an instructional video demonstration.
Results
Brainways is composed of five modules that combine to lead from the digital image to an Excel output of positive cell numbers for a whole cohort of subjects (Fig. 2). The modules can be accessed either through the Python API, or through a GUI for easy access for users who lack coding skills. The “Atlas Registration” module takes a slice image and registers it to the selected brain atlas. The Waxholm Space Rat Atlas was used for all data presented in this paper (Papp et al., 2014; Osen et al., 2019). The “Rigid Registration” module takes the slice that was registered using the previous module, computationally separates the tissue from the background, and performs an affine transformation to match the input tissue to the atlas slice at the registered location. The “Non-Rigid Registration” module then takes the output of the previous module and performs an elastic registration using the Elastix Thin Plate Spline interpolation algorithm. Outputs of all previous modules can be visualized and manually modified using the Brainways GUI. The “Cell Detection” module performs cell/nucleus detection from within Brainways using the StarDist algorithm (Schmidt et al., 2018), or imports cell detections into Brainways from another software (e.g. QuPath). The detected cell locations are then registered to the atlas by applying the previous registration steps to the cell locations. Finally, the “Analysis” module takes the results of all previous modules, outputs normalized cell counts for each brain region, and allows further statistical analysis.
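To make the flow concrete, the following is a structural sketch of how the five modules might be chained through the Python API. The function names, signatures, and the `atlas.slice_at` helper are illustrative assumptions rather than the verbatim Brainways interface; see the GitHub repository for the actual API.

```python
# Structural sketch of the five-module pipeline; all names are
# illustrative assumptions rather than the actual Brainways API,
# and stub bodies are intentionally left unimplemented.

def atlas_registration(image, atlas):
    """Predict AP position, 3D rotation, and visible hemisphere (deep net)."""
    ...

def rigid_registration(image, atlas_slice):
    """Affine-fit the tissue to the matched atlas slice."""
    ...

def nonrigid_registration(image, atlas_slice):
    """Elastic TPS registration of internal structures (via Elastix)."""
    ...

def detect_cells(image):
    """Detect positively stained cells (StarDist) -> per-cell (x, y)."""
    ...

def analyze_slice(image, atlas):
    """Chain the modules: slice image -> per-region cell counts."""
    ap, rotation, hemisphere = atlas_registration(image, atlas)
    atlas_slice = atlas.slice_at(ap, rotation)  # hypothetical helper
    rigid = rigid_registration(image, atlas_slice)
    tps_keypoints = nonrigid_registration(image, atlas_slice)
    cells = detect_cells(image)
    # Map detections through the same transforms, then count per region
    ...
```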
Atlas Registration
The aim of the Atlas Registration module is to find the 3D location of a coronal rat brain slice image in the reference atlas (Fig. 2A). The module receives a coronal rat brain slice image and outputs the slice’s location along the anterior-posterior (AP) axis of the atlas, the rotation of the slice relative to the atlas in three rotation axes, and which of the hemispheres is visible in the slice. Because the coarse features of the slice are visible at low resolution, the input images are downscaled and fed to a deep neural network that was trained to output the AP location and the visible-hemisphere classification. The registration parameters, including the 3D rotation parameters, can be manually adjusted using sliders in the Brainways GUI. The Atlas Registration algorithm can also be accessed from any Python script via the Python API.
The Brainways GUI can be used to register image slices to a variety of atlases from different species. This is achieved by using the Brainglobe Atlas API (Claudi et al., 2020) for atlas retrieval and access in Brainways. Since there was no rat atlas available in the Brainglobe repository, we adapted and added the Waxholm Space Rat Atlas version 4.0 to the Brainglobe repository. By using the Brainglobe Atlas API, Brainways can be used to manually register brain slices to any of the brain atlases available in Brainglobe, such as mouse, zebrafish, and human atlases. While Brainways currently provides automatic registration exclusively for the Waxholm Space Rat atlas, we are working on adding support for the Allen mouse brain atlas (Wang et al., 2020).
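As an illustration, the snippet below retrieves the Waxholm rat atlas through the Brainglobe Atlas API, the same backbone Brainways relies on for atlas access. The atlas identifier follows the Brainglobe registry naming; the exact name and resolution available may differ on your installation.

```python
# Accessing the Waxholm Space rat atlas via the Brainglobe Atlas API.
from bg_atlasapi import BrainGlobeAtlas

atlas = BrainGlobeAtlas("whs_sd_rat_39um")  # Waxholm rat atlas, 39 um
print(atlas.reference.shape)   # 3D reference volume
print(atlas.annotation.shape)  # matching region-label volume

# Look up the brain region acronym at a given voxel coordinate
print(atlas.structure_from_coords((200, 150, 150), as_acronym=True))
```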
The automatic registration algorithm was trained using 1,802 slices from 5 experiments performed by our group. As some slices had multiple markers, all available markers were used to train the network, for a total of 2,529 images used in the training procedure2. To achieve better registration accuracy using limited amounts of data, the algorithm was first trained on 20,000 synthetic slices generated by virtually slicing the Waxholm Space Rat Atlas. Each synthetic slice was taken from a random location in the atlas with a random 3D rotation, and then augmented in various ways (Supplementary 2). The registration algorithm’s deep neural network architecture is based on the ResNet-50 (He et al., 2016) image classification architecture, pretrained on ImageNet (Deng et al., 2009). Three classification heads were added to the baseline architecture: one for AP axis registration, one for visible hemisphere estimation, and one for confidence estimation (Fig. 3). Confidence estimation is achieved by training the network to predict whether the AP value output by the network matches the correct registration. Confidence estimation works well in identifying slices which are extremely bright or extremely dark, images with missing tissue, or extremely deformed tissue (Supplementary 2).
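A minimal PyTorch sketch of the described architecture is shown below. Treating the AP coordinate as a binned classification target, and the exact head dimensions, are assumptions on our part; the brainways-reg-model repository contains the actual implementation.

```python
# Sketch: ResNet-50 backbone with three heads (AP, hemisphere, confidence).
# Head sizes and the binned-AP parameterization are assumptions.
import torch
import torch.nn as nn
from torchvision import models

class RegistrationNet(nn.Module):
    def __init__(self, n_ap_bins=512):
        super().__init__()
        backbone = models.resnet50(weights="IMAGENET1K_V1")
        backbone.fc = nn.Identity()  # expose the 2048-d pooled features
        self.backbone = backbone
        self.ap_head = nn.Linear(2048, n_ap_bins)  # AP position (binned)
        self.hem_head = nn.Linear(2048, 3)         # left / right / both
        self.conf_head = nn.Linear(2048, 2)        # registration trustworthy?

    def forward(self, x):
        feats = self.backbone(x)
        return self.ap_head(feats), self.hem_head(feats), self.conf_head(feats)

# Downscaled slices are fed as standard 3-channel images
net = RegistrationNet()
ap_logits, hem_logits, conf_logits = net(torch.randn(1, 3, 224, 224))
```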
To measure model performance, the AP value output by the network was compared to the AP value annotated by expert annotators on the test dataset. If the network AP value matched the annotator value to within 20 voxel units, the registration was considered correct. The model achieved 80% accuracy according to this metric on the test dataset, with a mean absolute error of 13.93 voxels. When considering only slices on which the model was highly confident, based on the confidence estimation output, the total accuracy increased to 85%. The accuracy of the hemisphere estimation output was 88.7%. We expect registration accuracy to increase as more training data is annotated by researchers using Brainways.
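The evaluation metric itself is straightforward; a sketch of the computation, assuming arrays of predicted and annotated AP values:

```python
# Accuracy within a 20-voxel tolerance plus mean absolute error (MAE).
import numpy as np

def registration_metrics(ap_pred, ap_true, tolerance=20):
    errors = np.abs(np.asarray(ap_pred) - np.asarray(ap_true))
    return {
        "accuracy": float(np.mean(errors <= tolerance)),
        "mae_voxels": float(np.mean(errors)),
    }
```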
Rigid Registration
The main goal of the Rigid Registration module is to roughly align the brain tissue in the input image to the atlas slice that was found in the previous step (Fig. 2B). The module receives both the coronal rat brain slice image and the atlas slice that was matched in the previous step, and outputs four rigid registration parameters: left-right translation, up-down translation, horizontal scale, and vertical scale. The Rigid Registration parameters can be viewed and modified using sliders in the Brainways GUI. The registration is visualized by overlaying the input image with a transparent view of the brain regions as defined by the atlas (Fig. 1).
To perform rigid registration between the slice and the atlas, the brain tissue in the slice image needs to be separated from the background so that only the tissue itself is registered. This is achieved by binarizing the input image using KMeans clustering on its pixel values, followed by connected component analysis on the binary image (Supplementary 3). Following tissue-background separation, a bounding box is computed around the brain tissue in the input image and in the atlas slice, and the registration parameters that transform the input box to the atlas box are calculated. The automatic Rigid Registration algorithm can be run using a button in the Brainways GUI or via the Python API.
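The steps described above can be sketched with standard scikit-learn and scikit-image building blocks; the actual Brainways implementation may differ in its details.

```python
# Sketch of tissue-background separation and bounding-box rigid fitting.
import numpy as np
from sklearn.cluster import KMeans
from skimage import measure

def tissue_mask(image):
    # Cluster pixel intensities into two groups (background vs. tissue)
    labels = KMeans(n_clusters=2, n_init=10).fit_predict(
        image.reshape(-1, 1)).reshape(image.shape)
    bright = np.argmax([image[labels == k].mean() for k in (0, 1)])
    fg = labels == bright  # assume the brighter cluster is tissue
    # Keep the largest connected component to discard debris
    cc = measure.label(fg)
    largest = 1 + np.argmax(np.bincount(cc.ravel())[1:])
    return cc == largest

def rigid_params(slice_mask, atlas_mask):
    # Map the tissue bounding box onto the atlas-slice bounding box
    b0 = measure.regionprops(slice_mask.astype(int))[0].bbox
    b1 = measure.regionprops(atlas_mask.astype(int))[0].bbox
    scale_y = (b1[2] - b1[0]) / (b0[2] - b0[0])  # vertical scale
    scale_x = (b1[3] - b1[1]) / (b0[3] - b0[1])  # horizontal scale
    translation = (b1[0] - b0[0] * scale_y, b1[1] - b0[1] * scale_x)
    return scale_x, scale_y, translation
```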
Non-rigid Registration
Individual histological brain slices often exhibit more significant deformations than lightsheet imaging, in which the whole brain is fixed in place. Because the slices are thin (20–60 μm), any slight movement can cause tissue tearing and deformations. Experienced experimenters can minimize the extent of this problem, but it remains a major issue when processing brain slice images. The Non-rigid Registration module receives the brain slice image after rigid registration from the previous module, together with the matching atlas slice from the Atlas Registration module, and outputs keypoints for elastic deformation using Thin Plate Spline (TPS) interpolation. The registration is visualized by overlaying the input image with a transparent view of the brain regions as defined by the atlas.
To achieve the elastic deformation, 24 initial ‘source’ points are evenly spaced on the brain tissue, together with 24 ‘destination’ points at the same initial locations. The ‘destination’ points can be moved around using the Brainways GUI, resulting in an elastic deformation of the brain tissue that maps the ‘source’ points to the ‘destination’ points (Fig. 2C). If the initial 24 point pairs are not sufficient to register key brain regions of interest, points can be added or removed using the Brainways GUI.
Automatic Non-rigid Registration is supported using the TPS interpolation registration from the Elastix (Klein et al., 2010; Shamonin, 2013) Python package. While Elastix generally produces good automatic registration of the outlines of the brain slice to the outlines of the atlas slice, it often fails for more subtle internal structures. Thus, Brainways is designed for experimenters to review the registrations and manually correct them when necessary. In future versions of Brainways, we plan to train a deep neural network to perform the TPS interpolation. Given enough training data, we expect a deep neural network to perform better (and faster) than the current implementation.
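Brainways performs this step through Elastix, but the underlying TPS mapping can be illustrated with SciPy's thin-plate-spline radial basis function interpolator; note that the same mapping can later be applied to detected cell coordinates.

```python
# TPS interpolation from matched source/destination keypoints (illustration).
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
src = rng.uniform(0, 512, size=(24, 2))     # 24 'source' keypoints
dst = src + rng.normal(0, 5, size=(24, 2))  # displaced 'destination' points

# Smooth deformation field mapping source space to destination space
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Warp arbitrary points (e.g. detected cell centroids) with the same map
cells = rng.uniform(0, 512, size=(100, 2))
warped_cells = tps(cells)
```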
Cell Detection
After registration, the number of positively stained cells in each region needs to be quantified. The Cell Detection module receives the brain slice images and outputs the x and y coordinates and the matching brain region for each positively stained cell/nucleus. To achieve this, the cells are first automatically detected and then registered to the atlas using the registration parameters found by the previous modules.
The StarDist (Schmidt et al., 2018) cell detection algorithm was used for automatic detection of positively stained cells. StarDist outputs a pixel mask for each detected cell (Fig. 2D), from which several parameters are extracted, such as the central x and y coordinates, the area of the cell in square microns, and the mean intensity value of each detected cell. These parameters are used for later filtering of false positives. After filtering, each x, y coordinate is transformed from the 2D image space to the 3D atlas space by applying the transformations computed by the previous modules. Finally, the corresponding brain region of each cell is determined from its location in atlas space coordinates.
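A sketch of this step using the publicly available pretrained StarDist fluorescence model is shown below; the input path and the exact per-cell features kept by Brainways are assumptions for illustration.

```python
# Detect cells with StarDist and extract per-cell features for filtering.
from csbdeep.utils import normalize
from skimage.measure import regionprops
from stardist.models import StarDist2D
from tifffile import imread

image = imread("slice_c_fos.tif")  # placeholder path
model = StarDist2D.from_pretrained("2D_versatile_fluo")
labels, _ = model.predict_instances(normalize(image))  # per-cell label mask

cells = [
    {
        "y": p.centroid[0],
        "x": p.centroid[1],
        "area_px": p.area,                   # convert to microns downstream
        "mean_intensity": p.mean_intensity,  # used to filter false positives
    }
    for p in regionprops(labels, intensity_image=image)
]
```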
The automatic detections can be previewed in the Brainways GUI. The whole process can also be run via the Python API. Other cell detection algorithms can also be used: their results can be imported into Brainways from a CSV file containing the x and y coordinates for each image by using the Import Cell Detections feature.
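For illustration, a CSV of external detections might look as follows; the exact column names expected by the Import Cell Detections feature are documented in the Brainways repository and may differ from this hypothetical layout.

```python
# Hypothetical cell-detection CSV layout for import into Brainways.
import pandas as pd

detections = pd.DataFrame({
    "image": ["slice_001.tif", "slice_001.tif"],
    "x": [1024.5, 2310.0],  # pixel coordinates in the original image
    "y": [512.25, 1780.5],
})
detections.to_csv("cell_detections.csv", index=False)
```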
Analysis
Following registration and cell counting, results from the whole experiment (multiple subjects belonging to a single project) are aggregated and exported to an Excel file for subsequent analysis. For each animal and brain region, the total region area in square microns, the cell count, and the normalized cell count (cells per 250 μm²) are extracted. Normalized cell counts provide a better basis for comparison between animals and experiments because they are independent of variations in region area between subjects and of the exact number and location of the imaged slices for a given region. Detected cells from the whole brain can also be viewed in 3D using the Cell Viewer module in the Brainways GUI (Fig. 2E).
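The normalization itself is a simple per-region computation; below is a sketch with hypothetical column names, assuming per-cell detections and per-region areas have already been aggregated into data frames.

```python
# Per-region summary: area, raw count, and count per 250 square microns.
import pandas as pd

def summarize(cells: pd.DataFrame, regions: pd.DataFrame) -> pd.DataFrame:
    # cells: one row per detected cell, columns: animal, region
    # regions: one row per (animal, region), column: area_um2
    counts = cells.groupby(["animal", "region"]).size().rename("cell_count")
    out = regions.set_index(["animal", "region"]).join(counts).fillna(0)
    # Normalizing by sampled area makes animals with different numbers
    # and locations of imaged slices directly comparable
    out["cells_per_250um2"] = out["cell_count"] / out["area_um2"] * 250.0
    return out.reset_index()
```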
Case Study
To investigate the validity of Brainways, and to demonstrate its advantage over manual quantification methods, we present below a Brainways-assisted quantification of a dataset from our group (Ben-Ami Bartal et al., 2021). The dataset comprises images of coronal slices stained for the immediate-early gene c-Fos, taken from rats tested in the Helping Behavior Test (HBT, Fig. 4A) with ingroup or outgroup members. The HBT paradigm reflects prosocial motivation in rats, which may release a trapped conspecific, thereby improving its wellbeing (Bartal et al., 2011). In the original study, rats selectively helped ingroup members (same-strain conspecifics), but not outgroup members (strangers of an unfamiliar strain, Fig. 4B), in line with previous findings (Ben-Ami Bartal et al., 2014). This dataset was originally quantified manually via visual identification of 84 ROIs, followed by ImageJ (Schindelin et al., 2012) cell detection for region-by-region quantification (Fig. 4C-D).
While this strategy has proved helpful towards elucidating the neural mechanisms of prosocial motivation, the analysis was very labor-intensive, and used only a small fraction of the total available imaged brain tissue. Brainways, which significantly alleviates these issues, was used below to reanalyze the original images, and the resulting brain-wide quantification of c-Fos+ cell numbers was used to rerun the comparison of neural activity patterns between rats in the ingroup and outgroup conditions. To this end, c-Fos counts were quantified using Brainways (Fig. 4E-F, see methods), providing averages of c-Fos+ cell numbers per brain region located on the scanned images. Overall, the resulting re-analysis was highly similar to the original one, providing evidence for the validity of this automated approach, as will be detailed below.
To identify patterns of neural activity associated with each condition, a strategy similar to the original publication was followed (Ben-Ami Bartal et al., 2021). Using multivariate task partial least squares analysis (PLS; see Methods; McIntosh, 1999; McIntosh et al., 1996), neural activity was compared for rats tested in the HBT with either trapped ingroup or outgroup members and for rats in the control condition. The original analysis revealed a distinct neural network activated in the presence of a trapped ingroup member during the HBT (Fig. 5A-B). This network included the Insula, ACC, orbitofrontal cortex, and sensory cortices. Importantly, activity in regions associated with the neural reward network, including the NAC, LS, VDB, and parts of the OFC and PRL, was specifically correlated with prosocial motivation and dissociated from arousal around a non-helped trapped outgroup member (Ben-Ami Bartal et al., 2021; Breton et al., 2022). As before, when PLS was conducted on Brainways-derived data from the ingroup, outgroup and baseline conditions, a significant latent variable (LV, p<0.001) emerged; bootstrapping and permutation testing then provided a measure of the contribution of each brain region to the significant LV (termed ‘salience’). This analysis revealed increased brain-wide activity for rats tested in the HBT with either trapped ingroup or outgroup members compared to rats in the baseline condition (Fig. 5C, black bars). This network included regions in the sensory cortex, frontal regions, and limbic and reward regions (Fig. 5C, colored bars). Notably, as previously reported, these regions include areas that have been shown to participate in empathy in humans (Shamay-Tsoory and Lamm, 2018; Decety et al., 2016) and rodents (Meyza and Knapska, 2018; Wu and Hong, 2022; Walsh et al., 2023; Paradiso et al., 2021). A direct comparison between the ingroup and outgroup conditions also yielded a significant LV (LV1, p<0.01), and pointed to a distinct network that was significantly more active for ingroup members than for outgroup members (Fig. 5D).
Next, to conduct a comparison with the original analysis, c-Fos+ cell numbers were compared for the ingroup, outgroup and baseline conditions (ANOVA, FDR-corrected for multiple comparisons (Benjamini et al., 2006)). The results of this analysis expand on the original publication, where only four regions (MO, PrL, NAC, LS) reached the significance threshold for this comparison (Fig. 5E). With the exception of the MO, the three other regions (NAC, Septum, PrL) were replicated. Furthermore, areas that were not previously quantified also proved significant, including the IL, VP, and parts of the thalamus (LEC, Subiculum, Fig. 5F). Multiple regions that previously emerged as active in the HBT ingroup condition compared to baseline were replicated (Fig. 5G), including the claustrum, insula, piriform and endopiriform cortex, basal forebrain, Cpu, and the amygdala. Thus, the Brainways-assisted re-analysis reproduced the original findings and revealed the expanded empathic helping network more fully, providing further evidence for the role of this network in prosocial motivation.
Discussion
Histological brain slicing is the most commonly used brain imaging method in animal studies. Automatic, easy-to-use tools are required to allow fast and global quantification of large amounts of data, even for labs lacking highly skilled programmers. To this end, we developed Brainways, a software for registration and quantification of stained individual brain slices. Brainways allows performing automatic registration of coronal slices to a 3D digital atlas, and supports elastic deformations, stained cell detection, and export of aggregated cell counts for each brain region and animal in an experiment. The automatic registration is performed using a deep neural network algorithm that receives the coronal images and outputs the registration parameters. To encourage further development in the field, the Brainways code is publicly shared as open-source software, as are the data used for training and validating the registration algorithm3. The software can be used by researchers with no programming skills through a GUI, or via a Python API for users who wish to utilize Brainways as part of other scripts or programs.
To demonstrate the utility of Brainways, we have reanalyzed a previously published experiment that was originally analyzed using manual acquisition methods. The Brainways-assisted analysis largely replicated the neural activity pattern found in the previous study, and revealed additional regions which were not included in the manual quantification. Using Brainways, the amount of tissue covered in the analysis increased by over a hundred-fold, which contributed to greater statistical significance of the results compared to the previous study by increasing the signal strength. The similarity of the findings is heartening, strengthening both the validity of the identified ROIs and the reliability of the output provided by Brainways. Notably, even though the analysis was based on the same dataset, closely replicating the previous results is not a trivial outcome, because the re-analysis covered a significantly larger surface area and was processed from the raw images by different experimenters who did not see the original data.
The current approach is highly beneficial for rapidly gaining brain-wide quantification of fluorescence. However, there are some caveats that should be considered. While many regions have been added, as well as new subregions, the level of parcellation in Brainways depends on the reference atlas used. Here, the digital Waxholm Space Rat Atlas was used, and some key regions, such as the amygdala, hypothalamus and basal forebrain, are not subdivided any further. Encouragingly, this atlas is under active development and we are hopeful the next version will include these important subregions. Furthermore, this is not an issue for investigators working with mice, as the existing mouse brain atlas is highly parcellated. Indeed, although Brainways was developed and tested on rat brain slices, the software can easily be extended to other organisms such as mice and zebrafish. Brainways uses the Brainglobe Atlas API as its backbone for accessing the 3D atlas, which makes Brainways compatible with all atlases offered by Brainglobe, including mouse, zebrafish and human atlases. The automatic registration algorithm is currently trained only for rat coronal slices; we are actively working on training a model for mouse coronal slices as well, and expect to publish an updated version soon.
Another limitation of Brainways is that it tends to underfit small parcellated subregions. Thus, users are expected to manually scan and fix fit errors during the non-rigid registration phase, especially if they are focusing on differences between specific subregions of interest. We plan to train a non-rigid registration algorithm based on the data collected using the Brainways GUI in order to alleviate this issue. It is important to note that regardless of model accuracy, attentive processing of the sections is an inherent part of ensuring data quality in any software solution. The Brainways GUI is specifically designed to speed up and facilitate visual scanning and manual adjustment of the automatic registration. Finally, Brainways currently processes a single channel automatically. Yet, researchers often use co-labeling of several fluorescent markers, and need to be able to run cell detections on different stains simultaneously. Brainways supports importing cell detections of multiple channels (e.g. from QuPath), registering them to the atlas, and exporting a co-labelling cell count for each brain region and each animal in the experiment. Brainways currently supports in-software cell detection for a single stain only; we plan to include cell detection of multiple stains in the near future.
Methods
Methods used for obtaining the c-Fos data used in the case study have been described in detail in Ben-Ami Bartal et al. (2021), and are therefore described only briefly below.
Dataset curation
To perform the registrations for the re-analysis, four annotators blind to task condition each received a portion of the original dataset. The annotators used the Brainways GUI to perform automatic registrations for each slice, followed by manual fit adjustments when they perceived a mismatch between the atlas and the digital scan. The complete dataset was then reviewed by a fifth annotator to create the final registrations.
Experimental design
Animals
Rat studies were performed in accordance with protocols approved by the Institutional Animal Care and Use Committee at the University of California, Berkeley.
Rats were kept on a 12-hr light-dark cycle and received food and water ad libitum. In total, 83 rats were tested across all experiments. Adult pair-housed male SD rats (‘SD’, age P60–P90) were used as the free and trapped ingroup rats (Charles River, Portage, MI). Adult male Long-Evans rats were used as trapped outgroup rats (‘LE’, Envigo, CA). All rats were allowed a minimum of 5 days to acclimate to the facility prior to the beginning of testing.
Helping Behavior Test
The HBT was performed as described previously (Bartal et al., 2011). During the HBT, a free rat was placed in an open arena containing a rat trapped inside a restrainer. The free rat could help the trapped rat by opening the restrainer door with its snout, thereby releasing the trapped rat. One-hour-long sessions were repeated daily over a 2-week period. At 40 min, the door was opened half-way by the experimenter; this was typically followed by the exit of the trapped rat and was aimed at preventing learned helplessness. Door-opening was counted as such when performed by the free rat before the half-way opening point. Rats that learned to open the restrainer and consistently opened it on the final three days of testing were labeled as ‘openers’. On the last day of testing, the restrainer was latched shut throughout the 60 min session and rats were perfused immediately following behavioral testing.
Immunohistochemistry
On the last day of testing, animals were sacrificed within 90 min of the beginning of the session, at the peak expression of the immediate-early gene product c-Fos. Brains were extracted following perfusion, frozen, sliced at 40 μm and stained for c-Fos. Sections were stained for c-Fos using rabbit anti-c-Fos antiserum (ABE457; Millipore, 1:1000; 1% NDS; 0.3% TxTBS) with Alexa Fluor 488-conjugated donkey anti-rabbit antiserum (AF488; Jackson, 1:500; 1% NDS; 0.3% TxTBS). Sections were further stained with DAPI (1:40,000). Details of the imaging and manual acquisition methods can be found in the original publication (Ben-Ami Bartal et al., 2021). Immunostained tissue was imaged at 10× using a wide-field fluorescence microscope (Zeiss AxioScan) and was processed in Zen software.
Statistical analysis
Task PLS analysis
Task PLS is a multivariate statistical technique that was used to identify optimal patterns of neural activity that differentiate between the experimental conditions (McIntosh, 1999; McIntosh et al., 1996). PLS produces a set of mutually orthogonal LV pairs. One element of the LV depicts the contrast, which reflects a commonality or difference between conditions. The other element of the LV, the relative contribution of each brain region (termed here ‘salience’), identifies the brain regions that show the activation profile across tasks, indicating which brain areas are maximally expressed in a particular LV. Statistical assessment of PLS was performed using permutation testing for the significance of the LVs and bootstrap estimation of standard error for the reliability of the brain region saliences. Brain regions with a bootstrap ratio >2.57 (roughly corresponding to a 99% confidence interval) were considered as reliably contributing to the pattern. Missing values were interpolated by the average for the test condition.
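For readers unfamiliar with the procedure, the following is a deliberately simplified numpy sketch of task PLS: the SVD of the condition-mean-centered matrix yields the LVs, permutations assess LV significance, and bootstrapping yields salience reliability. It omits details of the actual implementation (e.g. sign alignment of saliences across resamples).

```python
# Didactic reduction of task PLS with permutation and bootstrap testing.
import numpy as np

def task_pls(X, cond, n_perm=1000, n_boot=1000, seed=0):
    # X: subjects x regions activity matrix; cond: condition label per subject
    rng = np.random.default_rng(seed)
    conds = np.unique(cond)

    def centered_means(X, cond):
        M = np.stack([X[cond == c].mean(axis=0) for c in conds])
        return M - M.mean(axis=0)  # center across conditions

    U, s, Vt = np.linalg.svd(centered_means(X, cond), full_matrices=False)

    # Permutation test: how often do shuffled labels give a larger LV1?
    perm = [np.linalg.svd(centered_means(X, rng.permutation(cond)),
                          full_matrices=False)[1][0] for _ in range(n_perm)]
    p_lv1 = float(np.mean(np.asarray(perm) >= s[0]))

    # Bootstrap: resample subjects within condition, track LV1 saliences
    boot = []
    for _ in range(n_boot):
        idx = np.concatenate([rng.choice(np.flatnonzero(cond == c),
                                         size=(cond == c).sum())
                              for c in conds])
        boot.append(np.linalg.svd(centered_means(X[idx], cond[idx]),
                                  full_matrices=False)[2][0])
    bootstrap_ratio = Vt[0] / np.std(boot, axis=0)  # >2.57 ~ 99% CI
    return p_lv1, Vt[0], bootstrap_ratio
```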
Other statistical tests
In addition to the PLS analysis described above, a one-way ANOVA was conducted on the c-Fos data to compare the HBT ingroup, outgroup, and baseline conditions for each brain region. FDR post-hoc corrections were applied following all ANOVAs.
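A sketch of this comparison using SciPy and statsmodels follows; we assume the cited adaptive procedure (Benjamini et al., 2006) corresponds to statsmodels' two-stage FDR option.

```python
# Region-wise one-way ANOVA with FDR correction across regions.
from scipy.stats import f_oneway
from statsmodels.stats.multitest import multipletests

def regionwise_anova(counts_by_region):
    # counts_by_region: {region: (ingroup, outgroup, baseline) arrays}
    regions = list(counts_by_region)
    pvals = [f_oneway(*counts_by_region[r]).pvalue for r in regions]
    # "fdr_tsbky" = two-stage Benjamini-Krieger-Yekutieli procedure
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_tsbky")
    return {r: (p, sig) for r, p, sig in zip(regions, p_adj, reject)}
```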
Acknowledgements
We thank the following people for their help with this manuscript: Estherina Trachtenberg, Einat Bigelman, Keren Ruzal, and Reut Hazani. This work was supported by the Azrieli Foundation, Israel Science Foundation (ISF), and Tel Aviv University Center for AI and Data Science (TAD).
Footnotes
↵1 Brainways: https://github.com/bkntr/brainways; Brainways GUI: https://github.com/bkntr/napari-brainways; Brainways registration model: https://github.com/bkntr/brainways-reg-model
↵2 All our data is publicly available; more details can be found on the GitHub page.
↵3 Brainways: https://github.com/bkntr/brainways; Brainways GUI: https://github.com/bkntr/napari-brainways; Brainways registration model: https://github.com/bkntr/brainways-reg-model