A next-generation, histological atlas of the human brain and its application to automated brain MRI segmentation

Magnetic resonance imaging (MRI) is the standard tool to image the human brain in vivo. In this domain, digital brain atlases are essential for subject-specific segmentation of anatomical regions of interest (ROIs) and spatial comparison of neuroanatomy from different subjects in a common coordinate frame. High-resolution, digital atlases derived from histology (e.g., the Allen atlas [7], BigBrain [13], Julich [15]) are currently the state of the art and provide exquisite 3D cytoarchitectural maps, but lack probabilistic labels throughout the whole brain. Here we present NextBrain, a next-generation probabilistic atlas of human brain anatomy built from serial 3D histology and corresponding highly granular delineations of five whole brain hemispheres. We developed AI techniques to align and reconstruct ~10,000 histological sections into coherent 3D volumes with joint geometric constraints (no overlap or gaps between sections), as well as to semi-automatically trace the boundaries of 333 distinct anatomical ROIs on all these sections. Comprehensive delineation on multiple cases enabled us to build the first probabilistic histological atlas of the whole human brain. Further, we created a companion Bayesian tool for automated segmentation of the 333 ROIs in any in vivo or ex vivo brain MRI scan using the NextBrain atlas. We showcase two applications of the atlas: automated segmentation of ultra-high-resolution ex vivo MRI and volumetric analysis of Alzheimer's disease and healthy brain ageing based on ~4,000 publicly available in vivo MRI scans. We publicly release: the raw and aligned data (including an online visualisation tool); the probabilistic atlas; the segmentation tool; and ground truth delineations for a 100 μm isotropic ex vivo hemisphere (that we use for quantitative evaluation of our segmentation method in this paper).
By enabling researchers worldwide to analyse brain MRI scans at a superior level of granularity without manual effort or highly specific neuroanatomical knowledge, NextBrain holds promise to increase the specificity of MRI findings and ultimately accelerate our quest to understand the human brain in health and disease.

Publicly available neuroimaging packages (FreeSurfer [30], FSL [31], SPM [32], or AFNI [33]) enable researchers to perform large-scale studies with thousands of scans [34-37] to study healthy ageing, as well as a broad spectrum of brain diseases, such as Alzheimer's, multiple sclerosis, or depression [38-41]. A core component of these neuroimaging packages is digital 3D brain atlases. These are reference 3D brain images that are representative of a certain population and can comprise image intensities, neuroanatomical labels, or both. We note that, due to its highly convoluted structure, the cerebral cortex is often modelled with specific atlases defined on surface coordinate systems [42,43], rather than 3D images. We refer the reader to [44] […] acquisitions with voxels in the 100 μm range [3,48-50]. However, it fails to visualise cytoarchitecture and resolve many boundaries between brain areas. Histology, on the other hand, is a microscopic 2D modality that can visualise distinct aspects of cytoarchitecture using an array of stains, thus revealing neuroanatomy in much higher detail. Earlier versions of histological atlases were printed, often not digitised, and comprised only a small set of labelled sections. Representative examples include the Morel atlas of the thalamus and basal ganglia [51] or the Mai atlas of the whole brain [1] (Fig. 1A).

While printed atlases are not useful for computational analysis, serial histology can be combined with image registration (alignment) methods to enable volumetric reconstruction of 3D histology [52], thus opening the door to creating 3D histological atlases. These have two major advantages over MRI atlases: (i) providing a more detailed CCF; and (ii) the ability to segment MRI scans at finer resolution, with potentially higher sensitivity and specificity to detect brain alterations caused by brain diseases or to measure treatment effects.

Earlier 3D histological atlases were limited in terms of anatomical coverage. Following the Morel atlas, two digital atlases of the basal ganglia and thalamus were presented [8,11] (Fig. 1B-C). To automatically obtain segmentations for living subjects, one needs to register their MRI scans with the histological atlases, which is difficult due to differences in image resolution and contrast between the two modalities. For this reason, the authors mapped the atlases to 3D MRI templates (e.g., the MNI atlas [53]) that can be more easily registered to in vivo images of other subjects. A similar atlas combining histological and MRI data was proposed for the hippocampus [12] (Fig. 1D-F). Our group presented a histological atlas of the thalamus [14] (Fig. 1G), but instead of using MNI as a stepping stone, we used Bayesian methods [54] to map our atlas to in vivo scans directly.

More recently, several efforts have aimed at the considerably bigger endeavour of building histological atlases of the whole human brain:
- BigBrain [13] comprises over 7,000 histological sections of a single brain, which were accurately reconstructed in 3D with an ex vivo MRI scan as reference (Fig. 1H). BigBrain paved the road for its follow-up, Julich-Brain [15], which aggregates data from 23 individuals. A subset of 10 cases has been provided to the community for labelling, which has led to the annotation of 248 cytoarchitectonic areas as part of 41 projects. The maximum likelihood maps have been mapped to MNI space for in vivo MRI analysis [55], but have two caveats (Fig. 1I): they align poorly with the underlying MNI template, and subcortical annotations are only partial.

- The Allen reference brain [7] (Fig. 1J) has comprehensive anatomical annotations on high-resolution histology and is integrated with the Allen gene expression atlases. However, it only has delineations for a sparse set of histological sections of a single specimen (resembling a printed atlas). For 3D analysis of in vivo MRI, the authors have manually labelled the MNI template using a protocol inspired by their own atlas (Fig. 1K), but with a fraction of the labels and less accurate delineations, since they are made on MRI and not histology.
- The Ahead brains [22] (Fig. 1L-N) comprise quantitative MRI and registered 3D histology for two separate specimens. These have anatomical labels for a few dozen structures, but almost exclusively of the basal ganglia. Moreover, these labels were obtained from the MRI with automated methods, rather than manually traced on the high-resolution histology.
While these histological atlases of the whole brain provide exquisite 3D cytoarchitectural maps, interoperability with other datasets (e.g., gene expression), and some degree of MRI-histology integration, there are currently neither: (i) datasets with densely labelled 3D histology of the whole brain; nor (ii) probabilistic atlases built from such datasets, which would enable analyses such as Bayesian segmentation or CCF mapping of the whole brain at the subregion level.

In this article, we present NextBrain, a next-generation probabilistic atlas of the human brain built from comprehensively labelled, multi-modal 3D histology of five half brains (Fig. 1O-P). […] semi-automated segmentation methods (Fig. 1Q). The 3D label maps are finally used to build a probabilistic atlas (Fig. 1R), which is combined with a Bayesian tool for automated segmentation of MRI scans (Fig. 1S).

[…] Instead, we solve this challenging problem with a custom, state-of-the-art image registration framework (Fig. 3), which includes three components specifically developed for this project: (i) a differentiable regulariser that minimises overlap of different blocks and gaps in between [58]; (ii) an AI registration method that uses contrastive learning to provide highly accurate alignment of corresponding brain tissue across MRI and histology [10]; and (iii) a Bayesian refinement technique based on Lie algebra that guarantees the 3D smoothness of the reconstruction across modalities, even in the presence of outliers due to tissue folding and tearing [9]. We note that this is an evolution of our previously presented pipeline [6], which incorporates the aforementioned contrastive AI method and jointly optimises the affine and nonlinear transforms to achieve a 32% reduction in registration error (details below).

Qualitatively, it is apparent from Fig. 3 […]

Fine-grained analysis of in vivo MRI
[…] age and intracranial volume). Using a simple linear classifier on a task where strong differences are expected allows us to use classification accuracy as a proxy for the quality of the input features, i.e., the ROI volumes derived from the automated segmentations. To enable direct comparison, we used the same sample of 383 subjects from the ADNI dataset [72] (168 AD, 215 controls) as in our previous publications [14,49,50].
Using the ROI volumes estimated by FreeSurfer 7.0 (which do not include subregions) yields an area under the receiver operating characteristic curve (AUROC) equal to 0.911, with a classification accuracy of 85.4% at its elbow. The Allen MNI template exploits subregion information to achieve AUROC = 0.929 and 86.9% accuracy. The increased segmentation accuracy and granularity of NextBrain enables it to achieve AUROC = 0.953 and 90.3% accuracy, a significant increase in AUROC with respect to the Allen MNI template (p = 0.01 for a DeLong test). This AUROC is also superior to those of the specific ex vivo atlases we have presented in prior work [14,49,50], which range from 0.830 to 0.931.

Application to a fine-grained signature of ageing: We performed Bayesian segmentation with NextBrain on 705 subjects (aged 36-90, mean 59.6 years) from the Ageing HCP dataset [73], which comprises high-quality in vivo scans at 0.8 mm resolution. We computed the volumes of the ROIs for every subject, corrected them for total intracranial volume (by division) and sex (by regression), and computed their Spearman correlation with age. We used the Spearman rather than Pearson correlation because, being rank-based, it is a better model for ageing trajectories, which are known to be nonlinear over wide age ranges [74,75].
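The volume-correction and correlation analysis described above can be sketched in a few lines. This is an illustrative reconstruction with synthetic data, not the paper's code; all variable names (`vol`, `icv`, `sex`, `age`) are assumptions.

```python
# Sketch: correct a ROI volume for ICV (division) and sex (regression),
# then compute its Spearman correlation with age. Synthetic data only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 200
age = rng.uniform(36, 90, n)                        # years
sex = rng.integers(0, 2, n).astype(float)           # 0/1 coding
icv = rng.normal(1.5e6, 1e5, n)                     # intracranial volume, mm^3
vol = 5e3 - 20 * age + 300 * sex + rng.normal(0, 300, n)  # one synthetic ROI

# 1) correct for intracranial volume by division
v = vol / icv
# 2) correct for sex by regression: remove the component explained by sex
X = np.column_stack([np.ones(n), sex])
beta, *_ = np.linalg.lstsq(X, v, rcond=None)
v_corr = v - X @ beta + beta[0]     # keep the intercept for interpretability
# 3) rank-based association with age (robust to nonlinear trajectories)
rho, p = stats.spearmanr(v_corr, age)
print(rho, p)
```

The rank-based Spearman statistic only assumes monotonicity, which is why it is preferred over Pearson for wide age ranges.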

The result of this analysis is, to the best of our knowledge, the most comprehensive map of regional ageing of the human brain to date (Fig. 6A and Extended Data Fig. 7A; see also full trajectories for select ROIs in Extended Data Fig. 8). Cortically, we found significant […] caudate (g) showed a stronger negative correlation between age and volume than the posterior caudate (h). Similarly, the external segment of the globus pallidus (i) showed a stronger correlation than the internal segment (j), an effect that was not observed in previous work studying the whole pallidum [77]. […] (Fig. 2B).

- Dissection. After MRI scanning, each hemisphere was dissected to fit into standard 74×52 mm cassettes. First, each hemisphere was split into cerebrum, cerebellum, and brainstem. Using a metal frame as a guide, these were subsequently cut into 10 mm-thick slices in coronal, sagittal, and axial orientation, respectively. These slices were photographed inside a rectangular frame of known dimensions for pixel size and perspective correction; we refer to these images as "whole slice photographs." While the brainstem and cerebellum slices all fit into the cassettes, the cerebrum slices were further cut into as many blocks as needed. "Blocked slice photographs" were also taken for these blocks (Fig. 2C, left).
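The pixel-size and perspective correction from a rectangular frame of known dimensions is, in essence, a homography estimation problem. Below is a minimal sketch using the classic direct linear transform (DLT); the corner coordinates and frame size are illustrative assumptions, and the paper's actual fiducial-detection step is not shown.

```python
# Sketch: map the four photographed frame corners (pixels, with perspective)
# to a rectangle of known physical size via a DLT homography.
import numpy as np

def homography_dlt(src, dst):
    """Estimate a 3x3 homography H such that dst ~ H @ src (homogeneous)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 3)      # null vector = homography coefficients

# four frame corners as seen in a (hypothetical) photograph
src = np.array([[102., 98.], [1990., 120.], [2010., 1485.], [80., 1460.]])
# the same corners in metric space: an assumed 150 x 100 mm frame
dst = np.array([[0., 0.], [150., 0.], [150., 100.], [0., 100.]])

H = homography_dlt(src, dst)
p = np.append(src[0], 1.0)
q = H @ p
print(q[:2] / q[2])   # should map to (0, 0) up to numerical precision
```

With exactly four correspondences the 8-degree-of-freedom homography is determined exactly, so resampling the photograph through H yields a metrically correct, perspective-free image.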

- Tissue processing and sectioning. After standard tissue processing steps, each tissue block was embedded in paraffin wax and sectioned with a sledge microtome at 25 μm thickness. Before each cut, a photograph was taken with a 24 MPx Nikon D5100 camera (ISO = 100, aperture = f/20, shutter speed = automatic) mounted right above the microtome, pointed perpendicularly to the sectioning plane. These photographs (henceforth "blockface photographs") were corrected for pixel size and perspective using fiducial markers. The blockface photographs have poor contrast between grey and white matter (Fig. 2C, right) but also negligible nonlinear geometric distortion, so they can be readily stacked into 3D volumes. A 2D convolutional neural network (CNN) pretrained on the ImageNet dataset [87] […] as the MRI contrast allowed, without subdividing the cortex. Then, we used SmartInterpol [5] to complete the segmentation of the missing slices. Next, we manually corrected the SmartInterpol output as needed, until we were satisfied with the 200 μm isotropic segmentation. The cortex was subdivided using standard FreeSurfer routines. This labelling scheme led to a ground truth segmentation with 98 ROIs, which we have made publicly available (details under "Data Availability"). Supplementary Videos 3 and 4 fly over the coronal and axial slices of the labelled scan, respectively.
As explained in the Results section, we used a simplified version of the NextBrain atlas when segmenting the 100 μm scan, in order to better match the ROIs of the automated segmentation and the ground truth (especially in the brainstem). This version was created by replacing the brainstem labels in the histological 3D reconstruction (Fig. 2G, right) with new segmentations made directly on the underlying MRI scan. These segmentations were made with the same methods as for the 100 μm isotropic scan. The new combined segmentations were used to rebuild the atlas.

Automated segmentation with Allen MNI template
Automated labelling with the Allen MNI template relied on registration-based segmentation with the NiftyReg package [65,97], which yields state-of-the-art performance in brain MRI registration [103]. We used the same deformation model and parameters as the NiftyReg authors used in their own registration-based segmentation work [104]: (i) symmetric registration with a deformation model parameterised by a grid of control points (spacing: 2.5 mm = 5 voxels) and B-spline interpolation; (ii) local normalised cross-correlation as objective function (standard deviation: 2.5 mm); and (iii) bending energy regularisation (relative weight: 0.001).
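Conceptually, registration-based segmentation ends with label propagation: the template's labels are resampled into target space through the estimated transform using nearest-neighbour interpolation, so that label values are never averaged. A toy 2D sketch of that final step (with a pure translation standing in for the NiftyReg deformation; the registration itself is not reproduced):

```python
# Sketch: propagate a template label map through a deformation with
# nearest-neighbour interpolation (order=0), as in registration-based
# segmentation. Toy 2-D example.
import numpy as np
from scipy.ndimage import map_coordinates

labels = np.zeros((32, 32), dtype=np.int32)
labels[8:16, 8:16] = 7                     # a template ROI labelled "7"

# deformation: target voxel (i, j) samples template voxel (i - 2, j - 3)
ii, jj = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
coords = np.stack([ii - 2.0, jj - 3.0])

seg = map_coordinates(labels, coords, order=0, mode="constant", cval=0)
print(sorted(np.unique(seg)))              # only the original label values survive
```

Nearest-neighbour interpolation is essential here: linear interpolation would blend integer label codes into meaningless intermediate values at ROI boundaries.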

Linear discriminant analysis (LDA) for AD classification
Linear classification of AD vs controls based on ROI volumes was performed as follows. Leaving one subject out at a time, we used all other subjects to: (i) compute linear regression coefficients to correct for sex and age (intracranial volume was corrected by division); (ii) estimate mean vectors for the two classes (μ1, μ2), as well as a pooled covariance matrix (Σ); and (iii) use the means and covariance to compute an unbiased log-likelihood criterion δ for the left-out subject: δ(x) = (μ1 − μ2)ᵀ Σ⁻¹ [x − 0.5 (μ1 + μ2)], where x is the vector with ICV-, sex-, and age-corrected volumes for the left-out subject. Once the criterion δ has been computed for all subjects, it can be globally thresholded for accuracy and ROC analysis. We note that, for NextBrain, the high number of ROIs renders the covariance matrix singular. We prevent this by using regularised LDA: we normalise all the ROIs to unit variance and then compute the covariance as Σ = S + λI, where S is the sample covariance, I is the identity matrix, and λ = 1.0 is a constant. We note that normalising to unit variance enables us to use a fixed, unit λ, rather than having to estimate λ for every left-out subject.
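A hedged re-implementation of this leave-one-out regularised LDA on synthetic data (the age/sex/ICV covariate correction is omitted for brevity; the toy data and all names are assumptions, not the paper's code):

```python
# Sketch: leave-one-out regularised LDA with per-ROI unit-variance
# normalisation, pooled covariance S + lambda*I, and the criterion
# delta(x) = (mu1 - mu2)^T Sigma^{-1} [x - 0.5 (mu1 + mu2)].
import numpy as np

def loo_lda_scores(X, y, lam=1.0):
    """Leave-one-out scores of the regularised LDA criterion delta."""
    n, d = X.shape
    scores = np.empty(n)
    for i in range(n):
        mask = np.ones(n, dtype=bool); mask[i] = False
        Xt, yt = X[mask], y[mask]
        mu, sd = Xt.mean(0), Xt.std(0) + 1e-12    # unit-variance normalisation
        Z, z = (Xt - mu) / sd, (X[i] - mu) / sd
        Z1, Z0 = Z[yt == 1], Z[yt == 0]
        m1, m0 = Z1.mean(0), Z0.mean(0)
        S = ((len(Z1) - 1) * np.cov(Z1, rowvar=False) +
             (len(Z0) - 1) * np.cov(Z0, rowvar=False)) / (len(Z) - 2)
        Sigma = S + lam * np.eye(d)               # regularised pooled covariance
        w = np.linalg.solve(Sigma, m1 - m0)
        scores[i] = w @ (z - 0.5 * (m1 + m0))     # delta(x) from the text
    return scores

rng = np.random.default_rng(1)
X0 = rng.normal(0.0, 1.0, (60, 20))               # synthetic "controls"
X1 = rng.normal(0.8, 1.0, (60, 20))               # synthetic "patients"
X, y = np.vstack([X0, X1]), np.repeat([0, 1], 60)
scores = loo_lda_scores(X, y)
acc = ((scores > 0).astype(int) == y).mean()      # threshold the criterion at 0
print(acc)
```

Thresholding the pooled scores at zero gives a point on the ROC curve; sweeping the threshold over all score values yields the full curve for AUROC computation.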

B-spline fitting of ageing trajectories
To compute the B-spline fits in Extended Data Fig. 8, we proceeded as follows.
[…] acquisition and FreeSurfer processing. Left: sagittal slice of MRI. Center: corresponding FreeSurfer segmentation. Right: 3D rendering of reconstructed and parcellated pial surface. C | Tissue blocking and processing. Left: blocked coronal slice of the cerebrum. Right: blockface photo of a cerebral block. D | Histology: coronal section of cerebrum stained with LFB (left) and H&E (right). E | AI-assisted labelling of 333 ROIs on LFB (left: cerebrum; mid: brainstem; right: cerebellum). F | 3D rendering of blocks after initial linear alignment using a joint registration method with soft shape constraints. G | Reconstructed coronal slice of LFB (left), H&E (mid), and labels (right), overlaid on MRI, after nonlinear registration with AI and robust Bayesian refinement. H | Sagittal (left), coronal (mid), and axial slices of our atlas. Each voxel is painted with a linear combination of the colours of each label, multiplied by their probabilities. I | Coronal slice of an in vivo MRI scan and its segmentation with the atlas. The atlas can also be used for segmenting ex vivo MRI and as a common coordinate frame for population analyses.

Fig. 2: NextBrain workflow. (A) Photograph of formalin-fixed hemisphere. (B) High-resolution (400 μm) ex vivo MRI scan, FreeSurfer segmentation, and extracted pial surface (parcellated with FreeSurfer). (C) Tissue slabs and blocks, before and after paraffin embedding. (D) Section stained with H&E and LFB. (E) Semi-automated labelling of 333 ROIs on sections using an AI method [5]. (F) Initialisation of affine alignment of tissue blocks using a custom registration algorithm that minimises overlap and gaps between blocks. (G) Refinement of registration with histology and nonlinear transform, using a combination of AI and Bayesian techniques [9,10]. (H) Orthogonal slices of 3D probabilistic atlas. (I) Automated Bayesian segmentation of an in vivo scan into 333 ROIs using the atlas.

Fig. 3: 3D reconstruction of Case 1. (A) Coronal slice of 3D reconstruction; boundaries between blocks are noticeable from uneven staining. (B) Registered MRI, LFB, and H&E histology of a block, with tissue boundaries (traced on LFB) overlaid. (C) Orthogonal view of reconstruction, which is smooth thanks to the Bayesian refinement, and avoids gaps and overlaps thanks to the regulariser. (D) Visualisation of 3D landmark registration error (left); histogram of its magnitude (right); and mean ± standard deviation (bottom), compared with our previous pipeline [6]. See Extended Data for results on the other cases. The average landmark error across all cases is 0.99 mm (vs 1.45 mm for [6]).

Fig. 4: NextBrain probabilistic atlas. (A) Portions of the NextBrain probabilistic atlas (which has 333 ROIs), the SAMSEG atlas in FreeSurfer [2] (13 ROIs), and the manual labels of MNI based on the Allen atlas [7] (138 ROIs). (B) Close-up of three orthogonal slices of NextBrain. The colour coding follows the convention of the Allen atlas [7], where the hue indicates the structure (e.g., purple is thalamus, violet is hippocampus, green is amygdala) and the saturation is proportional to neuronal density. The colour of each voxel is a weighted sum of the colours corresponding to the ROIs, weighted by the corresponding probabilities at that voxel. The red lines separate ROIs based on the most probable label at each voxel, thus highlighting boundaries between ROIs of similar colour; we note that the jagged boundaries are a common discretisation artefact of probabilistic atlases in regions where two or more labels mix continuously, e.g., the two layers of the cerebellar cortex.

Fig. 5: Automated Bayesian segmentation of publicly available ultra-high resolution ex vivo brain MRI [3] using the simplified version of NextBrain, and comparison with ground truth (only available for the right hemisphere). We show two coronal, sagittal, and axial slices. The MRI was resampled to 200 μm isotropic resolution for processing. As in previous figures, the segmentation uses the Allen colour map [7] with boundaries overlaid in red. We note that the manual segmentation uses a coarser labelling protocol.

Fig. 6: Absolute value of Spearman correlation for ROI volumes vs age derived from in vivo MRI scans: (A) Ageing HCP dataset (image resolution: 0.8 mm isotropic; age range: 36-90 years; mean age: 59.6 years); please see main text for the meaning of the markers (letters). (B) OpenBHB dataset [4], restricted to subjects with ages over 35 years to match Ageing HCP (resolution: 1 mm isotropic; age range: 36-86 years; mean age: 57.9 years). (C) Full OpenBHB dataset (age range: 6-86 years; mean age: 25.2 years); please note the different scale of the colour bar. The ROI volumes are corrected by intracranial volume (by division) and sex (by regression). Further slices are shown in Extended Data Fig. 6.

Extended Data Fig. 5:
Sagittal, coronal, and axial slices of the continuous maps of the 3D landmark registration error. The maps are computed from the discrete landmarks (displayed in Fig. 3D and Extended Data Figs. 1-4D) using Gaussian kernel regression with σ = 10 mm. There is no clear spatial pattern for the anatomical distribution of the error across subjects.

Extended Data Fig. 7: Absolute value of Spearman correlation for ROI volumes vs age derived from in vivo MRI scans (additional slices). The visualisation follows the same convention as in Fig. 6: (A) Ageing HCP dataset. (B) OpenBHB dataset, restricted to ages over 35. (C) Full OpenBHB dataset.

Extended Data Fig. 8: Ageing trajectories for select ROIs in the HCP dataset, showing differential patterns in subregions of brain structures (thalamus, hippocampus, cortex, etc.). The red dots correspond to the ROI volumes of individual subjects, corrected by intracranial volume (by division) and sex (by regression). The blue lines represent the maximum likelihood fit of a Laplace distribution with location and scale parameters parameterised by a B-spline with four control points (equally spaced between 30 and 95 years). The continuous blue line represents the location, whereas the dashed lines represent the 95% confidence interval (equal to three times the scale parameter in either direction). Volumes of contralateral structures are averaged across left and right.
Magnetic resonance imaging (MRI) is arguably the most important tool to study the human brain in vivo. Its exquisite contrast between different types of soft tissue provides a window into the living brain without ionising radiation, making it suitable for healthy volunteers. Ad-

Densely labelled 3D histology of five human hemispheres
As the first densely labelled probabilistic atlas of the human brain built from histology, NextBrain enables brain MRI analysis at a level of detail that was previously not possible. Our results showcase: the high accuracy of our 3D histology reconstructions; NextBrain's ability to accurately segment MRI scans acquired in vivo or ex vivo; its ability to separate diseased and control subjects in an Alzheimer's group study; and a volumetric study of healthy brain ageing with unprecedented detail.

A next-generation probabilistic atlas of the human brain
Our pipeline is widely applicable as it produces accurate 3D reconstructions from blocked tissue in standard-sized cassettes, sectioned with a standard microtome. The computer code and aligned dataset are freely available in our public repository (see Data Availability). For educational and data inspection purposes, we have built an online visualisation tool for the multi-modality data, which is available at: github-pages.ucl.ac.uk/NextBrain.
[…] (see the section below and the supplement). This public dataset enables researchers worldwide to conduct their own studies not only in 3D histology reconstruction, but also in other fields, such as: high-resolution segmentation of MRI or histology [61]; MRI-to-histology and histological stain-to-stain image translation [62]; deriving MRI signal models from histology [63]; and many others.

The labels from the five human hemispheres were co-registered and merged into a probabilistic atlas. This was achieved with a method that alternately registers the volumes to the estimate of the template, and updates the template via averaging [64]. The registration method is diffeomorphic [65] to ensure preservation of the neuroanatomic topology (e.g., ROIs do not split or disappear in the deformation process). Crucially, we use an initialisation based on the MNI template, which serves two important purposes: preventing biases towards any of the cases (which would happen if we initialised with one of them); and "centring" our atlas on a well-established CCF computed from 305 subjects, which largely mitigates our relatively low number of cases. Since the MNI template is a greyscale volume, the first iteration of atlas building uses registrations computed with the ex vivo MRI scans. Subsequent iterations register labels directly with a metric based on the probability of the discrete labels according to the atlas [64].

Fig. 4 shows close-ups of orthogonal slices of the atlas, which models voxel-wise probabilities for the 333 ROIs on a 0.2 mm isotropic grid (Fig. 4A). The resolution and detail of the atlas represent a substantial advance with respect to the SAMSEG atlas [2] currently in FreeSurfer. SAMSEG models 13 brain ROIs at 1 mm resolution and is, to the best of our knowledge, the most detailed probabilistic atlas that covers all brain regions. The figure also shows approximately corresponding slices of the manual labelling of the MNI atlas with the simplified Allen protocol [7]. Compared with NextBrain, this labelling is not probabilistic and does not include many histological boundaries that are invisible on the MNI template (e.g., hippocampal subregions, in violet). For this reason, it only has 138 ROIs, while NextBrain has 333.

A comprehensive comparison between all digitised sections of the printed atlas by Mai & Paxinos [1] and approximately equivalent sections of the Allen reference brain and NextBrain is included in the supplement. The agreement between the three atlases is generally good, especially for the outer boundaries of whole structures, e.g., the whole hippocampus, amygdala, or thalamus. Mild differences can be found in the […]
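Once the label maps are co-registered, building the probabilistic atlas reduces to voxel-wise label frequency estimation across cases. A minimal sketch with toy, pre-aligned 1D "label maps" (the diffeomorphic registration and template-update loop are not reproduced here):

```python
# Sketch: turn N aligned discrete label maps into a probabilistic atlas
# by one-hot encoding and averaging across cases.
import numpy as np

aligned = np.array([
    [0, 0, 1, 1, 2],
    [0, 1, 1, 1, 2],
    [0, 0, 1, 2, 2],
])                                    # 3 cases, 5 voxels, labels {0, 1, 2}
n_labels = 3

# one-hot encode each voxel's label, then average across cases
onehot = (aligned[..., None] == np.arange(n_labels)).astype(float)
atlas = onehot.mean(axis=0)           # shape (5 voxels, 3 labels); rows sum to 1
print(atlas)
```

Each row of `atlas` is a categorical distribution over labels at that voxel, which is exactly the quantity a Bayesian segmentation method uses as its spatial prior.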

Automated segmentation of ultra-high resolution ex vivo MRI
One of the new analyses that NextBrain enables is the automated fine-grained segmentation of ultra-high-resolution ex vivo MRI. Since motion is not a factor in ex vivo imaging, very long MRI scanning times can be used to acquire data at resolutions that are infeasible in vivo.

Sample slices and their corresponding automated and manual segmentations are shown in Fig. 5. The exquisite resolution and contrast of the dataset enable our atlas to accurately delineate a large number of ROIs with very different sizes, including small nuclei and subregions of the hippocampus, amygdala, thalamus, hypothalamus, midbrain, etc. Differences in label granularity aside, the consistency between the automated and ground truth segmentations is qualitatively very strong.

To the best of our knowledge, this is the most comprehensive dense segmentation of a human brain MRI scan to date. As ex vivo datasets with tens of scans become available [61,69,70], our tool has great potential for augmenting mesoscopic studies of the human brain. Moreover, the labelled MRI that we are releasing has great potential in other neuroimaging studies, e.g., for training or evaluating segmentation algorithms; for ROI analysis in the high-resolution ex vivo space; or for volumetric analysis via registration-based segmentation.

Automated segmentation of ultra-high resolution ex vivo brain MRI and simplified version of NextBrain atlas
We first corrected the ROI volumes for sex (using regression) and intracranial volume (by division). Next, we modelled the data with a Laplace distribution, which is robust against outliers that may be caused by potential segmentation mistakes. Specifically, we used an age-dependent Laplace distribution where the location μ and scale b are both B-splines with four evenly spaced control points at 30, 51.6, 73.3, and 95 years. The fit is optimised with gradient ascent over the log-likelihood function:

L(θ_μ, θ_b) = Σ_s log Lap(v_s; μ(a_s; θ_μ), b(a_s; θ_b)),

where Lap(·; μ, b) is the Laplace distribution with location μ and scale b; v_s is the volume of the ROI for subject s; a_s is the age of subject s; μ(a; θ_μ) is a B-spline describing the location, parameterised by θ_μ; and b(a; θ_b) is a B-spline describing the scale, parameterised by θ_b. The 95% confidence interval of the Laplace distribution is given by μ ± 3b.
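The fit can be sketched as follows. This is a simplified, synthetic-data illustration: it uses a degree-1 (piecewise-linear) spline basis over the four control ages instead of the paper's B-splines, and a generic derivative-free optimiser instead of gradient ascent; all names and data are assumptions.

```python
# Sketch: maximum-likelihood fit of an age-dependent Laplace distribution
# whose location and log-scale are linear combinations of a spline basis.
import numpy as np
from scipy.optimize import minimize

knots = np.array([30.0, 51.6, 73.3, 95.0])       # control ages from the text

def basis(age):
    """Degree-1 (hat-function) spline basis; a partition of unity over age."""
    return np.column_stack([
        np.interp(age, knots, np.eye(len(knots))[k]) for k in range(len(knots))
    ])

def neg_loglik(theta, B, v):
    mu = B @ theta[:4]                           # age-dependent location
    b = np.exp(B @ theta[4:])                    # age-dependent scale, kept > 0
    return np.sum(np.log(2.0 * b) + np.abs(v - mu) / b)

rng = np.random.default_rng(2)
age = rng.uniform(30.0, 95.0, 400)
v = 10.0 - 0.05 * age + rng.laplace(0.0, 0.5, 400)   # synthetic shrinking ROI

B = basis(age)
# initialise the location coefficients with a least-squares fit
theta0 = np.concatenate([np.linalg.lstsq(B, v, rcond=None)[0], np.zeros(4)])
res = minimize(neg_loglik, theta0, args=(B, v), method="Nelder-Mead",
               options={"maxiter": 20000})
mu_hat = basis(np.array([60.0])) @ res.x[:4]     # fitted location at age 60
print(float(mu_hat[0]))
```

Parameterising the scale through an exponential keeps it positive without constraints, and the fitted dashed confidence band in Extended Data Fig. 8 then follows directly as μ(a) ± 3 b(a).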