Connectome spectrum electromagnetic tomography: a method to reconstruct electrical brain sources at high-spatial resolution

The discovery that the structural connectome serves as a compact representation of brain activity allowed us to apply compressed sensing to M/EEG source reconstruction of brain electrical activity. We show that the introduction of this biological constraint significantly increases the signal-to-noise ratio and spatial conspicuity of the reconstruction in human datasets. This significant gain in precision and accuracy could allow neuroscientists to extend their research beyond current limits.


Introduction
In order to understand the complex mechanisms underpinning human perception, cognition and emotion, we need to consider the neuronal activity at the whole-brain level. Although rapid advances in technology are allowing scientists to get closer to this aim in small animal models [1][2][3] , this endeavor is still out of reach in human neuroimaging, where a combination of different imaging tools is needed to simultaneously maximize spatial and temporal resolution 4 .
When choosing an imaging technology to estimate brain activity, there is a trade-off between spatial and temporal resolution. This is typically the case for the two most common functional neuroimaging techniques, namely functional Magnetic Resonance Imaging (fMRI), which offers high spatial but poor temporal resolution, and magneto/electro-encephalography (M/EEG), which offers the opposite. In order to circumvent this limitation, Electromagnetic Tomography (ET) has been proposed as a computational method to image electrical activity. This works by combining M/EEG and structural MR images to achieve both decent spatial resolution and rapid temporal sampling of neural signals 5 (Fig. 1a).
Despite its great potential, ET has clear weaknesses 6 . There are two main sources of unreliability in ET, one is associated with the forward model and another with the inverse problem. The forward model describes the transmission of electric fields in the head (see Fig. 1a). It models the measured M/EEG data produced from a map of electrical activation in the gray-matter. Accurate forward models solve Maxwell's equations taking into account the different conductivity properties of different tissues of the brain. The associated problems arise from the discretisation of the problem and the inaccuracy of the estimated conductivities. On the other hand, the inverse problem involves estimating the activation pattern that produces a given M/EEG measurement. Since the number of sources in the gray-matter is much larger than the number of electrodes or sensors, the inverse problem is ill-posed, i.e., an infinite number of electrical cortical configurations can produce the measured M/EEG data. In order to solve the inverse problem, assumptions on the source distributions need to be made. The main problem in solving the inverse problem is the validity of the assumptions used. In this paper, we focus our efforts on improving the solution of the inverse problem.
The constraints that are commonly used to solve the M/EEG inverse problem are in fact known to produce low-resolution reconstructions [7][8][9][10][11] . This is consequential as a large repertoire of neural activity patterns escapes the assumptions imposed by inverse reconstruction methods. In order to solve the M/EEG inverse problem at higher spatial reliability, new categories of realistic assumptions are needed.
Over the last two decades, converging evidence has shown that human brain activity exhibits patterns that are highly constrained by the underlying structural connectivity 12 . As for any other physical object, such as a metal plate, or a vibrating rope, resonant (spatial) frequencies of the brain are determined by the underlying structure. Recent data suggest that brain activity can be efficiently expressed as combination of its normal modes 13 , hence providing a natural link between brain function and its underlying connectivity structure. These normal modes are known in the literature as connectome harmonics and form the building blocks of well-known brain functional networks associated both with rest and different tasks 13 (Fig. 1c,d). Importantly, it has been shown that visual perception 14,15 and different states of consciousness 16 are sparsely characterized by the activation pattern of connectome harmonics.
In the current work, we describe a novel method for M/EEG tomography that exploits the sparse structure of brain activity on its connectome spectrum and wavelet domains ( Fig. 1c-f). More precisely, we formulate the M/EEG tomography problem within the compressed sensing framework 17,18 (Fig. 1g).
Others have already attempted to use structural connectivity information to constrain the M/EEG inverse solution by enforcing smoothness among connected sources 21,22 , sparsity among connectivity wavelets 19 , and other methods 20 . These articles suggest a potential improvement in the localization accuracy due to the anatomical constraints. However, a reliable validation of this effect is still lacking.
In this work, we demonstrate that by incorporating the high-resolution connectivity structure of the brain (Fig. 1b), we increase the SNR and the accuracy of the reconstructed brain activity (Fig. 1h). We tested our method in simulated EEG data from brain activity patterns corresponding to fMRI task activations ( Fig. 2a-f), and in real EEG data from two experiments on visual evoked potentials ( Fig.  2g-k).

Figure 1. Connectome spectrum electromagnetic tomography (CSET) pipeline. a)
Illustration of the EEG inverse problem. The EEG inverse problem consists of estimating the electrical sources in the brain that generated the measured electrode signal at the scalp. This problem is ill-posed because the number of sources (N) is much larger than the number of electrodes (M): an infinite number of source combinations could generate a single electrode signal. For this reason, mathematical regularization based on mechanistic or empirical assumptions about brain activity is needed. b) The high-resolution individualized connectomes are constructed by combining the long-range white-matter connectivity (based on tractography) and the short-range cortical connectivity (based on Euclidean distance). c) High-resolution connectome harmonics are constructed as the eigenvectors of the high-resolution connectome graph Laplacian. Here only the first 5 (out of N) eigenvectors are shown. d) Connectome spectrum, i.e., the eigenvalues corresponding to the connectome harmonics. e) Wavelet filters defined over the distribution of values in the connectome spectrum. Each scale is used to define the wavelet transform, and each one weighs a different portion of the spectrum. f) Connectome wavelets for 4 different scales, corresponding to the filters in subpanel e); the arrow indicates the node on which the wavelets are localized. g) Definition of the inverse problem solved in CSET and CWET. The first formula defines a general penalized optimization problem. The second formula is specific to compressed sensing. By exchanging the sparsifying transform with the graph Fourier transform or with the wavelet transform, we obtain the CSET and CWET problems, respectively. h) Illustration of a CSET solution for one subject in the face perception experiment. In this illustration, the optimization starts from an initial estimate (the eLORETA solution) that solves the fidelity term and ends in an estimate that has a sparse connectome Fourier transform. The fusiform face area is highlighted in the final estimate (right-most spatial maps), but not in the state-of-the-art solution (eLORETA, left-most spatial maps).

Increased sensitivity towards physiological patterns
In silico simulations are a very versatile tool to test the behavior of source reconstruction algorithms under different conditions (e.g., different parameter configurations, data sampling strategies, or assumptions underlying the generative model of the signal). In order to minimize the bias associated with the selection of the generative model, and to obtain brain activity signals that are as physiologically plausible as possible, we opted to use fMRI data as ground truth signals (Fig. 2a). Specifically, we used task fMRI to obtain a detailed spatial response to faces and moving dots, for each participant.
To assess reconstruction performance, we computed the r-squared metric between the ground truth and reconstructed signals. This analysis indicated that CSET, our proposed method using the individuals' brain connectivity to reconstruct brain activity, recovers the ground truth signal with the best accuracy across montages with different numbers of electrodes (Fig. 2c-d). In fact, the CSET reconstruction captured the fMRI response with more than twice the accuracy of the second-best method (r = 0.46 versus r = 0.23 with MNE). Furthermore, CSET best approximated the distribution of graph Fourier coefficients of the fMRI signal (Fig. 2e), suggesting that state-of-the-art methods significantly underestimate the sparsity of brain activity in this space (KS-distance = 0.42 vs [0.59, 0.59, 0.62, 0.62, 0.61]).
Additionally, the qualitative assessment of the reconstructions suggests that, while the state-of-the-art methods concentrate all the signal energy in the frontal and parietal regions, i.e., the regions close to the electrodes, the CSET and CWET methods are also able to capture ventral activation.

Improved EEG source reconstruction accuracy
Visual evoked potentials of well-known neurophysiological paradigms (such as face or motion perception) provide data with high signal-to-noise ratio (in comparison to other types of EEG experimental paradigms), and their activation response is well documented in other imaging modalities and animal models. We reconstructed brain activity maps from the VEPCON EEG data 21 (Fig. 3a). This dataset contains two sets of EEG VEP recordings, the face dataset (faces - scrambled faces) and the motion dataset (coherent - incoherent motion) (see Methods). We focus our analysis of the spatial conspicuity of the reconstruction on each participant's main activation peak, i.e., the time point at which the difference in the measured EEG response between the experimental and contrast stimuli is maximal in absolute terms. As a ground truth, we compared the EEG reconstructions with the previously used fMRI response. Figure 3 shows two improvements in reconstructing the brain response during face and motion perception by the proposed CSET and CWET methods over state-of-the-art methods: first, their reconstructions more accurately capture the expected activation pattern (retrieved from fMRI), as shown by higher r-squared values (Fig. 3b). Second, the reconstruction is more precise for these methods, as shown by the boosted signal-to-noise ratio (SNR, Fig. 3c) and the enhanced dynamics of each task's region of interest (Fig. 3d, the fusiform gyrus in the face task and the middle temporal area in the motion task). The reconstructions are shown in Fig. 3e.

Conclusion
In the field of neuroimaging, when choosing a non-invasive tool to image whole brain activity, there is a trade-off between temporal and spatial resolution. Electromagnetic tomography (ET) combines structural MRI with M/EEG recordings to maximise resolution in both domains and hence has the potential to overcome this trade-off and become the tool of choice 5 .
However, the lack of real brain activity data and regularisers that accurately constrain neurophysiologically plausible signals greatly limits the applicability and usefulness of existing ET methods, which are unreliable and yield low spatial resolution 6 .
In this paper, we use multimodal neuroimaging to rigorously investigate the reliability of ET and propose new methods based on incorporating diffusion MRI-derived structural connectivity data into the solution. Particularly, based on previous results demonstrating a close relationship between brain function and brain structural connectivity 14,15 , we exploit the connectivity structure of the brain to improve existing ET methods.
We developed Connectome Spectrum and Connectome Wavelet Electromagnetic Tomography (CSET and CWET), which enhance the sparsity in the connectome spectrum of brain activity, i.e., the Fourier representation of activity in the connectivity graph. These two methods are based on compressed sensing, a well-established signal processing tool for solving inverse problems that exploits the sparsity of signals.
Using these two new algorithms, we demonstrate a significant increase in the accuracy and precision of the reconstructed brain activity signals. In a first step, we use simulated EEG signals from fMRI responses to demonstrate that compressed sensing using appropriate regularisers can capture realistic neurophysiological patterns. In particular, our results indicate that methods based on minimum norm estimation are able to reconstruct temporal patterns with much higher accuracy than state-of-the-art methods based on the temporal statistics of the data.
In a second step, we show that, by reconstructing real EEG signals, our methods can reconstruct brain responses with higher spatial conspicuity and more robustness to intrinsic noise in the EEG signal, during two different perceptual processes. In particular, the signal-to-noise ratio of the reconstructed signal and the signal energy in those brain regions most involved in the task are increased.
Thus, our method addresses the problems of reliability and spatial resolution at the same time, taking advantage of the latest advances in graph signal processing and connectomics. The spatio-temporal accuracy of our imaging method will help neuroscientists to extend their research beyond the current limitations, i.e., the low sampling frequency of functional MRI and the poor spatial resolution of M/EEG.

The forward model
When a pyramidal neuron in the gray matter receives an excitatory postsynaptic potential (EPSP), its voltage-dependent sodium channels open, the positively charged sodium ions enter the neuron and, due to the electrical neutrality conservation principle, an active source of current is produced in the apical region of that neuron. This creates an electrical dipole. When many neighbouring pyramidal neurons activate simultaneously, they generate an electrical dipole that is strong enough to traverse the different tissues in the head and be measured by M/EEG electrodes 22. The forward model is written as $\mathbf{b}_t = \mathbf{A}\mathbf{x}_t$, where $\mathbf{A} \in \mathbb{R}^{M \times N}$ is known as the lead-field matrix, and its components $A_{j,i}$ define the contribution of the $i$-th cortical source to the $j$-th M/EEG sensor. The lead-field matrix summarizes the contribution from each of the cortical sources to each of the scalp electrodes, and it is computed by solving Maxwell's equations 22. Given our primary focus on the inverse problem, we decided to use the default head volume and forward computation pipeline implemented in MNE-Python 23, as it is a well-documented method in an openly available software toolbox.
In this work, we constrain the sources to be normally oriented to the cortical surface of the brain. There are two main reasons for this choice. First, the complexity of the algorithm is reduced by reconstructing a single value per source rather than one value per coordinate axis (x, y and z). Second, the dipoles originating from the excitation of pyramidal neurons are mostly oriented normal to the cortical surface 22.
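As a minimal illustration of the forward model, the sketch below simulates sensor measurements from a sparse cortical activation. The random matrix is only a stand-in for a real lead-field (computing an actual lead-field requires a head model, e.g., via MNE-Python), and the sizes are toy values:

```python
import numpy as np

rng = np.random.default_rng(0)
N, M = 500, 64                                   # toy numbers of sources and sensors
A = rng.standard_normal((M, N))                  # stand-in for the lead-field matrix
x_t = np.zeros(N)
x_t[rng.choice(N, size=5, replace=False)] = 1.0  # a sparse cortical activation
b_t = A @ x_t                                    # forward model: b_t = A x_t
```

With fixed-orientation sources, each column of the lead-field corresponds to one cortical location, which is what makes the single-value-per-source parameterization possible.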

The inverse problem
Conversely to the forward model, the inverse problem aims at estimating the electrical sources $\mathbf{x}_t$ in the gray matter that generated the scalp EEG measurements $\mathbf{b}_t$. Using a regularized unconstrained optimization framework, we formulate the inverse problem as:

$$\hat{\mathbf{x}}_t = \arg\min_{\mathbf{x}} \; \mathcal{L}(\mathbf{b}_t, \mathbf{A}\mathbf{x}) + \lambda \, \mathcal{R}(\mathbf{x}), \qquad (2)$$

where $\hat{\mathbf{x}}_t$ is the estimated reconstruction of the brain activation pattern at time $t$, $\mathcal{L}(\mathbf{b}_t, \mathbf{A}\mathbf{x})$ is the likelihood function, or the fidelity loss, between the measured M/EEG data and the estimated reconstruction, $\mathcal{R}(\mathbf{x})$ is the regularization function that imposes structure in the data according to some prior assumption, and $\lambda$ is the hyper-parameter that controls the weight of this regularization. Given that the number of gray-matter sources ($N$) is much larger than the number of M/EEG sensors ($M$), the inverse problem is under-determined, or ill-posed: there exists an infinite number of source activity configurations ($\mathbf{x}_t$) that produce the measured EEG scalp potential ($\mathbf{b}_t$). The inclusion of a regularization function allows finding a unique solution among those configurations.
The most common regularization approach is the so-called Tikhonov regularization, which enforces solutions to have the minimum possible norm (i.e., $\mathcal{R}(\mathbf{x}) = \|\mathbf{x}\|_2^2$). For ET, such regularization tends to produce imprecise reconstructions, and additional priors based on physiological properties of neural activity need to be incorporated. A group of ET methods imposes local smoothness among spatially adjacent gray-matter regions 7, e.g., by penalizing $\|\mathbf{W}\mathbf{x}\|_2^2$, where the operator $\mathbf{W}$ computes a weighted average of the neighborhood of each cortical source. The idea behind this regularization is that, in order to be detected, the activity of many neighboring neural populations needs to be synchronous. The solutions from these methods tend to be more accurate but offer very low spatial resolution. We hypothesize that this is because the scale of local spatial synchrony is not the same as the scale of smoothness imposed by this type of method.
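For reference, the Tikhonov (minimum-norm) solution has a closed form that can be sketched in a few lines; the sensor-space formulation below is a standard identity, and the matrix sizes and λ in the usage are illustrative:

```python
import numpy as np

def tikhonov_inverse(A, b, lam):
    """Minimum-norm estimate: argmin_x ||b - A x||^2 + lam ||x||^2.

    Uses the sensor-space normal equations, x = A^T (A A^T + lam I)^{-1} b,
    which is cheap when the number of sensors M is much smaller than N.
    """
    M = A.shape[0]
    return A.T @ np.linalg.solve(A @ A.T + lam * np.eye(M), b)
```

For small λ this estimate fits the data almost exactly while having the smallest norm among all exact fits, which is precisely why it tends to spread energy diffusely rather than resolve focal sources.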

The high-resolution structural connectome
The structural connectome describes how the fiber bundles in the brain's white matter support connectivity among different gray-matter regions. For the VEPCON dataset, we estimated the tractograms from the diffusion-weighted imaging (DWI) data of each participant using Connectome Mapper 3 24. Tractography was performed after denoising the diffusion data with MRtrix MP-PCA, bias-field correction with ANTs N4, and eddy-current and motion correction with FSL, by launching 10 million deterministic streamlines (with ACT), which were subsequently filtered with SIFT. After that, we followed the approach presented by Mansour and colleagues (2021) to obtain high-resolution individual connectomes at a resolution of approximately 8,000 nodes on the brain's gray-matter surface.
For datasets with no diffusion MRI data available, we constructed a group consensus connectome from high-resolution individual connectomes of the VEPCON dataset. The consensus connectome was created based on the distance-dependent thresholding method 26 .
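A simplified sketch of distance-dependent consensus thresholding is given below: within each distance bin, the edges that are most consistently present across subjects are kept, so the consensus preserves the average per-subject edge count. The binning scheme and per-bin edge budget are illustrative simplifications of the published method; `adjs` stacks the subjects' adjacency matrices and `dists` holds inter-node distances:

```python
import numpy as np

def consensus_connectome(adjs, dists, n_bins=10):
    """Distance-binned consensus (simplified sketch of the published method)."""
    adjs = np.asarray(adjs, dtype=float)          # (S, N, N) subject adjacencies
    S, N, _ = adjs.shape
    iu = np.triu_indices(N, k=1)
    consistency = (adjs > 0).mean(axis=0)[iu]     # fraction of subjects with each edge
    weight = adjs.mean(axis=0)[iu]                # mean edge weight across subjects
    d = np.asarray(dists)[iu]
    avg_edges = int((adjs > 0)[:, iu[0], iu[1]].sum() / S)  # mean edges per subject
    bins = np.quantile(d, np.linspace(0, 1, n_bins + 1))
    keep = np.zeros(d.size, dtype=bool)
    for lo, hi in zip(bins[:-1], bins[1:]):
        in_bin = (d >= lo) & (d <= hi)
        k = int(round(in_bin.mean() * avg_edges))  # per-bin edge budget
        idx = np.where(in_bin)[0]
        keep[idx[np.argsort(consistency[idx])[::-1][:k]]] = True
    C = np.zeros((N, N))
    C[iu[0][keep], iu[1][keep]] = weight[keep]
    return C + C.T                                # symmetric consensus adjacency
```

Binning by distance prevents the consensus from discarding long-range connections, which are less consistently reconstructed by tractography than short-range ones.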

Connectome based Compressed Sensing
Compressed sensing (CS) aims to reconstruct an unknown signal $\mathbf{x}$, which has a higher dimensionality than that of the measured signal $\mathbf{b}$, by solving the optimization problem defined by Eq. (3). CS leverages the known sparsity of the signal to find a unique solution to the ill-posed inverse problem. If the signal $\mathbf{x}$ is not directly sparse, but admits a sparse representation through some basis $\mathbf{F}$, then an appropriate CS formulation is:

$$\hat{\mathbf{x}} = \arg\min_{\mathbf{x}} \; \|\mathbf{b} - \mathbf{A}\mathbf{x}\|_2^2 + \lambda \|\mathbf{F}\mathbf{x}\|_1 + \beta \|\mathbf{x}\|_2^2, \qquad (3)$$

where $\mathbf{A}$ is the forward operator, i.e., the gain matrix, and $\lambda$ and $\beta$ are two regularization parameters that control the proportion of sparsity and power of the reconstruction.
The common optimization strategy of the problems defined by CSET and CWET (defined below) is handled as follows: i.
Depth normalization: The $L_2$-norm term is known to bias the optimized solution towards sources that are closer to the electrodes 27. This is usually alleviated by incorporating a weighting factor in the $L_2$-norm term 28. Here, in order to avoid the increased computational effort of this weighting, we adopt a strategy similar to that of MNE-Python and weight the gain matrix only once, as follows: $\tilde{\mathbf{A}} = \mathbf{A}\mathbf{W}^{-1}$, where $\mathbf{W}$ is a diagonal matrix containing the $L_2$-norm of each column of $\mathbf{A}$ (one weight per source). This is equivalent to weighting the $L_2$-norm term, at a much lower computational cost.
ii. Data normalization: The total power of the EEG data affects the power of the reconstructed source pattern, and consequently the effect of the regularization parameters. In order to allow for hyperparameter selection that is independent of the statistics of the EEG data, we normalize the EEG data to have unit norm. iii. Optimization via accelerated proximal gradient descent: The penalized optimization problems defined by CSET and CWET both contain an $L_1$-norm, which is a non-differentiable functional. For this reason, iterative optimization methods based solely on the gradient, such as gradient descent, cannot be used. Here, we used the accelerated proximal gradient descent method (APGD) 29, a well-known primal-dual splitting optimization algorithm. Although CSET and CWET feature three operators, we chose APGD over other primal-dual splitting algorithms specialized for inverse problems with three operators. The reason is that the fidelity term and the squared $L_2$-norm term can be composed into a single differentiable map, yielding in practice a problem defined by two terms, one differentiable and one non-differentiable. In this sense, the advantage of APGD over other algorithms is that it requires less splitting and is thus computationally more efficient 30. The optimization is solved with Pycsou 31, a Python package for solving linear inverse problems with state-of-the-art proximal algorithms (available at https://matthieumeo.github.io/pycsou/html/index). The step sizes and acceleration factors are automatically set by Pycsou, which selects them in order to fulfill the convergence-rate guarantees. As a stopping criterion, we selected an absolute relative error of 10^-4. iv.
Re-scale reconstruction: As a final step, the solution of the optimization problem, i.e., the reconstruction, is re-scaled by the normalization factors applied to the EEG data and the gain matrix, as follows: $\hat{\mathbf{x}}_t = s_b \, \mathbf{W}^{-1} \hat{\mathbf{z}}_t$, where $\hat{\mathbf{z}}_t$ is the solution obtained with the normalized data, $s_b$ is the norm of the original EEG data, and $\mathbf{W}$ is the diagonal weighting matrix from the depth-normalization step.
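The optimization itself can be sketched with a FISTA-style accelerated proximal gradient loop, as a stand-in for the APGD implementation in Pycsou. The sketch assumes an orthonormal sparsifying basis U (as is the case for the connectome harmonics), so the problem of Eq. (3) with F = Uᵀ can be solved directly in the spectral domain, where the proximal operator of the L1 term is a simple soft-threshold; all parameter values are illustrative:

```python
import numpy as np

def soft_threshold(v, t):
    # proximal operator of t * ||.||_1
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def cset_fista(A, b, U, lam=1e-2, beta=1e-2, n_iter=300):
    """Sketch: min_s ||b - A U s||^2 + beta ||s||^2 + lam ||s||_1, with x = U s.

    For orthonormal U this matches Eq. (3) with F = U^T, since ||x|| = ||s||
    and ||F x||_1 = ||s||_1. The two quadratic terms form the smooth part.
    """
    B = A @ U                                      # forward operator in spectral domain
    L = 2 * (np.linalg.norm(B, 2) ** 2 + beta)     # Lipschitz constant of the gradient
    step = 1.0 / L
    s = np.zeros(U.shape[1]); z = s.copy(); t = 1.0
    for _ in range(n_iter):
        grad = 2 * (B.T @ (B @ z - b) + beta * z)  # gradient of the smooth part
        s_new = soft_threshold(z - step * grad, step * lam)
        t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
        z = s_new + ((t - 1) / t_new) * (s_new - s)  # Nesterov momentum
        s, t = s_new, t_new
    return U @ s                                   # back to source space
```

The depth and data normalizations described above would simply rescale `A` and `b` before calling this routine, and the result would be rescaled back afterwards.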

Connectome Spectrum Electrical Tomography (CSET)
The reader might have already noticed that Eq. (3) can be seen as a specific case of the general formulation of the M/EEG inverse problem in Eq. (2). We have shown in previous publications that the brain activity $\mathbf{x}$ admits a sparsifying transform $\mathbf{U}^\top$, the connectome-based graph Fourier transform, which represents it as a connectome spectrum $\hat{\mathbf{x}}$ (see Eq. (7)). Connectome Spectrum Electromagnetic Tomography (CSET) consists in solving the optimization problem formulated in Eq. (3) by leveraging the sparsity of brain activity when represented in the connectome spectrum basis.
The structural connectome defines a graph in which each node represents a region on the brain's cortical surface, and the edges define the connectivity between each pair of nodes as estimated from the high-resolution connectome. To obtain the connectome spectrum of the brain activity signal, we first need to perform an eigendecomposition of the connectome graph Laplacian. The matrix $\mathbf{U}$, containing the eigenvectors of the graph Laplacian in its columns, is used to compute the connectome spectrum $\hat{\mathbf{x}}$ of the brain activity signal $\mathbf{x}$ by means of the graph Fourier transform (see our previous publications for a detailed explanation 14,32):

$$\hat{\mathbf{x}} = \mathbf{U}^\top \mathbf{x}. \qquad (7)$$
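The eigendecomposition and the graph Fourier transform of Eq. (7) can be sketched as follows. This uses a dense combinatorial Laplacian L = D - W; for connectomes with ~8,000 nodes the same computation applies but is heavier:

```python
import numpy as np

def connectome_harmonics(W):
    """Eigendecomposition of the graph Laplacian L = D - W (W: adjacency matrix)."""
    L = np.diag(W.sum(axis=1)) - W
    lam, U = np.linalg.eigh(L)   # ascending eigenvalues = the connectome spectrum
    return lam, U                # columns of U = connectome harmonics

def gft(U, x):
    """Graph Fourier transform (Eq. 7): x_hat = U^T x."""
    return U.T @ x

def igft(U, x_hat):
    """Inverse graph Fourier transform: x = U x_hat."""
    return U @ x_hat
```

Because the Laplacian is symmetric, U is orthonormal and the transform is lossless: the inverse GFT recovers the original signal exactly.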

Connectome Wavelets Electrical Tomography (CWET)
Building on our work with the connectome harmonics and Graph Fourier Transform, we leverage another tool from the field of graph signal processing: graph wavelets. Wavelets offer an attractive framework for processing brain signals as they are both localized in space and frequency. This contrasts with connectome harmonics that are only localized in terms of frequency but highly distributed spatially, i.e., each harmonic is spread over all nodes of the brain graph. Although the brain works in a highly distributed manner, it seems reasonable to expect that some brain signals are more localized in certain brain areas.
We follow the framework proposed by [33] to define wavelets on a graph. This Spectral Graph Wavelet Transform (SGWT) is closely linked to the previously described graph Fourier transform, as the wavelets are defined in the graph frequency domain. Wavelets are formulated as band-pass filters acting on a signal's spectrum; these filters are called kernels, and their shape depends on the chosen wavelet design. Formally, the wavelet operator $T_g$ as defined in [33] is applied to a function on the graph nodes (i.e., a brain signal) by modulating the connectome harmonics in the graph Fourier domain: $\widehat{T_g x}(l) = g(\lambda_l)\,\hat{x}(l)$. Taking the inverse GFT, we obtain the following formulation:

$$(T_g x)(n) = \sum_{l=0}^{N-1} g(\lambda_l)\,\hat{x}(l)\,u_l(n),$$

where $u_l$ is the $l$-th eigenvector of the graph Laplacian and $x(n)$ is a function on the vertices. As mentioned above, the kernels are band-pass filters. In order to fully represent any signal defined on the graph, a "scaling" kernel is added to the construction of the transform; this latter filter conserves the low-pass component of the signals, and its shape also depends on the chosen wavelet kernel design. The scaling kernel and wavelets at different scales are represented in Fig. 1e for one connectome from the VEPCON dataset. We chose the Meyer-like wavelet design presented in 34, as it is particularly well adapted to represent brain signals.
Practically, a wavelet transform matrix can be generated by applying $T_g$ to Kronecker delta signals localized at each node of the graph, and this for each scale. This transform matrix allows computing the wavelet transform of any signal by matrix-vector multiplication, similarly to the graph Fourier transform implementation. Contrary to the GFT, which corresponds to an N x N square matrix, the SGWT expressed as a matrix has size N x (K x N), where K is the number of scales. It is thus a K-overcomplete frame with linearly dependent columns. We set K to 5 as a good compromise between computational cost and the splitting of the graph spectrum into different sub-bands.
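The construction of the transform matrix can be sketched as below, stacking the scaling-function block and one wavelet block per scale. The band-pass kernel g and the scaling kernel h are simple illustrative choices standing in for the Meyer-like design used in the paper:

```python
import numpy as np

def sgwt_matrix(L_graph, scales):
    """Stack the scaling block and one wavelet block per scale into a
    ((K+1)*N) x N frame matrix; column n holds the atoms centered at node n."""
    lam, U = np.linalg.eigh(L_graph)
    lam_max = lam[-1] if lam[-1] > 0 else 1.0

    def g(x, s):                      # illustrative band-pass kernel, with g(0) = 0
        y = s * x
        return y * np.exp(1.0 - y)

    h = np.exp(-(lam / (0.5 * lam_max)) ** 4)    # scaling kernel: low-pass content
    blocks = [U @ np.diag(h) @ U.T]              # applying T_g to all deltas at once
    for s in scales:
        blocks.append(U @ np.diag(g(lam, s)) @ U.T)
    return np.vstack(blocks)
```

Since $U \, \mathrm{diag}(g(\lambda)) \, U^\top$ applied to a delta at node n is exactly the wavelet centered at n, one matrix product per scale yields all atoms simultaneously.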

Datasets
Recently, two multimodal neuroimaging datasets have been released that have great potential to help develop source imaging tools as well as validate their performance and clinical relevance (referred to as the VEPCON datasets from now on) 21. In these datasets, visual evoked potentials were recorded for 20 participants while they discriminated between either briefly presented faces and scrambled faces, or coherently moving stimuli and incoherent ones. This dataset is openly accessible (https://openneuro.org/git/0/ds003505). Apart from high-density EEG of visual evoked potentials, the datasets also include structural MRI, diffusion-weighted images (DWI) and single-trial behavior. This allowed us to study the reconstruction of brain activity maps evoked by very well-studied paradigms.
Apart from the data released along with the original publication, here we also include fMRI spatial maps of the same participants, acquired under a similar experimental task. These spatial maps are used in this work for two different purposes. First, for each subject of the VEPCON datasets, we used the fMRI activation pattern corresponding to the subject's response to a face stimulation paradigm as ground truth: this signal served as simulated electrical brain activity, against which the reconstruction from the simulated measurements was compared. Second, the same maps served as the reference activation pattern when evaluating the reconstructions of the real EEG data.

i. MRI pre-processing
The acquisition of functional MRI data (fMRI) was performed at the HFR Fribourg -Hôpital cantonal (Fribourg, CH), using a Discovery MR750 3.0T (GE Healthcare, Chicago, USA). fMRI data were acquired while each subject performed two different visual tasks, one on faces, and another on moving dots.
Across subjects, the order of the two tasks was counterbalanced. For each task, we used a block structure similar to the one adopted for the EEG session (see Methods), but with 48 trials for each task condition (instead of 200). At the end of each block, the instruction to "REST" was presented, followed by a fixed break of 12 s (Rest period). Exceptionally, the last Rest period had a duration of 60 s. In each trial, subjects had a limited time of 1,500 ms to respond, after which their response was considered incorrect.
An MRI-compatible fiber optic response pad (Current Designs Inc., Philadelphia, USA) was used to collect the participant responses. The visual stimuli were presented on a NordicNeuroLab (Bergen, NO) MRI-compatible LCD monitor (32 inches diagonal size, 1920x1080 resolution, 405 cd/m² surface luminance, 4000:1 contrast, 60 Hz refresh rate, 6.5 ms response time), placed above the scanner bed at 244 cm from the subject's eyes and made visible to the subject through a mirror placed on the head coil. Those subjects who suffered from an eye disorder (e.g., myopia) wore MRI-compatible glasses with appropriate lenses for optical correction. The fMRI data were acquired using a T2*-weighted EPI sequence with 40 slices, a slice thickness of 3 mm, between-slices spacing of 0.3 mm, interleaved bottom-up slice acquisition, anterior-to-posterior phase encoding direction, repetition time (TR) of 2,500 ms, echo time (TE) of 30 ms, and flip angle of 85°. The first 4 volumes of each run were discarded.
The Statistical Parametric Mapping (SPM) toolbox was used for preprocessing the fMRI data (toolbox version SPM12; University College London, UK; https://www.fil.ion.ucl.ac.uk/spm/). First, the functional images were aligned to the mean of each session, using a two-pass realignment procedure for motion correction, and then a slice-timing correction was applied 35 . After realignment, the mean functional image was co-registered to the anatomical image using the normalized mutual information as the cost function. The SPM12 standard segmentation procedure was adopted to obtain the masks for cerebrospinal fluid (CSF) and white matter (WM), which were used to extract the time courses of CSF and WM signals for each subject. All images were then normalized to the Montreal Neurological Institute and Hospital (MNI) stereotaxic space with a fourth-order B-spline interpolation, and smoothed with a Gaussian filter with 8 mm FWHM kernel.

ii. fMRI statistical maps
A two-stage approach based on a general linear model (GLM) was employed to analyze the functional images 36. In this approach, the first-level analysis was implemented using a block design with two regressors of interest, each modeled with a boxcar function convolved with the canonical hemodynamic response function (HRF). The two regressors of interest modeled the two stimulus conditions (Faces vs. Scrambled, on vs. off motion of the disk). The GLM also included a set of nuisance regressors modeling the six motion realignment parameters, the mean signals in CSF and WM, and a constant term. Finally, a high-pass filter (200 s cutoff) was applied to the functional image time series, which allowed removing noise at very low frequencies. The second-level group analysis was implemented on the previously obtained statistical maps and involved a voxel-wise t-test comparison across participants. The obtained volumetric statistical maps were mapped to the same surface used for source reconstruction in the native space, using Freesurfer's mri_vol2surf function (https://surfer.nmr.mgh.harvard.edu/fswiki/mri_vol2surf). A group estimate of the response pattern to the stimuli was obtained by morphing each surface map to the same space (subject "sub-01" native space). The average map was finally morphed back to the individual's native space and used to assess the performance of source reconstruction.
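The construction of such a task regressor (boxcar convolved with the canonical HRF) can be sketched as follows; the double-gamma parameters are the common defaults, and the block timings in the usage are illustrative:

```python
import numpy as np
from math import gamma as gamma_fn

def gamma_pdf(t, a, b):
    # gamma probability density, used as a building block for the HRF
    t = np.maximum(t, 1e-12)
    return (b ** a) * t ** (a - 1) * np.exp(-b * t) / gamma_fn(a)

def canonical_hrf(tr, duration=32.0):
    """Double-gamma HRF: positive peak near 5 s, undershoot near 15 s."""
    t = np.arange(0.0, duration, tr)
    return gamma_pdf(t, 6.0, 1.0) - gamma_pdf(t, 16.0, 1.0) / 6.0

def block_regressor(n_scans, tr, onsets, block_len):
    """Boxcar for the stimulation blocks, convolved with the canonical HRF."""
    boxcar = np.zeros(n_scans)
    for onset in onsets:
        boxcar[int(onset / tr):int((onset + block_len) / tr)] = 1.0
    return np.convolve(boxcar, canonical_hrf(tr))[:n_scans]
```

At the first level, one such regressor per condition forms the columns of the GLM design matrix, alongside the nuisance regressors.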

Code availability
CSET is published under the terms of the open source 3-Clause Berkeley Software Distribution (3-Clause BSD) license, which allows unlimited modification, redistribution and commercial use in source and binary forms, as long as the copyright notice is retained and the license's disclaimers of warranty are maintained. The source code for CSET is hosted at https://github.com/connectomicslab/eeg_compressed_sensing where all development and contributions are transparently discussed and managed. Each release will be made available to the Python Package Index (PyPI) and archived to Zenodo.

Limitations and Future work
While the type of regularization is an important aspect in reconstructing electromagnetic activity, other parts of the pipeline, such as the data preprocessing, the optimization scheme, and the forward model can be equally important in influencing the reconstruction. In this work, we have not studied these other parts of the pipeline in order to focus on the inverse problem. Future studies should assess whether the performance of the proposed inverse problems is systematically affected by any steps of the pre-processing and forward modelling.
In this work we have only tackled regularization in the spatial domain of the reconstruction. State-of-the-art approaches leverage the temporal smoothness of the data to enforce continuity in the reconstruction. In future approaches, we recommend combining regularization in the spatial and temporal domains.
Another important limitation of this work is that we have only used five wavelet scales in the CWET algorithm due to the computational cost of the wavelet transform. This has an important effect on the adaptability of the reconstruction to different types of activation maps. For future work, it would be interesting to expand this algorithm to a larger number of scales.