MEG and EEG data fusion: simultaneous localisation of face-evoked responses

Neuroimage. 2009 Aug 15;47(2):581-9. doi: 10.1016/j.neuroimage.2009.04.063. Epub 2009 May 3.

Abstract

We present an empirical Bayesian scheme for distributed multimodal inversion of electromagnetic forward models of EEG and MEG signals. We used a generative model with common source activity and separate error components for each modality. Under this scheme, the error weightings for each modality, relative to the source components, are estimated automatically from the data by optimising the model evidence. This obviates the need for arbitrary user-defined weightings. To evaluate the scheme, we acquired three types of data simultaneously from twelve participants: total magnetic flux (as recorded by 102 magnetometers), orthogonal in-plane gradients of the magnetic field (as recorded by 204 planar gradiometers) and voltage differences in the electrical potential (as recorded by 70 electrodes). We assessed the relative precision of each sensor type in terms of signal-to-noise ratio (SNR), using empirical sample variances and optimised estimators from the generative model. We then compared the localisation of face-evoked responses, using each modality separately, with that obtained by their "fusion" under the common generative model. Finally, we quantified the conditional precisions of the source estimates using their posterior covariance, confirming that EEG can improve MEG-based source reconstructions.
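The fusion scheme described in the abstract can be sketched in miniature. Everything below is an illustrative assumption, not the paper's implementation: the dimensions are toy-sized, the lead fields are random, and a crude coordinate search stands in for the ReML/EM evidence optimisation used in empirical Bayesian inversion. The structure, however, mirrors the abstract: a common source vector drives both modalities, each modality has its own error component whose weighting is tuned by maximising the model evidence, and the resulting MAP source estimate comes with a posterior covariance that quantifies conditional precision.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (illustrative only, far smaller than real MEG/EEG arrays).
n_meg, n_eeg, n_src = 30, 20, 15

# Hypothetical lead fields mapping the common sources to each sensor type.
L_meg = rng.standard_normal((n_meg, n_src))
L_eeg = rng.standard_normal((n_eeg, n_src))

# One common source vector drives both modalities; each modality gets its
# own additive error component with its own (unknown) variance.
s_true = rng.standard_normal(n_src)
y = np.concatenate([L_meg @ s_true + 0.5 * rng.standard_normal(n_meg),
                    L_eeg @ s_true + 1.0 * rng.standard_normal(n_eeg)])
L = np.vstack([L_meg, L_eeg])

# Masks selecting each modality's sensors in the stacked data vector.
masks = [np.arange(len(y)) < n_meg, np.arange(len(y)) >= n_meg]

def neg_log_evidence(h, alpha=1.0):
    """-log p(y | h) for y = L s + e, with s ~ N(0, I/alpha) and
    e ~ N(0, sum_i exp(h_i) Q_i), where Q_i = diag(mask_i)."""
    C = (L @ L.T) / alpha
    for hi, m in zip(h, masks):
        C += np.exp(hi) * np.diag(m.astype(float))
    _, logdet = np.linalg.slogdet(C)
    return 0.5 * (logdet + y @ np.linalg.solve(C, y))

# Crude coordinate search on the log error variances, standing in for the
# evidence optimisation (ReML/EM) of the empirical Bayesian scheme.
h = np.zeros(2)
for _ in range(60):
    for i in range(2):
        cands = h[i] + np.array([-0.1, 0.0, 0.1])
        scores = []
        for c in cands:
            trial = h.copy()
            trial[i] = c
            scores.append(neg_log_evidence(trial))
        h[i] = cands[int(np.argmin(scores))]

# MAP source estimate and its posterior covariance under the optimised
# hyperparameters; the posterior covariance quantifies conditional precision.
Se_inv = np.diag(sum(np.exp(-hi) * m.astype(float) for hi, m in zip(h, masks)))
post_prec = np.eye(n_src) + L.T @ Se_inv @ L   # prior precision alpha = 1
post_cov = np.linalg.inv(post_prec)
s_hat = post_cov @ L.T @ Se_inv @ y
```

Because both modalities constrain the same sources, adding the second sensor type increases the posterior precision (shrinks `post_cov`) relative to either modality alone, which is the sense in which EEG can improve MEG-based reconstructions.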

Publication types

  • Research Support, Non-U.S. Gov't

MeSH terms

  • Adult
  • Algorithms*
  • Brain Mapping / methods*
  • Electroencephalography / methods*
  • Evoked Potentials, Visual / physiology*
  • Face*
  • Female
  • Humans
  • Magnetoencephalography / methods*
  • Male
  • Pattern Recognition, Visual / physiology*
  • Subtraction Technique*
  • Visual Cortex / physiology*
  • Young Adult