Data Note

High-resolution 7-Tesla fMRI data on the perception of musical genres – an extension to the studyforrest dataset

[version 1; peer review: 2 approved with reservations]
PUBLISHED 29 Jun 2015

Abstract

Here we present an extension to the studyforrest dataset – a versatile resource for studying the behavior of the human brain in situations of real-life complexity (http://studyforrest.org). This release adds more high-resolution, ultra high-field (7 Tesla) functional magnetic resonance imaging (fMRI) data from the same individuals. The twenty participants were repeatedly stimulated with a total of 25 music clips, with and without speech content, from five different genres using a slow event-related paradigm. The data release includes raw fMRI data, as well as precomputed structural alignments for within-subject and group analysis. In addition to fMRI, simultaneously recorded cardiac and respiratory traces, as well as the complete implementation of the stimulation paradigm, including stimuli, are provided. An initial quality control analysis reveals distinguishable patterns of response to individual genres throughout a large expanse of areas known to be involved in auditory and speech processing. The present data can be used to, for example, generate encoding models for music perception that can be validated against the previously released fMRI data from stimulation with the “Forrest Gump” audio-movie and its rich musical content. In order to facilitate replicative and derived works, only free and open-source software was utilized.

Keywords

functional magnetic resonance imaging, music perception, natural sounds, 7 Tesla, auditory features

Background

Previously, we have released a large, high-resolution, 7 Tesla fMRI dataset on the processing of natural auditory stimuli – a two-hour audio movie1. Recently, we have extended this initial release with a detailed annotation of the emotional content of the stimulus2 to broaden the range of research questions that could be addressed with these data. Here we further amend this dataset with additional high-resolution fMRI data from the same participants on the perception of musical genres. We employed a proven paradigm and stimuli that have been previously shown to enable investigation of distributed population codes of musical timbre in bilateral superior temporal cortices3.

The present data release enables comparative studies of the representation of musical genres (spectrum, timbre, vocal content) with ultra high-field, high-resolution fMRI data from a larger sample of participants. In conjunction with the previous data releases, it will also further expand the continuum of research questions that can be approached with the joint dataset. One example is the development of encoding models for cortical representations of music in complex auditory stimuli (the audio-movie contains several dozen musical excerpts from a broad range of genres). To this end, we include extracted audio features that represent the time-frequency information of each stimulus in four different views. The views are mapped to different perceptually-motivated scales (mel and decibel scales) and via a decorrelating linear transformation (DCT-II). It is hoped that providing these example features will catalyze discoveries of auditory stimulus codes in neural populations.

Lastly, these data can also serve as a public resource for benchmarking algorithms for functional alignment [e.g., 4], or other analyses, and thus, further the availability of resources for the investigation of real-life cognition5.

Materials and methods

Participants

Acquisition of the data described herein was part of a previously published study1, and took place in close temporal proximity (no more than a few weeks apart). The participants in this data release are identical to those previously reported. They were fully instructed about the nature of the study and were paid a total of 100 EUR for their participation, which included the previously reported data acquisitions, as well as the one described herein. All data acquisitions were jointly approved by the ethics committee of the Otto-von-Guericke-University of Magdeburg, Germany (approval reference 37/13).

Stimulus

All stimuli employed in this study are identical to those used in a previous study [for details refer to 3]. There were five natural, stereo, high-quality music stimuli (6 s duration; 44.1 kHz sampling rate) for each of five different musical genres: 1) Ambient, 2) Roots Country, 3) Heavy Metal, 4) 50s Rock’n’Roll, and 5) Symphonic (see Figure 1 for details).


Figure 1. Spectrograms for all 25 stimuli showing structural differences in the time-frequency characteristics of the five musical genres.

Each stimulus was a six second excerpt from the middle of a distinct musical piece. Excerpts were normalized so that their root-mean-square power values were equal, and a 50 ms quarter-sine ramp was applied at the start and end of each excerpt to suppress transients. Most prominent are the differences between music clips with and without vocal components.
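
For illustration, a minimal numpy sketch of the normalization and ramping steps described above (the released stimuli are already processed; the function name and target RMS value are our own, and a mono signal is assumed):

    import numpy as np

    def prepare_excerpt(x, sr=44100, target_rms=0.1, ramp_ms=50):
        # Sketch only: assumes a mono float array; the released stimuli are stereo
        # and already normalized and ramped.
        x = x * (target_rms / np.sqrt(np.mean(x ** 2)))   # equalize root-mean-square power
        n = int(sr * ramp_ms / 1000)                      # 50 ms ramp -> 2205 samples at 44.1 kHz
        ramp = np.sin(np.linspace(0.0, np.pi / 2, n))     # quarter-sine ramp from 0 to 1
        x[:n] *= ramp                                     # fade in to suppress onset transients
        x[-n:] *= ramp[::-1]                              # fade out
        return x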

Procedures and stimulation setup

The setup for audio-visual presentation was as previously reported1. Participants listened to the audio using custom-built in-ear headphones, and an LCD projector displayed visual instructions on a rear-projection screen that they saw via a mirror attached to the head coil.

At the start of each recording session, during the preparatory MR scans, participants listened to a series of longer excerpts of musical pieces and songs from the five different genres. During this phase participants were instructed to request adjustments of the stimulus volume in order to guarantee optimal perception of the stimuli against the noise pedestal emitted by the scanner. There was no overlap between the songs presented in this phase and those used as stimuli in the main experiment.

Eight scanning runs followed the initial sound calibration. Each run was started by the participant with a key-press ready signal. There were 25 trials, with five different stimuli (Figure 1) for each of the five genres per run (see Figure 2 for details on the experiment design). At the end of each run participants were given the opportunity for a break of variable length until they indicated readiness for the next run. Most participants started the next run within a minute.


Figure 2. Experiment design.

(A) Trial configuration. The start of each trial was synchronized with the MRI volume acquisition trigger. When the trigger was received, the permanently displayed white fixation cross turned green, a 6 s music stimulus was presented, and, immediately afterwards, the fixation cross turned white again. Stimulation was followed by a variable delay (minimum delay 4 s). For the five trials of a genre, a 4 s and an 8 s delay each occurred once, while the remaining three trials included a 6 s delay period. Thereby all trials had 4–8 s of uniform stimulation (no audio, white fixation cross) after each musical stimulus. The order of delays was randomized within a run. During trials with an 8 s delay, participants were presented with a yes/no question four seconds after the end of the music stimulus. The content of the question was randomized and asked about particular features of the stimulus that had just ended (e.g., “Was there a female singer?”, “Did the song have a happy melody?”). Participants had to indicate their response by pressing one of two buttons with the index or middle finger of their right hand, corresponding to the response alternative presented on the screen. “Yes” was always mapped to the left side (index finger), “No” always to the right side (middle finger). The question served to keep the participants attentive to the stimuli and to counteract the effect of increasing familiarity across multiple runs. (B) Run configuration. The 25 stimuli were identical across runs and presented exactly once per run. The order of stimulus genres within each run was counter-balanced using De Bruijn cycles6 (alphabet size = 5, counter-balancing level = 2); hence each genre was followed by every other genre equally often and exactly once. Eight unique genre order sequences were generated and used for all participants, while randomizing the order of run sequences across participants. This was done in order to enable the application of the hyperalignment algorithm4. Data acquisition for two participants showed anomalies with respect to this procedure (see Table 2 for details).
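
For illustration, a minimal sketch of how one level-2 counter-balanced genre sequence could be generated as a de Bruijn cycle; the make_design.py script included in the data release is the authoritative implementation, and the function below is just the textbook construction:

    def de_bruijn(k=5, n=2):
        # Standard recursive construction of a cyclic de Bruijn sequence B(k, n):
        # every length-n combination of k symbols occurs exactly once (counting wrap-around).
        a = [0] * (k * n)
        seq = []

        def db(t, p):
            if t > n:
                if n % p == 0:
                    seq.extend(a[1:p + 1])
            else:
                a[t] = a[t - p]
                db(t + 1, p)
                for j in range(a[t - p] + 1, k):
                    a[t] = j
                    db(t + 1, t)

        db(1, 1)
        return seq

    genres = ['ambient', 'country', 'metal', 'rocknroll', 'symphonic']
    run_order = [genres[i] for i in de_bruijn()]   # 25 genre labels, level-2 counter-balanced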

Stimulus presentation and response logging were implemented using PsychoPy7 running on a computer with the (Neuro)Debian operating system8.

Functional MRI data acquisition

The acquisition protocol for functional MRI was largely identical to the one previously reported1, hence only differences and key facts are listed here.

Importantly, the same landmark-based procedure for automatic slice positioning that was used to align the scanner field-of-view between acquisition sessions was used again to align the field-of-view of this acquisition with the one in the previous study1. As the exact same alignment target was used, this led to a very similar field-of-view configuration across acquisitions.

Each acquisition run consisted of 153 volumes (repetition time of 2.0 seconds with no inter-volume gaps).

Physiological recordings

The cardiac and respiratory traces were recorded for the full duration of all eight runs. The acquisition setup for physiological recordings was identical to the one previously reported1.

Dataset content

The released data comprises raw and pre-processed fMRI data, physiological recordings, behavioral log files, and auditory stimuli (total 95 GB). Table 1 provides an overview of the location of individual data components. The following sections briefly describe important properties.

Table 1. Data set layout.

File paths and descriptions for all available content.

File path – Description

/ – Meta data
    [model,scan,study,task]_key.txt – OpenFMRI study meta data
    models/model001/[condition_key,task_contrasts].txt
    acquisition_protocols/task002_fmri_session.pdf – Siemens 7T Magnetom acquisition protocol settings

stimulus/task002/ – Stimulus file and paradigm implementation
    [intro,runs].csv – Helper CSV tables for PsychoPy experiment
    mg_7T.psyexpi, make_design.py – PsychoPy (v1.82) experiment, sequence generation script
    stimuli/[ambient,country,metal,rocknroll,symphonic]_00[0-4].wav – Stimulus files (PCM WAVE, 16 bit, 44100 Hz stereo)
    make_audiofeature.py, features/* – Extracted audio features and script

SUBJECT/behav/task002_run00[1-8]/behavdata.txt – Behavioral log files (CSV format)
SUBJECT/model/model001/onsets/task002_run00[1-8]/cond00[1-5].txt – Stimulation timing specifications (FSL EV3 format)

SUBJECT/physio/task002_run00[1-8]/[physio.txt.gz,conversion.log] – Physiological data and conversion log file

SUBJECT/BOLD/task002_run00[1-8]/ – BOLD fMRI data
    bold[_dico].nii.gz – Raw and distortion-corrected BOLD fMRI
    moco_ref.nii.gz – Motion correction reference scan
    bold_dico_moco.txt – Motion estimates (3x translation in mm, 3x rotation in deg)
    *dicominfo.txt – DICOM meta data dump from MRIConvert
    bold_dico_bold7Tp1_to_subjbold7Tp1.nii.gz – BOLD fMRI aligned to per-subject template (6 DoF; FLIRT)
    qa/bold_dico_fgmask_bold7Tp1_to_subjbold7Tp1.nii.gz – Foreground voxel mask (across time series)
    qa/bold_dico_moest_bold7Tp1_to_subjbold7Tp1.txt – Motion estimates with respect to per-subject template (MCFLIRT)

SUBJECT/templates/bold7Tp1/ – 7T BOLD fMRI per-subject template (all phase 1 data)
    [head,brain,brain_mask].nii.gz – Average head and skull-stripped image, plus mask
    in_grpbold7Tp1/[head,brain_mask].nii.gz – Per-subject template in group template space
    in_grpbold7Tp1/[subj2tmpl,tmpl2subj]_warp.nii.gz – FNIRT warps from/to group BOLD template space
    qa/fgmasks_bold7Tp1_to_subjbold7Tp1.nii.gz – 4D image with foreground masks across all 7T BOLD fMRI acq.
    qa/jointfgmask_bold7Tp1_to_subjbold7Tp1.nii.gz – Intersection of above image across volumes
    qa/jointfgbrainmask_bold7Tp1_to_subjbold7Tp1.nii.gz – Intersection of above image with template brain mask
    qa/lvl2/[head,head_avgstats,aligned_head_samples].nii.gz – Last iteration template image, overlap stats, and aligned samples prior to cropping

templates/grpbold7Tp1/ – 7T BOLD fMRI group template (all phase 1 data)
    [head,brain].nii.gz – Average head and skull-stripped image
    xfm/[mni2tmpl,tmpl2mni]_12dof.mat – Affine transformation to/from MNI152 (FLIRT)
    in_mni/brain_12dof.nii.gz – Template transformed and re-sliced into MNI152 (1mm)
    from_mni/MNI152_T1_1mm.nii.gz – MNI152 template from FSL in group template space
    from_mni/avg152T1_[brain,csf,gray,white].nii.gz
    from_mni/MNI152_T1_1mm_brain_mask[_dil].nii.gz
    qa/lvl4/[head,brain].nii.gz – Template generation overlap stats, aligned samples
    qa/lvl4/[head_avgstats,aligned_head_samples].nii.gz
    qa/subjbold7Tp1_to_grpbold7Tp1/brain_mask_[intersection,stats].nii.gz – Per-subject template alignment quality control files
    qa/subjbold7Tp1_to_grpbold7Tp1/aligned_brain_samples.nii.gz

Table 2. Overview of known data anomalies (F: functional data, P: physiological recordings during fMRI session).

Modality  Participant  Run   Description
P         1, 2         1–8   sampling rate is 100 Hz
F         2            2     significant movement (translation) during scan
F         5            3, 7  experiment had to be restarted after the second run; due to technical limitations, sequences for the first two runs were repeated as runs 4 and 7 (see Figure 2B)
F         5            4–7   significant movement (rotation) during scan
F         8            6–8   significant movement (rotation) during scan
F         10           5–7   significant movement during scan
F         11           4–8   significant movement (translation) during scan
F         13           7–8   significant movement (translation) during scan
P         18           6     accidental data acquisition stop during the run
F         19           3–4   significant movement (translation) during scan
F         20           2     significant movement (translation) during scan
F & P     20           5–8   no data; participant aborted experiment after four runs

Behavioral log files

Log files are available as plain text files with comma-separated value markup. All enumerations are zero-based. Each line represents a trial. Columns for the following information are present: order of run in sequence (run), ID of trial sequence for this run (run_id; see Figure 2B), fMRI volume corresponding to stimulation start (total: volume, in the current run: run_volume), stimulus file name (stim), music genre label (genre), inter-stimulus interval in seconds (delay), flag indicating whether a control question was presented (catch), measured asynchrony between MRI trigger and sound onset in seconds (sound_soa), and time stamp of the corresponding MRI trigger with respect to the start of the experiment in seconds (trigger_ts).
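
As an illustration, the per-run log can be read with standard CSV tooling; a minimal pandas sketch (the subject directory name sub001 is an assumption, column names are those listed above, and the catch flag is assumed to be coded as 0/1):

    import pandas as pd

    log = pd.read_csv('sub001/behav/task002_run001/behavdata.txt')
    # stimulation onset in seconds within the run, assuming the 2 s repetition time
    onset_s = log['run_volume'] * 2.0
    # number of trials with a catch question, broken down by genre
    catch_per_genre = log[log['catch'] == 1].groupby('genre').size()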

Information on the stimulus timing is also available in per-subject, per-run, per-condition plain-text files in FSL’s EV3 format: one line per stimulation event and three columns, listing stimulus onset and duration (both in seconds relative to the start of a scan), followed by an arbitrary intensity weight that is always set to 1.

fMRI data

All functional MRI data were converted from the DICOM format into the NIfTI format for publication using the same procedure as in 1.

fMRI data are available in three different flavours, each stored in an individual 4D image for each run separately. Raw BOLD data are stored in bold.nii.gz. While raw BOLD data are suitable for further analysis, they suffer from severe geometric distortions. BOLD data that have been distortion-corrected9 at the scanner console are provided in bold_dico.nii.gz. In addition, distortion-corrected data that have been anatomically aligned to a per-subject BOLD template image are available: bold_bold7Tp1_to_subjbold7Tp1.nii.gz.
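
For example, a single run of the distortion-corrected data could be loaded with nibabel (a sketch; the subject directory name sub001 is an assumption):

    import nibabel as nib

    img = nib.load('sub001/BOLD/task002_run001/bold_dico.nii.gz')
    bold = img.get_fdata()                          # 4D array: x, y, z, time (153 volumes)
    print(bold.shape, img.header.get_zooms())       # voxel size in mm and, if stored, the 2.0 s TR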

Participant motion estimates

Head movement correction was performed with respect to a dedicated reference scan acquired at the start of the recording session, within the scanner’s online reconstruction, as part of the distortion correction procedure. The associated motion estimates are provided in a whitespace-delimited, 6-column text file (translation X, Y, Z in mm; rotation around X, Y, Z in deg) with one row per fMRI volume for each run separately.
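
A minimal sketch of summarizing these estimates with numpy (file layout as in Table 1; the subject directory name is again an assumption):

    import numpy as np

    moco = np.loadtxt('sub001/BOLD/task002_run001/bold_dico_moco.txt')
    translation, rotation = moco[:, :3], moco[:, 3:]        # mm and deg, one row per volume
    print('max |translation| (mm):', np.abs(translation).max())
    print('max |rotation| (deg):', np.abs(rotation).max())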

Physiological recordings

Physiological data were truncated to start with the first MRI trigger pulse and to end one volume acquisition duration after the last trigger pulse. Data are provided in a four-column (MRI trigger, respiratory trace, cardiac trace and oxygen saturation), space-delimited text file for each run. A log file of the automated conversion procedure is provided in the same directory (conversion.log). Sampling rate for the majority of all participants is 200 Hz (see Table 2 for exceptions).
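
The trigger channel can, for example, be used to locate volume onsets; a minimal numpy sketch (the detection threshold and the subject directory name are assumptions):

    import numpy as np

    physio = np.loadtxt('sub001/physio/task002_run001/physio.txt.gz')   # numpy reads .gz directly
    trigger, resp, cardiac, oxygen = physio.T                           # column order as described above
    high = (trigger > 0.5 * trigger.max()).astype(int)
    vol_onsets = np.flatnonzero(np.diff(high) == 1)                     # sample indices of trigger rising edges
    sampling_rate = 200.0                                               # Hz for most participants (see Table 2)
    print(np.diff(vol_onsets) / sampling_rate)                          # should be close to the 2.0 s TR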

Audio features

Recent experiments have shown that audio features can be predicted via regression models from fMRI signals to test stimulus coding hypotheses3,10. To facilitate this activity with the current data we extracted four audio features from down-mixed mono stimuli. Feature extraction used a front-end windowed short-time Fourier transform, with window size 16384 samples (371.52 ms) and hop size 4410 samples (100 ms) yielding 63 overlapping feature vectors per stimulus file. Window parameters were chosen to trade temporal for spectral acuity, yielding frequency samples spaced linearly at 2.69 Hz intervals from 0–22.05 kHz. The four features extracted from this representation are described below.

Mel-Frequency Spectrum (mfs) – 48 dimensions. Motivated by human auditory perception, the mel scale organizes frequency by equidistant pitch locations as determined by psychophysical experiments. We used the essentia open source audio processing library11 to extract the mel-frequency spectrum, which yielded energy in mel bands by applying a frequency-domain filterbank12 to the short-time Fourier spectrum. Frequency-domain filtering consisted of applying equal-area overlapping triangular filters to the Fourier spectrum, spaced according to the mel scale and normalized such that the sum of coefficients for every filter equals one.
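
As an illustration of this feature, a roughly equivalent mel spectrum can be computed with librosa; note that the original extraction used essentia (see make_audiofeature.py in the data release) and the exact filterbank normalization may differ:

    import librosa

    y, sr = librosa.load('stimulus/task002/stimuli/ambient_000.wav', sr=44100, mono=True)
    mfs = librosa.feature.melspectrogram(y=y, sr=sr, n_fft=16384, hop_length=4410, n_mels=48)
    mfs = mfs.T    # roughly 63 frames of 48 mel-band energies per 6 s stimulus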

Mel-Frequency Cepstral Coefficients (mfcc) – 48 dimensions. Cepstral features have been widely reported to perform well in speech recognition and music classification systems13, where the task is required to be sensitive to timbre. Typically, only the lower 10–20 cepstral coefficients (low quefrency) are retained; these encode the shape of the broad spectral envelope – an acoustic correlate of timbre. However, when sensitivity to timbre is not required, utilizing the upper coefficients (high quefrency), that encode fine spectral structure such as pitch, makes the feature robust to timbral changes14. We extracted the full set of 48 cepstral coefficients from the mel-frequency spectrum, by mapping the mel spectrum to a decibel amplitude scale and multiplying by the discrete cosine transform (DCT-II) matrix. It is expected that any application would first remove the constant first column and retain either the subsequent 13–20 coefficients or the remaining upper coefficients after those, depending on whether sensitivity or robustness to timbral difference is required. The remaining two features yield such a separation into low and high quefrency spectral components.

Low-Quefrency and High-Quefrency Mel-Frequency Spectrum (lq_mfs, hq_mfs). Although proven to be useful in machine classification tasks, cepstral coefficients are in a different domain than the spectrum. The last two features map selected cepstral coefficients back to the spectrum domain by reconstructing the 48 mel-frequency spectrum bands using the low-quefrency and high-quefrency mfcc coefficients respectively. In each case, the non-selected coefficients were zeroed and the resulting feature mapped back to the spectral domain using the inverse (transposed) DCT-II matrix and then inverting the decibel amplitude scale. These two sets of features represent broad-spectrum information (timbre) and fine-scale spectral structure (pitch) respectively. The product of these two spectra yields the mel-frequency spectrum.
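
Continuing the illustrative sketch above, the cepstral transform and the low-/high-quefrency reconstructions could look as follows (the decibel convention and the quefrency cut-off of 20 coefficients are assumptions chosen for illustration):

    import numpy as np
    from scipy.fftpack import dct, idct

    mfs_db = 10 * np.log10(np.maximum(mfs, 1e-12))      # mel spectrum on a decibel scale
    mfcc = dct(mfs_db, type=2, norm='ortho', axis=1)    # 48 cepstral coefficients per frame

    def quefrency_band(coeffs, keep):
        # Zero all but the selected cepstral coefficients and map back to the mel-spectrum domain.
        kept = np.zeros_like(coeffs)
        kept[:, keep] = coeffs[:, keep]
        return 10 ** (idct(kept, type=2, norm='ortho', axis=1) / 10)

    lq_mfs = quefrency_band(mfcc, slice(0, 20))     # broad spectral envelope (timbre)
    hq_mfs = quefrency_band(mfcc, slice(20, None))  # fine spectral structure (pitch)
    # lq_mfs * hq_mfs reproduces mfs, since the two coefficient sets partition the cepstrum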

Source code

The source code for descriptive statistics in Figure 1 and Figure 3, as well as the implementation for the analysis presented in Figure 4 is available in a Git repository at https://github.com/psychoinformatics-de/paper-f1000_pandora_data. Source code for the implementation of the stimulation paradigm and audio feature extraction are included in the data release. Additional scripts for data conversion and quality control are available at: https://github.com/hanke/gumpdata.


Figure 3. Summary statistics for head movement estimates across runs and participants.

These estimates indicate relative motion with respect to a dedicated reference scan at the beginning of each scan session. The area shaded in light gray depicts the range across participants, the medium gray area indicates the central 50% of participants around the mean, and the dark gray area shows ± one standard error of the mean. The black line indicates the median estimate. Dashed vertical lines indicate run boundaries where participants had a brief break. The red lines indicate the motion estimate time series of outlier participants. An outlier was defined as a participant whose motion estimate exceeded a distance of two standard deviations from the mean across participants for at least one fMRI volume in a run. For a breakdown of detected outliers see Table 2.


Figure 4. Localization of genre-discriminating signals in the brain.

(A) Voxel-wise genre-selectivity labels. A random-effects GLM group analysis (n=20) was computed using the FEAT component of FSL16. Individual contrasts were evaluated for each genre to identify voxels showing a BOLD response to that particular genre that is larger than the average response to all other genres. For all voxel clusters showing a significant difference at the group level (cluster forming threshold Z=3.1, cluster probability threshold p<0.05) for any genre, the selectivity label was determined by the maximum Z statistic across all genres. No significant selective activation was found for the ambient genre. The majority of all voxels were labeled selective for one of the musical genres whose stimuli contained vocals (country, rock’n’roll, heavy metal). Only a small cluster in BA44 R (Broca’s area) was labeled selective for symphonic music, despite the lack of speech content in these stimuli. (B) For comparison, the location of voxel clusters with above-chance classification accuracy for predicting the genre of a music stimulus (colors only indicate individual clusters, not association with particular genres). The associated areas largely overlap with the results of the GLM analysis. However, genre-discriminating signals were identified in a number of additional areas. For details on the MVP analysis and cluster statistics see Table 3. Unthresholded maps for GLM and MVP analyses are available at NeuroVault.org17 collection 308.

Table 3. Average group results of a searchlight-based (radius 2.5 mm) cross-validated within-subject musical genre classification analysis (n=20; SVM classifier; C parameter scaled according to the norm of the data).

The table lists statistics (size, mean/max/std accuracy) as well as localization information (coordinates in mm MNI152) for clusters with above-chance classification performance in the group (cluster-level probability p<0.05; FWE-corrected). Clusters are depicted in Figure 4B. Statistical evaluation was implemented using a bootstrapped permutation analysis, as described by Stelzer and colleagues18 and implemented in PyMVPA19, using 50 permutation searchlight accuracy maps per subject, 10,000 bootstrap samples, and a voxel-wise cluster forming threshold of p<0.001. Apart from two large clusters covering the majority of the bilateral areas for auditory perception and speech processing, additional clusters with genre-discriminating signals were identified. These include the bilateral medial geniculate bodies, as well as smaller regions on the ventral visual pathway, the frontal orbital cortex, and the cerebellum. For these regions the NeuroSynth database20 reports high posterior probabilities for the topics: counting, motor, naming, phonology, prosody, visual, and vocal (as determined with the Neurosynth term atlas shipped with NeuroDebian8).

#   voxels  max    max location (MNI)       mean   std     center of mass (MNI)     p corr.  structure
                   X       Y       Z                       X       Y       Z
1   36099   0.53   -58.0   -3.9    -0.1     0.31   0.071   -52.1   -20.2   4.7      0.0006   L sup. temporal, L Broca’s area, L front. operculum
2   34451   0.52   59.5    0.0     -5.4     0.31   0.071   53.9    -17.8   1.7      0.0006   R sup. temporal, R Broca’s area, R front. operculum
3   320     0.26   -26.5   32.5    -14.5    0.24   0.008   -29.9   32.3    -15.7    0.0142   L front. orbital
4   259     0.25   -24.8   -66.3   -19.4    0.24   0.007   -22.7   -65.5   -20.6    0.0142   L cerebellum
5   240     0.25   -40.7   -45.0   -16.8    0.24   0.005   -35.3   -51.7   -16.6    0.0142   L temporal occ. fusiform
6   227     0.27   28.0    -63.2   -22.7    0.24   0.007   25.3    -64.2   -18.1    0.0142   R cerebellum, R temp. occ. fusiform
7   227     0.26   33.5    -86.9   -3.1     0.24   0.005   33.6    -88.5   -0.3     0.0142   R lat. occipital
8   215     0.28   14.4    -30.3   -4.2     0.25   0.011   14.8    -29.9   -6.4     0.0145   R medial geniculate body
9   200     0.28   -13.2   -31.0   -7.8     0.24   0.011   -14.1   -30.5   -6.6     0.0159   L medial geniculate body
10  178     0.26   -50.0   -63.1   -19.4    0.24   0.011   -51.0   -64.4   -18.3    0.0194   L temporooccipital
11  152     0.25   31.1    -79.1   9.3      0.24   0.004   32.6    -79.0   9.6      0.0268   R lat. occipital
12  144     0.27   28.8    28.7    -14.8    0.25   0.007   27.4    27.7    -14.5    0.0280   R front. orbital
13  124     0.25   24.8    -1.2    4.0      0.24   0.003   23.2    0.4     3.2      0.0387   R putamen
14  111     0.25   -25.5   -89.8   -15.2    0.24   0.006   -26.2   -90.0   -13.2    0.0477   L V4
15  107     0.25   7.1     -83.4   3.3      0.24   0.005   6.4     -86.1   1.8      0.0488   R V1

Dataset validation

In order to assess data quality, we investigated whether different BOLD response patterns associated with the five musical genres could be discriminated, using either univariate statistical parametric mapping or multivariate pattern (MVP) classification (searchlight-based analysis, radius of two voxels, sparse spatial sampling with sphere centers spaced by two voxels, leave-one-run-out cross-validated classification with a support vector machine; the accuracy mapped onto a voxel is the average across all sphere analyses in which that voxel participated). Inspection of the participant motion estimates revealed a median translation of less than one voxel size, and a maximum rotation of about 1 deg (see Figure 3 for outliers).

Despite the variable magnitude of motion, no participant was excluded from the subsequent analysis.

The results of the univariate analysis (Figure 4A) and the MVP analysis (Figure 4B and Table 3) identify largely congruent areas. The MVP analysis generally detects larger and more numerous areas, either due to higher sensitivity or a comparably more liberal statistical threshold. Notably, clusters of above-chance classification accuracy not only contain the auditory cortex and other cortical fields related to speech and music processing, but also the subcortical bilateral medial geniculate bodies, a neural relay station immediately prior to the primary auditory cortex in the auditory pathway15.
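
For orientation, a minimal PyMVPA sketch of this kind of cross-validated searchlight classification is shown below; it is not the released analysis code (see Source code), and ds is assumed to be a PyMVPA dataset with genre targets and run chunks:

    import numpy as np
    from mvpa2.suite import LinearCSVMC, NFoldPartitioner, CrossValidation, sphere_searchlight

    # ds: PyMVPA Dataset with one sample per trial, ds.sa.targets = genre, ds.sa.chunks = run
    clf = LinearCSVMC(C=-1.0)                     # in PyMVPA a negative C is scaled by the norm of the data
    cv = CrossValidation(clf, NFoldPartitioner(attr='chunks'),
                         errorfx=lambda p, t: np.mean(p == t))   # report accuracy rather than error
    sl = sphere_searchlight(cv, radius=2)         # sphere radius in voxels
    accuracy_map = sl(ds)                         # one mean cross-validated accuracy per sphere center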

Given the confirmed wide-spread availability of genre-discriminating signal we conclude that these data are suitable for studying the representation of music and auditory features. Table 2 contains a list of all known data anomalies that may help potential data consumers to select appropriate subsets of this dataset.

Usage notes

These data are part of a larger public dataset available at http://www.studyforrest.org. The website includes information on all available resources, data access options, publications that employ this dataset, as well as source code for data conversion and data processing.

All data are made available under the terms of the Public Domain Dedication and License (PDDL; http://opendatacommons.org/licenses/pddl/1.0/). All source code is released under the terms of the MIT license (http://www.opensource.org/licenses/MIT). In short, this means that anybody is free to download and use this dataset for any purpose as well as to produce and re-share derived data artifacts. While not legally required, we hope that all users of the data will acknowledge the original authors by citing this publication and follow good scientific practice as laid out in the ODC Attribution/Share-Alike Community Norms (http://opendatacommons.org/norms/odc-by-sa/).

Data availability

OpenFMRI.org: High-resolution 7-Tesla fMRI data on the perception of musical genres: ds000113b21

ZENODO: Article sources for 7-Tesla fMRI data on the perception of musical genres, doi: 10.5281/zenodo.1876722

ZENODO: “Forrest Gump” data release source code, doi: 10.5281/zenodo.1877023

Consent

Written informed consent for publication of acquired data in a de-identified form was obtained from all participants.
