bioRxiv

No evidence for effect of reward motivation on coding of behaviorally relevant category distinctions across the frontoparietal cortex

Sneha Shashidhara, Yaara Erez
doi: https://doi.org/10.1101/609537
1MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, Cambridge CB2 7EF, UK
For correspondence: sneha.shashidhara@mrc-cbu.cam.ac.uk yaara.erez@mrc-cbu.cam.ac.uk

Abstract

Selection and integration of information based on current goals is a fundamental aspect of flexible goal-directed behavior. Reward motivation has been shown to improve behavioral performance across multiple cognitive tasks, yet the underlying neural mechanisms that link motivation and control processes, and in particular its effect on context-dependent information processing, remain unclear. Here, we used functional magnetic resonance imaging (fMRI) in 24 human volunteers to test whether reward motivation enhances the coding of behaviorally relevant category distinctions across the frontoparietal cortex, as would be predicted based on previous experimental evidence and theoretical accounts. In a cued target detection categorization task, participants detected whether an object from a cued visual category was present in a subsequent display. The combination of the cue and the visual category of the object determined the behavioral status of the presented objects. To manipulate reward motivation, half of all trials offered the possibility of a substantial reward. We observed an increase with reward in overall activity across the frontoparietal control network when the cue was presented. Multivariate pattern analysis (MVPA) further showed that behavioral status information for the presented objects was conveyed across the network. However, in contrast to our prediction, reward did not increase the discrimination between behavioral status conditions in the stimulus epoch of a trial, when object information was processed depending on a current context. In the high-level general object visual region, the lateral occipital complex, the representation of behavioral status was driven by visual differences and was not modulated by reward. Our study provides useful evidence for the limited effects of reward motivation on task-related neural representations.

Introduction

A fundamental aspect of flexible goal-directed behavior is the selection and integration of information depending on a current goal to determine its relevance to behavior and lead to a decision. In non-human primates, single-cell data from the lateral prefrontal cortex, as well as the parietal cortex, provide detailed evidence for the coding of task-relevant information. It has been shown that neural activity contains information about the context, also referred to as cue or task-set, as well as the integrated information of a cue and a subsequent input stimulus, such as task-related categorical and behavioral decisions [1–6]. In the human brain, a network of frontal and parietal cortical regions, the ‘multiple-demand’ (MD) network [7,8], has been shown to be involved in information selection and integration, and more generally in control processes. This network is associated with multiple aspects of cognitive control, such as spatial and verbal working memory, math, conflict monitoring, rule-guided categorization and task switching [7,9,10]. The MD network spans the anterior-posterior axis of the middle frontal gyrus (MFG); the posterior dorso-lateral frontal cortex (pdLFC); the anterior insula and frontal operculum (AI/FO); the pre-supplementary motor area and the adjacent dorsal anterior cingulate cortex (preSMA/ACC); and the intraparietal sulcus (IPS) [11]. Multiple neuroimaging studies have demonstrated that distributed patterns of activity across the MD network, measured by functional magnetic resonance imaging (fMRI), reflect a variety of task-related information, including task sets, behavioral relevance and task-dependent categorical decisions [12–19]. In contrast, sensory areas such as the high-level general object visual region, the lateral occipital complex (LOC), as well as the primary visual cortex, contain information about the visual properties and categorization of stimuli, with weaker, or non-existent, task effects [20–22].

With growing interest in recent years in the link between cognitive control and reward motivation, it has been proposed that reward enhances control processes by sharpening representation of task goals and prioritizing task-relevant information across the frontoparietal network and other regions associated with cognitive control [23–26]. In line with this idea, it has been shown that motivation, usually manipulated as monetary reward, increases task performance [27,28]. Neuroimaging studies linked increased activity with reward in frontoparietal regions across a range of tasks, including working memory [29,30], selective attention [31,32], response inhibition [28], and problem solving [33].

Although the accumulating evidence at the behavioral and neural level in humans is consistent with this sharpening and prioritizing account [25,26,34–37], it does not directly address the effect of reward motivation on the coding of task-related information and on selection and integration processes. Some support for this idea comes from single-neuron data recorded from the prefrontal cortex of non-human primates: reward was associated with greater spatial selectivity, enhanced activity related to working memory, and modulation of task-related activity based on the type of reward [38–40]. More direct evidence in humans was recently provided by Etzel et al. (2016), who showed that reward enhances the coding of task cues across the frontoparietal cortex and suggested that task-set efficacy increases with reward. It remained unclear, however, whether a similar facilitative effect of reward across the frontoparietal network is limited to preparatory cues, or whether reward also enhances the coding of behaviorally relevant information when the cue and a subsequent stimulus are integrated, leading to the behavioral decision. In a recent electroencephalogram (EEG) study, Hall-McMaster et al. (2019) showed that reward increases the coding of task cues, similarly to what was observed with fMRI by Etzel et al., and that this effect was limited to trials that required a switch in context [41]. They also provided some evidence that the representation of features relevant for a given task is enhanced when the reward level is high. While these results demonstrate the temporal dynamics of reward effects, whether these effects are specific to the frontoparietal control network is not clear, given the limited spatial resolution of EEG.

In this study, we build on previous reports and ask whether reward motivation enhances the representation of behaviorally relevant information across the frontoparietal network, as determined by the integration of cue and stimulus input. Furthermore, previous studies have associated reward with decreased conflict in interference tasks [28,42,43], suggesting that any effect of reward may be particularly important for high-conflict items, in other words, a conflict-contingent effect. We therefore also asked whether such a facilitative effect of reward is selective for highly conflicting items. Lastly, it is commonly accepted that top-down signals from the frontoparietal MD network to the visual cortex play an important role in the processing of task-related information. Therefore, we tested whether similar effects of reward would be observed in the high-level general object visual region, the lateral occipital complex (LOC).

We recently showed that behaviorally relevant, but not irrelevant, category distinctions of objects were coded across the MD network [12]. In contrast, such differences were not observed in the LOC. Here, we used a similar cued target detection categorization task while participants’ brain activity was measured using fMRI. Participants detected whether an object from a cued visual category (target category) was present or absent. On each trial, one of two categories was cued, and objects from those two categories could be either Targets, or nontargets with high behavioral conflict, as they could be targets on other trials (High-conflict nontarget). An additional category was never cued, serving as a nontarget with low behavioral conflict (Low-conflict nontarget). This design created three levels of behavioral status (Targets, High-conflict nontargets, Low-conflict nontargets). Critically, following this integration process, the relevant information that is expected to be represented across the MD network is the behavioral status of a given category, rather than the visual category itself [12]. Therefore, the behaviorally relevant category distinctions were pairs of categories with different behavioral status. We used multivariate pattern analysis (MVPA) to measure representation of the behaviorally relevant category distinctions as reflected in distributed patterns of response in the a priori defined MD network and LOC. To manipulate motivation, on half of all trials a substantial monetary reward was offered. We tested whether the neural pattern discriminability between the behaviorally relevant category distinctions increased with reward, and whether this effect was selective for the distinction between Targets and High-conflict nontargets.

Materials and Methods

Participants

Twenty-four participants (13 female), aged 18–40 years (mean age: 25), took part in the study. Four additional participants were excluded due to large head movements during the scan (greater than 5 mm). The sample size was determined prior to data collection, as is typical for neuroimaging studies and in accordance with the counter-balancing requirements of the experimental design across participants. A similar sample size showed sufficient power to detect representation of behavioral status in a previous study [12]. All participants were right-handed with normal or corrected-to-normal vision and had no history of neurological or psychiatric illness. The study was conducted with the approval of the Cambridge Psychology Research Ethics Committee. All participants gave written informed consent and were monetarily reimbursed for their time.

Task Design

Participants performed a cued categorization task in the MRI scanner (Figure 1A). Our primary question concerned the representation during the stimulus epoch of a trial where cue and stimulus are integrated, and we therefore designed the task accordingly. At the beginning of each trial, one of three visual categories (sofas, shoes, cars) was cued, determining the target category for that trial. Participants had to indicate whether the subsequent object matched this category or not by pressing a button. For each participant, only two of the categories were cued as targets throughout the experiment. Depending on the cue on a given trial, objects from these categories could be either Targets, or nontargets with high conflict (as they could serve as targets on other trials). The third category was never cued, therefore objects from this category served as Low-conflict nontargets. This design yielded three behavioral status conditions: Targets, High-conflict nontargets and Low-conflict nontargets (Figure 1B). The assignment of the categories to be cued (and therefore serve as either Targets or High-conflict nontargets) or not (and serve as Low-conflict nontargets) was counter-balanced across participants.

Figure 1: Experimental paradigm.

A. An example of a trial. A trial began with a cue (1 s) indicating the target category, followed by a 500 ms fixation period. Reward trials were cued with three red £ symbols next to the target category. After an additional variable time (0.4, 0.7, 1.0 or 1.3 s), an object was presented for 120 ms. The object was then masked (with a scramble of all the stimuli used), until response or for a maximum of 3 s. The participants pressed a button to indicate whether the object was from the cued category (Target trials) or not (Nontarget trials). B. Experimental conditions. For each participant, two categories served as potential targets depending on the cue, and a third category never served as a target. In this example, shoes and sofas are the cued categories and cars the uncued category. In the Target trials, the presented object matched the cued category. In the High-conflict nontarget trials, the object did not match the cued category, but was from the other cued category, and therefore could serve as a target on other trials. In the Low-conflict nontarget trials, the presented object was from the category that was never cued. Overall, this design yielded three levels of behavioral status: Targets, High-conflict nontargets, and Low-conflict nontargets. The design was used for both no-reward and reward conditions.

To manipulate motivation, half of the trials were cued as reward trials, in which participants had the chance of earning £1 if they completed the trial correctly and within a time limit. To ensure the incentive on each reward trial, four random reward trials out of 40 in each run were assigned the £1 reward. To avoid longer reaction times when participants tried to maximize their reward, a response time threshold was used for reward trials, set separately for each participant as the average of 32 trials in a pre-scan session. The participants were told that the maximum reward they could earn was £24 in the entire session (£4 per run), and were not told what the time threshold was. Therefore, to maximize their gain, participants had to treat every reward trial as a £1 trial and respond as quickly and as accurately as possible, just as in no-reward trials.

Each trial started with a 1 s cue, which was the name of the visual category that served as the target category for this trial. On reward trials, the cue included three red pound signs presented next to the category name. The cue was followed by a fixation dot in the center of the screen, presented for 0.5 s plus an additional variable time of 0.1, 0.4, 0.7 or 1 s, selected randomly, in order to make the stimulus onset time less predictable. The stimulus was then presented for 120 ms and was followed by a mask. Participants indicated by a button press whether this object belonged to the cued target category (present) or not (absent). Following the response, a 1 s blank inter-trial interval separated consecutive trials. For both reward and no-reward trials, response time was limited to a maximum of 3 s, after which the 1 s blank inter-trial interval started even if no response was made. For reward trials, an additional subject-specific response time threshold was used as mentioned above to determine whether the participants earned the reward, but this time threshold did not affect the task structure and was invisible to the participants.
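For illustration, the trial timeline described above can be sketched as follows. This is a schematic in Python with timing constants and names of our choosing; the actual experiment was implemented in Psychtoolbox3/MATLAB.

```python
# Illustrative sketch of the event sequence of one trial, using the timings
# described in the text (not the authors' Psychtoolbox code).
CUE_DUR = 1.0                     # cue: 1 s
FIX_DUR = 0.5                     # fixation dot: 0.5 s
JITTERS = [0.1, 0.4, 0.7, 1.0]    # additional variable delay before stimulus
STIM_DUR = 0.120                  # stimulus: 120 ms
MAX_RESPONSE = 3.0                # mask shown until response, up to 3 s
ITI = 1.0                         # blank inter-trial interval: 1 s

def trial_events(jitter):
    """Return (event, onset) pairs for one trial, relative to cue onset."""
    assert jitter in JITTERS
    t = 0.0
    events = [("cue", t)]
    t += CUE_DUR
    events.append(("fixation", t))
    t += FIX_DUR + jitter
    events.append(("stimulus", t))
    t += STIM_DUR
    events.append(("mask", t))    # shown until response or MAX_RESPONSE
    return events
```

With a 0.4 s jitter, for example, the stimulus appears 1.9 s after cue onset.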

We used catch trials to decorrelate the BOLD signals of the cue and stimulus epochs. 33% of all trials included a cue followed by a fixation dot for 500 ms, which then turned red for another 500 ms to indicate the absence of a stimulus, followed by the inter-trial interval.

Stimuli

Objects were presented at the center of the screen on a grey background. The objects subtended 2.95° of visual angle in width and 2.98° in height. Four exemplars from each visual category were used. Exemplars were chosen with similar colors, dimensions, and orientations across the categories. All exemplars were used an equal number of times in each condition and in each run to ensure that any differences between the experimental conditions would not be driven by the variability of exemplars. To increase the task demand, based on pilot data, we added Gaussian white noise to the stimuli. The post-stimulus mask was generated by randomly combining pieces of the stimuli that were used in the experiment. The mask was the same size as the stimuli and was presented until a response was made or the response time expired.

Structure and Design

Each participant completed 6 functional runs of the task in the scanner (mean duration ± SD: 6.2 ± 0.13 min). Each run started with a response-mapping instructions screen (e.g. left = target present, right = target absent), displayed until the participants pressed a button to continue. Halfway through the run, the instructions screen was presented again with the reversed response mapping. All trials required a button response to indicate whether the target was present or absent, and the change of response mapping ensured that conditions were not confounded by the side of the button press. Each run included 104 trials. Of these, 8 were dummy trials following the response-mapping instructions (4 after each instructions screen), and were excluded from the analysis. Of the remaining 96 trials, one-third (32 trials) were cue-only trials (catch trials). Of the remaining 64 trials, 32 were no-reward trials and 32 were reward trials. Of the 32 no-reward trials, half (16) were cued with one visual category, and half (16) with the other. For each cued category, half of the trials (8) were Target trials, and half (8) were nontarget trials, to ensure an equal number of target (present) and nontarget (absent) trials. Of the nontarget trials, half (4) were High-conflict nontargets, and half (4) were Low-conflict nontargets. There were thus 4 trials per cue and reward level for the High- and Low-conflict nontarget conditions, and 8 for the Target condition, with the latter split into two regressors (see General Linear Model (GLM) for the Main Task section below). A similar split was used for reward trials. An event-related design was used and the order of the trials was randomized in each run. At the end of each run, the money earned in the reward trials and the number of correct trials (across both reward and no-reward trials) were presented on the screen.
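The per-run trial breakdown described above can be checked with simple arithmetic. The following is an illustrative Python sketch with variable names of our choosing, not part of the study's code.

```python
# Per-run trial breakdown, following the counts described in the text.
TRIALS_PER_RUN = 104
dummy = 8                                         # excluded from analysis
task_and_catch = TRIALS_PER_RUN - dummy           # 96 analyzed trials
catch = task_and_catch // 3                       # one-third are cue-only trials
task = task_and_catch - catch                     # full (cue + stimulus) trials
per_reward_level = task // 2                      # no-reward vs. reward
per_cue = per_reward_level // 2                   # split by cued category
targets_per_cue = per_cue // 2                    # target-present trials
nontargets_per_cue = per_cue - targets_per_cue    # target-absent trials
high_conflict = nontargets_per_cue // 2           # High-conflict nontargets
low_conflict = nontargets_per_cue - high_conflict # Low-conflict nontargets
```

This reproduces the numbers in the text: 32 catch trials, 32 trials per reward level, 8 Target and 4 + 4 nontarget trials per cue and reward level.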

Functional Localizers

In addition to the main task, we used two other tasks in order to functionally localize MD regions and LOC in individual participants using independent data. These were used in conjunction with ROI templates and a double-masking procedure to extract voxel data for MVPA (see ROI definition for more details).

To localize MD regions, we used a spatial working memory task [7]. On each trial, participants remembered 4 locations (Easy condition) or 8 locations (Hard condition) in a 3×4 grid. Each trial started with fixation for 500 ms. Locations on the grid were then highlighted consecutively for 1 s (1 or 2 locations at a time, for the Easy and Hard conditions, respectively). In a subsequent two-alternative forced-choice display (3 s), participants had to choose the grid with the correct highlighted locations by pressing the left or the right button. Feedback was given after every trial for 250 ms. Each trial was 8 s long, and each block included 4 trials (32 s). There was an equal number of correct grids on the right and left in the choice display. Participants completed 2 functional runs of 5 min 20 sec each, with 5 Easy blocks alternated with 5 Hard blocks in each run. We used the contrast of Hard vs. Easy blocks to localize MD regions.

As a localizer for LOC we used a one-back task with blocks of objects interleaved with blocks of scrambled objects. The objects were in grey scale and taken from a set of 61 everyday objects (e.g. camera, coffee cup, etc.). Participants had to press a button when the same image was presented twice in a row. Images were presented for 300 ms followed by a 500 ms fixation. Each block included 15 images with two image repetitions and was 12 s long. Participants completed two runs of this task, with 8 object blocks, 8 scrambled object blocks, and 5 fixation blocks. The objects vs. scrambled objects contrast was used to localize LOC.

Scanning Session

The scanning session included a structural scan, 6 functional runs of the main task, and 4 functional localizer runs – 2 for MD regions and 2 for LOC. The scanning session lasted up to 100 minutes, with an average of 65 minutes of EPI time. The tasks were introduced to the participants in a pre-scan training session. The average reaction time of 32 no-reward trials of the main task completed in this practice session was set as the time threshold for the reward trials to be used in the scanner session. All tasks were written and presented using Psychtoolbox3 [44] and MATLAB (The MathWorks, Inc).

Data Acquisition

fMRI data were acquired using a Siemens 3T Prisma scanner with a 32-channel head coil. We used a multi-band imaging sequence (CMRR, release 016a) with a multi-band factor of 3, acquiring 2 mm isotropic voxels [45]. Other acquisition parameters were: TR = 1.1 s, TE = 30 ms, 48 slices per volume with a slice thickness of 2 mm and no gap between slices, in-plane resolution 2 × 2 mm, field of view 205 mm, flip angle 62°, and interleaved slice acquisition order. No iPAT or in-plane acceleration was used. T1-weighted multiecho MPRAGE [46] high-resolution images were also acquired for all participants, in which four different TEs were used to generate four images (voxel size 1 mm isotropic, field of view of 256 × 256 × 192 mm, TR = 2530 ms, TE = 1.64, 3.5, 5.36, and 7.22 ms). The voxelwise root mean square across the four MPRAGE images was computed to obtain a single structural image.

Data and Statistical Analysis

The primary analysis approach was multi-voxel pattern analysis (MVPA), to assess representation of behaviorally relevant category distinctions with and without reward. An additional ROI-based univariate analysis was conducted to confirm the recruitment of the MD network. Preprocessing, GLM and univariate analysis of the fMRI data were performed using SPM12 (Wellcome Department of Imaging Neuroscience, London, England; www.fil.ion.ucl.ac.uk), and the Automatic Analysis (aa) toolbox [47].

We used an alpha level of .05 for all statistical tests. Bonferroni correction for multiple comparisons was used when required, and the corrected p-values and uncorrected t-values are reported. All t tests that were used to compare two conditions were paired due to the within-subject design. A one-tailed t test was used when the prediction was directional, including testing for classification accuracy above chance level. All other t tests in which the a priori hypothesis was not directional were two-tailed. Additionally, effect size (Cohen’s dz) was computed. All analyses were conducted using custom-made MATLAB (The Mathworks, Inc) scripts, unless otherwise stated.
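The statistical approach described above (paired t-tests, Bonferroni correction, Cohen's dz) can be sketched as follows. This is an illustrative Python equivalent of the custom MATLAB scripts, with a function name of our choosing, not the authors' code.

```python
import numpy as np
from scipy import stats

def paired_test(a, b, n_comparisons=1, alternative="two-sided"):
    """Paired t-test between two within-subject conditions.

    Returns the t statistic, the Bonferroni-corrected p-value, and
    Cohen's dz (mean of the paired differences over their SD).
    """
    a, b = np.asarray(a, float), np.asarray(b, float)
    t, p = stats.ttest_rel(a, b, alternative=alternative)
    p_corrected = min(p * n_comparisons, 1.0)      # Bonferroni correction
    diff = a - b
    dz = diff.mean() / diff.std(ddof=1)            # Cohen's dz for paired data
    return t, p_corrected, dz
```

Passing `alternative="greater"` gives the one-tailed test used for directional predictions such as above-chance classification accuracy.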

All raw data and code used in this study will be publicly available upon publication.

Pre-processing

Initial processing included motion correction and slice time correction. The structural image was coregistered to the Montreal Neurological Institute (MNI) template, and then the mean EPI was coregistered to the structural. The structural image was then normalized to the MNI template via a nonlinear deformation, and the resulting transformation was applied on the EPI volumes. Spatial smoothing of FWHM = 5 mm was performed for the functional localizers data only.

General Linear Model (GLM) for the Main Task

We used GLMs to model the main task and localizer data. Regressors for the main task included 12 conditions during the stimulus epoch and 4 conditions during the cue epoch. Regressors during the stimulus epoch were split according to reward level (no-reward, reward), cued visual category (category 1, category 2), and behavioral status (Target, High-conflict nontarget, Low-conflict nontarget). To ensure an equal number of target-present and target-absent trials, the number of Target trials in our design was twice the number of High-conflict and Low-conflict nontarget trials. The Target trials included two repetitions of each combination of cue, visual category and exemplar, with a similar split for reward trials. These two Target repetitions were modelled as separate Target1 and Target2 regressors in the GLM so that all the regressors were based on an equal number of trials; this split was invisible to the participants. All the univariate and multivariate analyses were carried out while keeping the two Target regressors separate to avoid any bias of the results, and they were averaged at the final stage of the analysis. Overall, the GLM included 16 regressors of interest for the 12 stimulus conditions. Each regressor was based on data from all correct trials in the respective condition in each run (up to 4 trials). To account for possible effects of reaction time (RT) on the beta estimates because of the varying duration of the stimulus epoch, and consequently their potential effect on decoding results, these regressors were modelled with durations from stimulus onset to response [48]. This model scales the regressors based on the reaction time, so that the beta estimates reflect activation per unit time and are comparable across conditions with different durations. Regressors during the cue epoch included both task and cue-only (catch) trials and were split by reward level and cued category, modelled with a duration of 1 s.
Cue regressors were based on 16 trials per regressor per run. As one-third of all trials were catch trials, the cue and stimulus epoch regressors were decorrelated and separable in the GLM. Regressors were convolved with the canonical hemodynamic response function (HRF). The 6 movement parameters and run means were included as covariates of no interest.
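The RT-duration modelling described above can be sketched schematically: each stimulus regressor is a boxcar lasting from stimulus onset until the response, convolved with a canonical HRF. The snippet below is an illustrative Python sketch, not the SPM implementation; the double-gamma HRF is a standard SPM-style approximation and its parameters here are assumptions.

```python
import numpy as np
from scipy.stats import gamma

def canonical_hrf(dt, duration=32.0):
    """Double-gamma canonical HRF sampled every dt seconds (SPM-style
    approximation; exact parameters are an assumption for illustration)."""
    t = np.arange(0.0, duration, dt)
    hrf = gamma.pdf(t, 6) - gamma.pdf(t, 16) / 6.0
    return hrf / hrf.sum()

def rt_duration_regressor(onsets, rts, n_scans, tr, dt=0.1):
    """Boxcar of duration RT at each stimulus onset, convolved with the HRF,
    then downsampled to one value per scan (TR)."""
    n_fine = round(n_scans * tr / dt)
    boxcar = np.zeros(n_fine)
    for onset, rt in zip(onsets, rts):
        boxcar[round(onset / dt):round((onset + rt) / dt)] = 1.0
    reg = np.convolve(boxcar, canonical_hrf(dt))[:n_fine]
    return reg[::round(tr / dt)]
```

Because the boxcar width follows the RT, the resulting beta estimates scale with time-on-task, which is why they are comparable across conditions with different durations.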

GLM for the Functional Localizers

For the MD localizer, regressors included Easy and Hard blocks. For LOC, regressors included objects and scrambled objects blocks. Each block was modelled with its duration. The regressors were convolved with the canonical hemodynamic response function (HRF). The 6 movement parameters and run means were included as covariates of no interest.

Univariate Analysis

We conducted an ROI analysis to test for the effect of reward on overall activity for the different behavioral status conditions and cues. We used templates for the MD network and for LOC as defined below (see ROI definition). Using the MarsBaR toolbox (http://marsbar.sourceforge.net; Brett et al. 2002) for SPM 12, beta estimates for each regressor of interest were extracted and averaged across runs, and across voxels within each ROI, separately for each participant and condition. For the MD network, beta estimates were also averaged across hemispheres (see ROI definition below). Second-level analysis was done on beta estimates across participants using repeated measures ANOVA. The data for the Target condition was averaged across the two Target1 and Target2 regressors, separately for the no-reward and reward conditions.

ROI Definition

MD network template

ROIs of the MD network were defined a priori using an independent data set (Fedorenko et al. 2013; see t-map at http://imaging.mrc-cbu.cam.ac.uk/imaging/MDsystem). These included the anterior, middle, and posterior parts of the middle frontal gyrus (aMFG, mMFG, and pMFG, respectively), a posterior dorsal region of the lateral frontal cortex (pdLFC), AI-FO, pre-SMA/ACC, and IPS, defined in the left and right hemispheres. The visual component in this template is widely accepted as a by-product of using largely visual tasks, and is not normally considered as part of the MD network. Therefore, it was not included in the analysis. The MD network is highly bilateral, with similar responses in both hemispheres [7,12]. We therefore averaged the results across hemispheres in all the analyses.

LOC template

LOC was defined using data from a functional localizer in an independent study with 15 participants (Lorina Naci, PhD dissertation, University of Cambridge). In this localizer, forward- and backward-masked objects were presented, as well as masks alone. Masked objects were contrasted with masks alone to identify object-selective cortex [50]. The division of LOC into its anterior part, the posterior fusiform region (pFs) of the inferior temporal cortex, and its posterior part, the lateral occipital region (LO), was done using a cut-off MNI coordinate of Y = −62, as previous studies have shown differences in processing for these two regions [51,52].

Voxel selection for MVPA

To compare between regions within the MD network and between sub-regions in LOC, we controlled for the ROI size and used the same number of voxels for all regions. We used a dual-masking approach that allowed the use of both a template, consistent across participants, as well as subject-specific data as derived from the functional localizers [53,54]. For each participant, beta estimates of each condition and run were extracted for each ROI based on the MD network and LOC templates. For each MD ROI, we then selected the 200 voxels with the largest t-value for the Hard vs. Easy contrast as derived from the independent subject-specific functional localizer data. This number of voxels was chosen prior to any data analysis, similar to our previous work [12]. For each LOC sub-region, we selected 180 voxels with the largest t-values of the object vs. scrambled contrast from the independent subject-specific functional localizer data. The selected voxels were used for the voxelwise patterns in the MVPA for the main task. The number of voxels that was used for LOC was smaller than for MD regions because of the size of the pFs and LO masks. For the analysis that compared MD regions with the visual regions, we used 180 voxels from all regions to keep the ROI size the same.
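The dual-masking selection described above can be sketched in a few lines: within a group-level template ROI, keep the n voxels with the largest t-values from the subject-specific localizer contrast (Hard vs. Easy for MD regions, objects vs. scrambled for LOC). This is an illustrative Python sketch with names of our choosing, not the authors' code.

```python
import numpy as np

def select_voxels(template_mask, localizer_t, n_voxels):
    """Return flat indices of the top-n localizer voxels inside the template.

    template_mask: boolean/0-1 array marking the template ROI.
    localizer_t:   array of localizer t-values, same shape as the mask.
    """
    in_roi = np.flatnonzero(template_mask)                  # template ROI voxels
    order = np.argsort(localizer_t.ravel()[in_roi])[::-1]   # descending by t
    return in_roi[order[:n_voxels]]
```

For example, n_voxels would be 200 for each MD ROI and 180 for each LOC sub-region, as described above.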

Multivoxel pattern analysis (MVPA)

We used MVPA to test for the effect of reward motivation on the discrimination between the task-related behavioral status pairs. Voxelwise patterns using the selected voxels within each template were computed for all the task conditions in the main task. We applied our classification procedure on all possible pairs of conditions as defined by the GLM regressors of interest during the stimulus presentation epoch, for the no-reward and reward conditions separately (Figure 1B). For each pair of conditions, MVPA was performed using a support vector machine classifier (LIBSVM library for MATLAB, c=1) implemented in the Decoding Toolbox [55]. We used leave-one-run-out cross-validation in which the classifier was trained on the data of five runs (training set) and tested on the sixth run (test set). This was repeated 6 times, leaving a different run to test each time, and classification accuracies were averaged across these 6 folds. Classification accuracies were then averaged across pairs of different cued categories, yielding discrimination measures for three pairs of behavioral status (Targets vs. High-conflict nontargets, Targets vs. Low-conflict nontargets, and High-conflict vs. Low-conflict nontargets) within each reward level (no-reward, reward). Because the number of Target trials in our design was twice the number of High-conflict and Low-conflict nontarget trials, each discrimination that involved a Target condition was computed separately for the two Target regressors (Target1 and Target2) and classification accuracies were averaged across them.
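The leave-one-run-out classification procedure described above can be sketched as follows. The authors used LIBSVM (c = 1) via The Decoding Toolbox in MATLAB; this is an illustrative scikit-learn equivalent, not their code.

```python
import numpy as np
from sklearn.svm import SVC

def leave_one_run_out_accuracy(patterns, labels, runs):
    """Leave-one-run-out cross-validated classification accuracy.

    patterns: (n_samples, n_voxels) voxelwise pattern estimates.
    labels, runs: (n_samples,) condition labels and run indices.
    """
    accuracies = []
    for held_out in np.unique(runs):
        train, test = runs != held_out, runs == held_out
        clf = SVC(kernel="linear", C=1.0)       # linear SVM, C = 1
        clf.fit(patterns[train], labels[train])
        accuracies.append(clf.score(patterns[test], labels[test]))
    return float(np.mean(accuracies))           # mean accuracy across folds
```

With 6 runs this yields the 6-fold scheme described above: train on five runs, test on the sixth, and average the fold accuracies.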

The Target and High-conflict nontarget pairs of conditions included cases in which both conditions had an item from the same visual category as the stimulus (following different cues), as well as cases in which items from two different visual categories were displayed as stimuli (following the same cue). To test for the contribution of the visual category to the discrimination, we split the Target vs. High-conflict nontarget pairs of conditions into these two cases and applied the statistical tests accordingly.

Whole-brain searchlight pattern analysis

To test whether additional regions outside the MD network show a change in discriminability between voxelwise activity patterns of the behavioral status conditions when reward is introduced, we conducted a whole-brain searchlight pattern analysis [56]. This analysis enables the identification of focal regions that carry relevant information, unlike the decoding based on larger ROIs, which tests for a more widely distributed representation of information. For each participant, data were extracted from spherical ROIs with an 8 mm radius, centered on each voxel in the brain. These voxels were used to perform the same MVPA analysis as described above. Thus, for each voxel, we computed the classification accuracies for the relevant distinctions, separately for the no-reward and reward conditions. These whole-brain maps were smoothed using a 5 mm FWHM Gaussian kernel. The t-statistic from a second-level random-effects analysis on the smoothed maps was thresholded at the voxel level using FDR correction (p < 0.05).
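The construction of the searchlight spheres can be illustrated as follows. This is a sketch that assumes 3 mm isotropic voxels (the voxel size is not restated here); the same pairwise classification described above would then be run on the voxels falling inside each sphere:

```python
import numpy as np

def sphere_offsets(radius_mm, voxel_mm):
    """Voxel-grid offsets within a sphere of the given radius around a center voxel."""
    r = int(np.floor(radius_mm / voxel_mm))
    # All integer offsets in the bounding cube, shape (n, 3)
    grid = np.mgrid[-r:r + 1, -r:r + 1, -r:r + 1].reshape(3, -1).T
    # Keep offsets whose Euclidean distance (in mm) is within the radius
    keep = (grid ** 2).sum(axis=1) * voxel_mm ** 2 <= radius_mm ** 2
    return grid[keep]

# 8 mm radius searchlight, assuming 3 mm voxels
offsets = sphere_offsets(8.0, 3.0)
n_voxels_per_sphere = len(offsets)
```

Centering this offset set on every brain voxel in turn yields the spherical neighborhoods whose decoding accuracies populate the whole-brain maps.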

Results

Behavior

Overall accuracy levels were high (mean ± SD: 92.51% ± 0.08%). Mean and SD accuracy rates for the Target, High-conflict nontarget and Low-conflict nontarget conditions in the no-reward trials were 91.2% ± 5.8%, 89.1% ± 8.8%, and 96.6% ± 3.8%, respectively; for the reward trials they were 94.2% ± 5.0%, 87.8% ± 8.7%, and 96.1% ± 4.4%, respectively. A two-way repeated measures ANOVA with reward level and behavioral status as within-subject factors showed no main effect of reward (F1, 23 = 0.49, p = 0.49), confirming that the added time constraint for reward trials did not lead to a drop in performance. There was a main effect of behavioral status (F2, 23 = 29.64, p < 0.001) and an interaction between reward level and behavioral status (F2, 23 = 5.81, p < 0.01). Post-hoc tests with Bonferroni correction for multiple comparisons showed larger accuracies for Low-conflict nontargets compared to Targets and High-conflict nontargets (two-tailed t-test: t23 = 5.64, p < 0.001, dz = 1.15; t23 = 5.50, p < 0.001, dz = 1.12, respectively) in the no-reward trials, as expected given that the Low-conflict nontarget category was fixed throughout the experiment. In the reward trials, accuracies were larger for Targets compared to High-conflict nontargets (t23 = 4.45, p < 0.001, dz = 0.91) and for Low-conflict nontargets compared to High-conflict ones (t23 = 5.92, p < 0.001, dz = 1.2), with only a marginal difference between Targets and Low-conflict nontargets (t23 = 2.49, p = 0.06). Accuracies for Target trials were larger for reward compared to no-reward trials (t23 = 2.92, p = 0.008, dz = 0.61), indicating a possible behavioral benefit of reward. There was no difference between reward and no-reward trials for High-conflict and Low-conflict nontargets (t23 < 1.1, p > 0.1, for both).

RTs of successful trials for the three behavioral status conditions, Target, High-conflict nontarget and Low-conflict nontarget, in the no-reward trials were 589 ± 98 ms, 662 ± 103 ms, and 626 ± 107 ms, respectively (mean ± SD); RTs for these conditions in the reward trials were 541 ± 99 ms, 614 ± 99 ms, and 585 ± 97 ms, respectively. A two-way repeated measures ANOVA with reward level (no-reward, reward) and behavioral status as within-subject factors showed a main effect of reward (F1, 23 = 40.07, p < 0.001), with shorter RTs on reward trials than on no-reward trials, as expected from the experimental design that required a response within a time limit to receive the reward. An additional main effect of behavioral status (F2, 23 = 50.97, p < 0.001) was observed, with no interaction between reward and behavioral status (F2, 23 = 0.63, p = 0.54). Subsequent post-hoc tests with Bonferroni correction for multiple comparisons showed that RTs for Target trials were faster than for High-conflict and Low-conflict nontarget trials (t23 = 10.03, p < 0.001, dz = 2.05; t23 = 5.17, p < 0.001, dz = 1.06, respectively), and RTs for Low-conflict nontarget trials were faster than for High-conflict ones (t23 = 4.96, p < 0.001, dz = 1.01), as expected from a cued target detection task.
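The kind of post-hoc paired comparison reported here, with Cohen's dz and Bonferroni correction, can be sketched as follows. The simulated RTs use illustrative parameters loosely based on the reported means, not the actual participant data:

```python
import numpy as np
from scipy import stats

# Simulated per-participant mean RTs (illustrative values only, not the real data)
rng = np.random.default_rng(1)
n = 24
rt_target = rng.normal(589, 98, n)                    # Target trials
rt_high_conflict = rt_target + rng.normal(73, 40, n)  # slower High-conflict trials

# Paired t-test across participants
t, p = stats.ttest_rel(rt_high_conflict, rt_target)

# Cohen's dz for paired samples: mean difference over SD of differences
diff = rt_high_conflict - rt_target
dz = diff.mean() / diff.std(ddof=1)

# Bonferroni correction over the 3 pairwise behavioral status comparisons
p_corrected = min(p * 3, 1.0)
```

The same pattern (paired t-test, dz, multiply p by the number of comparisons) applies to the accuracy post-hoc tests above.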

Activity across the MD network during the cue epoch

To address our primary research question, the analysis focused on the stimulus epoch. However, to get a full picture of the data and for comparability with previous studies that showed an increase in cue information with reward, we also report the results for the cue epoch here. This analysis included only the MD network and not the LOC, since no object stimuli were presented during this epoch of the trial.

We first tested for a univariate effect of reward during the cue phase (averaged across the β estimates of the two cues) across all MD regions. A two-way repeated measures ANOVA with reward (2: no-reward, reward) and ROI (7) as factors showed a main effect of reward (F1, 23 = 13.75, p = 0.001) with increased activity during the reward trials compared to the no-reward trials. There was also a main effect of ROI (F6, 138 = 6.44, p < 0.001) and an interaction of reward and ROI (F6, 138 = 6.67, p < 0.001). Post-hoc tests showed that all regions except aMFG showed increased activation for reward trials compared to no-reward trials (two-tailed, Bonferroni corrected for 7 comparisons: t23 > 3.06, p < 0.04, dz > 0.62 for all ROIs except aMFG; t23 = 2.38, p = 0.18, dz = 0.49 for aMFG). Overall, the MD network showed a strong univariate reward effect during the cue epoch.

We next asked whether the cues were decodable as measured using MVPA, and whether decoding levels increased with reward as has been previously reported [24,41]. Decoding between the two cues was computed separately for the two reward levels in each of the MD ROIs. A two-way repeated measures ANOVA with reward (2) and ROI (7) as factors showed no main effects or interactions (F < 1.9, p > 0.08). Decoding levels averaged across all MD ROIs were (mean ± SD) 51.71% ± 4.39% and 50.55% ± 6.69% for the reward and no-reward conditions, respectively. Cue decoding was marginally significant for the reward condition and not significant for the no-reward condition (one-tailed t-test, reward: t23 = 1.9, p = 0.07, dz = 0.39; no-reward: t23 = 0.4, p = 0.7, dz = 0.07). Overall, cue information could not be reliably decoded in any of the MD ROIs, in either the no-reward or the reward condition.

Lastly, we conducted a complementary whole-brain searchlight analysis to test whether cue decoding was observed in other regions beyond the MD network. A second-level random-effects analysis of cue decoding, separately for the no-reward and reward conditions, did not reveal any additional regions that showed cue decoding (FDR correction, p < 0.05). An additional analysis to test for an increase in cue decoding with reward showed similar results, with no voxels surviving FDR correction (p < 0.05). Altogether, our results show that despite substantial increases in overall univariate activity with reward during the cue epoch across the MD network, the cues in our study were not decodable in either the no-reward or the reward condition.

Univariate activity in the MD network during the stimulus epoch

We started our analysis of the stimulus epoch by testing for the effect of reward motivation on the overall activity in MD regions, and whether any such effect differed across the three behavioral status conditions. We used averaged β estimates for each behavioral status (Target, High-conflict nontarget, Low-conflict nontarget) and reward level (no-reward, reward) in each of the MD ROIs (Figure 2). A three-way repeated measures ANOVA with reward (2), behavioral status (3) and ROI (7) as within-subject factors showed no significant main effect of reward (F1, 23 = 3.37, p = 0.079), despite a trend towards increased activity with reward. There was an interaction of reward level and ROI (F6, 138 = 5.02, p = 0.001), with only the AI/FO showing a reward effect in post-hoc tests with Bonferroni correction for multiple (7) comparisons (t23 = 3.88, p = 0.005, dz = 0.79). The IPS showed a reward effect that did not survive correction for multiple comparisons (t23 = 2.58, uncorrected p = 0.016, corrected p = 0.11, dz = 0.53). Importantly, there was no main effect of behavioral status (F2, 46 = 0.97, p = 0.57) and no interaction of reward and behavioral status (F2, 46 = 0.51, p = 0.61). Overall, the univariate results indicated similar levels of activity for the three behavioral status conditions. While we expected an increase in univariate activity with reward in many MD regions [28,33,57], we observed such an increase only in the AI/FO.

Figure 2:
  • Download figure
  • Open in new tab
Figure 2: Univariate activity across the MD network during the stimulus epoch.

A. Univariate results averaged across all MD regions. Results are averaged across the behavioral status conditions for no-reward (blue bar) and reward (red bar) conditions, showing strong recruitment of the network and no increase with reward. B. Average univariate activity across the MD network is shown separately for each behavioral status condition for no-reward (blue bars) and reward (red bars) conditions. Activity is similar for the three behavioral status conditions and does not increase with reward (T: Target, HC: High-conflict nontarget, LC: Low-conflict nontarget). C. Univariate results for the individual MD regions, showing similar results for all regions. Post-hoc tests showed that only activity in AI/FO increased with reward. The MD network template is shown for reference. pdLFC: posterior/dorsal lateral prefrontal cortex, IPS: intraparietal sulcus, preSMA: pre-supplementary motor area, ACC: anterior cingulate cortex, AI: anterior insula, FO: frontal operculum, aMFG, mMFG, pMFG: anterior, middle and posterior middle frontal gyrus, respectively. Error bars indicate S.E.M.

Effect of reward motivation on discrimination of behaviorally relevant category distinctions in the MD network

Our main question concerned the representation of task-related behavioral status information across the MD network and its modulation by reward, and we used MVPA to address it. For each participant and ROI we computed the classification accuracy above chance (50%) for the distinctions between Target vs. High-conflict nontarget, Target vs. Low-conflict nontarget and High-conflict vs. Low-conflict nontargets, separately for no-reward and reward conditions (Figure 3). The analysis was designed to test for discrimination between behavioral status conditions within each reward level, and whether these discriminations were larger when reward was introduced compared to the no-reward condition. A three-way repeated-measures ANOVA with reward (2), behavioral distinction (3) and ROI (7) as within-subject factors showed no main effect of ROI (F6, 138 = 0.97, p = 0.45) and no interaction of ROI with reward or behavioral distinction (F < 1.16, p > 0.31). Therefore, the classification accuracies were averaged across ROIs for further analysis (Figure 3A). First, we looked at the overall discrimination of behavioral status pairs. Averaged across the three pairs of behavioral status, decoding accuracies were (mean ± SD) 51.4% ± 2.8% and 51.8% ± 3.5% for the no-reward and reward conditions, respectively. Decoding levels were above chance (50%) for both the no-reward and reward trials (one-tailed t-test against chance, corrected for 2 comparisons, reward: t23 = 2.5, corrected p = 0.02, dz = 0.5; no-reward: t23 = 2.34, corrected p = 0.03, dz = 0.48). The decoding levels above chance for the individual pairs of behavioral status for the no-reward and reward conditions are summarized in Table 1. Overall, our results show that on average behaviorally relevant categorical distinctions are represented across the MD network in both no-reward and reward conditions, with some differences between individual pairs of behavioral status.
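The group-level test of decoding accuracy against chance can be sketched as follows (one-tailed one-sample t-test with Bonferroni correction over the two reward levels). The accuracies here are illustrative, deterministic values, not the real data:

```python
import numpy as np
from scipy import stats

# Illustrative per-participant classification accuracies (percent), not the real data
acc = 50.0 + np.linspace(0.5, 3.5, 24)

# Two-tailed one-sample t-test against chance (50%)
t, p_two = stats.ttest_1samp(acc, 50.0)

# Convert to one-tailed: we only predict accuracy above chance
p_one = p_two / 2 if t > 0 else 1 - p_two / 2

# Bonferroni correction over 2 comparisons (no-reward and reward)
p_corrected = min(p_one * 2, 1.0)

# Effect size: Cohen's dz for a one-sample comparison against 50%
dz = (acc.mean() - 50.0) / acc.std(ddof=1)
```

The same procedure, with the correction factor changed to 6, applies to the individual behavioral status pairs reported in Table 1.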

View this table:
  • View inline
  • View popup
  • Download powerpoint
Table 1: Decoding accuracies for pairs of behavioral status across the MD network.

t values are for a one-tailed t-test against chance level (50%). Corrected p values were obtained using Bonferroni correction for 6 comparisons. + p < 0.06, * p < 0.05, ** p < 0.01

Figure 3:
  • Download figure
  • Open in new tab
Figure 3: Reward does not modulate distinctions of behavioral status across the MD network.

A. Classification accuracy is presented as percentage above chance (50%), averaged across all MD regions and behavioral status pairs, for no-reward (blue bars) and reward (red bars) trials. Behavioral status was decodable but not modulated by reward. Asterisks above bars show significant decoding above chance (one-tailed, Bonferroni corrected for 2 comparisons). B. The data in A are shown separately for the three distinctions of Target vs. High-conflict nontarget, Target vs. Low-conflict nontarget, and High- vs. Low-conflict nontargets. T: Target, HC: High-conflict nontarget, LC: Low-conflict nontarget. Asterisks above bars show one-tailed significant discrimination between behavioral categories above chance without correction (black), and Bonferroni corrected for multiple (6) comparisons (red). See Table 1 for details. C. Decoding results are shown for the individual MD regions. The MD network template is shown for reference. pdLFC: posterior/dorsal lateral prefrontal cortex, IPS: intraparietal sulcus, preSMA: pre-supplementary motor area, ACC: anterior cingulate cortex, AI: anterior insula, FO: frontal operculum, aMFG, mMFG, pMFG: anterior, middle and posterior middle frontal gyrus, respectively. Error bars indicate S.E.M. + p < 0.06, * p < 0.05, ** p < 0.01.

In our critical analysis, we tested for the modulatory effect of reward on the discriminability between pairs of behavioral status. In contrast to our prediction, a two-way repeated measures ANOVA with reward (2) and behavioral distinction (3) as within-subject factors showed no main effect of reward or behavioral distinction (F1, 23 = 0.26, p = 0.6; F2, 46 = 1.37, p = 0.26, respectively), and no interaction of the two (F2, 46 = 0.74, p = 0.48). To test for the specific prediction that reward might increase discrimination for the high-conflict pair of conditions, an effect that may not have been picked up by the ANOVA, we compared decoding levels for the Target vs. High-conflict nontarget distinction between the no-reward and reward conditions. Again in contrast to our prediction, classification accuracy was not larger in the reward trials compared to the no-reward trials (one-tailed paired t-test: t23 = 1.07, p = 0.15, dz = 0.22). In summary, although the average decoding levels of behavioral status were above chance across the MD network, we did not find increases in decodability with reward.

To test for other brain regions that may have shown increased pattern discriminability with reward beyond the MD network, we conducted a complementary whole-brain searchlight analysis. In a second-level random-effects analysis of behavioral status classification maps (average across the three pairs of behavioral status) of reward vs. no-reward conditions, none of the voxels survived an FDR threshold of p < 0.05. A separate analysis for classification of Targets vs. High-conflict nontargets showed similar results, with no voxels surviving FDR correction (p < 0.05). Therefore, our data did not reveal any brain regions that showed the predicted increase in discriminability with reward.

Effects of reward motivation on behaviorally relevant category distinctions in LOC

It is widely accepted that the frontoparietal MD network exerts top-down control on visual areas, contributing to task-dependent processing of information. To test for reward effects on decoding of categorical information based on behavioral status in the visual cortex, we performed similar univariate and multivariate analyses during the stimulus epoch in the high-level general object visual region, the lateral occipital complex (LOC), separately for its two sub-regions, LO and pFs. We first conducted a univariate analysis to test for an effect of reward and behavioral status on overall activity in LOC, which did not show a change in BOLD response with reward. A four-way repeated-measures ANOVA with reward (2), behavioral status (3), ROI (2), and hemisphere (2) as within-subject factors showed no main effect of reward (F1, 23 = 0.3, p = 0.6), no main effect of ROI (F1, 23 = 3.7, p = 0.07), and a trend towards a main effect of hemisphere (F1, 23 = 3.95, p = 0.06). There was an interaction of reward and ROI (F1, 23 = 7.3, p = 0.01), but post-hoc tests with correction for multiple (2) comparisons showed that activity was not larger for reward compared to no-reward trials in either LO or pFs (two-tailed t-test: t23 = 1.45, p = 0.16, dz = 0.3; t23 = 0.94, p = 0.36, dz = 0.2, for LO and pFs, respectively). There was a main effect of behavioral status (F2, 46 = 8.73, p < 0.001), but no interaction of behavioral status and reward (F2, 46 = 0.56, p = 0.57). Altogether, the univariate results show that reward did not lead to increased activity in LOC for any of the behavioral status conditions.

We then tested for the representation of the task-related behavioral status conditions in LOC (Figure 4). Decoding levels averaged across all pairs of behavioral status and the two LOC ROIs were above chance for both no-reward and reward conditions (mean ± SD: 53.23% ± 3.64% and 53.76% ± 4.14% for the no-reward and reward conditions, respectively. One-tailed t-test against chance, corrected for 2 comparisons, reward: t23 = 4.45, corrected p < 0.001, dz = 0.9; no-reward: t23 = 4.34, corrected p < 0.001, dz = 0.9). Importantly, decoding levels were not larger for the reward conditions compared to the no-reward conditions for any of the behavioral status distinctions, with similar results for both LO and pFs. A four-way repeated-measures ANOVA with reward (2), behavioral distinction (3), ROIs (2) and hemispheres (2) as within-subject factors showed no main effect of reward (F1, 23 = 0.34, p = 0.56) or interaction of reward and ROI (F1, 23 = 1.14, p = 0.29). No other main effects or interactions were significant (F < 3.15, p > 0.05). Overall, these results demonstrate that reward did not modulate the coding of the task-related behavioral status distinctions in LOC.

Figure 4:
  • Download figure
  • Open in new tab
Figure 4: Reward motivation does not increase coding of behavioral status in LOC.

A. Classification accuracy averaged across all behavioral status pairs is presented as percentage above chance (50%), averaged across LO and pFs and both hemispheres. Classification accuracies for no-reward (blue bars) and reward (red bars) conditions are similar and above chance. Asterisks above bars show significant decoding level above chance (one-tailed, Bonferroni corrected for 2 comparisons). B. Classification accuracies are similar for all three behavioral status distinctions. T: Target, HC: High-conflict nontarget, LC: Low-conflict nontarget. Asterisks above bars show one-tailed significant discrimination between behavioral categories above chance without correction (black), and corrected for multiple (6) comparisons (red). C. Classification accuracies for LO and pFs are presented separately, averaged across hemispheres. The LOC template is shown on sagittal and coronal planes, with a vertical line dividing it into posterior (LO) and anterior (pFs) regions. Error bars indicate S.E.M. + p < 0.06, * p < 0.05, ** p < 0.01, *** p < 0.001.

Conflict-contingent vs. visual category effects

An important aspect of the Target and High-conflict nontarget conditions in this experiment was that they both contained the same visual categories, which could be either a target or a nontarget (Figure 1B). Therefore, the Target vs. High-conflict nontarget pairs of conditions in our decoding analysis included cases where the stimuli in the two conditions were items from different visual categories (e.g. shoe and sofa following a ‘shoe’ cue), as well as cases where the two stimuli were items from the same visual category (e.g. shoe following a ‘shoe’ cue and a ‘sofa’ cue). We further investigated whether the representation in the MD network and in the LOC was driven by the task-related high-conflict nature of the two conditions or by the different visual categories of the stimuli, and whether there was a facilitative effect of reward that is limited to the representation of the visual categories. For each participant, the decoding accuracy for this behavioral status distinction was computed separately for pairs of conditions in which the stimuli belonged to the same visual category (different cue trials), and for pairs in which the stimuli belonged to different visual categories (same cue trials). This analysis was conducted by selecting 180 voxels for both MD and LOC ROIs, to keep the ROI size the same. For both MD and LOC regions, there was no interaction with ROI or hemisphere, therefore accuracy levels were averaged across hemispheres and ROIs for the MD network and LOC (repeated measures ANOVA with reward (2), distinction type (2, same or different visual category), ROIs (7 for MD, 2 for LOC) and hemispheres (2, for LOC only) as within-subject factors: F < 3.15, p > 0.05 for all interactions with ROI and hemisphere). Figure 5 shows Target vs. High-conflict nontarget distinctions separately for same and different visual categories for no-reward and reward conditions, for both the MD network and LOC.

Figure 5:
  • Download figure
  • Open in new tab
Figure 5: Decoding of highly conflicting behavioral status distinctions in the MD network and LOC.

Classification accuracies above chance (50%) are presented for no-reward and same-visual-category distinctions (light blue), no-reward and different-visual-category distinctions (dark blue), reward and same-visual-category distinctions (light red), and reward and different-visual-category distinctions (dark red), separately for the MD network and the LOC, averaged across regions and hemispheres in each system. In the MD network, neither reward nor visual category modulated the discrimination of Target vs. High-conflict nontarget. In contrast, classification accuracies in the LOC are larger when the displayed objects are from two different visual categories compared to when they belong to the same visual category, irrespective of the reward level. Asterisks above bars show one-tailed significant discrimination above chance without correction (black), and corrected for multiple (6) comparisons (red). Significant main effects of visual category are shown above the bars of each system. Error bars indicate S.E.M. * p < 0.05, ** p < 0.01, *** p < 0.001.

We next tested for the effect of reward and distinction type (same or different visual category) on decoding levels in each of the two systems. In the MD network, a two-way repeated measures ANOVA with reward (2) and distinction type (2) as factors showed no main effect of reward (F1, 23 = 0.92, p = 0.35), no effect of distinction type (F1, 23 = 2.9, p = 0.1), and no interaction (F1, 23 = 0.1, p = 0.8). These results show that there was no effect of reward on high-conflict items that may be specific to the distinction between visual categories. In contrast, a similar ANOVA for LOC showed a main effect of distinction type (F1, 23 = 25.9, p < 0.001) with no effect of reward (F1, 23 = 0.05, p = 0.8) and no interaction (F1, 23 = 0.46, p = 0.50). Together, these results demonstrate that representation was driven by visual categories in LOC, but not in the MD network. To further establish this dissociation between the two systems, we used a three-way repeated measures ANOVA with distinction type (2, same or different visual category), reward (2), and brain system (2, MD or LOC) as within-subject factors. There was no main effect of brain system (F1, 23 = 1.2, p = 0.3), allowing us to compare between the two systems. An interaction between distinction type and system (F1, 23 = 16.7, p < 0.001) confirmed that decoding levels in the two systems were affected differently by visual category. Critically for our research question, reward did not lead to increased decoding levels in either of the systems (no main effect of reward or interactions with reward: F < 0.77, p > 0.39).

Discussion

In this study we used a cued target detection task to test for the effect of reward motivation on the coding of task-related behaviorally-relevant category distinctions in the frontoparietal MD network as reflected in distributed patterns of fMRI data. Using MVPA, we showed that information about the behavioral status during the stimulus epoch of a trial was represented across the MD network, similarly to our previous study [12]. However, in contrast to our prediction, reward motivation, in the form of monetary reward, did not enhance the distinctions between the three behavioral status conditions across the MD network. Additionally, we did not find evidence for a selective facilitative effect of reward on discriminability of highly conflicting items (competition-contingent effect). In the LOC, information about the behavioral status of the presented stimuli was primarily driven by visual categories, as expected, and was not modulated by reward motivation.

Previous reports showed an enhancement effect of reward on overall activity in the frontoparietal control network [23,28,57], in line with our data that showed an increase in univariate activity with reward during the cue epoch. Recently, it was demonstrated that cue decoding increased with reward motivation [24], in particular when task rules change from one trial to another [41]. Whether reward also modulates the representation of task-related information that is processed while cue and stimulus information is integrated remained unclear. These two effects of reward are complementary to one another, and both are key aspects of cognitive control and essential for reaching a decision. If reward enhances cue coding, then it would be reasonable to hypothesize that it may also facilitate the integration of the cue and the subsequent stimulus that leads to successful completion of the task. Furthermore, previous studies have suggested that reward motivation particularly affects conditions of high conflict. Padmala and Pessoa (2011) reported a decrease in interference with reward in response inhibition tasks. Reward also reduced the incongruency effect in the Stroop task compared to non-rewarded trials [43] and enhanced error monitoring [42]. Thus, we predicted that reward motivation would enhance the representation of task information, and that this effect may be specific to highly conflicting items. Based on our previous work that showed representation of behavioral status information across the frontoparietal cortex [12], we used three behavioral status levels and their distinctions to test these predictions. While the overall representation of behavioral status across the MD network replicated our previous findings [12], our results did not show an increase in representation with reward, in contrast to our predictions. Additionally, we did not observe a selective increase in representation for the highly conflicting items, namely Targets vs. High-conflict nontargets. Recently, Hall-McMaster et al. (2019) showed some increases in task-relevant stimulus feature information when reward levels were high, using distributed patterns in EEG data. In our task, we tested for representation of the behavioral status of the presented items, rather than stimulus features, and we did not observe changes in representation similar to those observed by Hall-McMaster et al. Several factors may account for this difference, including the type of representations that were tested (features vs. behavioral status), multiple differences in the design that affect our ability to detect multivariate representations, the wide coverage with low spatial specificity of the EEG data compared to the more focused ROIs in our fMRI study, and, perhaps most significantly, the limited time window in which such differences were observed in EEG, which cannot be detected with the low temporal resolution of fMRI. More generally, several potential explanations can account for our finding of no facilitative effect of reward on pattern discriminability of behavioral status. It is possible that the effect of reward is limited to cue decoding when the task context is set, as has been previously demonstrated [24], and does not extend to the stimulus phase when information is processed based on the cue. However, we cannot rule out other explanations, including that our reward manipulation was not sufficiently strong to affect pattern discriminability, and that multiple factors in the experimental design make small effects hard to detect with current MVPA methods, particularly across the frontoparietal cortex [58]. Additionally, insufficient power is always a concern when reporting null results, and low power may have limited our ability to detect an effect of reward on decoding.
Nevertheless, we note that some of our decoding results demonstrate sufficient power. First, our overall behavioral status decoding levels were above chance, with decoding for the no-reward conditions similar to our previous study [12]. Second, decoding in LOC showed a clear dependence on visual category, as expected in the visual cortex, in contrast to the distinct pattern of decoding across the MD network, which did not depend on the visual category of the presented items.

Our predictions were based on the sharpening and prioritization account, which postulates that reward motivation leads to a sharpened neural representation of relevant information depending on the current task and needs. Previous neurophysiological evidence provides support for the sharpening aspect: reward has been associated with firing of dopaminergic neurons [59,60], and dopamine has been shown to modulate tuning of prefrontal neurons and to sharpen their representations [61–63]. The prioritization aspect can be related to the expected value of control (EVC) theory [64] and reward-based models of the interaction of reward and cognitive control, essentially a cost-benefit trade-off [23]. Cognitive control is effortful, and an ideal system would therefore allocate it efficiently, with the general aim of maximizing expected utility. Despite the appeal of this account, our results did not provide experimental support for this view. At the behavioral level, though, we observed some evidence for such a benefit of reward. Accuracy on Target trials was higher in the reward compared to the no-reward condition. Additionally, while in the no-reward condition Target trials were less accurate than Low-conflict nontarget trials, in the reward condition there was no difference between them. We did not observe a similar benefit in reaction times, most likely because the time threshold that we used for reward trials reduced reaction times on all reward trials and may have masked an interaction with reward.

The visual categorization aspect of our task allowed us to investigate effects of reward on representation in LOC compared to the MD network, and in particular whether there is a specific effect of reward that is driven by visual differences. In the MD network, decoding levels were similar between conditions with the same visual category and different visual categories, and there was no modulation by reward in either. In contrast, the discrimination in LOC was driven by the visual categories, as expected in the visual cortex, with Targets and High-conflict nontargets being discriminable only when items belonged to two different visual categories. While it is widely agreed that the frontoparietal cortex exerts top-down effects on visual areas, there is no clear prediction as to whether any effects of reward should be observed in the visual cortex. Our results provide evidence that effects of reward were not present in LOC. Previous studies have shown differences in representations between pFs and LO [14,20,65]; however, our results were similar for both regions.

Although our primary question addressed the representation of task-related information during the integration of stimulus and cue, we also tested for an effect of reward in the cue epoch. The use of catch trials ensured that the cue and stimulus GLM regressors were appropriately decorrelated. Overall univariate activity across the MD network increased with reward during the cue epoch, possibly reflecting increased cognitive effort driven by the reward. However, we did not observe above-chance cue decoding, in contrast to the results reported by Etzel et al. [24]. One reason for this difference may be the design of our task. We used the category names as word cues, presented simultaneously with the no-reward/reward indication, with reward-trial cues carrying additional red pound signs. The choice of words as cues allowed for high task performance compared to abstract symbols, which are harder to learn, as confirmed in pilot experiments. This may have come at the expense of cue decodability, which was not the focus of the study. The visually salient reward signal presented simultaneously with the cues may also have reduced cue decoding levels, and the longer delay period in the study of Etzel et al. probably led to longer active maintenance of the task rule in working memory. Other reasons for the discrepancy may relate to the size of the effect and our ability to detect it. The effect of reward on cue decoding reported by Etzel et al. was observed across all ROIs but was inconsistent in individual ROIs, with only one region showing a significant difference between reward and no-reward conditions. Even for decoding across all ROIs, statistical significance was reached in one statistical test but was only marginal in another. It may be that the effect of reward on cue decoding is relatively small and therefore hard to detect. Among other factors, our ability to detect such an effect may depend on the regions chosen and on the number of trials and runs completed per participant; the contributions of many of these factors to decoding levels in the frontoparietal network are not yet well understood [58]. Enhancement of cue decoding following reward was also recently reported using EEG [41]. This facilitative effect was observed for ‘switch’ trials but not ‘stay’ trials; our design did not control for ‘switch’ and ‘stay’ trials, and it is possible that cue decoding would emerge if these could be separated. A further prominent difference between the EEG results and ours is spatial specificity, with much more widespread activity contributing to decoding in EEG data.
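The cross-validated decoding that these comparisons rest on can be illustrated with a minimal leave-one-run-out classification sketch. This is an assumption-laden toy in Python/scikit-learn (the study itself used The Decoding Toolbox in MATLAB [55]); the "data" here are random noise, so decoding should sit at chance:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(0)

# Toy data: 8 runs x 10 trials each, 50 voxels per pattern (hypothetical sizes)
n_runs, n_trials, n_voxels = 8, 10, 50
X = rng.standard_normal((n_runs * n_trials, n_voxels))      # voxel patterns
y = np.tile(np.repeat([0, 1], n_trials // 2), n_runs)       # two conditions
runs = np.repeat(np.arange(n_runs), n_trials)               # run labels

# Train on 7 runs, test on the held-out run; repeat for every run
clf = LinearSVC(max_iter=10000)
scores = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(scores.mean())  # near 0.5 (chance) for unstructured noise
```

Testing such a pipeline on label-free noise is also a quick sanity check that the cross-validation scheme does not leak information between runs.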

There is growing interest in the interaction between reward motivation and control processes, its neural correlates, and its implementation in computational models of reinforcement learning. Here, we report primarily null results for an effect of reward on task-related representations across the frontoparietal network; although inconclusive, these results may be useful for future studies that address similar questions. Reports on the effect of reward on task representations are so far limited, and with the movement towards open and replicable science it is important to establish the best possible pool of evidence for such effects. Our results will contribute to overall estimates of the extent of reward effects on neural representations. They may also serve as a starting point for future studies, particularly for power estimates, with respect to both the expected effect size and the aspects of experimental design that affect the detectability of reward effects. Different measures can be taken to further increase power within scanning time limits. These might include, but are not limited to, collecting more data per participant, possibly over more than one scanning session, and using other methods recently applied to study neural representations with fMRI, such as repetition suppression [66].
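As a rough illustration of how such power estimates might be computed, the following sketch uses the normal approximation to the sample size needed for a two-sided paired (one-sample) comparison. The effect sizes are arbitrary placeholders for illustration, not estimates derived from this study:

```python
from scipy.stats import norm

def n_for_paired_test(effect_size, alpha=0.05, power=0.8):
    """Normal-approximation sample size for a two-sided paired/one-sample test."""
    z_alpha = norm.ppf(1 - alpha / 2)   # critical value for two-sided alpha
    z_power = norm.ppf(power)           # quantile achieving the desired power
    return ((z_alpha + z_power) / effect_size) ** 2

# Hypothetical small-to-medium within-subject effect sizes (Cohen's d)
for d in (0.3, 0.5, 0.8):
    print(f"d = {d}: n ≈ {n_for_paired_test(d):.1f} participants")
```

A d of 0.3 requires roughly 87 participants under these assumptions, which illustrates why small reward effects on decoding may be hard to detect with typical fMRI sample sizes; exact t-distribution-based calculations give slightly larger numbers.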

In summary, we asked whether reward motivation leads to increased representation of task-related information across the frontoparietal network and the LOC. We found that information about behavioral status was present across the MD network. However, in contrast to our prediction, we did not find an increase in representation levels with reward. In the LOC, we observed representation of behavioral status that was driven by visual category information and was not modulated by reward. With growing interest in the interaction between control processes and reward motivation, our study provides important experimental evidence for the limited extent of effects of reward on task-relevant neural representations.

Acknowledgements

This work was funded by a Royal Society Dorothy Hodgkin Research Fellowship (UK) to Yaara Erez (DH130100). Sneha Shashidhara was supported by a scholarship from the Gates Cambridge Trust, Cambridge, UK. This work was also supported by the Medical Research Council (UK) Intramural Program MC-A060-5PQ10. We thank John Duncan and Daniel Mitchell for fruitful discussions and advice throughout the study.

The authors declare no competing financial interests.

References

  1. Freedman DJ, Riesenhuber M, Poggio T, Miller EK. Categorical Representation of Visual Stimuli in the Primate Prefrontal Cortex. Science 2001;291:312–6. https://doi.org/10.1126/science.291.5502.312.
  2. Mante V, Sussillo D, Shenoy KV, Newsome WT. Context-dependent computation by recurrent dynamics in prefrontal cortex. Nature 2013;503:78–84. https://doi.org/10.1038/nature12742.
  3. Kadohisa M, Petrov P, Stokes MG, Sigala N, Buckley MJ, Gaffan D, et al. Dynamic Construction of a Coherent Attentional State in a Prefrontal Cell Population. Neuron 2013;80:235–46. https://doi.org/10.1016/j.neuron.2013.07.041.
  4. Stokes MG, Kusunoki M, Sigala N, Nili H, Gaffan D, Duncan J. Dynamic Coding for Cognitive Control in Prefrontal Cortex. Neuron 2013;78:364–75. https://doi.org/10.1016/j.neuron.2013.01.039.
  5. Wallis JD, Anderson KC, Miller EK. Single neurons in prefrontal cortex encode abstract rules. Nature 2001;411:953–6. https://doi.org/10.1038/35082081.
  6. Kusunoki M, Sigala N, Nili H, Gaffan D, Duncan J. Target Detection by Opponent Coding in Monkey Prefrontal Cortex. J Cogn Neurosci 2010;22:751–60. https://doi.org/10.1162/jocn.2009.21216.
  7. Fedorenko E, Duncan J, Kanwisher N. Broad domain generality in focal regions of frontal and parietal cortex. Proc Natl Acad Sci USA 2013;110:16616–21. https://doi.org/10.1073/pnas.1315235110.
  8. Mitchell DJ, Bell AH, Buckley MJ, Mitchell AS, Sallet J, Duncan J. A Putative Multiple-Demand System in the Macaque Brain. J Neurosci 2016;36:8574–85. https://doi.org/10.1523/JNEUROSCI.0810-16.2016.
  9. Vergauwe E, Cowan N. Attending to items in working memory: evidence that refreshing and memory search are closely related. Psychon Bull Rev 2015;22:1001–6. https://doi.org/10.3758/s13423-014-0755-6.
  10. Cole MW, Ito T, Braver TS. The Behavioral Relevance of Task Information in Human Prefrontal Cortex. Cereb Cortex 2016;26:2497–505. https://doi.org/10.1093/cercor/bhv072.
  11. Duncan J. The multiple-demand (MD) system of the primate brain: mental programs for intelligent behaviour. Trends Cogn Sci 2010;14:172–9. https://doi.org/10.1016/j.tics.2010.01.004.
  12. Erez Y, Duncan J. Discrimination of Visual Categories Based on Behavioral Relevance in Widespread Regions of Frontoparietal Cortex. J Neurosci 2015;35:12383–93. https://doi.org/10.1523/JNEUROSCI.1134-15.2015.
  13. Muhle-Karbe PS, Duncan J, De Baene W, Mitchell DJ, Brass M. Neural Coding for Instruction-Based Task Sets in Human Frontoparietal and Visual Cortex. Cereb Cortex 2017;27:1891–905. https://doi.org/10.1093/cercor/bhw032.
  14. Li S, Ostwald D, Giese M, Kourtzi Z. Flexible Coding for Categorical Decisions in the Human Brain. J Neurosci 2007;27:12321–30. https://doi.org/10.1523/JNEUROSCI.3795-07.2007.
  15. Wisniewski D, Goschke T, Haynes J-D. Similar coding of freely chosen and externally cued intentions in a fronto-parietal network. Neuroimage 2016;134:450–8. https://doi.org/10.1016/j.neuroimage.2016.04.044.
  16. Woolgar A, Hampshire A, Thompson R, Duncan J. Adaptive coding of task-relevant information in human frontoparietal cortex. J Neurosci 2011;31:14592–9. https://doi.org/10.1523/JNEUROSCI.2616-11.2011.
  17. Woolgar A, Thompson R, Bor D, Duncan J. Multi-voxel coding of stimuli, rules, and responses in human frontoparietal cortex. Neuroimage 2011;56:744–52. https://doi.org/10.1016/j.neuroimage.2010.04.035.
  18. Woolgar A, Williams MA, Rich AN. Attention enhances multi-voxel representation of novel objects in frontal, parietal and visual cortices. Neuroimage 2015;109:429–37. https://doi.org/10.1016/j.neuroimage.2014.12.083.
  19. Nastase SA, Connolly AC, Oosterhof NN, Halchenko YO, Guntupalli JS, Visconti M, et al. Attention Selectively Reshapes the Geometry of Distributed Semantic Representation. Cereb Cortex 2017;27:4277–91. https://doi.org/10.1093/cercor/bhx138.
  20. Harel A, Kravitz DJ, Baker CI. Task context impacts visual object processing differentially across the cortex. Proc Natl Acad Sci USA 2014;111:E962–71. https://doi.org/10.1073/pnas.1312567111.
  21. Hebart MN, Bankson BB, Harel A, Baker CI, Cichy RM. The representational dynamics of task and object processing in humans. Elife 2018;7:e32816. https://doi.org/10.7554/eLife.32816.
  22. Bugatus L, Weiner KS, Grill-Spector K. Task alters category representations in prefrontal but not high-level visual cortex. Neuroimage 2017;155:437–49. https://doi.org/10.1016/j.neuroimage.2017.03.062.
  23. Botvinick M, Braver TS. Motivation and Cognitive Control: From Behavior to Neural Mechanism. Annu Rev Psychol 2015;66:80–113. https://doi.org/10.1146/annurev-psych-010814-015044.
  24. Etzel JA, Cole MW, Zacks JM, Kay KN, Braver TS. Reward Motivation Enhances Task Coding in Frontoparietal Cortex. Cereb Cortex 2016;26:1647–59. https://doi.org/10.1093/cercor/bhu327.
  25. Simon HA. Motivational and emotional controls of cognition. Psychol Rev 1967;74:29–39.
  26. Kruglanski AW, Shah JY, Fishbach A, Friedman R, Chun WY, Sleeth-Keppler D. A theory of goal systems. Adv Exp Soc Psychol 2002;34:331–78. https://doi.org/10.1016/S0065-2601(02)80008-9.
  27. Padmala S, Pessoa L. Interactions between cognition and motivation during response inhibition. Neuropsychologia 2010;48:558–65. https://doi.org/10.1016/j.neuropsychologia.2009.10.017.
  28. Padmala S, Pessoa L. Reward Reduces Conflict by Enhancing Attentional Control and Biasing Visual Cortical Processing. J Cogn Neurosci 2011;23:3419–32. https://doi.org/10.1162/jocn_a_00011.
  29. Pochon JB, Levy R, Fossati P, Lehericy S, Poline JB, Pillon B, et al. The neural system that bridges reward and cognition in humans: An fMRI study. Proc Natl Acad Sci 2002;99:5669–74. https://doi.org/10.1073/pnas.082111099.
  30. Taylor SF, Welsh RC, Wager TD, Luan Phan K, Fitzgerald KD, Gehring WJ. A functional neuroimaging study of motivation and executive function. Neuroimage 2004;21:1045–54. https://doi.org/10.1016/j.neuroimage.2003.10.032.
  31. Mohanty A, Gitelman DR, Small DM, Mesulam MM. The Spatial Attention Network Interacts with Limbic and Monoaminergic Systems to Modulate Motivation-Induced Attention Shifts. Cereb Cortex 2008;18:2604–13. https://doi.org/10.1093/cercor/bhn021.
  32. Krebs RM, Boehler CN, Roberts KC, Song AW, Woldorff M. The involvement of the dopaminergic midbrain and cortico-striatal-thalamic circuits in the integration of reward prospect and attentional task demands. Cereb Cortex 2012;22:607–15. https://doi.org/10.1093/cercor/bhr134.
  33. Shashidhara S, Mitchell DJ, Erez Y, Duncan J. Progressive Recruitment of the Frontoparietal Multiple-demand System with Increased Task Complexity, Time Pressure, and Reward. J Cogn Neurosci 2019;31:1617–30. https://doi.org/10.1162/jocn_a_01440.
  34. Wallace AFC. Plans and the Structure of Behavior. George A. Miller, Eugene Galanter, Karl H. Pribram. Am Anthropol 1960;62:1065–7. https://doi.org/10.1525/aa.1960.62.6.02a00190.
  35. Pessoa L. How do emotion and motivation direct executive control? Trends Cogn Sci 2009;13:160–6. https://doi.org/10.1016/j.tics.2009.01.006.
  36. Braver TS. The variable nature of cognitive control: a dual mechanisms framework. Trends Cogn Sci 2012;16:106–13. https://doi.org/10.1016/j.tics.2011.12.010.
  37. Chiew KS, Braver TS. Dissociable influences of reward motivation and positive emotion on cognitive control. Cogn Affect Behav Neurosci 2014;14:509–29. https://doi.org/10.3758/s13415-014-0280-0.
  38. Kennerley SW, Wallis JD. Reward-dependent modulation of working memory in lateral prefrontal cortex. J Neurosci 2009;29:3259–70. https://doi.org/10.1523/JNEUROSCI.5353-08.2009.
  39. Leon MI, Shadlen MN. Effect of Expected Reward Magnitude on the Response of Neurons in the Dorsolateral Prefrontal Cortex of the Macaque. Neuron 1999;24:415–25. https://doi.org/10.1016/S0896-6273(00)80854-5.
  40. Watanabe M. Reward expectancy in primate prefrontal neurons. Nature 1996;382:629–32. https://doi.org/10.1038/382629a0.
  41. Hall-McMaster S, Muhle-Karbe PS, Myers NE, Stokes MG. Reward Boosts Neural Coding of Task Rules to Optimize Cognitive Flexibility. J Neurosci 2019;39:8549–61. https://doi.org/10.1523/JNEUROSCI.0631-19.2019.
  42. Stürmer B, Nigbur R, Schacht A, Sommer W. Reward and punishment effects on error processing and conflict control. Front Psychol 2011;2:335. https://doi.org/10.3389/fpsyg.2011.00335.
  43. Krebs RM, Boehler CN, Appelbaum LG, Woldorff M. Reward Associations Reduce Behavioral Interference by Changing the Temporal Dynamics of Conflict Processing. PLoS One 2013;8:e53894. https://doi.org/10.1371/journal.pone.0053894.
  44. Brainard DH. The Psychophysics Toolbox. Spat Vis 1997;10:433–6.
  45. Feinberg DA, Moeller S, Smith SM, Auerbach E, Ramanna S, Gunther M, et al. Multiplexed echo planar imaging for sub-second whole brain FMRI and fast diffusion imaging. PLoS One 2010;5:e15710. https://doi.org/10.1371/journal.pone.0015710.
  46. van der Kouwe AJW, Benner T, Salat DH, Fischl B. Brain morphometry with multiecho MPRAGE. Neuroimage 2008;40:559–69. https://doi.org/10.1016/j.neuroimage.2007.12.025.
  47. Cusack R, Vicente-Grabovetsky A, Mitchell DJ, Wild CJ, Auer T, Linke AC, et al. Automatic analysis (aa): efficient neuroimaging workflows and parallel processing using Matlab and XML. Front Neuroinform 2014;8:90. https://doi.org/10.3389/fninf.2014.00090.
  48. Woolgar A, Golland P, Bode S. Coping with confounds in multivoxel pattern analysis: What should we do about reaction time differences? A comment on Todd, Nystrom & Cohen 2013. Neuroimage 2014;98:506–12. https://doi.org/10.1016/j.neuroimage.2014.04.059.
  49. Brett M, Anton J-L, Valabregue R, Poline J-B. Region of interest analysis using an SPM toolbox. Abstract presented at the 8th International Conference on Functional Mapping of the Human Brain, June 2–6, 2002, Sendai, Japan. Neuroimage 2002. https://doi.org/10.1016/S1053-8119(02)90010-8.
  50. Malach R, Reppas JB, Benson RR, Kwong KK, Jiang H, Kennedy WA, et al. Object-related activity revealed by functional magnetic resonance imaging in human occipital cortex. Proc Natl Acad Sci USA 1995;92:8135–9. https://doi.org/10.1073/pnas.92.18.8135.
  51. Erez Y, Yovel G. Clutter Modulates the Representation of Target Objects in the Human Occipitotemporal Cortex. J Cogn Neurosci 2014;26:490–500. https://doi.org/10.1162/jocn_a_00505.
  52. MacEvoy SP, Epstein RA. Constructing scenes from objects in human occipitotemporal cortex. Nat Neurosci 2011;14:1323–9. https://doi.org/10.1038/nn.2903.
  53. Fedorenko E, Hsieh P-J, Nieto-Castañón A, Whitfield-Gabrieli S, Kanwisher N. New method for fMRI investigations of language: defining ROIs functionally in individual subjects. J Neurophysiol 2010;104:1177–94. https://doi.org/10.1152/jn.00032.2010.
  54. Shashidhara S, Spronkers FS, Erez Y. Localizing the “multiple-demand” frontoparietal network in individual subjects. BioRxiv 2019:661934. https://doi.org/10.1101/661934.
  55. Hebart MN, Gorgen K, Haynes J-D. The Decoding Toolbox (TDT): a versatile software package for multivariate analyses of functional imaging data. Front Neuroinform 2015;8:88. https://doi.org/10.3389/fninf.2014.00088.
  56. Kriegeskorte N, Goebel R, Bandettini P. Information-based functional brain mapping. Proc Natl Acad Sci USA 2006;103:3863–8. https://doi.org/10.1073/pnas.0600244103.
  57. Dixon ML, Christoff K. The Decision to Engage Cognitive Control Is Driven by Expected Reward-Value: Neural and Behavioral Evidence. PLoS One 2012;7:e51637. https://doi.org/10.1371/journal.pone.0051637.
  58. Bhandari A, Gagne C, Badre D. Just above Chance: Is It Harder to Decode Information from Prefrontal Cortex Hemodynamic Activity Patterns? J Cogn Neurosci 2018;30:1473–98. https://doi.org/10.1162/jocn_a_01291.
  59. Schultz W, Dayan P, Montague PR. A neural substrate of prediction and reward. Science 1997;275:1593–9. https://doi.org/10.1126/science.275.5306.1593.
  60. Bayer HM, Glimcher PW. Midbrain Dopamine Neurons Encode a Quantitative Reward Prediction Error Signal. Neuron 2005;47:129–41. https://doi.org/10.1016/j.neuron.2005.05.020.
  61. Vijayraghavan S, Wang M, Birnbaum SG, Williams GV, Arnsten AFT. Inverted-U dopamine D1 receptor actions on prefrontal neurons engaged in working memory. Nat Neurosci 2007;10:376–84. https://doi.org/10.1038/nn1846.
  62. Thurley K, Senn W, Lüscher H-R. Dopamine Increases the Gain of the Input-Output Response of Rat Prefrontal Pyramidal Neurons. J Neurophysiol 2008;99:2985–97. https://doi.org/10.1152/jn.01098.2007.
  63. Ott T, Nieder A. Dopamine D2 Receptors Enhance Population Dynamics in Primate Prefrontal Working Memory Circuits. Cereb Cortex 2016;27:4423–35. https://doi.org/10.1093/cercor/bhw244.
  64. Shenhav A, Botvinick MM, Cohen JD. The expected value of control: An integrative theory of anterior cingulate cortex function. Neuron 2013;79:217–40. https://doi.org/10.1016/j.neuron.2013.07.007.
  65. Jiang X, Bradley E, Rini RA, Zeffiro T, Vanmeter J, Riesenhuber M. Categorization training results in shape- and category-selective human neural plasticity. Neuron 2007;53:891–903. https://doi.org/10.1016/j.neuron.2007.02.015.
  66. Garvert MM, Moutoussis M, Kurth-Nelson Z, Behrens TEJ, Dolan RJ. Learning-Induced plasticity in medial prefrontal cortex predicts preference malleability. Neuron 2015;85:418–28. https://doi.org/10.1016/j.neuron.2014.12.033.
Posted March 31, 2020.