Decoding of EEG signals reveals non-uniformities in the neural geometry of colour

The idea of colour opponency maintains that colour vision arises through the comparison of two chromatic mechanisms, red versus green (RG) and yellow versus blue (YB). The four unique hues, red, green, blue, and yellow, are assumed to appear at the null points of these two chromatic systems. However, whether unique hues have a distinct signature that can be reliably discerned in neural activity is still an open question. Here we hypothesise that, if unique hues represent a tractable cortical state, they should elicit more robust activity compared to non-unique hues. We use a spatiotemporal decoding approach to reconstruct an activation space for a set of unique and intermediate hues across a range of luminance values. We show that electroencephalographic (EEG) responses carry robust information about isoluminant unique hues within a 100-300 ms window from stimulus onset. Decoding is possible in both passive and active viewing tasks, but is compromised when concurrent high luminance contrast is added to the colour signals. The efficiency of hue decoding is not entirely predicted by the mutual distance of the hues in a nominally uniform perceptual colour space. Instead, the encoding space shows pivotal non-uniformities which suggest that anisotropies in neurometric hue-spaces are likely to represent perceptual unique hues. Furthermore, the neural code for hue temporally coincides with the neural code for luminance contrast, thus explaining why potential neural correlates of unique hues have remained so elusive until now.


Introduction
The idea of colour opponency maintains that colour vision arises through the comparison of two chromatic mechanisms, red versus green (RG) and blue versus yellow (BY). The four unique hues, red, green, blue, and yellow, are assumed to appear at the null points of these two chromatic systems (Hering, 1920; Jameson and Hurvich, 1964; De Valois and De Valois, 1993). Colour vision starts in the retina, where light is absorbed in receptors (long-, medium-, and short-wavelength sensitive cone receptors: L, M, S). Small bistratified ganglion cells that receive S-(M+L) cone input have been postulated to be the retinal origin of the BY channel, while midget ganglion cells that take the difference between the L and M cone outputs were believed to be the retinal origin of the RG channel (Lee et al., 2010). However, it has now been confirmed that the chromatic tuning of behaviourally characterised opponent channels differs from these early cone-opponent mechanisms; hence another transformation of chromatic signals must take place between the Lateral Geniculate Nucleus (LGN) and the primary or extrastriate visual cortex (De Valois and De Valois, 1993; Wuerger et al., 2005).

While some neuroimaging studies have attempted to identify a neural basis for unique hues, their results remain controversial. Stoughton and Conway (2008) reported neuronal clusters which were preferentially tuned to unique hues in the posterior inferior temporal (PIT) cortex of macaques. However, their findings have been challenged on the grounds that the study was not fully controlled for low-level differences in neuronal tuning, which could provide a more parsimonious explanation for their results (Conway and Stoughton, 2009; Mollon, 2009; Bohon et al., 2016). Similarly, Forder

and UG stimuli were 14.4° and 133.4° respectively.
Orange and turquoise stimuli were chosen such that orange (hue angle 41.5°) was the intermediate hue between UR and unique yellow, and turquoise (hue angle 185.1°) was the intermediate between UG and unique blue. All four stimuli were equally saturated in the CIE 1976 UCS plane. Three stimulus luminance levels were used: nominal isoluminance (24 cd/m²), 45% Weber contrast (34.8 cd/m²) and 90% Weber contrast (45.6 cd/m²). In Experiment 2, stimuli consisted of participants' subjective settings for two unique hues (yellow and green) and two intermediate hues (orange and turquoise), together with hues situated 10° to the left and right of the subjective settings (in CIELCh colour space). Thus, for each observer, we effectively had four clusters of colours corresponding to the hues orange, yellow, green, and turquoise, with each cluster consisting of the observer's setting for that hue along with two flanking colours ±10° from the setting (e.g., unique yellow, a yellow 10° counter-clockwise and a yellow 10° clockwise). All colours were nominally isoluminant with the background (CIE 1931 xyY coordinates: 0.3127, 0.3290, 22.93 cd/m²).

EEG data were recorded during a shape discrimination task. The purpose of the task was to engage participants' attention in a stimulus dimension orthogonal to colour, i.e., shape. The stimuli consisted of uniformly coloured shapes shown against a grey background. Each trial began with the appearance of a fixation cross, followed by a 2-degree circular stimulus (passive viewing event) which changed shape (shape change event) into either a diamond or a square (Figure 1). The passive viewing event occurred 700±200 ms after the appearance of the fixation cross, and the shape change event occurred 800-1500 ms after the passive viewing event.

Figure 1: Experimental design.
Each trial started with the appearance of a fixation cross, which was followed by the presentation of a circular uniformly-coloured stimulus at a random offset of 700±200 ms. At a random time-point 800-1500 ms after stimulus onset, the shape of the stimulus changed from circular to either square or diamond. Participants were instructed to discriminate the final shape via a button press as quickly and as accurately as they could. Each trial ended 2 seconds after stimulus onset. Two events were defined during each trial: a passive viewing event defined by the appearance of the stimulus, and a shape change event defined by the change in stimulus shape.
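The trial timeline described above can be sketched as a small event-scheduling routine. This is an illustrative sketch only (the function and variable names are our own, not taken from the experiment code), showing how the jittered onsets would be drawn:

```python
import random

def schedule_trial(rng):
    """Return (stimulus_onset_ms, shape_change_ms, trial_end_ms) relative to the
    appearance of the fixation cross, following the jitter described in the text:
    stimulus onset at 700 +/- 200 ms, shape change 800-1500 ms later, and trial
    end 2 s after stimulus onset."""
    onset = 700.0 + rng.uniform(-200.0, 200.0)          # passive viewing event
    shape_change = onset + rng.uniform(800.0, 1500.0)   # shape change event
    return onset, shape_change, onset + 2000.0

rng = random.Random(0)
trials = [schedule_trial(rng) for _ in range(1000)]
```

Drawing both offsets from uniform distributions makes the shape-change moment unpredictable, which keeps the participant attending to the stimulus throughout the trial.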

Participants identified the final shape of the stimulus using the left or the right button on a button box. The assignment of button to target shape was counterbalanced between participants. The conditions were randomly intermixed, with a different order for each participant. The entire experiment was conducted in a sound-attenuated, electrically shielded chamber, with the screen being the only source of light. In addition to EEG recordings (described below), two other task-related variables were measured: task accuracy and reaction time. For each colour and shape combination, we had 30 trials. As diamond and square shape-change trials were subsequently collapsed together, this resulted in 60 trials per colour and 720 trials in total, presented in random order and divided into 10 blocks. This was the same for both experiments. In addition, Experiment 1 was preceded by a practice of 24 trials, while Experiment 2 was preceded by a practice of 16 trials. The EEG task took approximately 50 minutes to complete.

After the completion of the EEG experiment, participants rated each colour on a 9-point Likert scale for its representativeness of its category. This took approximately 5 minutes. Participants were asked to imagine the perfect representative for a colour category and rate how representative a sample was of that category, with 1 being the least representative and 9 being the most representative. All colours were displayed simultaneously on the screen during this procedure and remained on the screen until the participants completed the task. Colours were presented on the computer screen as a set of 4 rows of squares that showed the three luminance (Experiment 1) or hue (Experiment 2) values for each colour.

There were also two additional measures, specific to each experiment.
In Experiment 1, for each participant, heterochromatic flicker photometry (HCFP) at 20 Hz (Walsh, 1958) was used to establish the departure from isoluminance for all colours. The task required the participant to adjust the luminance of the colour until perceived flicker was minimised. Participants performed 8 trials per colour; the step size was 0.5 cd/m² and the flicker started from a randomly determined point that could be five steps above or below nominal isoluminance. These measurements were conducted to evaluate any individual differences in the amount of luminance contrast effectively present in nominally isoluminant stimuli. Rabin et al. (1994)

10 minutes to complete. The first 6 participants performed the task without context. For the following 13 participants, we also presented a colour palette consisting of 19 squares 1° in size that ranged ±45° around the initial hue value, in steps of 5° of hue angle, positioned at

MATLAB. FASTER is an automated procedure that detects contaminated trials and noisy channels that need interpolation (either in the entire EEG recording or on any single trial) by calculating statistical parameters of the data and using a Z score of ±3 as the metric that defines contaminated data. ADJUST is an automated procedure that operates on maps resulting from independent component analysis of EEG data, using properties of these components to label them as eye blinks, vertical or horizontal eye movements, or channel discontinuities so that they can be subtracted from the recording. We first rejected trials with global artifacts using FASTER, then ran an independent component analysis and applied ADJUST to the obtained decompositions, and finally conducted channel interpolation with FASTER. In addition, any trials with eye movements were rejected based on ±25 μV deviations from the horizontal electrooculogram in the uncorrected data.
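The ±3 Z-score criterion used by FASTER can be illustrated with a minimal sketch. This is a toy illustration of the statistic only, not the FASTER toolbox itself; the metric (per-trial variance of a channel) and the numbers are hypothetical:

```python
import statistics

def contaminated(values, z_thresh=3.0):
    """Flag entries whose Z score (relative to the sample mean and SD)
    exceeds the threshold, in the spirit of FASTER's statistical criterion."""
    mu = statistics.fmean(values)
    sd = statistics.stdev(values)
    return [abs(v - mu) / sd > z_thresh for v in values]

# Hypothetical per-trial variance of one channel across 40 trials;
# the final trial is a gross outlier and should be flagged.
trial_variance = [1.0] * 20 + [0.9] * 10 + [1.1] * 9 + [12.0]
flags = contaminated(trial_variance)
```

Note that with very few trials a single outlier cannot reach |Z| > 3 (the maximum attainable Z score is bounded by (n-1)/√n), which is one reason such criteria are applied across many trials and channels.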
Blinks were rejected using a thresholding procedure similar to FASTER (Junghöfer et al., 2000). Incorrect and rejected trials amounted to a very small proportion of the data: in Experiment 1, between 1% and 13% of total trials, and in Experiment 2, between 3% and 17% of total trials.

In LDA, the likelihood term is estimated by a multivariate Gaussian density function:

$$ p(\mathbf{x} \mid c) = (2\pi)^{-k/2}\, |\Sigma|^{-1/2} \exp\!\left( -\tfrac{1}{2} (\mathbf{x} - \boldsymbol{\mu}_c)^\top \Sigma^{-1} (\mathbf{x} - \boldsymbol{\mu}_c) \right) $$

Here, $k$ is the number of electrodes, $\boldsymbol{\mu}_c$ is the mean EEG activity for the label $c$, and $\Sigma$ is the covariance matrix of the activity. The log-posterior objective function $\delta_c(\mathbf{x})$ for the label $c$ can thus be written as:

$$ \delta_c(\mathbf{x}) = \mathbf{x}^\top \Sigma^{-1} \boldsymbol{\mu}_c - \tfrac{1}{2} \boldsymbol{\mu}_c^\top \Sigma^{-1} \boldsymbol{\mu}_c + \log \pi_c $$

where $\pi_c$ is the prior probability of the label $c$. Data for each observer were modelled separately, and the whole process was repeated 10 times. In each repetition for any given observer, the data were split into 5 folds containing roughly equal

The time-series of confusion matrices estimated by tECOC models were used to calculate pairwise dissimilarities between stimulus classes. Given a confusion matrix $C$, where each element $C_{ij}$ denotes the probability of the stimulus type $i$ being labelled as $j$ by the model, first, a label-normalised matrix $M$ was constructed such that $M_{ij} = C_{ij} / C_{ii}$. This asymmetric measure was then used to calculate a symmetric dissimilarity tensor $\Delta^{\mathrm{tECOC}}$ given by

$$ \Delta^{\mathrm{tECOC}}_{ij} = 1 - \sqrt{M_{ij}\, M_{ji}} $$

Here, the geometric mean across stimulus pairs is used to generalise distances in representational space (Shepard, 1958; Kaneshiro et al., 2015). A similar estimation was also made for the EEG data using a more traditional dissimilarity metric given by

$$ \Delta^{\mathrm{corr}}_{ij} = 1 - r_{ij} $$

Here, $r$ is the Pearson-correlation matrix for the EEG responses elicited by the stimuli. Finally, pairwise differences in CIELAB hue angles of the stimuli were used to estimate a perceptual dissimilarity matrix. The perceptual dissimilarity was compared to $\Delta^{\mathrm{tECOC}}$ and $\Delta^{\mathrm{corr}}$ using rank-correlation estimates (Kendall's $\tau$ coefficient).
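Under the equal-covariance Gaussian model used in LDA, classification reduces to evaluating the linear discriminant for every label and taking the maximum. A minimal NumPy sketch on synthetic data (an illustration of the discriminant only; the actual analysis used the tECOC toolbox):

```python
import numpy as np

def lda_fit(X, y):
    """Fit class means, shared (pooled) covariance and log-priors for LDA.
    X: (n_trials, n_electrodes) EEG features; y: integer labels."""
    labels = np.unique(y)
    means = np.array([X[y == c].mean(axis=0) for c in labels])
    # Pooled within-class covariance (shared Sigma), with a small ridge for stability
    resid = np.concatenate([X[y == c] - means[i] for i, c in enumerate(labels)])
    sigma = resid.T @ resid / len(X) + 1e-6 * np.eye(X.shape[1])
    log_priors = np.log([np.mean(y == c) for c in labels])
    return labels, means, np.linalg.inv(sigma), log_priors

def lda_predict(X, labels, means, sigma_inv, log_priors):
    """Evaluate the log-posterior objective delta_c(x) for every label
    and return the label with the highest value."""
    delta = (X @ sigma_inv @ means.T
             - 0.5 * np.sum((means @ sigma_inv) * means, axis=1)
             + log_priors)
    return labels[np.argmax(delta, axis=1)]

# Two synthetic "hue" classes with well-separated mean EEG patterns
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, (200, 8)) + 2.0,
               rng.normal(0.0, 1.0, (200, 8)) - 2.0])
y = np.array([0] * 200 + [1] * 200)
pred = lda_predict(X, *lda_fit(X, y))
```

In the time-resolved setting described above, such a classifier would be fitted independently within each sliding time-window of the epoch.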
We measured EEG signals in a cohort of 20 participants while they viewed coloured stimuli (coloured shapes on a grey background) consisting of two unique hues, unique green and unique red, and two non-unique hues, orange and turquoise. In each trial, a coloured disc changed shape to a diamond or a square at a random time-point 800-1500 ms after stimulus onset (Figure 1). The participant's task was to identify the target shape. The stimuli were either isoluminant with the background (0% luminance contrast), or presented at 45% or 90% luminance contrast. This gave us a dataset of EEG signals labelled both in hue and luminance contrast.

The task was easy, resulting in high overall accuracy (95% ± 1% SE, see Supplementary Figure

After the completion of the EEG experiment, participants rated each colour on a 9-point Likert scale for its representativeness of its category (red, orange, green or turquoise). The average ratings and their SEs were as follows (see Supplementary Figure S4C): isoluminant red 4.35 ± 0.48; red at 45% luminance 2.85 ± 0.32; red at 90% luminance 1.90 ± 0.23; isoluminant green 7.70 ± 0.23; green at 45% luminance 6.10 ± 0.35; green at 90% luminance 5.55 ± 0.43; isoluminant orange 3.75 ± 0.48; orange at 45% luminance 4.15 ± 0.43; orange at 90% luminance 3.60 ± 0.32; isoluminant turquoise 6.00 ± 0.47; turquoise at 45% luminance 6.75 ± 0.38; turquoise at 90% luminance 6.40 ± 0.50.

Unique hues can be robustly decoded from EEG signals

First, we asked whether the measured EEG waveforms contain consistent, discernible information about the hue of the stimulus. To do this, we trained tECOC models for each observer using only EEG responses to isoluminant stimuli, as this ensured minimal interference from luminance-contrast signals. In the first instance, we performed this analysis for epochs defined by the passive viewing event. We found that within a 100-300 ms window after stimulus onset, both unique hues could be decoded with above-chance accuracy (Figure 2A). The non-unique hues, on the other hand, showed a much lower score (Figure 2B). This pattern is stable over a range of tECOC time-windows (Supplementary Figure S1A) and also holds when the entire set of 64 electrodes is used (Supplementary Figure S1B). The presence of signal on all electrodes is not surprising: unlike functional magnetic resonance imaging (fMRI), EEG does not detect localised physiological activity in a volume, but instead picks up a linear superposition of signals from a range of physiological sources.
Thus, the signal is present to some degree at all sensors, with its amplitude (and thus also its signal-to-noise ratio) dependent on the position of the sensor relative to the source(s) (see, e.g., Maris, 2012 for a discussion of the so-called common pick-up problem).

For each participant, we also measured subjective isoluminance for each stimulus colour (see Methods for details). While one participant did not understand the task, the means, SEs and ranges of the settings from the remaining 19 participants were as follows: red 0.14 ± 0.57 cd/m² (-6 to 5.25 cd/m²); green -1.09 ± 0.49 cd/m² (-6.58 to 1 cd/m²); orange 0.08 ± 0.56 cd/m² (-4.34 to 6.50 cd/m²); turquoise -0.05 ± 0.65 cd/m² (-7.08 to 7.83 cd/m²).

Model accuracy quantifies the ability of the model to correctly identify the hue of a stimulus when presented with the corresponding EEG response. Theoretically, it is the prior-weighted sum of hit rates (true positive rates) for all labels, and corresponds to the diagonal of the confusion matrix. However, a deeper insight into model performance can be obtained when, in addition to the detection accuracy for a given input class, one also considers the probability of misclassification of inputs from this class. To investigate this, we estimated the off-diagonal elements of the confusion matrix. This allowed us to infer which classes are most likely to be confused by the model, thus providing a means of understanding how similar the information contained in EEG signals corresponding to different hues is. The subpanels of Figure 2C (see also Supplementary Video V1) show the probability (over time) with which the model assigns each of the four hue labels to EEG responses elicited by a given input hue (the input hue is labelled above each subpanel). Thus, each subpanel in Figure 2C shows one row of the confusion matrix.
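The relationship between the confusion matrix, overall accuracy and misclassification probabilities can be made concrete with a toy example. The numbers below are hypothetical, chosen only to mimic the proximal-pair structure discussed in the text:

```python
import numpy as np

# Toy confusion matrix: rows = input hue, columns = assigned label,
# each row summing to 1 (P(label | input)). Order: red, orange, green, turquoise.
C = np.array([
    [0.55, 0.25, 0.10, 0.10],   # red
    [0.35, 0.40, 0.10, 0.15],   # orange
    [0.10, 0.10, 0.55, 0.25],   # green
    [0.10, 0.15, 0.35, 0.40],   # turquoise
])

hit_rates = np.diag(C)                 # per-class true positive rates
priors = np.full(4, 0.25)              # equal numbers of trials per hue
accuracy = float(priors @ hit_rates)   # prior-weighted sum of hit rates

# Most likely *incorrect* label for each input hue (off-diagonal maximum)
off_diag = C - np.diag(np.diag(C))
most_confused_with = off_diag.argmax(axis=1)
```

In this toy matrix, each hue is confused mostly with its proximal partner (red with orange, green with turquoise), and the off-diagonal entries carry information that the scalar accuracy alone discards.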
Within a 100-300 ms window, each input hue is only confused with its proximal pair (red and orange, and green and turquoise), while the prediction probabilities for non-proximal hues are below chance. This is also reflected in the checkerboard-like pattern observed in Supplementary Video V1. Furthermore, the model is likely to label EEG responses to non-unique hues (orange and turquoise) as being elicited by their proximal unique hues (red and green respectively) with almost equal probability, but not vice versa. Once again, this suggests that EEG signals between 100-300 ms carry more robust representations of unique hues compared to non-unique hues.

The passive viewing at trial outset was followed by a change in the shape of the stimulus from a circle to either a square or a diamond at a random time-point 800-1500 ms from stimulus onset (see Figure 1). The colour of the stimulus was task-irrelevant, and the hypothesis here was that since the observer would be attending to the stimulus shape, the EEG signal would be qualitatively different between the passive and shape-change segments. This would, in turn, allow us to test whether this difference is reflected in the ability of the model to classify hue information in the signal. It has been argued that colour-related activations should still be observed as long as the hue remains unattended and task-irrelevant (Forder et al., 2017b). To test this hypothesis, we trained tECOC models on the epochs defined by the shape-change event. As expected, the two segments were found to elicit activity which differed significantly both in the sequence of ERP peaks and in topography (Figure 3A). However, despite this difference, we were able to perform hue detection during the shape-change segment with an accuracy very similar to the passive viewing segment, both in terms of peak decoding score and its time-course (Figure 3B). This suggests that the temporal structure of the hue-related information in EEG signals is indeed robust to changes in the task (as long as the hue itself remains task-irrelevant), and can be extracted even when the observer is engaged in a concurrent shape discrimination task.

Luminance signals interfere with chromatic information in occipital ERPs
Next, we investigated whether hue identity could still be decoded when both chromatic and luminance information was present in the EEG signal. A chromatic-driven ERP is characterised by a robust negative deflection at about 120-220 ms after stimulus onset (Murray et al., 1987; Berninger et al., 1989; Tobimatsu et al., 1996), but this response is significantly altered by the addition of luminance contrast (Rabin et al., 1994). According to normative work by Rabin et al. (1994), while observer isoluminance drives ERPs in a manner closely resembling nominal isoluminance, any substantial change in luminance contrast results in highly dissimilar waveforms. To assess whether this would also impact classifier performance, we evaluated how decoding was affected when the model was trained on inputs which differ not only in hue but also in luminance contrast. We trained tECOC classifiers for each observer using 12 labels, corresponding to three luminance-contrast levels for each of the four hues. In Figure 4, we present the performance of our model in a manner similar to Figure 2C. Each panel is one row of the confusion matrix, i.e., given the EEG signals for an input stimulus, it shows the prediction probabilities for all 12 labels. The hue of the input is denoted by the row (labelled in the right margin) and its luminance contrast by the column (labelled on top). The same colours as in Figure 2C are used to denote the four hues. In addition, for each hue, we also use two additional brightness levels to represent the two luminance-contrast ratios (thus, for a given hue, the isoluminant stimulus is the least bright, 45% luminance contrast is brighter, and 90% luminance contrast is the brightest).
We find that while isoluminant signals can indeed be classified 100-300 ms after stimulus onset (left column), the addition of luminance information disrupts model performance for all hues (middle and right columns). Furthermore, we find that the classifier does not confuse isoluminant and non-isoluminant stimuli. This suggests that, in contrast to a change in stimulus shape where the temporal structure of hue-related information was preserved, the addition of luminance contrast to the stimulus disrupts the temporal patterns which encode hue information.

To characterise the effect of luminance, we trained a model using only the luminance labels of EEG signals (i.e., we used three labels corresponding to the three contrast levels). We found that all luminance conditions (Figure 5A) can be decoded to above-chance levels. An examination of the misclassification patterns of the model (Figure 5B)

Stimuli with 45% luminance contrast have an above-chance probability of being misclassified as 90% luminance contrast (and vice versa). However, this effect is not strictly symmetric, with 90% luminance contrast being easier to detect compared to 45% contrast. Thus, under non-isoluminant conditions, not only are the hue-driven patterns difficult to detect, but they seem to be progressively overridden by luminance-contrast-driven patterns. To ensure that this effect was driven by luminance, and not by the chromatic content of the stimuli, we set up separate models for each hue, and were able to confirm that the effect was indeed independent of the chromatic content of the stimulus.

decoding of hue or luminance polarity from MEG signals and found that generalising luminance polarity across hue works better than generalising hue across polarity. This is consistent with our own findings that decoding of hue is strongly affected by the addition of luminance contrast.
Unlike these studies, where only stimuli that combine colour and luminance contrast were used, we also included stimuli that were isoluminant with the background. We found that decoding of hue from such nominally isoluminant stimuli is much more efficient. While it appears that decoding was superior for unique compared to intermediate hues, Hermann et al. (2021) also report higher decoding efficiency for red and green compared to orange and blue, although such an asymmetry is not present in the decoding study by Hajonides et al. (2021). Hermann and colleagues suggest that poorer decoding for orange and blue may be due to their alignment with the daylight locus, causing a less consistent signal in the presence of luminance. To disambiguate whether unique or intermediate hue status drives a more robust neural signal irrespective of daylight-locus alignment, it would be necessary to use a unique hue that is also more aligned with the daylight locus, such as yellow or blue. Thus, in our next experiment, we decided to replace red with yellow, which would allow us to maintain the same proximity structure (red/orange, green/turquoise) but eliminate the potential daylight-locus confound.

Finally, the stimulus set in Experiment 1 was designed to investigate whether unique hues have more robust EEG representations. To achieve this, we chose unique and non-unique hues that were maximally distant in a perceptual space: red and green, orange and turquoise (see details of the stimulus set in Methods). As already reported by Rosenthal

In Experiment 1 we showed superior decoding performance for unique hues compared to intermediate hues, suggesting a robust neural representation for the former. In Experiment 2, this hypothesis was further critically tested by using small and large hue differences.
Our aim was to re-examine decoding of nominally isoluminant unique and intermediate hues with a slightly modified hue set (see Interim Discussion above) and to extend it by decoding local clusters of stimuli around each of these hues. First, we measured individual settings for unique (yellow and green) and non-unique (orange and turquoise) hues for each observer. Next, we made EEG measurements in a task analogous to Experiment 1 using, for each observer, a stimulus set consisting of their subjective settings for the four hues (denoted as the = configuration), and two sets of stimuli generated by rotating the subjective settings by ±10° in CIELAB colour space (denoted as the + and - configurations respectively), leading to a total of 12 stimuli (4 hue clusters and 3 rotational configurations, see Supplementary Figure S5D). The individual hue settings were as follows (means and SEs): yellow 101° ± 2°, orange 61° ± 3°, green 153° ± 3° and turquoise 198° ± 3°.

In the shape discrimination task, grand mean accuracy was 96% ± 1% SE (see Supplementary Figure S5C).
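The ±10° rotations of a subjective hue setting amount to rotating the hue angle at fixed lightness and chroma in cylindrical CIELCh coordinates and converting back to CIELAB (a* = C·cos h, b* = C·sin h). A minimal sketch, using an illustrative lightness and chroma rather than the actual stimulus coordinates:

```python
import math

def rotate_hue(L, C, h_deg, delta_deg):
    """Rotate a CIELCh colour by delta_deg of hue angle and return
    the corresponding CIELAB coordinates (L*, a*, b*)."""
    h = math.radians((h_deg + delta_deg) % 360.0)
    return (L, C * math.cos(h), C * math.sin(h))

# Hypothetical observer setting for unique yellow (hue angle ~101 deg)
setting = dict(L=80.0, C=40.0, h_deg=101.0)

# The three configurations used per hue cluster: -10 deg, the setting (=), +10 deg
cluster = {cfg: rotate_hue(setting["L"], setting["C"], setting["h_deg"], d)
           for cfg, d in (("-", -10.0), ("=", 0.0), ("+", +10.0))}
```

Because only the hue angle changes, the three members of each cluster are, by construction, equidistant in the nominally uniform CIELAB/CIELCh space.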

Decoding over large hue differences is predicted by hue angles
For each observer, we trained tECOC models over all stimuli: the four hue settings (= group), and the eight stimuli generated by ±10° rotations of each of these settings (+ and - groups respectively). Using the classification results, we generated a time-series of dissimilarity matrices (see Methods for details) and found that the stimulus representations were highly dissimilar in a 100-400 ms window after stimulus onset (Figure 6A). Similarly, we also calculated a perceptual dissimilarity measure by using differences in hue angles of the stimuli in CIELAB space. As expected, perceptual dissimilarity increases as one moves away from a given reference stimulus (Figure 6B). Using rank-correlation analysis, we found a significant (p < 0.001) increase in Kendall's tau statistic in a 100-400 ms range post-stimulus (Figure 6D), suggesting that perceptual distances are indeed correlated with decoding output. This was also reflected in stable mean and peak dissimilarities during the period of significant correlation (Figure 6C).

We found that the three groups (subjective settings, and the ±10° rotations) cannot be decoded for non-unique hues (Figure 7A, first and fourth rows). However, for unique hues (Figure 7A, second and third rows), the rotated groups (first and third columns) can be decoded, while the subjective settings (second column) cannot. This result suggests that EEG representations of unique hues may have lower variability compared to non-unique hues. Such fluctuations in local variability in the representational space can create distortions in the decoding measure, allowing for better decoding of the flanking distributions (one such scenario is illustrated in Figure 7B). Note that in the perceptually uniform CIELAB space the three groups, by design, had equivalent relative distributions (- and + were simply mean-shifted copies of =).
Discussion

Our first finding is that, under isoluminant conditions, EEG responses to unique hues show more distinct patterns compared to non-unique hues, and that these patterns are stable during both passive viewing (Figure 2) and active task engagement (Figure 3). We can also reach certain conclusions about the underlying neural processes from the time-course of decoding performance. A 100-300 ms decoding window is consistent with the idea that the performance of the model could be driven by both perceptual and post-perceptual contributions (Forder et al., 2017b). This is supported by the fact that the decoding performance steadily rises before peaking between 150-200 ms after stimulus onset, a time-window where EEG signals begin reflecting post-visual evaluative processing (VanRullen and Thorpe, 2001), including colour categorisation (Fonteneau and Davidoff, 2007). The chromatic visual evoked potential (cVEP), which reflects the activation of colour-sensitive neurons in early visual cortices, also remains maximal in the same time window (Nunez et al., 2018). However, a high-level interpretation of the decoding on the basis of the categorical status of the stimulus colours is unlikely.
Categorical representativeness ratings do not follow the pattern observed in classifier performance (see Supplementary Figure S4C), and seem rather to reflect the relation between the colour sample and the focal colour. The most parsimonious explanation for the pivots in colour space that drive asymmetries in decoding around unique-hue locations would be that they correspond to hue locations associated with a more robust neural representation, making them more easily decodable from less robustly represented hues.

Secondly, classification performance for the decoding of hues is diminished when luminance contrast is added (Figure 4). This was not entirely unexpected, since luminance contrast is known to have a strong effect on EEG responses once it is sufficiently strong (Rabin et al., 1994). At the same time, we found that all luminance conditions (Figure 5) can be decoded to above-chance levels within the same 100-300 ms window. Thus, under non-isoluminant conditions, not only are the hue-driven patterns more difficult to detect, but they may also be at least partly overridden or replaced by luminance-contrast or joint colour-and-luminance-contrast-driven activity. Our findings are consistent with the idea that hue is most likely to be encoded by neural populations which also encode luminance. The fact that purely chromatic-tuned cells in the visual cortex are known to be in a minority compared to luminance-tuned or luminance-chromaticity-tuned cells (Lennie et al., 1990; Johnson et al., 2001) may partly explain why luminance signals tend to override chromatic information in EEG recordings. In V1-V3, neurons are tuned to many intermediate directions, both in terms of hue and luminance contrast (for a review, see Gegenfurtner and Kiper, 2003).
In higher-level areas of the extrastriate cortex, colour representations become organised in ways that resemble perceptual colour spaces (Brouwer and Heeger, 2009, 2013). Thus, the decoding in our study is likely to reflect cumulative effects that build up across these areas. Even though we find more robust responses for the two unique hues (red and green) compared to the two non-unique hues (orange and turquoise), decoding is still possible for non-unique hues, implying that there are indeed multiple hue representations being encoded by the brain (see, e.g., Brouwer and Heeger, 2009, 2013).

Thirdly, we show in Experiment 2 that the geometric structure of this representational space can be explored by carefully designed experiments. Our results demonstrate that while large distances in the neural representational space are indeed correlated with perceptual hue differences (Figure 6), there are local anisotropies associated with unique hues (Figure 7) which are likely to represent local changes in signal variability. Such tunings could reflect properties of our environment, such as the statistical regularities in the reflectance spectra of naturally occurring surfaces (Philipona and O'Regan, 2006). Perhaps this is the reason why the neural reality of perceptual red-green and blue-yellow hue-opponent mechanisms has proven to be so elusive: it is not a fundamental mechanism hard-wired into the neural circuitry, but a statistical peak in the tuning of neural populations which multiplex both colour and luminance information. Its identification is therefore complicated by the fact that neural populations jointly coding for chromaticity and luminance are likely to show higher responsiveness to the presence of luminance contrast (Johnson et al., 2001), making hue-specific signals much harder to detect.
A growing number of studies investigating population activity analyse EEG and MEG topographical data by interrogating trajectories in activation manifolds. Our results suggest that the structure of such manifolds can be highly anisotropic, and that these anisotropies can reflect perceptual measurables. In the case of hue perception, it is likely that the local structure of this space is reflected in quasi-invariants such as the so-called unique hue percepts. Now that neurometric mapping of hue spaces has been established by numerous studies (Hajonides et

The decoding scripts have been packaged as the tECOC toolbox, which has been made available as a public git repository here. The EEG and behavioural data from both experiments will be shared on the Open Science Framework website.