ABSTRACT
A lack of early visual experience causes pronounced auditory and visual cross-modal changes. However, how this cross-modal reorganization is distributed across visual cortical regions, down to the single-neuron level, remains unknown. Here we used two-photon calcium imaging in awake mice reared in darkness from birth to map the auditory and visual responsiveness of single neurons. We targeted neurons in the primary visual cortex (V1) and in higher visual cortical areas (HVAs) resembling the ventral and dorsal stream regions. We found that lateral dorsal stream areas showed a pronounced increase in auditory response strength, even after accounting for tone-induced whisker movement. Strikingly, this was accompanied by a decreased visual drive, measured as the number of recruited neurons and their response strength, although these visual effects were more widespread across cortical regions. Together, these results provide a comprehensive functional map of auditory and visual cross-modal changes across the mouse visual cortex after a lack of early visual experience. Moreover, our results suggest that a lack of visual drive in dorsal stream regions might provide an opportunity for the remaining senses to take over.
INTRODUCTION
In congenitally blind individuals the deprived visual cortex can shift its processing to one of the remaining sensory modalities, such as hearing, through a process called cross-modal plasticity1–4. In humans, this audio-visual cross-modal plasticity is accompanied by an increased ability to process sounds, for example improved pitch discrimination5 or auditory localization3, possibly attributable to enhanced processing of auditory information by the visual cortex. In contrast, visual processing is impaired but still present, as seen after restoration of sight in adulthood6–8. At the same time, after reversal of visual impairments in adulthood, sound-evoked activity in the visual cortex can remain elevated6,7,9. This suggests that the cross-modal rewiring after early visual deprivation might remain fixed into adulthood and thereby prevent full recovery of vision. How this is organized across the visual cortex at the single-neuron level remains unknown.
The visual cortex contains specialized regions that process different types of sensory information: the dorsal stream regions process movement-related information, the ventral stream regions object-like information, and the primary visual cortex (V1) processes simpler features. In humans, these regions have been suggested to be differentially sensitive to early visual experience, with the dorsal stream regions in particular appearing underdeveloped after loss of early visual experience10. In rodents, effects of early visual deprivation on visual processing have also been reported. Dark rearing from an early age lowered the responsiveness of neuronal populations in dorsal stream-like areas to visual stimuli11, but increased responsiveness to drifting gratings in V112. Together, this shows that dorsal stream regions are more dependent on early visual experience. However, whether this also means that single neurons in dorsal stream regions become more responsive to sounds remains unknown.
Animal models, and especially mouse models with their abundant options for cell targeting and manipulation, are essential for understanding the cell-type-specific changes underlying cross-modal plasticity after loss of early vision. Changes ranging from gene expression to function have been reported in the visual cortex after loss of early vision13–15. Layer 2/3 and layer 4 cells in V1 change their genetic identity after dark rearing from an early age14. Complete removal of retinal input to V1 by binocular enucleation even shifts the identity of V1 cell types towards types found in the higher visual cortical regions. Together, this suggests that visual experience is crucial for the formation of the sensory identity of visual cortical regions15. Functionally, single neurons of the visual cortex are sparsely responsive to sound, being either activated or suppressed by tones16, and contribute to audio-visual multisensory integration17–22. In animal models, loss of early visual experience can cause visual cortical neurons to become more responsive to sounds23–25. While most changes in responsiveness have been reported in V1, increased activation of the lateral extrastriate area V2 has also been reported23. However, whether visual experience-dependent changes in auditory responses differ between visual cortical subregions remains unknown. A complicating factor is that recent studies have shown that sound-induced neural activity is often indirectly caused by sound-induced facial movement26,27. Thus, a comprehensive, movement-controlled study of auditory and visual responses of single neurons across the mouse visual cortex after loss of early visual experience is missing. Additionally, whether these responses are reversible remains unknown.
We therefore set out to explore the effect of loss of early visual experience on auditory and visual processing throughout V1 and the higher visual cortical areas (HVAs) anteromedial (AM), posteromedial (PM), anterior (A), rostrolateral (RL), anterolateral (AL) and lateromedial (LM). We hypothesized that dorsal stream areas28,30,31 AM, PM, A, RL and AL would be particularly sensitive to a lack of visual experience, with increased auditory processing and decreased visual processing. Visual experience was manipulated by dark rearing mice from birth. In awake, head-fixed mice that were free to run, functional auditory and visual changes at the cellular level were recorded using calcium imaging under a two-photon (2P) microscope in response to pure-tone and drifting-grating stimuli. Recordings were targeted to V1 and the combined HVAs: AM/PM (medial dorsal stream), A/RL/AL (lateral dorsal stream) and LM (ventral stream). For auditory stimuli we corrected for the sound-induced arousal effect using video monitoring and face movement analysis. We found that a loss of visual experience has a pronounced effect on single neurons in HVAs, especially in the dorsal stream regions, with an increase in auditory processing and a decrease in visual processing. This study provides the first comprehensive mapping of functional auditory and visual cross-modal changes after loss of early visual experience and can contribute to a cellular-level understanding of the cross-modal changes after early loss of vision.
RESULTS
Dark rearing increases responsiveness of neurons in anterior higher visual cortical areas to pure tones
To test the effect of loss of early visual experience on auditory processing of single neurons across visual cortical regions we presented pure-tone stimuli to naive mice. One group of mice was reared in absolute darkness from birth and therefore lacked any visual experience; a second group was reared under normal light/dark conditions (Fig. 1A). Adult mice were first placed under a wide-field macroscope to determine the location of visual cortical areas using population receptive field mapping and the Allen Brain reference atlas (Fig. 1B)29,30,32. Activity of single layer 2/3 neurons was recorded using 2P calcium imaging in awake, head-fixed mice that were free to run while pure tones were presented from a single speaker in front of the mouse (Fig. 1C). Stimuli were presented in darkness while facial movement and run speed were recorded under infrared light.
Recent work showed that sound-induced activity of V1 neurons is predominantly correlated with facial movement rather than with the sound itself26,27,33. Movement of the whisker pad in particular had a high predictive value for neuronal activity27. Temporally, the motor-related component of the neural activity was strongest from about 200 to 300 ms after sound onset, whereas the auditory component dominated before that time26. As expected, we found that pure tones elicited whisker movement (Fig. S1A and B). Both tone onset and offset evoked whisker movement (Fig. S1B and D). Normally reared and dark reared animals showed frequency- and intensity-dependent whisker movement (Fig. S1C). Tones with a higher intensity (Fig. S1D) and around the peak frequency sensitivity of mouse hearing of approximately 6 kHz34 (Fig. S1E) induced the strongest whisker movement. No effect of rearing condition was observed. To minimize arousal-related components in tone-induced neuronal activity we 1) removed all trials with a Z-scored whisker movement energy above two, 2) improved the temporal resolution of our fluorescence signal by estimating spike times with a spike deconvolution algorithm35, and 3) restricted our analysis to approximately the first 250 ms after tone onset or offset.
We recorded the frequency response areas (FRAs) of neurons and observed that pure tones can activate and inhibit neurons in the visual cortex (Fig. 1E). Both tone onset and offset drove changes in neuronal activity in normally reared and dark reared animals throughout the visual cortical regions AM/PM, A/RL/AL, LM and V1 (Fig. 1F). Across cortical regions and rearing conditions, neurons showed similar fractions of tone-induced response types (activation, inhibition, both activation and inhibition, or no response) in their FRA (Fig. 1G). However, dark rearing resulted in a higher tone-induced response amplitude of neurons in the A/RL/AL region at their best frequency (BF, the tone evoking the strongest response), but not in other regions (Fig. 1H). Further analysis showed no effect of rearing condition or cortical region on the distribution of best frequencies (Fig. 1I) or on frequency selectivity, measured as receptive field sum (RFS)36 (Fig. 1J). Together, this shows that a loss of early visual experience by dark rearing specifically increases the response strength of single neurons to pure tones in lateral dorsal stream HVAs.
Dark rearing decreases responsiveness of neurons in anterior higher visual cortical areas and V1 to drifting gratings
The specific increase of auditory processing in the A/RL/AL visual cortical regions after loss of early visual experience could suggest that this region is especially vulnerable to a lack of early visual experience. We therefore tested whether single-neuron responses to visual stimuli would correspondingly decrease after dark rearing. To this end, the same normally reared and dark reared mice were also shown full-field drifting gratings at different directions and contrasts (Fig. 2A) while neuronal activity was measured using 2P calcium imaging (Fig. 2B). As expected, we observed neurons responding to drifting gratings, often with a clear preference for a certain direction and contrast (Fig. 2C). Both normally reared and dark reared animals had neurons responding to drifting gratings across the visual cortical regions (Fig. 2D), mainly of the activated type (Fig. 2E). In the A/RL/AL, LM and V1 regions there was a decrease in the overall fraction of responsive neurons after dark rearing (Fig. 2E). Additionally, we observed a decrease in the response amplitude to the preferred stimulus after dark rearing in both the AM/PM and A/RL/AL regions (Fig. 2F). Post-hoc two-way ANOVAs with Bonferroni correction targeted to A/RL/AL vs LM and A/RL/AL vs V1 showed that the decreased response amplitude was greater for A/RL/AL compared to V1 (rearing x area, F(1, 2918) = 5.529, p = 0.0376) but not for A/RL/AL compared to LM (rearing x area, F(1, 1161) = 2.613, p = 0.2126). Similar analyses for AM/PM vs LM (rearing x area, F(1, 815) = 3.120, p = 0.1574) and AM/PM vs V1 (rearing x area, F(1, 2572) = 4.456, p = 0.0698) were not significant.
In conclusion, a lack of early visual experience reduced the responsiveness of the visual cortex to drifting gratings. While the fraction of responsive neurons decreased in both V1 and HVAs, only dorsal stream regions showed a reduced response amplitude.
Dark rearing induces a shift in auditory and visual processing at the neuronal population level
Next, we assessed whether changes in auditory and visual response strength also had consequences for the processing of auditory and visual information at the population level. We trained a linear discriminant classifier to predict sound frequency and intensity (Fig. 3A), or grating direction and contrast (Fig. 3D), based on the change in activity across the population of neurons. Here we chose a less stringent approach to deal with whisker movement as a possible confound of neural activity: 1) we used ΔF/F0 as the measure of neuronal activity, and 2) we used linear regression to filter out whisker movement and running speed from the ΔF/F0 signal.
Normally reared animals showed a modest but significantly above-chance decoding accuracy of sound frequency in AM/PM and A/RL/AL. Dark rearing further increased frequency decoding accuracy only in A/RL/AL (Fig. 3B). Sound intensity was decoded above chance in AM/PM, A/RL/AL and V1 in normally reared animals, and dark rearing further increased intensity decoding accuracy in AM/PM, A/RL/AL and LM (Fig. 3C).
Grating direction and contrast decoding accuracy was above chance across brain regions and rearing conditions (Fig. 3E and F). Dark rearing decreased decoding accuracy for grating direction in AM/PM and A/RL/AL but increased it in LM (Fig. 3E). Similarly, dark rearing decreased decoding accuracy for contrast in AM/PM and A/RL/AL but, surprisingly, increased it in both LM and V1 (Fig. 3F).
Together, these results indicate that at the neuronal population level auditory information increases after loss of early visual experience, especially in HVAs. In contrast, visual information decreases in the medial and lateral dorsal stream regions after dark rearing but increases in LM and V1.
DISCUSSION
Here we describe cortical region-specific effects of a lack of early visual experience on auditory and visual processing down to the single-neuron level. A lack of early visual experience increased sound-evoked response strength specifically in neurons of the dorsal stream areas A/RL/AL. This dorsal stream-specific effect was accompanied by a pronounced decrease in the processing of visual stimuli by A/RL/AL neurons as well, although these visual effects were more widespread across cortical regions.
In both humans and mice, dorsal stream regions and their associated functions are particularly sensitive during early development10,11,37. In humans, developmental disorders such as Williams syndrome, autism spectrum disorder and cerebral palsy are associated with impaired performance on tasks involving visual motion processing10. Individuals with early onset blindness also show pronounced deficits in motion processing regions such as the middle temporal gyrus, middle occipital gyrus and right cuneus3,38. In mice, populations of neurons in dorsal stream regions, especially AM/PM and AL, develop visual tuning later after eye opening and become less responsive to visual stimuli after dark rearing compared to ventral stream regions11. This has led to the hypothesis that V1 input to dorsal stream regions is more dependent on early visual experience than V1 input to ventral stream regions11. We expand on this work by showing that after dark rearing fewer neurons in dorsal stream regions AM/PM and A/RL/AL are recruited by drifting gratings and that these neurons become less responsive to visual stimuli. Importantly, we find that these regions also become more responsive to auditory stimuli. Our results therefore provide support for the ‘dorsal vulnerability hypothesis’ and further suggest that the reduced visual input provides an opportunity for auditory inputs to dorsal regions to strengthen.
Recently, sound-evoked activity in the visual cortex has been suggested to be caused indirectly by facial movement rather than by the sound itself39. While we indeed find a strong influence of sound on facial movement, we still observe sound-evoked activity in single neurons after removing trials with sound-induced whisker movement and restricting our analysis windows to periods in which more purely auditory activity has been reported. Our results thus support the idea that visual cortical neurons can be activated by sound directly26, possibly contributing to audiovisual multisensory integration.
The source of the increased auditory response strength after dark rearing is still unknown. One hypothesis is that exuberant axons from the auditory cortex that are laid out at birth, and that are normally pruned away under the influence of feedforward visual input, remain in place into adulthood when that input is absent40. This is supported by anatomical tracing studies in animals41,42 and human fMRI studies43. However, there is also anatomical evidence that auditory subcortical input might drive the visual cortex after a lack of early visual experience25,42,44. Future experiments using functional imaging targeted to dorsal stream neurons in combination with auditory cortical or thalamic silencing might provide a clear answer.
In summary, a lack of early visual experience increases auditory and decreases visual processing in the mouse visual cortex, particularly in the dorsal stream. This could enhance dorsal stream-specific auditory processing but hamper restoration of visual tuning in adulthood.
MATERIALS AND METHODS
Animals
All experiments were approved by the institutional animal care and use committee of the Royal Netherlands Academy of Arts and Sciences under Central Committee Animal experiments (CCD) license AVD80100202215934. We used X male and X female mice in the dark reared group and X male and X female mice in the normally reared group. All mice were F1 offspring from female CBA/JRj mice (Janvier labs, https://janvier-labs.com/) crossed to male Vipr2-Cre-/- mice (Jackson Laboratories, https://www.jaxmice.jax.org/, strain 031332)45. Mice were group housed before surgery. After surgery they were either individually or group housed with ad libitum access to food and water under a 12-hour reversed day/night cycle or in constant darkness. All mice had access to a running wheel in their home cage. Extra care was taken to ensure dark reared animals remained in absolute darkness up until the first recording. Daily health checks and habituation to the recording setup were done under infrared light (850 nm) using custom infrared binoculars. During surgery the eyes were covered using custom light-protective caps. All experiments were performed during the dark phase of the normally reared animals.
Viral injection and cranial window surgery
Mice were anesthetized with isoflurane (5% induction, 1.5 - 2% maintenance in oxygen), body temperature was maintained at a stable 37 °C with a euthermic pad, and the eyes were covered with light-protective caps filled with Ophtosan eye ointment to prevent them from drying. Metacam (5 mg/kg s.c.), Temgesic (0.1 mg/kg s.c.), Xylocaine gel (on the periosteum) and Rimadyl (0.06 mg/mL, post-operative day 1 to 3 in drinking water) were provided for analgesia and Dexamethasone (4-8 mg/kg s.c.) for anti-inflammation and to reduce brain swelling. First, the skin and periosteum were removed above the skull. Then, eight holes were drilled in the skull, spread across the visual cortex, using a dental drill. A 1:1 mixture of AAV1-hDlx-dlox-mCyRFP1(rev)-dlox (0.67 × 10^12 titer, VVF Zurich, v313-1) and AAV9-CaMKIIα-jGCaMP8m (0.67 × 10^12 titer, VVF Zurich, v630-9)46 was injected at 400 and 200 µm below the cortical surface across 8 injections of 9.2 nL at 46 nL/s. A craniotomy was drilled above the visual cortex and a cranial window (4+4+5 mm diameter stacked cover slips) was carefully lowered into the craniotomy. The window and a custom metal head ring were attached to the skull using dental cement. Mice were allowed to recover from surgery for a minimum of one week, after which they were habituated to head fixation in the recording setup until they comfortably started running.
Wide-field calcium imaging
Population receptive field mapping and field sign analysis using the wide-field macroscope were done as described before29. In short, the complete 4 mm diameter window was imaged using a wide-field fluorescence macroscope (Axio Zoom.V16 Zeiss/Caenotec-Prof. Ralf Schnabel). Images were captured by a high-speed sCMOS camera (pco.edge 5.5) at 20 Hz, 1600 × 1600 pixels with 50 ms exposure time, and recorded with Encephalos software (Caenotec-Prof. Ralf Schnabel), after which they were downsampled to 800 × 800 pixels for analysis.
Two-photon calcium imaging
Two-photon calcium imaging was performed on a 2P microscope (Neurolabware) equipped with a Ti:sapphire laser (Mai-Tai ‘Deepsee’, Spectra-Physics) running at 920 nm, using a 16x, 0.8 NA water-immersion objective (Nikon) at 1.0x to 1.6x digital zoom. Imaging was controlled by Scanbox software (Neurolabware) running in Matlab at a frame rate of approximately 15.5 Hz. Mice were head fixed and free to run on a wheel. Running speed was recorded using a rotary encoder connected to an Arduino and processed in Matlab.
Visual stimuli
Wide-field calcium imaging
Mice were positioned 14 cm from the centre of a 122 × 68 cm LCD screen (iiyama LE5564S-B1, 1920 × 1080 pixels at 60 Hz refresh rate) covering a 144° x 86° field-of-view. Stimuli were created and presented using COGENT (developed by John Romaya at the LON at the Wellcome Department of Imaging Neuroscience) together with Matlab (Mathworks). Stimuli consisted of eccentricity-corrected bars angled at 0°, 45°, 90° or 135°, 10° wide, filled with a checkerboard pattern, on a grey background (20 cd/m2). A total of 58 stimuli were pseudo-randomly presented 10 times for 0.5 s with an inter-trial interval consisting of a grey screen (20 cd/m2) for 3.6 s. Images were smoothed using a Gaussian filter with a two-pixel standard deviation and stored per trial. Population receptive field mapping and field sign analysis were done as described before29,30.
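As a minimal illustration of the image pre-processing mentioned above (Gaussian smoothing and the down-sampling described in the previous section), the corresponding Matlab operations could look as follows; the frame here is a placeholder, not data from the actual Encephalos pipeline.

```matlab
% Illustrative sketch only (not the original Encephalos/Matlab pipeline).
% Requires the Image Processing Toolbox.
frame       = rand(1600, 1600, 'single');       % placeholder for one recorded wide-field frame
frameSmooth = imgaussfilt(frame, 2);            % Gaussian filter, 2-pixel standard deviation
frameDown   = imresize(frameSmooth, [800 800]); % down-sample to 800 x 800 pixels for analysis
```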
Two-photon calcium imaging
Mice were positioned 15 cm from the centre of a 24-inch gamma-corrected HD LED monitor (1920 × 1080 pixels at 60 Hz refresh rate) covering a 120° x 90° field-of-view. Stimuli were created and presented using OpenGL and Psychophysics Toolbox 3 in Matlab. Visual stimuli consisted of eccentricity-corrected full-field sinusoidal drifting gratings at a direction of 0°, 45°, 90°, 135°, 180°, 225°, 270° or 315° at 8 different contrasts logarithmically spaced between 1% and 100% (“logspace(0,2,8)” in Matlab). Each direction x contrast combination was pseudo-randomly presented for 1 s (0.05 cycles/degree at a temporal frequency of 1 Hz) with an inter-trial interval of 2 s during which a mean-luminance grey screen was presented; this was repeated 15 times.
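A minimal sketch of the resulting condition grid and pseudorandom trial order is given below (variable names are illustrative; the actual stimuli were generated with OpenGL/Psychophysics Toolbox 3 as described above).

```matlab
% Illustrative sketch of the direction x contrast condition grid and
% pseudorandom trial order (not the original stimulus script).
directions = 0:45:315;               % 8 drift directions (degrees)
contrasts  = logspace(0, 2, 8);      % 8 contrasts, log-spaced from 1% to 100%
nRepeats   = 15;

[D, C]     = meshgrid(directions, contrasts);
conditions = [D(:), C(:)];           % 64 unique direction/contrast combinations

trialList  = repmat(conditions, nRepeats, 1);
trialList  = trialList(randperm(size(trialList, 1)), :);   % pseudorandom presentation order
```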
Auditory stimuli
For 2P calcium imaging a single ultrasound speaker (Batsound L400 ultrasound speaker) was positioned at the level of the mouse at 20 cm distance. For measuring frequency response areas, pure-tone stimuli were created and presented using Psychophysics Toolbox 3 and Matlab. Pure-tone stimuli were presented in open field at 12 frequencies between 1.5 kHz and 48 kHz in half-octave steps and at intensities of 50, 60, 70 and 80 dB sound pressure level (dB SPL), generated at 192 kHz with 24 bits using a Terratec Aureon Xfire 8.0 HD external sound card. Peak intensity levels of background noise were measured around 40 dB SPL. Pure-tone stimuli were presented in pseudorandom order for 0.5 s (including a 5 ms cosine-squared rise/decay time) with a random inter-trial interval of 2 to 3 s; this was repeated 15 times. Sound intensities were calibrated using a condenser ultrasound microphone (Avisoft-Bioacoustics CM16/CMPA) placed at the position of the pinnae, in combination with the UltraSoundGate 116Hb recording interface and Avisoft-RECORDER USGH software.
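The tone waveform generation can be sketched as follows; this is a minimal illustration and not the original stimulus code, and the exact frequency spacing and variable names are assumptions.

```matlab
% Illustrative sketch: one 0.5-s pure tone at a 192 kHz sample rate with 5-ms
% cosine-squared rise/decay ramps. Frequencies are log-spaced from 1.5 to 48 kHz
% (approximately half-octave steps); the exact spacing used here is an assumption.
fs    = 192e3;                                    % sample rate (Hz)
freqs = logspace(log10(1.5e3), log10(48e3), 12);  % 12 tone frequencies
durS  = 0.5;                                      % tone duration (s)
rampS = 0.005;                                    % rise/decay time (s)

t     = (0:1/fs:durS - 1/fs)';
tone  = sin(2*pi*freqs(1)*t);                     % example: lowest frequency

nRamp = round(rampS * fs);
ramp  = sin(linspace(0, pi/2, nRamp)').^2;        % cosine-squared (sin^2) onset ramp
env   = [ramp; ones(numel(t) - 2*nRamp, 1); flipud(ramp)];
tone  = tone .* env;                              % apply rise/decay envelope
% Intensity (50-80 dB SPL) is set via the calibrated output gain, not shown here.
```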
Video tracking and whisker energy analysis
Facial movement during 2P calcium imaging was recorded using an infrared camera (FLIR Flea3 USB3 combined with a Tamron 12VM412ASIR 1/2” 4-12 mm F/1.2 infrared manual C-mount lens) triggered by 2P image acquisition at around 15.5 Hz. Video files were further processed using Facemap47 and Matlab. Motion energy was extracted from a manually drawn region of interest around the whisker pad.
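Conceptually, the whisker-pad motion energy corresponds to the frame-to-frame change in pixel values within the ROI. A minimal sketch of this computation is shown below; the actual analysis used Facemap, and the video array and ROI mask here are illustrative placeholders.

```matlab
% Conceptual sketch of the whisker-pad motion-energy computation (the actual
% analysis used Facemap); placeholder data stand in for the recorded video.
frames  = rand(120, 160, 500, 'single');                    % placeholder video: height x width x frames
roiMask = false(120, 160); roiMask(60:90, 80:130) = true;   % placeholder whisker-pad ROI

nFrames       = size(frames, 3);
whiskerEnergy = zeros(nFrames, 1);
for f = 2:nFrames
    d = abs(frames(:, :, f) - frames(:, :, f - 1));  % absolute frame-to-frame difference
    whiskerEnergy(f) = sum(d(roiMask));              % motion energy within the ROI
end
```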
Z-scored whisker energy was calculated per trial using the 2 to 0 s before tone onset as baseline. The peak of the Z-scored whisker energy was extracted between 0 and 0.5 s after tone onset and averaged across trials for the analysis in Fig. S1. Trials were removed from the pure-tone analysis if the mean Z-scored whisker energy in the 0.5 s after either tone onset or offset was above 2.
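A minimal sketch of this trial-exclusion criterion is given below; the variable names and assumed input traces are illustrative rather than taken from the original code.

```matlab
% Illustrative sketch of the whisker-based trial exclusion. 'whiskerEnergy' is the
% motion-energy trace (see above), 'frameTimes' its time stamps (s), and
% 'toneOnsets'/'toneOffsets' the stimulus times (s); these inputs are assumed here.
keepTrial = true(numel(toneOnsets), 1);
for tr = 1:numel(toneOnsets)
    base = whiskerEnergy(frameTimes >= toneOnsets(tr) - 2 & frameTimes < toneOnsets(tr));
    z    = (whiskerEnergy - mean(base)) / std(base);        % Z-score to the 2-0 s pre-tone baseline
    onWin  = frameTimes >= toneOnsets(tr)  & frameTimes < toneOnsets(tr)  + 0.5;
    offWin = frameTimes >= toneOffsets(tr) & frameTimes < toneOffsets(tr) + 0.5;
    if mean(z(onWin)) > 2 || mean(z(offWin)) > 2
        keepTrial(tr) = false;   % drop trials with strong tone-evoked whisker movement
    end
end
```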
Two-photon calcium imaging analysis
Raw images were pre-processed using the SpecSeg toolbox48 to select regions of interest (ROIs) corresponding to the somata of neurons and to extract the mean fluorescence signal. In brief, rigid motion correction was first performed using NoRMCorre49, after which ROIs were selected based on the cross-spectral power across pixels. ROIs were manually refined, and the mean signal was extracted per ROI. Then, the neuropil signal was subtracted by taking 70% of the average pixel value in a doughnut-shaped region around the ROI. ΔF/F values were calculated by subtracting a moving baseline (10th percentile over 500 frames) and dividing by a linear fit of that moving baseline.
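A minimal sketch of this ΔF/F computation is shown below, assuming per-ROI fluorescence traces 'Froi' and 'Fneuropil'; the actual analysis used the SpecSeg toolbox.

```matlab
% Illustrative sketch of the ΔF/F computation. 'Froi' and 'Fneuropil' are assumed
% [nFrames x 1] mean fluorescence traces of an ROI and its surrounding doughnut.
F  = Froi - 0.7 * Fneuropil;             % neuropil subtraction (70%)

nF      = numel(F);
baseWin = 500;                           % frames for the moving baseline
half    = floor(baseWin / 2);
F0      = zeros(nF, 1);
for k = 1:nF
    idx   = max(1, k - half):min(nF, k + half);
    F0(k) = prctile(F(idx), 10);         % 10th percentile over ~500 frames
end
p     = polyfit((1:nF)', F0, 1);         % linear fit of the moving baseline
F0fit = polyval(p, (1:nF)');
dFF   = (F - F0) ./ F0fit;               % ΔF/F
```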
To increase the temporal resolution of the ΔF/F trace we used a spike deconvolution algorithm (MLspike, with autocalibration of parameters per neuron)35. Additionally, run speed was regressed out of the estimated spike times using a linear model (‘fitlm’ in Matlab). The residual estimate of the firing rate (FR) was used to calculate ΔFR by subtracting the mean activity 1 to 0.5 s before tone onset.
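This correction can be sketched as follows; 'spikeEst', 'runSpeed', 'frameTimes' and 'onset' are illustrative assumed inputs, not variables from the original code.

```matlab
% Illustrative sketch of the run-speed correction and ΔFR calculation. 'spikeEst'
% (deconvolved activity) and 'runSpeed' are assumed [nFrames x 1] traces,
% 'frameTimes' their time stamps (s) and 'onset' a tone-onset time (s).
mdl     = fitlm(runSpeed, spikeEst);     % linear model: activity ~ run speed
FRresid = mdl.Residuals.Raw;             % residual firing-rate estimate

baseWin = frameTimes >= onset - 1.0 & frameTimes < onset - 0.5;   % 1 to 0.5 s pre-tone
respWin = frameTimes >= onset       & frameTimes < onset + 0.5;   % response window
dFR     = mean(FRresid(respWin)) - mean(FRresid(baseWin));        % baseline-subtracted response
```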
After removal of whisker movement trials, responses were calculated by taking the trial-averaged ΔFR 0 to 0.5 s after tone onset and offset (pure tones) or 0 to 0.5 s after grating onset (drifting gratings). A minimum of 8 trials per tone was required after trial removal; otherwise the tone was excluded from further analysis. For the calculation of response type and response amplitude, the period with the largest significant response (after tone onset or offset for pure tones, or after grating onset) was taken from the stimulus with the strongest significant response (best frequency for pure-tone stimuli or preferred stimulus for drifting gratings). A neuron was deemed significantly responsive if the ΔFR response to at least one stimulus type exceeded a trial-shuffled threshold. The threshold was set at the 0.1st percentile of a distribution of ΔFR response amplitudes from 1000 shuffles, in which the deconvolved spike times were circularly shifted by a random interval on every shuffle.
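A minimal sketch of the shuffle test is shown below. This is an interpretation rather than the original code: the "0.1st percentile" criterion is implemented here as requiring the observed response to exceed 99.9% of shuffled amplitudes (i.e. only the top 0.1% of the shuffled distribution lies above it), and 'computeDeltaFR' is a hypothetical helper applying the same ΔFR calculation to a shuffled trace.

```matlab
% Illustrative sketch of the shuffle-based responsiveness test. 'spikes' is a
% neuron's deconvolved activity trace and 'dFRobs' its observed trial-averaged
% response amplitude; 'computeDeltaFR' is a hypothetical helper that applies the
% same ΔFR calculation to a (shuffled) trace.
nShuffle = 1000;
dFRshuf  = zeros(nShuffle, 1);
for s = 1:nShuffle
    shift      = randi(numel(spikes));
    spikesShuf = circshift(spikes, shift);      % circularly shift the spike estimate
    dFRshuf(s) = computeDeltaFR(spikesShuf);    % response amplitude of the shuffled trace
end
% Assumption: activation is deemed significant when the observed response exceeds
% 99.9% of shuffled amplitudes (the inhibited case would use the opposite tail).
isResponsive = dFRobs > prctile(dFRshuf, 99.9);
```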
Linear discriminant analysis
For population decoding analysis a linear discriminant analysis classifier (‘fitcdiscr’ in Matlab with the gamma regularization parameter set to 0) was trained to predict trial types based on the ΔF/F0 signal. The ΔF/F0 signal was calculated by subtracting the mean ΔF/F signal over 1 to 0 s before stimulus onset from the mean ΔF/F signal over 0.2 to 1.2 s after stimulus onset. From the ΔF/F0 response, Z-scored whisker energy (for pure-tone stimuli only) and run speed were regressed out using ‘fitlm’ in Matlab. These residual ΔF/F0 values were used as predictors in the decoding model, with sound intensity, sound frequency, grating direction or grating contrast as labels. Neurons were randomly drawn, with the number limited to the smallest group size across the groups tested. The prediction accuracy was calculated and validated using 5-fold cross-validation. These procedures were repeated 100 times to produce a distribution of prediction accuracies. Performance above chance was tested against a label-shuffled distribution analyzed in the same way. Significance (above chance and between groups) was tested using a permutation test with 100,000 iterations.
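A minimal sketch of this decoding pipeline is shown below; 'X', 'nuisance' and 'labels' are assumed inputs and the variable names are illustrative, not taken from the original analysis code.

```matlab
% Illustrative sketch of the population decoding analysis. 'X' is an assumed
% [nTrials x nNeurons] matrix of ΔF/F0 responses, 'nuisance' a [nTrials x k]
% matrix of Z-scored whisker energy and run speed, and 'labels' the trial labels
% (e.g. sound frequency).
% 1) Regress the nuisance variables out of each neuron's response.
Xres = zeros(size(X));
for n = 1:size(X, 2)
    mdl        = fitlm(nuisance, X(:, n));
    Xres(:, n) = mdl.Residuals.Raw;
end
% 2) Subsample neurons to a matched count and decode with 5-fold cross-validation.
nUse = 50;                                  % illustrative; matched to the smallest group
nRep = 100;
acc  = zeros(nRep, 1);
for rep = 1:nRep
    pick     = randperm(size(Xres, 2), nUse);
    cvmdl    = fitcdiscr(Xres(:, pick), labels, 'Gamma', 0, 'CrossVal', 'on', 'KFold', 5);
    acc(rep) = 1 - kfoldLoss(cvmdl);        % 5-fold cross-validated decoding accuracy
end
% Chance level and group differences are assessed with label-shuffled distributions
% and permutation tests (not shown).
```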
Statistical analysis
Statistical details are written in the figure legends. For two-way ANOVAs and repeated measures ANOVAs the data was assumed to be normally distributed. Repeated measures two-way ANOVAs were Geisser-Greenhouse corrected. Data is plotted as mean ± SEM, except for LDA data which is plotted as mean ± SD.
ANOVAs and Fisher’s exact tests were performed in GraphPad Prism 10, and Mann-Whitney U tests and permutation tests in Matlab (R2023b).
ACKNOWLEDGEMENTS
We wish to thank the Levelt lab for helpful discussions, the members of the mechatronics department for technical support, and the animal caretakers of the Netherlands Institute for Neuroscience for their help. We thank Vasileios Patsourakos, Maaike van der Aa and Ayana Kemble for help with habituation of the mice, and Chris van der Togt, Matthew W. Self, Pieter W. Roelfsema and Gerard Borst for technical, data analysis and data management support. This project received funding from the European Union’s Horizon 2020 Research and Innovation Program under grant agreement nos. 785907 (HBP SGA2, CL & PR) and 945539 (HBP SGA3, CL & PR) and the Dutch Research Council (FlagEra SoundSight, 680-91-320, CL). The authors declare that they have no competing interests.
Footnotes
The order of authorship was changed on the bioRxiv website, not on the paper itself.