2D or Not 2D? An FMRI Study of How Dogs Visually Process Objects

Ashley Prichard¹, Raveena Chhibber¹, Kate Athanassiades¹, Veronica Chiu¹, Mark Spivak², Gregory S. Berns¹

doi: https://doi.org/10.1101/2020.06.04.134064

¹Psychology Department, Emory University, Atlanta, GA 30322
²Comprehensive Pet Therapy, Inc., Sandy Springs, GA 30328

Correspondence: gregory.berns@emory.edu

ABSTRACT

Given humans’ habitual use of screens, they rarely consider potential differences between viewing two-dimensional (2D) stimuli and their real-world, three-dimensional (3D) counterparts. Dogs also have access to many forms of screens and touchpads, with owners even subscribing to dog-directed content. Humans understand that 2D stimuli are representations of real-world objects, but do dogs? In canine cognition studies, 2D stimuli are almost always used as proxies for what is normally 3D, like faces, on the implicit assumption that 2D and 3D stimuli are represented in the brain the same way. Here, we used awake fMRI of 15 dogs to examine the neural mechanisms underlying dogs’ perception of two- and three-dimensional objects after the dogs were trained on either the two- or three-dimensional version of the objects. Activation within reward-processing regions and parietal cortex of the dog brain to 2D and 3D versions of objects was determined by training experience: dogs trained on one dimensionality showed greater activation to the dimension on which they were trained. These results show that dogs do not automatically generalize between two- and three-dimensional stimuli and caution against implicit assumptions when using pictures or videos with dogs.

INTRODUCTION

Studies of canine cognition frequently rely on two-dimensional (2D) pictures to test dogs’ ability to discriminate between objects, species, or faces (Albuquerque et al. 2016; Autier-Derian et al. 2013; Barber et al. 2016; Huber et al. 2013; Muller et al. 2015; Pitteri et al. 2014; Wallis et al. 2017). Such stimuli are popular because they are easy to adapt from studies of humans and nonhuman primates and easy to implement in laboratory settings. But the ecological validity of this line of research hinges on the extent to which the findings transfer to real-world stimuli and contexts (Romero and Snow 2019). As dogs may not perceive 2D visual stimuli as humans do, are images appropriate stimuli for the study of dog cognition?

Visual stimuli are often selected without considering the nature of dogs’ visual perception (Miller and Murphy 1995). For example, dogs have a higher flicker fusion rate than humans, so they may perceive the flickering of a video display if its refresh rate is too low, noticing a gap or flicker between movie frames. Dogs also have a visual streak rather than a fovea (as in primates), which increases their sensitivity to stimuli in the periphery of the visual field. Displaying a picture or video to a dog may therefore not reflect what the dog sees in the real world, because dogs may attend to different aspects of the display than we do (Byosiere et al. 2018; Byosiere et al. 2019). Although there is ample evidence that dogs can perceptually discriminate features of images, this does not mean that the images necessarily represent their real-world counterparts to the dog.

Research in canines and other nonhumans that utilizes pictures shares an underlying assumption that, like humans, the animals perceive 2D stimuli such as faces as similar to real 3D faces. Dogs do behaviorally differentiate pictures: they can discriminate between pictures of human facial expressions and between pictures of familiar and unfamiliar dogs or humans (Autier-Derian et al. 2013; Barber et al. 2016; Huber et al. 2013; Muller et al. 2015; Pitteri et al. 2014; Somppi et al. 2012), and following substantial training, dogs can follow commands presented by humans through video projection (Pongracz et al. 2003). One study reported that dogs could use duplicate or miniature versions of objects as referents to retrieve the corresponding objects, concluding that dogs can use iconic representations (Kaminski et al. 2009). However, the same dogs did not perform well when picture versions of the objects were used. Despite the widespread use of 2D visual stimuli in canine cognition, studies have not shown that dogs use 2D stimuli as referents for real-world stimuli.

The ability to abstract from 2D to 3D versions of objects is not uniquely human. Many nonhuman species show evidence of behavioral transfer from pictures or videos to objects, pictures of food, or conspecifics, but only after substantial training (Bovet and Vauclair 2000; Johnson-Ulrich et al. 2016; Wilkinson et al. 2013); there is thus little evidence for 2D-to-3D transfer occurring naturalistically in a nonhuman species. Nor does discrimination between pictures mean that the animal has abstract knowledge of objects, that it has formed a mental representation, or that it equates pictures with real-world objects (Jitsumori 2010; Weisman and Spetch 2010).

Using functional magnetic resonance imaging (fMRI), researchers have identified regions of the primate brain selective for processing specific types of visual stimuli, including the fusiform face area (FFA) for faces and the lateral occipital complex (LOC) for objects (Beauchamp et al. 2004; Durand et al. 2007; Eger et al. 2008; Janssen et al. 2018; Kourtzi and Kanwisher 2000; Kriegeskorte et al. 2008). Yet these fMRI studies share a caveat: they too rely on 2D visual stimuli as proxies for real-world stimuli, in subjects who are highly familiar with pictures.

There is some evidence that object-processing regions of the human brain respond differently to 2D and 3D versions of stimuli. An fMRI study that directly compared neural activation within the LOC to real-world objects and 2D versions of the same objects found that the LOC does not respond to the two versions in the same way (Snow et al. 2011). In behavioral studies, real objects also prompt greater attention and memory retrieval than 2D images, and they elicit goal-directed actions to a degree that 2D images do not (Gomez et al. 2018; Snow et al. 2014). Goal-directed actions, such as grasping, are difficult to generalize to 2D versions of objects because 2D versions lack the same binocular cues and proprioceptive feedback (Freud et al. 2018; Gallivan and Culham 2015; Hutchison and Gallivan 2018).

As in human studies of vision, fMRI can be used to elucidate the neural mechanisms underlying dogs’ perception of objects. FMRI studies of awake dogs have increased in complexity and duration, paralleling human fMRI studies. Canine studies show that stimulus-reward associations acquired prior to or during scanning are learned at different rates due to neural biases within reward-processing regions of the brain, such as the caudate and amygdala (Cook et al. 2016; Prichard et al. 2018a). Dogs also process familiar human words associated with objects in regions similar to the language-processing regions of humans, like the temporal-parietal junction, and show greater activation to novel words than to familiar words (Prichard et al. 2018b). As in human imaging, functional localizers have revealed areas of dogs’ occipital cortex selective for processing human and dog faces (Cuaya et al. 2016; Dilks et al. 2015; Szabo et al. 2020; Thompkins et al. 2018). Together these studies show that activation within areas of the dog brain can be used to predict perceptual or behavioral biases in processing visual stimuli.

In two separate studies, we used fMRI to measure activity in dogs’ brains in response to both objects and pictures of the objects. In Study 1, 15 dogs were split into two groups: dogs in the first group were trained on two 3D objects, and dogs in the second group were trained on two pictures of the objects. One stimulus was associated with reward and the other with nothing. During the fMRI session, dogs from both groups were presented with both the picture stimuli and the object stimuli. If hedonic mechanisms facilitate abstraction from 2D to 3D (and vice versa), then dogs should show greater neural activity in the caudate for the reward stimulus than for the no-reward stimulus, regardless of whether the stimulus appears in the trained or untrained dimension. In Study 2, we developed a functional localizer for object-processing regions analogous to the LOC. If dogs equate 2D and 3D stimuli, then they should show no difference in neural activity between the object and the picture in these regions.

MATERIALS AND METHODS

Participants

Participants for both studies were 15 pet dogs volunteered by their owners in the Atlanta area for fMRI training and fMRI studies (Prichard et al. 2018a) (Table 1). Each dog had previously completed two or more scans for the project and had demonstrated the ability to participate in MRI scans.

Table 1. Dogs (N=19) and participation in experiments.

General Experimental Design

The experimental design was similar to previous dog fMRI studies that examined preference using visual stimuli associated with food or social reward (Cook et al. 2016). Briefly, dogs entered and positioned themselves in custom chin rests in the scanner bore. All scans took place in the presence of the dog’s owner, who stood near the opening of the scanner bore, out of view of the dog, and delivered all food rewards (hot dogs). An experimenter stationed next to the owner, also out of view of the dog, controlled the presentation of stimuli. The onset and offset of each stimulus were timestamped by the simultaneous press of a four-button MRI-compatible button box by the experimenter.

Imaging

Scanning was conducted with a Siemens 3 T Trio whole-body scanner using procedures described previously (Berns et al. 2013; Berns et al. 2012; Prichard et al. 2018a; Prichard et al. 2018b). The functional scans used a single-shot echo-planar imaging (EPI) sequence to acquire volumes of 22 sequential 2.5 mm slices with a 20% gap (TE = 25 ms, TR = 1260 ms, flip angle = 70°, 64 × 64 matrix, 2.5 mm in-plane voxel size, FOV = 192 mm). Slices were oriented dorsally to the dog’s brain (coronal to the magnet, as in the sphinx position the dogs’ heads were positioned 90 degrees from the prone human orientation) with the phase-encoding direction right-to-left. Four runs of up to 400 functional volumes were acquired for each subject, with each run lasting about 9 minutes. Following functional scans, a T2-weighted structural image of the whole brain was acquired using a turbo spin-echo sequence (25-36 2mm slices, TR = 3940 ms, TE = 8.9 ms, flip angle = 131°, 26 echo trains, 128 x 128 matrix, FOV = 192 mm).

Preprocessing

Preprocessing was the same as described in previous studies (Berns et al. 2013; Prichard et al. 2018a). Briefly, preprocessing of the fMRI data included motion correction, censoring, and normalization using AFNI (NIH) and its associated functions. A hand-selected reference volume for each dog, corresponding to the dog’s average position within the magnet bore across runs, was used for two-pass, six-parameter rigid-body motion correction. Because dogs can move between trials and when consuming rewards, aggressive censoring removed unusable volumes from the fMRI time series: data were censored when estimated motion exceeded 1 mm displacement scan-to-scan and on the basis of outlier voxel signal intensities. A mask was drawn in functional space for each dog in the cerebellum and used to censor the data further, removing volumes where the beta values extracted from the cerebellum were assumed to be beyond the physiologic range of the BOLD signal (> |3| percent signal change) for a trial. Smoothing, normalization, and motion-correction parameters were identical to those described in previous studies (Prichard et al. 2018a): EPI images were smoothed and normalized to percent signal change with 3dmerge using a 6 mm kernel at full-width half-maximum. The Advanced Normalization Tools (ANTs) software (Avants et al. 2011) was used to spatially normalize the mean of the motion-corrected functional images to the individual dog’s structural image. We also performed a nonlinear transformation from each dog’s structural image to a high-resolution canine brain atlas developed from a previous study of Labrador retrievers (Berns et al. 2017).
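For concreteness, the core motion-correction, smoothing, and censoring steps could be scripted from Python as below. This is a minimal sketch, not the study’s actual pipeline: AFNI must be on the PATH, the file names are hypothetical, and the summed frame-to-frame parameter change is only a rough proxy for the 1 mm scan-to-scan displacement criterion.

```python
import subprocess
import numpy as np

def run(cmd):
    """Run one AFNI command line, failing loudly on error."""
    print(" ".join(cmd))
    subprocess.run(cmd, check=True)

# Hypothetical filenames; the study used a hand-selected reference
# volume per dog matching its average position in the bore.
epi, ref = "dog1_run1+orig", "dog1_ref+orig"

# Two-pass, six-parameter rigid-body motion correction (3dvolreg),
# saving the six motion parameters for censoring.
run(["3dvolreg", "-twopass", "-base", ref,
     "-1Dfile", "dog1_run1_motion.1D",
     "-prefix", "dog1_run1_moco", epi])

# Smooth with a 6 mm FWHM kernel (3dmerge), as in the text.
run(["3dmerge", "-1blur_fwhm", "6.0", "-doall",
     "-prefix", "dog1_run1_smooth", "dog1_run1_moco+orig"])

# Censor volumes moving > 1 mm scan-to-scan. Summing absolute
# frame-to-frame parameter changes is one simple proxy for
# displacement (an assumption; AFNI's 1d_tool.py offers others).
m = np.loadtxt("dog1_run1_motion.1D")
delta = np.abs(np.diff(m, axis=0)).sum(axis=1)
keep = np.concatenate([[1], (delta <= 1.0).astype(int)])
np.savetxt("dog1_run1_censor.1D", keep, fmt="%d")
```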

Experimental Design

Study 1: 2D vs. 3D

In each session, dogs were presented with two objects (a stuffed giraffe and a stuffed whale) and two life-sized cut-out pictures of the objects mounted on foamboard (Fig 1). Each stimulus was attached to a three-foot dowel that the experimenter used to present the stimuli to the dog inside the scanner bore. The dogs had not encountered either object before. Dogs were semi-randomly split into two groups prior to scanning: 8 in the object group and 7 in the picture group. Prior to the first run, dogs were trained on the stimulus-reward associations (10 reward, 10 no-reward trials) based on their assigned group, and the associations were refreshed between runs (5 reward, 5 no-reward trials). Following each run, dogs exited the scanner to rest or drink water.

Figure 1. 2D & 3D Stimuli.

Left) 3D whale and giraffe objects attached to 2.5-foot dowels for presentation of the stimuli to dogs in the scanner. Right) Pictures of the 3D whale and giraffe were printed to create color-matched 2D versions, pasted to foamboard, and attached to 2.5-foot dowels for presentation of the 2D stimuli to dogs in the scanner.

Dogs in the object group were trained on the object stimuli and semi-randomly assigned the whale or the giraffe as the reward stimulus: presentation of the reward object was immediately followed by delivery of a food reward, and presentation of the other object was followed by nothing. Dogs in the picture group were trained on the picture stimuli in the same manner. Training on the conditioned stimuli occurred prior to each run, while the dog was positioned in the scanner bore but before scan acquisition. During scan acquisition, no stimuli were followed by delivery of a food reward, so that dogs could not discriminate between objects and pictures based solely on food reward. To maintain general motivation to stay in the scanner, food rewards were delivered by the owner at random times throughout the scan session, between presentations of the stimuli.

An event-based design was used, consisting of trained reward and trained no-reward trial types, as well as symbolic reward and symbolic no-reward trial types. Trained reward and trained no-reward trials presented the two conditioned stimuli associated with food reward (or nothing) prior to scanning (objects for half of the dogs, pictures for the other half). Symbolic reward and symbolic no-reward trials presented the two untrained stimuli (pictures for dogs trained on objects, and objects for dogs trained on pictures). On all trials, a stimulus was presented for 5 s, followed by nothing. Presentation order was randomized, and trials were separated by a variable inter-trial interval. Each dog received the same trial sequence.

A scan session consisted of 4 runs, lasting approximately 9 minutes per run. Each run consisted of 25 trials (5 trained reward, 5 trained no-reward, 5 symbolic reward, 5 symbolic no-reward, and 5 food rewards delivered at random), for a total of 100 trials per scan session. No trial type was repeated more than 4 times sequentially, as dogs could habituate to the continued presentation of a stimulus.
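To make the sequencing constraints concrete, a run’s trial order could be generated along these lines. This is a sketch: the condition labels and the 4-8 s inter-trial range are assumptions, and a fixed seed reproduces the single sequence shown to every dog.

```python
import random

TRIAL_TYPES = ["trained_reward", "trained_no_reward",
               "symbolic_reward", "symbolic_no_reward", "food_reward"]

def longest_streak(seq):
    """Length of the longest run of identical consecutive items."""
    best = cur = 1
    for prev, item in zip(seq, seq[1:]):
        cur = cur + 1 if item == prev else 1
        best = max(best, cur)
    return best

def make_run(seed, per_type=5, max_streak=4, iti=(4.0, 8.0)):
    """One 25-trial run: 5 trials of each type, no type appearing
    more than max_streak times in a row, each trial followed by a
    variable inter-trial interval (the 4-8 s range is an assumption)."""
    rng = random.Random(seed)
    trials = TRIAL_TYPES * per_type
    rng.shuffle(trials)
    while longest_streak(trials) > max_streak:
        rng.shuffle(trials)
    return [(t, rng.uniform(*iti)) for t in trials]

# The same seed yields the same sequence, mirroring the fixed
# trial order used for every dog.
for trial, wait in make_run(seed=1):
    print("%-20s ITI %.1f s" % (trial, wait))
```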

Analyses

Each subject’s motion-corrected, censored, smoothed images were analyzed within a general linear model (GLM) for each voxel in the brain using 3dDeconvolve (part of the AFNI suite). Task-related regressors for each experiment were modeled using AFNI’s dmUBLOCK and stim_times_IM functions and were as follows: (1) trained reward stimulus; (2) trained no-reward stimulus; (3) symbolic reward stimulus; (4) symbolic no-reward stimulus. These functions created a column in the design matrix for each trial, allowing estimation of a beta value for each trial. Data were censored for outliers as described above for the contrasts of interest.
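As a concrete illustration, the trial-wise regression could be driven from Python along the following lines. This is a minimal sketch, assuming AFNI is on the PATH, with hypothetical file names and stimulus-timing (*.1D) files; it is not the study’s actual script.

```python
import subprocess

# A sketch of the trial-wise GLM. stim_times_IM fits one beta per
# individual trial; dmUBLOCK convolves a duration-modulated block
# with the hemodynamic response. File names are hypothetical, and
# the censor file is assumed concatenated across runs.
conditions = ["trained_reward", "trained_no_reward",
              "symbolic_reward", "symbolic_no_reward"]

cmd = ["3dDeconvolve",
       "-input", "dog1_run1_smooth+orig", "dog1_run2_smooth+orig",
       "-censor", "dog1_censor.1D",
       "-polort", "A",
       "-num_stimts", str(len(conditions))]
for i, name in enumerate(conditions, start=1):
    cmd += ["-stim_times_IM", str(i), name + "_times.1D", "dmUBLOCK",
            "-stim_label", str(i), name]
cmd += ["-tout", "-bucket", "dog1_glm"]
subprocess.run(cmd, check=True)
```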

A series of contrasts was pre-planned to assess main effects related to the acquisition of the trained associations and whether they generalized between 2D and 3D versions. Acquisition of the trained stimulus-reward association was probed with the contrast [trained reward – trained no-reward]. Transfer of the reward and no-reward associations to the untrained stimuli was probed with the contrast [symbolic reward – symbolic no-reward]. A direct association between the trained and untrained reward stimuli was tested with the contrast [trained reward – symbolic reward]. The contrast [all 3D – all 2D] tested for perceived differences between all 3D and all 2D stimuli, regardless of training. The average difference between trained and symbolic stimuli was assessed with the contrast [(trained reward + trained no-reward) – (symbolic reward + symbolic no-reward)]. Finally, the interaction between the reward association and training was measured with the contrast [(trained reward – trained no-reward) – (symbolic reward – symbolic no-reward)].
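In AFNI, these pre-planned contrasts can be written as symbolic general linear tests (-gltsym) over the condition labels sketched above. The labels are hypothetical; the [all 3D – all 2D] contrast is omitted because which stimuli are 3D depends on each dog’s group; and with trial-wise (IM) betas, the same contrasts can equivalently be computed by averaging per-trial estimates, as in the ROI analysis below.

```python
# Symbolic GLTs keyed by a descriptive name; each becomes a
# -gltsym/-glt_label pair appended to the 3dDeconvolve call above.
contrasts = {
    "acquisition": "SYM: trained_reward -trained_no_reward",
    "transfer": "SYM: symbolic_reward -symbolic_no_reward",
    "trained_vs_symbolic_reward": "SYM: trained_reward -symbolic_reward",
    "trained_vs_symbolic_all": ("SYM: trained_reward +trained_no_reward "
                                "-symbolic_reward -symbolic_no_reward"),
    "interaction": ("SYM: trained_reward -trained_no_reward "
                    "-symbolic_reward +symbolic_no_reward"),
}

glt_args = ["-num_glt", str(len(contrasts))]
for i, (label, sym) in enumerate(contrasts.items(), start=1):
    glt_args += ["-gltsym", sym, "-glt_label", str(i), label]
# glt_args would be appended to the 3dDeconvolve command list above.
```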

Region of Interest (ROI) Analysis

Caudate

As our interest was in the dogs’ responses to trained versus symbolic stimuli, quantitative analyses used activation values in the canine brain area previously observed to be responsive to reward stimuli (Cook et al. 2016). Anatomical ROIs of the left and right caudate nuclei were defined structurally using each dog’s T2-weighted structural image. ROI-based analyses were performed in individual, rather than group, space.

Beta values for the contrasts comparing the change in activation to reward and no-reward stimuli for trained (20 reward, 20 no-reward trials) and symbolic stimuli (20 reward, 20 no-reward trials) were extracted from the caudate ROIs in the left and right hemispheres. Beta values greater than |4| percent signal change were removed prior to analyses, assuming these were beyond the physiologic range of the BOLD signal. The remaining beta values were analyzed using the mixed-model procedure in SPSS 24 (IBM) with fixed effects for the intercept, group, hemisphere (left or right), and contrast type, an identity covariance structure, and maximum-likelihood estimation.
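A rough open-source stand-in for this model is sketched below using Python’s statsmodels rather than SPSS (an explicit substitution): the column names are assumptions, and a random intercept per dog approximates the identity covariance structure.

```python
import pandas as pd
import statsmodels.formula.api as smf

# One row per extracted beta: dog, group (object/picture), hemisphere
# (L/R), contrast (trained/symbolic), beta. Column names are assumed.
df = pd.read_csv("caudate_betas.csv")

# Drop betas beyond |4| percent signal change as non-physiologic.
df = df[df["beta"].abs() <= 4.0]

# Fixed effects for group, contrast type, their interaction, and
# hemisphere; random intercept per dog; ML (not REML) estimation.
model = smf.mixedlm("beta ~ group * contrast + hemisphere",
                    data=df, groups=df["dog"])
fit = model.fit(reml=False)
print(fit.summary())
```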

Whole Brain Analysis

Each subject’s individual-level contrasts from the GLM were normalized to the Labrador retriever atlas space via the ANTs software. Spatial transformations included a rigid-body registration of the mean EPI to the structural image, an affine registration of the structural image to the template, and a diffeomorphic registration of the structural image to the template. These transformations were concatenated and applied to the individual contrasts from the GLM to compute group-level statistics. 3dttest++, part of the AFNI suite, was used to compute a t-test across dogs against the null hypothesis that each voxel had a mean value of zero, for all of the GLM contrasts mentioned above. The average smoothness of the residuals from each dog’s time-series regression was calculated using AFNI’s non-Gaussian spatial autocorrelation function, 3dFWHMx -acf, which yields greatly reduced false-positive rates, clustered around 5 percent across voxelwise thresholds (Cox et al. 2017). AFNI’s 3dClustSim was then used to estimate the significance of cluster sizes across the whole brain after correcting for familywise error (FWE). As in human fMRI studies, a voxel threshold of p ≤ 0.005 was used, and a cluster was considered significant if it exceeded the critical size estimated by 3dClustSim for FWE ≤ 0.05, using two-sided thresholding and a nearest-neighbor level of 1.
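The smoothness and cluster-size steps could look like the sketch below, again assuming AFNI is on the PATH; the residuals dataset name and the three ACF parameters are placeholders, not values from this study.

```python
import subprocess

# Estimate the non-Gaussian spatial autocorrelation (ACF) of one
# dog's regression residuals; 3dFWHMx prints three ACF parameters.
subprocess.run(["3dFWHMx", "-input", "dog1_errts+tlrc", "-acf"],
               check=True)

# Average the ACF parameters across dogs, then simulate the critical
# cluster size at voxelwise p <= 0.005 and FWE <= 0.05.
acf = ["0.5", "3.0", "10.0"]  # placeholder a, b, c values
subprocess.run(["3dClustSim", "-acf", acf[0], acf[1], acf[2],
                "-mask", "atlas_mask+tlrc",
                "-pthr", "0.005", "-athr", "0.05"],
               check=True)
```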

Study 2: Object Localizer

To identify object-processing regions, dogs were presented with 3 s color movie clips projected on a screen in the bore of the magnet. Videos included human faces, novel objects (toys), familiar objects, and scrambled objects (a 15-by-15 grid of spatially rearranged movie frames). Stimuli were presented using Python 2.7.9 and the PsychoPy experiment library. A blocked fMRI design was used in which each 21 s block contained seven movie clips from one category. Each run contained two sets of the four stimulus blocks in palindromic order. Stimulus blocks were separated by 10 s delays, during which dogs were fed intermittently, such that each run lasted approximately 7 minutes. On average, each dog completed three runs.
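The run structure can be sketched in PsychoPy as follows. This is a simplified, hypothetical reconstruction: the clip paths and window settings are assumptions, the MovieStim class name varies across PsychoPy versions, and scanner-trigger and logging code are omitted.

```python
from psychopy import core, visual

win = visual.Window(fullscr=True)

# One run: two sets of the four category blocks in palindromic order.
blocks = ["faces", "novel", "familiar", "scrambled"]
run_order = blocks + blocks[::-1]

for category in run_order:
    for i in range(7):                      # seven 3 s clips = 21 s block
        clip = visual.MovieStim(win, "stim/%s_%02d.mp4" % (category, i))
        clock = core.Clock()
        while clock.getTime() < 3.0:        # draw the clip for 3 s
            clip.draw()
            win.flip()
    core.wait(10.0)                         # 10 s delay; dog fed here

win.close()
```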

Analyses

As in Study 1, a general linear model was estimated for each voxel using AFNI’s 3dDeconvolve. Task-related regressors were: (1) faces, (2) novel objects, (3) trained objects, and (4) scrambled objects. Object-specific regions, analogous to the LOC, were identified in each dog with the contrast [novel objects – faces]. Each dog’s object-specific region was defined by raising the voxel threshold of the statistical map for this contrast until approximately 40 or fewer voxels remained (Aulet et al. 2019). Beta values for the contrasts of interest from Study 1 were then extracted from each dog’s object-specific region to examine potential differences in neural activation between 2D and 3D objects.
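Concretely, the ROI definition amounts to raising the threshold on each dog’s t-map until roughly 40 voxels survive. A minimal nibabel sketch follows, with a hypothetical file name and starting threshold:

```python
import nibabel as nib
import numpy as np

# Load one dog's t-statistic map for [novel objects - faces].
tmap = nib.load("dog1_objects_vs_faces_tstat.nii.gz").get_fdata()

# Raise the (one-sided) threshold until <= 40 voxels remain.
t = 2.0
while (tmap > t).sum() > 40:
    t += 0.1
roi = tmap > t
print("threshold t = %.1f keeps %d voxels" % (t, roi.sum()))
```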

RESULTS

Study 1: 2D vs 3D Results

Caudate ROI Analyses

There was differentiation of the reward and no-reward stimuli in the caudate ROIs for the trained stimuli, regardless of whether dogs were trained on objects or pictures. There was also a significant interaction of training x [Reward – No Reward] (F (1,45) = 11.29, p = 0.002) (Fig 2). This indicates that the trained reward association did not transfer to the symbolic stimuli.

Figure 2. Average beta values (percent signal change) in individual dogs’ caudate nucleus for the contrast [Reward – No Reward], separated by trained and symbolic (testing) stimuli.

Changes in brain activation were extracted from the contrasts in the 2D vs. 3D study. In the caudate, there was a significant interaction of training x [Reward – No Reward] (F(1,45) = 11.29, p = 0.002).

Whole Brain Analyses

We found neural evidence for differentiation of the stimuli as an effect of the dimensionality of the training stimuli. Whole-brain analysis of the contrasts of interest revealed significant activation for three contrasts (Table 2). The [trained reward – symbolic reward] contrast and the contrast comparing activation to the trained versus untrained stimulus dimensionality [(trained reward + trained no-reward) – (symbolic reward + symbolic no-reward)] revealed a region in the posterior parietal lobe with greater activation to the trained dimensionality of stimuli than to the untrained dimensionality (Fig 3A). The contrast comparing the reward associations for the untrained dimension of stimuli [symbolic reward – symbolic no-reward] revealed a region in the right anterior parietotemporal cortex (Fig 3B).

Table 2. Cluster size and threshold significance for 2D and 3D object processing regions
Figure 3. Regions important for the discrimination of dimensional object stimuli.

Whole-brain analysis of the contrasts of interest revealed significant activation only for three contrasts that survived a voxel threshold of p ≤ 0.005. A) The contrast comparing the trained S+ to the corresponding untrained dimension [trained reward – symbolic reward] (454 voxels) revealed a region in the left posterior parietal lobe with greater activation to the trained dimensionality of stimuli. B) The contrast comparing the untrained dimension of stimuli [symbolic reward – symbolic no-reward] (248 voxels) revealed a region in the right anterior parietotemporal cortex.

Study 2: Object Localizer Results

Individual Object Regions

Three dogs (Velcro, Rookie, and Zoey) failed to complete three runs of the object localizer task, leaving insufficient data to localize object-specific regions in their brains. The remaining twelve dogs had object-selective regions defined by the contrast [novel objects – faces] in overlapping regions of the left or right hemisphere (Fig. 4). We further examined these object regions for differences between the 3D and 2D versions of the objects using the contrasts from Study 1; however, no contrast reached statistical significance in the object-specific regions across dogs.

Figure 4. Individual Dog Object Regions.

Sagittal, transverse, and dorsal sections. Regions were defined for each dog using the [objects – faces] contrast of the video stimuli. Colors represent individual dogs; white represents overlap between two or more dogs.

DISCUSSION

Our fMRI results provide the first evidence for neural differences in the occipital and parietal cortices of the dog brain for the processing of two- and three-dimensional objects. The main finding is that dogs’ perception of 2D and 3D objects is influenced by their experience with either stimulus dimension. Activation within reward processing regions was greater for the dimensionality of the trained reward stimulus. Whole-brain analyses revealed a left posterior parietal region selective for the trained dimension of stimuli over the untrained dimension. Taken together, these findings suggest that the neural representation of objects depends on dogs’ familiarity with two- and three-dimensional objects.

Object Regions

In humans, viewing real objects and images of objects activates similar networks, particularly the lateral occipital complex along the lateral and ventral convexity of occipito-temporal cortex (Snow et al. 2011; Todd et al. 2012). However, in a human fMRI study that presented real objects and pictures of the objects, the LOC was sensitive to visual differences between the two, such that it did not code the real (3D) and pictorial (2D) versions of a shape as equivalent (Snow et al. 2011). Because real objects afford specific actions, such as grasping or reaching with the dominant hand, object-specific actions may uniquely affect neural responses to the different versions of the same object (Gallivan et al. 2009; Gallivan and Culham 2015; Gallivan et al. 2011; Snow et al. 2014). Unlike in humans, we found little difference in dogs’ neural activation in individual object regions between 2D and 3D versions of object stimuli associated with reward. This similarity could reflect dogs forming an abstract object concept that is invariant to the dimensionality of the object. However, as the object-reward pairings were acquired in a passive viewing task, the dogs had little experience interacting with the objects or their picture versions; the lack of action associations with either stimulus may have made objects and pictures equivalent to the dog, as neither was actionable. It is also possible that the study was insufficiently powered to detect effect sizes that may be smaller in dogs than in humans.

Dimensionality Regions

As most studies of canine cognition rely on visual stimuli, we examined whether dogs use hedonic neural mechanisms to generalize from pictures of objects to real-world objects. In the interaction contrast, dogs showed greater activation within the caudate nucleus to the trained dimension of stimuli relative to the untrained dimension (e.g. dogs trained on pictures had greater activation to pictures than to the real objects), suggesting that hedonic neural mechanisms are biased toward the dimensionality of stimuli with which dogs are more familiar. Brain regions selective for stimulus dimensionality also included a left posterior parietal region with greater activation to the trained than the untrained dimensionality across dogs; this region appeared in the same location as, but in the opposite hemisphere to, the LOC-like regions defined in each dog in the object localizer study. Multi-voxel pattern analysis (MVPA) of human imaging data supports these findings: patterns in object regions can differ between object exemplars from the same category that vary in viewpoint or size, as well as between 2D and 3D versions of the same objects (Eger et al. 2008; Snow et al. 2011). Consistent with human imaging studies, the left posterior region also showed greater activation to objects relative to faces across dogs and appeared in regions of the canine brain similar to the primate LOC (Freud et al. 2017; Freud et al. 2018).

There was also greater activation to the untrained reward stimulus versus the untrained no-reward stimulus in a right parietotemporal region across dogs (e.g. dogs trained that the 2D giraffe was the reward stimulus had greater activation to the 3D giraffe than to the 3D whale in this region). Greater activation to the untrained reward stimulus in this region provides some evidence that dogs use hedonic neural mechanisms to generalize a stimulus-reward association from the trained to the untrained stimulus. However, we do not know which features, such as color or shape, the dogs may have used to facilitate this generalization. In human fMRI, the right primary visual cortex (V1) and right inferior temporal gyrus have likewise shown greater activation to 2D than to 3D versions of objects (Snow et al. 2011). Our results also suggest that dorsal regions of the dog brain may process abstract features of object stimuli that include, but are not limited to, actions (Freud et al. 2017).

There were several limitations to our studies, the foremost being that only a subset of dogs participated in both the localizer study and the 2D-3D study: some dogs were unavailable for both, and some were unable to remain still while viewing video stimuli in the MRI. Further, we limited the stimuli to two or three items, which allowed a simple, controlled design with many trials per item but may limit the generalizability of our findings to the full range of objects a dog may encounter. Unlike human imaging studies, we also did not include more abstract stimuli, such as those composed only of lines or limited to black and white. To address these concerns, future research could confirm the selectivity of each dog’s object-processing regions using novel stimuli.

Conclusions

Our fMRI results provide evidence for dedicated object-processing regions in the occipital and parietal cortices of dogs. Although real objects and pictures of the same objects share a degree of visual similarity, they differ fundamentally in the actions associated with them, and relating them requires experience with either dimension. Indeed, even 4-year-old children can show confusion about the properties of pictures versus the objects they depict and about the consequences of actions on each (Ganea et al. 2009). Brain imaging has begun to reveal how dogs perceive their world, offering direct insight from the participant into the neural mechanisms underlying perception. Our studies suggest that dogs and humans may share neural mechanisms for the visual perception of objects, and that neural biases may in turn affect perception and behavior. They also bear directly on the question of whether pictures are an appropriate proxy for real-world stimuli in studies of dogs, including fMRI studies.

Ethics Statement

This study was performed in accordance with the recommendations in the Guide for the Care and Use of Laboratory Animals of the National Institutes of Health. The study was approved by the Emory University IACUC (Protocols DAR-2002879-091817BA and DAR-4000079-ENTPR-A), and all owners gave written consent for their dog’s participation in the study.

Data Availability

The datasets generated during the current study are available from the corresponding author on reasonable request.

Acknowledgments

Thank you to all of the owners who trained their dogs to participate in fMRI studies: Lorrie Backer, Rebecca Beasley, Emily Chapman, Darlene Coyne, Vicki D’Amico, Diana Delatour, Jessa Fagan, Marianne Ferraro, Anna & Cory Inman, Patricia King, Cecilia Kurland, Claire & Josh Mancebo, Patti Rudi, Cathy Siler, Lisa Tallant, Nicole & Sairina Merino Tsui, Ashwin Sakhardande, & Yusuf Uddin.

Footnotes

  • Funding This work was supported by the Office of Naval Research (N00014-16-1-2276). ONR provided support in the form of salaries [RC, MS, GSB], scan time, and volunteer payment, but did not have any additional role in the study design, data collection and analysis, decision to publish, or preparation of the manuscript.

  • Competing Interests G.B. & M.S. own equity in Dog Star Technologies and developed technology used in some of the research described in this paper. The terms of this arrangement have been reviewed and approved by Emory University in accordance with its conflict of interest policies.

REFERENCES

1. Albuquerque N, Guo K, Wilkinson A, Savalli C, Otta E, Mills D (2016) Dogs recognize dog and human emotions. Biol Lett 12:20150883. doi:10.1098/rsbl.2015.0883
2. Aulet LS, Chiu VC, Prichard A, Spivak M, Lourenco SF, Berns GS (2019) Canine sense of quantity: evidence for numerical ratio-dependent activation in parietotemporal cortex. Biol Lett 15:20190666. doi:10.1098/rsbl.2019.0666
3. Autier-Derian D, Deputte BL, Chalvet-Monfray K, Coulon M, Mounier L (2013) Visual discrimination of species in dogs (Canis familiaris). Anim Cogn 16:637–651. doi:10.1007/s10071-013-0600-8
4. Avants BB, Tustison NJ, Song G, Cook PA, Klein A, Gee JC (2011) A reproducible evaluation of ANTs similarity metric performance in brain image registration. Neuroimage 54:2033–2044. doi:10.1016/j.neuroimage.2010.09.025
5. Barber AL, Randi D, Muller CA, Huber L (2016) The processing of human emotional faces by pet and lab dogs: evidence for lateralization and experience effects. PLoS One 11:e0152393. doi:10.1371/journal.pone.0152393
6. Beauchamp MS, Lee KE, Argall BD, Martin A (2004) Integration of auditory and visual information about objects in superior temporal sulcus. Neuron 41:809–823. doi:10.1016/s0896-6273(04)00070-4
7. Berns GS, Brooks A, Spivak M (2013) Replicability and heterogeneity of awake unrestrained canine FMRI responses. PLoS One 8:e81698. doi:10.1371/journal.pone.0081698
8. Berns GS, Brooks AM, Spivak M (2012) Functional MRI in awake unrestrained dogs. PLoS One 7:e38027. doi:10.1371/journal.pone.0038027
9. Berns GS, Brooks AM, Spivak M, Levy K (2017) Functional MRI in awake dogs predicts suitability for assistance work. Sci Rep 7:43704. doi:10.1038/srep43704
10. Bovet D, Vauclair J (2000) Picture recognition in animals and humans. Behav Brain Res 109:143–165
11. Byosiere S-E, Chouinard PA, Howell TJ, Bennett PC (2018) What do dogs (Canis familiaris) see? A review of vision in dogs and implications for cognition research. Psychon Bull Rev 25:1798–1813. doi:10.3758/s13423-017-1404-7
12. Byosiere S-E, Chouinard PA, Howell TJ, Bennett PC (2019) The effects of physical luminance on colour discrimination in dogs: a cautionary tale. Appl Anim Behav Sci 212:58–65. doi:10.1016/j.applanim.2019.01.004
13. Cook PF, Prichard A, Spivak M, Berns GS (2016) Awake canine fMRI predicts dogs’ preference for praise vs food. Soc Cogn Affect Neurosci 11:1853–1862. doi:10.1093/scan/nsw102
14. Cox RW, Chen G, Glen DR, Reynolds RC, Taylor PA (2017) FMRI clustering in AFNI: false-positive rates redux. Brain Connect 7:152–171. doi:10.1089/brain.2016.0475
15. Cuaya LV, Hernandez-Perez R, Concha L (2016) Our faces in the dog’s brain: functional imaging reveals temporal cortex activation during perception of human faces. PLoS One 11:e0149431. doi:10.1371/journal.pone.0149431
16. Dilks DD, Cook P, Weiller SK, Berns HP, Spivak M, Berns GS (2015) Awake fMRI reveals a specialized region in dog temporal cortex for face processing. PeerJ 3:e1115. doi:10.7717/peerj.1115
17. Durand JB et al. (2007) Anterior regions of monkey parietal cortex process visual 3D shape. Neuron 55:493–505. doi:10.1016/j.neuron.2007.06.040
18. Eger E, Ashburner J, Haynes JD, Dolan RJ, Rees G (2008) fMRI activity patterns in human LOC carry information about object exemplars within category. J Cogn Neurosci 20:356–370. doi:10.1162/jocn.2008.20019
19. Freud E, Ganel T, Shelef I, Hammer MD, Avidan G, Behrmann M (2017) Three-dimensional representations of objects in dorsal cortex are dissociable from those in ventral cortex. Cereb Cortex 27:422–434. doi:10.1093/cercor/bhv229
20. Freud E, Macdonald SN, Chen J, Quinlan DJ, Goodale MA, Culham JC (2018) Getting a grip on reality: grasping movements directed to real objects and images rely on dissociable neural representations. Cortex 98:34–48. doi:10.1016/j.cortex.2017.02.020
21. Gallivan JP, Cavina-Pratesi C, Culham JC (2009) Is that within reach? fMRI reveals that the human superior parieto-occipital cortex encodes objects reachable by the hand. J Neurosci 29:4381–4391. doi:10.1523/JNEUROSCI.0377-09.2009
22. Gallivan JP, Culham JC (2015) Neural coding within human brain areas involved in actions. Curr Opin Neurobiol 33:141–149. doi:10.1016/j.conb.2015.03.012
23. Gallivan JP, McLean A, Culham JC (2011) Neuroimaging reveals enhanced activation in a reach-selective brain area for objects located within participants’ typical hand workspaces. Neuropsychologia 49:3710–3721. doi:10.1016/j.neuropsychologia.2011.09.027
24. Ganea PA, Allen ML, Butler L, Carey S, DeLoache JS (2009) Toddlers’ referential understanding of pictures. J Exp Child Psychol 104:283–295. doi:10.1016/j.jecp.2009.05.008
25. Gomez MA, Skiba RM, Snow JC (2018) Graspable objects grab attention more than images do. Psychol Sci 29:206–218. doi:10.1177/0956797617730599
26. Huber L, Racca A, Scaf B, Viranyi Z, Range F (2013) Discrimination of familiar human faces in dogs (Canis familiaris). Learn Motiv 44:258–269. doi:10.1016/j.lmot.2013.04.005
27. Hutchison RM, Gallivan JP (2018) Functional coupling between frontoparietal and occipitotemporal pathways during action and perception. Cortex 98:8–27. doi:10.1016/j.cortex.2016.10.020
28. Janssen P, Verhoef BE, Premereur E (2018) Functional interactions between the macaque dorsal and ventral visual pathways during three-dimensional object vision. Cortex 98:218–227. doi:10.1016/j.cortex.2017.01.021
29. Jitsumori M (2010) Do animals recognize pictures as representations of 3D objects? Comparative Cognition & Behavior Reviews 5:136–138. doi:10.3819/ccbr.2010.50008
30. Johnson-Ulrich Z, Vonk J, Humbyrd M, Crowley M, Wojtkowski E, Yates F, Allard S (2016) Picture object recognition in an American black bear (Ursus americanus). Anim Cogn 19:1237–1242. doi:10.1007/s10071-016-1011-4
31. Kaminski J, Tempelmann S, Call J, Tomasello M (2009) Domestic dogs comprehend human communication with iconic signs. Dev Sci 12:831–837. doi:10.1111/j.1467-7687.2009.00815.x
32. Kourtzi Z, Kanwisher N (2000) Cortical regions involved in perceiving object shape. J Neurosci 20:3310–3318
33. Kriegeskorte N et al. (2008) Matching categorical object representations in inferior temporal cortex of man and monkey. Neuron 60:1126–1141. doi:10.1016/j.neuron.2008.10.043
34. Miller PE, Murphy CJ (1995) Vision in dogs. J Am Vet Med Assoc 207:1623–1634
35. Muller CA, Schmitt K, Barber AL, Huber L (2015) Dogs can discriminate emotional expressions of human faces. Curr Biol 25:601–605. doi:10.1016/j.cub.2014.12.055
36. Pitteri E, Mongillo P, Carnier P, Marinelli L, Huber L (2014) Part-based and configural processing of owner’s face in dogs. PLoS One 9:e108176. doi:10.1371/journal.pone.0108176
37. Pongracz P, Miklosi A, Doka A, Csanyi V (2003) Successful application of video-projected human images for signalling to dogs. Ethology 109:809–821. doi:10.1046/j.0179-1613.2003.00923.x
38. Prichard A, Chhibber R, Athanassiades K, Spivak M, Berns GS (2018a) Fast neural learning in dogs: a multimodal sensory fMRI study. Sci Rep 8:14614. doi:10.1038/s41598-018-32990-2
39. Prichard A, Cook PF, Spivak M, Chhibber R, Berns GS (2018b) Awake fMRI reveals brain regions for novel word detection in dogs. Front Neurosci 12:737. doi:10.3389/fnins.2018.00737
40. Romero CA, Snow JC (2019) Methods for presenting real-world objects under controlled laboratory conditions. J Vis Exp. doi:10.3791/59762
41. Snow JC, Pettypiece CE, McAdam TD, McLean AD, Stroman PW, Goodale MA, Culham JC (2011) Bringing the real world into the fMRI scanner: repetition effects for pictures versus real objects. Sci Rep 1:130. doi:10.1038/srep00130
42. Snow JC, Skiba RM, Coleman TL, Berryhill ME (2014) Real-world objects are more memorable than photographs of objects. Front Hum Neurosci 8:837. doi:10.3389/fnhum.2014.00837
43. Somppi S, Tornqvist H, Hanninen L, Krause C, Vainio O (2012) Dogs do look at images: eye tracking in canine cognition research. Anim Cogn 15:163–174. doi:10.1007/s10071-011-0442-1
44. Szabo D, Gabor A, Gacsi M, Farago T, Kubinyi E, Miklosi A, Andics A (2020) On the face of it: no differential sensitivity to internal facial features in the dog brain. Front Behav Neurosci 14:25. doi:10.3389/fnbeh.2020.00025
45. Thompkins AM et al. (2018) Separate brain areas for processing human and dog faces as revealed by awake fMRI in dogs (Canis familiaris). Learn Behav 46:561–573. doi:10.3758/s13420-018-0352-z
46. Todd RM, Talmi D, Schmitz TW, Susskind J, Anderson AK (2012) Psychophysical and neural evidence for emotion-enhanced perceptual vividness. J Neurosci 32:11201–11212. doi:10.1523/JNEUROSCI.0155-12.2012
47. Wallis LJ, Range F, Kubinyi E, Chapagain D, Serra J, Huber L (2017) Utilising dog-computer interactions to provide mental stimulation in dogs especially during ageing. ACI 2017. doi:10.1145/3152130.3152146
48. Weisman R, Spetch M (2010) Determining when birds perceive correspondence between pictures and objects: a critique. Comparative Cognition & Behavior Reviews 5:117–131. doi:10.3819/ccbr.2010.50006
49. Wilkinson A, Mueller-Paul J, Huber L (2013) Picture-object recognition in the tortoise Chelonoidis carbonaria. Anim Cogn 16:99–107. doi:10.1007/s10071-012-0555-1
Posted June 05, 2020.

Subject Area: Animal Behavior and Cognition