Abstract
The eye is the vanguard of the reception process, constituting the point where visual information arrives and is transformed into neural signals. While we view dynamic media content, a fine-tuned interplay of mechanisms causes our pupils to dilate and constrict over time, and putatively in similar ways across audience members exposed to the same messages. The research that pioneered pupillometry did in fact use dynamic media as stimuli, but this line of work stalled, and pupillometry has remained underdeveloped in the study of naturalistic media stimuli. Here, we introduce a VR-based approach for capturing audience members’ pupillary responses during media consumption and suggest an innovative analytic framework. Specifically, we expose audiences to a set of 30 different video messages and compute the cross-receiver similarity of their pupillometric responses. Based on these data, we identify the specific video an individual is watching. Our results show that this ‘pupil-pulse-tracking’ enables highly accurate decoding of video identity. Moreover, we demonstrate that the decoding is relatively robust to manipulations of video size and distractor presence. Finally, we examine the relationship between pupillary responses and subsequent memory. Theoretical implications for objectively quantifying exposure and states of audience engagement are discussed. Practically, we anticipate that this pupillary audience response measurement approach could find application in media measurement across contexts, ranging from traditional screen-based media (commercials, movies) to social media (e.g., TikTok and YouTube) and next-generation virtual media environments (e.g., the Metaverse, gaming).
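The cross-receiver decoding logic described above can be illustrated with a minimal sketch. This is not the authors’ actual pipeline; the data here are simulated, and all variable names are hypothetical. The idea: correlate a held-out viewer’s pupil time series with each video’s group-average time series from the remaining viewers, and assign the video whose template yields the highest correlation.

```python
import numpy as np

rng = np.random.default_rng(0)
n_videos, n_viewers, n_samples = 30, 20, 300  # hypothetical dimensions

# Simulated data: one latent pupil trace per video, plus viewer-specific noise.
latent = rng.standard_normal((n_videos, n_samples))
pupil = latent[:, None, :] + 0.5 * rng.standard_normal((n_videos, n_viewers, n_samples))

def decode_video(trace, templates):
    """Return the index of the template most correlated with `trace`."""
    r = [np.corrcoef(trace, t)[0, 1] for t in templates]
    return int(np.argmax(r))

correct = 0
for v in range(n_videos):
    for s in range(n_viewers):
        # Leave-one-viewer-out: average the other viewers' traces per video.
        mask = np.ones(n_viewers, dtype=bool)
        mask[s] = False
        templates = pupil[:, mask, :].mean(axis=1)
        correct += decode_video(pupil[v, s], templates) == v

accuracy = correct / (n_videos * n_viewers)
print(f"decoding accuracy: {accuracy:.2f}")  # well above chance (1/30)
```

With shared signal dominating idiosyncratic noise, leave-one-out correlation decoding recovers video identity far above the 1/30 chance level, which is the intuition behind the highly accurate decoding reported in the abstract.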
Competing Interest Statement
The authors have declared no competing interest.