Patterns of Saliency and Semantic Features Distinguish Gaze of Expert and Novice Viewers of Surveillance Footage

Yujia Peng, Joseph M. Burling, Greta K. Todorova, Catherine Neary, Frank E. Pollick, Hongjing Lu
doi: https://doi.org/10.1101/2022.01.09.475588
Yujia Peng
1 School of Psychological and Cognitive Sciences and Beijing Key Laboratory of Behavior and Mental Health, Peking University, China
2 Institute for Artificial Intelligence, Peking University, China
3 Beijing Institute for General Artificial Intelligence, Beijing, China
7 Department of Psychology, University of California, Los Angeles, USA
For correspondence: yujia_peng@pku.edu.cn
Joseph M. Burling
4 Department of Psychology, Ohio State University, USA
Greta K. Todorova
5 School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
Catherine Neary
6 School of Health and Social Wellbeing, The University of the West of England, United Kingdom
Frank E. Pollick
5 School of Psychology and Neuroscience, University of Glasgow, Glasgow, United Kingdom
Hongjing Lu
7 Department of Psychology, University of California, Los Angeles, USA
8 Department of Statistics, University of California, Los Angeles, USA

Abstract

When viewing the actions of others, we not only see patterns of body movements, but we also “see” the intentions and social relations of people, enabling us to understand the surrounding social environment. Previous research has shown that experienced forensic examiners, Closed Circuit Television (CCTV) operators, show superior performance to novices in identifying and predicting hostile intentions from surveillance footage. However, it remains largely unknown what visual content CCTV operators actively attend to when viewing surveillance footage, and whether CCTV operators develop strategies for active information seeking that differ from those of novices. In this study, we conducted computational analyses of gaze-centered stimuli derived from the eye movements of experienced CCTV operators and of novices as they viewed the same surveillance footage. These analyses examined how low-level visual features and object-level semantic features contribute to the attentive gaze patterns of the two groups of participants. Low-level image features were extracted by a visual saliency model, whereas object-level semantic features were extracted from gaze-centered regions by a deep convolutional neural network (DCNN), AlexNet. We found that the visual regions attended to by CCTV operators versus novices can be reliably classified from patterns of saliency features and DCNN features. Additionally, CCTV operators showed greater inter-subject correlation than novices in attending to saliency features and DCNN features. These results suggest that the looking behavior of CCTV operators differs from that of novices in actively attending to different patterns of saliency and semantic features in both low-level and high-level visual processing. Expertise in selectively attending to informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
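
The following is a minimal sketch, not the authors' code, of how object-level semantic features could be extracted from a gaze-centered region with a pretrained AlexNet in PyTorch. The crop size, the use of the convolutional-stage output, and the ImageNet preprocessing are illustrative assumptions; the abstract specifies only that AlexNet features were computed from gaze-centered regions.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T
    from PIL import Image

    # Pretrained AlexNet used purely as a feature extractor (no fine-tuning).
    alexnet = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)
    alexnet.eval()

    preprocess = T.Compose([
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def gaze_centered_features(frame: Image.Image, gaze_xy, crop=128):
        """Crop a square patch around the gaze point and return AlexNet conv features."""
        x, y = gaze_xy
        half = crop // 2
        patch = frame.crop((x - half, y - half, x + half, y + half))
        inp = preprocess(patch).unsqueeze(0)   # shape (1, 3, 224, 224)
        with torch.no_grad():
            feats = alexnet.features(inp)      # output of the convolutional stage
        return feats.flatten()                 # one feature vector per fixation

A feature vector of this kind, computed for every fixation of every viewer, is the sort of representation that can then be compared between the expert and novice groups.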

Author Summary When we see a person walking menacingly toward another person on the street, we may instantly feel that a physical confrontation will happen in the next second. However, it remains unclear how we efficiently infer social intentions and outcomes from the observed dynamic visual input. To answer this question, CCTV experts, who have years of experience in observing social scenes and making online predictions of action outcomes, provide a unique perspective. Here, we collected experts’ and novices’ eye movements as they observed different action sequences and compared the attended visual information between groups. A saliency model was used to compare low-level visual features such as luminance and color, and a deep convolutional neural network was used to extract object-level semantic visual features. Our findings showed that, compared to novices, experts attended to different patterns of low-level and semantic-level features in visual processing. Thus, expertise in selectively attending to informative features at different levels of the visual hierarchy may play an important role in facilitating the efficient detection of social relationships between agents and the prediction of harmful intentions.
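
As one way to make the group comparison concrete, here is a minimal sketch, under stated assumptions, of classifying whether a gaze-centered feature vector came from an expert or a novice viewer. The cross-validated logistic regression and the random placeholder data are illustrative; the abstract states only that attended regions can be reliably classified from patterns of saliency and DCNN features, without specifying the classifier.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # X: one row per fixation (e.g., flattened AlexNet or saliency features);
    # y: 1 = CCTV operator, 0 = novice. Random data stands in for real features.
    rng = np.random.default_rng(0)
    X = rng.normal(size=(400, 256))
    y = rng.integers(0, 2, size=400)

    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    scores = cross_val_score(clf, X, y, cv=5)   # chance is ~0.5 for balanced groups
    print("cross-validated accuracy:", scores.mean())

Classification accuracy reliably above chance would indicate that the two groups attend to systematically different feature patterns.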

Competing Interest Statement

The authors have declared no competing interest.

Copyright 
The copyright holder for this preprint is the author/funder, who has granted bioRxiv a license to display the preprint in perpetuity. All rights reserved. No reuse allowed without permission.
Posted January 11, 2022.
Subject Area

  • Neuroscience