Analysis of the Mechanism for a Deep Convolutional Neural Network Model to Predict Attentional Selection Using Adversarial Noise
Keywords:
Attentional selection, Saliency map, Adversarial noise, Deep convolutional neural network
Abstract
Visual recognition is essential for animals, including humans, to interpret their environment. Because not all visual input can be processed at once, attention plays a key role in filtering visual information, directing the brain's resources to salient objects or locations. The saliency map model replicates this biological process, predicting where attention and gaze will focus. Recently, models based on deep convolutional neural networks (DCNNs) have leveraged large datasets to improve attentional predictions, though their mechanisms remain unclear. This study investigates attentional selection by analyzing how a DCNN-based saliency map model responds to adversarial interference images: natural images altered with noise to disrupt the model's predictions. While human perception remains unaffected by these modifications, the model's internal responses are significantly altered. In particular, the interference images often shift the model's predicted attention from central to peripheral regions. These findings offer new insights into the workings of DCNN-based saliency models and deepen our understanding of human attention mechanisms.
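The abstract does not specify how the adversarial interference images were constructed. As a minimal sketch of one common approach, the snippet below applies a small FGSM-style signed-gradient perturbation to push a differentiable saliency model's prediction toward an alternative target map while keeping the pixel change imperceptible; `saliency_model`, `target_map`, and the MSE objective are illustrative assumptions, not the paper's method.

```python
# Hypothetical sketch (PyTorch): FGSM-style adversarial noise against a
# DCNN-based saliency model. The attack, model, and loss used in the paper
# are assumptions for illustration only.
import torch
import torch.nn.functional as F

def adversarial_interference(saliency_model, image, target_map, epsilon=2.0 / 255):
    """Perturb `image` so the predicted saliency map moves toward `target_map`.

    image:      (1, 3, H, W) tensor in [0, 1], the natural input image
    target_map: (1, 1, H, W) tensor, the saliency distribution the attack
                tries to induce (e.g., one emphasizing peripheral regions)
    epsilon:    maximum per-pixel perturbation, kept small so the change
                remains imperceptible to human observers
    """
    image = image.clone().detach().requires_grad_(True)

    pred = saliency_model(image)          # model's predicted saliency map
    loss = F.mse_loss(pred, target_map)   # distance to the desired map
    loss.backward()

    # One signed-gradient step; noise magnitude is bounded by epsilon.
    noise = -epsilon * image.grad.sign()
    adv_image = (image + noise).clamp(0.0, 1.0).detach()
    return adv_image, noise
```

In this sketch the perturbation budget `epsilon` controls the trade-off the abstract describes: small enough that human perception is unchanged, yet sufficient to redirect the model's predicted attention.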
License
Copyright (c) 2025 The Author(s)

This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.