Visual-Auditory Multimodal Decoding from Multi-Channel Human Intracranial EEG

Authors

  • Baowen Cheng, Shanghai Institute of Microsystem and Information Technology
  • Ke Chen
  • Haoyu Hua
  • Zehan Wu
  • Zhuofan Zhao
  • Liang Chen
  • Ying Mao
  • Meng Li

Keywords

Brain-machine interface, Neural data analysis

Abstract

The human brain's response to video stimuli engages multiple sensory modalities, including the visual and auditory systems. Nevertheless, the neural underpinnings of video-stimulus processing remain incompletely understood. One critical factor contributing to this knowledge gap is the scarcity of corresponding human electrophysiology datasets, particularly those recorded from deep brain regions. This work presents an intracranial EEG (iEEG) dataset acquired from 66 electrodes in an epilepsy patient over 300 minutes while the patient viewed customized video stimuli. The dataset covers a wide range of cortical and subcortical brain regions, with the goal of improving our understanding of the perceptual processing mechanisms engaged during video observation. Based on this dataset, we ran a classification experiment to assess how strongly different brain regions contribute to the processing of diverse videos, auditory stimuli, and visual scenes.
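The abstract does not spell out the classification pipeline, so the following is only a minimal Python sketch of one way a per-region decoding assessment like this could be set up: score each electrode by its cross-validated decoding accuracy and rank recording sites accordingly. The synthetic data, the feature choice (log band power), and all variable names are illustrative assumptions, not the authors' actual method.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)

# Synthetic stand-in for the dataset described in the abstract:
# n_trials epochs x n_channels iEEG electrodes x n_samples time points,
# with a 3-way stimulus label (e.g., video / auditory / visual-scene class).
n_trials, n_channels, n_samples = 240, 66, 500
X = rng.standard_normal((n_trials, n_channels, n_samples))
y = rng.integers(0, 3, size=n_trials)

# Toy per-channel feature: log mean squared amplitude (broadband power).
# A real analysis would likely use band-limited power, e.g., high gamma.
features = np.log(np.mean(X ** 2, axis=2))  # shape: (n_trials, n_channels)

# Score each electrode independently: cross-validated decoding accuracy
# serves as a proxy for how informative that recording site is.
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = [
    cross_val_score(clf, features[:, [ch]], y, cv=5).mean()
    for ch in range(n_channels)
]

# Rank electrodes (and hence the regions they sample) by accuracy.
ranked = np.argsort(scores)[::-1]
for ch in ranked[:5]:
    print(f"electrode {ch:2d}: mean CV accuracy = {scores[ch]:.3f}")
```

With random data the per-electrode accuracies hover around the 3-class chance level (~0.33); on real recordings, electrodes whose accuracy sits well above chance would mark regions implicated in processing the corresponding stimulus type.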

DOI: https://doi.org/10.24135/ICONIP4

Published

2025-03-17