# EEmaGe: EEG-based Image Generation for Visual Reconstruction

## Abstract

Advances in AI have paved the way for visual reconstruction from EEG. Recent studies have demonstrated the feasibility of reconstructing images from EEG recordings collected in designed experiments. Nevertheless, even though breakthroughs in AI began with imitating the human system, these frameworks bear little resemblance to the human visual system. To narrow this gap, this research proposes a novel framework called EEmaGe, which utilizes self-supervised learning to reconstruct images from raw EEG data. Unlike previous methods, which rely on supervised learning and labeled data collected with visual cues, the framework employs self-supervised autoencoders and their downstream tasks to extract robust EEG features. The experimental results demonstrate that the EEG encoder trains better with similar images than with EEG alone, even when the labeled data - pairs of EEG and images - is shuffled. Through this RE2I approach, the research has the potential to contribute to advancing our knowledge of the intricacies of the human brain and to developing more sophisticated AI systems that effectively mimic human visual perception.
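
To make the two-stage idea above concrete, here is a minimal PyTorch sketch of the training setup the abstract describes: a self-supervised autoencoder pretrained to reconstruct raw EEG, whose encoder is then reused for a downstream image-reconstruction head. All module names, shapes, and hyperparameters below are illustrative assumptions, not the actual EEmaGe implementation.

```python
# Sketch only: stage 1 pretrains an EEG autoencoder with self-supervision,
# stage 2 reuses the (frozen) encoder for image reconstruction.
# Shapes and sizes are hypothetical placeholders.
import torch
import torch.nn as nn


class EEGAutoencoder(nn.Module):
    """Self-supervised autoencoder over raw EEG (channels x time samples)."""

    def __init__(self, n_channels=128, n_samples=440, latent_dim=256):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Flatten(),
            nn.Linear(n_channels * n_samples, 1024),
            nn.ReLU(),
            nn.Linear(1024, latent_dim),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 1024),
            nn.ReLU(),
            nn.Linear(1024, n_channels * n_samples),
            nn.Unflatten(1, (n_channels, n_samples)),
        )

    def forward(self, eeg):
        return self.decoder(self.encoder(eeg))


class ImageHead(nn.Module):
    """Downstream head mapping pretrained EEG features to a small image."""

    def __init__(self, latent_dim=256, image_size=64):
        super().__init__()
        self.image_size = image_size
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 3 * image_size * image_size),
            nn.Sigmoid(),
        )

    def forward(self, z):
        return self.net(z).view(-1, 3, self.image_size, self.image_size)


# Stage 1: self-supervised pretraining on raw EEG (no image labels required).
ae = EEGAutoencoder()
eeg = torch.randn(8, 128, 440)                  # dummy batch of raw EEG
recon_loss = nn.functional.mse_loss(ae(eeg), eeg)
recon_loss.backward()

# Stage 2: freeze the encoder and train the image-reconstruction head.
for p in ae.encoder.parameters():
    p.requires_grad = False
head = ImageHead()
images = torch.rand(8, 3, 64, 64)               # dummy target images
pred = head(ae.encoder(eeg))
image_loss = nn.functional.mse_loss(pred, images)
image_loss.backward()
```

Because stage 1 needs only the EEG signal itself, the encoder can be pretrained without EEG-image labels, which is the property the abstract highlights over prior supervised approaches.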