Welcome to the repository for the paper "Seeing the Future: Anticipatory Eye Gaze as a Marker of Memory." This study introduces the MEGA (Memory Episode Gaze Anticipation) paradigm, which utilizes eye tracking to quantify memory retrieval without relying on verbal reports. By monitoring anticipatory gaze patterns, we can index event memory traces, providing a novel "no-report" method for assessing episodic memory.
To run the scripts in this repository, you need:
- Python 3.9
- Required Python packages (listed in `requirements.txt`)
Clone the repository to your local machine:

```shell
git clone https://github.com/dyamin/MEGA.git
cd MEGA
```
Install the required packages:

```shell
pip install -r requirements.txt
```
Preprocessing starts with two main components: `Preprocess-subjects` and `Decentralized`.

- `Preprocess-subjects`: Choose between `ParseEyeLinkAsc` or `Cili` to read the EyeLink `.asc` files (after conversion using the EyeLink tool `edf2asc`). This will return all the data from the EyeLink parser as a large pandas DataFrame.
- `Decentralized`: Since each subject watches the movies in a different order, you will have one long file with movie-number markers. This code extracts the data for each movie independently, making it easier to load and process.

To run this step, use the file `PreprocessingController.py` under the folder `pre_processing`.
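The `Decentralized` split can be pictured as grouping one long samples table by its movie marker. A minimal pandas sketch (all column names and values here are made up for illustration, not the repo's actual schema):

```python
import pandas as pd

# Hypothetical columns -- the real marker names come from the EyeLink
# message events in the .asc file, not from this made-up schema.
samples = pd.DataFrame({
    "time": range(8),
    "x": [512, 510, 300, 310, 640, 650, 100, 120],
    "y": [384, 380, 200, 210, 400, 390, 300, 310],
    "movie": [1, 1, 1, 2, 2, 2, 3, 3],  # movie-number markers
})

# Split the one long recording into independent per-movie tables,
# mirroring what the Decentralized step does.
per_movie = {movie: df.reset_index(drop=True)
             for movie, df in samples.groupby("movie")}

print(sorted(per_movie))  # [1, 2, 3]
```

Each per-movie table can then be saved and processed on its own, regardless of the order in which the subject watched the movies.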
Run the postprocessing module to organize and classify the eye-tracking data together with the memory reports.

To run this step, use the file `PostprocessingController.py` under the folder `post_processing`.
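Combining gaze data with memory reports can be sketched as a per-trial join. The column names below are hypothetical placeholders, not the repo's actual schema:

```python
import pandas as pd

# Hypothetical schema: per-trial gaze summaries and per-trial memory
# reports; column names are illustrative only.
gaze = pd.DataFrame({"subject": ["s01", "s01", "s02"],
                     "movie": [1, 2, 1],
                     "mean_anticipation": [0.42, 0.18, 0.35]})
reports = pd.DataFrame({"subject": ["s01", "s01", "s02"],
                        "movie": [1, 2, 1],
                        "remembered": [True, False, True]})

# Attach each trial's memory report to its gaze summary so that trials
# can later be grouped into remembered vs. forgotten.
merged = gaze.merge(reports, on=["subject", "movie"], how="left")
print(merged["remembered"].tolist())  # [True, False, True]
```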
Run the RoI (region-of-interest) module to add the RoI definitions.
To run this step, use the file `src/roi/RoiController.py`
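At its core, RoI tagging tests whether each gaze sample falls inside a region's bounds. A minimal sketch with a rectangular RoI (coordinates invented for illustration, not taken from the repo's definitions):

```python
# Hypothetical rectangular region of interest, in screen pixels.
roi = {"x_min": 100, "x_max": 300, "y_min": 50, "y_max": 250}

def in_roi(x, y, roi):
    """Return True if the gaze sample (x, y) lies inside the RoI."""
    return (roi["x_min"] <= x <= roi["x_max"]
            and roi["y_min"] <= y <= roi["y_max"])

# Label a few made-up gaze samples.
samples = [(150, 100), (400, 100), (250, 240)]
labels = [in_roi(x, y, roi) for x, y in samples]
print(labels)  # [True, False, True]
```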
To extract eye-tracking features, use the module `features_extraction`.

To run this step, use the file `src/features_extraction/FeatureExtractionController.py`.
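Feature extraction amounts to reducing each trial's gaze events to summary statistics. A toy sketch (the repo's extractor computes far richer features; the fixation durations below are made up):

```python
import statistics

# Hypothetical fixation durations (ms) for one trial.
fixation_durations = [180, 220, 340, 150, 260]

# Reduce the raw events to a small feature vector for the classifier.
features = {
    "n_fixations": len(fixation_durations),
    "mean_fix_dur": statistics.mean(fixation_durations),
    "max_fix_dur": max(fixation_durations),
}
print(features["n_fixations"], features["mean_fix_dur"])  # 5 230
```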
To carry out statistical analysis, you can use the Statistical Analysis module to create visualizations and calculate the statistical power for the effect. Here's how to do it:
1. Start by defining the configuration in the `src/statistical_analysis/config.py` file.
2. Next, execute the pre-plotter file located at `src/statistical_analysis/Preplotter.py`.
3. Finally, run either `src/statistical_analysis/GazeAnalysisPlotter.py` or `src/statistical_analysis/BetweenPopulationsAnalysisPlotter.py`, based on the specific test you need to perform.
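As a rough illustration of the kind of power calculation such an analysis performs, here is a normal-approximation sketch for a one-sided one-sample test. The effect size and sample size are illustrative only, not the paper's values:

```python
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1 + erf(z / sqrt(2)))

def power_one_sample(d, n, alpha=0.05):
    """Approximate power of a one-sided one-sample test under a
    normal approximation. d: effect size (Cohen's d), n: sample size."""
    z_alpha = 1.6448536269514722  # Phi^{-1}(1 - 0.05)
    return phi(d * sqrt(n) - z_alpha)

# Illustrative numbers only: a medium effect (d = 0.5) with 30 subjects.
print(round(power_one_sample(0.5, 30), 2))  # 0.86
```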
Use the provided Jupyter notebooks to perform machine learning classification on the eye-tracking features (under `src/features_extraction`).

The version we utilized in the paper is `src/classification/paired_session_classification_loso_ALL_ROI_v2.ipynb`.
These notebooks contain detailed instructions and scripts for preparing data, training, and evaluating machine learning models. You can also perform single-trial classification using XGBoost and other models we experimented with.
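The leave-one-subject-out (LOSO) scheme the notebooks follow can be sketched in a few lines: every subject serves once as the held-out test set, and the model is fit only on the remaining subjects. The toy mean-threshold "classifier" and data below stand in for XGBoost and the real features:

```python
# Made-up (subject, feature, label) triplets for illustration only.
data = [
    ("s01", 0.9, 1), ("s01", 0.2, 0),
    ("s02", 0.8, 1), ("s02", 0.1, 0),
    ("s03", 0.7, 1), ("s03", 0.3, 0),
]

subjects = sorted({s for s, _, _ in data})
accuracies = []
for held_out in subjects:
    # Fit on all subjects except the held-out one.
    train = [(f, y) for s, f, y in data if s != held_out]
    test = [(f, y) for s, f, y in data if s == held_out]
    threshold = sum(f for f, _ in train) / len(train)  # "train" the model
    # Evaluate on the held-out subject only.
    correct = sum((f > threshold) == bool(y) for f, y in test)
    accuracies.append(correct / len(test))

print(accuracies)  # [1.0, 1.0, 1.0]
```

Fitting strictly on the training subjects ensures no information about the held-out subject leaks into the model.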
To generate videos with eye-tracking data like the ones we used here:

1. Start by defining the configuration in the `src/visualize/config.py` file.
2. Next, execute the `src/visualize/VisualizeController.py` file.
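The overlay idea is simply stamping a gaze marker onto each frame at the recorded (x, y) position. A toy sketch using a character grid in place of real image frames (real frames would be image arrays, e.g. via OpenCV; the gaze trace below is invented):

```python
# Tiny "video": one character grid per frame, with a gaze marker drawn
# at the recorded (x, y) position for that frame.
W, H = 8, 4
gaze_trace = [(1, 1), (4, 2), (6, 3)]  # made-up (x, y) per frame

frames = []
for x, y in gaze_trace:
    frame = [["." for _ in range(W)] for _ in range(H)]
    frame[y][x] = "o"  # overlay the gaze marker
    frames.append("\n".join("".join(row) for row in frame))

print(frames[0])
```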
The movie clips used in the experiments are available from the Yuval Nir Lab.
The experiment code is available here
The data used in this study is available upon request due to privacy and ethical considerations. Please contact the corresponding author for access.
- Figures and Tables: Generated results can be found in the `results` directory.
- Supplementary Materials: Additional analyses and figures are available in the `supplementary` directory.
If you use any part of this code or data in your research, please cite our paper:
Yamin D., Schmidig J.F., Sharon O., Nadu Y., Nir J., Ranganath C., Nir Y. (2024). Seeing the future: anticipatory eye gaze as a marker of memory.
TODO
We welcome contributions from the community. Please follow these steps to contribute:
- Fork the repository.
- Create a new branch (`git checkout -b feature/YourFeature`).
- Commit your changes (`git commit -m 'Add Your Feature'`).
- Push to the branch (`git push origin feature/YourFeature`).
- Open a pull request.
- Daniel Yamin
- Flavio Schmidig
- Omer Sharon
- Jonathan Nir
- Yuval Shapira
- Yuval Nir
This project is licensed under the MIT License - see the LICENSE file for details.
For more detailed information, please refer to our published paper: Yamin D., Schmidig J.F., Sharon O., Nadu Y., Nir J., Ranganath C., Nir Y. (2024). Seeing the future: anticipatory eye gaze as a marker of memory.