The official implementation of the paper "Spatio-Temporal Co-Attention Fusion Network for Video Splicing Localization", JEI 2024. This repo provides the code, trained weights, and our training dataset.
- Download the training frames from Baiduyun Link (extract code: y4vy).
- Prepare the training datasets with the directory layout below (a hypothetical loader sketch follows the tree).
dataset
├── train
│   ├── videos
│   │   ├── video_00000
│   │   │   ├── 00000.png
│   │   │   └── ...
│   │   └── ...
│   └── masks
│       ├── video_00000
│       │   ├── 00000.png
│       │   └── ...
│       └── ...
└── val
    ├── videos
    │   ├── video_00000
    │   │   ├── 00000.png
    │   │   └── ...
    │   └── ...
    └── masks
        ├── video_00000
        │   ├── 00000.png
        │   └── ...
        └── ...
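For reference, the following is a minimal sketch of how this frame/mask layout can be read with a PyTorch Dataset. The class name, normalization, and pairing logic are illustrative assumptions, not this repo's actual data loader.

```python
import os
from glob import glob

import numpy as np
import torch
from PIL import Image
from torch.utils.data import Dataset


class SplicingFrameDataset(Dataset):
    """Hypothetical loader: pairs each frame with the same-named mask."""

    def __init__(self, root, split="train"):
        self.samples = []
        video_dirs = sorted(glob(os.path.join(root, split, "videos", "video_*")))
        for vdir in video_dirs:
            vname = os.path.basename(vdir)
            for frame_path in sorted(glob(os.path.join(vdir, "*.png"))):
                mask_path = os.path.join(root, split, "masks", vname,
                                         os.path.basename(frame_path))
                self.samples.append((frame_path, mask_path))

    def __len__(self):
        return len(self.samples)

    def __getitem__(self, idx):
        frame_path, mask_path = self.samples[idx]
        frame = np.asarray(Image.open(frame_path).convert("RGB"), dtype=np.float32) / 255.0
        mask = np.asarray(Image.open(mask_path).convert("L"), dtype=np.float32) / 255.0
        frame = torch.from_numpy(frame).permute(2, 0, 1)  # C x H x W
        mask = torch.from_numpy(mask).unsqueeze(0)        # 1 x H x W
        return frame, mask
```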
- Modify the configuration in libs/utils/config_standard_db.py.
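The settings to check are typically the dataset root, checkpoint directory, and basic hyperparameters. The names below are hypothetical placeholders to illustrate the idea; the actual variables in config_standard_db.py may differ.

```python
# Hypothetical placeholders only -- the real option names in
# libs/utils/config_standard_db.py may be different.
DATA_ROOT = "dataset"           # folder that contains train/ and val/
CHECKPOINT_DIR = "checkpoints"  # where trained weights are saved/loaded
BATCH_SIZE = 4
LEARNING_RATE = 1e-4
NUM_EPOCHS = 100
```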
- Train the model.
python train.py
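For orientation only, a bare-bones supervised loop over such frame/mask pairs is sketched below; the real schedule, losses, and model construction live in train.py and the config, so every name here is a placeholder (SplicingFrameDataset is the hypothetical loader sketched above).

```python
import torch
from torch import nn
from torch.utils.data import DataLoader

# Placeholder network standing in for SCFNet; assumes all frames in a
# batch share one resolution. SplicingFrameDataset is the hypothetical
# loader sketched earlier in this README.
dataset = SplicingFrameDataset("dataset", split="train")
loader = DataLoader(dataset, batch_size=4, shuffle=True, num_workers=2)

model = nn.Sequential(
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, 1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

for epoch in range(10):
    for frames, masks in loader:
        optimizer.zero_grad()
        loss = criterion(model(frames), masks)
        loss.backward()
        optimizer.step()
```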
- Download the weights from Google Drive Link and move them into checkpoints/.
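Loading the downloaded weights should be a standard PyTorch state_dict load along these lines; the import path, class name, and checkpoint filename are assumptions, so adjust them to whatever test.py and demo.py expect.

```python
import torch

# Assumed names: the real model class, import path, and checkpoint
# filename in this repository may differ.
from libs.models.scfnet import SCFNet

model = SCFNet()
state_dict = torch.load("checkpoints/scfnet.pth", map_location="cpu")
model.load_state_dict(state_dict)
model.eval()
```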
- Run on a dataset.
python test.py
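Splicing localization is usually evaluated with pixel-level metrics on the predicted masks. The helper below computes a pixel-level F1 score from binary prediction and ground-truth arrays; it is a generic sketch, not necessarily the exact protocol used in test.py.

```python
import numpy as np

def pixel_f1(pred_mask, gt_mask, eps=1e-8):
    """Pixel-level F1 between binary {0, 1} prediction and ground truth."""
    pred = pred_mask.astype(bool)
    gt = gt_mask.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    return 2 * precision * recall / (precision + recall + eps)
```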
- Run on a video.
python demo.py
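If you need to turn a raw video into the per-frame PNG layout shown above (for example, to test on your own clips), an OpenCV sketch like the one below works; the zero-padded naming mirrors the dataset tree and is only an assumption about what demo.py expects.

```python
import os
import cv2

def extract_frames(video_path, out_dir):
    """Dump every frame of a video as zero-padded PNGs (00000.png, ...)."""
    os.makedirs(out_dir, exist_ok=True)
    cap = cv2.VideoCapture(video_path)
    idx = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        cv2.imwrite(os.path.join(out_dir, f"{idx:05d}.png"), frame)
        idx += 1
    cap.release()
    return idx

# Example: extract_frames("spliced_clip.mp4", "my_frames/video_00000")
```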
The code and dataset are released only for academic research. Commercial usage is strictly prohibited.
@article{Lin2024SCFNet,
  author    = {Man Lin and Gang Cao and Zijie Lou and Chi Zhang},
  title     = {{Spatio-temporal co-attention fusion network for video splicing localization}},
  journal   = {Journal of Electronic Imaging},
  volume    = {33},
  number    = {3},
  pages     = {033027},
  year      = {2024},
  publisher = {SPIE},
  doi       = {10.1117/1.JEI.33.3.033027},
  url       = {https://doi.org/10.1117/1.JEI.33.3.033027}
}
If you have any questions, please contact me ([email protected]).