DeblurGAN

arXiv Paper Version

PyTorch implementation of the paper DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks.

Our network takes a blurry image as input and produces the corresponding sharp estimate, as in the example:

The model we use is a Conditional Wasserstein GAN with Gradient Penalty plus a perceptual loss based on VGG-19 activations. This architecture also gives good results on other image-to-image translation problems (super-resolution, colorization, inpainting, dehazing, etc.).
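For reference, here is a minimal sketch of the two loss terms mentioned above, written against torchvision's pretrained VGG-19. The exact VGG layer cutoff and the interface of the critic are illustrative assumptions and may differ from the training code in this repository.

import torch
import torch.nn as nn
from torchvision import models

class PerceptualLoss(nn.Module):
    """MSE between VGG-19 feature activations of the deblurred and sharp images."""
    def __init__(self):
        super().__init__()
        vgg = models.vgg19(pretrained=True).features  # on newer torchvision, pass weights=... instead
        # Keep layers up to an intermediate conv3 activation and freeze them
        # (the cutoff index here is illustrative, not taken from this repo).
        self.features = nn.Sequential(*list(vgg.children())[:15]).eval()
        for p in self.features.parameters():
            p.requires_grad = False
        self.criterion = nn.MSELoss()

    def forward(self, restored, sharp):
        return self.criterion(self.features(restored), self.features(sharp))

def gradient_penalty(critic, real, fake, device="cuda"):
    # WGAN-GP term: penalize the critic's gradient norm at points interpolated
    # between real (sharp) and generated (deblurred) images.
    alpha = torch.rand(real.size(0), 1, 1, 1, device=device)
    mixed = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    scores = critic(mixed)
    grads = torch.autograd.grad(scores.sum(), mixed, create_graph=True)[0]
    return ((grads.view(grads.size(0), -1).norm(2, dim=1) - 1) ** 2).mean()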

How to run

Prerequisites

  • NVIDIA GPU + CUDA cuDNN (CPU untested, feedback appreciated)
  • PyTorch

Download pretrained weights from Dropbox. Note that for inference you only need to keep the generator weights.

Put the weights into

/.checkpoints/experiment_name

To test a model put your blurry images into a folder and run:

python test.py --dataroot /.path_to_your_data --model test --dataset_mode single --learn_residual
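If you want to call the generator directly instead of going through test.py, the following is a rough, self-contained sketch of loading the generator weights and deblurring a folder of images. Generator stands in for the repository's generator network, and the file layout and normalization constants are assumptions for illustration, not the repo's exact API.

import glob
import os
import torch
from PIL import Image
from torchvision import transforms

def deblur_folder(generator, weights_path, in_dir, out_dir, device="cuda"):
    # Load only the generator state dict (the discriminator is not needed at inference).
    generator.load_state_dict(torch.load(weights_path, map_location=device))
    generator.to(device).eval()
    to_tensor = transforms.Compose([
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # map inputs to [-1, 1]
    ])
    to_image = transforms.ToPILImage()
    for path in glob.glob(os.path.join(in_dir, "*.png")):
        blurry = to_tensor(Image.open(path).convert("RGB")).unsqueeze(0).to(device)
        with torch.no_grad():
            sharp = generator(blurry)
        # Undo the [-1, 1] normalization before saving the sharp estimate.
        result = (sharp.squeeze(0).cpu() * 0.5 + 0.5).clamp(0, 1)
        to_image(result).save(os.path.join(out_dir, os.path.basename(path)))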

Data

Download dataset for Object Detection benchmark from Google Drive

Note: the repository is still being structured; the links to the data and weights, as well as the instructions, will be updated soon.

Citation

If you find our code helpful in your research or work, please cite our paper.

@article{DeblurGAN,
  title = {DeblurGAN: Blind Motion Deblurring Using Conditional Adversarial Networks},
  author = {Kupyn, Orest and Budzan, Volodymyr and Mykhailych, Mykola and Mishkin, Dmytro and Matas, Jiri},
  journal = {ArXiv e-prints},
  eprint = {1711.07064},
  year = 2017
}

Acknowledgments

Code borrows heavily from pix2pix. The example images were taken from the GoPro test dataset (DeepDeblur).
