ShinkaiGAN is a deep learning model that transforms sketch images into anime scenes in the style of Makoto Shinkai. The model uses a Hybrid Perception Block U-Net architecture to achieve high-quality image-to-image translation. To stabilize training, we adopt the progressive training technique proposed by Karras et al. for ProGAN and StyleGAN.
The core of ShinkaiGAN is a U-Net generator built from Hybrid Perception Blocks (HPB).
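As a rough illustration of what an HPB-style U-Net stage might look like, the snippet below sketches a block that fuses a local depth-wise convolution branch with a global self-attention branch, following the general idea of ITTR's Hybrid Perception Block. All module names, dimensions, and the fusion scheme here are simplified assumptions for illustration, not the actual ShinkaiGAN implementation.

```python
import torch
import torch.nn as nn

class HybridPerceptionBlock(nn.Module):
    """Simplified sketch of an HPB: a local branch (depth-wise conv) and a
    global branch (self-attention over spatial tokens), fused residually
    and followed by a small feed-forward network."""

    def __init__(self, channels: int, num_heads: int = 4):
        super().__init__()
        # Local perception: depth-wise 3x3 convolution.
        self.local = nn.Conv2d(channels, channels, kernel_size=3,
                               padding=1, groups=channels)
        # Global perception: multi-head self-attention over flattened pixels.
        self.norm = nn.GroupNorm(1, channels)
        self.attn = nn.MultiheadAttention(channels, num_heads, batch_first=True)
        # Feed-forward network applied after the two branches are merged.
        self.ffn = nn.Sequential(
            nn.Conv2d(channels, channels * 2, kernel_size=1),
            nn.GELU(),
            nn.Conv2d(channels * 2, channels, kernel_size=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, c, h, w = x.shape
        local = self.local(x)
        # Flatten spatial dimensions into a token sequence for attention.
        tokens = self.norm(x).flatten(2).transpose(1, 2)        # (B, H*W, C)
        attn_out, _ = self.attn(tokens, tokens, tokens)
        global_branch = attn_out.transpose(1, 2).reshape(b, c, h, w)
        fused = x + local + global_branch                       # residual fusion
        return fused + self.ffn(fused)

# Example: a 64-channel feature map keeps its shape through the block.
# HybridPerceptionBlock(64)(torch.randn(1, 64, 32, 32)).shape -> (1, 64, 32, 32)
```

In a U-Net generator, blocks like this would replace plain convolutional stages in the encoder and decoder, with skip connections between matching resolutions.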
The model is trained on a custom dataset that includes:
- High-resolution anime scenes from various Makoto Shinkai films (not yet public due to copyright; this will be updated soon).
- Corresponding sketch images, either created manually or extracted with edge detection algorithms (see the example below).
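To illustrate the "extracted with edge detection" path, the snippet below shows one common way to turn a color frame into a sketch-like image with OpenCV's Canny detector. The thresholds and file names are placeholder assumptions; the actual dataset pipeline may differ.

```python
import cv2

def frame_to_sketch(frame_path: str, out_path: str,
                    low: int = 50, high: int = 150) -> None:
    """Convert an anime frame into a rough line sketch via Canny edges."""
    frame = cv2.imread(frame_path)
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    blurred = cv2.GaussianBlur(gray, (5, 5), 0)  # reduce noise before edge detection
    edges = cv2.Canny(blurred, low, high)
    sketch = 255 - edges                         # black lines on a white background
    cv2.imwrite(out_path, sketch)

# Hypothetical file names, for illustration only.
frame_to_sketch("frame_0001.png", "sketch_0001.png")
```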
To use ShinkaiGAN, follow these steps:
- Clone the Repository:

  ```bash
  git clone https://github.com/yourusername/ShinkaiGAN.git
  cd ShinkaiGAN
  ```

- Install Dependencies:

  ```bash
  pip install -r requirements.txt
  ```

- Run Training:

  ```bash
  python train.py \
    --src_dir "/path/to/source/directory" \
    --tgt_dir "/path/to/target/directory" \
    --lvl1_epoch 10 \
    --lvl2_epoch 20 \
    --lvl3_epoch 30 \
    --lvl4_epoch 40 \
    --lambda_adv 1.0 \
    --lambda_ct 0.1 \
    --lambda_up 0.01 \
    --lambda_style 0.01 \
    --lambda_color 0.001 \
    --lambda_grayscale 0.01 \
    --lambda_tv 0.001 \
    --lambda_fml 0.01 \
    --device cuda
  ```
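The `--lvl1_epoch` through `--lvl4_epoch` flags set how many epochs are spent at each level of the progressive training schedule, while the `--lambda_*` flags weight the individual loss terms. The sketch below only illustrates how such weights are typically combined into a single generator objective; the term names and the `args` interface are assumptions inferred from the flag names, not the repository's actual code.

```python
# Hypothetical sketch of combining the --lambda_* weights into one generator
# loss; the meaning of each term is inferred from its flag name only.
def total_generator_loss(losses, args):
    """`losses` maps a term name to an already-computed scalar tensor."""
    weights = {
        "adv": args.lambda_adv,
        "ct": args.lambda_ct,
        "up": args.lambda_up,
        "style": args.lambda_style,
        "color": args.lambda_color,
        "grayscale": args.lambda_grayscale,
        "tv": args.lambda_tv,
        "fml": args.lambda_fml,
    }
    return sum(weights[name] * value for name, value in losses.items())
```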
Here are some examples of sketch-to-anime transformations using ShinkaiGAN:
| Sketch | Anime Scene |
|---|---|
We welcome contributions to improve ShinkaiGAN. If you would like to contribute, please follow these steps:
- Fork the repository.
- Create a new branch (`git checkout -b feature-branch`).
- Commit your changes (`git commit -am 'Add new feature'`).
- Push to the branch (`git push origin feature-branch`).
- Create a new Pull Request.
This project is licensed under the CC BY-NC-ND License. See the LICENSE file for details.
- Zheng, W., Li, Q., Zhang, G., Wan, P., & Wang, Z. (2022). ITTR: Unpaired Image-to-Image Translation with Transformers. arXiv. https://doi.org/10.48550/arXiv.2203.16015
- Ronneberger, O., Fischer, P., & Brox, T. (2015). U-Net: Convolutional Networks for Biomedical Image Segmentation. arXiv. https://arxiv.org/abs/1505.04597
- Torbunov, D., Huang, Y., Tseng, H.-H., Yu, H., Huang, J., Yoo, S., Lin, M., Viren, B., & Ren, Y. (2023). UVCGAN v2: An Improved Cycle-Consistent GAN for Unpaired Image-to-Image Translation. arXiv. https://doi.org/10.48550/arXiv.2303.16280
- Karras, T., Aila, T., Laine, S., & Lehtinen, J. (2017). Progressive Growing of GANs for Improved Quality, Stability, and Variation. arXiv. https://arxiv.org/abs/1710.10196
- Karras, T., Laine, S., & Aila, T. (2018). A Style-Based Generator Architecture for Generative Adversarial Networks. arXiv. https://arxiv.org/abs/1812.04948
- AnimeGANv3: A Novel Double-Tail Generative Adversarial Network for Fast Photo Animation. (n.d.). Retrieved June 25, 2024, from https://tachibanayoshino.github.io/AnimeGANv3/
- Liu, G., Chen, X., & Hu, Y. (2019). Anime Sketch Coloring with Swish-Gated Residual U-Net. Communications in Computer and Information Science, 190–204. https://doi.org/10.1007/978-981-13-6473-0_17