Image captioning done entirely with CNNs, implemented in PyTorch and based on the paper CNN+CNN: Convolutional Decoders for Image Captioning by Qingzhong Wang and Antoni B. Chan.
The model was trained on the COCO dataset, using the training/validation split provided by the authors.
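The decoder in the paper replaces the usual RNN with convolutions over the words generated so far; the key ingredient is a causal (left-padded) convolution, so the output at position t never sees future words. A minimal pure-Python sketch of that masking idea (for illustration only, not the project's actual code):

```python
def causal_conv1d(seq, kernel):
    """1-D convolution where output[t] depends only on seq[:t+1].

    Left-padding with zeros by (kernel_size - 1) positions guarantees
    that no "future" element of the sequence leaks into the output,
    which is what makes convolutional decoding autoregressive.
    """
    k = len(kernel)
    padded = [0.0] * (k - 1) + list(seq)  # pad on the left only
    return [sum(kernel[j] * padded[t + j] for j in range(k))
            for t in range(len(seq))]

# Averaging kernel over the current and previous element:
# output[t] = 0.5 * seq[t-1] + 0.5 * seq[t]  (seq[-1] treated as 0)
print(causal_conv1d([1, 2, 3, 4], [0.5, 0.5]))  # → [0.5, 1.5, 2.5, 3.5]
```

Because of the causal padding, changing a later element of the sequence never alters earlier outputs, so the decoder can be trained on full captions in parallel and still generate word by word at inference time.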
Edit the makefile and change the image and caption folders to your dataset paths. You can also specify a different learning rate and batch size.
To train the simple model, run
make train_cnn_cnn_ce
To train the hierarchical attention model, run
make train_cnn_cnn_ha_ce
Model weights will be saved to weights/cnn_cnn_ce<vocab_size><embedding_dim><language_layers>.dat
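The checkpoint filename simply concatenates the run's hyperparameters; a small sketch of how such a path can be built (the parameter names are assumptions for illustration, only the pattern comes from the target above):

```python
def weights_path(vocab_size, embedding_dim, language_layers):
    """Build the checkpoint path following the pattern
    weights/cnn_cnn_ce<vocab_size><embedding_dim><language_layers>.dat
    (parameter names are hypothetical, not taken from the project)."""
    return f"weights/cnn_cnn_ce{vocab_size}{embedding_dim}{language_layers}.dat"

print(weights_path(10000, 512, 3))
# → weights/cnn_cnn_ce100005123.dat
```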
Using the inference.py script, you can run the model on any picture:
python inference.py -i <path_to_picture>
- PyTorch (> 1.1): deep learning framework
- Torchvision (> 0.3)
- pyspellchecker: corrects the spelling of words in captions; used to build a dictionary of valid words.
- Matplotlib
- PIL
- Numpy
This was the final project for the Computer Vision course of the Robotics Engineering degree at the University of Alicante.