Created by Junhua Mao
This package is a re-implementation of the m-RNN image captioning method using TensorFlow. Training speed is optimized by bucketing training sentences of different lengths. It also supports beam search to decode image features into sentences.
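Bucketing groups sentences of similar length so that each mini-batch is padded only up to its bucket's size rather than to the longest sentence in the corpus. Below is a minimal sketch of the idea; the bucket boundaries and padding token are illustrative, not the package's actual configuration:

```python
# Illustrative bucket boundaries: a sentence of length L goes into the
# smallest bucket with size >= L; longer sentences are skipped.
BUCKETS = [8, 12, 16, 20]

def bucket_sentences(tokenized_sentences):
    buckets = {size: [] for size in BUCKETS}
    for sent in tokenized_sentences:
        for size in BUCKETS:
            if len(sent) <= size:
                # Pad to the bucket size so batches have uniform shape.
                buckets[size].append(sent + ['<pad>'] * (size - len(sent)))
                break
    return buckets

sentences = [['a', 'dog', 'runs'], ['a', 'cat', 'sits', 'on', 'a', 'mat']]
for size, padded in bucket_sentences(sentences).items():
    print(size, padded)
```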
If you find this package useful in your research, please consider citing:
@article{mao2014deep,
  title={Deep Captioning with Multimodal Recurrent Neural Networks (m-RNN)},
  author={Mao, Junhua and Xu, Wei and Yang, Yi and Wang, Jiang and Huang, Zhiheng and Yuille, Alan},
  journal={ICLR},
  year={2015}
}
- TensorFlow 0.8+
- Python 2.7 (needs the numpy, scipy, and nltk packages, all included in Anaconda)
- MS COCO caption toolkit
- Install the MS COCO caption toolkit.
- Suppose the toolkit is installed at $PATH_COCOCap and this package is installed at $PATH_mRNN_CR. Create a soft link to COCOCap as follows:
cd $PATH_mRNN_CR
ln -sf $PATH_COCOCap ./external/coco-caption
- Download the data needed to use a trained m-RNN model:
bash setup.sh
This demo shows how to use a trained model to generate a description for an image. Run demo.py or view demo.ipynb.
The configuration of the trained model is: ./model_conf/mrnn_GRU_conf.py.
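Beam search, used here to decode image features into sentences, keeps the top-scoring partial sentences at each step instead of committing to the single best word. Below is a minimal, model-agnostic sketch; the `next_word_log_probs` callback is a hypothetical stand-in for the model's per-step softmax and is not part of this package's API:

```python
import math

def beam_search(next_word_log_probs, beam_size=3, max_len=20, end='<eos>'):
    # Each beam entry is (log probability, word sequence so far).
    beams = [(0.0, [])]
    for _ in range(max_len):
        candidates = []
        for logp, seq in beams:
            if seq and seq[-1] == end:
                candidates.append((logp, seq))  # finished sentence, keep as-is
                continue
            for word, wlogp in next_word_log_probs(seq).items():
                candidates.append((logp + wlogp, seq + [word]))
        # Keep only the beam_size highest-scoring (partial) sentences.
        beams = sorted(candidates, key=lambda c: c[0], reverse=True)[:beam_size]
    return beams[0][1]

# Toy distribution: always prefers 'a' over ending the sentence.
def toy_model(seq):
    return {'a': math.log(0.6), '<eos>': math.log(0.4)}

print(beam_search(toy_model, beam_size=2, max_len=3))
```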
The model achieves a CIDEr of 0.890 and a BLEU-4 of 0.282 on the 1000 validation images used in the m-RNN paper. It adopts a transposed weight sharing strategy that accelerates training and regularizes the network.
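In transposed weight sharing, the word-embedding matrix is reused, transposed, as the weights of the layer that maps the network's output back to vocabulary scores, which roughly halves the word-related parameters. A minimal numpy sketch of the idea (all dimensions are illustrative):

```python
import numpy as np

vocab_size, embed_dim = 10000, 512
rng = np.random.RandomState(0)

# One shared matrix: its rows are the word embeddings.
embedding = rng.randn(vocab_size, embed_dim).astype(np.float32) * 0.01

word_ids = np.array([3, 42, 7])
inputs = embedding[word_ids]        # encoding: embedding lookup

hidden = inputs                     # stand-in for the RNN's output projection
logits = hidden.dot(embedding.T)    # decoding: the same matrix, transposed
print(logits.shape)                 # (3, 10000)
```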
Use one of the following shell scripts to download pre-extracted image features (Inception-v3 or VGG) for MS COCO.
# If you want to use inception-v3 image feature, then run:
bash ./download_coco_inception_features.sh
# If you want to use VGG image feature, then run:
bash ./download_coco_vgg_features.sh
Alternatively, you can extract the image features yourself. To do so, download the images from the MS COCO dataset first and make sure they can be found under ./datasets/ms_coco/images/ (this directory should contain at least the train2014 and val2014 folders). After that, type:
python ./exp/ms_coco_caption/extract_image_features_all.py
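Roughly, that script walks the image folders and saves one feature vector per image. The sketch below shows the shape of such a loop; `extract_feature` is a hypothetical placeholder for a CNN forward pass, and the real script's output location and file format may differ:

```python
import os
import numpy as np

IMAGE_ROOT = './datasets/ms_coco/images'
FEATURE_ROOT = './datasets/ms_coco/features'   # hypothetical output location

def extract_feature(image_path):
    # Placeholder for a CNN forward pass (e.g. an Inception-v3 pooling layer).
    return np.zeros(2048, dtype=np.float32)

for split in ('train2014', 'val2014'):
    split_dir = os.path.join(IMAGE_ROOT, split)
    out_dir = os.path.join(FEATURE_ROOT, split)
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    for name in os.listdir(split_dir):
        feat = extract_feature(os.path.join(split_dir, name))
        np.save(os.path.join(out_dir, os.path.splitext(name)[0] + '.npy'), feat)
```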
Then build the word dictionary from the training annotations (what this step involves is sketched below):
python ./exp/ms_coco_caption/create_dictionary.py
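Dictionary creation typically amounts to counting word frequencies over the training captions and keeping words above a frequency threshold; a minimal sketch (the threshold and special tokens are illustrative):

```python
from collections import Counter

def build_dictionary(tokenized_captions, min_count=5):
    counts = Counter(w for caption in tokenized_captions for w in caption)
    # Reserve slots for special tokens; rare words map to <unk>.
    vocab = ['<pad>', '<bos>', '<eos>', '<unk>']
    vocab += [w for w, c in counts.most_common() if c >= min_count]
    return {w: i for i, w in enumerate(vocab)}

print(build_dictionary([['a', 'dog'], ['a', 'cat']], min_count=2))
```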
Finally, start training the m-RNN model:
python ./exp/ms_coco_caption/mrnn_trainer_mscoco.py
During training you can watch the loss of your model, but it is often very helpful to also see metrics (e.g. BLEU) of the sentences generated by each checkpoint. To do that, simply open another terminal and run:
python ./exp/ms_coco_caption/mrnn_validator_mscoco.py
The trained models and their evaluation results are saved in ./cache/models/mscoco/.
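For a quick sanity check outside the validator, nltk (listed in the requirements) can compute BLEU for a single generated sentence; note that the official numbers come from the MS COCO caption toolkit, not from this snippet:

```python
from nltk.translate.bleu_score import sentence_bleu, SmoothingFunction

references = [['a', 'dog', 'runs', 'on', 'the', 'beach'],
              ['a', 'dog', 'is', 'running', 'along', 'a', 'beach']]
hypothesis = ['a', 'dog', 'runs', 'along', 'the', 'beach']

# BLEU-4 with uniform n-gram weights; smoothing avoids a zero score when
# some higher-order n-grams have no match in the references.
score = sentence_bleu(references, hypothesis,
                      weights=(0.25, 0.25, 0.25, 0.25),
                      smoothing_function=SmoothingFunction().method1)
print(score)
```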
You should arrange the annotations of other datasets in the same format as our MS COCO annotation files. See ./datasets/ms_coco/mscoco_anno_files/README.md for details.
- Allow end-to-end finetuning of the vision network parameters.