# Multimodal Eigenwords

Python implementation of Multimodal Eigenwords, which extends Eigenwords to multimodal word embedding.
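For intuition, Eigenwords-style methods derive embeddings spectrally from word-context co-occurrence statistics. The following is only an illustrative sketch of that idea (a simplified one-step SVD of a scaled co-occurrence matrix), not this repository's implementation; the function name and all details are made up for the example.

```python
# Illustrative sketch (NOT the repository's code): a simplified
# Eigenwords-style embedding via truncated SVD of a scaled
# word-context co-occurrence matrix.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import svds

def eigenwords_sketch(corpus, vocab, window=1, dim=2):
    """corpus: list of token lists; vocab: list of words; returns (|V|, dim) array."""
    idx = {w: i for i, w in enumerate(vocab)}
    V = len(vocab)
    # word-context co-occurrence counts within +/- `window` positions
    C = lil_matrix((V, 2 * window * V))
    for sent in corpus:
        for i, w in enumerate(sent):
            if w not in idx:
                continue
            for k in range(1, window + 1):
                # separate column blocks for left and right context offsets
                for off, slot in ((i - k, window - k), (i + k, window + k - 1)):
                    if 0 <= off < len(sent) and sent[off] in idx:
                        C[idx[w], slot * V + idx[sent[off]]] += 1
    C = C.tocsr().astype(float)
    # scale rows by 1/sqrt(word frequency), a crude stand-in for CCA whitening
    counts = np.asarray(C.sum(axis=1)).ravel()
    counts[counts == 0] = 1.0
    C = C.multiply(1.0 / np.sqrt(counts)[:, None]).tocsc()
    # left singular vectors, scaled by singular values, serve as embeddings
    U, s, _ = svds(C, k=dim)
    return U * s
```

The real method additionally incorporates image features into the spectral decomposition; see the paper cited below for the actual formulation.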

## Prerequisites

Tested on CentOS 7 with the following environment:

- Anaconda3
  - numpy >= 1.15.4
  - scipy >= 1.1.0
  - scikit-learn >= 0.20.1
- h5py >= 2.9.0
- more-itertools >= 4.3.0
- tqdm >= 4.30.0
- dask >= 1.1.1
- gensim >= 3.5.0
- imageio >= 2.4.1
- pybind11 >= 2.2.4
- openblas (assumed to be installed via conda)
- g++ >= 4.8.5

## Usage

See the demo in mmeigenwords_demo.ipynb. Before running the scripts in the notebook, complete the following steps:

```shell
# compile C++ source
cd src/
make

cd ../data
# download input files (corpus, image features, etc.)
./download_inputs.sh

# download images
# Note that this may take some time, and some images may have been removed from Flickr
./download_images.sh
```

## Citation

```bibtex
@InProceedings{W17-2405,
  author = "Fukui, Kazuki and Oshikiri, Takamasa and Shimodaira, Hidetoshi",
  title = "Spectral Graph-Based Method of Multimodal Word Embedding",
  booktitle = "Proceedings of TextGraphs-11: the Workshop on Graph-based Methods for Natural Language Processing",
  year = "2017",
  publisher = "Association for Computational Linguistics",
  pages = "39--44",
  location = "Vancouver, Canada",
  doi = "10.18653/v1/W17-2405",
  url = "http://aclweb.org/anthology/W17-2405"
}
```