CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet, given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3.
With the CLIP approach, we can train any pair of image and text encoders together so that they relate images and text: the trained model gives a relatedness score for any given text and image. We fine-tuned a Vision Transformer (ViT) as the vision encoder and roberta-zwnj-wnli-mean-tokens as the Farsi text encoder.
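As a rough sketch of how this works (not the project's exact code), you can embed a Persian caption with the text encoder and an image with the vision encoder, then take the cosine similarity of the pooled outputs as the relatedness score. The pooling strategy, the image preprocessing (borrowed here from openai/clip-vit-base-patch32), and the file name are assumptions for illustration only:

```python
import torch
from PIL import Image
from transformers import AutoModel, AutoTokenizer, CLIPVisionModel, CLIPFeatureExtractor

# fine-tuned encoders (see the usage section below)
vision_encoder = CLIPVisionModel.from_pretrained('arman-aminian/farsi-image-search-vision')
text_encoder = AutoModel.from_pretrained('arman-aminian/farsi-image-search-text')
tokenizer = AutoTokenizer.from_pretrained('arman-aminian/farsi-image-search-text')
# assumption: standard CLIP preprocessing matches the fine-tuned vision encoder
feature_extractor = CLIPFeatureExtractor.from_pretrained('openai/clip-vit-base-patch32')

def embed_text(texts):
    with torch.no_grad():
        return text_encoder(**tokenizer(texts, return_tensors='pt', padding=True)).pooler_output

def embed_image(image):
    with torch.no_grad():
        return vision_encoder(**feature_extractor(image, return_tensors='pt')).pooler_output

# relatedness score = cosine similarity between the text and image embeddings
score = torch.nn.functional.cosine_similarity(
    embed_text(['ورزش کردن گروهی']),         # "working out as a group"
    embed_image(Image.open('example.jpg')),  # any local image
)
print(score.item())
```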
You can find how to train the model in the CLIP training notebook.
To train (fine-tune) this model, we need pairs of images and the Persian text associated with them. Since Persian data in this field is not readily available and manual labeling is costly, we decided to translate the available English datasets and to obtain the rest of the data by web crawling.
There were no datasets of Persian-captioned images, so we translated datasets with English captions into Persian with Google Translate, using the googletrans Python package.
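The translation step can be sketched roughly as follows, using googletrans' classic synchronous API (recent releases of the package may expose a different interface):

```python
from googletrans import Translator

translator = Translator()

def translate_caption(english_caption):
    # translate an English caption to Persian ("fa")
    return translator.translate(english_caption, src='en', dest='fa').text

print(translate_caption('A group of people playing volleyball on the beach'))
```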
We then evaluated these translations with a bilingual Sentence-BERT model, distiluse-base-multilingual-cased-v2, trained for sentence similarity: we computed the cosine similarity between the embeddings of each English caption and its Persian translation. The histogram of this score is shown below:
Finally, we kept only the top-scoring translations, filtering out low-similarity pairs. Some samples of the final dataframe:
More details of the translation part can be found in this notebook.
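The scoring step above can be sketched with the sentence-transformers library as follows; the caption pair and the threshold value are illustrative, not the project's exact data or cut-off:

```python
from sentence_transformers import SentenceTransformer, util

model = SentenceTransformer('distiluse-base-multilingual-cased-v2')

english = 'A group of people playing volleyball on the beach'
persian = 'گروهی از مردم در ساحل والیبال بازی می‌کنند'  # its Persian translation

emb_en, emb_fa = model.encode([english, persian], convert_to_tensor=True)
score = util.cos_sim(emb_en, emb_fa).item()

# keep the pair only if the translation stays close to the original caption
keep = score > 0.7  # illustrative threshold
```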
To improve our model's performance, we also crawled Divar posts using its API and saved the image-title pairs to Google Drive. You can see more details in this notebook, and some samples of the final data are shown below.
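As a rough illustration, the crawling step boils down to fetching posts and keeping each post's image URL and title; the endpoint URL and response fields below are placeholders, not Divar's actual API (the real crawler is in the linked notebook):

```python
import requests

# placeholder for Divar's search endpoint; the real URL, parameters and
# response schema used by the project are in the crawling notebook
DIVAR_SEARCH_URL = 'https://api.divar.ir/<search-endpoint>'

def fetch_image_title_pairs(query):
    response = requests.get(DIVAR_SEARCH_URL, params={'q': query})
    response.raise_for_status()
    posts = response.json().get('posts', [])  # hypothetical response shape
    return [(post['image_url'], post['title']) for post in posts]
```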
This metric is used to evaluate how good a model's image search is.
Acc@k definition: is the best image (the one most related to the text query) among the top-k outputs of the model?
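Concretely, Acc@k can be computed from the model's ranked outputs like this (a small sketch; the variable names are illustrative):

```python
def acc_at_k(ranked_image_ids, true_image_id, k):
    """1 if the ground-truth image appears among the top-k results, else 0."""
    return int(true_image_id in ranked_image_ids[:k])

def mean_acc_at_k(queries, k):
    # queries: list of (ranked_image_ids, true_image_id) pairs, one per text query
    return sum(acc_at_k(ranked, true, k) for ranked, true in queries) / len(queries)
```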
We calculated this metric for both models (CLIP & baseline) on two datasets:
We can see the results of our CLIP model on a sample of the Flickr dataset of size 1000 (the right diagram has a log scale on its x-axis):
And here are the results of our CLIP model on a sample of the nocaps dataset of size 1000 (the right diagram has a log scale on its x-axis):
You can find more details in the notebooks for CLIP evaluation and baseline evaluation.
The model is zero-shot, so it should work on new tasks without additional training.
We used both models (CLIP & baseline) to classify images in two datasets:
- STL10: unseen data with 10 different categories.
- OxfordIIIT Pet: unseen data with 37 different types of pets.
We also created a dataset from "OxfordIIIT Pet" that keeps only the "dog" and "cat" labels.
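Zero-shot classification then amounts to embedding the (Persian) class names as if they were captions and picking the class closest to the image embedding. The sketch below reuses the embed_text / embed_image helpers from the earlier sketch; the class names and image file are illustrative, not the exact prompts used in the notebooks:

```python
import torch
from PIL import Image

labels = ['سگ', 'گربه']  # "dog", "cat" for the two-class OxfordIIIT Pet setup

label_embeddings = embed_text(labels)                 # (num_classes, hidden_dim)
image_embedding = embed_image(Image.open('pet.jpg'))  # (1, hidden_dim)

# predict the class whose text embedding is most similar to the image embedding
scores = torch.nn.functional.cosine_similarity(image_embedding, label_embeddings)
predicted = labels[scores.argmax().item()]
print(predicted)
```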
We can see the results of CLIP model classification on the two datasets:
And here are the results of the baseline model in classification:
You can find more details in the notebooks for CLIP zero-shot and baseline zero-shot.
from transformers import AutoModel, AutoTokenizer, CLIPVisionModel

# load the fine-tuned vision encoder
vision_encoder = CLIPVisionModel.from_pretrained('arman-aminian/farsi-image-search-vision')
# load our fine-tuned text encoder and tokenizer
text_encoder = AutoModel.from_pretrained('arman-aminian/farsi-image-search-text')
text_tokenizer = AutoTokenizer.from_pretrained('arman-aminian/farsi-image-search-text')

# ImageSearchDemo is a small helper class from the project's notebooks (not part of transformers);
# `test` is a dataframe whose `image` column holds the paths of the images to search over
search = ImageSearchDemo(vision_encoder, text_encoder, text_tokenizer, device='cuda')

# encode the images once, then run text queries against them
search.compute_image_embeddings(test.image.to_list())
search.image_search('ورزش کردن گروهی')  # "working out as a group"
We have deployed our model as a Hugging Face Space, which you can query at https://huggingface.co/spaces/arman-aminian/farsi-image-search right now! Please keep in mind that we trained the model under time and hardware constraints. Also, the demo searches a limited dataset and always shows the ten best results, so there may not be ten photos in the demo dataset that are completely related to your query :D
To keep the results trustworthy, the dataset selected for the demo is completely new and taken from Unsplash; not even other parts of this dataset were seen during the model's training.