Capture the learnings and implementations from the Hugging Face course 2.


Objectives:

Part 2 will focus on the other common NLP tasks: token classification, language modeling (causal and masked), translation, summarization, and question answering. It will also take a deeper dive into the wider Hugging Face ecosystem, in particular 🤗 Datasets and 🤗 Tokenizers.

Course link: https://huggingface.co/course/chapter5?fw=pt
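
Since the course dives into 🤗 Datasets, here is a minimal sketch of loading and inspecting a dataset with the library (the glue/mrpc dataset is only an illustrative choice, not something used in this repo):

```python
from datasets import load_dataset

# Load a dataset from the Hugging Face Hub; any Hub dataset works the same way.
raw_datasets = load_dataset("glue", "mrpc")

# A DatasetDict maps split names to Arrow-backed Dataset objects.
print(raw_datasets)
print(raw_datasets["train"][0])        # first training example
print(raw_datasets["train"].features)  # column names and types
```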

Amazon SageMaker Community

Project Requirements:

Goal: Create a text-to-image search engine that allows users to search for images based on natural language queries, using techniques such as Multilingual Knowledge Distillation to extend the embeddings to new languages.
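
As a rough sketch of how Multilingual Knowledge Distillation could be applied here, following the sentence-transformers recipe of Reimers & Gurevych (the model names and the parallel-sentence file below are assumptions for illustration, not this project's actual setup):

```python
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, models, losses
from sentence_transformers.datasets import ParallelSentencesDataset

# Teacher: the English CLIP model (text encoder used here); 512-dim embeddings.
teacher = SentenceTransformer("clip-ViT-B-32")

# Student: a multilingual transformer projected into the teacher's 512-dim space.
word_emb = models.Transformer("distilbert-base-multilingual-cased", max_seq_length=128)
pooling = models.Pooling(word_emb.get_word_embedding_dimension())
dense = models.Dense(in_features=pooling.get_sentence_embedding_dimension(), out_features=512)
student = SentenceTransformer(modules=[word_emb, pooling, dense])

# Parallel data: tab-separated "english_sentence<TAB>translation" lines (placeholder file).
train_data = ParallelSentencesDataset(student_model=student, teacher_model=teacher)
train_data.load_data("parallel-sentences.tsv")
train_loader = DataLoader(train_data, shuffle=True, batch_size=32)

# MSE loss pulls the student's embeddings of both the English sentence and its
# translation towards the teacher's embedding of the English sentence.
train_loss = losses.MSELoss(model=student)
student.fit(train_objectives=[(train_loader, train_loss)], epochs=1, warmup_steps=100)
```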

Example of Models:

Below are some CLIP model examples.

Course Notebook references:

CLIP Model:

CLIP model overview:

The CLIP model was proposed in Learning Transferable Visual Models From Natural Language Supervision by Alec Radford, Jong Wook Kim, Chris Hallacy, Aditya Ramesh, Gabriel Goh, Sandhini Agarwal, Girish Sastry, Amanda Askell, Pamela Mishkin, Jack Clark, Gretchen Krueger, and Ilya Sutskever. CLIP (Contrastive Language-Image Pre-Training) is a neural network trained on a variety of (image, text) pairs. It can be instructed in natural language to predict the most relevant text snippet given an image, without directly optimizing for the task, similarly to the zero-shot capabilities of GPT-2 and GPT-3. (Source: Hugging Face documentation.)
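
For reference, a minimal zero-shot usage sketch with the 🤗 Transformers implementation (the COCO image URL and the candidate captions are just placeholders):

```python
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# Any image will do; this COCO photo is only an example.
url = "http://images.cocodataset.org/val2017/000000039769.jpg"
image = Image.open(requests.get(url, stream=True).raw)

# Score the image against a set of natural-language candidate captions.
inputs = processor(text=["a photo of a cat", "a photo of a dog"],
                   images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)
probs = outputs.logits_per_image.softmax(dim=1)  # image-text similarity as probabilities
print(probs)
```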

DEMO: Text to Image using CLIP Model

Short Description: I made use of the CLIP model implemented in the Hugging Face Transformers library. The openai/clip-vit-base-patch32 checkpoint was used as the pretrained model, applied to the unsplash-25k-photos dataset.

Please check the demo at the link below. You can use the examples in the bottom left corner for ideas on how it works. The demo was built using Hugging Face Spaces and Gradio.

Link: https://huggingface.co/spaces/marcelcastrobr/CLIP-image-search
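
A rough sketch of how such a text-to-image search can be wired up on top of the same checkpoint (the image paths and the query below are placeholders; the actual demo indexes the unsplash-25k-photos dataset and caches the image embeddings):

```python
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image_paths = ["photos/0001.jpg", "photos/0002.jpg"]  # placeholder paths
images = [Image.open(p) for p in image_paths]

with torch.no_grad():
    # Pre-compute and normalise one embedding per image (done once, then cached).
    image_inputs = processor(images=images, return_tensors="pt")
    image_emb = model.get_image_features(**image_inputs)
    image_emb = image_emb / image_emb.norm(dim=-1, keepdim=True)

    # Embed the free-text query in the same space at search time.
    text_inputs = processor(text=["two dogs playing in the snow"],
                            return_tensors="pt", padding=True)
    text_emb = model.get_text_features(**text_inputs)
    text_emb = text_emb / text_emb.norm(dim=-1, keepdim=True)

# Cosine similarity between the query and every image, highest first.
scores = (text_emb @ image_emb.T).squeeze(0)
best = scores.argsort(descending=True)
print([image_paths[i] for i in best])
```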

[Screenshot of the CLIP image-search demo]

Course certificate 😆 ✅

[Hugging Face course completion certificate]
