
Proposal for Project Vision Transformers Added #200

Merged: 6 commits, May 15, 2024
39 changes: 39 additions & 0 deletions _projects/ViT.md
@@ -0,0 +1,39 @@
---
layout: page
title: Vision Transformers From Scratch
description: Implementing a Vision Transformer Model From Scratch
importance: 1
---

| Project Domains | Mentors | Project Difficulty |
|------------------------------|--------------|--------------------|
| Deep Learning, Transformers, CNNs, LSTMs, Python, PyTorch | Aryan Nanda | Hard |

<br>

### Project Description

An Image is Worth 16x16 Words. While the Transformer architecture has become the de facto standard for natural language processing tasks, its use in computer vision remains limited. This project is an attempt to understand the Transformer architecture and its use in computer vision applications. We will start with naive deep-learning models, cover the basics of processing sequential data with RNNs and LSTMs, study how Vision Transformers work, and finally implement a model that generates a descriptive caption for a given image.
This project will be especially useful for those who want to do open-source research on Transformer-based models in the coming years.
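
As a rough sketch of the core idea the project builds toward, the snippet below shows how an image can be split into 16x16 patches whose embeddings are fed to a standard Transformer encoder. It uses PyTorch's built-in `nn.TransformerEncoder` only for illustration; in the project we would implement these pieces (patch embedding, attention, positional encodings) from scratch, and every class name and layer size here is an illustrative assumption rather than the prescribed design.

```python
# Minimal, illustrative ViT-style encoder sketch (assumed sizes; not the project's final design).
import torch
import torch.nn as nn

class PatchEmbedding(nn.Module):
    """Splits an image into non-overlapping 16x16 patches and projects each to an embedding."""
    def __init__(self, img_size=224, patch_size=16, in_channels=3, embed_dim=768):
        super().__init__()
        self.num_patches = (img_size // patch_size) ** 2
        # A conv with stride == kernel_size is equivalent to cutting patches and applying a linear layer.
        self.proj = nn.Conv2d(in_channels, embed_dim, kernel_size=patch_size, stride=patch_size)

    def forward(self, x):
        x = self.proj(x)                      # (B, embed_dim, H/16, W/16)
        return x.flatten(2).transpose(1, 2)   # (B, num_patches, embed_dim)

class TinyViTEncoder(nn.Module):
    """Patch embeddings + a [CLS] token + learned positions, fed to a Transformer encoder."""
    def __init__(self, embed_dim=768, depth=4, num_heads=8):
        super().__init__()
        self.patch_embed = PatchEmbedding(embed_dim=embed_dim)
        self.cls_token = nn.Parameter(torch.zeros(1, 1, embed_dim))
        self.pos_embed = nn.Parameter(torch.zeros(1, self.patch_embed.num_patches + 1, embed_dim))
        layer = nn.TransformerEncoderLayer(d_model=embed_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)

    def forward(self, x):
        tokens = self.patch_embed(x)                      # (B, 196, 768) for a 224x224 image
        cls = self.cls_token.expand(x.shape[0], -1, -1)   # one [CLS] token per image
        tokens = torch.cat([cls, tokens], dim=1) + self.pos_embed
        return self.encoder(tokens)                       # (B, 197, 768)

if __name__ == "__main__":
    out = TinyViTEncoder()(torch.randn(2, 3, 224, 224))
    print(out.shape)  # torch.Size([2, 197, 768])
```

For the image-captioning goal, the output tokens of such an encoder would be passed to a text decoder; that part is left out of this sketch.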

### Pre-requisites

- Strong Python Programming -> https://www.youtube.com/watch?v=rfscVS0vtbw

- Good conceptual understanding of Linear Algebra -> https://www.youtube.com/watch?v=fNk_zzaMoSs&list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab

- Good understanding of concepts taught in the Pixels workshop (convolutions, playing with images, etc.) -> https://drive.google.com/drive/folders/1vyaM4vVJF-gTf_5movE73Ve3Pq_SUFSt

- Familiarity with neural networks -> https://www.youtube.com/watch?v=aircAruvnKk&list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi (Must Watch)

> It is recommended that candidates interested in this project go through the above resources; doing so will give you an advantage over other candidates during the interview for this project.

### References

- [Explanation of the SOTA Transformer model](https://arxiv.org/abs/1706.03762)
- [Vision Transformers](https://arxiv.org/abs/2010.11929)

### Mentor
Aryan Nanda - [email protected]

> If you have any doubts regarding this project or any difficulty understanding the pre-requisite videos, please reach out to the mentor.
61 changes: 61 additions & 0 deletions _site/projects/ViT/index.html
@@ -0,0 +1,61 @@
<!DOCTYPE html PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" "http://www.w3.org/TR/REC-html40/loose.dtd">
<html><body>
<table>
<thead>
<tr>
<th>Project Domains</th>
<th>Mentors</th>
<th>Project Difficulty</th>
</tr>
</thead>
<tbody>
<tr>
<td>Deep Learning, Transformers, CNNs, Python, PyTorch</td>
<td>Aryan Nanda</td>
<td>Hard</td>
</tr>
</tbody>
</table>

<p><br></p>

<h3 id="project-description">Project Description</h3>

<p>An Image is Worth 16x16 Words. While the Transformer architecture has become the de facto standard for natural language processing tasks, its use in computer vision remains limited. This project is an attempt to understand the Transformer architecture and its use in computer vision applications. We will start with naive deep-learning models, cover the basics of processing sequential data with RNNs and LSTMs, study how Vision Transformers work, and finally implement a model that generates a descriptive caption for a given image. <br>
This project will be especially useful for those who want to do open-source research on Transformer-based models in the coming years.</p>

<h3 id="pre-requisites">Pre-requisites</h3>

<ul>
<li>
<p>Strong Python Programming -&gt; https://www.youtube.com/watch?v=rfscVS0vtbw</p>
</li>
<li>
<p>Good conceptual understanding of Linear Algebra -&gt; https://www.youtube.com/watch?v=fNk_zzaMoSs&amp;list=PLZHQObOWTQDPD3MizzM2xVFitgF8hE_ab</p>
</li>
<li>
<p>Good understanding of concepts taught in the Pixels workshop (convolutions, playing with images, etc.) -&gt; https://drive.google.com/drive/folders/1vyaM4vVJF-gTf_5movE73Ve3Pq_SUFSt</p>
</li>
<li>
<p>Familiarity with neural networks -&gt; https://www.youtube.com/watch?v=aircAruvnKk&amp;list=PLZHQObOWTQDNU6R1_67000Dx_ZCJB-3pi (Must Watch)</p>
</li>
</ul>

<blockquote>
<p>It is recommended that candidates interested in this project go through the above resources; doing so will give you an advantage over other candidates during the interview for this project.</p>
</blockquote>

<h3 id="references">References</h3>

<ul>
<li><a href="https://arxiv.org/abs/1706.03762" rel="external nofollow noopener" target="_blank">Explaination of SOTA Transformers model</a></li>
<li><a href="https://arxiv.org/abs/2010.11929" rel="external nofollow noopener" target="_blank">Vision Transformers</a></li>
</ul>

<h3 id="mentor">Mentor</h3>
<p>Aryan Nanda - [email protected]</p>

<blockquote>
<p>If you have any doubts regarding this project or any difficulty understanding the pre-requisite videos, please reach out to the mentor.</p>
</blockquote>
</body></html>
16 changes: 15 additions & 1 deletion _site/projects/index.html
@@ -26,7 +26,7 @@
<!-- Fonts & Icons -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/@fortawesome/[email protected]/css/all.min.css" integrity="sha256-mUZM63G8m73Mcidfrv5E+Y61y7a12O5mW4ezU3bxqW4=" crossorigin="anonymous">
<link rel="stylesheet" href="https://cdn.jsdelivr.net/npm/[email protected]/css/academicons.min.css" integrity="sha256-i1+4qU2G2860dGGIOJscdC30s9beBXjFfzjWLjBRsBg=" crossorigin="anonymous">
<link rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700|Roboto+Slab:100,300,400,500,700|Material+Icons">
<link rel="stylesheet" type="text/css" href="https://fonts.googleapis.com/css?family=Roboto:300,400,500,700%7CRoboto+Slab:100,300,400,500,700%7CMaterial+Icons">

<!-- Code Syntax Highlighting -->
<link rel="stylesheet" href="https://cdn.jsdelivr.net/gh/jwarby/jekyll-pygments-themes@master/github.css" media="" id="highlight_theme_light">
@@ -126,6 +126,20 @@ <h1 class="post-title">Eklavya Projects</h1>
<div class="grid">
<!-- _includes/projects.html -->
<div class="grid-sizer"></div>
<div class="grid-item">
<a href="/projects/ViT/">
<div class="card hoverable">
<div class="card-body">
<h2 class="card-title text-wrap"></h2>
<p class="card-text"></p>
<div class="row ml-1 mr-1 p-0">
</div>
</div>
</div>
</a>
</div>
<!-- _includes/projects.html -->
<div class="grid-sizer"></div>
<div class="grid-item">
<a href="/projects/Evobourne/">
<div class="card hoverable">