Revised and expanded
AI, ML and Deep Learning | Note | Video | Code |
---|---|---|---|
Overview | - | - | |
Supervised Learning | - | - | |
Multilayer Perceptron (MLP) | - | - | |
Optimization | - | - | |
AI, ML and Deep Learning | Note | Video | Code |
---|---|---|---|
LLM | |||
LangChain | - | Jupyter | |
LLM Fine Tuning & Document Query | - | Jupyter | |
Document Query using Chroma | - | - | Jupyter |
Dolly (Free LLM) | - | - | Jupyter |
LVM | |||
Segment Anything Model (SAM) | - | - | Prompts & All Masks |
Open CLIP & CoCa | - | - | Jupyter |
Agents | |||
HuggingGPT | - | Agents | |
Large MultiModal Models (L3M) | |||
ImageBind | - | ImageBind | |
Stable Diffusion | |||
Diffusion | - | - | Diffusers |
ControlNet | - | - | ControlNet |
Welcome to the 2022 version of the Deep Learning course. We made major changes to the coverage and delivery of this course to reflect recent advances in the field.
Assuming you already have `anaconda` or `venv`, install the required Python packages to run the experiments in this version: `pip install -r requirements.txt`
AI, ML and Deep Learning | Note | Video | Code |
---|---|---|---|
Overview | YouTube | - | |
Toolkit | |||
Development Environment and Code Editor | YouTube | - | |
Python | YouTube | - | |
Numpy | YouTube | Jupyter | |
Einsum | YouTube | Jupyter | |
Einops | YouTube | Jupyter & Jupyter (Audio) | |
PyTorch & Timm | YouTube | PyTorch/Timm & Input Jupyter | |
Gradio & Hugging Face | YouTube | Jupyter | |
Weights and Biases | YouTube | Jupyter | |
Hugging Face Accelerator | Same as W&B | Same as W&B | Jupyter & Python |
Datasets & Dataloaders | YouTube | Jupyter | |
Supervised Learning | YouTube | ||
PyTorch Lightning | YouTube | MNIST & KWS | |
Keyword Spotting App | `cd versions/2022/supervised/python && python3 kws-infer.py --gui` | | |
Building blocks: MLPs, CNNs, RNNs, Transformers | | | |
MLP | YouTube | MLP on CIFAR10 | |
CNN | YouTube | CNN on CIFAR10 | |
Transformer | YouTube | Transformer on CIFAR10 | |
Backpropagation | |||
Optimization | |||
Regularization | |||
Unsupervised Learning | Soon | ||
AutoEncoders | YouTube | AE MNIST Colorization CIFAR10 | |
Variational AutoEncoders | Soon | ||
Practical Applications: Vision, Speech, NLP | Soon | | |
- Emphasis on tools to use and deploy deep learning models. In the past, we learned how to build and train models to perform certain tasks. However, we often want to use a pre-trained model for immediate deployment, testing, or demonstration. Hence, we will use tools such as `huggingface`, `gradio` and `streamlit` in our discussions.
- Emphasis on understanding deep learning building blocks. The ability to build, train, and test models is important. However, when we want to optimize and deploy a deep learning model on new hardware or run it in production, we need an in-depth understanding of the code implementation of our algorithms. Hence, there will be emphasis on low-level algorithms and their code implementations.
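As a taste of what "understanding the building blocks" means in code, here is a minimal sketch of a two-layer MLP forward pass written directly in `numpy`. The shapes and variable names are illustrative assumptions, not the course's reference implementation.

```python
import numpy as np

def mlp_forward(x, w1, b1, w2, b2):
    """Two-layer MLP: linear -> ReLU -> linear.

    x: (batch, in_dim) input
    w1: (in_dim, hidden), b1: (hidden,)
    w2: (hidden, out_dim), b2: (out_dim,)
    """
    h = np.maximum(x @ w1 + b1, 0.0)  # hidden layer with ReLU activation
    return h @ w2 + b2                # linear output layer (logits)

# Illustrative shapes: batch of 8, 16 input features, 32 hidden units, 10 outputs.
rng = np.random.default_rng(42)
x = rng.standard_normal((8, 16))
w1 = rng.standard_normal((16, 32)) * 0.1
b1 = np.zeros(32)
w2 = rng.standard_normal((32, 10)) * 0.1
b2 = np.zeros(10)

logits = mlp_forward(x, w1, b1, w2, b2)
```

Every framework layer (e.g. `torch.nn.Linear`) ultimately reduces to matrix multiplies and element-wise nonlinearities like these.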
- Emphasis on practical applications. Deep learning can do a lot more than recognition. Hence, we will highlight practical applications in vision (detection, segmentation), speech (ASR, TTS), and text (sentiment, summarization).
- Various levels of abstraction. We will present deep learning concepts from low-level `numpy` and `einops`, to mid-level frameworks such as PyTorch, and to high-level APIs such as `huggingface`, `gradio` and `streamlit`. This enables us to apply deep learning principles depending on the problem constraints.
- Emphasis on individual presentation of assignments, machine exercises, and projects. Online learning is hard. To maximize student learning, this course focuses on the exchange of ideas to ensure individual student progress.
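To illustrate what "various levels of abstraction" looks like in practice, here is a hedged sketch of the same batched matrix multiplication written three ways: explicit loops, `np.einsum` with named axes, and the high-level `@` operator. The array shapes are illustrative assumptions, not taken from the course materials.

```python
import numpy as np

# Illustrative shapes (assumptions): a batch of 4 samples,
# each a (3, 5) matrix, multiplied by a shared (5, 2) weight matrix.
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 3, 5))
w = rng.standard_normal((5, 2))

# Low level: explicit loops spell out every multiply-accumulate.
low = np.zeros((4, 3, 2))
for b in range(4):
    for i in range(3):
        for j in range(2):
            for k in range(5):
                low[b, i, j] += x[b, i, k] * w[k, j]

# Mid level: einsum names the axes and lets numpy run the loops.
mid = np.einsum("bik,kj->bij", x, w)

# High level: the matmul operator broadcasts over the batch axis.
high = x @ w
```

All three produce the same result; which one you reach for depends on whether you need to inspect the algorithm, express an unusual contraction, or just get the answer.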
If you find this work useful, please give it a star, fork, or cite:
@misc{atienza2020dl,
title={Deep Learning Lecture Notes},
author={Atienza, Rowel},
year={2020},
publisher = {GitHub},
journal = {GitHub repository},
howpublished = {\url{https://github.com/roatienza/Deep-Learning-Experiments}},
}