Explore the complete Deep Learning MasterClass in TensorFlow and PyTorch, covering neural networks, GANs, Transformers, VAEs, and more. Your go-to resource for mastering cutting-edge techniques.

Deep Learning Master Class

Table of Contents
  1. About The Project
  2. Getting Started
  3. Roadmap
  4. Contributing
  5. License

About The Project

This project was originally initiated under the influence of Google Developer Student Clubs and Microsoft Learn Student Ambassadors at the SSUET campus, to teach as many students as possible about technology. There are great resources for learning Deep Learning on the internet, but most lack a properly sequenced roadmap, which often causes students to give up halfway.

This project is designed to give students the quickest and easiest hands-on journey through Deep Learning using TensorFlow and PyTorch in Python 3. The notebooks do not currently include mathematical material, but we are working with experienced ML writers to add it.

The goal of this repository: with just 3 hours per day for only 90 days, you can reach a level in Deep Learning you may never have imagined.

Built With

The entire project (course) is built on Python 3 (we recommend Python 3.6 to 3.8), along with a handful of widely used Python packages. And the most important ingredient here is LOVE.

Getting Started

Install Python 3.8.x on your local machine. Once done, open the terminal and run

pip install notebook
pip install tensorflow
pip install torch
pip install opencv-python

(Note that Jupyter's package is named "notebook" on PyPI, and PyTorch's is named "torch", not "pytorch".)

Then reopen your terminal in your desired directory, and run

jupyter notebook

This way, Jupyter Notebook will start its kernel and serve on localhost.

The other, easier option is to sign in to your Google Account and visit https://colab.research.google.com . This opens Colab, an online Jupyter Notebook workspace by Google. The environment comes preconfigured, so you can start working immediately.

Prerequisites

Your system should run Windows 7 or equivalent, with a minimum of 3 to 4 GB of memory available.

Another essential prerequisite is Machine Learning. If you're not yet comfortable with it, or don't know much about it, there's absolutely no need to worry: head over to the Machine Learning Zero to Hero Course, which takes about 30 days to build solid Machine Learning foundations. But believe me, without it, learning Deep Learning is like learning to fly a jet before learning to fly a basic plane.

Roadmap

See the open issues for a list of proposed features (and known issues). The roadmap of this project comprises THIRTEEN sections.

Introduction to Neural Networks
A neural network is a series of algorithms that endeavors to recognize underlying relationships in a set of data through a process that mimics the way the human brain operates. In this sense, neural networks refer to systems of neurons, either organic or artificial in nature.
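As a minimal illustration of that idea, here is a single artificial neuron in plain NumPy: a weighted sum of inputs plus a bias, squashed through a sigmoid activation. The input, weight, and bias values are arbitrary, chosen just for the sketch.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def neuron_forward(x, w, b):
    """One artificial neuron: weighted sum of inputs plus bias, then sigmoid."""
    return sigmoid(np.dot(w, x) + b)

x = np.array([0.5, -1.0, 2.0])   # inputs
w = np.array([0.4, 0.3, -0.2])   # learned weights (here: arbitrary values)
b = 0.1                          # bias
y = neuron_forward(x, w, b)      # activation in (0, 1)
```

A full network is simply many of these neurons stacked in layers, with the weights adjusted during training.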
Tensorflow & Keras
TensorFlow is an open-source library developed by Google primarily for deep learning applications. It also supports traditional machine learning.
PyTorch
PyTorch is an open-source machine learning library based on the Torch library, used for applications such as computer vision and natural language processing, primarily developed by Facebook's AI Research lab (FAIR).
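A minimal PyTorch sketch of what a model looks like: a two-layer network defined as an nn.Module subclass (the layer sizes here are arbitrary, not from any particular lesson).

```python
import torch
import torch.nn as nn

class TinyNet(nn.Module):
    """A tiny two-layer fully connected network."""
    def __init__(self):
        super().__init__()
        self.fc1 = nn.Linear(4, 8)
        self.fc2 = nn.Linear(8, 2)

    def forward(self, x):
        return self.fc2(torch.relu(self.fc1(x)))

net = TinyNet()
out = net(torch.randn(3, 4))  # batch of 3 samples, 4 features each
```

Calling the module runs its forward pass; PyTorch records the operations so gradients can be computed later with autograd.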
Convolutional Neural Network
A Convolutional Neural Network, also known as CNN or ConvNet, is a class of neural networks that specializes in processing data that has a grid-like topology, such as an image. Each neuron works in its own receptive field and is connected to other neurons in a way that they cover the entire visual field.
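The core operation of a CNN can be sketched in a few lines of NumPy: slide a small kernel over the image and take a weighted sum at each position (this naive loop is for clarity; real frameworks use heavily optimized implementations).

```python
import numpy as np

def conv2d_valid(image, kernel):
    """2-D convolution with 'valid' padding: the building block of a CNN layer."""
    H, W = image.shape
    kH, kW = kernel.shape
    out = np.zeros((H - kH + 1, W - kW + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i+kH, j:j+kW] * kernel)
    return out

image = np.arange(16, dtype=float).reshape(4, 4)
edge = np.array([[1.0, -1.0]])       # a tiny horizontal edge-detector kernel
fmap = conv2d_valid(image, edge)     # 4x3 feature map
```

In a trained CNN, the kernel values themselves are learned, so the network discovers which local patterns (edges, textures, shapes) matter for the task.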
Recurrent Neural Network
Recurrent neural networks (RNNs) are a standard algorithm for sequential data, used in systems such as Apple's Siri and Google's voice search. An RNN remembers its inputs through an internal memory, which makes it well suited to machine learning problems that involve sequential data.
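That "internal memory" is just a hidden state carried from one time step to the next, as this NumPy sketch of a vanilla RNN shows (weights are random, not trained):

```python
import numpy as np

def rnn_forward(xs, Wx, Wh, b):
    """Run a vanilla RNN over a sequence: the hidden state h is the 'memory'
    carried between time steps."""
    h = np.zeros(Wh.shape[0])
    for x in xs:
        h = np.tanh(Wx @ x + Wh @ h + b)   # new state depends on input AND old state
    return h

rng = np.random.default_rng(0)
seq = rng.standard_normal((5, 3))          # 5 time steps, 3 features each
Wx = rng.standard_normal((4, 3))
Wh = rng.standard_normal((4, 4))
b = np.zeros(4)
h_final = rnn_forward(seq, Wx, Wh, b)      # 4-dimensional summary of the sequence
```

The final hidden state summarizes everything the network has seen, which is why RNNs suit tasks like speech and text.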
Self Organizing Maps
A self-organizing map (SOM) or self-organizing feature map (SOFM) is a type of artificial neural network (ANN) that is trained using unsupervised learning to produce a low-dimensional (typically two-dimensional), discretized representation of the input space of the training samples, called a map, and is, therefore, a method to do dimensionality reduction.
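One SOM training step, sketched in NumPy: find the best-matching unit (the map node nearest the input) and pull its weights toward that input. The neighborhood update to nearby units is omitted here for brevity.

```python
import numpy as np

def som_step(weights, x, lr=0.5):
    """One simplified SOM step: move the best-matching unit toward the input."""
    dists = np.linalg.norm(weights - x, axis=1)
    bmu = int(np.argmin(dists))          # best-matching unit
    weights[bmu] += lr * (x - weights[bmu])
    return bmu

rng = np.random.default_rng(1)
weights = rng.random((6, 2))             # a 6-unit map embedding 2-D inputs
x = np.array([0.9, 0.1])
before = weights.copy()
bmu = som_step(weights, x)               # the BMU has moved closer to x
```

Repeated over many inputs (with a shrinking neighborhood and learning rate), the map self-organizes so that nearby units respond to similar inputs.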
Boltzmann Machine
A Boltzmann machine (also called a stochastic Hopfield network with hidden units or Sherrington–Kirkpatrick model with external field or stochastic Ising-Lenz-Little model) is a type of stochastic recurrent neural network. It is a Markov random field.
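The stochastic character of a Boltzmann machine shows up in how its units fire. Here is one half of a Gibbs sampling step for a restricted Boltzmann machine (RBM), sketched in NumPy with small random weights: each hidden unit switches on with a probability given by a sigmoid of its input.

```python
import numpy as np

def sample_hidden(v, W, b_h, rng):
    """Sample RBM hidden units: each turns on with probability sigmoid(W v + b)."""
    p = 1.0 / (1.0 + np.exp(-(W @ v + b_h)))
    h = (rng.random(p.shape) < p).astype(float)
    return h, p

rng = np.random.default_rng(2)
v = np.array([1.0, 0.0, 1.0, 1.0])        # binary visible units
W = rng.standard_normal((3, 4)) * 0.1     # weights (hidden x visible)
b_h = np.zeros(3)
h, p = sample_hidden(v, W, b_h, rng)      # h is a random binary vector
```

A symmetric step samples the visible units back from the hidden ones; alternating the two is the Gibbs chain used in RBM training (e.g. contrastive divergence).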
Transfer Learning
Transfer learning is a machine learning method where a model developed for one task is reused as the starting point for a model on a second task. It is especially common in deep learning, where large pretrained networks are adapted to new predictive modeling problems.
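The mechanics in PyTorch are simple: freeze the reused base and train only a fresh head. In this sketch the "pretrained" base is a stand-in (in practice it would be, say, a torchvision model with downloaded weights):

```python
import torch.nn as nn

# Pretend `base` was pretrained on a large dataset; freeze its parameters
# so the reused features are not updated during fine-tuning.
base = nn.Sequential(nn.Linear(16, 8), nn.ReLU())
for p in base.parameters():
    p.requires_grad = False

head = nn.Linear(8, 2)                 # only this new head will be trained
model = nn.Sequential(base, head)

trainable = [p for p in model.parameters() if p.requires_grad]
```

Passing only `trainable` to the optimizer ensures gradient updates touch just the new head; unfreezing some base layers later is the usual fine-tuning refinement.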
Single Shot Detector
SSD is a single-shot detector. It has no delegated region proposal network and predicts the boundary boxes and the classes directly from feature maps in one single pass. To improve accuracy, SSD introduces small convolutional filters to predict object classes and offsets to default boundary boxes.
Neural Style Transfer
Neural style transfer is an optimization technique used to take two images—a content image and a style reference image (such as an artwork by a famous painter)—and blend them together so the output image looks like the content image, but “painted” in the style of the style reference image.
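The "style" in neural style transfer is usually measured with a Gram matrix: the correlations between feature channels, with spatial layout discarded. A small NumPy sketch, using random stand-in features:

```python
import numpy as np

def gram_matrix(features):
    """Gram matrix of a (channels, height, width) feature map:
    channel-to-channel correlations, averaged over spatial positions."""
    C = features.shape[0]
    F = features.reshape(C, -1)            # channels x (height*width)
    return (F @ F.T) / F.shape[1]

feats = np.random.default_rng(3).standard_normal((4, 8, 8))  # 4 channels, 8x8
G = gram_matrix(feats)                     # 4x4 symmetric style matrix
```

Style transfer then optimizes the output image so its Gram matrices match the style image's while its raw features match the content image's.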
Autoencoders
An autoencoder is a type of artificial neural network used to learn efficient data codings in an unsupervised manner. The aim of an autoencoder is to learn a representation (encoding) for a set of data, typically for dimensionality reduction, by training the network to ignore signal “noise”.
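The defining feature is the bottleneck. This untrained linear sketch in NumPy shows the shape of the computation: an 8-dimensional input is forced through a 3-dimensional code and then reconstructed (all sizes and weights are arbitrary).

```python
import numpy as np

rng = np.random.default_rng(4)
W_enc = rng.standard_normal((3, 8)) * 0.1   # encoder weights
W_dec = rng.standard_normal((8, 3)) * 0.1   # decoder weights

def autoencode(x):
    code = np.tanh(W_enc @ x)    # encoder: compress 8 numbers into 3
    recon = W_dec @ code         # decoder: reconstruct the 8 inputs
    return code, recon

x = rng.standard_normal(8)
code, recon = autoencode(x)
```

Training minimizes the reconstruction error between `x` and `recon`, which forces the 3-number code to keep only the most informative structure in the data.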
Generative Adversarial Networks
Generative Adversarial Networks, or GANs, are a deep-learning-based generative model. More generally, GANs are a model architecture for training a generative model, and it is most common to use deep learning models in this architecture.
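The two-player setup can be sketched in PyTorch in a few lines: a generator maps noise to fake samples, and a discriminator scores samples as real or fake. The layer sizes are arbitrary, and the alternating training loop is omitted.

```python
import torch
import torch.nn as nn

# Generator: noise vector -> fake 2-D sample.
G = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 2))
# Discriminator: 2-D sample -> probability of being real.
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())

z = torch.randn(16, 10)        # a batch of 16 noise vectors
fake = G(z)                    # 16 generated 2-D points
score = D(fake)                # discriminator's probability of "real"
```

Training alternates: the discriminator learns to push these scores toward 0 for fakes and 1 for real data, while the generator learns to push its fakes' scores toward 1.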
Natural Language Processing
Natural language processing (NLP) is a branch of artificial intelligence that helps computers understand, interpret and manipulate human language. NLP draws from many disciplines, including computer science and computational linguistics, in its pursuit to fill the gap between human communication and computer understanding.
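A classic first step in NLP is turning raw text into numbers. This bag-of-words sketch, in plain Python, builds a vocabulary and counts word occurrences per document:

```python
def bag_of_words(docs):
    """Represent each document as a vector of word counts over a shared vocabulary."""
    vocab = sorted({w for d in docs for w in d.lower().split()})
    index = {w: i for i, w in enumerate(vocab)}
    vectors = []
    for d in docs:
        v = [0] * len(vocab)
        for w in d.lower().split():
            v[index[w]] += 1
        vectors.append(v)
    return vocab, vectors

docs = ["deep learning is fun", "learning deep learning"]
vocab, vecs = bag_of_words(docs)
# vocab: ['deep', 'fun', 'is', 'learning']
```

Modern NLP replaces these sparse counts with learned embeddings and sequence models, but the idea of mapping text into vectors is the same.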

Contributing

Contributions are what make the open-source community such an amazing place to learn, inspire, and create. Any contributions you make are greatly appreciated.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

License

Distributed under the MIT License. See LICENSE for more information.

🤝🏻  Connect with Me
