
Deep Learning

Lesson 8

Introduction to CPU and GPU

A CPU (central processing unit) is a generalized processor designed to carry out a wide variety of tasks. A CPU and a GPU (graphics processing unit) work together to boost data throughput and the number of simultaneous calculations within an application.


A GPU (graphics processing unit) is a specialized processor with enhanced mathematical computation capability, ideal for computer graphics and machine-learning tasks.


GPUs were originally designed to render visuals for computer graphics and video game consoles, but since the early 2010s they have also been used to accelerate calculations involving large amounts of data. For example, at any given time a graphically intense video game may have hundreds or thousands of polygons on the screen, each with its own movement, color, lighting, and other characteristics. That kind of highly parallel workload is too much for a CPU to handle efficiently, so GPUs are used to solve this problem.



NVIDIA VS AMD

NVIDIA and AMD are two of the most popular companies that manufacture GPUs. In deep learning, we mostly use NVIDIA GPUs. Over the years, NVIDIA has added hardware-level optimizations to better support AI-related tasks. Besides, their CUDA software development kit (SDK) supports all major machine learning frameworks. Combined with a large, helpful community and a rich set of libraries, NVIDIA is the best fit for most ML enthusiasts.


Deep Learning Software

This section of the lesson deals with the common libraries used for applying deep learning. Some of the common libraries/frameworks used in deep learning include TensorFlow, PyTorch, and Caffe2. The main goals of deep learning frameworks include:

  • Easily build big computational graphs
  • Easily compute gradients in computational graphs
  • Run it all efficiently on GPU

If we try to compute complex calculations in NumPy ourselves, it becomes inefficient once the network grows complicated. Also, NumPy is CPU-based, so it doesn't run on a GPU and we can't take advantage of massive parallel processing. Using a framework such as TensorFlow, we can write a single piece of code and the framework handles complex operations such as gradient computation for us. We can also tell the framework whether to run calculations on the CPU or the GPU.
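As a minimal sketch of this idea (using PyTorch, one of the frameworks described below, with made-up toy tensors), the framework records the computational graph during the forward pass, computes the gradients automatically when we call backward(), and lets us choose the device explicitly:

```python
import torch

# Pick the GPU when one is available, otherwise fall back to the CPU.
device = "cuda" if torch.cuda.is_available() else "cpu"

x = torch.randn(64, 10, device=device)                      # toy input batch
w = torch.randn(10, 1, device=device, requires_grad=True)   # learnable weights
y_true = torch.randn(64, 1, device=device)                   # toy targets

y_pred = x @ w                              # forward pass; the graph is recorded
loss = ((y_pred - y_true) ** 2).mean()      # mean squared error

loss.backward()                             # framework computes d(loss)/d(w) for us
print(w.grad.shape)                         # gradient has the same shape as w
```

The same idea applies in TensorFlow or other frameworks; the point is that we never write the gradient formulas by hand, and the identical code runs on CPU or GPU depending on the chosen device.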

TensorFlow
The Google Brain team created TensorFlow for internal Google use in research and production. In 2015, the first version was released under the Apache License 2.0. In September 2019, Google launched TensorFlow 2.0, an improved version of TensorFlow.

PyTorch
PyTorch is an open source machine learning framework based on the Torch library, largely created by Facebook's AI Research division for applications such as computer vision and natural language processing.

Caffe2
The deep learning framework Caffe (Convolutional Architecture for Fast Feature Embedding) was created at the University of California, Berkeley. With its strong performance and well-tested C++ codebase, the original Caffe framework was well suited to large-scale production use cases. Caffe2, built by Facebook on top of the original Caffe, carried this design forward for large-scale deployment and has since been merged into PyTorch.