Commit ab2e7d4: Initial Commit
SrirajBehera committed Aug 13, 2021 (0 parents)

Showing 337 changed files with 34,763 additions and 0 deletions.
17 changes: 17 additions & 0 deletions .gitattributes
@@ -0,0 +1,17 @@
# Auto detect text files and perform LF normalization
* text=auto

# Custom for Visual Studio
*.cs diff=csharp

# Standard to msysgit
*.doc diff=astextplain
*.DOC diff=astextplain
*.docx diff=astextplain
*.DOCX diff=astextplain
*.dot diff=astextplain
*.DOT diff=astextplain
*.pdf diff=astextplain
*.PDF diff=astextplain
*.rtf diff=astextplain
*.RTF diff=astextplain
2 changes: 2 additions & 0 deletions .gitignore
@@ -0,0 +1,2 @@
**/ml_login_data.mat
**/octave_workspace
8 changes: 8 additions & 0 deletions LICENSE.md
@@ -0,0 +1,8 @@

Copyright (c) 2017 ADTRAN

Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
219 changes: 219 additions & 0 deletions README.md
@@ -0,0 +1,219 @@
# Machine Learning by Prof. Andrew Ng :star2::star2::star2::star2::star:

This page contains all my notes and resources :book: for the YouTube/Coursera Machine Learning course by [Prof. Andrew Ng](http://www.andrewng.org/) :man:

# Table of Contents
1. [Brief Intro](#brief-intro)
2. [Video Lectures Index](#video-lectures-index)
3. [Programming Exercise Tutorials](#programming-exercise-tutorials)
4. [Programming Exercise Test Cases](#programming-exercise-test-cases)
5. [Useful Resources](#useful-resources)
6. [Schedule](#schedule)
7. [Extra Information](#extra-information)
8. [Online E-Books](#online-e-books)
9. [Additional Information](#additional-information)

## Brief Intro

Most of the course is about the **hypothesis function** and about minimising **cost functions**.

### Hypothesis
A hypothesis is a function that we believe (or hope) is similar to the true function, the target function that we want to model. In the context of email spam classification, it would be the rule we come up with that allows us to separate spam from non-spam emails.

### Cost Function
The cost function, or **Sum of Squared Errors (SSE)**, is a measure of how far our hypothesis is from the optimal hypothesis. The closer our hypothesis matches the training examples, the smaller the value of the cost function. Ideally, we would like J(θ) = 0.
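
For example, here is a minimal Octave sketch with a made-up three-point dataset: the cost is large for a poor fit and drops to exactly zero when the hypothesis reproduces every training target.

```
X = [1 1; 1 2; 1 3];   % m = 3 examples: intercept column plus one feature
y = [1; 2; 3];         % targets lie exactly on the line y = x
m = length(y);
J = @(theta) 1/(2*m) * sum((X*theta - y).^2);   % SSE cost
J([0; 0])   % ans = 2.3333  (poor fit)
J([0; 1])   % ans = 0       (perfect fit, J(theta) = 0)
```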

### Gradient Descent
Gradient descent is an iterative minimisation method. The gradient of the error function always points in the direction of steepest ascent of the error function. Thus, we can start with a random weight vector and repeatedly step along the negative gradient (scaled by a learning rate alpha).

#### Difference between the cost function and gradient descent functions
<table>
<colgroup>
<col width="50%" />
<col width="50%" />
</colgroup>
<thead>
<tr class="header">
<th> Cost Function </th>
<th> Gradient Descent </th>
</tr>
</thead>
<tbody>
<tr valign="top">
<td markdown="span">
<pre><code>
function J = computeCostMulti(X, y, theta)
m = length(y); % number of training examples
J = 0;
predictions = X*theta;
sqerrors = (predictions - y).^2;
J = 1/(2*m)* sum(sqerrors);
end
</code></pre>
</td>
<td markdown="span">
<pre><code>
function [theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters)
m = length(y); % number of training examples
J_history = zeros(num_iters, 1);
for iter = 1:num_iters
predictions = X * theta;
updates = X' * (predictions - y);
theta = theta - alpha * (1/m) * updates;
J_history(iter) = computeCostMulti(X, y, theta);
end
end
</code></pre>
</td>
</tr>
</tbody>
</table>
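
A short usage sketch for `gradientDescentMulti` above, assuming the `featureNormalize` helper and the `ex1data2.txt` housing data from Programming Exercise 1:

```
data = load('ex1data2.txt');
X = data(:, 1:2);  y = data(:, 3);  m = length(y);
[X, mu, sigma] = featureNormalize(X);   % scale features so alpha works well
X = [ones(m, 1) X];                     % add the intercept column
alpha = 0.01;  num_iters = 400;
theta = zeros(3, 1);
[theta, J_history] = gradientDescentMulti(X, y, theta, alpha, num_iters);
plot(1:num_iters, J_history);           % J should decrease every iteration
```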

### Bias and Variance
When we discuss prediction models, prediction errors can be decomposed into two main subcomponents we care about: error due to "bias" and error due to "variance". There is a tradeoff between a model's ability to minimize bias and variance. Understanding these two types of error can help us diagnose model results and avoid the mistake of over- or under-fitting.

Source: http://scott.fortmann-roe.com/docs/BiasVariance.html
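
A hedged Octave sketch of the diagnosis; the helpers `polyFeatures`, `trainModel`, and `computeError` are placeholders for the Exercise 5 routines:

```
% X, y: training set;  Xval, yval: held-out validation set (assumed loaded).
for d = 1:10                                   % model complexity (polynomial degree)
  theta = trainModel(polyFeatures(X, d), y);   % fit on the training set
  err_train(d) = computeError(polyFeatures(X, d), y, theta);
  err_val(d)   = computeError(polyFeatures(Xval, d), yval, theta);
end
% High bias (underfitting): both errors high and close together.
% High variance (overfitting): low training error, much higher validation error.
```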

### Hypothesis and Cost Function Table

| Algorithm | Hypothesis Function | Cost Function | Gradient Descent |
|-------------------------------------------- |----------------------------------------------------------------------- |------------------------------------------------------------------------------- |--------------------------------------------------------------------------------------- |
| Linear Regression | ![linear_regression_hypothesis](/extra/img/linear_hypothesis.gif) | ![linear_regression_cost](/extra/img/linear_cost.gif) | |
| Linear Regression with Multiple variables | ![linear_regression_hypothesis](/extra/img/linear_hypothesis.gif) | ![linear_regression_cost](/extra/img/linear_cost.gif) | ![linear_regression_multi_var_gradient](/extra/img/linear_multi_var_gradient_descent.gif) |
| Logistic Regression | ![logistic_regression_hypothesis](/extra/img/logistic_hypothesis.gif) | ![logistic_regression_cost](/extra/img/logistic_cost.gif) | ![logistic_regression_gradient](/extra/img/logistic_gradient.gif) |
| Logistic Regression with Multiple Variables | | ![logistic_regression_multi_var_cost](/extra/img/logistic_multi_var_cost.gif) | ![logistic_regression_multi_var_gradient](/extra/img/logistic_multi_var_gradient.gif) |
| Neural Networks | | ![nural_cost](/extra/img/nural_cost.gif) | |
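
For comparison with the logistic regression formulas above, a minimal Octave sketch of the unregularised cost and gradient; the function name `logisticCost` is illustrative, not from the exercises:

```
function [J, grad] = logisticCost(theta, X, y)
  m = length(y);
  h = 1 ./ (1 + exp(-(X * theta)));   % sigmoid hypothesis h_theta(x) = g(theta' * x)
  J = (1/m) * sum(-y .* log(h) - (1 - y) .* log(1 - h));
  grad = (1/m) * (X' * (h - y));      % gradient used by the descent update
end
```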

### Regression with Pictures
- [Linear Regression](http://adit.io/posts/2016-02-20-Linear-Regression-in-Pictures.html)
- [Logistic Regression](http://adit.io/posts/2016-03-13-Logistic-Regression.html#non-linear-classification)

## Video Lectures Index
[https://class.coursera.org/ml/lecture/preview](https://class.coursera.org/ml/lecture/preview)

## Programming Exercise Tutorials
[https://www.coursera.org/learn/machine-learning/discussions/all/threads/m0ZdvjSrEeWddiIAC9pDDA](https://www.coursera.org/learn/machine-learning/discussions/all/threads/m0ZdvjSrEeWddiIAC9pDDA)

## Programming Exercise Test Cases
[https://www.coursera.org/learn/machine-learning/discussions/all/threads/0SxufTSrEeWPACIACw4G5w](https://www.coursera.org/learn/machine-learning/discussions/all/threads/0SxufTSrEeWPACIACw4G5w)

## Useful Resources
[https://www.coursera.org/learn/machine-learning/resources/NrY2G](https://www.coursera.org/learn/machine-learning/resources/NrY2G)

## Schedule
### Week 1:
- Welcome - [pdf](/home/week-1/lectures/pdf/Lecture1.pdf) - [ppt](/home/week-1/lectures/ppt/Lecture1.pptx)
- Linear regression with one variable - [pdf](/home/week-1/lectures/pdf/Lecture2.pdf) - [ppt](/home/week-1/lectures/ppt/Lecture2.pptx)
- Linear Algebra review (Optional) - [pdf](/home/week-1/lectures/pdf/Lecture3.pdf) - [ppt](/home/week-1/lectures/ppt/Lecture3.pptx)
- [Lecture Notes](/home/week-1/lectures/notes.pdf)
- [Errata](/home/week-1/errata.pdf)

### Week 2:
- Linear regression with multiple variables - [pdf](/home/week-2/lectures/pdf/Lecture4.pdf) - [ppt](/home/week-2/lectures/ppt/Lecture4.pptx)
- Octave tutorial [pdf](/home/week-2/lectures/pdf/Lecture5.pdf)
- Programming Exercise 1: Linear Regression - [pdf](/home/week-2/exercises/machine-learning-ex1/ex1.pdf) - [Problem](/home/week-2/exercises/machine-learning-ex1.zip) - [Solution](/home/week-2/exercises/machine-learning-ex1/ex1/)
- [Lecture Notes](/home/week-2/lectures/notes.pdf)
- [Errata](/home/week-2/errata.pdf)
- [Program Exercise Notes](/home/week-2/exercises/Programming%20Ex.1.pdf)

### Week 3:
- Logistic regression - [pdf](/home/week-3/lectures/pdf/Lecture6.pdf) - [ppt](/home/week-3/lectures/ppt/Lecture6.pptx)
- Regularization - [pdf](/home/week-3/lectures/pdf/Lecture7.pdf) - [ppt](/home/week-3/lectures/ppt/Lecture7.pptx)
- Programming Exercise 2: Logistic Regression - [pdf](/home/week-3/exercises/machine-learning-ex2/ex2.pdf) - [Problem](/home/week-3/exercises/machine-learning-ex2.zip) - [Solution](/home/week-3/exercises/machine-learning-ex2/ex2)
- [Lecture Notes](/home/week-3/lectures/notes.pdf)
- [Errata](/home/week-3/errata.pdf)
- [Program Exercise Notes](/home/week-3/exercises/Programming%20Ex.2.pdf)

### Week 4:
- Neural Networks: Representation - [pdf](/home/week-4/lectures/pdf/Lecture8.pdf) - [ppt](/home/week-4/lectures/ppt/Lecture8.pptx)
- Programming Exercise 3: Multi-class Classification and Neural Networks - [pdf](/home/week-4/exercises/machine-learning-ex3/ex3.pdf) - [Problem](/home/week-4/exercises/machine-learning-ex3.zip) - [Solution](/home/week-4/exercises/machine-learning-ex3/ex3)
- [Lecture Notes](/home/week-4/lectures/notes.pdf)
- [Errata](/home/week-4/errata.pdf)
- [Program Exercise Notes](/home/week-4/exercises/Programming%20Ex.3.pdf)

### Week 5:
- Neural Networks: Learning - [pdf](/home/week-5/lectures/pdf/Lecture9.pdf) - [ppt](/home/week-5/lectures/ppt/Lecture9.pptx)
- Programming Exercise 4: Neural Networks Learning - [pdf](/home/week-5/exercises/machine-learning-ex4/ex4.pdf) - [Problem](/home/week-5/exercises/machine-learning-ex4.zip) - [Solution](/home/week-5/exercises/machine-learning-ex4/ex4)
- [Lecture Notes](/home/week-5/lectures/notes.pdf)
- [Errata](/home/week-5/errata.pdf)
- [Program Exercise Notes](/home/week-5/exercises/Programming%20Ex.4.pdf)

### Week 6:
- Advice for applying machine learning - [pdf](/home/week-6/lectures/pdf/Lecture10.pdf) - [ppt](/home/week-6/lectures/ppt/Lecture10.pptx)
- Machine learning system design - [pdf](/home/week-6/lectures/pdf/Lecture11.pdf) - [ppt](/home/week-6/lectures/ppt/Lecture11.pptx)
- Programming Exercise 5: Regularized Linear Regression and Bias v.s. Variance - [pdf](/home/week-6/exercises/machine-learning-ex5/ex5.pdf) - [Problem](/home/week-6/exercises/machine-learning-ex5.zip) - [Solution](/home/week-6/exercises/machine-learning-ex5/ex5)
- [Lecture Notes](/home/week-6/lectures/notes.pdf)
- [Errata](/home/week-6/errata.pdf)
- [Program Exercise Notes](/home/week-6/exercises/Programming%20Ex.5.pdf)

### Week 7:
- Support vector machines - [pdf](/home/week-7/lectures/pdf/Lecture12.pdf) - [ppt](/home/week-7/lectures/ppt/Lecture12.pptx)
- Programming Exercise 6: Support Vector Machines - [pdf](/home/week-7/exercises/machine-learning-ex6/ex6.pdf) - [Problem](/home/week-7/exercises/machine-learning-ex6.zip) - [Solution](/home/week-7/exercises/machine-learning-ex6/ex6)
- [Lecture Notes](/home/week-7/lectures/notes.pdf)
- [Errata](/home/week-7/errata.pdf)
- [Program Exercise Notes](/home/week-7/exercises/Programming%20Ex.6.pdf)

### Week 8:
- Clustering - [pdf](/home/week-8/lectures/pdf/Lecture13.pdf) - [ppt](/home/week-8/lectures/ppt/Lecture13.ppt)
- Dimensionality reduction - [pdf](/home/week-8/lectures/pdf/Lecture14.pdf) - [ppt](/home/week-8/lectures/ppt/Lecture14.ppt)
- Programming Exercise 7: K-means Clustering and Principal Component Analysis - [pdf](/home/week-8/exercises/machine-learning-ex7/ex7.pdf) - [Problems](/home/week-8/exercises/machine-learning-ex7.zip) - [Solution](/home/week-8/exercises/machine-learning-ex7/ex7)
- [Lecture Notes](/home/week-8/lectures/notes.pdf)
- [Errata](/home/week-8/errata.pdf)
- [Program Exercise Notes](/home/week-8/exercises/Programming%20Ex.7.pdf)

### Week 9:
- Anomaly Detection - [pdf](/home/week-9/lectures/pdf/Lecture15.pdf) - [ppt](/home/week-9/lectures/ppt/Lecture15.ppt)
- Recommender Systems - [pdf](/home/week-9/lectures/pdf/Lecture16.pdf) - [ppt](/home/week-9/lectures/ppt/Lecture16.ppt)
- Programming Exercise 8: Anomaly Detection and Recommender Systems - [pdf](/home/week-9/exercises/machine-learning-ex8/ex8.pdf) - [Problems](/home/week-9/exercises/machine-learning-ex8.zip) - [Solution](/home/week-9/exercises/machine-learning-ex8/ex8)
- [Lecture Notes](/home/week-9/lectures/notes.pdf)
- [Errata](/home/week-9/errata.pdf)
- [Program Exercise Notes](/home/week-9/exercises/Programming%20Ex.8.pdf)

### Week 10:
- Large scale machine learning - [pdf](/home/week-10/lectures/pdf/Lecture17.pdf) - [ppt](/home/week-10/lectures/ppt/Lecture17.ppt)
- [Lecture Notes](/home/week-10/lectures/notes.pdf)

### Week 11:
- Application example: Photo OCR - [pdf](/home/week-11/lectures/pdf/Lecture18.pdf) - [ppt](/home/week-11/lectures/ppt/Lecture18.ppt)

## Extra Information

- [Linear Algebra Review and Reference Zico Kolter](/extra/cs229-linalg.pdf)
- [CS229 Lecture notes](/extra/cs229-notes1.pdf)
- [CS229 Problems](/extra/cs229-prob.pdf)
- [Financial time series forecasting with machine learning techniques](/extra/machine%20learning%20stocks.pdf)
- [Octave Examples](/extra/octave_session.m)

## Online E-Books

- [Introduction to Machine Learning by Nils J. Nilsson](http://robotics.stanford.edu/~nilsson/MLBOOK.pdf)
- [Introduction to Machine Learning by Alex Smola and S.V.N. Vishwanathan](http://alex.smola.org/drafts/thebook.pdf)
- [Introduction to Data Science by Jeffrey Stanton](http://surface.syr.edu/cgi/viewcontent.cgi?article=1165&context=istpub)
- [Bayesian Reasoning and Machine Learning by David Barber](http://web4.cs.ucl.ac.uk/staff/D.Barber/pmwiki/pmwiki.php?n=Brml.Online)
- [Understanding Machine Learning, © 2014 by Shai Shalev-Shwartz and Shai Ben-David](http://www.cs.huji.ac.il/~shais/UnderstandingMachineLearning/copy.html)
- [Elements of Statistical Learning, by Hastie, Tibshirani, and Friedman](http://statweb.stanford.edu/~tibs/ElemStatLearn/)
- [Pattern Recognition and Machine Learning, by Christopher M. Bishop](http://users.isr.ist.utl.pt/~wurmd/Livros/school/Bishop%20-%20Pattern%20Recognition%20And%20Machine%20Learning%20-%20Springer%20%202006.pdf)


## Additional Information
- [Machine Learning Course Notes (Excluding Octave/MATLAB)](http://www.holehouse.org/mlclass/)

### Links
- [What are the top 10 problems in deep learning for 2017?](https://www.quora.com/What-are-the-top-10-problems-in-deep-learning-for-2017)
- [When will the deep learning bubble burst?](https://www.quora.com/When-will-the-deep-learning-bubble-burst)

### Statistics Models

- HMM - [Hidden Markov Model](https://en.wikipedia.org/wiki/Hidden_Markov_model)
- CRFs - [Conditional Random Fields](https://en.wikipedia.org/wiki/Conditional_random_field)
- LSI - [Latent Semantic Indexing](https://www.searchenginejournal.com/what-is-latent-semantic-indexing-seo-defined/21642/)
- MRF - [Markov Random Fields](https://en.wikipedia.org/wiki/Markov_random_field)

### NLP forums

- SIGIR - [Special Interest Group on Information Retrieval](http://sigir.org/)
- ACL - [Association for Computational Linguistics](https://www.aclweb.org/portal/)
- NAACL - [The North American Chapter of the Association for Computational Linguistics](http://naacl.org/)
- EMNLP - [Empirical Methods in Natural Language Processing](http://emnlp2017.net/)
- NIPS - [Neural Information Processing Systems](https://nips.cc/)
9 changes: 9 additions & 0 deletions _config.yml
@@ -0,0 +1,9 @@
title: Coursera Machine Learning
email: [email protected]
theme: jekyll-theme-cayman
gems:
- jemoji
- jekyll-relative-links
markdown: kramdown
twitter_username: mallikarjunarao
github_username: vkosuri
Binary file added extra/cs229-linalg.pdf
Binary file added extra/cs229-notes1.pdf
Binary file added extra/cs229-prob.pdf
1 change: 1 addition & 0 deletions extra/img/README.md
@@ -0,0 +1 @@
All the images are generated using https://www.codecogs.com/latex/eqneditor.php
Binary file added extra/img/linear_cost.gif
Binary file added extra/img/linear_hypothesis.gif
Binary file added extra/img/linear_multi_var_gradient_descent.gif
Binary file added extra/img/logistic_cost.gif
Binary file added extra/img/logistic_gradient.gif
Binary file added extra/img/logistic_hypothesis.gif
Binary file added extra/img/logistic_multi_var_cost.gif
Binary file added extra/img/logistic_multi_var_gradient.gif
Binary file added extra/img/nural_cost.gif
91 changes: 91 additions & 0 deletions extra/latext_functions.txt
@@ -0,0 +1,91 @@

# Supervised Learning

## Linear Regression

### Hypothesis
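```
h_\theta(x)=\theta_0+\theta_1x
```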

### Cost Function
```
J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^{m}(\hat{y}_i-y_i)^2=\frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x_i)-y_i)^2
```


## Linear Regression with multiple variables

### Hypothesis
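```
h_\theta(x)=\theta^Tx=\theta_0x_0+\theta_1x_1+\dots+\theta_nx_n
```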

### Cost Function
```
J(\theta_0,\theta_1)=\frac{1}{2m}\sum_{i=1}^{m}(\hat{y}_i-y_i)^2=\frac{1}{2m}\sum_{i=1}^{m}(h_\theta(x_i)-y_i)^2
```

### Gradient Descent
```
repeat \hspace*{1mm} until \hspace*{1mm} convergence: \{\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)}).x_j^{(i)} \hspace*{8mm} for \hspace*{1mm} j:=0..n
\\\hspace*{6mm}\}
```

## Logistic Regression

### Hypothesis
```
h_\theta(x)=g(\theta^Tx)
```
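
with the sigmoid
```
g(z)=\frac{1}{1+e^{-z}}
```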

### Cost Function
```
J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}[y^{(i)}log(h_\theta(x^{(i)}))+(1-y^{(i)})log(1-h_\theta(x^{(i)}))]
```

### Gradient Descent
```
repeat \hspace*{1mm} until \hspace*{1mm} convergence: \{\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)}).x_j^{(i)} \hspace*{8mm} for \hspace*{1mm} j:=0..n
\\\hspace*{6mm}\}
```

## Logistic Regression with multiple variables

### Hypothesis
```
h_\theta(x)=g(\theta^Tx)
```

### Cost Function
```
J(\theta)=-\frac{1}{m}\sum_{i=1}^{m}[y^{(i)}log(h_\theta(x^{(i)}))+(1-y^{(i)})log(1-h_\theta(x^{(i)}))]+\frac{\lambda}{2m}\sum_{j=1}^n\theta_j^2
```

### Gradient Descent
```
Repeat: \{
\\
\hspace*{20mm}\theta_0:=\theta_0-\alpha\frac{1}{m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_0^{(i)}
\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha[\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}+\frac{\lambda}{m}\theta_j]\hspace*{8mm} j\in\{1,2,\dots,n\}
\\
\hspace*{6mm}\}
```

## Neural Networks

### Hypothesis
```
h_\theta(x)=g(\theta^Tx)
```

### Cost Function
```
J(\Theta)=-\frac{1}{m}\sum_{i=1}^{m}\sum_{k=1}^{K}[y^{(i)}_klog((h_\Theta(x^{(i)}))_k)+(1-y^{(i)}_k)log(1-(h_\Theta(x^{(i)}))_k)]+\frac{\lambda}{2m}\sum_{l=1}^{L-1}\sum_{i=1}^{s_l}\sum_{j=1}^{s_{l+1}}(\Theta_{j,i}^{(l)})^2
```

### Gradient Descent
```
Repeat: \{
\\
\hspace*{20mm}\theta_0:=\theta_0-\alpha\frac{1}{m}\sum_{i=1}^m(h_\theta(x^{(i)})-y^{(i)})x_0^{(i)}
\\
\hspace*{20mm} \theta_j:=\theta_j-\alpha[\frac{1}{m}\sum_{i=1}^{m}(h_\theta(x^{(i)})-y^{(i)})x_j^{(i)}+\frac{\lambda}{m}\theta_j]\hspace*{8mm} j\in\{1,2,\dots,n\}
\\
\hspace*{6mm}\}
```
Binary file added extra/machine learning stocks.pdf