
Commit 0a8d848

Author: Quinn Liu
Commit message: added comments for gradient descent algorithm
1 parent 2cdb1c1 commit 0a8d848

File tree

3 files changed: +5 -3 lines changed


README.md

+1 -2

@@ -5,8 +5,7 @@ Definition of Machine Learning by Tom Mitchell
 
 "A computer program is said to learn from experience E with respect to some task T and some performance measure P, if its performance on T, as measured by P, improves with experience E."
 
-- folder ```imagesForExplanation```
-  + contains images used in other folder's README files for explanation
+- folder ```imagesForExplanation``` = contains images used in other folder's README files for explanation
 
 - folder ```supervisedLearning```
   + SUPERVISED Learning = teach the computer how to learn

supervisedLearning/README.md

+4 -1

@@ -11,4 +11,7 @@ supervisedLearning
 - WHOLE POINT: Find theta_0 & theta_1 so that h(x) is close to y for our training examples (x, y)
 - mathematically this means we need to minimize the cost J(theta_0, theta_1) = (1/(2m)) * Sum from i = 1 to m of (h(x^(i)) - y^(i))^2, where the factor (1/(2m)) makes the math easier and h(x^(i)) = theta_0 + theta_1 * x^(i)
 - math notation = minimize over theta_0, theta_1 the cost function J(theta_0, theta_1), also called the squared error function
-- WHOLE POINT explained using a picture: https://github.com/quinnliu/MachineLearning/blob/master/imagesForExplanation/CostFunctionExampleWithTheta_0AndTheta_1.jpg
+- WHOLE POINT explained using a picture: https://github.com/quinnliu/MachineLearning/blob/master/imagesForExplanation/CostFunctionExampleWithTheta_0AndTheta_1.jpg
+- Now plugging the minimizing theta_0 and theta_1 into h(x) = theta_0 + theta_1 * x gives a function that predicts h(x) ≈ y for an input x
+
+- GRADIENT DESCENT = algorithm that lets us find a minimal theta_0 and theta_1
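
The added lines above define the squared error cost and name gradient descent as the way to find the minimizing theta_0 and theta_1. Below is a minimal Python sketch of both, not taken from this repository; the training data, learning rate alpha, and iteration count are made-up values for illustration.

```python
# Sketch of the squared error cost J(theta_0, theta_1) and batch gradient
# descent for h(x) = theta_0 + theta_1 * x (illustrative, not repository code).

def cost(theta_0, theta_1, xs, ys):
    """J(theta_0, theta_1) = (1/(2m)) * sum over i of (h(x^(i)) - y^(i))^2."""
    m = len(xs)
    return sum((theta_0 + theta_1 * x - y) ** 2 for x, y in zip(xs, ys)) / (2 * m)

def gradient_descent_step(theta_0, theta_1, xs, ys, alpha):
    """One simultaneous update of both parameters using the partial
    derivatives of J with respect to theta_0 and theta_1."""
    m = len(xs)
    errors = [theta_0 + theta_1 * x - y for x, y in zip(xs, ys)]
    grad_0 = sum(errors) / m                              # dJ/dtheta_0
    grad_1 = sum(e * x for e, x in zip(errors, xs)) / m   # dJ/dtheta_1
    return theta_0 - alpha * grad_0, theta_1 - alpha * grad_1

# Made-up training examples (x, y); here y = 2x, so the minimum of J is
# near theta_0 = 0, theta_1 = 2.
xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]

theta_0, theta_1 = 0.0, 0.0
alpha = 0.05            # learning rate, chosen for illustration
for _ in range(2000):
    theta_0, theta_1 = gradient_descent_step(theta_0, theta_1, xs, ys, alpha)

print(theta_0, theta_1, cost(theta_0, theta_1, xs, ys))
```

Both partial derivatives are computed from the current theta_0 and theta_1 before either parameter is changed, which is the simultaneous update that gradient descent calls for; after enough iterations the printed cost should be close to its minimum for this data.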
