In this part of the unit, you will learn how OLS regression selects, from among all possible linear models, the one that has the lowest mean squared error (MSE).
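As a minimal sketch of this idea (not part of the unit itself, and using simulated data with made-up coefficients), the snippet below computes the closed-form OLS line for a simple regression and shows that nearby and arbitrary candidate lines both have higher MSE:

```python
import numpy as np

# Hypothetical simulated data: y depends linearly on x, plus noise.
rng = np.random.default_rng(42)
x = rng.uniform(0, 10, size=200)
y = 2.0 + 1.5 * x + rng.normal(0, 2, size=200)

def mse(a, b):
    """Mean squared error of the line y_hat = a + b*x."""
    return np.mean((y - (a + b * x)) ** 2)

# Closed-form OLS estimates for simple regression:
# slope = Cov(x, y) / Var(x), intercept from the sample means.
b_ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)
a_ols = y.mean() - b_ols * x.mean()

# Any other candidate line produces a higher MSE than the OLS line.
print(f"OLS line:       MSE = {mse(a_ols, b_ols):.3f}")
print(f"Nearby line:    MSE = {mse(a_ols + 0.5, b_ols - 0.1):.3f}")
print(f"Arbitrary line: MSE = {mse(0.0, 1.0):.3f}")
```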
You will begin by fitting a model via the ocular method (you'll just fit a model by eye, using simple data) to see that OLS regression, like other fitting algorithms, is just a relatively simple optimization routine that we can (sometimes) carry out ourselves.
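To make the "fitting is just optimization" point concrete, here is a hedged sketch (again with hypothetical simulated data and an eyeball guess I made up): we hand the same MSE objective to a generic numerical optimizer and compare it against a line chosen by eye:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical simple data for an eyeball fit.
rng = np.random.default_rng(0)
x = rng.uniform(0, 10, size=50)
y = 1.0 + 2.0 * x + rng.normal(0, 1.5, size=50)

def mse(params):
    """MSE of the line y_hat = a + b*x, for params = [a, b]."""
    a, b = params
    return np.mean((y - (a + b * x)) ** 2)

# "Ocular" fit: an intercept and slope read off a scatterplot by eye.
eyeball = np.array([0.5, 2.2])

# The same task handed to a general-purpose numerical optimizer.
result = minimize(mse, x0=eyeball)

print(f"Eyeball fit {eyeball}: MSE = {mse(eyeball):.3f}")
print(f"Optimizer fit {np.round(result.x, 2)}: MSE = {result.fun:.3f}")
```

The optimizer typically only nudges a reasonable eyeball guess, which is the point: the machinery behind OLS is doing something we could approximate by hand.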
Then you'll read about how OLS regression is just a plug-in estimator for the Best Linear Predictor (BLP) that you learned about for random variables. Since OLS regression is a plug-in estimator, we'll evaluate whether it is a good estimator using the core estimation properties that we discussed previously: (1) Consistency; (2) Unbiasedness; and (3) Efficiency.
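As a rough illustration of the first two properties (a simulation sketch, not a proof, with a made-up population in which the BLP slope is Cov(X, Y)/Var(X) = 1.5), the snippet below shows the plug-in OLS slope concentrating around the population value as the sample grows, and averaging out near it across repeated samples:

```python
import numpy as np

rng = np.random.default_rng(1)

def ols_slope(n):
    """Plug-in (OLS) estimate of the BLP slope from a sample of size n."""
    x = rng.normal(0, 1, size=n)
    y = 2.0 + 1.5 * x + rng.normal(0, 3, size=n)
    return np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Consistency: estimates concentrate around the population slope (1.5)
# as the sample size n grows.
for n in (10, 100, 10_000):
    print(f"n = {n:>6}: slope estimate = {ols_slope(n):.3f}")

# Unbiasedness: the average estimate across many repeated samples
# of a fixed size is close to the population slope.
print("mean over 1,000 samples of size 50:",
      round(np.mean([ols_slope(50) for _ in range(1000)]), 3))
```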