# Linear Regression
::: {style="text-align:center"}
![](images/econometrics.PNG){width="450" height="200"}
:::
- Estimating parameters -\> parametric (finite parameters)
- Estimating functions -\> non-parametric
**Estimator Desirable Properties**
1. **Unbiased**
2. **Consistency**
- $\text{plim} \, \hat{\beta}_n=\beta$
- Consistency follows from the law of large numbers (see the simulation sketch after this list).
- More observations yield estimates that are more precise and closer to the true value.
3. **Efficiency**
- Minimum variance in comparison to another estimator.
- OLS is BLUE (best linear unbiased estimator), meaning that OLS is the most efficient estimator in the class of linear unbiased estimators ([Gauss-Markov Theorem](#gauss-markov-theorem))
- If we have the correct distributional assumptions, then the maximum likelihood estimator is asymptotically efficient among consistent estimators.
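A minimal simulation sketch of consistency (simulated data; object names such as `b1_hat` are illustrative): as $n$ grows, the OLS slope estimate approaches the true $\beta_1$.

```r
# Minimal sketch: OLS slope estimates approach the true value as n grows
# (consistency). Data are simulated; all names are illustrative.
set.seed(1)
beta0 <- 2; beta1 <- 0.5          # true parameters
for (n in c(50, 500, 5000, 50000)) {
  x <- rnorm(n)
  y <- beta0 + beta1 * x + rnorm(n)
  b1_hat <- coef(lm(y ~ x))[2]    # OLS slope estimate
  cat("n =", n, " b1_hat =", round(b1_hat, 4), "\n")
}
```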
## Ordinary Least Squares {#ordinary-least-squares}
The most fundamental model in statistics and econometrics is the OLS linear regression. OLS coincides with maximum likelihood when the error term is assumed to be normally distributed.
Regression is still useful even if the underlying CEF (conditional expectation function) is not linear, because regression has the following properties:
1. If the CEF is linear, i.e., $E[Y_i | X_{1i}, \dots, X_{Ki}] = a + \sum_{k=1}^K b_k X_{ki}$, then the regression of $Y_i$ on $X_{1i}, \dots, X_{Ki}$ is the CEF.
2. If $E[Y_i | X_{1i} , \dots, X_{Ki}]$ is a nonlinear function of the conditioning variables, then the regression of $Y_i$ on $X_{1i}, \dots, X_{Ki}$ gives the best linear approximation to the nonlinear CEF (i.e., it minimizes the expected squared deviation between the fitted values of the linear model and the CEF).
### Simple Regression (Basic Model)
$$
Y_i = \beta_0 + \beta_1 X_i + \epsilon_i
$$
- $Y_i$: response (dependent) variable at i-th observation
- $\beta_0,\beta_1$: regression parameters for intercept and slope.
- $X_i$: known constant (independent or predictor variable) for i-th observation
- $\epsilon_i$: random error term
$$
\begin{aligned}
E(\epsilon_i) &= 0 \\
var(\epsilon_i) &= \sigma^2 \\
cov(\epsilon_i,\epsilon_j) &= 0 \text{ for all } i \neq j
\end{aligned}
$$
$Y_i$ is random since $\epsilon_i$ is:
$$
\begin{aligned}
E(Y_i) &= E(\beta_0 + \beta_1 X_i + \epsilon_i) \\
&= E(\beta_0) + E(\beta_1 X_i) + E(\epsilon) \\
&= \beta_0 + \beta_1 X_i
\end{aligned}
$$
$$
\begin{aligned}
var(Y_i) &= var(\beta_0 + \beta_1 X_i + \epsilon_i) \\
&= var(\epsilon_i) \\
&= \sigma^2
\end{aligned}
$$
Since $cov(\epsilon_i, \epsilon_j) = 0$ (uncorrelated), the outcome in any one trial has no effect on the outcome of any other. Hence, $Y_i, Y_j$ are uncorrelated as well (conditional on the $X$'s)
**Note**\
[Least Squares](#ordinary-least-squares) does not require a distributional assumption
Relationship between bivariate regression and covariance
Covariance between 2 variables:
$$
C(X_i, Y_i) = E[(X_i - E[X_i])(Y_i - E[Y_i])]
$$
Which has the following properties
1. $C(X_i, X_i) = \sigma^2_X$
2. If either $E(X_i) = 0$ or $E(Y_i) = 0$, then $Cov(X_i, Y_i) = E[X_i Y_i]$
3. Given $W_i = a + b X_i$ and $Z_i = c + d Y_i$, then $Cov(W_i, Z_i) = bdC(X_i, Y_i)$
For the bivariate regression, the slope is
$$
\beta = \frac{Cov(Y_i, X_i)}{Var(X_i)}
$$
To extend this to a multivariate case
$$
\beta_k = \frac{C(Y_i, \tilde{X}_{ki})}{Var(\tilde{X}_{ki})}
$$
Where $\tilde{X}_{ki}$ is the residual from a regression of $X_{ki}$ on the $K-1$ other covariates included in the model
And intercept
$$
\alpha = E[Y_i] - \beta E(X_i)
$$
#### Estimation
Deviation of $Y_i$ from its expected value:
$$
Y_i - E(Y_i) = Y_i - (\beta_0 + \beta_1 X_i)
$$
Consider the sum of squares of such deviations:
$$
Q = \sum_{i=1}^{n} (Y_i - \beta_0 -\beta_1 X_i)^2
$$
Minimizing $Q$ with respect to $\beta_0$ and $\beta_1$ (setting the partial derivatives to zero) yields the least squares estimators:
$$
\begin{aligned}
b_1 &= \frac{\sum_{i=1}^{n} (X_i - \bar{X})(Y_i - \bar{Y})}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \\
b_0 &= \frac{1}{n}(\sum_{i=1}^{n}Y_i - b_1\sum_{i=1}^{n}X_i) = \bar{Y} - b_1 \bar{X}
\end{aligned}
$$
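As a quick check of these closed-form expressions, the following sketch computes $b_0$ and $b_1$ directly and compares them with `lm()`. The data are simulated and all names are illustrative.

```r
# Compute the least squares estimates by hand and compare with lm().
set.seed(42)
x <- runif(30, 0, 10)
y <- 3 + 1.5 * x + rnorm(30, sd = 2)   # simulated data, illustrative values

b1 <- sum((x - mean(x)) * (y - mean(y))) / sum((x - mean(x))^2)
b0 <- mean(y) - b1 * mean(x)

c(b0 = b0, b1 = b1)
coef(lm(y ~ x))   # should match (b0, b1)
```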
#### Properties of Least Squares Estimators
$$
\begin{aligned}
E(b_1) &= \beta_1 \\
E(b_0) &= E(\bar{Y}) - \bar{X}\beta_1 \\
E(\bar{Y}) &= \beta_0 + \beta_1 \bar{X} \\
E(b_0) &= \beta_0 \\
var(b_1) &= \frac{\sigma^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \\
var(b_0) &= \sigma^2 (\frac{1}{n} + \frac{\bar{X}^2}{\sum (X_i - \bar{X})^2})
\end{aligned}
$$
$var(b_1) \to 0$ as more measurements are taken at additional $X_i$ values (unless the added $X_i$ values all equal $\bar{X}$, which contributes nothing to $\sum(X_i - \bar{X})^2$)\
$var(b_0) \to 0$ as $n$ increases when the $X_i$ values are judiciously selected.
**Mean Square Error**
$$
MSE = \frac{SSE}{n-2} = \frac{\sum_{i=1}^{n}e_i^2}{n-2} = \frac{\sum(Y_i - \hat{Y_i})^2}{n-2}
$$
MSE is an unbiased estimator of $\sigma^2$:
$$
E(MSE) = \sigma^2
$$
$$
\begin{aligned}
s^2(b_1) &= \widehat{var(b_1)} = \frac{MSE}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \\
s^2(b_0) &= \widehat{var(b_0)} = MSE(\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2})
\end{aligned}
$$
$$
\begin{aligned}
E(s^2(b_1)) &= var(b_1) \\
E(s^2(b_0)) &= var(b_0)
\end{aligned}
$$
#### Residuals
$$
e_i = Y_i - \hat{Y}_i = Y_i - (b_0 + b_1 X_i)
$$
- $e_i$ is an estimate of $\epsilon_i = Y_i - E(Y_i)$
- $\epsilon_i$ is always unknown since we don't know the true $\beta_0, \beta_1$
$$
\begin{aligned}
\sum_{i=1}^{n} e_i &= 0 \\
\sum_{i=1}^{n} X_i e_i &= 0
\end{aligned}
$$
Residual properties
1. $E[e_i] =0$
2. $E[X_i e_i] = 0$ and $E[\hat{Y}_i e_i ] = 0$
#### Inference
**Normality Assumption**
- Least Squares estimation does not require assumptions of normality.
- However, to do inference on the parameters, we need distributional assumptions.
- Inference on $\beta_0,\beta_1$ and $Y_h$ are not extremely sensitive to moderate departures from normality, especially if the sample size is large.
- Inference on $Y_{pred}$ is very sensitive to the normality assumptions.
**Normal Error Regression Model**
$$
Y_i \sim N(\beta_0+\beta_1X_i, \sigma^2)
$$
##### $\beta_1$
Under the normal error model,
$$
b_1 \sim N(\beta_1,\frac{\sigma^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2})
$$
A linear combination of independent normal random variables is normally distributed.
Hence,
$$
\frac{b_1 - \beta_1}{s(b_1)} \sim t_{n-2}
$$
A $(1-\alpha) 100 \%$ confidence interval for $\beta_1$ is
$$
b_1 \pm t_{1-\alpha/2 ; n-2}s(b_1)
$$
##### $\beta_0$
Under the normal error model, the sampling distribution for $b_0$ is
$$
b_0 \sim N(\beta_0,\sigma^2(\frac{1}{n} + \frac{\bar{X}^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}))
$$
Hence,
$$
\frac{b_0 - \beta_0}{s(b_0)} \sim t_{n-2}
$$ A $(1-\alpha)100 \%$ confidence interval for $\beta_0$ is
$$
b_0 \pm t_{1-\alpha/2;n-2}s(b_0)
$$
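In R, `confint()` on a fitted `lm` object returns exactly these $t$-based intervals; a minimal sketch with simulated data:

```r
# t-based confidence intervals for beta_0 and beta_1 from a fitted lm.
set.seed(1)
x <- runif(30); y <- 1 + 2 * x + rnorm(30, sd = 0.5)
fit <- lm(y ~ x)

confint(fit, level = 0.95)                     # b ± t_{1-alpha/2; n-2} s(b)

# Equivalent by hand for the slope:
s     <- summary(fit)
b1    <- coef(fit)[2]
se_b1 <- s$coefficients[2, 2]
b1 + c(-1, 1) * qt(0.975, df = s$df[2]) * se_b1
```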
##### Mean Response
Let $X_h$ denote the level of X for which we wish to estimate the mean response
- We denote the mean response when $X = X_h$ by $E(Y_h)$\
- A point estimator of $E(Y_h)$ is $\hat{Y}_h$:
$$
\hat{Y}_h = b_0 + b_1 X_h
$$ **Note**
$$
\begin{aligned}
E(\hat{Y}_h) &= E(b_0 + b_1X_h) \\
&= \beta_0 + \beta_1 X_h \\
&= E(Y_h)
\end{aligned}
$$ (unbiased estimator)
$$
\begin{aligned}
var(\hat{Y}_h) &= var(b_0 + b_1 X_h) \\
&= var(\bar{Y} + b_1 (X_h - \bar{X})) \\
&= var(\bar{Y}) + (X_h - \bar{X})^2var(b_1) + 2(X_h - \bar{X})cov(\bar{Y},b_1) \\
&= \frac{\sigma^2}{n} + (X_h - \bar{X})^2 \frac{\sigma^2}{\sum(X_i - \bar{X})^2} \\
&= \sigma^2(\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n} (X_i - \bar{X})^2})
\end{aligned}
$$
Since $cov(\bar{Y},b_1) = 0$ due to the iid assumption on $\epsilon_i$
An estimate of this variance is
$$
s^2(\hat{Y}_h) = MSE (\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2})
$$
the sampling distribution for the mean response is
$$
\begin{aligned}
\hat{Y}_h &\sim N(E(Y_h),var(\hat{Y_h})) \\
\frac{\hat{Y}_h - E(Y_h)}{s(\hat{Y}_h)} &\sim t_{n-2}
\end{aligned}
$$
A $100(1-\alpha) \%$ CI for $E(Y_h)$ is
$$
\hat{Y}_h \pm t_{1-\alpha/2;n-2}s(\hat{Y}_h)
$$
##### Prediction of a new observation
In the [Mean Response] section, we were interested in estimating the **mean** of the distribution of $Y$ given a certain $X$.
Now, we want to **predict** an individual outcome drawn from the distribution of $Y$ at a given $X$, which we call $Y_{pred}$.
Estimation of mean response versus prediction of a new observation:
- the point estimates are the same in both cases: $\hat{Y}_{pred} = \hat{Y}_h$
- It is the variance of the prediction that is different; hence, prediction intervals are different than confidence intervals. The prediction variance must consider:
- Variation in the mean of the distribution of $Y$
- Variation within the distribution of $Y$
We want to predict: mean response + error
$$
\beta_0 + \beta_1 X_h + \epsilon
$$
Since $E(\epsilon) = 0$, use the least squares predictor:
$$
\hat{Y}_h = b_0 + b_1 X_h
$$
The variance of the predictor is
$$
\begin{aligned}
var(b_0 + b_1 X_h + \epsilon) &= var(b_0 + b_1 X_h) + var(\epsilon) \\
&= \sigma^2(\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2}) + \sigma^2 \\
&= \sigma^2(1+\frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2})
\end{aligned}
$$
An estimate of the variance is given by
$$
s^2(pred)= MSE (1+ \frac{1}{n} + \frac{(X_h - \bar{X})^2}{\sum_{i=1}^{n}(X_i - \bar{X})^2})
$$
and
$$
\frac{Y_{pred}-\hat{Y}_h}{s(pred)} \sim t_{n-2}
$$
$100(1-\alpha) \%$ prediction interval is
$$
\hat{Y}_h \pm t_{1-\alpha/2; n-2}s(pred)
$$
The prediction interval is very sensitive to the distributional assumption on the errors, $\epsilon$
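In R, both intervals come from `predict()` on a fitted `lm`: `interval = "confidence"` gives the interval for the mean response and `interval = "prediction"` the (wider) interval for a new observation. A minimal sketch with simulated data and an illustrative $X_h = 0.5$:

```r
# Confidence interval for the mean response vs. prediction interval for a
# new observation at X_h (data simulated for illustration).
set.seed(1)
x <- runif(30); y <- 1 + 2 * x + rnorm(30, sd = 0.5)
fit <- lm(y ~ x)
new <- data.frame(x = 0.5)

predict(fit, new, interval = "confidence", level = 0.95)  # CI for E(Y_h)
predict(fit, new, interval = "prediction", level = 0.95)  # wider PI for Y_pred
```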
##### Confidence Band
We want to know the confidence interval for the entire regression line, so we can draw conclusions about any and all mean responses on the entire regression line $E(Y) = \beta_0 + \beta_1 X$, rather than for a single given $X_h$.
**Working-Hotelling Confidence Band**
For a given $X_h$, this band is
$$
\hat{Y}_h \pm W s(\hat{Y}_h)
$$ where $W^2 = 2F_{1-\alpha;2,n-2}$, i.e., twice the $F$ critical value with 2 and $n-2$ degrees of freedom
- the interval width changes with $X_h$ (since $s(\hat{Y}_h)$ changes)\
- the boundary values of this confidence band always define a hyperbola containing the regression line\
- the band is narrowest at $X_h = \bar{X}$
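The Working-Hotelling band above can be computed from `predict(..., se.fit = TRUE)`, which returns $s(\hat{Y}_h)$, together with `qf()` for the $F$ critical value; a minimal sketch with simulated data and illustrative names:

```r
# Working-Hotelling simultaneous band: Yhat_h ± W * s(Yhat_h),
# with W^2 = 2 * F(1 - alpha; 2, n - 2).
set.seed(1)
x <- runif(30); y <- 1 + 2 * x + rnorm(30, sd = 0.5)
fit <- lm(y ~ x)

grid <- data.frame(x = seq(min(x), max(x), length.out = 50))
pr   <- predict(fit, grid, se.fit = TRUE)
W    <- sqrt(2 * qf(0.95, 2, df.residual(fit)))

band <- cbind(fit   = pr$fit,
              lower = pr$fit - W * pr$se.fit,
              upper = pr$fit + W * pr$se.fit)
head(band)
```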
#### ANOVA
Partitioning the Total Sum of Squares: Consider the corrected Total sum of squares:
$$
SSTO = \sum_{i=1}^{n} (Y_i -\bar{Y})^2
$$
Measures the overall dispersion in the response variable.\
We use the term *corrected* because we correct for the mean; the uncorrected total sum of squares is $\sum Y_i^2$.
Use $\hat{Y}_i = b_0 + b_1 X_i$ to estimate the conditional mean of $Y$ at $X_i$:
$$
\begin{aligned}
\sum_{i=1}^n (Y_i - \bar{Y})^2 &= \sum_{i=1}^n (Y_i - \hat{Y}_i + \hat{Y}_i - \bar{Y})^2 \\
&= \sum_{i=1}^n(Y_i - \hat{Y}_i)^2 + \sum_{i=1}^n(\hat{Y}_i - \bar{Y})^2 + 2\sum_{i=1}^n(Y_i - \hat{Y}_i)(\hat{Y}_i-\bar{Y}) \\
&= \sum_{i=1}^n(Y_i - \hat{Y}_i)^2 + \sum_{i=1}^n(\hat{Y}_i -\bar{Y})^2 \\
SSTO &= SSE + SSR \\
\end{aligned}
$$
where SSR is the regression sum of squares, which measures how the conditional mean varies about a central value.
The cross-product term in the decomposition is 0:
$$
\begin{aligned}
\sum_{i=1}^n (Y_i - \hat{Y}_i)(\hat{Y}_i - \bar{Y}) &= \sum_{i=1}^{n}(Y_i - \bar{Y} -b_1 (X_i - \bar{X}))(\bar{Y} + b_1 (X_i - \bar{X})-\bar{Y}) \\
&= b_1 \sum_{i=1}^{n} (Y_i - \bar{Y})(X_i - \bar{X}) - b_1^2\sum_{i=1}^{n}(X_i - \bar{X})^2 \\
&= b_1 \frac{\sum_{i=1}^{n}(Y_i -\bar{Y})(X_i - \bar{X})}{\sum_{i=1}^{n}(X_i - \bar{X})^2} \sum_{i=1}^{n}(X_i - \bar{X})^2 - b_1^2\sum_{i=1}^{n}(X_i - \bar{X})^2 \\
&= b_1^2 \sum_{i=1}^{n}(X_i - \bar{X})^2 - b_1^2 \sum_{i=1}^{n}(X_i - \bar{X})^2 \\
&= 0
\end{aligned}
$$
and
$$
\begin{aligned}
SSTO &= SSR + SSE \\
(n-1 d.f) &= (1 d.f.) + (n-2 d.f.)
\end{aligned}
$$
| Source of Variation | Sum of Squares | df | Mean Square | F |
|---------------------|----------------|-------|--------------|---------|
| Regression (model) | SSR | $1$ | MSR = SSR/df | MSR/MSE |
| Error | SSE | $n-2$ | MSE = SSE/df | |
| Total (Corrected) | SSTO | $n-1$ | | |
$$
\begin{aligned}
E(MSE) &= \sigma^2 \\
E(MSR) &= \sigma^2 + \beta_1^2 \sum_{i=1}^{n} (X_i - \bar{X})^2
\end{aligned}
$$
- If $\beta_1 = 0$, then these two expected values are the same\
- if $\beta_1 \neq 0$ then E(MSR) will be larger than E(MSE)
Hence, from the ratio of these two quantities, we can infer something about $\beta_1$.
Distribution theory tells us that if $\epsilon_i \sim iid N(0,\sigma^2)$ and assuming $H_0: \beta_1 = 0$ is true,
$$
\begin{aligned}
\frac{SSE}{\sigma^2} = \frac{(n-2)MSE}{\sigma^2} &\sim \chi_{n-2}^2 \\
\frac{SSR}{\sigma^2} = \frac{MSR}{\sigma^2} &\sim \chi_{1}^2 \text{ if } \beta_1=0
\end{aligned}
$$
where these two chi-square random variables are independent.
Since the ratio of two independent chi-square random variables, each divided by its degrees of freedom, follows an F distribution, we consider:
$$
F = \frac{MSR}{MSE} \sim F_{1,n-2}
$$
when $\beta_1 =0$. Thus, we reject $H_0: \beta_1 = 0$ (or $E(Y_i)$ = constant) at $\alpha$ if
$$
F > F_{1 - \alpha;1,n-2}
$$
this is the only null hypothesis that can be tested with this approach.
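In R, `anova()` on a fitted `lm` reproduces this table, and `summary()` reports the same $F$ statistic; a minimal sketch with simulated data:

```r
# ANOVA F test of H0: beta_1 = 0 for simple linear regression.
set.seed(1)
x <- runif(40); y <- 1 + 2 * x + rnorm(40)
fit <- lm(y ~ x)

anova(fit)              # SSR, SSE, MSR, MSE, and F = MSR/MSE with p-value
summary(fit)$fstatistic # same F statistic with its (1, n-2) degrees of freedom
```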
**Coefficient of Determination**
$$
R^2 = \frac{SSR}{SSTO} = 1- \frac{SSE}{SSTO}
$$
where $0 \le R^2 \le 1$
**Interpretation**: The proportionate reduction of the total variation in $Y$ after fitting a linear model in $X$.
It is not really correct to say that $R^2$ is the "variation in $Y$ explained by $X$".
$R^2$ is related to the correlation coefficient between $Y$ and $X$:
$$
R^2 = (r)^2
$$
where $r= corr(x,y)$ is an estimate of the Pearson correlation coefficient. Also, note
$$
b_1 = \left(\frac{\sum_{i=1}^{n}(Y_i - \bar{Y})^2}{\sum_{i=1}^{n} (X_i - \bar{X})^2}\right)^{1/2} r = \frac{s_y}{s_x} r
$$
**Lack of Fit**
$Y_{11},Y_{21}, \dots ,Y_{n_1,1}$: $n_1$ repeat obs at $X_1$
$Y_{1c},Y_{2c}, \dots ,Y_{n_c,c}$: $n_c$ repeat obs at $X_c$
So, there are $c$ distinct $X$ values.
Let $\bar{Y}_j$ be the mean over replicates for $X_j$
Partition the Error Sum of Squares:
$$
\begin{aligned}
\sum_{j} \sum_{i} (Y_{ij} - \hat{Y}_{ij})^2 &= \sum_{j} \sum_{i} (Y_{ij} - \bar{Y}_j + \bar{Y}_j - \hat{Y}_{ij})^2 \\
&= \sum_{j} \sum_{i} (Y_{ij} - \bar{Y}_j)^2 + \sum_{j} \sum_{i} (\bar{Y}_j - \hat{Y}_{ij})^2 + \text{cross product term} \\
&= \sum_{j} \sum_{i}(Y_{ij} - \bar{Y}_j)^2 + \sum_j n_j (\bar{Y}_j- \hat{Y}_{j})^2 \\
SSE &= SSPE + SSLF \\
\end{aligned}
$$
- SSPE: "pure error sum of squares" has $n-c$ degrees of freedom since we need to estimate $c$ means\
- SSLF: "lack of fit sum of squares" has $c - 2$ degrees of freedom (the number of unique $X$ values - number of parameters used to specify the conditional mean regression model)
$$
\begin{aligned}
MSPE &= \frac{SSPE}{df_{pe}} = \frac{SSPE}{n-c} \\
MSLF &= \frac{SSLF}{df_{lf}} = \frac{SSLF}{c-2}
\end{aligned}
$$
The **F-test for Lack-of-Fit** tests
$$
\begin{aligned}
H_0: Y_{ij} &= \beta_0 + \beta_1 X_i + \epsilon_{ij}, \epsilon_{ij} \sim iid N(0,\sigma^2) \\
H_a: Y_{ij} &= \alpha_0 + \alpha_1 X_i + f(X_i, Z_1,...) + \epsilon_{ij}^*,\epsilon_{ij}^* \sim iid N(0, \sigma^2)
\end{aligned}
$$
$E(MSPE) = \sigma^2$ under either $H_0$, $H_a$
$E(MSLF) = \sigma^2 + \frac{\sum_j n_j(f(X_j, Z_1, \dots))^2}{c-2}$ in general, and
$E(MSLF) = \sigma^2$ when $H_0$ is true
We reject $H_0$ (i.e., the model $Y_i = \beta_0 + \beta_1 X_i + \epsilon_i$ is not adequate) if
$$
F = \frac{MSLF}{MSPE} > F_{1-\alpha;c-2,n-c}
$$
Failing to reject $H_0$ does not imply that $H_0: Y_{ij} = \beta_0 + \beta_1 X_i + \epsilon_{ij}$ is exactly true, but it suggests that this model may provide a reasonable approximation to the true model.
| Source of Variation | Sum of Squares | df | Mean Square | F |
|---------------------|----------------|-------|-------------|-------------|
| Regression | SSR | $1$ | MSR | MSR / MSE |
| Error | SSE | $n-2$ | MSE | |
| Lack of fit | SSLF | $c-2$ | MSLF | MSLF / MSPE |
| Pure Error | SSPE | $n-c$ | MSPE | |
| Total (Corrected) | SSTO | $n-1$ | | |
Repeat observations have an effect on $R^2$:
- It is impossible for $R^2$ to attain 1 when repeat obs. exist (SSE can't be 0)
- The maximum $R^2$ attainable in this situation:
$$
R^2_{max} = \frac{SSTO - SSPE}{SSTO}
$$
- Not all levels of X need have repeat observations.
- Typically, when $H_0$ is appropriate, one still uses MSE as the estimate of $\sigma^2$ rather than MSPE since MSE has more degrees of freedom; sometimes people will pool these two estimates.
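With replicates at some $X$ levels, the lack-of-fit test can be carried out in R by comparing the straight-line fit to the saturated cell-means fit `lm(y ~ factor(x))`; a minimal sketch with simulated data:

```r
# Lack-of-fit F test: compare the linear model to the model with a separate
# mean at each distinct X level (requires replicates at some X values).
set.seed(1)
x <- rep(c(1, 2, 3, 4, 5), each = 4)           # c = 5 distinct levels, n = 20
y <- 2 + 0.8 * x + rnorm(length(x), sd = 0.5)  # simulated, illustrative

reduced <- lm(y ~ x)            # H0: straight-line mean function
full    <- lm(y ~ factor(x))    # separate mean at each X level (pure error)
anova(reduced, full)            # F = MSLF / MSPE with (c-2, n-c) df
```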
**Joint Inference**\
If $\beta_0$ and $\beta_1$ each get their own $(1-\alpha)$ interval, the confidence coefficient for both considered simultaneously is $\le 1 - \alpha$.
Let
- $\bar{A}_1$ be the event that the first interval covers $\beta_0$
- $\bar{A}_2$ be the event that the second interval covers $\beta_1$
- $A_1, A_2$ be the complementary events (the intervals fail to cover), so $P(A_1) = P(A_2) = \alpha$
$$
\begin{aligned}
P(\bar{A}_1) &= 1 - \alpha \\
P(\bar{A}_2) &= 1 - \alpha
\end{aligned}
$$
The probability that both $\bar{A}_1$ and $\bar{A}_2$ occur is
$$
\begin{aligned}
P(\bar{A}_1 \cap \bar{A}_2) &= 1 - P(A_1 \cup A_2) \\
&= 1 - P(A_1) - P(A_2) + P(A_1 \cap A_2) \\
&\ge 1 - P(A_1) - P(A_2) \\
&= 1 - 2\alpha
\end{aligned}
$$
If $\beta_0$ and $\beta_1$ have separate 95% confidence intervals, the joint (family) confidence coefficient is at least $1 - 2(0.05) = 0.9$. This is called a **Bonferroni Inequality**
If we instead use a procedure that obtains $1-\alpha/2$ confidence intervals for the two regression parameters separately, then the joint (Bonferroni) family confidence coefficient is at least $1- \alpha$.
The $1-\alpha$ joint Bonferroni confidence interval for $\beta_0$ and $\beta_1$ is given by calculating:
$$
\begin{aligned}
b_0 &\pm B s(b_0) \\
b_1 &\pm B s(b_1)
\end{aligned}
$$
where $B= t_{1-\alpha/4;n-2}$
Interpretation: If repeated samples were taken and the joint $(1-\alpha)$ intervals for $\beta_0$ and $\beta_1$ were obtained, at least $(1-\alpha)100$% of the joint intervals would contain the true pair $(\beta_0, \beta_1)$. That is, in at most $\alpha \times 100$% of the samples, one or both intervals would not contain the true value.
- The Bonferroni interval is **conservative**. It is a lower bound and the joint intervals will tend to be correct more than $(1-\alpha)100$% of the time (lower power). People usually consider a larger $\alpha$ for the Bonferroni joint tests (e.g, $\alpha=0.1$)
- The Bonferroni procedure extends to testing more than 2 parameters. Say we are interested in testing $\beta_0,\beta_1,..., \beta_{g-1}$ (g parameters to test). Then, the joint Bonferroni interval is obtained by calculating the $(1-\alpha/g)$ 100% level interval for each separately.
- For example, if $\alpha = 0.05$ and $g=10$, each individual test is done at the $1- \frac{.05}{10}$ level. For 2-sided intervals, this corresponds to using $t_{1-\frac{0.05}{2(10)};n-p}$ in the CI formula. This procedure works best if g is relatively small, otherwise the intervals for each individual parameter are very wide and the test is way too conservative.
- $b_0,b_1$ are usually correlated (negatively if $\bar{X} >0$ and positively if $\bar{X}<0$)
- Other multiple comparison procedures are available.
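A minimal sketch of joint Bonferroni intervals in R: request individual intervals at level $1 - \alpha/g$ from `confint()` (here $g = 2$; data simulated for illustration):

```r
# Joint Bonferroni intervals for (beta_0, beta_1): each interval at level
# 1 - alpha/g so the family coverage is at least 1 - alpha.
set.seed(1)
x <- runif(30); y <- 1 + 2 * x + rnorm(30)
fit <- lm(y ~ x)

alpha <- 0.10; g <- 2
confint(fit, level = 1 - alpha / g)   # uses t_{1 - alpha/(2g); n-2}
```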
#### Assumptions
- Linearity of the regression function
- Error terms have constant variance
- Error terms are independent
- No outliers
- Error terms are normally distributed
- No Omitted variables
#### Diagnostics
- Constant Variance
    - plot residuals vs. $X$
- Outliers
    - plot residuals vs. $X$
    - box plots
    - stem-and-leaf plots
    - scatter plots
We can standardize the residuals to have unit variance; these standardized residuals are called studentized residuals:
$$
r_i = \frac{e_i -\bar{e}}{s(e_i)} = \frac{e_i}{s(e_i)}
$$
A simplified standardization procedure gives semi-studentized residuals:
$$
e_i^* = \frac{e_i - \bar{e}}{\sqrt{MSE}} = \frac{e_i}{\sqrt{MSE}}
$$
**Non-independence of Error Terms**
- plot residuals vs. time
Residuals $e_i$ are not independent random variables because they involve the fitted values $\hat{Y}_i$, which are based on the same fitted regression function.
If the sample size is large, the dependency among $e_i$ is relatively unimportant.
To detect non-independence, it helps to plot the residual for the $i$-th response vs. the $(i-1)$-th
**Non-normality of Error Terms**
To detect non-normality, examine distribution plots, box plots, stem-and-leaf plots, and normal probability plots of the residuals.
- Need relatively large sample sizes.
- Other types of departure affect the distribution of the residuals (wrong regression function, non-constant error variance,...)
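Most of these residual plots are one-liners in base R; the sketch below (simulated data, illustrative names) also extracts internally and externally studentized residuals:

```r
# Basic residual diagnostics for a fitted lm.
set.seed(1)
x <- runif(50); y <- 1 + 2 * x + rnorm(50)
fit <- lm(y ~ x)

plot(x, resid(fit))           # residuals vs. X: linearity / constant variance
plot(fitted(fit), resid(fit)) # residuals vs. fitted values
qqnorm(resid(fit)); qqline(resid(fit))  # normal probability plot

rstandard(fit)                # internally studentized residuals
rstudent(fit)                 # externally studentized (deleted) residuals
```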
##### Objective Tests of Model Assumptions
- Normality
- Use [Methods based on empirical cumulative distribution function] to test on residuals.
- Constancy of error variance
- [Brown-Forsythe Test (Modified Levene Test)](#brown-forsythe-test-modified-levene-test)
- [Breusch-Pagan Test (Cook-Weisberg Test)](#breusch-pagan-test-cook-weisberg-test)
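A minimal sketch of such tests in R, under stated assumptions: `shapiro.test()` for normality of the residuals (one common choice; the ECDF-based tests referenced above are alternatives), `lmtest::bptest()` for the Breusch-Pagan test if the `lmtest` package is installed, and a hand-coded Brown-Forsythe test that splits the residuals at the median of $X$:

```r
# Objective checks of normality and constant variance (illustrative data).
set.seed(1)
x <- runif(60); y <- 1 + 2 * x + rnorm(60)
fit <- lm(y ~ x)
e   <- resid(fit)

shapiro.test(e)                        # normality of residuals

# lmtest::bptest(fit)                  # Breusch-Pagan test (needs lmtest)

# Brown-Forsythe (modified Levene): split residuals at median(x), then
# t-test on the absolute deviations from each group's median residual.
grp <- x > median(x)
d   <- abs(e - ave(e, grp, FUN = median))
t.test(d ~ grp, var.equal = TRUE)
```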
#### Remedial Measures
If the simple linear regression model is not appropriate, one can use:

- more complicated models
- transformations of $X$ and/or $Y$ (which may not give "optimal" results)

Remedial measures based on the type of deviation:

- Non-linearity:
    - [Transformations]
    - more complicated models
- Non-constant error variance:
    - [Weighted Least Squares]
    - [Transformations]
- Correlated errors:
    - serially correlated error models (time series)
- Non-normality:
    - [Transformations] (non-normality often occurs with non-constant error variance; transform to stabilize the variance first, then check normality)
- Omitted variables:
    - additional variables (multiple regression)
- Outliers:
    - robust estimation
##### Transformations
Use transformations of one or both variables before performing the regression analysis.\
The properties of least squares estimates apply to the transformed regression, not the original variables.
If we transform the Y variable and perform regression to get:
$$
g(Y_i) = b_0 + b_1 X_i
$$
Transform back:
$$
\hat{Y}_i = g^{-1}(b_0 + b_1 X_i)
$$
$\hat{Y}_i$ will be biased; we can correct this bias.
**Box-Cox Family Transformations**
$$
Y'= Y^{\lambda}
$$
where $\lambda$ is a parameter to be determined from the data.
| $\lambda$ | $Y'$ |
|:---------:|:------------:|
| 2 | $Y^2$ |
| 0.5 | $\sqrt{Y}$ |
| 0 | $ln(Y)$ |
| -0.5 | $1/\sqrt{Y}$ |
| -1 | $1/Y$ |
To pick $\lambda$, we can do estimation by:
- trial and error
- maximum likelihood
- numerical search
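If the `MASS` package is available, `boxcox()` profiles the log-likelihood over a grid of $\lambda$ values for a fitted model; a minimal sketch with simulated, right-skewed data:

```r
# Choosing the Box-Cox lambda by maximum likelihood with MASS::boxcox().
# Assumes the MASS package is installed; data are simulated for illustration.
library(MASS)
set.seed(1)
x <- runif(100, 1, 10)
y <- exp(0.3 + 0.2 * x + rnorm(100, sd = 0.2))  # positive, right-skewed response

bc <- boxcox(lm(y ~ x), lambda = seq(-2, 2, by = 0.1))
(lambda_hat <- bc$x[which.max(bc$y)])  # lambda maximizing the profile likelihood
# lambda_hat near 0 suggests a log transformation of Y
```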
**Variance Stabilizing Transformations**
A general method for finding a variance stabilizing transformation, when the standard deviation is a function of the mean, is the **delta method** - an application of a Taylor series expansion.
$$
\sigma = \sqrt{var(Y)} = f(\mu)
$$
where $\mu = E(Y)$ and $f(\mu)$ is some smooth function of the mean.
Consider the transformation $h(Y)$. Expand this function in a Taylor series about $\mu$. Then,
$$
h(Y) = h(\mu) + h'(\mu)(Y-\mu) + \text{small terms}
$$
we want to select the function h(.) so that the variance of h(Y) is nearly constant for all values of $\mu= E(Y)$:
$$
\begin{aligned}
const &= var(h(Y)) \\
&= var(h(\mu) + h'(\mu)(Y-\mu)) \\
&= (h'(\mu))^2 var(Y-\mu) \\
&= (h'(\mu))^2 var(Y) \\
&= (h'(\mu))^2(f(\mu))^2 \\
\end{aligned}
$$
we must have,
$$
h'(\mu) \propto \frac{1}{f(\mu)}
$$
then,
$$
h(\mu) = \int\frac{1}{f(\mu)}d\mu
$$
Example: For the Poisson distribution: $\sigma^2 = var(Y) = E(Y) = \mu$
Then,
$$
\begin{aligned}
\sigma = f(\mu) &= \sqrt{\mu} \\
h'(\mu) &\propto \frac{1}{\sqrt{\mu}} = \mu^{-1/2}
\end{aligned}
$$
Then, the variance stabilizing transformation is:
$$
h(\mu) = \int \mu^{-1/2} d\mu = 2 \sqrt{\mu} \propto \sqrt{\mu}
$$
hence, $\sqrt{Y}$ is used as the variance stabilizing transformation.
If we don't know $f(\mu)$
1. Trial and error. Look at residuals plots
2. Ask researchers about previous studies or find published results on similar experiments and determine what transformation was used.
3. If you have multiple observations $Y_{ij}$ at the same $X$ values, compute $\bar{Y}_i$ and $s_i$ and plot them (see the sketch below).\
If $s_i \propto \bar{Y}_i^{\lambda}$, then consider $s_i = a \bar{Y}_i^{\lambda}$, or equivalently $ln(s_i) = ln(a) + \lambda ln(\bar{Y}_i)$. So regressing the natural log of $s_i$ on the natural log of $\bar{Y}_i$ gives $\hat{a}$ and $\hat{\lambda}$ and suggests the form of $f(\mu)$. If we don't have multiple observations, we might still be able to "group" the observations to get $\bar{Y}_i$ and $s_i$.
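A minimal sketch of step 3 with simulated Poisson-type data (where the variance equals the mean, so the estimated $\lambda$ should come out near 0.5, suggesting the $\sqrt{Y}$ transformation):

```r
# Estimate lambda in s_i ~ a * ybar_i^lambda by regressing log(s_i) on
# log(ybar_i) across groups of replicate observations (illustrative data).
set.seed(1)
x <- rep(1:6, each = 8)
y <- rpois(length(x), lambda = 3 * x)   # Poisson-type data: var = mean

ybar <- tapply(y, x, mean)
s    <- tapply(y, x, sd)
coef(lm(log(s) ~ log(ybar)))            # slope near 0.5 suggests sqrt(Y)
```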
| Transformation | Situation | Comments |
|---------------------------|----------------------------------------|----------------------------------------|
| $\sqrt{Y}$ | $var(\epsilon_i) = k E(Y_i)$ | counts from Poisson dist |
| $\sqrt{Y} + \sqrt{Y+1}$ | $var(\epsilon_i) = k E(Y_i)$ | small counts or zeroes |
| $log(Y)$ | $var(\epsilon_i) = k (E(Y_i))^2$ | positive integers with wide range |
| $log(Y+1)$ | $var(\epsilon_i) = k(E(Y_i))^2$ | some counts zero |
| $1/Y$ | $var(\epsilon_i) = k(E(Y_i))^4$ | most responses near zero, others large |
| $arcsin(\sqrt{Y})$ | $var(\epsilon_i) = k E(Y_i)(1-E(Y_i))$ | data are binomial proportions or % |
### Multiple Linear Regression
Geometry of Least Squares
$$
\begin{aligned}
\mathbf{\hat{y}} &= \mathbf{Xb} \\
&= \mathbf{X(X'X)^{-1}X'y} \\
&= \mathbf{Hy}
\end{aligned}
$$
sometimes $\mathbf{H}$ is denoted as $\mathbf{P}$.
$\mathbf{H}$ is the projection operator.\
$$
\mathbf{\hat{y}= Hy}
$$
is the projection of y onto the linear space spanned by the columns of $\mathbf{X}$ (model space). The dimension of the model space is the rank of $\mathbf{X}$.
Facts:
1. $\mathbf{H}$ is symmetric (i.e., $\mathbf{H} = \mathbf{H}'$)\
2. $\mathbf{HH} = \mathbf{H}$
$$
\begin{aligned}
\mathbf{HH} &= \mathbf{X(X'X)^{-1}X'X(X'X)^{-1}X'} \\
&= \mathbf{X(X'X)^{-1}IX'} \\
&= \mathbf{X(X'X)^{-1}X'}
\end{aligned}
$$
3. $\mathbf{H}$ is an $n \times n$ matrix with $rank(\mathbf{H}) = rank(\mathbf{X})$\
4. $\mathbf{(I-H) = I - X(X'X)^{-1}X'}$ is also a projection operator. It projects onto the $n - k$ dimensional space that is orthogonal to the $k$ dimensional space spanned by the columns of $\mathbf{X}$
5. $\mathbf{H(I-H)=(I-H)H = 0}$
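These facts are easy to verify numerically; a minimal sketch that builds $\mathbf{H}$ from an illustrative design matrix:

```r
# Construct the hat (projection) matrix and check symmetry and idempotence.
set.seed(1)
X <- cbind(1, runif(10), runif(10))     # n x p design matrix with intercept
H <- X %*% solve(t(X) %*% X) %*% t(X)

all.equal(H, t(H))                      # H is symmetric
all.equal(H, H %*% H)                   # H is idempotent: HH = H
sum(diag(H))                            # trace(H) = rank(X) = p = 3

y    <- rnorm(10)
yhat <- H %*% y                         # same as fitted(lm(y ~ X[, -1]))
```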
Partition of uncorrected total sum of squares:
$$
\begin{aligned}
\mathbf{y'y} &= \mathbf{\hat{y}'\hat{y} + e'e} \\
&= \mathbf{(Hy)'(Hy) + ((I-H)y)'((I-H)y)} \\
&= \mathbf{y'H'Hy + y'(I-H)'(I-H)y} \\
&= \mathbf{y'Hy + y'(I-H)y} \\
\end{aligned}
$$
or partition for the corrected total sum of squares:
$$
\mathbf{y'(I-H_1)y = y'(H-H_1)y + y'(I-H)y}
$$
where $\mathbf{H}_1 = \frac{1}{n} \mathbf{J} = \mathbf{1 (1'1)^{-1} 1'}$ and $\mathbf{1}$ is the $n \times 1$ vector of ones
| Source | SS | df | MS | F |
|------------|---------------------------------------|---------|--------------|------------|
| Regression | $SSR = \mathbf{y' (H-\frac{1}{n}J)y}$ | $p - 1$ | $SSR/(p-1)$ | $MSR /MSE$ |
| Error | $SSE = \mathbf{y'(I - H)y}$ | $n - p$ | $SSE /(n-p)$ | |
| Total | $\mathbf{y'y - y'Jy/n}$ | $n -1$ | | |
Equivalently, we can express
$$
\mathbf{Y = X\hat{\beta} + (Y - X\hat{\beta})}
$$
where
- $\mathbf{\hat{Y} = X \hat{\beta}}$ is the vector of fitted values
- $\mathbf{e = ( Y - X \hat{\beta})}$ is the vector of residuals
- $\mathbf{Y}$ is an $n \times 1$ vector in the $n$-dimensional space $R^n$
- $\mathbf{X}$ is an $n \times p$ full-rank matrix, and its columns generate a $p$-dimensional subspace of $R^n$. Hence, any estimator $\mathbf{X \hat{\beta}}$ is also in this subspace.
We choose the least squares estimator $\hat{\beta}$ that minimizes the distance between $\mathbf{Y}$ and $\mathbf{X \hat{\beta}}$, making $\mathbf{X\hat{\beta}}$ the **orthogonal projection** of $\mathbf{Y}$ onto the column space of $\mathbf{X}$.
$$
\begin{aligned}
\|\mathbf{Y} - \mathbf{X}\hat{\beta}\|^2 + \|\mathbf{X} \hat{\beta}\|^2 &= \mathbf{(Y - X\hat{\beta})'(Y - X\hat{\beta}) + (X \hat{\beta})'(X \hat{\beta})} \\
&= \mathbf{(Y - X\hat{\beta})'Y - (Y - X\hat{\beta})'X\hat{\beta} + \hat{\beta}' X'X\hat{\beta}} \\
&= \mathbf{(Y-X\hat{\beta})'Y + \hat{\beta}'X'X(X'X)^{-1}X'Y} \\
&= \mathbf{Y'Y - \hat{\beta}'X'Y + \hat{\beta}'X'Y} \\
&= \mathbf{Y'Y} = \|\mathbf{Y}\|^2
\end{aligned}
$$
where the third equality uses the normal equations $\mathbf{(Y - X\hat{\beta})'X = 0}$ and substitutes $\hat{\beta}=\mathbf{(X'X)^{-1}X'Y}$, and where the norm of a $(p \times 1)$ vector $\mathbf{a}$ is defined by:
$$
\mathbf{||a|| = \sqrt{a'a}} = \sqrt{\sum_{i=1}^p a^2_i}
$$
Coefficient of Multiple Determination
$$
R^2 = \frac{SSR}{SSTO}= 1- \frac{SSE}{SSTO}
$$
Adjusted Coefficient of Multiple Determination
$$
R^2_a = 1 - \frac{SSE/(n-p)}{SSTO/(n-1)} = 1 - \frac{(n-1)SSE}{(n-p)SSTO}
$$
Sequential and Partial Sums of Squares:
In a regression model with coefficients $\beta = (\beta_0, \beta_1,...,\beta_{p-1})'$, we denote the uncorrected and corrected SS by
$$
\begin{aligned}
SSM &= SS(\beta_0, \beta_1,...,\beta_{p-1}) \\
SSM_m &= SS(\beta_1,...,\beta_{p-1}|\beta_0)
\end{aligned}
$$
There are 2 decompositions of $SSM_m$:
- **Sequential SS**: (not unique; depends on the order of the predictors; also referred to as Type I SS, and is the default of `anova()` in R)\
$$
SSM_m = SS(\beta_1 | \beta_0) + SS(\beta_2 | \beta_0, \beta_1) + ...+ SS(\beta_{p-1}| \beta_0,...,\beta_{p-2})
$$
- **Partial SS**: (use more in practice - contribution of each given all of the others)
$$
SSM_m = SS(\beta_1 | \beta_0,\beta_2,...,\beta_{p-1}) + ... + SS(\beta_{p-1}| \beta_0, \beta_1,...,\beta_{p-2})
$$
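In R, `anova()` gives the sequential (Type I) decomposition, so the terms change with predictor order; partial SS for each term given all the others can be obtained with `drop1()` (or `car::Anova(type = 2)` if the `car` package is installed). A minimal sketch with simulated, correlated predictors:

```r
# Sequential (Type I) vs. partial sums of squares for a two-predictor model.
set.seed(1)
x1 <- rnorm(50); x2 <- 0.5 * x1 + rnorm(50)   # correlated predictors
y  <- 1 + x1 + 0.5 * x2 + rnorm(50)

anova(lm(y ~ x1 + x2))   # sequential SS: SS(x1 | int), SS(x2 | int, x1)
anova(lm(y ~ x2 + x1))   # different decomposition: order matters

drop1(lm(y ~ x1 + x2), test = "F")       # partial SS: each term given all others
# car::Anova(lm(y ~ x1 + x2), type = 2)  # equivalent, if car is installed
```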
### OLS Assumptions
- [A1 Linearity](#a1-linearity)
- [A2 Full rank](#a2-full-rank)
- [A3 Exogeneity of Independent Variables](#a3-exogeneity-of-independent-variables)
- [A4 Homoskedasticity](#a4-homoskedasticity)
- [A5 Data Generation (random Sampling)](#a5-data-generation-random-sampling)
- [A6 Normal Distribution](#a6-normal-distribution)
#### A1 Linearity {#a1-linearity}
```{=tex}
\begin{equation}
A1: y=\mathbf{x}\beta + \epsilon
(\#eq:A1)
\end{equation}
```
Not restrictive
- $x$ can include nonlinear transformations such as interactions, natural logs, and quadratic terms
Combined with A3 (Exogeneity of Independent Variables), linearity can be restrictive.
##### Log Model
| Model | Form | Interpretation of $\beta$ | In words |
|-------------|----------------------------------------------|---------------------------------------|-----------------------------------------------------------------------|
| Level-Level | $y =\beta_0+\beta_1x+\epsilon$ | $\Delta y = \beta_1 \Delta x$ | A unit change in $x$ results in a $\beta_1$-unit change in $y$ |
| Log-Level | $ln(y) = \beta_0 + \beta_1x + \epsilon$ | $\% \Delta y=100 \beta_1 \Delta x$ | A unit change in $x$ results in a $100 \beta_1$% change in $y$ |
| Level-Log | $y = \beta_0 + \beta_1 ln (x) + \epsilon$ | $\Delta y = (\beta_1/ 100)\%\Delta x$ | A one-percent change in $x$ results in a $\beta_1/100$-unit change in $y$ |
| Log-Log | $ln(y) = \beta_0 + \beta_1 ln(x) +\epsilon$ | $\% \Delta y= \beta_1 \% \Delta x$ | A one-percent change in $x$ results in a $\beta_1$-percent change in $y$ |
##### Higher Orders
$y=\beta_0 + x_1\beta_1 + x_1^2\beta_2 + \epsilon$
$$
\frac{\partial y}{\partial x_1}=\beta_1 + 2x_1\beta_2
$$
- The effect of $x_1$ on y depends on the level of $x_1$
- The partial effect at the average = $\beta_1+2E(x_1)\beta_2$
- Average Partial Effect = $E(\beta_1 + 2x_1\beta_2)$
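A minimal sketch computing the partial effect at the average and the average partial effect for a quadratic term (simulated data, illustrative names; for a quadratic the two coincide, though they generally differ for other nonlinear specifications):

```r
# Partial effects of x1 in y = b0 + b1*x1 + b2*x1^2 + e.
set.seed(1)
x1 <- rnorm(200, mean = 2)
y  <- 1 + 0.5 * x1 - 0.3 * x1^2 + rnorm(200)

fit <- lm(y ~ x1 + I(x1^2))
b   <- coef(fit)

b["x1"] + 2 * mean(x1) * b["I(x1^2)"]   # partial effect at the average (PEA)
mean(b["x1"] + 2 * x1 * b["I(x1^2)"])   # average partial effect (APE)
```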
##### Interactions