---
title: "10.6 Lab 3: NCI60 Data Example"
output:
  github_document:
    md_extensions: -fancy_lists+startnum
  html_notebook:
    md_extensions: -fancy_lists+startnum
---
```{r setup, message=FALSE, warning=FALSE}
library(ISLR)
library(tidyverse)
```
Unsupervised techniques are often used in the analysis of genomic data. In particular, PCA and hierarchical clustering are popular tools. We illustrate these techniques on the `NCI60` cancer cell line microarray data, which consists of 6,830 gene expression measurements on 64 cancer cell lines.
```{r}
nci_labs <- NCI60$labs
nci_data <- NCI60$data
```
Each cell line is labeled with a cancer type. We do not make use of the cancer types in performing PCA and clustering, as these are unsupervised techniques. But after performing PCA and clustering, we will check to see the extent to which these cancer types agree with the results of these unsupervised techniques.
```{r}
dim(nci_data)
```
We begin by examining the cancer types for the cell lines.
```{r}
nci_labs[1:4]
```
```{r}
nci_labs %>% table()
```
## 10.6.1 PCA on the NCI60 Data
We first perform PCA on the data after scaling the variables (genes) to have standard deviation one, although one could reasonably argue that it is better not to scale the genes.
```{r}
pr_out <- prcomp(nci_data, scale = TRUE)
# Keep the scores as a tibble; each row is a cell line (observation).
pr_out_x <- pr_out$x %>% as_tibble(rownames = "cell_line")
```
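As a point of comparison, here is a minimal sketch of the unscaled alternative alluded to above (the object name `pr_out_unscaled` is introduced here for illustration). Without scaling, the leading components are dominated by the highest-variance genes.
```{r}
# PCA without scaling, for comparison only (not used below).
pr_out_unscaled <- prcomp(nci_data, scale = FALSE)
# PVE of the first five components under this alternative.
summary(pr_out_unscaled)$importance[2, 1:5]
```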
We now plot the first few principal component score vectors, in order to visualize the data. The observations (cell lines) corresponding to a given cancer type will be plotted in the same color, so that we can see to what extent the observations within a cancer type are similar to each other.
```{r}
qplot(PC1, PC2, color = nci_labs, data = pr_out_x)
```
```{r}
qplot(PC2, PC3, color = nci_labs, data = pr_out_x)
```
On the whole, cell lines corresponding to a single cancer type do tend to have similar values on the first few principal component score vectors. This indicates that cell lines from the same cancer type tend to have broadly similar gene expression levels.
We can obtain a summary of the proportion of variance explained (PVE) of the first few principal components using the `summary()` method for a
`prcomp` object:
```{r}
summary(pr_out)
```
```{r}
plot(pr_out)
```
Note that the height of each bar in the bar plot is given by squaring the corresponding element of `pr_out$sdev` (we verify this relationship below). However, it is more informative to plot the PVE of each principal component (i.e. a scree plot) and the cumulative PVE of each principal component. This can be done with just a little work.
```{r}
pve <- summary(pr_out)$importance[2, ]
cum_pve <- cumsum(pve)
qplot(seq_along(pve), pve, geom = "line") +
  geom_point() +
  labs(x = "Principal Component",
       y = "PVE")
```
```{r}
qplot(seq_along(cum_pve), cum_pve, geom = "line") +
  geom_point() +
  labs(x = "Principal Component",
       y = "Cumulative PVE")
```
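As a quick check of the relationship noted above, the PVE can also be computed directly from `pr_out$sdev`: each component's variance (its squared standard deviation) divided by the total variance. The loose tolerance is there because `summary()` stores a rounded importance matrix.
```{r}
# PVE by hand: squared sdev of each PC over the total variance.
pve_by_hand <- pr_out$sdev^2 / sum(pr_out$sdev^2)
all.equal(unname(pve), pve_by_hand, tolerance = 1e-3)
```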
We see that together, the first seven principal components explain around 40% of the variance in the data. This is not a huge amount of the variance. However, looking at the scree plot, we see that while each of the first seven principal components explain a substantial amount of variance, there is a marked decrease in the variance explained by further principal components. That is, there is an elbow in the plot after approximately the seventh principal component. This suggests that there may be little benefit to examining more than seven or so principal components (though even examining seven principal components may be difficult).
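We can confirm the figure quoted above by printing the cumulative PVE through the seventh component:
```{r}
cum_pve[7]
```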
## 10.6.2 Clustering the Observations of the NCI60 Data
We now proceed to hierarchically cluster the cell lines in the `NCI60` data, with the goal of finding out whether or not the observations cluster into distinct types of cancer. To begin, we standardize the variables to have mean zero and standard deviation one. As mentioned earlier, this step is optional and should be performed only if we want each gene to be on the same scale.
```{r}
st_data <- scale(nci_data) %>%
  as_tibble()
```
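A quick sanity check on a few genes confirms the standardization:
```{r}
# After scale(), each column should have mean ~0 and sd 1.
round(colMeans(st_data[, 1:3]), 10)
apply(st_data[, 1:3], 2, sd)
```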
We now perform hierarchical clustering of the observations using complete, single, and average linkage. Euclidean distance is used as the dissimilarity measure.
```{r, fig.height=7}
data_dist <- dist(st_data)
plot(hclust(data_dist), labels = nci_labs, main = "Complete Linkage")
```
```{r, fig.height=7}
plot(hclust(data_dist, method = "average"),
     labels = nci_labs, main = "Average Linkage")
```
```{r, fig.height=7}
plot(hclust(data_dist, method = "single"),
     labels = nci_labs, main = "Single Linkage")
```
Typically, single linkage will tend to yield trailing clusters: very large clusters onto which individual observations attach one-by-one. On the other hand, complete and average linkage tend to yield more balanced, attractive clusters. For this reason, complete and average linkage are generally preferred to single linkage. Clearly cell lines within a single cancer type do tend to cluster together, although the clustering is not perfect. We will use complete linkage hierarchical clustering for the analysis that follows.
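One rough way to see this difference on the `NCI60` data is to cut the complete- and single-linkage trees into four clusters each (using `cutree()`, introduced just below) and cross-tabulate the assignments; the names `hc_complete_4` and `hc_single_4` are introduced here for illustration. The single-linkage cut typically concentrates almost all observations in one large cluster.
```{r}
# Compare four-cluster cuts of the complete- and single-linkage trees.
hc_complete_4 <- cutree(hclust(data_dist), 4)
hc_single_4 <- cutree(hclust(data_dist, method = "single"), 4)
table(hc_complete_4, hc_single_4)
```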
We can cut the dendrogram at the height that will yield a particular number of clusters, say four:
```{r}
hc_out <- hclust(data_dist)
hc_clusters <- cutree(hc_out, 4)
table(hc_clusters, nci_labs)
```
There are some clear patterns. All the leukemia cell lines fall in cluster 3, while the breast cancer cell lines are spread out over three different clusters.
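We can pull those two patterns out directly from the cluster vector:
```{r}
# Cluster assignments of the leukemia and breast cancer cell lines.
hc_clusters[nci_labs == "LEUKEMIA"]
hc_clusters[nci_labs == "BREAST"]
```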
We can plot the cut on the dendrogram that produces these four clusters:
```{r, fig.width=10}
par(mfrow = c(1,1))
plot(hc_out, labels = nci_labs)
abline(h = 139, col = "red")
```
The argument `h=139` plots a horizontal line at height 139 on the dendrogram; this is the height that results in four distinct clusters.
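Equivalently, `cutree()` can cut by height directly; cutting at `h = 139` should reproduce the four-cluster solution obtained above:
```{r}
identical(cutree(hc_out, h = 139), cutree(hc_out, k = 4))
```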
Printing the output of `hclust` gives a useful brief summary of the object:
```{r}
hc_out
```
We claimed earlier in Section 10.3.2 that K-means clustering and hierarchical clustering with the dendrogram cut to obtain the same number of clusters can yield very different results. How do these `NCI60` hierarchical clustering results compare to what we get if we perform K-means clustering with K=4?
```{r}
set.seed(2)
km_out <- kmeans(st_data, 4, nstart = 20)
km_clusters <- km_out$cluster
table(km_clusters, hc_clusters)
```
Cluster 4 in K-means clustering is identical to cluster 3 in hierarchical clustering, but the other clusters differ.
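We can verify that correspondence directly (the cluster numbers refer to this particular run; a different seed or R version may permute them):
```{r}
# The cell lines in K-means cluster 4 versus HC cluster 3.
setequal(which(km_clusters == 4), which(hc_clusters == 3))
```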
Rather than performing hierarchical clustering on the entire data matrix, we can simply perform hierarchical clustering on the first few principal component score vectors, as follows:
```{r, fig.width=10}
hc_out_2 <- hclust(dist(pr_out$x[, 1:5]))
plot(hc_out_2, labels = nci_labs,
     main = "HC on first 5 principal components")
```
```{r}
table(cutree(hc_out_2, 4), nci_labs)
```
Not surprisingly, these results are different from the ones that we obtained when we performed hierarchical clustering on the full data set. Sometimes performing clustering on the first few principal component score vectors can give better results than performing clustering on the full data. In this situation, we might view the principal component step as one of denoising the data. We could also perform K-means clustering on the first few principal component score vectors rather than the full data set.
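For completeness, here is a minimal sketch of that last variant (the object name `km_out_pc` is introduced here for illustration): K-means with K = 4 on the first five principal component score vectors rather than the full 6,830-gene matrix.
```{r}
set.seed(2)
# K-means on the first five principal component score vectors.
km_out_pc <- kmeans(pr_out$x[, 1:5], 4, nstart = 20)
table(km_out_pc$cluster, nci_labs)
```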