---
title: "Main"
author: "Chengliang Tang, Yujie Wang, Tian Zheng"
output:
  html_document:
    df_print: paged
---
In your final repo, there should be an R Markdown file that organizes **all computational steps** for evaluating your proposed facial expression recognition framework.
This file is currently a template for running evaluation experiments. You should update it to reflect your own code while following precisely the same structure.
```{r message=FALSE}
# EBImage is distributed through Bioconductor rather than CRAN.
if(!require("EBImage")){
  install.packages("BiocManager")
  BiocManager::install("EBImage")
}
if(!require("R.matlab")){
  install.packages("R.matlab")
}
if(!require("readxl")){
  install.packages("readxl")
}
if(!require("dplyr")){
  install.packages("dplyr")
}
if(!require("ggplot2")){
  install.packages("ggplot2")
}
if(!require("caret")){
  install.packages("caret")
}

library(R.matlab)
library(readxl)
library(dplyr)
library(EBImage)
library(ggplot2)
library(caret)
```
### Step 0: set work directories
```{r wkdir, eval=FALSE}
set.seed(0)
setwd("~/Project3-FacialEmotionRecognition/doc")
# Replace this with your own path, or set the working directory manually in RStudio
# to the folder containing this .Rmd file. Use relative paths for reproducibility.
```
Provide directories for the training data. Training images and training fiducial points are stored in separate subfolders.
```{r}
train_dir <- "../data/train_set/" # This will be modified for different data sets.
train_image_dir <- paste(train_dir, "images/", sep="")
train_pt_dir <- paste(train_dir, "points/", sep="")
train_label_path <- paste(train_dir, "label.csv", sep="")
```
### Step 1: set up controls for evaluation experiments.
In this chunk, we have a set of controls for the evaluation experiments.
+ (T/F) cross-validation on the training set
+ (number) K, the number of CV folds
+ (T/F) process features for training set
+ (T/F) run evaluation on an independent test set
+ (T/F) process features for test set
```{r exp_setup}
run.cv <- TRUE            # run cross-validation on the training set
K <- 5                    # number of CV folds
run.feature.train <- TRUE # process features for training set
run.test <- TRUE          # run evaluation on an independent test set
run.feature.test <- TRUE  # process features for test set
```
Using cross-validation or independent test set evaluation, we compare the performance of models with different specifications. In this Starter Code, we tune the parameter k (the number of neighbours) for KNN.
```{r model_setup}
k <- c(5, 11, 21, 31, 41, 51)
model_labels <- paste("KNN with k =", k) # lowercase k: neighbours; uppercase K: CV folds
```
### Step 2: import data and train-test split
```{r}
# train-test split
info <- read.csv(train_label_path)
n <- nrow(info)
n_train <- round(n * (4/5), 0)
train_idx <- sample(info$Index, n_train, replace = FALSE)
test_idx <- setdiff(info$Index, train_idx)
```
If you choose to extract features from the raw images, for example with Gabor filters, R will run out of memory if all images are read at once. The solution is to read and process the images in smaller batches (e.g., 100 at a time); the next chunk reads one such batch, and a batching sketch follows it.
```{r}
# As an illustration, read the first 100 training images as one batch.
n_files <- length(list.files(train_image_dir))
image_list <- list()
for(i in 1:100){
  image_list[[i]] <- readImage(paste0(train_image_dir, sprintf("%04d", i), ".jpg"))
}
```
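The same idea extends to the full training set with a loop over batches. The sketch below is illustrative only: `extract_batch_features()` is a hypothetical placeholder for whatever per-image processing (e.g., Gabor filtering) you implement, and is not part of this repo.
```{r batch_sketch, eval=FALSE}
# Hypothetical batched-processing loop (sketch only): read `batch_size` images,
# process them, then discard the raw pixels and keep only the features.
batch_size <- 100
batch_starts <- seq(1, n_files, by = batch_size)
feature_batches <- list()
for(b in seq_along(batch_starts)){
  idx <- batch_starts[b]:min(batch_starts[b] + batch_size - 1, n_files)
  batch <- lapply(idx, function(i){
    readImage(paste0(train_image_dir, sprintf("%04d", i), ".jpg"))
  })
  # extract_batch_features() is a placeholder for your own processing step.
  feature_batches[[b]] <- extract_batch_features(batch)
  rm(batch); gc() # free memory before reading the next batch
}
```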
Fiducial points are stored in Matlab (`.mat`) format. In this step, we read them and store them in a list.
```{r read fiducial points}
# Function to read fiducial points.
# Input: index
# Output: matrix of fiducial points corresponding to the index
readMat.matrix <- function(index){
  return(round(readMat(paste0(train_pt_dir, sprintf("%04d", index), ".mat"))[[1]], 0))
}

# Load fiducial points for all images and save them for reuse.
fiducial_pt_list <- lapply(1:n_files, readMat.matrix)
if(!dir.exists("../output")) dir.create("../output") # make sure the output folder exists
save(fiducial_pt_list, file = "../output/fiducial_pt_list.RData")
```
### Step 3: construct features and responses
+ The following plots show how pairwise distances between fiducial points can serve as features for facial emotion recognition.
+ In the first column, the 78 fiducial points of each emotion are marked in order.
+ In the second column, the distributions of the vertical distance between the right pupil (1) and the right brow peak (21) are shown as histograms. For example, this distance tends to be shorter for an angry face than for a surprised face.
+ The third column shows the distributions of the vertical distance between the right mouth corner (50) and the midpoint of the upper lip (52). For example, this distance tends to be shorter for a happy face than for a sad face.
![Figure1](../figs/feature_visualization.jpg)
`feature.R` should be the wrapper for all your feature engineering functions and options. The function `feature()` should have options corresponding to the different scenarios of your project and produce an R object containing the features and responses required by all the models you will evaluate later (an illustrative sketch follows this list).
+ `feature.R`
+ Input: list of images or fiducial points
+ Output: an RData file that contains extracted features and corresponding responses
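As a rough illustration of what such a wrapper could contain, here is a minimal sketch of a pairwise-distance feature extractor. It assumes each element of `fiducial_pt_list` is a 78-row matrix of (x, y) coordinates and that labels live in `info$emotion_idx`; the name `feature_sketch` and these assumptions are illustrative, not the contents of the actual `../lib/feature.R`.
```{r feature_sketch, eval=FALSE}
# Sketch only: pairwise Euclidean distances between all 78 fiducial points
# yield choose(78, 2) = 3003 features per image.
feature_sketch <- function(fiducial_pt_list, index, info){
  pairwise_dist <- function(mat){
    as.vector(dist(mat)) # lower triangle of the 78 x 78 distance matrix
  }
  feat <- t(sapply(fiducial_pt_list[index], pairwise_dist))
  colnames(feat) <- paste0("feature", 1:ncol(feat))
  data.frame(feat, emotion_idx = as.factor(info$emotion_idx[index]))
}
```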
```{r feature}
source("../lib/feature.R")
tm_feature_train <- NA
if(run.feature.train){
  tm_feature_train <- system.time(dat_train <- feature(fiducial_pt_list, train_idx))
  save(dat_train, file = "../output/feature_train.RData") # save only if features were built
}

tm_feature_test <- NA
if(run.feature.test){
  tm_feature_test <- system.time(dat_test <- feature(fiducial_pt_list, test_idx))
  save(dat_test, file = "../output/feature_test.RData")
}
```
### Step 4: Train a classification model with training features and responses
Call the training and testing functions from the `lib` folder.
`train.R` and `test.R` should be wrappers for all your model training steps and your classification/prediction steps.
+ `train.R`
+ Input: a data frame containing features and labels, plus a parameter list.
+ Output: a trained model.
+ `test.R`
+ Input: an R object that contains a trained classifier (for KNN, the selected parameter) and the processed features from the testing images.
+ Output: predicted labels for the test set.
+ In this Starter Code, we use KNN to do classification (an illustrative sketch follows this list).
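For reference, a KNN `test()` along the lines of `../lib/test_knn.R` might look roughly like the sketch below; the actual file may differ. It assumes `dat_train` and `dat_test` store features with the label column `emotion_idx` last, and uses `class::knn()`.
```{r test_sketch, eval=FALSE}
# Sketch only: KNN predicts test labels directly from the training set,
# so "testing" is just a wrapper around class::knn() with the chosen k.
library(class)
test <- function(k_best, dat_test){
  knn(train = dat_train[, -ncol(dat_train)],
      test  = dat_test[, -ncol(dat_test)],
      cl    = dat_train$emotion_idx,
      k     = k_best)
}
```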
```{r loadlib}
# source("../lib/train.R") # KNN needs no separate training step, so this line is commented out.
source("../lib/test_knn.R")
```
#### Model selection with cross-validation
* Do model selection by choosing among different values of the training model parameter, here the number of neighbours k. An illustrative sketch of such a `cv.function()` is shown below, followed by the actual call.
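As a reference point, a K-fold `cv.function()` like the one sourced from `../lib/cross_validation_knn.R` might look roughly like this sketch (the argument names and last-column label layout are assumptions, not the actual file contents):
```{r cv_sketch, eval=FALSE}
# Sketch only: K-fold cross-validation returning the mean and sd of the
# validation error for a given neighbour count k_knn.
cv.function <- function(dat_train, K, k_knn){
  n <- nrow(dat_train)
  fold_ids <- sample(rep(1:K, length.out = n)) # random fold assignment
  cv_error <- rep(NA, K)
  for(fold in 1:K){
    train_fold <- dat_train[fold_ids != fold, ]
    val_fold   <- dat_train[fold_ids == fold, ]
    pred <- class::knn(train = train_fold[, -ncol(train_fold)],
                       test  = val_fold[, -ncol(val_fold)],
                       cl    = train_fold$emotion_idx,
                       k     = k_knn)
    cv_error[fold] <- mean(pred != val_fold$emotion_idx)
  }
  c(mean(cv_error), sd(cv_error))
}
```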
```{r runcv, eval=F}
source("../lib/cross_validation_knn.R")
if(run.cv){
  err_cv <- matrix(0, nrow = length(k), ncol = 2)
  for(i in 1:length(k)){
    cat("k =", k[i], "\n")
    err_cv[i,] <- cv.function(dat_train, K, k[i])
    save(err_cv, file = "../output/err_cv.RData") # checkpoint after each k
  }
}
```
Visualize cross-validation results.
```{r cv_vis}
if(run.cv){
  load("../output/err_cv.RData")
  err_cv <- as.data.frame(err_cv)
  colnames(err_cv) <- c("mean_error", "sd_error")
  err_cv$k <- as.factor(k)
  err_cv %>%
    ggplot(aes(x = k, y = mean_error,
               ymin = mean_error - sd_error, ymax = mean_error + sd_error)) +
    geom_crossbar() +
    theme(axis.text.x = element_text(angle = 90, hjust = 1))
}
```
* Choose the "best" parameter value
```{r best_model}
model_best <- k[1] # fallback so par_best is defined even if cross-validation is skipped
if(run.cv){
  model_best <- k[which.min(err_cv[,1])]
}
par_best <- list(k = model_best)
```
* Train the model on the entire training set using the parameter value selected via cross-validation.
```{r final_train}
# KNN has no separate training step, so these lines stay commented out;
# for a model that does require training, uncomment and adapt them.
# tm_train <- NA
# tm_train <- system.time(fit_train <- train(dat_train, par_best))
# save(fit_train, file = "../output/fit_train.RData")
```
### Step 5: Run test on test images
```{r test}
tm_test <- NA
if(run.test){
  # load(file = "../output/fit_train.RData") # no fitted model is saved for KNN
  tm_test <- system.time(pred <- test(model_best, dat_test))
}
```
* Evaluation
```{r}
accu <- mean(dat_test$emotion_idx == pred)
cat("The accuracy of the model", model_labels[which.min(err_cv[,1])], "is", accu*100, "%.\n")
confusionMatrix(pred, dat_test$emotion_idx) # caret was loaded at the top
```
Note that the accuracy is not high, but it is well above random guessing (4.5%, i.e., roughly 1/22 for the 22 emotion categories).
### Summarize Running Time
Prediction performance matters, and so do the running times for constructing features and training the model, especially when computational resources are limited.
```{r running_time}
cat("Time for constructing training features=", tm_feature_train[1], "s \n")
cat("Time for constructing testing features=", tm_feature_test[1], "s \n")
#cat("Time for training model=", tm_train[1], "s \n")
cat("Time for testing model=", tm_test[1], "s \n")
```
### Reference
- Du, S., Tao, Y., & Martinez, A. M. (2014). Compound facial expressions of emotion. *Proceedings of the National Academy of Sciences*, 111(15), E1454-E1462.