Results of working with the Facial Keypoints Detection dataset from a Kaggle competition (link in the Resources section below). The project covers data preprocessing, model training, and evaluating and improving the models' parameters with the Hyperas package.
First, download the dataset following the instructions at the URL in the Data sources section of this README. Next, fetch Facial-Keypoints-Detection with git, unzip the downloaded data, and open the project folder in your terminal:
```
git clone https://github.com/gwiazale/Facial-Keypoints-Detection.git
```
After that, just open the project in your favourite environment.
- Import packages
- Data preparation:
  - Loading data
  - Describing data
  - Describing the new datasets after fixing null values:
    - Filling null values with the mean value of each column in the DataFrame
    - Deleting rows with null values
    - Deleting columns where the percentage of null values is greater than 1%; the rest is filled with the mean value of the column
  - Results of fixing null values:
    - Filling null values with the mean value of each column in the DataFrame
    - Deleting rows with null values
    - Deleting columns where the percentage of null values is greater than 1%; the rest is filled with the mean value of the column
- Models for:
  - Filling null values with the mean value of each column in the DataFrame
    - Model evaluation
  - Deleting rows with null values
    - Model evaluation
  - Deleting columns where the percentage of null values is greater than 1%; the rest is filled with the mean value of the column
    - Model evaluation
- Submission results

Parameters of the models were improved with my Google Colab project, which contains a script for choosing the best parameters using the Hyperas package.
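The three null-value strategies listed above can be sketched with pandas. The column names and values below are hypothetical stand-ins for the dataset's keypoint columns, not the real data:

```python
import numpy as np
import pandas as pd

# Toy frame standing in for the keypoints DataFrame; columns are hypothetical.
df = pd.DataFrame({
    "left_eye_x": [66.0, np.nan, 65.0, 64.0],
    "left_eye_y": [39.0, 38.0, np.nan, 37.0],
    "nose_x":     [44.0, 45.0, 46.0, 47.0],
})

# Strategy 1: fill every null with the mean of its column.
df_mean = df.fillna(df.mean())

# Strategy 2: drop every row that contains any null.
df_drop_rows = df.dropna()

# Strategy 3: drop columns whose null percentage exceeds 1%,
# then mean-fill whatever nulls remain in the kept columns.
null_pct = df.isna().mean() * 100
df_mixed = df.loc[:, null_pct <= 1].fillna(df.mean())
```

On the real training set, strategy 3 is the most aggressive, since most keypoint columns there have far more than 1% missing values.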
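Hyperas wraps hyperopt around Keras models; the underlying idea is simply to pick the parameter combination with the lowest validation loss. That idea can be illustrated without either library. The search space and the scoring function below are made up for illustration, they are not the project's real values:

```python
import itertools

# Hypothetical search space, in the spirit of what Hyperas samples over.
space = {
    "dropout": [0.1, 0.3, 0.5],
    "batch_size": [32, 64, 128],
}

def validation_loss(params):
    """Stand-in for training a model and returning its validation loss.

    A fabricated, deterministic score just to make the search runnable;
    in the real project this would be a full Keras training run.
    """
    return abs(params["dropout"] - 0.3) + abs(params["batch_size"] - 64) / 100

# Exhaustively score every combination and keep the best one.
best = min(
    (dict(zip(space, combo)) for combo in itertools.product(*space.values())),
    key=validation_loss,
)
# best == {"dropout": 0.3, "batch_size": 64}
```

Hyperas automates this loop with smarter-than-exhaustive sampling (TPE from hyperopt), which matters once the search space is too large to enumerate.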
Results of training and validating the first model:

How the first model performs at predicting the (x, y) locations of eyes, nose and mouth:

Results of training and validating the second model:

How the second model performs at predicting the (x, y) locations of eyes, nose and mouth:

Results of training and validating the third model:

How the third model performs at predicting the (x, y) locations of eyes, nose and mouth:
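The competition scores submissions by root mean squared error over the predicted (x, y) coordinates. A minimal sketch of that metric with NumPy (the keypoint arrays here are toy values):

```python
import numpy as np

def rmse(y_true, y_pred):
    """Root mean squared error over all keypoint coordinates."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

# Toy example: two keypoints, each predicted 3 pixels off along one axis.
true = [[30.0, 40.0], [50.0, 60.0]]
pred = [[33.0, 40.0], [50.0, 63.0]]
print(rmse(true, pred))  # sqrt((9 + 0 + 0 + 9) / 4) ≈ 2.121
```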
Propositions to improve the predictions:
- improving the model architecture
- choosing another approach to data preprocessing (e.g. data augmentation for the second dataset, or a different way of filling null values than the column mean).
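The data augmentation mentioned above has a subtlety for this task: flipping an image horizontally also has to mirror the x coordinates of the keypoints. A minimal NumPy sketch (96×96 matches the competition's image size, but the pixel and keypoint values are toy numbers):

```python
import numpy as np

IMG_SIZE = 96  # images in the competition are 96x96 grayscale

def flip_horizontal(image, keypoints):
    """Flip the image left-right and mirror the keypoints' x coordinates.

    keypoints is an (N, 2) array of (x, y) pixel coordinates. A full
    implementation would also swap left/right keypoint labels
    (e.g. left_eye <-> right_eye); that step is omitted here for brevity.
    """
    flipped = image[:, ::-1]
    kp = np.asarray(keypoints, dtype=float).copy()
    kp[:, 0] = (IMG_SIZE - 1) - kp[:, 0]
    return flipped, kp

# A single bright pixel at (x=5, y=10) with a keypoint on top of it.
image = np.zeros((IMG_SIZE, IMG_SIZE))
image[10, 5] = 1.0
flipped, kp = flip_horizontal(image, [[5.0, 10.0]])
# After flipping, both the pixel and its keypoint sit at x = 90.
```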