---
title: "Week 16 - Improving and Testing"
image: ../assets/images/logo.png
categories:
- Weekly Log
tags:
- GitHub Pages
- ROS2
- Gazebo
- PID Controllers
- Autonomous Driving
- Neural Networks
- Machine Learning
---

# Progress This Week

## Objective

The objective was a well-functioning neural pilot, and this week it was finally achieved.

## Trying an External Pilot

After adding normalization and affine transformations to the data preprocessing, I was getting good training and offline evaluation graphs but still no working pilot, so I needed something else to find where the problem was.

To do this, I asked a student further along in the lab, [Alejandro Moncalvillo](https://github.com/RoboticsLabURJC/2022-tfg-alejandro-moncalvillo/tree/main), for a pilot and some datasets. With his model, one I knew had to work, I was able to track down several problems in the Neural Pilot, the biggest one concerning color. After fixing this and training again, I got a model that worked, but only most of the time: roughly one out of every three laps (the first one included), it would crash in the first curve.
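
A common culprit behind this kind of color problem is a BGR/RGB channel-order mismatch between the dataset images and the frames the pilot receives at inference time. The snippet below is only a hypothetical illustration of keeping both sides consistent with OpenCV, not the exact fix applied here:

```python
import cv2
import numpy as np

def to_model_input(frame_bgr: np.ndarray) -> np.ndarray:
    """Hypothetical helper: convert an OpenCV BGR frame to the RGB layout
    the network is trained on. Calling it both in the dataset loader and
    in the Neural Pilot keeps the channel order from silently diverging."""
    return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)
```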

Now that I knew what I had to work on, I moved on to the next step of this external approach.

## Training with External Data

I decided to start training with the data the external pilot had been trained on. This was initially challenging because we had different approaches to recording the datasets, so some adaptations were needed before training could start. Every step was traced, especially the image handling: the images had to remain consistent with what the Neural Pilot was going to receive.
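
The simplest way to guarantee that consistency is a single preprocessing function shared by the training data loader and the live pilot. A minimal sketch, where the crop and target resolution are placeholder values rather than the ones actually used:

```python
import cv2
import numpy as np

# Placeholder values: the real crop region and input resolution depend on
# the dataset and the network; these are only illustrative.
CROP_TOP = 120
TARGET_SIZE = (200, 66)  # (width, height)

def preprocess(image_rgb: np.ndarray) -> np.ndarray:
    """Single entry point used by both the data loader and the Neural
    Pilot, so every image goes through exactly the same pipeline."""
    cropped = image_rgb[CROP_TOP:, :, :]        # drop the sky/background
    resized = cv2.resize(cropped, TARGET_SIZE)  # match the network input
    return resized.astype(np.float32) / 255.0   # scale to [0, 1]
```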

After some failures, I tried training with data from quite a few datasets recorded on different circuits. This was actually easier than I expected thanks to a bit of Python, and I ended up training on a dataset containing 56,500 samples before any preprocessing, an amount that the flipping step then doubled.
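
As a rough illustration of that bit of Python: merging datasets is little more than list concatenation, and the flip doubles the data by mirroring each image and negating its turning command. The sample layout below is an assumption, not the project's actual format:

```python
import numpy as np

def flip_augment(samples):
    """Double the dataset with a horizontally mirrored copy of each image.
    Each sample is assumed to be (image, angular_command); mirroring the
    image reverses the turning direction, so the command's sign flips too."""
    flipped = [(np.fliplr(image).copy(), -angular) for image, angular in samples]
    return samples + flipped

# Merging datasets from several circuits is plain list concatenation:
#   all_samples = []
#   for circuit in ("circuit_a", "circuit_b", "circuit_c"):  # example names
#       all_samples += load_circuit(circuit)                 # hypothetical loader
#   dataset = flip_augment(all_samples)
```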

The result can be seen here: [https://youtu.be/dum_p3uuFuk](https://youtu.be/dum_p3uuFuk)

After that, I tried adding the affine augmentation, which resulted in a pilot that failed quite a lot: [https://youtu.be/UvbVKGQogVk](https://youtu.be/UvbVKGQogVk)
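
For reference, the affine augmentation amounts to warping each image with a small random transform so the network sees slightly shifted viewpoints. One simple way it could look with OpenCV, with an illustrative shift range rather than the real parameters:

```python
import cv2
import numpy as np

def random_shift(image: np.ndarray, max_shift: int = 20) -> np.ndarray:
    """One simple affine transform: a random horizontal/vertical shift.
    The shift range is illustrative; label handling (if the command needs
    adjusting for the shift) is left out of this sketch."""
    height, width = image.shape[:2]
    dx = np.random.uniform(-max_shift, max_shift)
    dy = np.random.uniform(-max_shift, max_shift)
    matrix = np.float32([[1, 0, dx], [0, 1, dy]])
    return cv2.warpAffine(image, matrix, (width, height))
```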

However, the affine step made the pipeline more complex and doubled the data; maybe I could find a good model faster using normalization, which, as seen before, had given great results in speeding up training.

The result was this: [https://youtu.be/a7uMsLUp8L0](https://youtu.be/a7uMsLUp8L0)
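
For context, the normalization referred to here is the usual standardization of the input images; the statistics below are placeholders, since the real values would be computed from the training set:

```python
import numpy as np

# Placeholder statistics; in practice they would be computed per channel
# over the training images after scaling to [0, 1].
MEAN = np.array([0.5, 0.5, 0.5], dtype=np.float32)
STD = np.array([0.25, 0.25, 0.25], dtype=np.float32)

def normalize(image: np.ndarray) -> np.ndarray:
    """Scale to [0, 1] and standardize each channel, so the network sees
    inputs with roughly zero mean and unit variance."""
    scaled = image.astype(np.float32) / 255.0
    return (scaled - MEAN) / STD
```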

Every training run used 100 epochs.

## For the Future

Given the differences between the datasets, I did not perform offline testing this time, and I left out the graphs because they were basically the same as in previous posts.

However, next time I will be using my own datasets and documenting everything properly.

I also attempted to do the preprocessing during training instead of during data loading, but it proved very difficult, since most of the preprocessing steps do not modify an image in place but generate an additional image with different characteristics.
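
A common way to move that work into training is to generate the extra samples on the fly inside the dataset's indexing logic, exposing two virtual samples per stored image. The sketch below assumes a PyTorch-style Dataset, which may not match the framework used here:

```python
import numpy as np
import torch
from torch.utils.data import Dataset

class FlipOnTheFlyDataset(Dataset):
    """Sketch: each stored (image, angular_command) pair is exposed twice,
    the second time mirrored with its command negated, so the flip
    augmentation happens during training instead of at load time."""

    def __init__(self, images, angulars):
        self.images = images      # list of HxWxC float32 arrays
        self.angulars = angulars  # list of float commands

    def __len__(self):
        return 2 * len(self.images)

    def __getitem__(self, index):
        base, flipped = divmod(index, 2)
        image = self.images[base]
        angular = self.angulars[base]
        if flipped:
            image = np.ascontiguousarray(np.fliplr(image))
            angular = -angular
        image_tensor = torch.from_numpy(image).permute(2, 0, 1)  # CHW layout
        return image_tensor, torch.tensor(angular, dtype=torch.float32)
```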

---