Several changes for smoother results and more control over the NN. #116
Open · iciclesoft wants to merge 2 commits into CodingTrain:master from iciclesoft:master
Isn't this just equivalent to the AI learning negative weights/biases from inputs[0] and inputs[1], though?
Because all input nodes are linked directly to all hidden nodes, I would say no in this case. If the hidden layer were a multi-dimensional array, I would say yes. The main reason for this change is that it felt a bit too 'cheaty' to me, and the vehicles also showed some strange behavior because the input would always be 0 until they got within this border. To be clear, though, I am no expert in neuroevolution; in fact, I learned most of what I know about this topic from watching The Coding Train :)
Actually, I would say no in both cases. I do, however, agree that duplicating inputs 0 and 1 directly into 2 and 3 would have the exact same effect, but what it does do is help solve the XOR problem for the border.
But duplicating them exactly would have no net effect. Simplified, a NN with two equivalent inputs and one middle layer node would have two weights w_11, w_21 for example, and two biases b_1, b_2, right? If so, then couldn't you build a 1-input NN with the same behaviour by just summing the weights and biases together?
I also have no idea about these things either, which is why I asked if it was actually different!
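The equivalence is easy to check directly: for a single hidden node, two connections carrying the same value x contribute w1*x + w2*x = (w1 + w2)*x, so the duplicated input adds nothing a single summed weight can't do (the bias belongs to the hidden node and is unchanged). A minimal sketch, with made-up weights (w1, w2, b are illustrative, not taken from the actual code):

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical weights and bias for one hidden node.
w1, w2, b = 0.7, -0.3, 0.1

def hidden_two_inputs(x):
    # inputs[0] and inputs[1] carry the same value x on two connections
    return sigmoid(w1 * x + w2 * x + b)

def hidden_one_input(x):
    # one connection whose weight is the sum; the per-node bias stays as-is
    return sigmoid((w1 + w2) * x + b)
```

The two functions agree exactly for every x, which is the "no net effect" point being made here.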
Alright, nice. So my reasoning was that with 1 input, the NN must use a single value to represent 3 outcomes, namely 0 and 1 for 'close to border' and 0.5 for 'not close to border'. By using 2 inputs, it can use one value to represent 2 outcomes and the other value to represent the other 2. This, however, is wrong, as can be seen here: http://www.iciclesoft.com/preview/nn-test
The code for these test cases can be found at https://github.com/iciclesoft/NN-Test
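The linked demo isn't reproduced here, but the representational point it touches on — that a single input plus a hidden layer can encode 'close to either border' despite the non-monotonic mapping — can be sketched with hand-picked weights. All weights and thresholds below are illustrative, not learned and not taken from NN-Test:

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def near_border(x):
    # x in [0, 1]; two hidden units each detect one border,
    # and the output node acts roughly as an OR of the two.
    h_left = sigmoid(20.0 * (0.1 - x))    # fires when x is near the 0.0 border
    h_right = sigmoid(20.0 * (x - 0.9))   # fires when x is near the 1.0 border
    return sigmoid(10.0 * (h_left + h_right) - 5.0)
```

Both x = 0 and x = 1 map to an output near 1 while x = 0.5 maps near 0, so one input suffices in principle; the open question in this thread is only how fast such a solution is learned.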
The remaining question, however, is whether we want to go back to just the x and y positions (like it was before) or go with the border stroke.
My vote would be for the x and y positions, which they seem to pick up quite nicely after running for roughly one minute at 100x speed.
Edit: Did some more testing today; sometimes the borders are picked up very fast, other times they seem to be ignored for quite some time.
@meiamsome So I've done some more tests. The previous tests were mainly about the ability of a certain neural network to find both the north and south borders on a plane. I've changed these tests to also record the average number of training cycles needed to make a distinction between the two borders and the 'middle ground'. Here we can actually see a difference between having one or two inputs, and even a difference in the way the inputs are given.
Mostly, the results show that having two inputs, where the second input is the invert of the first, needs the fewest update cycles to complete. This is usually slightly 'better' than having two identical inputs (which is quite strange). Having just one input is usually about 30-40% slower. This is the case where the neural networks have 32 hidden nodes. To make it even stranger, when the NN has only 4 hidden nodes, the average updates needed are a lot closer to each other.
It gets even stranger: before, I had the 'allowed error rate', which is used to determine whether a test is successfully completed, at 1% instead of 2%. I would have said this wouldn't affect the results much, since it's the same for each test, but in those tests having two identical inputs usually required the fewest updates (instead of two inputs where the second is the invert of the first).
All in all, it seems that having multiple inputs, whether they are inverted or not, does help the neural network learn about the borders more quickly.
By the way, I've updated both http://www.iciclesoft.com/preview/nn-test and https://github.com/iciclesoft/NN-Test if you're interested in the tests.
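A rough sketch of the kind of benchmark described above: train a tiny one-hidden-layer network on a 1-D 'near a border' task with plain per-sample gradient descent, stop when the mean absolute error drops below an allowed rate, and count the update cycles. The network sizes, learning rate, thresholds, and sampling here are illustrative guesses, not the actual NN-Test settings, and the loop may hit its cap without converging:

```python
import math
import random

def sigmoid(z):
    z = max(-60.0, min(60.0, z))  # clamp to avoid math.exp overflow
    return 1.0 / (1.0 + math.exp(-z))

def train(num_inputs, hidden=4, lr=1.0, allowed_error=0.05, max_updates=1000):
    """Return the update cycle at which the mean absolute error first drops
    below allowed_error, or max_updates if the cap is hit first."""
    rng = random.Random(42)
    w = [[rng.uniform(-1, 1) for _ in range(num_inputs)] for _ in range(hidden)]
    b = [rng.uniform(-1, 1) for _ in range(hidden)]
    v = [rng.uniform(-1, 1) for _ in range(hidden)]
    c = rng.uniform(-1, 1)

    def features(x):
        # with 2 inputs, the second is the invert of the first
        return [x] if num_inputs == 1 else [x, 1.0 - x]

    def target(x):
        return 1.0 if x < 0.1 or x > 0.9 else 0.0  # near either border

    samples = [i / 20.0 for i in range(21)]
    for update in range(1, max_updates + 1):
        err = 0.0
        for x in samples:
            ins = features(x)
            h = [sigmoid(sum(w[j][i] * ins[i] for i in range(num_inputs)) + b[j])
                 for j in range(hidden)]
            o = sigmoid(sum(v[j] * h[j] for j in range(hidden)) + c)
            t = target(x)
            err += abs(t - o)
            d_o = (o - t) * o * (1.0 - o)  # squared-error output delta
            for j in range(hidden):
                d_h = d_o * v[j] * h[j] * (1.0 - h[j])  # uses v[j] before update
                v[j] -= lr * d_o * h[j]
                for i in range(num_inputs):
                    w[j][i] -= lr * d_h * ins[i]
                b[j] -= lr * d_h
            c -= lr * d_o
        if err / len(samples) < allowed_error:
            return update
    return max_updates
```

As the thread notes, results vary from run to run, so any real comparison of train(1) against train(2) would need many random seeds and care with the allowed error threshold.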