Add a testcase checkmark list of the vulnerable and non-vulnerable instances for DAST Tools #12
As a master's student in cybersecurity, I have spent the last few months writing my thesis on "Evaluation of Dynamic Analysis Tools in detecting OWASP Top 10 vulnerabilities", a very demanding process. Along the way I ended up using the WebGoat application as part of my workload, i.e. the test case set, to evaluate the subjects, namely DAST tools, since they need an environment in which to exercise their capabilities.
Not having a checkmark list of the vulnerable and non-vulnerable instances of BWApp made it very challenging to evaluate it manually and gather such a list. So my intention with this pull request is to help future tool testers who may use WebGoat as a testing environment by providing an Excel file that can be used to mark whether a given tool detected a given instance or not.
The first 2 columns correspond, respectively, to the endpoint/URL (whichever you prefer) where the vulnerable or non-vulnerable instance is located, and to the parameter involved, if applicable. The Vulnerable column will be TRUE (if the instance is really vulnerable) or FALSE (otherwise). The Vulnerability column indicates which vulnerability the row refers to. The remaining columns are where you finally check whether a tool x, y or z detected the vulnerability.
Each number (0 or 1) corresponds to a parameter listed in the Parameters column; when there are no parameters, a single number is used. A 0 means the tool did not raise an alert there and a 1 means it did. I know it's not the most intuitive scheme, but it will be improved in the future.
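Just to illustrate the idea, a couple of made-up rows could look like the sketch below (the endpoints, parameters and tool names here are only examples, not taken from the actual file):

```
Endpoint/URL                        Parameters   Vulnerable  Vulnerability   Tool X  Tool Y
/WebGoat/SqlInjection/example       account      TRUE        SQL Injection   1       0
/WebGoat/CrossSiteScripting/other   QTY1 QTY2    FALSE       XSS             0 0     1 0
```

In the second row, Tool Y's "1 0" would mean it raised an alert on QTY1 (a false positive, since the row is marked FALSE) but not on QTY2.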
I really hope this reaches as many tool testers as possible and helps them in some way! Any suggestions, doubts or comments, I'm all ears! :P
In the following link you can find a repository with a small set of scripts I created to help with the particular task of counting the TPs, FNs, FPs and TNs achieved by the tools.
https://github.com/joaosilva21/Thesis-Scripts-Assess-of-DAST-Tools-against-OWASP-Top-10-vulns
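For anyone curious about what the counting involves, here is a rough, simplified sketch in Python of that kind of logic. This is not the actual code from the repository; it assumes the Excel sheet has been exported to CSV and uses the illustrative column and tool names from the example above, so adjust names and separators to match the real file.

```python
import csv

def count_outcomes(csv_path, tool_column):
    """Count TP/FN/FP/TN for one tool from a CSV export of the checklist.

    Assumes (hypothetically) a 'Vulnerable' column holding TRUE/FALSE and
    one column per tool, where each tool cell holds one 0/1 flag per
    parameter, separated by spaces (a single flag if there are no parameters).
    """
    counts = {"TP": 0, "FN": 0, "FP": 0, "TN": 0}
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            vulnerable = row["Vulnerable"].strip().upper() == "TRUE"
            # one flag per parameter of this instance
            for flag in row[tool_column].split():
                alerted = flag == "1"
                if vulnerable and alerted:
                    counts["TP"] += 1
                elif vulnerable and not alerted:
                    counts["FN"] += 1
                elif not vulnerable and alerted:
                    counts["FP"] += 1
                else:
                    counts["TN"] += 1
    return counts

if __name__ == "__main__":
    # e.g. print the confusion-matrix counts for a hypothetical "Tool X" column
    print(count_outcomes("webgoat_checklist.csv", "Tool X"))
```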