Python: Add Gradio models #16135
Are all these vulnerable? I quickly checked a few of them and it looks like:

- HighlightedText only allows the user to specify substrings of a given text
- ColorPicker only passes on hex strings
- AnnotatedImage does not accept user input at all

We should probably only include actually vulnerable inputs here, in order to avoid the query getting noisy.
It’s the event listeners that take untrusted user input, so, for example, gradio.AnnotatedImage.select(fn, inputs, outputs) (see docs), and that's what we model here.

To clarify: with the above models, we look for any event listeners of the listed classes. For example, gr.Button has only one, click() (see this example, which asks for a user's name and displays it). Side note: if an event listener does not exist for a given class, CodeQL doesn't (can't) match it (for example, there is no gr.Button.select()). I found this way of modeling to be the most succinct, and it doesn't slow performance, but I'm open to feedback around it.
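For context, here is a minimal sketch of the pattern being modeled: an event listener whose inputs carry remote user input into a handler. The app wiring is hypothetical (it assumes a recent Gradio Blocks API and is commented out so the handler stays self-contained); only the handler itself is concrete.

```python
# Hypothetical minimal Gradio app (sketch, not from the PR under review).
# The gr.Button.click() event listener would pass the textbox value -- i.e.
# untrusted remote user input -- as the `name` argument of `greet`.
def greet(name):
    # `name` is attacker-controlled; anything done with it downstream
    # (shell commands, file paths, rendered HTML) must be treated as tainted.
    return f"Hello, {name}!"

# import gradio as gr
# with gr.Blocks() as demo:
#     name_box = gr.Textbox(label="Name")
#     out = gr.Textbox(label="Greeting")
#     gr.Button("Greet").click(fn=greet, inputs=name_box, outputs=out)
# demo.launch()
```

In this shape, the models would mark the `inputs` of the listed classes' event listeners (here, `click()`) as taint sources.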
I think it is fine to model situations that will never match; I was more worried about false positives. But on reflection, I think this is fine to start; we can refine it if we actually encounter any problems.
I've already found a few vulnerabilities using these models, including in stable-diffusion-webui (120k stars):

https://securitylab.github.com/advisories/GHSL-2024-010_stable-diffusion-webui/
https://securitylab.github.com/advisories/GHSL-2024-019_GHSL-2024-024_kohya_ss/

To be fair, I tested the AnnotatedImage.select() event listener (and several other event listeners on the list), and it does take user input, but it's not very popular to use, so I haven't actually seen vulnerabilities around it. Most of the vulnerabilities happen with Button.click() (including the 7 vulns I linked to above). I think there shouldn't be many false positives here, but I'll be happy to help if anything needs refining.
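As a hedged illustration of the flow class these models surface (hypothetical code, not taken from the linked advisories): an event-listener argument reaching a shell command line, and one way to neutralize it.

```python
import shlex

# Hypothetical handler (sketch): imagine `filename` arriving from a Gradio
# event listener such as gr.Button.click(fn=..., inputs=filename_box, ...),
# making it remote user input.

def build_command_unsafe(filename):
    # BAD: with shell execution, an attacker can inject e.g. "x; rm -rf ~"
    # through the filename. This is the sink the models help CodeQL reach.
    return f"convert {filename} out.png"

def build_command_safe(filename):
    # Safer: quote the untrusted value (or better, avoid the shell entirely
    # and pass an argument list to subprocess.run).
    return f"convert {shlex.quote(filename)} out.png"
```

With Button.click() inputs modeled as taint sources, a flow from `filename` into the unsafe variant is exactly what the command-injection query would flag.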