Addition: GITA-7B/13B & GVLQA Dataset (Accepted by NeurIPS 2024) #191
Dear Authors,
We'd like to add "GITA: Graph to Visual and Textual Integration for Vision-Language Graph Reasoning", accepted by NeurIPS 2024, to this repository. Paper.
GITA is the first work to explore and establish vision-language question answering for graph-related reasoning. It systematically enables VLMs to perform general language-based graph reasoning tasks.
In this paper, we provide new pre-trained VLM weights for graph reasoning:
Model: GITA-7B/13B. The weights are available both in the Github repo and on Hugging Face (Model weight huggingface).
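For reference, loading the released weights should look roughly like the sketch below. This is a minimal sketch under assumptions, not the official usage: the repo ID is a placeholder and the checkpoint is assumed to follow a standard Hugging Face layout, so please follow the instructions in the linked repo for the real procedure.

```python
# Minimal sketch of loading the GITA-7B weights with Hugging Face transformers.
# "GITA-7B" is a placeholder repo ID -- substitute the actual ID from the
# Hugging Face link above; a standard checkpoint layout is assumed here.
import torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "GITA-7B"  # placeholder; replace with the real Hugging Face repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16)
```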
We also propose GVLQA, the first dataset for vision-language graph reasoning; it consists of image-text-query-answer VQA pairs for graph reasoning tasks. GVLQA Datasets.
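Loading a GVLQA split should be a one-liner with the `datasets` library, roughly as sketched below; the dataset ID is a placeholder and the record fields are assumptions, so inspect the actual schema on the dataset page linked above.

```python
# Minimal sketch of loading GVLQA with the Hugging Face `datasets` library.
# "GVLQA" is a placeholder dataset ID; check ds.features for the real schema.
from datasets import load_dataset

ds = load_dataset("GVLQA", split="train")  # placeholder ID
print(ds.features)  # inspect the actual image/query/answer fields
sample = ds[0]      # one image-text-query-answer pair for graph reasoning
```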
Best wishes for your research. Looking forward to your reply!
Comments
Update: Thanks for sharing! We've incorporated the work into our repo.
Thanks and have a good day.
Sure. We've added it to the benchmark section.