Is it possible to derive the ideal prompt for Grounding DINO when I have a few annotated sample images? The prompt could be either text or a feature embedding that is fed into the model as input. Ideally, using this prompt would let the model improve open-set recognition via few-shot learning, similar to how the Recognize Anything Plus model uses LLM-generated text descriptions to improve open-set recognition.
For example, the model performs poorly on a class like the Czech hedgehog. I would like to understand what prompt to use so that the associated object gets detected across a dataset.
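One simple baseline worth trying before any embedding-level tuning: treat prompt selection as a search problem, scoring each candidate text prompt against your annotated boxes and keeping the one with the highest mean IoU. The sketch below is hypothetical and model-agnostic; `detect(image, prompt)` is a stand-in for whatever call runs Grounding DINO inference (it is not a real API), and the candidate prompts could come from an LLM, as in the Recognize Anything Plus approach mentioned above.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def score_prompt(detect, prompt, samples):
    """Mean best-IoU over all ground-truth boxes.
    `samples` is a list of (image, gt_boxes) pairs;
    `detect` is the hypothetical model-inference callable."""
    scores = []
    for image, gt_boxes in samples:
        pred_boxes = detect(image, prompt)  # placeholder for Grounding DINO
        for gt in gt_boxes:
            best = max((iou(gt, p) for p in pred_boxes), default=0.0)
            scores.append(best)
    return sum(scores) / len(scores) if scores else 0.0

def best_prompt(detect, candidates, samples):
    """Return the candidate prompt whose detections best match the annotations."""
    return max(candidates, key=lambda p: score_prompt(detect, p, samples))
```

For a class like "czech hedgehog" the candidates might include descriptive paraphrases ("steel anti-tank obstacle", "crossed metal beams"), since the model may respond better to a visual description than to the rare class name itself.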