Hi @345ishaan, yes, we're currently working on this so that you'll be able to compare model providers quantitatively on fine-grained metrics like accuracy (across document types, and even specific fields). What use case are you considering?
This section of the README talks about continuous evaluation and benchmarking: https://github.com/vlm-run/vlmrun-hub?tab=readme-ov-file#-qualitative-results
However, it is very difficult to go through and capture the results for each document. Would it be possible to have a table describing quantitative metrics, like accuracy, over different types of documents (e.g., resume, passport)?
Also, it would be nice to have a comparison against VLM-1 on the same dataset.