Hi team,

First of all, thank you for releasing this benchmark and the sample inference code; it's been incredibly helpful.
I'm currently trying to benchmark some methods on REPOCOD but am encountering difficulties reproducing the results. Specifically, I'm using vLLM for generation, and the outputs differ from those produced via direct inference with Hugging Face Transformers.
Here's a snippet of the inference code I'm using with vLLM:
Thank you for your interest in REPOCOD and for sharing your inference setup.
In our experiments, we used greedy decoding for inference, which might explain the differences you're observing.
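For reference, here is a minimal sketch of how both backends can be pinned to greedy decoding. The parameter names follow the public vLLM `SamplingParams` and Transformers `generate` APIs, but the helper function itself is hypothetical, not part of the REPOCOD code:

```python
def greedy_settings(backend: str, max_new_tokens: int = 512) -> dict:
    """Return generation kwargs that force greedy decoding on each backend.

    `backend` is "hf" (Hugging Face `model.generate`) or "vllm"
    (vLLM `SamplingParams`). This helper is illustrative only.
    """
    if backend == "hf":
        # do_sample=False with a single beam is greedy decoding in Transformers.
        return {"do_sample": False, "num_beams": 1, "max_new_tokens": max_new_tokens}
    if backend == "vllm":
        # temperature=0.0 makes vLLM take the argmax token; top_p=1.0 and
        # top_k=-1 disable nucleus and top-k filtering.
        return {"temperature": 0.0, "top_p": 1.0, "top_k": -1, "max_tokens": max_new_tokens}
    raise ValueError(f"unknown backend: {backend!r}")
```

The vLLM dict would be passed as `SamplingParams(**greedy_settings("vllm"))` and the HF dict into `model.generate(...)`. With both backends greedy, any remaining divergence is more likely to come from prompt construction, tokenization, or dtype differences than from sampling.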
As for the inference code, we plan to release it, along with the setup for our three retrieval methods, in an upcoming update to the repository. Thanks.