Description
Hello!
I encountered an issue while evaluating the BPR (Bayesian Personalized Ranking) model, using essentially the same code as the provided example but on a different dataset. Specifically, when using the "by_threshold" relevancy method with the ranking metrics, the computed values for precision@k, ndcg@k, and map@k exceed 1, which seems incorrect. The issue does not occur when switching the relevancy method to "top_k".
How do we replicate the issue?
I use the following parameters for BPR (all with the default seed):
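(My exact parameter snippet did not survive here; the sketch below uses the settings from the repository's Cornac BPR example notebook, so treat the specific numbers as placeholders rather than my actual values.)

```python
import cornac

# Sketch of the BPR setup, mirroring the repo's Cornac BPR example notebook.
# The factor count, epoch count, and rates are the notebook's values, not
# necessarily the exact ones I used; seed=42 is the example's default seed.
bpr = cornac.models.BPR(
    k=200,               # number of latent factors
    max_iter=100,        # training epochs
    learning_rate=0.01,
    lambda_reg=0.001,
    verbose=True,
    seed=42,
)
```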
Using these evaluation calls:
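(Likewise reconstructed rather than copied: `test` and `all_predictions` stand in for the test split and the model's scored user-item frame from the example notebook, and `threshold=50` is my best guess, inferred from the divide-by-50 behaviour noted under the results.)

```python
from recommenders.evaluation.python_evaluation import (
    map_at_k,
    ndcg_at_k,
    precision_at_k,
)

# `test` / `all_predictions` come from the train/test split and model scoring
# steps of the example notebook; threshold=50 is an assumption, not a value
# copied from my original snippet.
kwargs = dict(col_prediction="prediction", relevancy_method="by_threshold", threshold=50)
eval_map = map_at_k(test, all_predictions, **kwargs)
eval_ndcg = ndcg_at_k(test, all_predictions, **kwargs)
eval_precision = precision_at_k(test, all_predictions, **kwargs)

print(f"MAP:\t{eval_map:f}\nNDCG:\t{eval_ndcg:f}\nPrecision@K:\t{eval_precision:f}")
```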
Here is the dataset I test on: https://github.com/mnhqut/rec_sys-dataset/blob/main/data.csv
My results:
MAP: 1.417529
NDCG: 1.359902
Precision@K: 2.256466

Digging into the computation, precision@k divides the hit count by 10 (the default k value) instead of by 50 (the value specified for by_threshold), and the other ranking metrics have the same problem. Perhaps this is what by_threshold is intended to do: a way of changing how many items are recommended while still normalizing by k? I don't fully understand how by_threshold is supposed to work, so I can't tell whether this is a bug or intended behaviour.
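If it helps, here is a minimal, self-contained sketch of what I think is happening, assuming the current behaviour of recommenders.evaluation.python_evaluation: with "by_threshold", the top `threshold` recommendations per user are treated as the recommendation list, but the final precision still divides by `k`, so the metric can exceed 1.

```python
import pandas as pd
from recommenders.evaluation.python_evaluation import precision_at_k

# Toy data: one user with 20 relevant test items, and predictions
# covering all 20 of those items.
true = pd.DataFrame(
    {"userID": [1] * 20, "itemID": list(range(20)), "rating": [5.0] * 20}
)
pred = pd.DataFrame(
    {
        "userID": [1] * 20,
        "itemID": list(range(20)),
        "prediction": [20.0 - i for i in range(20)],
    }
)

# With relevancy_method="by_threshold", the top `threshold` (=20) predictions
# form the recommendation list, so all 20 are hits; the hit count is then
# divided by k (=10), printing 2.0 instead of capping at 1.0.
print(precision_at_k(true, pred, relevancy_method="by_threshold", threshold=20, k=10))
```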
Willingness to contribute