# WC2014QA

WC2014QA [1] is a dataset created for question answering over a knowledge graph about the 2014 Football World Cup. A more detailed description of the dataset creation process can be found in the original paper via this link.

The dataset contains two types of questions: path queries and conjunctive queries. A path query contains a single named entity from the knowledge base, and its answer is found by walking along a path of one or more relations in the knowledge graph. A conjunctive query contains more than one entity, and its answer is the conjunction (intersection) of the answers to the path queries starting from each entity. Path queries are further divided into 1-hop and 2-hop questions.
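
For intuition, below is a minimal sketch of how such queries can be answered over a toy set of triples. The entity and relation names are illustrative assumptions and are not taken from the actual WC2014QA knowledge graph.

```python
# Minimal sketch of resolving path and conjunctive queries over a toy
# knowledge graph. The triples and relation names below are illustrative
# only and are NOT taken from the WC2014QA data.

# Knowledge graph as (head, relation, tail) triples.
triples = [
    ("Lionel_Messi", "plays_for", "Argentina"),
    ("Argentina", "is_in_group", "Group_F"),
    ("Thomas_Mueller", "plays_for", "Germany"),
    ("Germany", "is_in_group", "Group_G"),
]

def follow(entities, relation):
    """Return all tail entities reachable from `entities` via `relation`."""
    return {t for (h, r, t) in triples if h in entities and r == relation}

# 1-hop path query: "Which team does Lionel Messi play for?"
print(follow({"Lionel_Messi"}, "plays_for"))                         # {'Argentina'}

# 2-hop path query: "Which group is Lionel Messi's team in?"
print(follow(follow({"Lionel_Messi"}, "plays_for"), "is_in_group"))  # {'Group_F'}

# Conjunctive query: intersect the answers of the path queries starting
# from each entity, e.g. "Which group contains both Messi's and Mueller's teams?"
answer = (follow(follow({"Lionel_Messi"}, "plays_for"), "is_in_group")
          & follow(follow({"Thomas_Mueller"}, "plays_for"), "is_in_group"))
print(answer)  # set(), since the two teams are in different groups
```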

The dataset can be downloaded via this link.

## Path Query (1-hop)

### Leaderboard

| Model / System | Year | Precision | Recall | F1 | Accuracy | Language | Reported by |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Uhop-HR | 2022 | - | - | 98.1 (Hits@1) | - | EN | AlAgha, 2022 |
| TransferNet | 2022 | - | - | 97.9 (Hits@1) | - | EN | AlAgha, 2022 |
| AlAgha, 2022 | 2022 | - | - | 97.4 (Hits@1) | - | EN | AlAgha, 2022 |
| ISM | 2022 | - | - | 96.3 (Hits@1) | - | EN | AlAgha, 2022 |
| RL-MHR | 2022 | - | - | 94.8 (Hits@1) | - | EN | AlAgha, 2022 |
| HR-BiLSTM | 2022 | - | - | 86.5 (Hits@1) | - | EN | AlAgha, 2022 |
| IRN | 2022 | - | - | 71.2 (Hits@1) | - | EN | AlAgha, 2022 |
| MACRE-hard infusion | 2023 | - | - | - | 99.9 | EN | Xu et al. |
| MACRE-soft infusion | 2023 | - | - | - | 99.9 | EN | Xu et al. |
| SRN | 2023 | - | - | - | 98.9 | EN | Xu et al. |
| KVMemN2N | 2023 | - | - | - | 87.0 | EN | Xu et al. |
| IRN | 2023 | - | - | - | 84.3 | EN | Xu et al. |
| Seq2Seq | 2023 | - | - | - | 53.7 | EN | Xu et al. |
| Subgraph Embed | 2023 | - | - | - | 44.8 | EN | Xu et al. |

## Path Query (2-hop)

### Leaderboard

| Model / System | Year | Precision | Recall | F1 | Accuracy | Language | Reported by |
| --- | --- | --- | --- | --- | --- | --- | --- |
| AlAgha, 2022 | 2022 | - | - | 98.6 (Hits@1) | - | EN | AlAgha, 2022 |
| ISM | 2022 | - | - | 98.0 (Hits@1) | - | EN | AlAgha, 2022 |
| Uhop-HR | 2022 | - | - | 96.6 (Hits@1) | - | EN | AlAgha, 2022 |
| TransferNet | 2022 | - | - | 96.5 (Hits@1) | - | EN | AlAgha, 2022 |
| RL-MHR | 2022 | - | - | 93.6 (Hits@1) | - | EN | AlAgha, 2022 |
| HR-BiLSTM | 2022 | - | - | 83.2 (Hits@1) | - | EN | AlAgha, 2022 |
| IRN | 2022 | - | - | 66.2 (Hits@1) | - | EN | AlAgha, 2022 |
| MACRE-hard infusion | 2023 | - | - | - | 99.8 | EN | Xu et al. |
| MACRE-soft infusion | 2023 | - | - | - | 93.0 | EN | Xu et al. |
| IRN | 2023 | - | - | - | 98.1 | EN | Xu et al. |
| SRN | 2023 | - | - | - | 97.8 | EN | Xu et al. |
| KVMemN2N | 2023 | - | - | - | 92.8 | EN | Xu et al. |
| Seq2Seq | 2023 | - | - | - | 54.8 | EN | Xu et al. |
| Subgraph Embed | 2023 | - | - | - | 50.7 | EN | Xu et al. |

## Path Query (Total)

### Leaderboard

| Model / System | Year | Precision | Recall | F1 | Language | Reported by |
| --- | --- | --- | --- | --- | --- | --- |
| ISM | 2022 | - | - | 97.3 (Hits@1) | EN | AlAgha, 2022 |
| TransferNet | 2022 | - | - | 96.8 (Hits@1) | EN | AlAgha, 2022 |
| AlAgha, 2022 | 2022 | - | - | 96.0 (Hits@1) | EN | AlAgha, 2022 |
| Uhop-HR | 2022 | - | - | 95.1 (Hits@1) | EN | AlAgha, 2022 |
| RL-MHR | 2022 | - | - | 92.1 (Hits@1) | EN | AlAgha, 2022 |
| HR-BiLSTM | 2022 | - | - | 72.3 (Hits@1) | EN | AlAgha, 2022 |
| IRN | 2022 | - | - | 64.7 (Hits@1) | EN | AlAgha, 2022 |

## References

[1] Zhang et al. “Gaussian Attention Model and Its Application to Knowledge Base Embedding and Question Answering.” ICLR (2017).

Go back to the README