JarvisQA
xixi019 committed May 16, 2024
1 parent ffffd9a commit 6f15a71
Showing 1 changed file with 2 additions and 1 deletion.
systems.md (2 additions, 1 deletion)
@@ -140,4 +140,5 @@
| MACRE | Xu et al. | [Link](https://link.springer.com/chapter/10.1007/978-3-031-30672-3_40) | no | - | [Link](https://link.springer.com/chapter/10.1007/978-3-031-30672-3_40) | MACRE is a novel approach for multi-hop question answering over KGs, powered by contrastive relation embedding and context-aware relation ranking. | Xu et al. |
| KGQA-CL/RR | Hu et al. | [Link](https://arxiv.org/pdf/2303.10368.pdf) | yes | [Link](https://github.com/HuuuNan/PLMs-in-Practical-KBQA) | [Link](https://arxiv.org/pdf/2303.10368.pdf) | KGQA-CL and KGQA-RR are two frameworks proposed to evaluate PLMs' performance against their efficiency. Both architectures are composed of mention detection, entity disambiguation, relation detection and answer query building. The difference lies in the relation detection module: KGQA-CL aims to map question intent to KG relations, while KGQA-RR ranks the related relations to retrieve the answer entity. Both frameworks are tested on common PLMs, distilled PLMs and knowledge-enhanced PLMs, and achieve high performance on three benchmarks. | Hu et al. |
| W. Han et al. | Han et al. | [Link](https://link.springer.com/chapter/10.1007/978-3-031-30672-3_39) | no | - | [Link](https://link.springer.com/chapter/10.1007/978-3-031-30672-3_39) | This model is based on machine reading comprehension. To transform a subgraph of the KG centered on the topic entity into text, the subgraph is sketched through a carefully designed schema tree, which facilitates the retrieval of multiple semantically-equivalent answer entities. Then, the promising paragraphs containing answers are picked by a contrastive learning module. Finally, the answer entities are delivered based on the answer span that is detected by the machine reading comprehension module. | Han et al. |
| GAIN | Shu et al. | [Link](https://arxiv.org/pdf/2309.08345.pdf) | no | - | [Link](https://arxiv.org/pdf/2309.08345.pdf) | GAIN is not a KGQA system, but a data augmentation method named Graph seArch and questIon generatioN (GAIN). GAIN applies to KBQA methods based on logical forms or triples, and scales data volume and distribution through four steps: 1) Graph search: sampling logical forms or triples from arbitrary domains in the KB, without being restricted to any particular KBQA dataset. 2) Question generation: training a question generator on existing KBQA datasets, i.e., learning to convert logical forms or triples into natural language questions. 3) Verbalization: using the question generator from step 2 to verbalize the sampled logical forms or triples from step 1, thus creating synthetic questions. 4) Training data expansion: before fine-tuning any neural models on KBQA datasets, the GAIN-synthetic data can be used to train these models or to expand the corpus of in-context samples for LLMs. That is, as a data augmentation method, GAIN is not a KBQA model itself, but is used to augment a base KBQA model. | Shu et al. |
| JarvisQA | Jaradeh et al. | [Link](https://arxiv.org/pdf/2006.01527) | no | - | same as reporting paper | JarvisQA is a BERT-based system to answer questions on tabular views of scholarly knowledge graphs. | Jaradeh et al. |
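The four GAIN steps described in the table above can be sketched in a few lines. This is a minimal illustration, not the authors' implementation: all function and variable names are hypothetical, and a trivial string template stands in for the trained neural question generator of step 2.

```python
import random

def graph_search(kb_triples, n):
    """Step 1: sample triples from arbitrary KB domains,
    without being restricted to any particular KBQA dataset."""
    return random.sample(kb_triples, min(n, len(kb_triples)))

def train_question_generator(kbqa_dataset):
    """Step 2: learn to convert triples into natural language questions.
    A fixed template stands in for a trained seq2seq model here."""
    def generator(triple):
        subj, rel, obj = triple
        return f"What is the {rel.replace('_', ' ')} of {subj}?"
    return generator

def verbalize(generator, triples):
    """Step 3: verbalize the sampled triples into synthetic
    (question, answer) pairs."""
    return [(generator(t), t[2]) for t in triples]

def expand_training_data(original_pairs, synthetic_pairs):
    """Step 4: expand the training corpus with synthetic data
    before fine-tuning a base KBQA model."""
    return synthetic_pairs + original_pairs

# Toy knowledge base of (subject, relation, object) triples.
kb = [("Berlin", "capital_of", "Germany"),
      ("Ada_Lovelace", "field_of_work", "Mathematics")]
gen = train_question_generator(kbqa_dataset=[])
pairs = verbalize(gen, graph_search(kb, 2))
```

The point of the sketch is the separation of concerns: sampling (step 1) is independent of any dataset, while only step 2 touches existing KBQA data, which is why the synthetic questions can cover KB domains the original dataset never mentions.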
