diff --git a/README.md b/README.md
index d97c598..c82baaa 100644
--- a/README.md
+++ b/README.md
@@ -138,8 +138,10 @@ The BibTeX is below:
 }
 ```
 
-For the graph transformation/normalization work, please use the
-following:
+For the graph transformation/normalization work, please cite [Goodman, 2019].
+The BibTeX is below:
+
+[Goodman, 2019]: https://jaslli.org/files/proceedings/05_paclic33_postconf.pdf
 
 ``` bibtex
 @inproceedings{Goodman:2019,
@@ -148,7 +150,10 @@ following:
   booktitle = "Proceedings of the 33rd Pacific Asia Conference on Language, Information, and Computation",
   year = "2019",
   pages = "47--56",
-  address = "Hakodate"
+  address = "Hakodate",
+  publisher = "Japanese Association for the Study of Logic, Language and Information",
+  url = "https://jaslli.org/files/proceedings/05_paclic33_postconf.pdf",
+  abstract = "Abstract Meaning Representation (AMR; Banarescu et al., 2013) encodes the meaning of sentences as a directed graph and Smatch (Cai and Knight, 2013) is the primary metric for evaluating AMR graphs. Smatch, however, is unaware of some meaning-equivalent variations in graph structure allowed by the AMR Specification and gives different scores for AMRs exhibiting these variations. In this paper I propose four normalization methods for helping to ensure that conceptually equivalent AMRs are evaluated as equivalent. Equivalent AMRs with and without normalization can look quite different—comparing a gold corpus to itself with relation reification alone yields a difference of 25 Smatch points, suggesting that the outputs of two systems may not be directly comparable without normalization. The algorithms described in this paper are implemented on top of an existing open-source Python toolkit for AMR and will be released under the same license."
 }
 ```