KWWeights
The YodaQA type of anssel task datasets includes an additional feature for the input pairs: the weights of the keywords and about-keywords of s0 that are matched in s1.
These weights are fairly strong predictors on their own (curatedv2 devMRR 0.337348, large2470 devMRR 0.318246).
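A minimal sketch of how such a per-pair feature could be computed, assuming the keyword weights arrive as token-to-weight maps (the field names and weight values here are hypothetical placeholders, not the actual YodaQA export format):

```python
# Sketch only: sum the weights of s0 keywords / about-keywords that occur in s1.
def kw_match_features(s0_tokens, s1_tokens, kw_weights, about_kw_weights):
    """Return (kw_score, about_kw_score) for one (s0, s1) pair.

    kw_weights / about_kw_weights: dicts mapping s0 keyword tokens to weights
    (hypothetical representation of the YodaQA keyword weights).
    """
    s1_set = set(t.lower() for t in s1_tokens)
    kw_score = sum(w for t, w in kw_weights.items() if t.lower() in s1_set)
    about_kw_score = sum(w for t, w in about_kw_weights.items() if t.lower() in s1_set)
    return kw_score, about_kw_score


if __name__ == '__main__':
    s0 = ['who', 'invented', 'the', 'telephone']
    s1 = ['the', 'telephone', 'was', 'invented', 'by', 'Alexander', 'Graham', 'Bell']
    kw = {'invented': 1.2, 'telephone': 2.5}   # hypothetical keyword weights
    akw = {'inventor': 0.8}                    # hypothetical about-keyword weights
    print(kw_match_features(s0, s1, kw, akw))  # -> (3.7, 0)
```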
TODO: We could also augment this with (or even replace it by) BM25 weights. That could work for other datasets as well, and would be an alternative use for the prescoring logic.
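To make the BM25 idea concrete, here is a hedged sketch (not the project's prescoring code): weight each s0 token by BM25 computed against a corpus of candidate s1 sentences, and sum the weights of tokens that actually occur in s1. The k1/b values and the choice of corpus are assumptions, and the idf uses a common non-negative variant.

```python
import math
from collections import Counter

K1, B = 1.5, 0.75  # common BM25 defaults (assumption)

def bm25_pair_score(s0_tokens, s1_tokens, corpus):
    """corpus: list of token lists, e.g. all candidate s1 sentences."""
    n_docs = len(corpus)
    avgdl = (sum(len(d) for d in corpus) / n_docs) if n_docs else 1.0
    df = Counter()
    for d in corpus:
        df.update(set(d))           # document frequencies
    tf = Counter(s1_tokens)         # term frequencies within s1
    dl = len(s1_tokens)
    score = 0.0
    for t in set(s0_tokens):
        if tf[t] == 0:
            continue
        idf = math.log(1 + (n_docs - df[t] + 0.5) / (df[t] + 0.5))
        score += idf * tf[t] * (K1 + 1) / (tf[t] + K1 * (1 - B + B * dl / avgdl))
    return score
```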
Baselines (these measurements were done with the vocabcase setting; a sketch of how the reported means and 95% intervals are derived follows the list):
  * 8x R_ay_3rnn - 0.419903 (95% [0.399927, 0.439880])
  * 4x R_al_3rnn - 0.395602 (95% [0.383595, 0.407609])
  * 4x R_al_3a51 - 0.404151 (95% [0.382397, 0.425904])
  * 8x R_ay_3rnn_kw - 0.452198 (95% [0.436496, 0.467899]):
    10884109.arien.ics.muni.cz.R_ay_3rnn_kw etc.
    [0.467730, 0.466489, 0.458678, 0.480130, 0.427241, 0.423624, 0.452207, 0.441481]
  * 4x R_al_3rnn_kw - 0.411832 (95% [0.388420, 0.435244]):
    10884136.arien.ics.muni.cz.R_al_3rnn_kw etc.
    [0.400349, 0.424932, 0.427774, 0.394274]
  * 4x R_al_3a51_kw - 0.465138 (95% [0.461127, 0.469148]):
    10884138.arien.ics.muni.cz.R_al_3a51_kw etc.
    [0.465793, 0.468988, 0.462912, 0.462857]
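The reported numbers look like a mean over the per-run dev MRRs with a Student-t interval on the standard error. The exact formula is an assumption, but the sketch below reproduces the figures above:

```python
# Sketch: mean and 95% interval from per-run dev MRR values (assumed formula).
import numpy as np
import scipy.stats as ss

def mrr_confidence(runs, alpha=0.95):
    runs = np.asarray(runs)
    mean = runs.mean()
    sem = runs.std() / np.sqrt(len(runs))              # ddof=0 std, as the numbers suggest
    half = ss.t.ppf(0.5 + alpha / 2, len(runs) - 1) * sem
    return mean, (mean - half, mean + half)

# Example: the 8x R_ay_3rnn_kw runs listed above
print(mrr_confidence([0.467730, 0.466489, 0.458678, 0.480130,
                      0.427241, 0.423624, 0.452207, 0.441481]))
# -> roughly (0.452198, (0.436496, 0.467899))
```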
TODO: transfer learning check.