Hello, I have run into another issue: I cannot reproduce the paper's results. The problem lies in the generation of the node_group_one_hot_vector.pkl, node_group_one_hot_vector_multi.pkl, group_adj_matrix.pkl, and group_adj_matrix_multi.pkl files. I tried two setups for generating these *.pkl files. Take FB15k-237 as an example:
Experiment one: for kg_triple.txt, I merge the train, valid, and test triples together to produce the above *.pkl files. After training, the test results are similar to the paper's.
Experiment two: for kg_triple.txt, I merge only the train and valid triples, excluding the test triples, to produce the above *.pkl files. After training, the performance drops sharply: it is only slightly better than Query2Box, and about 0.2 lower than in experiment one. A minimal sketch of both setups follows.
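For clarity, here is a minimal sketch of how I build kg_triple.txt in the two setups. The split file names (train.txt, valid.txt, test.txt) and the tab-separated head/relation/tail format are assumptions based on the standard BetaE-style data layout, not taken from this repo:

```python
import os

def load_triples(path):
    # Each line is assumed to be: head \t relation \t tail
    with open(path) as f:
        return [tuple(line.strip().split("\t")) for line in f if line.strip()]

data_dir = "data/FB15k-237"
splits = ["train.txt", "valid.txt", "test.txt"]   # experiment one
# splits = ["train.txt", "valid.txt"]             # experiment two

triples = []
for name in splits:
    triples += load_triples(os.path.join(data_dir, name))

with open(os.path.join(data_dir, "kg_triple.txt"), "w") as f:
    for h, r, t in triples:
        f.write(f"{h}\t{r}\t{t}\n")
# kg_triple.txt is then fed to the scripts that generate
# node_group_one_hot_vector*.pkl and group_adj_matrix*.pkl.
```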
I want to ask whether the kg_triple.txt used in your experiments aggregates the train, valid, and test triples, or uses only the train and valid triples?
Supplementary note: I checked your data/FB15k-237/kg_triple.txt. For FB15k-237 there are 272,115, 17,535, and 20,466 triples in the training, validation, and test sets respectively. When these are processed by create_queries.py (from the BetaE paper), the merged set contains 620,158 triples in total, but the kg_triple.txt you provide contains 781,694 triples. I would like to ask how the extra 161,536 triples are generated. Besides, I checked data/FB15k-237/from_to_map.pkl, and the total number of triples in from_to_map.pkl is 620,158. This is very confusing to me: what is the actual number of triples used? Looking forward to your reply.
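The counts above come from a check along these lines. This is a minimal sketch; it assumes kg_triple.txt stores one triple per line and that from_to_map.pkl maps (head, relation) pairs to collections of tails, which is my reading of the files, not a documented format:

```python
import pickle

# Count lines (triples) in kg_triple.txt.
with open("data/FB15k-237/kg_triple.txt") as f:
    n_kg = sum(1 for line in f if line.strip())
print("triples in kg_triple.txt:", n_kg)        # 781,694

# Count triples implied by from_to_map.pkl.
with open("data/FB15k-237/from_to_map.pkl", "rb") as f:
    from_to_map = pickle.load(f)
n_map = sum(len(tails) for tails in from_to_map.values())
print("triples in from_to_map.pkl:", n_map)     # 620,158
print("difference:", n_kg - n_map)              # 161,536
```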
Regarding the node_group_one_hot_vector.pkl, node_group_one_hot_vector_multi.pkl, group_adj_matrix.pkl, and group_adj_matrix_multi.pkl files: could you please send me a copy of them? Thank you very much. [email protected]
The graphics card I used was a 2080 Ti, but I ran into a GPU out-of-memory problem.
Command:
CUDA_VISIBLE_DEVICES=1 python -u codes/run_model_newlook.py --do_train --cuda --do_valid --do_test --data_path data/FB15k --model BoxNewLook -n 128 -b 256 -d 400 -g 24 -a 1.0 -lr 0.0001 --max_steps 50000 --cpu_num 1 --test_batch_size 16 --center_reg 0.02 --geo box --task 1c.2c.3c.2i.3i.ic.ci.2u.uc.2d.3d.dc --stepsforpath 50000 --Areopagivarearev --print_on_screen
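For reference, a back-of-the-envelope estimate of the static parameter footprint under this command (FB15k has roughly 14,951 entities and 1,345 relations; the center-plus-offset relation layout is an assumption about the box embedding model, not taken from the repo):

```python
# Rough float32 memory estimate for the embedding tables only.
nentity, nrelation, dim = 14951, 1345, 400    # FB15k stats, -d 400
params = nentity * dim + 2 * nrelation * dim  # entity + relation center/offset (assumed layout)
print(f"{params * 4 / 2**20:.1f} MiB")        # ~27 MiB

# The tables themselves are tiny, so the overflow presumably comes from
# the per-batch activations, which scale with -b, -n, and test_batch_size.
```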
What type of graphics card do you use? What is the run command?
Thanks!