Hi, you are doing a great job. I would like to reproduce your work. Where can I find the relevant code for evaluating the generated text?
I ran
python test.py --length=50 --num_iterations=1 --temperature=1 --sample --gamma=1 --gm_scale=0.875 --kl_scale=0.01 --num_reviews=70
(I changed num_reviews from 5 to 70) and got the following error:
Written 70 records in the csv containing conditional sentences.
Traceback (most recent call last):
File "test.py", line 612, in <module>
run_pplm_example(**vars(args))
File "test.py", line 305, in run_pplm_example
kl_scale=kl_scale
File "test.py", line 413, in full_text_generation
kl_scale=kl_scale
File "test.py", line 500, in generate_text_pplm
device=device
File "test.py", line 152, in perturb_past
inputs_embeds=inputs_embeds
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 593, in forward
inputs_embeds=inputs_embeds,
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 476, in forward
hidden_states, layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask[i]
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 226, in forward
self.ln_1(x), layer_past=layer_past, attention_mask=attention_mask, head_mask=head_mask
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl
return forward_call(*input, **kwargs)
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 189, in forward
attn_outputs = self._attn(query, key, value, attention_mask, head_mask)
File "/home/minhao/anaconda3/envs/RexPlug/lib/python3.7/site-packages/transformers/modeling_gpt2.py", line 146, in _attn
w = w * b - 1e4 * (1 - b)
RuntimeError: The size of tensor a (1025) must match the size of tensor b (1024) at non-singleton dimension 3
Do you know what causes this error?
Hi, I have encountered the same issue. I think it is probably caused by GPT-2's limited maximum input length (reference link: ). After I limited the lengths of the candidate review and the true review in the create_cond_df function in test_utils.py, the problem was solved.
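
For anyone hitting this, here is a minimal sketch of that truncation fix. It is an illustration only: the DataFrame column names (candidate_review, true_review) and the 900-token budget are assumptions, chosen so the prompt plus the 50 tokens requested via --length stays under GPT-2's 1024-token context window; adapt the names and budget to your copy of create_cond_df in test_utils.py.

from transformers import GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")

def truncate_to_budget(text, max_tokens=900):
    # Encode with GPT-2's BPE tokenizer and keep only the first
    # max_tokens tokens, so the prompt plus the generated tokens
    # fits inside the model's 1024-token context window.
    token_ids = tokenizer.encode(text)
    if len(token_ids) <= max_tokens:
        return text
    return tokenizer.decode(token_ids[:max_tokens])

# Hypothetical usage inside create_cond_df (column names are assumed):
# df["candidate_review"] = df["candidate_review"].apply(truncate_to_budget)
# df["true_review"] = df["true_review"].apply(truncate_to_budget)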