Stage II #20
I used datasets of 512*512; the pose feature vector and the noise vector are both 512-dimensional. Is this the problem?
Hi, your work is really great. I want to build on the part that embeds the pose map, but I don't see how the pose data is organized; I only see how it is called. Is the pose data also placed in the dataroot as an independent dataset? I ask because I also want to do feature-embedding work.
Hey @hqulxw123
Sorry, I didn't express myself clearly. I want to use a different auxiliary image instead of a pose, so I need the embedding model from your article to stitch the features together. Having started studying your code, it is still not clear to me how to prepare my auxiliary dataset: do you generate the pose map before input, or call the model after input? Your suggestions would be very helpful to me.
Hi @hqulxw123
Your work is really great; thank you for your kind reply.
Traceback (most recent call last):
  File "train.py", line 118, in <module>
    main()
  File "train.py", line 78, in main
    model.optimize_parameters()
  File "/media/ouc/4T_B/zhr/FD-GAN/fdgan/model.py", line 218, in optimize_parameters
    self.forward()
  File "/media/ouc/4T_B/zhr/FD-GAN/fdgan/model.py", line 158, in forward
    self.fake = self.net_G(B_map, A_id.view(A_id.size(0), A_id.size(1), 1, 1), z.view(z.size(0), z.size(1), 1, 1))
  File "/home/ouc/miniconda3/envs/gogo/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/home/ouc/miniconda3/envs/gogo/lib/python3.6/site-packages/torch/nn/parallel/data_parallel.py", line 71, in forward
    return self.module(*inputs[0], **kwargs[0])
  File "/home/ouc/miniconda3/envs/gogo/lib/python3.6/site-packages/torch/nn/modules/module.py", line 357, in __call__
    result = self.forward(*input, **kwargs)
  File "/media/ouc/4T_B/zhr/FD-GAN/fdgan/networks.py", line 175, in forward
    feature = torch.cat((reid_feature, pose_feature, noise), dim=1)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 1. Got 1 and 9 in dimension 2 at /pytorch/torch/lib/THC/generic/THCTensorMath.cu:111
I have encountered this error in stage II. Can you help me?
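The RuntimeError above is a general `torch.cat` constraint: all tensors must have identical sizes in every dimension except the one being concatenated. Here `reid_feature` and `noise` have a 1x1 spatial map while `pose_feature` has a larger one (size 9 in dimension 2). A minimal sketch reproducing the failure and one common workaround (the tensor shapes below are illustrative, not taken from FD-GAN's actual configuration):

```python
import torch
import torch.nn.functional as F

# Illustrative shapes only: identity feature and noise are 1x1 spatial maps,
# while the pose feature still has a larger spatial extent.
reid_feature = torch.randn(2, 2048, 1, 1)
pose_feature = torch.randn(2, 512, 9, 5)
noise = torch.randn(2, 256, 1, 1)

try:
    # Fails: dims other than dim=1 disagree (1 vs 9 in dimension 2).
    torch.cat((reid_feature, pose_feature, noise), dim=1)
except RuntimeError as err:
    print("cat failed:", err)

# One workaround: resample the mismatched map to a common spatial size
# before concatenating along the channel dimension.
pose_resized = F.interpolate(pose_feature, size=(1, 1), mode='bilinear',
                             align_corners=False)
feature = torch.cat((reid_feature, pose_resized, noise), dim=1)
print(feature.shape)  # torch.Size([2, 2816, 1, 1])  (2048 + 512 + 256 channels)
```

In the original issue the mismatch most likely traces back to feeding inputs of a different resolution than the network's pose encoder was built for, so the pose branch emits a non-1x1 feature map; resizing the input data to the expected resolution is the cleaner fix than resampling features.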