
Running on GPU with less memory - CUDA out of memory #69

Open
syed-abdul-baqi opened this issue Jul 17, 2019 · 5 comments

@syed-abdul-baqi

I was trying this model on a GTX 1080 Ti (12 GB memory). All training runs worked fine for me, but only with a smaller batch size.
However, testing the I-frame model gives a CUDA out of memory error, which is strange because testing should not require that much memory, especially when training worked fine.
What is the best way to resolve this issue?

@SethuGitHub


Hi Syed,
Please try changing `forward_video()` in test.py as below:

```python
def forward_video(data):
    # input_var = torch.autograd.Variable(data, volatile=True)
    input_var = torch.autograd.Variable(data)
    with torch.no_grad():  # sethu
        scores = net(input_var)
        scores = scores.view((-1, args.test_segments * args.test_crops) + scores.size()[1:])
        scores = torch.mean(scores, dim=1)
    return scores.data.cpu().numpy().copy()
```

Try this and check.
-- [email protected]

@Maruidear

Hi, I met this problem in training. How can I solve it?

@Maruidear


Hi, I met this problem in training. Can you tell me what batch size you used for training?

@syed-abdul-baqi
Author

syed-abdul-baqi commented Aug 30, 2019

Reduce the batch size for training.
I reduced it to 4 for the I-frame model; you can go further down to 2 or even 1.
A batch size of 40 worked fine for the mv and residual models.

For testing, passing the following parameter on the command line made it run successfully:

--test-crops=1

Note that the default value is 10 for test crops.

Best of luck
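The reason `--test-crops=1` helps: at test time each sample is expanded into `test_segments * test_crops` inputs, so the effective batch the network sees (and the activation memory) is multiplied by the crop count. A small sketch with stand-in tensors and illustrative shapes, mirroring the reshape/average in `forward_video()`:

```python
import torch

batch, test_segments, test_crops, num_classes = 2, 3, 10, 5

# At test time each video is expanded into test_segments * test_crops
# inputs, so the network actually processes this many samples at once:
effective_batch = batch * test_segments * test_crops
scores = torch.randn(effective_batch, num_classes)  # stand-in for net output

# The reshaping/averaging from test.py's forward_video():
scores = scores.view((-1, test_segments * test_crops) + scores.size()[1:])
scores = torch.mean(scores, dim=1)
assert scores.shape == (batch, num_classes)

# With --test-crops=1, the effective batch (and activation memory)
# shrinks by a factor of 10 relative to the default of 10 crops:
assert batch * test_segments * 1 == effective_batch // 10
```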

@JimLee1996

Half precision is also helpful for reducing memory consumption. Try using tensor.half() and model.half().
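A minimal sketch of this suggestion: converting weights and inputs to 16-bit floats halves their memory footprint. (On GPU you would chain the calls, e.g. `net.cuda().half()` and `data.cuda().half()`; fp16 matmul support on CPU varies by PyTorch version, so the forward pass below is only shown as a comment. Also note fp16 can cost some numerical stability/accuracy, so verify results.)

```python
import torch
import torch.nn as nn

model = nn.Linear(8, 4)
x = torch.randn(2, 8)

# Convert parameters and inputs to float16, halving their memory.
# Module.half() converts in place and returns the module itself.
model_fp16 = model.half()
x_fp16 = x.half()

assert x_fp16.element_size() == 2            # 2 bytes/element vs 4 for float32
assert model_fp16.weight.dtype == torch.float16

# On GPU: scores = model_fp16(x_fp16) would then run the forward pass
# entirely in half precision.
```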
