RuntimeError: CUDA out of memory. Tried to allocate 9.54 GiB (GPU 0; 14.73 GiB total capacity; 5.34 GiB already allocated; 8.45 GiB free; 5.35 GiB reserved in total by PyTorch)

Please help me out! I have been stuck on this for days (almost a week now)…
I have tried torch.cuda.empty_cache(), but it does not help…

Your model uses too much GPU memory for the given batch size.

You might try lowering it (the batch size, I mean).
Otherwise you could modify your model to contain fewer parameters.
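To see why lowering the batch size helps, here is a rough back-of-the-envelope sketch: activation memory grows roughly linearly with batch size, so cutting the batch by 4x cuts that part of the footprint by about 4x. The per-sample activation count below is a made-up illustrative number, not a measurement of any real model.

```python
# Rough, illustrative estimate of activation memory vs. batch size.
# `per_sample` is a hypothetical activation count, chosen only for illustration.

def activation_bytes(batch_size, activations_per_sample, bytes_per_value=4):
    """Activation memory grows ~linearly with batch size (float32 = 4 bytes)."""
    return batch_size * activations_per_sample * bytes_per_value

per_sample = 50_000_000  # hypothetical activations per sample

gib = 1024 ** 3
mem_64 = activation_bytes(64, per_sample) / gib
mem_16 = activation_bytes(16, per_sample) / gib
print(f"batch 64: {mem_64:.1f} GiB, batch 16: {mem_16:.1f} GiB")
```

Note that parameter and optimizer-state memory does not shrink with batch size; only the activation (and gradient) portion does, which is why a smaller batch sometimes isn't enough on its own.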

Given the nature of the problem I’m assuming you work with images (as always, the OP never mentions what they are working on). You might try to scale them down a bit with Resize((x, y)); since you are only around 1 GB above the limit, the quality won’t suffer that much.


I got the same error while working on a course project model for image classification.
I did two things:

  1. I reduced the batch size.
  2. The images I was working with were 1024 x 1024 pixels, so I scaled them down to 64 x 64.

Of the two, the second was the more effective one.
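The second fix is so effective because per-image memory scales with the square of the side length, so going from 1024 x 1024 down to 64 x 64 shrinks each image tensor by a factor of 256. The numbers below are simple arithmetic for a float32 RGB tensor, not measurements:

```python
# Per-image tensor size scales quadratically with resolution.
def image_bytes(side, channels=3, bytes_per_value=4):
    """Bytes for one float32 image tensor of shape (channels, side, side)."""
    return channels * side * side * bytes_per_value

big = image_bytes(1024)   # 3 * 1024 * 1024 * 4 = 12,582,912 bytes (~12 MiB)
small = image_bytes(64)   # 3 * 64 * 64 * 4 = 49,152 bytes (~48 KiB)
print(big // small)  # 256: a 16x reduction per side gives a 256x reduction overall
```

By contrast, halving the batch size only halves the activation footprint, which is why resizing dominated here.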


1024x1024 images are really big for current GPUs. No surprise that it didn’t work. The biggest I’ve worked with on my PC was 128x128, with very small batch sizes (around 32?).

With Kaggle you could go with 256x256, but still, a batch size above 64 would probably eat up all the memory.

Does that mean Kaggle’s GPU is more powerful than Colab’s?

No idea, I’ve never used Colab extensively.