RuntimeError: CUDA out of memory. Tried to allocate 1.95 GiB (GPU 0; 15.90 GiB total capacity; 13.12 GiB already allocated; 1.93 GiB free; 13.14 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
I got this error after trying to predict on the test file. I already reduced the batch size and emptied the PyTorch cache, but the problem persists. Any help, please?
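Since the error message itself points at `max_split_size_mb`, here is a minimal, hedged sketch of the usual mitigations, assuming a standard PyTorch inference loop (the model and loader names are placeholders, and 128 MB is just a common starting value, not a guaranteed fix):

```python
import os

# Set the allocator config BEFORE importing torch or touching CUDA;
# max_split_size_mb caps how large cached blocks can be split,
# which can reduce fragmentation when reserved >> allocated.
os.environ["PYTORch_CUDA_ALLOC_CONF".upper()] = "max_split_size_mb:128"

# Typical inference-time memory savers (hypothetical model/test_loader):
# import torch
# model.eval()
# with torch.no_grad():                    # skip autograd buffers
#     for batch in test_loader:            # also try a smaller batch size
#         out = model(batch.to("cuda"))
# torch.cuda.empty_cache()                 # release cached blocks afterwards

print(os.environ["PYTORCH_CUDA_ALLOC_CONF"])
```

Note that `torch.cuda.empty_cache()` only frees PyTorch's cached-but-unused blocks; if the 13 GiB already allocated is live tensors (e.g. outputs accumulated on the GPU), moving results to CPU inside the loop is usually what actually helps.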
Hi,
This sounds like an error on the modelling side (a PyTorch error). Unfortunately, I can only help with CodaLab errors.
All the best,
Hannah
Posted by: hannah.rose.kirk @ Jan. 15, 2023, 12:46 p.m.