
PyTorch on GPUs

PyTorch is an open-source deep learning library for Python, developed by Facebook.
PyTorch's CUDA package lets you select which GPU you are using, and any tensors you create are then automatically allocated on that device. Once a tensor is allocated, operations on it run on the same device, and their results are placed there as well. CUDA tensors support the same functions as CPU tensors, except that the computations are carried out on the GPU.

Remember that, by default, PyTorch does not allow cross-GPU operations: all operands of an operation must live on the same device.
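As a minimal sketch of this rule (the variable names here are illustrative, not from the notebook): before combining two tensors, move one to the other's device with .to():

```python
import torch

a = torch.ones(2)  # created on the CPU by default

# Fall back to the CPU when no GPU is visible, so the sketch runs anywhere.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
b = torch.ones(2, device=device)

# "a + b" would raise a RuntimeError if a and b lived on different
# devices; moving a to b's device first makes the addition legal.
c = a.to(device) + b
print(c.device)
```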

  • torch.cuda.is_available()
  • torch.cuda.get_device_name(device_id)
  • tensorObj.to()
  • torch.cuda.memory_allocated()
  • torch.ones(dim,device)
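The functions above can be seen together in a short sketch (the device fallback and the tensor shapes are assumptions for illustration, not part of the notebook):

```python
import torch

# Pick the GPU when one is visible, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

if device.type == "cuda":
    print(torch.cuda.get_device_name(0))   # e.g. "Tesla T4"
    print(torch.cuda.memory_allocated(0))  # bytes currently allocated on GPU 0

# torch.ones(...) accepts a device argument; the tensor is created there.
x = torch.ones(3, 3, device=device)

# .to() moves (or copies) a tensor to another device.
y = x.to("cpu")
print(y.device)  # cpu
```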

Before we begin, let's install and import PyTorch.

# Uncomment and run the appropriate command for your operating system, if required

# Linux / Binder
# !pip install numpy torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html

# Windows
# !pip install numpy torch==1.7.0+cpu torchvision==0.8.1+cpu torchaudio==0.7.0 -f https://download.pytorch.org/whl/torch_stable.html

# MacOS
# !pip install numpy torch torchvision torchaudio
# Import torch and other required modules
import torch

Function 1 - torch.cuda.is_available()

  • Returns True if CUDA is currently available on the system, and False otherwise.
#example 1 - working
torch.cuda.is_available() # returns True here because a CUDA-enabled GPU is available, e.g. a Google Colab runtime with the GPU accelerator enabled
True
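A common follow-on idiom (a sketch, not from the original notebook): since the function returns a plain bool, it can drive a one-line device choice that works on both GPU and CPU-only machines:

```python
import torch

# Pick the device once and reuse it everywhere in the program.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

t = torch.randn(4, 4, device=device)
print(t.device)  # "cuda:0" on a GPU runtime, "cpu" otherwise
```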