
Assignment 1

Investigation of PyTorch functions

A short introduction: PyTorch is an open-source machine learning library for tensor computation and automatic differentiation. This notebook investigates the following functions:

  • requires_grad_()
  • new_full()
  • torch.matmul()
  • function 4
  • function 5
# Import torch and other required modules
import torch
import numpy as np

Function 1 - requires_grad_()

If you have a tensor and just want to change its requires_grad flag, you can use this function.
It changes whether autograd should record operations on the tensor: it sets the tensor's requires_grad attribute in place.
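As a minimal sketch of the in-place behavior (the tensor here is made up for illustration), the flag starts out False and is flipped by the call:

# minimal sketch: requires_grad is False by default and is flipped in place
t = torch.ones(2, 2)
print(t.requires_grad)   # False
t.requires_grad_()       # equivalent to t.requires_grad_(requires_grad=True)
print(t.requires_grad)   # True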

# Example 1 - requires_grad_(requires_grad=True)

# Input
inputs = np.array([[73, 67, 43], 
                   [91, 88, 64], 
                   [87, 134, 58], 
                   [102, 43, 37], 
                   [69, 96, 70]], dtype='float32')
# Targets
targets = np.array([[56, 70], 
                    [81, 101], 
                    [119, 133], 
                    [22, 37], 
                    [103, 119]], dtype='float32')

inputs = torch.from_numpy(inputs)
targets = torch.from_numpy(targets)

w = torch.randn(2, 3)

# Now, if you need to calculate gradients with respect to w, you can use requires_grad_() as shown below:
w.requires_grad_()

# a simple linear model: matrix-multiply the inputs by the transpose of w
def model(x):
    return x @ w.t()
preds = model(inputs)

# a scalar output: the sum of squared predictions
out = preds.pow(2).sum()

out.backward()
# now we can see that the gradient with respect to w has been computed
print(w.grad)
tensor([[-170709.0469, -186016.1562, -114157.7188],
        [ -94434.0938, -108443.9141,  -66155.3125]])
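We can sanity-check this result by hand: since out = sum(preds**2) with preds = inputs @ w.t(), the gradient with respect to w is 2 * preds.t() @ inputs. A quick check, reusing the tensors defined above:

# sanity check: the analytic gradient of out = sum(preds**2) w.r.t. w
# is 2 * preds.t() @ inputs; it should match what autograd computed
manual_grad = 2 * preds.detach().t() @ inputs
print(torch.allclose(w.grad, manual_grad))   # True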

As we know, we can't compute gradients for a tensor unless requires_grad=True was set when the tensor was created.
We may not realize until later in our code that we need gradients for some tensor.
One way to solve this is to call requires_grad_(requires_grad=True), as in the example above; after that, we can compute gradients with respect to that tensor.
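For comparison, here is a sketch of the alternative mentioned above: requesting gradient tracking at creation time by passing requires_grad=True to the factory function (w2 is a new weight tensor introduced just for this illustration).

# alternative: ask for gradient tracking when the tensor is created
w2 = torch.randn(2, 3, requires_grad=True)
preds2 = inputs @ w2.t()
preds2.pow(2).sum().backward()
print(w2.grad.shape)   # torch.Size([2, 3])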