
PyTorch

An open-source Python machine learning library based on Torch. Its main difference from similar libraries is that the graph of computations is constructed dynamically.

Main modules

Autograd

PyTorch uses automatic differentiation. The computations performed in the forward direction are recorded, and then replayed in reverse order to compute the gradients. This method is especially useful when building neural networks, since it allows the gradients of the parameters to be computed alongside the forward pass.
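A minimal sketch of this record-and-replay idea (the variable names and the toy function are illustrative):

```python
import torch

# A scalar tensor that asks autograd to record operations on it
x = torch.tensor(3.0, requires_grad=True)

# Forward pass: the operations below are recorded in a graph
y = x ** 2 + 2 * x

# Replay the graph in reverse to compute dy/dx
y.backward()

print(x.grad)  # dy/dx = 2*x + 2 = 8 at x = 3
```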

Optim

The module that implements optimization algorithms used when training neural networks. Most of the commonly used methods (SGD, Adam, RMSprop, and others) are provided.
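A minimal sketch of one optimizer step with torch.optim.SGD on a toy quadratic loss (the parameter and learning rate are illustrative):

```python
import torch

# A single trainable parameter
w = torch.tensor(5.0, requires_grad=True)
opt = torch.optim.SGD([w], lr=0.1)

loss = (w - 1.0) ** 2  # toy loss, minimized at w = 1
loss.backward()        # d(loss)/dw = 2 * (w - 1) = 8
opt.step()             # w <- w - lr * grad = 5 - 0.1 * 8 = 4.2

print(w.item())
```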

nn

The PyTorch autograd module makes it easy to define computational graphs and work with gradients, but it may be too low-level for complex neural networks. The nn module provides a higher-level abstraction for such applications.
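A small sketch of the nn abstraction: layers composed with nn.Sequential, applied to a batch of inputs (the layer sizes are illustrative):

```python
import torch
from torch import nn

# A tiny feed-forward network: 4 inputs -> 8 hidden units -> 1 output
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.ReLU(),
    nn.Linear(8, 1),
)

out = model(torch.randn(2, 4))  # a batch of 2 samples with 4 features
print(out.shape)                # torch.Size([2, 1])
```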

Tensor

The library operates on data in the form of tensors. A tensor is a multi-dimensional array of any size, which can hold data of various types.
A short introduction to PyTorch and the chosen functions.

The most interesting functions:

  • matmul
  • sum
  • max
  • exp
  • log
# Import torch and other required modules
import torch

Function 1 - torch.matmul(input, other, out=None) → Tensor

Matrix product of two tensors.

# Example 1 - matrix product of a 4x2 and a 2x3 tensor
x = torch.tensor([[1., 1.], [2., 2.], [3., 3.], [4., 4.]])
y = torch.tensor([[1., 2., 3.], [2., 3., 4.]])

print(x.shape, y.shape)
z = x.matmul(y)
print(z.shape)
z
torch.Size([4, 2]) torch.Size([2, 3])
torch.Size([4, 3])
tensor([[ 3.,  5.,  7.],
        [ 6., 10., 14.],
        [ 9., 15., 21.],
        [12., 20., 28.]])

The basic rule for multiplying two matrices is that their inner dimensions must match. For example, multiplying a 2x8 matrix by an 8x5 matrix yields a 2x5 matrix.
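The shape rule above can be checked directly (the values here are random; only the shapes matter):

```python
import torch

a = torch.randn(2, 8)  # inner dimension 8
b = torch.randn(8, 5)  # inner dimension 8 matches

z = torch.matmul(a, b)
print(z.shape)  # torch.Size([2, 5])
```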