
Assignment One

A little about torch.Tensor

A short introduction to PyTorch and the chosen functions.

  • torch.arange
  • torch.split
  • torch.stack
  • torch.abs
  • torch.add
In [5]:
# Import torch and other required modules
import torch
import numpy

Function 1 - torch.arange

Returns a one-dimensional tensor with values from start (inclusive) to end (exclusive), incremented by step.

In [6]:
# Example 1 - 1-D tensor with size of 10 with steps of 1
torch.arange(10)
Out[6]:
tensor([0, 1, 2, 3, 4, 5, 6, 7, 8, 9])

Notice that the tensor starts at 0. If no start parameter is provided, it defaults to 0.

In [7]:
# Example 2 - More parameters
torch.arange(1., 2, .1, requires_grad = True)
Out[7]:
tensor([1.0000, 1.1000, 1.2000, 1.3000, 1.4000, 1.5000, 1.6000, 1.7000, 1.8000,
        1.9000], requires_grad=True)

Notice here that the start and step arguments are floating-point values. We can also provide a range of other parameters such as requires_grad, out, dtype, layout and device.
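As a small sketch of those extra parameters (assuming a CPU-only setup, so device is left at its default):

```python
import torch

# arange with an explicit floating-point dtype; device defaults to the CPU
t = torch.arange(0, 5, 1, dtype=torch.float64)
print(t)  # tensor([0., 1., 2., 3., 4.], dtype=torch.float64)
```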

In [8]:
# Example 3 - breaking (to illustrate when it breaks)
length = 10
width = 5
torch.arange(length * width, step = 1, dtype = torch.get_default_dtype())
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-8-00a38a246dc8> in <module> 2 length = 10 3 width = 5 ----> 4 torch.arange(length * width, step = 1, dtype = torch.get_default_dtype()) TypeError: arange() received an invalid combination of arguments - got (int, dtype=torch.dtype, step=int), but expected one of: * (Number end, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad) * (Number start, Number end, Number step, *, Tensor out, torch.dtype dtype, torch.layout layout, torch.device device, bool pin_memory, bool requires_grad)

Be mindful of how you organize your parameters. The correct way to write this would be:

torch.arange(start = 0, end = length * width, step = 1, dtype = torch.get_default_dtype())

This function can be useful for creating a one-dimensional tensor of indices.
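For example, here is a minimal sketch (the scores tensor is made up for illustration) that uses arange to build row indices for advanced indexing:

```python
import torch

# Hypothetical scores: three rows, two columns
scores = torch.tensor([[0.1, 0.9], [0.8, 0.2], [0.3, 0.7]])
rows = torch.arange(scores.shape[0])  # tensor([0, 1, 2])
best = scores.argmax(dim=1)           # column index of the max in each row
picked = scores[rows, best]           # pick one element per row
print(picked)  # tensor([0.9000, 0.8000, 0.7000])
```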

Function 2 - torch.split

Splits the tensor into chunks. Each chunk is a view of the original tensor.

In [9]:
# Example 1 - Basic split
input_tensor = torch.tensor([50, 80, 10, 2])
torch.split(input_tensor, split_size_or_sections = 1)
Out[9]:
(tensor([50]), tensor([80]), tensor([10]), tensor([2]))

Above we created an input tensor and used the split method to break our tensor data into equal chunks of size 1.

In [10]:
# Example 2 - Split a tensor of ones, size 10, into chunks of at most size 2
x = torch.ones(10)
torch.split(x, 2)
Out[10]:
(tensor([1., 1.]),
 tensor([1., 1.]),
 tensor([1., 1.]),
 tensor([1., 1.]),
 tensor([1., 1.]))

In the example above, first we created a tensor of ones with size 10. We then used the split method to split the tensor into separate chunks, each of size at most 2.

In [11]:
# Example 3 - breaking (to illustrate when it breaks)
y = torch.zeros([3, 3])
torch.split(y, 1, 3)
--------------------------------------------------------------------------- IndexError Traceback (most recent call last) <ipython-input-11-ff2a21cfef4c> in <module> 1 # Example 3 - breaking (to illustrate when it breaks) 2 y = torch.zeros([3, 3]) ----> 3 torch.split(y, 1, 3) ~/miniconda3/envs/notebook/lib/python3.8/site-packages/torch/functional.py in split(tensor, split_size_or_sections, dim) 89 # split_size_or_sections. The branching code is in tensor.py, which we 90 # call here. ---> 91 return tensor.split(split_size_or_sections, dim) 92 93 # equivalent to itertools.product(indices) ~/miniconda3/envs/notebook/lib/python3.8/site-packages/torch/tensor.py in split(self, split_size, dim) 376 """ 377 if isinstance(split_size, int): --> 378 return super(Tensor, self).split(split_size, dim) 379 elif isinstance(split_size, Tensor): 380 try: IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 3)

We get this IndexError: Dimension out of range error when we pass a dim that the tensor does not have: a 2-D tensor only has dimensions 0 and 1 (or -2 and -1), so dim=3 is out of range. Keep in mind the dimensions along which you want to split the tensor.
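A sketch of the fix: for the 3x3 tensor above, dim can only be 0 (rows) or 1 (columns):

```python
import torch

# Split the 3x3 tensor column-wise along dim=1 instead of the invalid dim=3
y = torch.zeros([3, 3])
cols = torch.split(y, 1, dim=1)
print(len(cols), cols[0].shape)  # 3 torch.Size([3, 1])
```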

This function is perfect when you need to split a Tensor into equal-size chunks.
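split is not limited to equal chunks, though; passing a list of section sizes gives unequal chunks, as in this small sketch:

```python
import torch

# Split ten elements into sections of sizes 3, 3 and 4
x = torch.arange(10)
chunks = torch.split(x, [3, 3, 4])
print(chunks)  # (tensor([0, 1, 2]), tensor([3, 4, 5]), tensor([6, 7, 8, 9]))
```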

Function 3 - torch.stack

Concatenates a sequence of tensors along a new dimension.

In [12]:
# Example 1 - Basic example
a = torch.ones([3, 3])
b = torch.zeros([3, 3])
torch.stack([a, b])
Out[12]:
tensor([[[1., 1., 1.],
         [1., 1., 1.],
         [1., 1., 1.]],

        [[0., 0., 0.],
         [0., 0., 0.],
         [0., 0., 0.]]])

In the above example we created two separate tensors using ones and zeros, which were then stacked into a single tensor along a new leading dimension using the stack method.

In [13]:
# Example 2 - Stack two tensor on one dimension
c = torch.ones(5)
d = torch.zeros(5)
torch.stack([c, d], 1)
Out[13]:
tensor([[1., 0.],
        [1., 0.],
        [1., 0.],
        [1., 0.],
        [1., 0.]])

In the above example we created another two tensors with the ones and zeros methods. This time we passed an extra argument to the stack method, specifying the dimension to stack along.

In [14]:
# Example 3 - breaking (to illustrate when it breaks)
e = torch.ones(3)
f = torch.zeros(4)
torch.stack([e, f])
--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-14-6eb90f68ac43> in <module> 2 e = torch.ones(3) 3 f = torch.zeros(4) ----> 4 torch.stack([e, f]) RuntimeError: stack expects each tensor to be equal size, but got [3] at entry 0 and [4] at entry 1

We get the error above, stack expects each tensor to be equal size, because the shapes of tensors e and f are not the same.

This function is perfect when you need to stack tensors of equal size along a new dimension.
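A common use is building a batch dimension from individual samples; here is a minimal sketch (the 2x2 samples are made up for illustration):

```python
import torch

# Four equally sized 2x2 tensors, filled with 0., 1., 2. and 3.
samples = [torch.full((2, 2), float(i)) for i in range(4)]
batch = torch.stack(samples)  # new leading batch dimension
print(batch.shape)  # torch.Size([4, 2, 2])
```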

Function 4 - torch.abs

Computes the element-wise absolute value of the given input tensor.

In [15]:
# Example 1 - Basic example
ex_tensor = torch.tensor([-1, -2, -3])
torch.abs(ex_tensor)
Out[15]:
tensor([1, 2, 3])

In the above example, first we created a tensor of all negative integers and passed that tensor to torch.abs, which turned every element into a positive integer.

In [16]:
# Example 2
m = torch.tensor([1])
n = torch.tensor([2])
print('negative', m - n)
torch.abs(m - n)
negative tensor([-1])
Out[16]:
tensor([1])

Similar to the first example, here we create two tensors using the tensor method, then run some arithmetic on them which returns a negative value; as you can see, when we pass that same expression to the abs method it returns a positive value.

In [17]:
# Example 3 - breaking (to illustrate when it breaks)
o = numpy.array([1, [-2]])
torch.abs(o)
--------------------------------------------------------------------------- TypeError Traceback (most recent call last) <ipython-input-17-6a3f847a59dc> in <module> 1 # Example 3 - breaking (to illustrate when it breaks) 2 o = numpy.array([1, [-2]]) ----> 3 torch.abs(o) TypeError: abs(): argument 'input' (position 1) must be Tensor, not numpy.ndarray

Make sure you pass a tensor as an argument to torch.abs, otherwise it will throw the above error: abs(): argument 'input' (position 1) must be Tensor, not numpy.ndarray

This function is great when you want to work with all-positive elements in your tensor!
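A sketch of the fix for Example 3: convert the NumPy array to a tensor first (a flat array is used here, since the ragged array above cannot be converted):

```python
import torch
import numpy as np

# torch.from_numpy turns a well-formed NumPy array into a tensor
o = np.array([1, -2])
t = torch.from_numpy(o)
print(torch.abs(t))  # tensor([1, 2])
```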

Function 5 - torch.add

Adds the scalar other to each element of the input and returns a new resulting tensor.

In [18]:
# Example 1 - Basic example
p = torch.ones(10)
torch.add(p, 9)
Out[18]:
tensor([10., 10., 10., 10., 10., 10., 10., 10., 10., 10.])

So in the above example we create a tensor of ones with size 10 using the ones method. Next we pass our tensor of ones to the add method with a scalar of 9, which adds 9 to each element of our tensor.

In [19]:
# Example 2
q = torch.randn(4)
r = torch.randn(1)
torch.add(q, r, alpha = 10)
Out[19]:
tensor([17.4783, 17.0044, 17.4107, 19.7463])

In this example we create a tensor of four random floats, then a tensor with a single random value. Finally we compute q + 10 * r: each element of q is added to r scaled by the alpha multiplier of 10.

In [20]:
# Example 3 - breaking (to illustrate when it breaks)
s = complex(1, 1)
t = torch.zeros(1)
torch.add(t, s)
--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-20-418a293c00a5> in <module> 2 s = complex(1, 1) 3 t = torch.zeros(1) ----> 4 torch.add(t, s) RuntimeError: Complex dtype not supported.

In the above example we create a complex number using the complex method. Then we create a tensor of zeros with size one and attempt to add our complex number to the element of our tensor. We get the error Complex dtype not supported. because this version of the add method only supports integer and floating-point values.

This function is great when you need to increment each element in your tensor by the same value.
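Beyond scalars, add also broadcasts tensors of compatible shapes; a small sketch:

```python
import torch

# A (3,) row broadcast against a (3, 1) column gives a (3, 3) table of sums
row = torch.arange(3)
col = torch.arange(3).reshape(3, 1)
print(torch.add(row, col))
# tensor([[0, 1, 2],
#         [1, 2, 3],
#         [2, 3, 4]])
```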

Conclusion

To wrap up, we covered five different functions available in the PyTorch API. If you want to learn more, check out the reference links below!

Reference Links

  • Official torch API reference: https://pytorch.org/docs/stable/torch.html

In [21]:
!pip install jovian --upgrade --quiet
In [22]:
import jovian
In [ ]:
jovian.commit()
[jovian] Attempting to save notebook..