# ajayan-n123/01-tensor-operations


## Some of the functions used with the torch module

#### Torch basics

PyTorch is a deep learning library similar to TensorFlow. Its tensors behave much like NumPy arrays, so we can often move between the two without noticing the difference, and it is generally considered easier to pick up than TensorFlow was in its early versions. It is used through the `torch` module in Python.

Some of the torch functions covered here are:

• arange()
• sigmoid()
• mul()
• sort()
• mean()
In [1]:
``````## Import torch and other required modules
import torch``````

### torch.arange(start=0, end, step=1, out=None, dtype=None, layout=torch.strided, device=None, requires_grad=False)

The arange function creates evenly spaced values in a `1-D tensor`. The values given with `=` in the signature above are the defaults for each parameter. The parameters we use most frequently are `start`, `end` and `step`.

• start - the value to begin with (inclusive)
• end - the value at which the sequence stops (exclusive)
• step - the equal spacing between consecutive values
In [2]:
``````tensor_val = torch.arange(-1.3,4.2,0.3)
tensor_val
``````
Out[2]:
``````tensor([-1.3000, -1.0000, -0.7000, -0.4000, -0.1000,  0.2000,  0.5000,  0.8000,
1.1000,  1.4000,  1.7000,  2.0000,  2.3000,  2.6000,  2.9000,  3.2000,
3.5000,  3.8000,  4.1000])``````

Floating point values can be provided for start, end and even step, so that equal spacing is maintained between the start and end values.

In [4]:
``torch.arange(100)``
Out[4]:
``````tensor([ 0,  1,  2,  3,  4,  5,  6,  7,  8,  9, 10, 11, 12, 13, 14, 15, 16, 17,
18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34, 35,
36, 37, 38, 39, 40, 41, 42, 43, 44, 45, 46, 47, 48, 49, 50, 51, 52, 53,
54, 55, 56, 57, 58, 59, 60, 61, 62, 63, 64, 65, 66, 67, 68, 69, 70, 71,
72, 73, 74, 75, 76, 77, 78, 79, 80, 81, 82, 83, 84, 85, 86, 87, 88, 89,
90, 91, 92, 93, 94, 95, 96, 97, 98, 99])``````

Here 100 specifies the end value; the remaining parameters take their defaults, so start is initialised to zero and step to one.

In [8]:
``torch.arange('$')``
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-8-991dc99994d7> in <module>
----> 1 torch.arange('$')

TypeError: arange(): argument 'end' (position 1) must be Number, not str
```

torch.arange does not accept strings (or their ASCII values); it only works with int and float arguments.

• arange() can be used to create a dummy dataset or to generate equally spaced numbers to experiment with in mathematical operations. It can also be used for iteration in place of the range() function.
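As a small illustration (the loop below is a sketch added here, not from the original cells), arange() can stand in where range() would normally be used:

```python
import torch

# arange() standing in for range() in a loop, to tabulate squares;
# .item() converts each 0-d tensor element to a plain Python number
squares = []
for x in torch.arange(0, 5):
    squares.append(x.item() ** 2)
print(squares)  # [0, 1, 4, 9, 16]
```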

### torch.sigmoid(input, out=None)

This function computes the sigmoid (the inverse of the logit function) used in logistic regression, where probability = \(e^{x}\)/(1+\(e^{x}\)). Because of this, the outputs in binomial logistic regression fall between 0 and 1, and the loss we use is log loss.
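As a quick sanity check (a sketch added here, not part of the original cells), torch.sigmoid can be compared against the formula directly:

```python
import torch

# Verify sigmoid(x) = e^x / (1 + e^x) on the same values used below
x = torch.arange(-1.3, 4.2, 0.3)
manual = torch.exp(x) / (1 + torch.exp(x))
assert torch.allclose(torch.sigmoid(x), manual)
```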

• input: The input tensor
• out(optional): Output tensor
In [10]:
``tensor_val``
Out[10]:
``````tensor([-1.3000, -1.0000, -0.7000, -0.4000, -0.1000,  0.2000,  0.5000,  0.8000,
1.1000,  1.4000,  1.7000,  2.0000,  2.3000,  2.6000,  2.9000,  3.2000,
3.5000,  3.8000,  4.1000])``````

Here we reuse the values created earlier with the arange function as the input tensor to the sigmoid function.

In [8]:
``torch.sigmoid(tensor_val)``
Out[8]:
``````tensor([0.2142, 0.2689, 0.3318, 0.4013, 0.4750, 0.5498, 0.6225, 0.6900, 0.7503,
0.8022, 0.8455, 0.8808, 0.9089, 0.9309, 0.9478, 0.9608, 0.9707, 0.9781,
0.9837])``````

We can see that all the values have been mapped into the range between 0 and 1. This can be used to draw a sigmoid curve.

In [13]:
``torch.sigmoid(torch.Tensor([1,2,3,4]))``
Out[13]:
``tensor([0.7311, 0.8808, 0.9526, 0.9820])``
In [16]:
``torch.sigmoid([1,2,3])``
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-16-9490c67afb2b> in <module>
----> 1 torch.sigmoid([1,2,3])

TypeError: sigmoid(): argument 'input' (position 1) must be Tensor, not list
```

As the error states, sigmoid() only accepts a tensor; a plain iterable such as a list must first be converted into one.

• sigmoid() is used in logistic regression and whenever values need to be squashed into the range between 0 and 1.

### torch.mul(input, other, out=None)

• input: the input tensor
• other: a scalar number, or a tensor whose shape can be broadcast with the shape of input
• out: the output tensor (optional)
In [11]:
``tensor_val``
Out[11]:
``````tensor([-1.3000, -1.0000, -0.7000, -0.4000, -0.1000,  0.2000,  0.5000,  0.8000,
1.1000,  1.4000,  1.7000,  2.0000,  2.3000,  2.6000,  2.9000,  3.2000,
3.5000,  3.8000,  4.1000])``````
In [4]:
``torch.mul(tensor_val,100)``
Out[4]:
``````tensor([-130., -100.,  -70.,  -40.,  -10.,   20.,   50.,   80.,  110.,  140.,
170.,  200.,  230.,  260.,  290.,  320.,  350.,  380.,  410.])``````

A scalar real number is used for multiplication with the tensor.

In [5]:
``````a = torch.randn(3,1) #randn creates a tensor with random values of the particular dimensions
b = torch.randn(1,4)
print(a)
print(b)
``````
```
tensor([[-1.0903],
        [ 0.8279],
        [ 1.3243]])
tensor([[-2.0269, -0.7475,  0.1836, -1.9493]])
```
In [3]:
``torch.mul(a,b)``
Out[3]:
``````tensor([[-1.2560,  4.7677, -0.0684,  0.4522],
[-0.0979,  0.3715, -0.0053,  0.0352],
[ 0.3599, -1.3663,  0.0196, -0.1296]])``````

Here mul() performs elementwise multiplication with broadcasting: the \(3 \times 1\) tensor a and the \(1 \times 4\) tensor b are broadcast to a common \(3 \times 4\) shape before multiplying. (A true matrix product, which requires the inner dimensions \(n_1 = m_2\) to match, is computed with torch.matmul instead.)
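A small sketch (not from the original cells) showing that for a column times a row, the broadcasted elementwise product happens to coincide with the matrix product computed by torch.matmul:

```python
import torch

# mul() broadcasts a (3, 1) and a (1, 4) tensor to a common (3, 4)
# shape and multiplies elementwise; for a column times a row this
# equals the matrix product of the two.
a = torch.randn(3, 1)
b = torch.randn(1, 4)

elementwise = torch.mul(a, b)
matrix = torch.matmul(a, b)
assert elementwise.shape == (3, 4)
assert torch.allclose(elementwise, matrix)
```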

In [12]:
``torch.mul(tensor_val,complex(1,2))``
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-12-3ef0ee9ee191> in <module>
----> 1 torch.mul(tensor_val,complex(1,2))

RuntimeError: Complex dtype not supported.
```
In [13]:
``````c = torch.randn(2,2)
d = torch.randn(3,2)
torch.mul(c,d)
``````
```
---------------------------------------------------------------------------
RuntimeError                              Traceback (most recent call last)
<ipython-input-13-3d862a81941e> in <module>
      1 c = torch.randn(2,2)
      2 d = torch.randn(3,2)
----> 3 torch.mul(c,d)

RuntimeError: The size of tensor a (2) must match the size of tensor b (3) at non-singleton dimension 0
```

The errors shown here specify that:

`a complex number cannot be used to multiply a tensor` (in the PyTorch version used here)

`only tensors whose shapes can be broadcast together can be multiplied`

• mul() can be used when a tensor has to be multiplied by a scalar, or elementwise by another tensor of compatible shape

### torch.sort(input, dim=-1, descending=False, out=None)

• input : The input tensor
• dim : The dimension along which sorting should be performed
• descending : sort in descending order when True, ascending when False
• out : output tensor

dim=-1 tells the sort function to sort along the last dimension of the given input tensor, but if we change it to dim=0, the sort function sorts down each column instead.

In [10]:
``````tensor_rand_val = torch.randn(3,4)
tensor_rand_val
``````
Out[10]:
``````tensor([[ 1.6511,  1.1421, -1.7335, -0.3740],
[ 1.3212, -0.7966,  1.9173, -0.6006],
[ 0.0782, -0.4830, -0.1543, -0.9357]])``````
In [11]:
``torch.sort(tensor_rand_val)``
Out[11]:
``````torch.return_types.sort(
values=tensor([[-1.7335, -0.3740,  1.1421,  1.6511],
[-0.7966, -0.6006,  1.3212,  1.9173],
[-0.9357, -0.4830, -0.1543,  0.0782]]),
indices=tensor([[2, 3, 1, 0],
[1, 3, 0, 2],
[3, 1, 2, 0]]))``````
In [12]:
``````sort_tensor,index = torch.sort(tensor_rand_val)
print(sort_tensor)
``````
```
tensor([[-1.7335, -0.3740,  1.1421,  1.6511],
        [-0.7966, -0.6006,  1.3212,  1.9173],
        [-0.9357, -0.4830, -0.1543,  0.0782]])
```

We can unpack the return value of sort() into two variables because sort() returns a namedtuple of

1. the sorted values
2. the original indices each value occupied before sorting.
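Unpacking is optional: since the return value is a namedtuple, its fields can also be accessed by name. A small sketch (the tensor here is illustrative, not from the original cells):

```python
import torch

# The namedtuple returned by sort() exposes .values and .indices
result = torch.sort(torch.tensor([3., 1., 2.]))
print(result.values)   # tensor([1., 2., 3.])
print(result.indices)  # tensor([1, 2, 0])
```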
In [13]:
``````sort_tensor_col,index = torch.sort(tensor_rand_val,dim=0,descending=True)
print(sort_tensor_col)
print(index)
``````
```
tensor([[ 1.6511,  1.1421,  1.9173, -0.3740],
        [ 1.3212, -0.4830, -0.1543, -0.6006],
        [ 0.0782, -0.7966, -1.7335, -0.9357]])
tensor([[0, 0, 1, 0],
        [1, 2, 2, 1],
        [2, 1, 0, 2]])
```

We can see that torch.sort has performed a descending sort down each column.

In [14]:
``torch.sort(tensor_rand_val,dim=3,descending=True)``
```
---------------------------------------------------------------------------
IndexError                                Traceback (most recent call last)
<ipython-input-14-05bd863bf916> in <module>
----> 1 torch.sort(tensor_rand_val,dim=3,descending=True)

IndexError: Dimension out of range (expected to be in range of [-2, 1], but got 3)
```

Two errors commonly arise: a `TypeError` when an invalid type of input is given, and an `IndexError` when the value passed to the dim keyword is outside the valid range of dimensions.

• sort() function can be used when there is need for sorting of a given tensor input

### torch.mean(input,dim,keepdim=False,out=None)

• input: input tensor
• dim: the dimensions to reduce
• keepdim: whether the output retains the reduced dimension as size 1 (True or False)
• out : Output Tensor
In [6]:
``tensor_val``
Out[6]:
``````tensor([-1.3000, -1.0000, -0.7000, -0.4000, -0.1000,  0.2000,  0.5000,  0.8000,
1.1000,  1.4000,  1.7000,  2.0000,  2.3000,  2.6000,  2.9000,  3.2000,
3.5000,  3.8000,  4.1000])``````
In [8]:
``torch.mean(tensor_val)``
Out[8]:
``tensor(1.4000)``
In [9]:
``assert torch.mean(tensor_val) == sum(tensor_val)/len(tensor_val)``
• The mean of the simple tensor we created is 1.4000.
• We were also able to check that both calculations give the same result for this tensor (exact float equality like this can be fragile in general; torch.isclose is safer).
In [15]:
``tensor_rand_val``
Out[15]:
``````tensor([[ 1.6511,  1.1421, -1.7335, -0.3740],
[ 1.3212, -0.7966,  1.9173, -0.6006],
[ 0.0782, -0.4830, -0.1543, -0.9357]])``````
In [21]:
``torch.mean(tensor_rand_val,dim=0,keepdim=True)``
Out[21]:
``tensor([[ 1.0168, -0.0458,  0.0098, -0.6368]])``
In [23]:
``torch.mean(tensor_rand_val,dim=1,keepdim=False)``
Out[23]:
``tensor([ 0.1714,  0.4603, -0.3737])``
In [22]:
``torch.mean(tensor_rand_val,dim=1,keepdim=True)``
Out[22]:
``````tensor([[ 0.1714],
[ 0.4603],
[-0.3737]])``````

The mean function keeps the reduced dimension (with size 1) when keepdim=True; otherwise that dimension is removed from the result.
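One common reason to use keepdim=True is so the mean can be broadcast straight back against the original tensor, for instance to center each row around zero. A sketch (the tensor here is illustrative, not from the original cells):

```python
import torch

# keepdim=True keeps the reduced dimension as size 1, so the row
# means broadcast over the columns when subtracted from the tensor
t = torch.tensor([[1., 2., 3.],
                  [4., 5., 6.]])
row_means = torch.mean(t, dim=1, keepdim=True)  # shape (2, 1)
centered = t - row_means
print(centered)  # each row now sums to zero
```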

In [24]:
``torch.mean([1,2,3])``
```
---------------------------------------------------------------------------
TypeError                                 Traceback (most recent call last)
<ipython-input-24-0d9cf49c10b0> in <module>
----> 1 torch.mean([1,2,3])

TypeError: mean(): argument 'input' (position 1) must be Tensor, not list
```

Unlike NumPy's mean(), torch.mean() does not work on a plain list, only on a tensor.
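A plain list can simply be wrapped with torch.tensor() first. A sketch (a float dtype is used here since mean() needs a floating point input):

```python
import torch

# Convert the list to a float tensor before taking the mean
m = torch.mean(torch.tensor([1, 2, 3], dtype=torch.float32))
print(m)  # tensor(2.)
```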

torch.mean() is used to provide the mean of a tensor type matrix.

### Conclusion

The notebook covers some of the major torch functions for working with tensors, which are used extensively in deep learning. The functions covered here include:

• arange() : for creating a 1-D tensor of equally spaced values
• sigmoid() : for mapping every value in a tensor to between 0 and 1
• mul() : for multiplying a tensor by a scalar or by another tensor of compatible shape
• sort() : for sorting tensor values along a given dimension
• mean() : for finding the mean of the values in a tensor

The functions used here are of great significance in the data preprocessing part of the machine learning cycle, and they can be used in the implementation of many machine learning algorithms.
