# gaurav-singh141/pytorch-basics-2f591

a year ago
In [5]:
``````#!pip install jovian --upgrade
import jovian

``````
In [6]:
``````import torch
import torch.nn
import numpy as np

torch.manual_seed(123)``````
Out[6]:
``<torch._C.Generator at 0x7efb9caef910>``

#### Basic of Pytorch

For a scalar

In [7]:
``````# directly creating tensor
# A tensor is a number, vector, matrix or any n-dimensional array. Let's create a tensor with a single number:

t1 = torch.tensor([3.])
t1 , t1.dtype
``````
Out[7]:
``(tensor([3.]), torch.float32)``

For a Vector

In [8]:
``````#  create a tensor with a single list
list1 = [1,2,3,4]
t2 = torch.tensor(list1)
t2, t2.dtype
``````
Out[8]:
``(tensor([1, 2, 3, 4]), torch.int64)``
In [9]:
``````#  create a tensor with a numpy array
#  numpy array  ---> tensor

arr = np.array([[1, 2], [3, 4.]])
arr.shape, arr.dtype
``````
Out[9]:
``((2, 2), dtype('float64'))``
In [10]:
``````t4 = torch.tensor(arr)
t4, t4.dtype
``````
Out[10]:
``````(tensor([[1., 2.],
[3., 4.]], dtype=torch.float64), torch.float64)``````
In [11]:
``````t3 = torch.from_numpy(arr)
t3, t3.dtype
``````
Out[11]:
``````(tensor([[1., 2.],
[3., 4.]], dtype=torch.float64), torch.float64)``````
In [12]:
``````# tensor ---> numpy array
t3.numpy(), t3.dtype
``````
Out[12]:
``````(array([[1., 2.],
[3., 4.]]), torch.float64)``````

For a Matrix

In [13]:
``````# matrix
t5 = torch.tensor([[5., 6], [7, 8], [9, 10]])
t5, t5.dtype
``````
Out[13]:
``````(tensor([[ 5.,  6.],
[ 7.,  8.],
[ 9., 10.]]), torch.float32)``````
In [14]:
``t5.shape``
Out[14]:
``torch.Size([3, 2])``

Reshape | View | Resize

Diff between Reshape/View

In [15]:
``````# Returns a new tensor with the same data as the :attr:`self` tensor but of a
# different :attr:`shape`.
t5.view(2,-1)``````
Out[15]:
``````tensor([[ 5.,  6.,  7.],
[ 8.,  9., 10.]])``````
In [16]:
``````# Returns a tensor with the same data and number of elements as :attr:`self`
# but with the specified shape. This method returns a view if :attr:`shape` is
# compatible with the current shape.

t5.reshape(2,-1)``````
Out[16]:
``````tensor([[ 5.,  6.,  7.],
[ 8.,  9., 10.]])``````
In [ ]:
`` ``

In [17]:
``````q = torch.tensor(4., requires_grad=True)            # requires_grad=True set variable as a derivative which could be diffrentiabe

z = q*r+b1
s = z*r1+q1

s.backward()
``````
In [18]:
``s``
Out[18]:
``tensor(44., grad_fn=<AddBackward0>)``
In [19]:
``````q.grad, r.grad, b1.grad

``````
Out[19]:
``(tensor(6.), tensor(12.), tensor(3.))``
In [20]:
``````z.grad, r1.grad, q1.grad            # all of the variables except z and s were diff so gradient could be computed

``````
Out[20]:
``(None, tensor(13.), tensor(1.))``
In [21]:
``````# simple linear function with input(x), weight(w), bias(b)

x = torch.randn(1,3, requires_grad=True)            # 3 input values
w = torch.randn(3,1, requires_grad=True)            #  torch.randn_like(x).t() will produce same tensor weight as randn()
In [22]:
``x, w, b``
Out[22]:
``````(tensor([[-0.1115,  0.1204, -0.3696]], requires_grad=True), tensor([[-0.2404],
[-1.1969],
[-0.7550],

Apply simple linear network :

y = x*w+b

In [23]:
``y = torch.mm(x, w)+ b``
In [24]:
``y           # check the gradient function ``
Out[24]:
``````tensor([[-1.1670],
[-0.9497],
In [25]:
``y.backward()            # cal grad``
```--------------------------------------------------------------------------- RuntimeError Traceback (most recent call last) <ipython-input-25-308d2d002a01> in <module> ----> 1 y.backward() # cal grad /srv/conda/envs/notebook/lib/python3.7/site-packages/torch/tensor.py in backward(self, gradient, retain_graph, create_graph) 164 products. Defaults to ``False``. 165 """ --> 166 torch.autograd.backward(self, gradient, retain_graph, create_graph) 167 168 def register_hook(self, hook): /srv/conda/envs/notebook/lib/python3.7/site-packages/torch/autograd/__init__.py in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables) 91 grad_tensors = list(grad_tensors) 92 ---> 93 grad_tensors = _make_grads(tensors, grad_tensors) 94 if retain_graph is None: 95 retain_graph = create_graph /srv/conda/envs/notebook/lib/python3.7/site-packages/torch/autograd/__init__.py in _make_grads(outputs, grads) 32 if out.requires_grad: 33 if out.numel() != 1: ---> 34 raise RuntimeError("grad can be implicitly created only for scalar outputs") 35 new_grads.append(torch.ones_like(out)) 36 else: RuntimeError: grad can be implicitly created only for scalar outputs```
In [26]:
`````` y2 = torch.sum(y)              #  sum of elemnet of vector
y2.backward()

``````
In [27]:
``````for each in zip(x.grad, w.grad, b.grad):    # ------> dy/dx, dy/dw, dy/db
print(each)
``````
```(tensor([-0.7213, -3.5908, 0.6278]), tensor([-0.3344]), tensor([1.])) ```

Using Activation fucntion:

sigmod()

In [28]:
``````def activation(x):
""" Sigmoid activation function

Arguments
---------
x: torch.Tensor
"""
return 1/(1+torch.exp(-x))``````
In [29]:
``activation(y)``
Out[29]:
``````tensor([[0.2374],
[0.2790],

#### Neural N/w arch.

In [30]:
``````# Features are 3 random normal variables
features = torch.randn((1, 3))

# Define the size of each layer in our network
n_input = features.shape[1]     # Number of input units, must match number of input features
n_hidden = 2                    # Number of hidden units
n_output = 1                    # Number of output units

# Weights for inputs to hidden layer
W1 = torch.randn(n_input, n_hidden)
# Weights for hidden layer to output layer
W2 = torch.randn(n_hidden, n_output)

# and bias terms for hidden and output layers
B1 = torch.randn((1, n_hidden))
B2 = torch.randn((1, n_output))``````

This is single neuron arch.

Now, will use two hidden layers H1 and H2 to cal value for output layer

In [31]:
``````H1 = torch.matmul(features, W1) + B1
H1 = activation(H1)
H2 = torch.matmul(H1, W2) + B2
activation(H2)
``````
Out[31]:
``tensor([[0.2690]])``
In [ ]:
``jovian.commit()``
```[jovian] Saving notebook.. ```
In [ ]:
`` ``