Lecture 1: PyTorch Basics & Linear Regression

Live Session Link:
English: https://youtu.be/5ioMqzMRFgM
Hindi: https://youtu.be/7yRkvPxE7Hs

Lecture Date and Time: November 21, 2020
English: 9 PM IST / 7:30 AM PST | Add to calendar (Google)
Hindi: 11 AM IST | Add to calendar (Google)

Notebooks:

  1. PyTorch Basics: https://jovian.ai/aakashns/01-pytorch-basics
  2. Linear Regression: https://jovian.ai/aakashns/02-linear-regression
  3. Machine Learning Intro: https://jovian.ai/aakashns/machine-learning-intro

What to do after the lecture?

  • Run the Jupyter notebooks shared above
  • Ask and answer questions on this thread
  • Start working on Assignment 1 - All About torch.Tensor

Asking/Answering Questions:
Reply on this thread to ask questions during and after the lecture. Before asking, scroll through the thread and check if your question (or a similar one) is already present. If yes, just like it. During the lecture, we’ll answer the 8-10 questions with the most likes. The rest will be answered on the forum. If you see a question you know the answer to, please post your answer as a reply to that question. Let’s help each other learn!

2 Likes

Using the train() function in the original notebook will raise an error because it is missing the last argument (train_dl).
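For anyone hitting this: below is a minimal, self-contained sketch of the fix (the toy data and names are illustrative and mirror the notebook's style; the notebook's actual signature may differ slightly). The key point is that the DataLoader must be passed as the last argument:

import torch
import torch.nn as nn
from torch.utils.data import TensorDataset, DataLoader

# Toy data: 5 samples, 3 input features, 2 targets
inputs = torch.randn(5, 3)
targets = torch.randn(5, 2)
train_dl = DataLoader(TensorDataset(inputs, targets), batch_size=2, shuffle=True)

model = nn.Linear(3, 2)
loss_fn = nn.functional.mse_loss
opt = torch.optim.SGD(model.parameters(), lr=1e-3)

def fit(num_epochs, model, loss_fn, opt, train_dl):
    for epoch in range(num_epochs):
        for xb, yb in train_dl:           # iterate over batches
            loss = loss_fn(model(xb), yb) # forward pass + loss
            loss.backward()               # compute gradients
            opt.step()                    # update weights
            opt.zero_grad()               # reset gradients

fit(5, model, loss_fn, opt, train_dl)     # train_dl passed explicitly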

1 Like

@aakashns Sir, please explain requires_grad set to True again, and its uses?

requires_grad was difficult to understand. Could you please explain it with simple practical use cases to start with?

@lakshaysethia @iamsusmita
If you set requires_grad=True on any tensor, PyTorch will automatically track the operations performed on it and calculate gradients for it.
Now, why is this needed? A machine learning model learns to predict better using an algorithm known as backpropagation, and during backpropagation you need gradients to update the weights. This is done with the .backward() method: when you call backward() on a result tensor, PyTorch computes the gradients of that result with respect to every tensor with requires_grad=True that was used to produce it.

Example:

import torch
x = torch.tensor(1.0, requires_grad=True)
z = x ** 3          # z = x^3
z.backward()        # computes the gradient dz/dx
print(x.grad)       # dz/dx = 3 * x^2 = 3.0
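
To connect this to training: here is a minimal sketch (with illustrative values) of how the gradient stored in .grad is used for one gradient-descent weight update:

import torch

w = torch.tensor(2.0, requires_grad=True)    # parameter to learn
x, y = torch.tensor(3.0), torch.tensor(9.0)  # one data point

loss = (w * x - y) ** 2    # squared prediction error
loss.backward()            # populates w.grad with d(loss)/dw
print(w.grad)              # 2 * (w*x - y) * x = -18.0

with torch.no_grad():      # the update itself must not be tracked
    w -= 1e-2 * w.grad     # step against the gradient
    w.grad.zero_()         # reset before the next iteration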
5 Likes

Hi, where can I find the link to the recorded English lecture conducted today (21/11/20)?

It’ll be live-streamed at 9 PM IST (7:30 AM PST).

Can someone provide any material to understand the gradient of a matrix?

When I am installing torch on my personal machine, it shows an error.
Please find the screenshot.

When I try to install other libraries, I am able to install them.
Can anyone help?

1 Like

Hello Jinesh,
Installing PyTorch on a local machine requires a different command, since it depends on whether your machine has GPU support or not. Choose the right combination you need by clicking here. The size of the installation will depend on what options you choose; GPU support takes more space, so it is recommended to use Colab.
Hope this helps :slight_smile:

3 Likes

There are plenty of resources; you just need to Google them. Watch a couple of YouTube videos covering the basics.
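
Until you find a fuller reference, a small PyTorch sketch may help with intuition: for a scalar output, the gradient with respect to a matrix is simply a matrix of the same shape, holding the partial derivatives with respect to each entry.

import torch

W = torch.randn(2, 3, requires_grad=True)
x = torch.randn(3)

y = (W @ x).sum()    # scalar function of the matrix W
y.backward()         # dy/dW_ij = x_j

print(W.grad)        # shape (2, 3); each row equals x
print(W.grad.shape)  # torch.Size([2, 3])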

I would suggest installing Anaconda and then choosing parameters based on your system and OS here: PyTorch installation. Copy the conda command and run it; it will install all the necessary dependencies.
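
For reference, the CPU-only conda command around this time looked like the line below; the selector on the PyTorch site generates the exact command for your OS and CUDA version, so prefer what it gives you:

conda install pytorch torchvision cpuonly -c pytorch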

What is the advantage of PyTorch over TensorFlow?

May I ask why the training data uses dtype='float32', when earlier we used float64? Does it relate to computing speed here? Thanks.
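
In short: NumPy arrays default to float64, while PyTorch's default dtype (and the one its layers expect) is float32, which uses half the memory and is generally faster, especially on GPUs. A minimal sketch of the conversion:

import numpy as np
import torch

arr = np.random.randn(3, 2)                     # NumPy defaults to float64
t64 = torch.from_numpy(arr)                     # tensor inherits float64
t32 = torch.from_numpy(arr.astype('float32'))   # cast before converting

print(t64.dtype, t32.dtype)                     # torch.float64 torch.float32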

Both are useful and intuitive in their own ways, although AFAIK PyTorch is from Microsoft whereas TensorFlow is from Google. Moreover, PyTorch is fairly easy to use and beginner-friendly for budding users. That was my perspective; it will surely differ from person to person.

You mean PyTorch was developed by Facebook’s AI Research lab; the Windows version, however, is maintained by Microsoft.

1 Like

Ohh! Didn’t know that. Thanks :smile:

Yeah, I read about that. Also, AMD is developing ROCm to translate CUDA code into portable code, so that AMD GPUs can also use CUDA-like features.

Wow! This is getting interesting.

2 Likes

Sir, I am getting a warning; please give your suggestions:
The .grad attribute of a Tensor that is not a leaf Tensor is being accessed. Its .grad attribute won’t be populated during autograd.backward(). If you indeed want the gradient for a non-leaf Tensor, use .retain_grad() on the non-leaf Tensor. If you access the non-leaf Tensor by mistake, make sure you access the leaf Tensor instead. See github.com/pytorch/pytorch/pull/30531 for more informations.
“”"Entry point for launching an IPython kernel.