```
opt = torch.optim.SGD(model.parameters(), lr=1e-5)

# Compute gradients
loss.backward()

# Update parameters using gradients
opt.step()
```

While building a linear regression model using PyTorch built-ins, I came across this code. How are the gradients computed by `loss.backward()` linked to the optimizer here? Neither call seems to reference the other directly. The link to the Jupyter notebook cell is given here.
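
For context, here is a minimal self-contained version of the loop I'm asking about (the model, data, and loss here are placeholders I made up, not the ones from the notebook):

```python
import torch

# Hypothetical toy setup: a linear model and random regression data
model = torch.nn.Linear(3, 1)
opt = torch.optim.SGD(model.parameters(), lr=1e-5)

inputs = torch.randn(10, 3)
targets = torch.randn(10, 1)

loss = torch.nn.functional.mse_loss(model(inputs), targets)

opt.zero_grad()  # clear any stale gradients stored on the parameters
loss.backward()  # writes d(loss)/d(param) into each parameter's .grad field
opt.step()       # reads each parameter's .grad and updates it in place
```

My current understanding is that the connection is indirect: both calls operate on the same parameter tensors. `loss.backward()` fills in the `.grad` attribute of every tensor with `requires_grad=True` that the loss depends on, and `opt.step()` loops over the parameters it was constructed with (`model.parameters()`) and applies the SGD update using whatever is in each `.grad`. Is that correct, or is there a more direct link?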