```python
# Train for 100 epochs
for i in range(100):
    preds = model(inputs)
    loss = mse(preds, targets)
    loss.backward()
    with torch.no_grad():
        w -= w.grad * 1e-5
        b -= b.grad * 1e-5
        w.grad.zero_()
        b.grad.zero_()
```
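For reference, here is a minimal self-contained version of the loop above. The data, `model`, and `mse` definitions are stand-ins I have assumed, since the original post does not show them; the loop body itself matches the post, with comments on what each step touches:

```python
import torch

# Assumed stand-ins: the post does not define inputs, targets, model, or mse.
torch.manual_seed(0)
inputs = torch.randn(100, 3)
targets = inputs @ torch.tensor([[2.0], [-1.0], [0.5]]) + 3.0

w = torch.randn(3, 1, requires_grad=True)
b = torch.zeros(1, requires_grad=True)

def model(x):
    return x @ w + b

def mse(preds, targets):
    return ((preds - targets) ** 2).mean()

losses = []
for i in range(100):
    preds = model(inputs)       # forward pass uses the CURRENT w and b
    loss = mse(preds, targets)  # loss recomputed from the fresh predictions
    loss.backward()             # writes new gradients into w.grad and b.grad
    losses.append(loss.item())
    with torch.no_grad():
        w -= w.grad * 1e-5      # in-place update; w keeps requires_grad=True
        b -= b.grad * 1e-5
        w.grad.zero_()          # clear grads so they don't accumulate next epoch
        b.grad.zero_()
```

Printing `losses` shows the loss shrinking epoch by epoch: zeroing `w.grad` only clears the stored gradient buffer, while the next iteration's forward and backward pass computes a brand-new gradient from the updated (not re-randomized) weights.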

In this loop, shouldn't we calculate the loss after adjusting `w` and `b`? I'm also confused about the gradients: since we zero `w.grad` after each update, how is a new gradient computed on the next iteration when we aren't picking new random weights `w`?