%matplotlib inline
from fastai.basics import *
import jovian
We are going to learn about Stochastic Gradient Descent (SGD) with examples.
The goal here is to fit a line to a set of points.
n = 100
# Create a tensor of n rows and 2 columns, filled with ones
x = torch.ones(n, 2)
# Fill the first column with uniform random values between -1 and 1
# (the trailing underscore marks an in-place operation in PyTorch)
x[:, 0].uniform_(-1., 1.); x[:3]
tensor([[-0.0170,  1.0000],
        [-0.2915,  1.0000],
        [ 0.2883,  1.0000]])
a = tensor(3.,2); a
tensor([3., 2.])
y = x@a + torch.rand(n)
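The second column of ones is the standard bias trick: it folds the intercept into the matrix product, so x@a computes a[0]*x[:,0] + a[1]. A quick check of that identity (the assertion is our addition, not in the original):
# x@a = slope * first column + intercept * ones column
assert torch.allclose(x @ a, a[0] * x[:, 0] + a[1])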
plt.scatter(x[:,0], y)
[scatter plot of x[:,0] against y]
The goal is to find the weights a that minimize the error between the points and the line x@a. Here a is unknown. For regression, the loss/error function is the mean squared error.
def mean_squared_error(y_hat, y): return ((y_hat - y)**2).mean()
a = tensor(-1.,1)
y_hat=x@a
mean_squared_error(y_hat, y)
tensor(7.8217)
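As a sanity check (our addition, not in the original), this hand-written loss matches PyTorch's built-in F.mse_loss:
import torch.nn.functional as F
# both compute the mean of squared differences
assert torch.allclose(mean_squared_error(y_hat, y), F.mse_loss(y_hat, y))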
plt.scatter(x[:,0], y)
plt.scatter(x[:,0], y_hat)
[scatter plot: the data points y and the initial predictions y_hat]
Here the model (linear regression) and the evaluation criterion (the loss function) are specified. Now, how can we find the optimized values of a that give the best-fitting line? We wrap a in nn.Parameter so that PyTorch's autograd tracks gradients with respect to it.
a = nn.Parameter(a); a
Parameter containing:
tensor([-1., 1.], requires_grad=True)
def update():
    y_hat = x@a
    loss = mean_squared_error(y_hat, y)
    if t % 10 == 0: print(loss)    # t is the loop counter from the training loop below
    loss.backward()                # compute d(loss)/da via autograd
    with torch.no_grad():          # update the weights without tracking gradients
        a.sub_(lr * a.grad)        # gradient-descent step: a -= lr * a.grad
        a.grad.zero_()             # reset the gradient for the next iteration
lr = 1e-1  # learning rate: scales the size of each gradient step
for t in range(100): update()
tensor(7.3373, grad_fn=<MeanBackward1>)
tensor(1.3988, grad_fn=<MeanBackward1>)
tensor(0.3868, grad_fn=<MeanBackward1>)
tensor(0.1484, grad_fn=<MeanBackward1>)
tensor(0.0912, grad_fn=<MeanBackward1>)
tensor(0.0775, grad_fn=<MeanBackward1>)
tensor(0.0742, grad_fn=<MeanBackward1>)
tensor(0.0734, grad_fn=<MeanBackward1>)
tensor(0.0732, grad_fn=<MeanBackward1>)
tensor(0.0731, grad_fn=<MeanBackward1>)
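The loss plateaus around 0.073, roughly the variance of the Uniform(0,1) noise we added (1/12 ≈ 0.083). For a problem this small we can also compare against the exact least-squares solution; this check is our addition, using torch.linalg.lstsq (available in recent PyTorch versions):
# closed-form least squares: solves min_a ||x@a - y||^2 directly
a_exact = torch.linalg.lstsq(x, y.unsqueeze(-1)).solution.squeeze()
print(a_exact)  # the learned a should be close to these values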
plt.scatter(x[:,0], y)
plt.scatter(x[:,0], (x@a).detach());  # detach so matplotlib can convert the tensor (a requires grad)
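Strictly speaking, the loop above is full-batch gradient descent: every step uses all n points. True stochastic gradient descent estimates the gradient from a random mini-batch instead. A minimal sketch (the function name and batch size are ours):
def update_sgd(bs=10):
    idx = torch.randperm(n)[:bs]                    # sample a random mini-batch
    loss = mean_squared_error(x[idx] @ a, y[idx])   # loss on the mini-batch only
    loss.backward()
    with torch.no_grad():
        a.sub_(lr * a.grad)                         # same step, noisier gradient estimate
        a.grad.zero_()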
from matplotlib import animation, rc
rc('animation', html='jshtml')  # render matplotlib animations as interactive JavaScript in the notebook
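The animation setup above is presumably intended to visualize the line improving over iterations; one possible use, as a sketch reusing the update() defined earlier (the animate helper is ours):
a = nn.Parameter(tensor(-1., 1.))       # reset the weights
fig = plt.figure()
plt.scatter(x[:, 0], y, c='orange')
line, = plt.plot(x[:, 0], (x@a).detach())
plt.close()

def animate(i):
    update()                            # one gradient-descent step per frame
    line.set_ydata((x@a).detach())
    return line,

animation.FuncAnimation(fig, animate, frames=100, interval=20)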
jovian.commit()
[jovian] Saving notebook..