Visualizing Linear Regression

Linear regression is a common machine learning technique that predicts a real-valued output using a weighted linear combination of one or more input values.

The “learning” part of linear regression is figuring out a set of weights w1, w2, w3, ..., wn and a bias b that lead to good predictions. This is done by looking at lots of examples, one by one or in batches, and adjusting the weights slightly each time so the predictions improve, using an optimization technique called gradient descent.
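The idea above can be sketched in a few lines of NumPy. This is a minimal illustration of one prediction and one gradient-descent step on a single example; the weights, bias, input, target, and learning rate are all made-up values, not anything learned or taken from the notebook:

```python
import numpy as np

# Illustrative values only -- not learned parameters.
w = np.array([2.0, -1.0, 0.5])   # weights w1, w2, w3
b = 4.0                          # bias
x = np.array([1.0, 2.0, 3.0])    # one sample with three input features
y_true = 7.0                     # its target value

# Prediction: weighted linear combination of the inputs plus the bias
y_pred = np.dot(w, x) + b        # 2*1 - 1*2 + 0.5*3 + 4 = 5.5

# One gradient-descent update for the squared error L = (y_pred - y_true)**2:
# dL/dw = 2 * (y_pred - y_true) * x   and   dL/db = 2 * (y_pred - y_true)
lr = 0.01
error = y_pred - y_true
w = w - lr * 2 * error * x       # nudge each weight against its gradient
b = b - lr * 2 * error

print(np.dot(w, x) + b)          # the new prediction is closer to y_true
```

Repeating this update over many samples (or batches) is exactly what the training loop later in the notebook does, with PyTorch computing the gradients automatically.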

from IPython.display import IFrame

IFrame(src='https://cdn-images-1.medium.com/max/800/1*i1mz7cVHTMd4w85QpNB9pQ.gif', width="100%", height="300px")
# Import dependencies
import numpy as np
import matplotlib.pyplot as plt
import pandas as pd
import torch
import torch.nn as nn
from torch.autograd import Variable  # note: Variable is deprecated since PyTorch 0.4; plain tensors now track gradients

Let’s create some sample data with one feature "x" and one dependent variable "y". We’ll assume that "y" is a linear function of "x", with some noise added to account for features we haven’t considered here. Here’s how we generate the data points, or samples:

x = np.random.rand(500)
x[:5]
array([0.06833545, 0.45942712, 0.96850857, 0.01048988, 0.33533509])