Assignment 2 - Train Your First Model

Starter notebook: https://jovian.ai/aakashns/02-insurance-linear-regression
Submit here: https://forms.gle/2cauNYZn9ajgrEqN7
Submission Deadline: June 13 (extended from June 6), 8:30 AM PST / 9:00 PM IST

Overview

In this assignment, we’re going to use information like a person’s age, sex, BMI, no. of children, and smoking habit to predict the price of yearly medical bills. We will train a model with the following steps:

  1. Download and explore the dataset
  2. Prepare the dataset for training
  3. Create a linear regression model
  4. Train the model to fit the data
  5. Make predictions using the trained model
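The five steps above can be sketched end to end in PyTorch. This is a minimal illustration with random stand-in data and arbitrary hyperparameters, not the notebook's actual dataframe or solution:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import TensorDataset, DataLoader, random_split

# 1-2. Stand-in for the downloaded, numerically-encoded insurance data
inputs = torch.randn(100, 6)              # age, sex, bmi, children, smoker, region
targets = torch.randn(100, 1) * 10 + 30   # charges (toy scale, not real values)
ds = TensorDataset(inputs, targets)
train_ds, val_ds = random_split(ds, [80, 20])
train_loader = DataLoader(train_ds, batch_size=16, shuffle=True)

# 3. A linear regression model is just a single linear layer
model = nn.Linear(6, 1)

# 4. Fit with plain SGD (learning rate chosen arbitrarily here)
opt = torch.optim.SGD(model.parameters(), lr=1e-2)
for epoch in range(5):
    for xb, yb in train_loader:
        loss = F.l1_loss(model(xb), yb)
        loss.backward()
        opt.step()
        opt.zero_grad()

# 5. Predict for one example
pred = model(inputs[:1])
print(pred.shape)
```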

Steps to complete the assignment

  1. Fork & run this notebook: https://jovian.ai/aakashns/02-insurance-linear-regression
  2. Fill out all the ??? in the notebook to complete the assignment, and commit the final version to Jovian
  3. Submit your assignment here: https://forms.gle/2cauNYZn9ajgrEqN7
  4. (Optional) Replicate the model training for another dataset & write a blog post
  5. (Optional) Share your work with the community on the Share Your Work Here - Assignment 2 thread

Make sure to review the material from Lecture 2 before starting the assignment. Please reply here if you have any questions or face issues. The recommended platform for writing your blog post is medium.com .

8 Likes

@aakashns
Can you please guide me on what I am doing wrong?

1 Like

I think you did not use the DataLoader correctly. You should pass the dataset you want (in this case the validation dataset, in place of 100) and the batch size. Please try it out and let me know if this fixes your issue.
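For reference, a minimal sketch of the correct DataLoader call, using a toy stand-in for the notebook's validation split (the names `val_ds` and `val_loader` follow the notebook's conventions but the data here is random):

```python
import torch
from torch.utils.data import TensorDataset, DataLoader

# Toy stand-in for the validation split
inputs = torch.randn(20, 6)
targets = torch.randn(20, 1)
val_ds = TensorDataset(inputs, targets)

# First argument is the dataset itself (not a number like 100),
# second is the batch size
val_loader = DataLoader(val_ds, batch_size=5)

for xb, yb in val_loader:
    print(xb.shape, yb.shape)  # each batch: [5, 6] inputs, [5, 1] targets
```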

Is anyone else experiencing astronomical validation losses? My first call of the evaluate function gave a val_loss of 342229984.0, and training does make the value decrease, but it remains really high (in the tens of millions). Is this normal for this notebook, or am I going crazy?

EDIT: I used mse_loss, which, as I found in another forum thread, is probably not the best loss function for this assignment. l1_loss seems to do much better…

6 Likes

Some other errors you have made:

  1. input_cols should not include charges, which is the desired output column.
  2. Your loss function computes cross entropy, but this is a linear regression problem; use F.mse_loss in the loss function instead.
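To illustrate point 2, here is a sketch of a training step using a regression loss. The class shape loosely follows the course notebook's pattern, but the names and sizes here are assumptions, not the notebook's exact code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class InsuranceModel(nn.Module):
    def __init__(self, input_size=6, output_size=1):
        super().__init__()
        self.linear = nn.Linear(input_size, output_size)

    def forward(self, xb):
        return self.linear(xb)

    def training_step(self, batch):
        inputs, targets = batch
        out = self(inputs)
        # Continuous target (charges): use a regression loss,
        # not cross entropy, which expects class labels
        return F.mse_loss(out, targets)

model = InsuranceModel()
loss = model.training_step((torch.randn(4, 6), torch.randn(4, 1)))
print(loss)
```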
1 Like

Also, make sure that the data types of the inputs and targets are torch.float32 since I got an error from that further on too. torch.from_numpy() is not enough at cell #77; you have to run torch.tensor(<array>, dtype= torch.float32) (replace <array> with the arrays you need to convert). Hopefully this helps!
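A small demonstration of why the explicit dtype matters (the array here is a toy stand-in for the notebook's numpy arrays):

```python
import numpy as np
import torch

arr = np.array([[19, 27.9, 0], [33, 22.7, 1]])  # toy stand-in

# from_numpy keeps numpy's default float64, which clashes with
# nn.Linear's float32 weights further down the notebook
t1 = torch.from_numpy(arr)
print(t1.dtype)  # torch.float64

# Converting explicitly gives float32, matching the model
t2 = torch.tensor(arr, dtype=torch.float32)
print(t2.dtype)  # torch.float32
```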

6 Likes

What should the input and output size be? I am getting 5 and 7 and it is causing issues

1 Like

Input size should be 6, and output size should be only 1. I was stuck on the output size too: you have to wrap charges in brackets so that len(output_cols) yields 1. For input size, I’m not sure how you got 5, but for me input_cols = ['age', 'sex', 'bmi', 'children', 'smoker', 'region'], so taking len() yields 6. Hope this helps!
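A quick sketch of the bracket issue. One plausible source of the "7" above (an assumption, but it fits) is taking len() of the bare string 'charges' instead of a one-element list:

```python
input_cols = ['age', 'sex', 'bmi', 'children', 'smoker', 'region']

# Brackets keep the single target column as a list of length 1
output_cols = ['charges']

input_size = len(input_cols)    # 6
output_size = len(output_cols)  # 1

# Without brackets, len() counts the characters of the string
print(len('charges'))  # 7
```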

4 Likes

Thanks but it works with F.l1_loss(out, targets)

input size = 6 and output size = 1

Getting nan as my validation loss. I tried re-initializing the model, but the loss is still nan. Can anyone help me? The loss function I used is mse.

5 Likes

mse_loss doesn’t work well for this assignment. Try l1_loss or smooth_l1_loss, trust me it’ll give you much better results!
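To see why the three losses behave so differently on this data, here is a toy comparison on predictions and targets at the same scale as insurance charges (the numbers are illustrative, not from the dataset):

```python
import torch
import torch.nn.functional as F

out = torch.tensor([[9000.0], [31000.0]])
targets = torch.tensor([[10000.0], [30000.0]])

mse_val = F.mse_loss(out, targets).item()           # squares the errors
l1_val = F.l1_loss(out, targets).item()             # mean absolute error
smooth_val = F.smooth_l1_loss(out, targets).item()  # quadratic near 0, linear far out

print(mse_val)     # 1,000,000 -- errors of 1,000 blow up when squared
print(l1_val)      # 1,000 -- stays on the scale of the data
print(smooth_val)  # ~999.5 -- behaves like l1 for large errors
```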

7 Likes

Yes, you are right; that worked. Thanks! Although I don’t understand why mse wouldn’t work. Maybe the numbers get too large when squared in mse.

Nice! According to @allenkong221, the data is skewed, making mse less effective, so that’s why.

1 Like

‘DataFrame’ object has no attribute ‘cat’
Why am I facing this?

Could you share the code snippet? I’m guessing you are applying .cat directly to the dataframe rather than to its columns.
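A minimal sketch of the fix. The `.cat` accessor exists only on a Series, so you must select a single column with single brackets first (the toy column below is an assumption about the data, matching the dataset's smoker field):

```python
import pandas as pd

df = pd.DataFrame({'smoker': ['yes', 'no', 'yes']})

# df.cat (or df[['smoker']].cat, which is still a DataFrame) raises
# AttributeError; single brackets give a Series, which has .cat
col = df['smoker'].astype('category')
codes = col.cat.codes  # 'no' -> 0, 'yes' -> 1 (categories sorted alphabetically)
print(codes.tolist())
```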

That does make sense, but it doesn’t explain why mse would throw nan. Also, l1_loss calculates the element-wise mean absolute error, so it is also sensitive to skewed data, like mse.

Could anyone explain the difference between F.mse_loss and F.l1_loss? It works with l1_loss, but I have no justification.
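One concrete difference that plausibly explains the nan reports above: the gradient of mse grows with the error itself, while the gradient of l1 is bounded by the feature scale. With unnormalized charges in the tens of thousands, mse gradients become huge and SGD can overshoot and diverge. A toy single-weight illustration (the numbers are stand-ins at the dataset's rough scale):

```python
import torch
import torch.nn.functional as F

w = torch.zeros(1, requires_grad=True)
x = torch.tensor([50.0])           # feature scale, e.g. age/bmi
target = torch.tensor([13000.0])   # charges scale

# mse gradient: d/dw (wx - t)^2 = 2x(wx - t) -> proportional to the error
F.mse_loss(w * x, target).backward()
mse_grad = w.grad.item()           # -1,300,000: a tiny lr still takes huge steps

w.grad = None
# l1 gradient: d/dw |wx - t| = x * sign(wx - t) -> bounded regardless of error
F.l1_loss(w * x, target).backward()
l1_grad = w.grad.item()            # -50
```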

I was using [[]] for columns.