Share your work: January 2021

Use this thread to share your work for the month of January.

The best way to structure your reply is to include:

  • What would you like to share?
  • Brief Summary
  • Link to your work
  • Learnings

Read the getting started post for more instructions:
https://jovian.ai/forum/t/share-your-work-getting-started/7166/2

Hey friends,

After a few rounds of trial and error with the learning rate and number of epochs, I got a loss of just 13.68. Happy to share my work.

Again, wrong.

Geez, people, read the other posts. You can’t use the “charges” column as your input. It’s been said like a dozen times now at least.

Sorry, and thanks for the correction. I have removed ‘charges’ from the input list. The val loss is pretty high though.

You can train the model for longer, with thousands of epochs; you could go to around 4000 or 5000 with this model, I guess.

Here is my notebook on the insurance charges model. I managed to get the val_loss down to 3810. See if any further improvements can be made.

Hey, I recently took the Zero to GANs course here, and I was trying to train a CNN on this dataset of German traffic signs.

I managed to achieve 99% validation accuracy, but somehow my model performs very poorly on the test dataset. Am I missing something? Can someone from the community help me out?
Here’s my notebook link.

The dataset is the German Traffic Sign Benchmark from Kaggle.


The dataset is kinda small to be honest (the “base” examples have been augmented).

You’re creating a validation set out of the initial dataset by splitting it into actual training and validation examples.

This is ok, but have a look at the images:

The only difference between many of them is their size.

So, let’s say you split the initial dataset into 80%/20% training/validation.

But there are so many identical images, differing only in their size, that the model starts to OVERFIT, because the images in the validation set are identical to those in the training set (they just have a different size).

While dataset augmentation is a nice technique, creating the validation set from the augmented data gives the false impression that the model performs very well.

Since it’s been overfitting, it didn’t generalize at all to what the sign represents; it just memorized the training examples.
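One way to avoid that leakage (a sketch, not code from the original notebook) is a group-aware split that keeps every variant of the same base image on the same side of the split. `filenames` and `base_ids` here are hypothetical placeholders you’d derive from the dataset:

```python
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical sketch: split by base sign, not by individual file, so all
# size-variants of the same physical sign land on the same side of the split.
# In GTSRB, the file name encodes a track id shared by shots of one sign.
splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=42)
train_idx, val_idx = next(splitter.split(filenames, groups=base_ids))
train_files = [filenames[i] for i in train_idx]
val_files = [filenames[i] for i in val_idx]
```

With a split like this, a high validation accuracy actually means the model recognizes signs it hasn’t memorized.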


Hey, I have done a similar project. Please have a look; it might help.
Course project link: https://jovian.ai/rakshu-gade/traffic-sign-classification-recognition-system

Hey, this is actually the reason I posted it in the forum: this overfitting issue seems to happen with PyTorch only. I also tried it with TensorFlow and achieved a decent accuracy of 96%. Somehow the same architecture in PyTorch is learning a lot more and overfitting the data.

I have yet to figure out what extra is happening with PyTorch.


Hello sir, what you are stating might be correct. But sir, when I tried this same architecture with TensorFlow, it was able to learn properly from the same dataset with the same preprocessing.

What extra is happening in PyTorch that is making the model overfit?


I’ve tried running both models on Kaggle, and got around 96% accuracy for Keras as well, but I have literally no idea why the behavior is so different.

My suspicion is that the fit() method of Keras models has some sort of regularization built in (but I’m not used to Keras, so I can’t be sure about that). Like, you don’t even provide a learning rate or scheduler here; it just happens automagically.
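For what it’s worth, one way to see what Keras picks automagically (a sketch; `model` is a placeholder compiled Keras model, not one from these notebooks) is to print the optimizer config it fills in and compare it against the explicit PyTorch settings:

```python
from tensorflow import keras

# Hypothetical check: when you pass an optimizer by name, Keras fills in
# default hyperparameters, e.g. "adam" -> Adam(learning_rate=1e-3).
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])
print(model.optimizer.get_config())  # shows learning_rate, beta_1, beta_2, ...
```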

The scary thought is that I used PyTorch in my master’s thesis, achieving somewhat poor results. If there’s some problem inside the framework (or I’ve overlooked something important), then my whole thesis could have a completely different conclusion.

If I find some time, I’m gonna look at it again.

If anyone wants, I can add you as a collaborator to these notebooks (gonna need Kaggle usernames though).

For the course project, I have done emotion detection from audio.

My notebook:
on Jovian: https://jovian.ai/kuntal-das/emotional-speech-classification2d-resnet
on Kaggle: https://www.kaggle.com/kuntaldas599/emotional-speech-classification2d

I achieved at most 75% accuracy doing so.
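Since the notebook title mentions a 2D ResNet, the audio presumably gets converted to spectrogram images first; here is a minimal sketch of that step with torchaudio (the file name is a placeholder, not from the notebook):

```python
import torchaudio

# Hypothetical preprocessing step implied by the "2D ResNet" approach:
# turn a speech clip into a log-mel spectrogram "image" for a 2D CNN.
waveform, sample_rate = torchaudio.load("speech_clip.wav")  # placeholder file
to_mel = torchaudio.transforms.MelSpectrogram(sample_rate=sample_rate, n_mels=64)
to_db = torchaudio.transforms.AmplitudeToDB()
spec = to_db(to_mel(waveform))   # shape: (channels, n_mels, time)
batch = spec.unsqueeze(0)        # add a batch dimension for the CNN
```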

I published “Can We Make Machines Understand Human Emotions?” on Medium: https://link.medium.com/RNGsoLVb9cb
and guess what, I got an opportunity to publish it in @thestartup.
Check it out.


Check out my course project for Zero to GANs!
:arrow_down::arrow_down::arrow_down:
https://towardsdatascience.com/garbage-segregation-using-pytorch-6f2a8a67f92c

I tried to create an image classification model that segregates images of garbage items into 6 different garbage bins.
I achieved an accuracy of 80%. Any suggestions to improve accuracy are welcome!

Hi, folks,
Hope you are all safe and healthy. My personal thanks to Akaash and his team for this wonderful course during the pandemic. Here, I am sharing my project, which could be helpful to many aspirants like me.

My model detects a person’s gender (male/female) from their photo with an accuracy of 95%. Do check it out; all improvements are welcome.


Hey Rishab, good work!! Nice implementation.
I would like to suggest not using any augmentation transforms on the validation set, because they create new samples of the same data, which can give a misleading score due to the duplicated images (see the sketch below).
:slight_smile:
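Something like this (a minimal sketch with torchvision, not the original notebook’s code; the sizes are placeholders) keeps the random augmentations on the training side only:

```python
import torchvision.transforms as T

# Random augmentations for training only; the validation set gets just the
# deterministic resize/tensor steps, so its score stays comparable.
train_tfms = T.Compose([
    T.RandomHorizontalFlip(),
    T.RandomRotation(10),
    T.Resize((128, 128)),
    T.ToTensor(),
])
val_tfms = T.Compose([
    T.Resize((128, 128)),
    T.ToTensor(),
])
# Pass train_tfms to the training dataset and val_tfms to the validation one.
```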

Here is the link to my assignment 3 notebook:

assignment-03


Hi everyone,
I’ve attached the link to my submission.
I was still seeing some slight fluctuation in the accuracy, so I’m not too sure how good this model is.

But these are my final results. I’ve documented the hyperparameters, metrics and notes. You can have a look at it in the compare section.

FINAL RESULTS -

  • Final Test Loss -> 1.39335

  • Final Test Accuracy -> 50.1 percent

  • Loss Function -> Cross Entropy

  • Activation Function -> ReLU (all layers)

  • Batch Size -> 128

  • Hidden Layer Architecture -> 4 layers [2048, 1024, 512, 256]

  • Epochs and Learning Rate -> 5 epochs with a learning rate of 5e-2, then 5 epochs at 2.5e-2, and then another 10 epochs at 1e-2 (see the sketch after this list).

  • Time taken by the model -> 2 mins
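A minimal sketch of that staged schedule, assuming plain SGD with the cross-entropy loss listed above; `model` and `train_loader` are placeholders for the assignment’s own network and DataLoader:

```python
import torch
import torch.nn.functional as F

# Sketch of the staged schedule above; a fresh optimizer is built per stage
# so each stage starts with its own learning rate.
def fit(epochs, lr, model, train_loader):
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    for _ in range(epochs):
        for images, labels in train_loader:
            loss = F.cross_entropy(model(images), labels)  # cross entropy, as listed
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()

# 5 epochs @ 5e-2, then 5 @ 2.5e-2, then another 10 @ 1e-2.
for epochs, lr in [(5, 5e-2), (5, 2.5e-2), (10, 1e-2)]:
    fit(epochs, lr, model, train_loader)
```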

Here is my week 3 assignment, with a loss of 1.28 and an accuracy of 52%. Please check if anything can be improved…