Assignment 3 - Feed Forward Neural Networks

@TumAro @jhonathanortiz @tanmher09 anyone who is facing an issue on the Kaggle kernel (related to GPU/CPU) can also try Google Colab.

Hello TumAro,
By default the calculations are done on the CPU.
GPU acceleration is probably disabled in the environment you are using.
If you are using the Kaggle environment to execute your Python notebook, select “GPU” in the “Accelerator” dropdown on the right of the page.
Good work :slight_smile:


Hello guys, downloading the CIFAR dataset in Kaggle fails with an error, and in Colab it says it can’t detect a Jupyter notebook or Python, and I can’t commit, so I had to manually download the notebook and upload it.

Anybody else having these problems and maybe some fixes for future reference?

I am still not at your level, but try changing the other parameters as suggested in the course:

  • Increase or decrease the number of hidden layers
  • Increase or decrease the size of each hidden layer
  • Try different activation functions
  • Try training for a different number of epochs
  • Try different learning rates in every epoch
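The suggestions above can be tried systematically with a small sweep. A minimal sketch (the sizes, activations, and learning rates below are illustrative values, not recommendations):

```python
import itertools

import torch
import torch.nn as nn

# Illustrative options only -- tune these for your own runs.
hidden_size_options = [[256], [512, 256]]
activation_options = [nn.ReLU, nn.Tanh]
lr_options = [1e-2, 1e-3]

for sizes, act, lr in itertools.product(
        hidden_size_options, activation_options, lr_options):
    layers, prev = [], 3 * 32 * 32   # flattened CIFAR-10 image
    for h in sizes:
        layers += [nn.Linear(prev, h), act()]
        prev = h
    layers.append(nn.Linear(prev, 10))
    model = nn.Sequential(*layers)
    # ...train `model` for some epochs with learning rate `lr`,
    # then record the validation accuracy for this combination...
```

Recording the validation accuracy per combination makes it easy to see which change actually helps.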

Anyone could help please:

@shravankumar224
You have 2 options at this point.

  1. You can resize your original images to a desired size and build a model based on that. (This is the easy option.) There are different ways to resize as well: you can crop, squish or stretch, take the average of pixel values to convert to a lower-resolution image, or dynamically crop a random area for every batch.
  2. With CNNs you can design the model so that no layer has a static input size, and use things like GlobalAvgPooling at the end of the model. (This is complex; you can ignore it for now.) Also note that a tensor expects the same dimensions across a batch, so you can group images of similar resolution into batches and pad with black pixels up to the largest resolution in the batch.

Which memory is this, RAM or VRAM (GPU memory)? If it is VRAM, you will have to restart the kernel and try again; if CUDA errors appear, you will have to reduce either the batch size or the number of neurons and layers. @phuonganh-thuy-do
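One way to tell the two apart: PyTorch can report how much GPU memory its tensors are using. A quick check, assuming a standard PyTorch install:

```python
import torch

# Reports GPU memory (VRAM) used by tensors; only meaningful with CUDA.
if torch.cuda.is_available():
    used_mb = torch.cuda.memory_allocated() / 1024 ** 2
    peak_mb = torch.cuda.max_memory_allocated() / 1024 ** 2
    print(f"allocated: {used_mb:.1f} MB, peak: {peak_mb:.1f} MB")
else:
    print("CUDA not available - any memory pressure is RAM, not VRAM")
```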


Hello everyone!!

I’m having a problem with my Assignment 3.

You can check my reports. I tried using dense hidden layers, different kinds of activation functions, different hyperparameter values, and sometimes a long run of epochs, but I still don’t see the model getting accuracy better than 50%. In the answers section I saw some people getting higher accuracy doing the same thing I did, but in my case the accuracy just won’t increase…

Please help… I’m clueless about what’s wrong with it.

@jhonathanortiz Yes, you can write your blog in a language of your choice. We have no such restriction. It would be really great if you wrote in Spanish, since most blogs from this course are in English. It would help regional Deep Learning enthusiasts.
Happy blogging :smile:

@TumAro Linear layers do not require a lot of VRAM (GPU memory), which is why usage is that low. You can increase the batch size to, say, the 1024–2048 range and check it out; it will still fit and you would get more memory utilization. You can increase the layer sizes and the number of layers too.
As for the 2% clock speed, it might also increase with the changes mentioned above, but since linear layers don’t demand much compute per batch, you won’t reach high utilization here IMO.

Also make sure in your notebook that the device is cuda.
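A minimal check, and the usual pattern for moving work onto the GPU (the `model`/`images` names are placeholders for your own variables):

```python
import torch

# Pick the GPU when it's available, otherwise fall back to the CPU.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
print(device)

# Then move both the model and each batch to that device, e.g.:
# model = model.to(device)
# images, labels = images.to(device), labels.to(device)
```

If this prints `cpu` on Kaggle, the GPU accelerator is not enabled for the session.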

@tanmher09 You can try a higher lr.
@TumAro
I think just having linear layers will saturate at 50–55% accuracy.

Those results of 80%/90% are not accurate; there was a bug in Aakash’s initial notebook. You can ignore those.

Some people are adding Conv layers which will definitely increase the performance.

Edit: Now I’m seeing some good results with just linear layers.
Some things to watch out for:

  1. Make sure your valid_loss does not rise again; this might indicate overfitting.
  2. Reduce the lr after a few epochs.
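One way to reduce the lr after a few epochs is a scheduler. A sketch with `StepLR` (the step size and decay factor are illustrative, and the bare `nn.Linear` stands in for your real model):

```python
import torch
import torch.nn as nn

model = nn.Linear(3 * 32 * 32, 10)      # placeholder for your model
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
# Multiply the lr by 0.1 every 5 epochs; both numbers are illustrative.
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=5, gamma=0.1)

for epoch in range(10):
    # ...run one epoch of training here (forward, loss, backward)...
    optimizer.step()
    scheduler.step()

print(optimizer.param_groups[0]["lr"])  # ~0.001 after two decays
```

You can also watch `valid_loss` yourself and lower the lr manually, or use `ReduceLROnPlateau` to do that automatically.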

Will update after going through other notebooks; I’ll experiment more myself.

@hurly119 You might have to toggle the Internet option in Kaggle.

@njanvier You have to verify your phone number to enable the Internet option. You can see the link for verification in the right sidebar. (Unfortunately this is mandatory from Kaggle; please verify your phone number and you’ll be able to download the dataset.)


This was a great recommendation. I used it and it works. Now the highest accuracy I’ve gotten is 54%.

When I try to use Google Colab so that I can use the GPU properly here,


this is what I get… how do I fix it? Please help.

Hi Jovian Team @aakashns

I am trying to solve a binary image classification problem, implementing all the learnings we got from both lectures.

I am getting an error like this:

File “/opt/conda/lib/python3.7/site-packages/torchvision/transforms/functional.py”, line 104, in to_pil_image
raise ValueError(‘pic should be 2/3 dimensional. Got {} dimensions.’.format(pic.ndimension()))
ValueError: pic should be 2/3 dimensional. Got 1 dimensions.

Here is my code.

Please let me know where the problem is. CODE HERE

I tried to save the code from Kaggle to Jovian, but I’m not sure why the code is not showing up, even though I can see the project name created. Hence I had to give the GitHub link.
It would be great if someone could direct me to the code error location.

Regards
Shravan



Hi All,

I am working on a problem where the data is loaded in Kaggle, and I tried to save the notebook to Jovian.
I can see the project name created, but I cannot see the code being reflected.
Please let me know why this is happening. Please see the project name HERE

Please note that I started the project directly in Kaggle but did all the jovian imports later, along with setting the project name.

Regards
Shravan


Is there a way to use dynamic iterations for hidden layers/sizes, rather than hard-coding?
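Yes. One common pattern is to build the hidden layers from a list inside a custom `nn.Module`, so the architecture comes from data rather than hard-coded layer definitions. A sketch (layer sizes are illustrative):

```python
import torch
import torch.nn as nn

class MLP(nn.Module):
    """Hidden layers built from a list, so nothing is hard-coded."""
    def __init__(self, in_size, hidden_sizes, out_size):
        super().__init__()
        sizes = [in_size] + list(hidden_sizes)
        self.hidden = nn.ModuleList(
            nn.Linear(a, b) for a, b in zip(sizes, sizes[1:])
        )
        self.out = nn.Linear(sizes[-1], out_size)

    def forward(self, x):
        x = x.view(x.size(0), -1)          # flatten each image
        for layer in self.hidden:
            x = torch.relu(layer(x))
        return self.out(x)

model = MLP(3 * 32 * 32, [512, 256, 128], 10)
print(model(torch.randn(4, 3, 32, 32)).shape)  # torch.Size([4, 10])
```

Using `nn.ModuleList` (rather than a plain Python list) matters here: it registers the layers so their parameters are visible to the optimizer.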

Hello, is there any limit on the number of notebook commits from Kaggle? After a certain number, my notebook is no longer committing properly.

How many images does the training dataset contain?
Is `test_data_size` the value to fill in there?


TypeError Traceback (most recent call last)
in
2 plt.imshow(img.permute((1, 2, 0)))
3 print(‘Label (numeric):’, label)
----> 4 print(‘Label (textual):’, classes[label])

TypeError: ‘int’ object is not subscriptable
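This TypeError means `classes` holds an int rather than a list, so `classes[label]` fails. It should hold the class names; for torchvision’s CIFAR10 they are usually taken from the dataset object (e.g. `classes = dataset.classes`, an assumption about how your notebook names things). The list below is the standard CIFAR-10 ordering:

```python
# `classes` must be the list of class names, not a count of them.
classes = ['airplane', 'automobile', 'bird', 'cat', 'deer',
           'dog', 'frog', 'horse', 'ship', 'truck']
label = 3
print('Label (textual):', classes[label])  # Label (textual): cat
```

Check where `classes` is assigned in your notebook; it was likely set to a length or count by mistake.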

For some reason, when I commit on kaggle I get an empty notebook on the new version.
Anybody else experiencing this?


Yes! That has happened to me too, and it’s the first time (no trouble with the other homework assignments). In fact I downloaded six different versions before I went back to Jovian to check and saw they were all empty. My workaround was to download the notebook locally to my computer and then upload it to Jovian.

Done with Assignment 3; I’ll be glad to hear suggestions: Assignment 3
Thank you