Lecture 5: Data Augmentation, Regularization and ResNets

Session Links
Hindi: https://youtu.be/S2zNff6rgOg

Lecture Date and Time: December 19, 2020
English: 9 PM IST / 8:30 AM PST
Hindi: 2 PM IST

This lesson covers some advanced techniques like data augmentation, regularization, and adding residual layers to convolutional neural networks. We train a state-of-the-art model from scratch in just five minutes. Notebooks used in this lesson:

Asking/Answering Questions:
Reply on this thread to ask questions during and after the lecture. Before asking, scroll through the thread and check if your question (or a similar one) is already present. If yes, just like it. During the lecture, we’ll answer the 8-10 questions with the most likes. The rest will be answered on the forum. If you see a question you know the answer to, please post your answer as a reply to that question. Let’s help each other learn!

Hi!

# Data transforms (normalization & data augmentation)
stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))

How do we get these stats? Someone please help me.

5 Likes

@poduguvenu I had the same problem, and found this video: https://www.youtube.com/watch?v=y6IEcEBRZks

Hope it helps.

5 Likes

Thanks so much!
The video is helpful! :+1:

Thanks @lsalvador-ht, this is helpful, but I see different values for the standard deviation:

stats = ((0.4914, 0.4822, 0.4465), (0.2023, 0.1994, 0.2010))

So does the 2nd tuple represent something else?

Hi. The second tuple represents the standard deviation, but I guess it is different because of the random split. I used it in my course project, and it worked as I expected.

Greetings.

1 Like
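
For anyone wondering how such stats are computed: here is a minimal sketch that computes the per-channel mean and standard deviation over the whole training set, assuming CIFAR-10 loaded via torchvision (the dataset choice and path are assumptions). Note that the std it prints (roughly 0.247, 0.243, 0.261) differs from the (0.2023, 0.1994, 0.2010) quoted above; those widely copied values were apparently computed in a slightly different way, which matches the observation in this thread.

import torch
import torchvision
import torchvision.transforms as tt

# Load the CIFAR-10 training set as tensors (the download path is a placeholder)
dataset = torchvision.datasets.CIFAR10(root='./data', train=True,
                                       download=True, transform=tt.ToTensor())

# Stack all images into one tensor of shape (50000, 3, 32, 32)
images = torch.stack([img for img, _ in dataset])

# Reduce over batch, height and width, keeping the channel dimension
mean = images.mean(dim=(0, 2, 3))   # per-channel mean
std = images.std(dim=(0, 2, 3))     # per-channel standard deviation
print(mean, std)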

def denormalize(images, means, stds):
    # reshape the per-channel stats to (1, 3, 1, 1) so they broadcast
    # over the batch, height and width dimensions
    means = torch.tensor(means).reshape(1, 3, 1, 1)
    stds = torch.tensor(stds).reshape(1, 3, 1, 1)
    # invert the normalization: x = x_norm * std + mean
    return images * stds + means

I would like to ask about the denormalize function.
How do we come up with the reshape(1,3,1,1)?
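
Here is a minimal sketch of the broadcasting that makes this work (the batch shape is just an assumption for the demo). A batch of images has shape (batch_size, channels, height, width), so reshaping the 3 per-channel stats to (1, 3, 1, 1) lets PyTorch broadcast them across the batch, height and width dimensions:

import torch

images = torch.randn(16, 3, 32, 32)   # dummy normalized batch; shape is an assumption
means = torch.tensor([0.4914, 0.4822, 0.4465]).reshape(1, 3, 1, 1)
stds = torch.tensor([0.2023, 0.1994, 0.2010]).reshape(1, 3, 1, 1)

# (16, 3, 32, 32) * (1, 3, 1, 1) broadcasts the stats over every pixel per channel
denorm = images * stds + means
print(denorm.shape)                   # torch.Size([16, 3, 32, 32])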

[image: a tensor shown before and after passing through a dropout layer]

I would like to understand Dropout. As far as I know from the lecture videos, Dropout randomly deactivates a set of inputs. From the above image, how should I understand Dropout?

Dropout turns some of the values in the input tensor into 0.
The p stands for the probability of zeroing any single value of the input.
The higher it is, the more 0’s there will be.

Thank you for your reply. I understand, but why do the values change every time I invoke it? I am not able to understand the changes that happened in the image.

Because each time you use dropout, it randomly checks each value in the tensor, and decides if it should be zeroed or not.

It’s the randomness that changes the result. For one run you may experience that no changes happened (if the probability of changing the value is low enough), other times you may see half or even all of the values turned into 0 (unlikely, but in theory, could happen).

The original tensor is

[1., 2., 3., 4., 5., 6.]

and after the dropout layer it became

[1.6667, 3.3333, 0.0000, 6.6667, 8.3333, 10.0000]

I understand that a couple of elements become zero, but why are the other values changing? For example, why did 5 become 8.3333?
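
This happens because nn.Dropout in PyTorch uses “inverted dropout”: in training mode, the values that survive are scaled up by 1/(1 - p) so that the expected sum of the tensor stays the same, and in evaluation mode dropout does nothing at all. In the output above every surviving value is the original times 1.6667 = 1/0.6, which suggests p = 0.4 (so 5 * 1/0.6 = 8.3333). A minimal sketch:

import torch
import torch.nn as nn

torch.manual_seed(0)                  # arbitrary seed, just for reproducibility
x = torch.tensor([1., 2., 3., 4., 5., 6.])
dropout = nn.Dropout(p=0.4)           # p = 0.4 matches the 1/0.6 scaling above

print(dropout(x))                     # training mode: random zeros, rest scaled by 1/(1 - p)
print(dropout(x))                     # a different random mask on every call

dropout.eval()                        # evaluation mode: dropout is a no-op
print(dropout(x))                     # tensor([1., 2., 3., 4., 5., 6.])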

Hi,
I am trying to use the https://jovian.ai/aakashns/simple-cnn-starter notebook, and I’m facing an issue with the Kaggle username and key. I entered the username and API key manually, yet it still gives me an error. Can someone help me with this? TIA
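
One thing worth checking (a sketch, not a confirmed fix for that notebook’s exact error): the official Kaggle API reads credentials either from ~/.kaggle/kaggle.json or from environment variables, so you can set them before the download cell runs. The values below are placeholders:

import os

# Placeholder credentials -- replace with the ones from your Kaggle account settings
os.environ['KAGGLE_USERNAME'] = 'your_username'
os.environ['KAGGLE_KEY'] = 'your_api_key'

# Alternatively, put a kaggle.json file where the API expects it:
#   ~/.kaggle/kaggle.json  with contents  {"username": "...", "key": "..."}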

Hi! I was trying to parallelize my training over multiple GPUs using nn.DataParallel(network), but unfortunately I get the following error: torch.nn.modules.ModuleAttributeError: ‘DataParallel’ object has no attribute ‘validation_step’.

So I modified the evaluate function as such:
outputs = [model.module.validation_step(batch) for batch in val_loader]
return model.module.validation_epoch_end(outputs)

Now the training works fine, but only on 1 GPU device :((

Any help would be appreciated in this regard!
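
That behaviour is expected: nn.DataParallel only parallelizes the forward call (model(batch)); custom methods such as validation_step are reachable only through model.module, and calling them there runs the forward pass on the bare network, i.e. on a single device. A hedged sketch of one workaround: route the forward pass through the wrapper yourself and only reuse the aggregation logic from the wrapped module (accuracy here is assumed to be the helper from the course notebooks):

import torch
import torch.nn.functional as F

@torch.no_grad()
def evaluate_dp(model, val_loader):
    # model is the nn.DataParallel wrapper around the original network
    model.eval()
    outputs = []
    for images, labels in val_loader:
        out = model(images)                  # forward pass is scattered across all GPUs
        loss = F.cross_entropy(out, labels)  # same loss as in validation_step
        acc = accuracy(out, labels)          # accuracy() is assumed from the notebook
        outputs.append({'val_loss': loss.detach(), 'val_acc': acc})
    # reuse the original aggregation logic on the wrapped module
    return model.module.validation_epoch_end(outputs)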

Hey, I have the same doubt. Did you figure it out?

Yes, I did. I resized the input tensor to another size like 256, 64, etc.