I have mostly just replicated the classes and functions from Lesson 5 and run them on a new dataset (Intel Images from Kaggle), where the images are 150x150. I got the error below, so I first tried starting with `tt.Resize((36, 36))` (make sure you use two sets of brackets, since it takes a tuple; an hour of my life lost figuring that one out).
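As an aside on the tuple point: in torchvision, `tt.Resize(36)` only scales the shorter edge to 36 and keeps the aspect ratio, while `tt.Resize((36, 36))` forces both dimensions. A quick sketch (the image size is just for illustration):

```python
import torchvision.transforms as tt
from PIL import Image

img = Image.new('RGB', (150, 100))    # dummy 150x100 image

print(tt.Resize(36)(img).size)        # (54, 36): shorter edge -> 36, aspect kept
print(tt.Resize((36, 36))(img).size)  # (36, 36): both edges forced
```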
Here is the cell that triggers it:

```python
history = [evaluate(model, valid_dl)]
history
```
```
/usr/local/lib/python3.6/dist-packages/torch/autograd/grad_mode.py in decorate_context(*args, **kwargs)
     24     def decorate_context(*args, **kwargs):
     25         with self.__class__():
---> 26             return func(*args, **kwargs)
     27     return cast(F, decorate_context)
     28

in evaluate(model, val_loader)
      2 def evaluate(model, val_loader):
      3     model.eval()
----> 4     outputs = [model.validation_step(batch) for batch in val_loader]
      5     return model.validation_epoch_end(outputs)
      6

in <listcomp>(.0)
      2 def evaluate(model, val_loader):
      3     model.eval()
----> 4     outputs = [model.validation_step(batch) for batch in val_loader]
      5     return model.validation_epoch_end(outputs)
      6

in validation_step(self, batch)
     12     def validation_step(self, batch):
     13         images, labels = batch
---> 14         out = self(images)                    # Generate predictions
     15         loss = F.cross_entropy(out, labels)   # Calculate loss
     16         acc = accuracy(out, labels)           # Calculate accuracy

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

in forward(self, xb)
     32         out = self.conv4(out)
     33         out = self.res2(out) + out
---> 34         out = self.classifier(out)
     35         return out

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/container.py in forward(self, input)
    115     def forward(self, input):
    116         for module in self:
--> 117             input = module(input)
    118         return input
    119

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/module.py in _call_impl(self, *input, **kwargs)
    725             result = self._slow_forward(*input, **kwargs)
    726         else:
--> 727             result = self.forward(*input, **kwargs)
    728         for hook in itertools.chain(
    729                 _global_forward_hooks.values(),

/usr/local/lib/python3.6/dist-packages/torch/nn/modules/linear.py in forward(self, input)
     91
     92     def forward(self, input: Tensor) -> Tensor:
---> 93         return F.linear(input, self.weight, self.bias)
     94
     95     def extra_repr(self) -> str:

/usr/local/lib/python3.6/dist-packages/torch/nn/functional.py in linear(input, weight, bias)
   1688     if input.dim() == 2 and bias is not None:
   1689         # fused op is marginally faster
-> 1690         ret = torch.addmm(bias, input, weight.t())
   1691     else:
   1692         output = input.matmul(weight.t())

RuntimeError: mat1 dim 1 must match mat2 dim 0
```
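As far as I can tell, this error just means the final `Linear` layer received a different number of flattened features than it was constructed with. A minimal sketch of how it arises (the 512 and 2048 are made-up numbers for illustration):

```python
import torch
import torch.nn as nn

fc = nn.Linear(512, 10)    # layer built expecting 512 input features
x = torch.randn(1, 2048)   # flattened conv output from a larger image
fc(x)                      # RuntimeError: mat1 dim 1 must match mat2 dim 0
```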
I ran the following:

```python
for images, labels in train_dl:
    print(images.size())
    break
```

which prints `torch.Size([500, 3, 32, 32])`, the same as the original from Lesson 5.
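Since the traceback starts in `evaluate(model, valid_dl)`, the same check on the validation loader is probably worth running too:

```python
for images, labels in valid_dl:
    print(images.size())   # anything other than (batch, 3, 32, 32) would
    break                  # point at the validation transforms
```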
I have made no changes to the original code from Lesson 5 other than getting the files loaded, and adding the `Resize` to the training transforms:
```python
train_tfms = tt.Compose([tt.Resize((32, 32)),
                         tt.RandomCrop(32, padding=4, padding_mode='reflect'),
                         tt.RandomHorizontalFlip(),
                         tt.RandomRotation([-15, 15]),
                         tt.RandomResizedCrop(32),
                         tt.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
                         tt.ToTensor(),
                         tt.Normalize(*stats, inplace=True)])
```
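One thing I am unsure about is the validation pipeline: if `valid_tfms` was copied from Lesson 5 unchanged (just `ToTensor` and `Normalize`), it has no `Resize`, so validation images would still be 150x150. A sketch of what I assume a matching pipeline would look like:

```python
valid_tfms = tt.Compose([tt.Resize((32, 32)),   # keep val images the same size as training
                         tt.ToTensor(),
                         tt.Normalize(*stats)])
```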
-- Updated --

I commented out the following augmentations and it worked; the model trained to 82% accuracy:
```python
# tt.RandomCrop(32, padding=4, padding_mode='reflect'),
# tt.RandomHorizontalFlip(),
# tt.RandomRotation([-15, 15]),
# tt.RandomResizedCrop(32),
# tt.ColorJitter(brightness=0.1, contrast=0.1, saturation=0.1, hue=0.1),
```
I then changed the `Resize` to `(150, 150)`, which is needed to clean up about 40 images that are not 150x150, and I am back to `RuntimeError: mat1 dim 1 must match mat2 dim 0`.
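My working theory is that the classifier head ends in a `Linear` layer sized for 32x32 inputs, so any other resolution changes the flattened feature count and trips the matmul. A hedged sketch (not the course code; the 512-channel figure is an assumption) of a head that is independent of input size, using `nn.AdaptiveMaxPool2d`:

```python
import torch
import torch.nn as nn

# Sketch only: AdaptiveMaxPool2d(1) collapses any HxW feature map to 1x1,
# so the Linear layer always sees the same feature count.
head = nn.Sequential(nn.AdaptiveMaxPool2d(1),   # (batch, 512, H, W) -> (batch, 512, 1, 1)
                     nn.Flatten(),              # -> (batch, 512)
                     nn.Linear(512, 6))         # Intel Images has 6 classes

print(head(torch.randn(2, 512, 1, 1)).shape)    # map from a 32x32 input
print(head(torch.randn(2, 512, 5, 5)).shape)    # larger map from a 150x150 input
# both: torch.Size([2, 6])
```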