I used the Resize function from torchvision.transforms to convert the images in my dataset from size 256 * 256 to 64 * 64,
but after the conversion I found that some of the images are of size 64 * 74.
Can someone tell me why this is happening?
Docs are your friend
Your notebook has tt.Resize(64), which means only the smaller edge is matched to this size (which also means this dataset contains non-square images for some reason) and the other edge is resized to keep the aspect ratio.
Try tt.Resize((64, 64))
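
A minimal sketch of the difference, assuming a hypothetical non-square 296 * 256 input (roughly the ratio that would produce a 64 * 74 output), not an image from your actual dataset:

```python
from PIL import Image
import torchvision.transforms as tt

# Hypothetical non-square image: width=296, height=256
img = Image.new("RGB", (296, 256))

# Resize(64): only the smaller edge (256) becomes 64; the aspect ratio is kept,
# so the longer edge ends up at about 64 * 296 / 256 = 74.
print(tt.Resize(64)(img).size)        # (74, 64)

# Resize((64, 64)): both edges are forced to 64, ignoring the aspect ratio.
print(tt.Resize((64, 64))(img).size)  # (64, 64)
```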