How do we know what goes to the device when I call to(device)?

I am having trouble figuring out what to put on the device and how.

I noticed that everything I declare in the constructor goes to the device when I call to(device):
class VariationalAutoEncoder(nn.Module):
    def __init__(self, latent_vector_size):
        super().__init__()
        self.latent_vector_size = latent_vector_size
        self.encoder = nn.Sequential(
            # ... (encoder layers omitted)
        )

        self.laten_linear = nn.Linear(3136, 2)

In the above, both encoder and laten_linear went to the device.
But when I define this method, it doesn't go to the device:

def sample_z(self, mean, logvar):
    stddev = torch.exp(0.5 * logvar)
    noise = torch.randn(stddev.size())
    return (noise * stddev) + mean

How do I put this method on the device?
Also is there any rule of thumb as to what goes on the device?

If you want to train on a GPU (or any other device), then the model, along with every input, target, and any additional tensor used in the calculations, must be on that device.
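A minimal sketch of that pattern (the model and tensor names here are illustrative, not from the thread):

```python
import torch
import torch.nn as nn

# Pick the target device once; fall back to CPU when no GPU is available.
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Moving the module moves every parameter/buffer registered in __init__.
model = nn.Linear(4, 2).to(device)

# Inputs and targets are created on the CPU by default and must be moved too.
x = torch.randn(8, 4).to(device)
y = torch.zeros(8, 2).to(device)

out = model(x)                        # the output lands on the same device
loss = nn.functional.mse_loss(out, y)
```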

In the above method, no explicit conversion is needed, because the mean and logvar values are outputs of modules that are already on the target device. This means they are already there, so there's no need to convert them.
The same applies to the reparameterization tensors computed inside this method: they are calculated from tensors on the target device, so they end up there as well.
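A quick standalone snippet to see that propagation (the tensor values here are placeholders):

```python
import torch

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

mean = torch.zeros(3, device=device)
logvar = torch.zeros(3, device=device)

# Operations on device tensors produce device tensors automatically,
# so no explicit conversion is needed for intermediate results.
stddev = torch.exp(0.5 * logvar)
print(stddev.device == mean.device)  # True
```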

Oh ok, that makes sense.
I think I was just confused and tried to put the sample_z method itself on the device, but it's really the tensors that need to be there.

After adding some print statements I found that the randn tensor is not on the device, hence the issue.

def sample_z(self, mean, logvar):
    stddev = torch.exp(0.5 * logvar)
    print(f"stddev {stddev.is_cuda}")
    noise = torch.randn(stddev.size())
    print(f"noise {noise.is_cuda}")
    return (noise * stddev) + mean

stddev True
noise False

After putting the tensor on the device, it worked.
Thanks a lot, Sebastian.
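For reference, one way to put that noise tensor on the device is to pass device= to randn, so the noise is created where stddev already lives (written here as a free function for brevity):

```python
import torch

def sample_z(mean, logvar):
    stddev = torch.exp(0.5 * logvar)
    # Create the noise directly on stddev's device instead of the CPU default.
    noise = torch.randn(stddev.size(), device=stddev.device)
    return (noise * stddev) + mean
```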

Ah yes, missed that randn().

It would be better (cleaner?) to use randn_like(). That takes care of matching both the size and the device of stddev.


Ah yes, looks much cleaner now. Thanks for the tip :smile:

def sample_z(self, mean, logvar):
    stddev = torch.exp(0.5 * logvar)
    noise = torch.randn_like(stddev)
    return (noise * stddev) + mean