Fit function error ---> RuntimeError: Found dtype Double but expected Float

RuntimeError Traceback (most recent call last)

in ()
1 epochs = 5
2 lr = 0.1
----> 3 history1 = fit(epochs, lr, model, train_loader, val_loader)

2 frames

in fit(epochs, lr, model, train_loader, val_loader, opt_func)
10 for batch in train_loader:
11 loss = model.training_step(batch)
---> 12 loss.backward()
13 optimizer.step()
14 optimizer.zero_grad()

/usr/local/lib/python3.7/dist-packages/torch/ in backward(self, gradient, retain_graph, create_graph, inputs)
243 create_graph=create_graph,
244 inputs=inputs)
--> 245 torch.autograd.backward(self, gradient, retain_graph, create_graph, inputs=inputs)
247 def register_hook(self, hook):

/usr/local/lib/python3.7/dist-packages/torch/autograd/ in backward(tensors, grad_tensors, retain_graph, create_graph, grad_variables, inputs)
145 Variable.execution_engine.run_backward(
146 tensors, grad_tensors, retain_graph, create_graph, inputs,
--> 147 allow_unreachable=True, accumulate_grad=True) # allow_unreachable flag

RuntimeError: Found dtype Double but expected Float

Link to my model

Link to the 'fit' function where this error occurred

The inputs and targets are already in float format, as you can see in the screenshot here. So why is the dtype coming out as Double?

Please let me know if you need any more info.

You can write torch.from_numpy(inputs_array.astype('float32')) for both the inputs and the targets. (Call .astype() on the NumPy array before from_numpy, since tensors don't have an astype method; alternatively, call .float() on the tensor.)
float64 is what PyTorch calls Double, whereas float32 is Float.
Note: astype('float') still gives float64, so specify .astype('float32') explicitly.
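A minimal sketch of the dtype behaviour (the array here is made-up toy data, not the original dataset):

```python
import numpy as np
import torch

# NumPy creates float64 ("Double") arrays by default,
# and torch.from_numpy preserves that dtype:
inputs_array = np.array([[1.0, 2.0], [3.0, 4.0]])
print(inputs_array.dtype)                    # float64
print(torch.from_numpy(inputs_array).dtype)  # torch.float64 -- the "Double" in the error

# Note that .astype('float') is a no-op here, because 'float' maps to float64.
print(inputs_array.astype('float').dtype)    # still float64

# Fix: cast to float32 before (or after) converting to a tensor.
inputs = torch.from_numpy(inputs_array.astype('float32'))
# equivalently: torch.from_numpy(inputs_array).float()
print(inputs.dtype)                          # torch.float32
```

Since the model's weights are float32 by default, the loss ends up mixing dtypes unless the data is cast like this.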


Got it, thank you. 🙂

But now the val_loss is coming out as nan!
(Printed after history1 = fit(...).)

You might be using mse_loss as the loss function with a high learning rate. If you use mse_loss, keep the learning rate below 1e-4; otherwise, use a different loss function.
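As a sketch of what a stable setup looks like, assuming a simple regression problem with made-up toy data (the names here are placeholders, not the original model):

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)

# Hypothetical toy regression data standing in for the real dataset.
inputs = torch.randn(64, 3)
targets = inputs @ torch.tensor([[2.0], [-1.0], [0.5]]) + 0.1

model = torch.nn.Linear(3, 1)
# With mse_loss, a large learning rate (e.g. 0.1) can make the updates
# overshoot until the loss overflows to inf and then nan on some datasets;
# a small lr such as 1e-4 keeps the updates stable.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4)

for _ in range(100):
    loss = F.mse_loss(model(inputs), targets)
    loss.backward()
    optimizer.step()
    optimizer.zero_grad()

print(torch.isfinite(loss).item())  # loss stays finite -- no nan
```

If the loss is still nan at a small learning rate, it is worth checking the data itself for nan/inf values before training.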


Thank you, it worked!
