import torch

a = torch.tensor([5.0, 3.0], requires_grad=True)
b = torch.tensor([1.0, 4.0])
ab = (a + b) ** 2  # ab is a 2-element tensor, not a scalar
ab.backward()
RuntimeError                              Traceback (most recent call last)
in ()
----> 1 ab.backward()

2 frames
/usr/local/lib/python3.6/dist-packages/torch/autograd/__init__.py in _make_grads(outputs, grads)
     48             if out.requires_grad:
     49                 if out.numel() != 1:
---> 50                     raise RuntimeError("grad can be implicitly created only for scalar outputs")
     51                 new_grads.append(torch.ones_like(out, memory_format=torch.preserve_format))
     52             else:

RuntimeError: grad can be implicitly created only for scalar outputs
Why can the gradient be implicitly created only for scalar outputs?
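For reference, I can avoid the error in two ways, either by reducing the output to a scalar before calling backward(), or by keeping the vector output and passing an explicit gradient argument. A minimal sketch of both (the printed values match what I expect from d/da of (a+b)^2, i.e. 2*(a+b)):

import torch

a = torch.tensor([5.0, 3.0], requires_grad=True)
b = torch.tensor([1.0, 4.0])

# Workaround 1: reduce the output to a scalar, so backward() needs no argument
ab = ((a + b) ** 2).sum()
ab.backward()
print(a.grad)  # tensor([12., 14.])

# Workaround 2: keep the vector output, but supply the gradient explicitly
a.grad = None  # clear the gradient accumulated above
ab = (a + b) ** 2
ab.backward(gradient=torch.ones_like(ab))
print(a.grad)  # tensor([12., 14.])

Both run without the error, so I would like to understand why the implicit case is restricted to scalar outputs.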