This error message indicates that you cannot call the numpy() method directly on a tensor that requires gradient tracking (requires_grad=True). To convert such a PyTorch tensor into a NumPy array, you first need to detach the tensor from the computational graph using the detach() method.

In this tutorial, we will explain why this error occurs and how to solve it using tensor.detach() and the torch.no_grad() context manager.

PyTorch Autograd

The tensor is the fundamental data structure in PyTorch. Tensors are the generalization of vectors and matrices to an arbitrary number of dimensions.

PyTorch tensors can perform very fast operations on GPUs, distribute operations across multiple devices or machines, and keep track of the graph of computations that created them.

PyTorch tensors remember where they come from, and they can automatically provide the chain of derivatives of such operations with respect to their inputs. Let’s initialize a parameter tensor:

import torch

a = torch.tensor([1, 2, 3.], requires_grad=True)
print(a.numpy())  # raises RuntimeError: a is tracked by autograd

The requires_grad=True argument tells PyTorch to track the entire family tree of tensors resulting from operations on a. To get a NumPy array out of a tensor, we would normally just call tensor.numpy(), which returns a NumPy multidimensional array of the right size, shape, and numerical type. Here, however, the call fails:

RuntimeError: Can't call numpy() on Tensor that requires grad. Use tensor.detach().numpy() instead.
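Before fixing the error, it helps to see what autograd is actually protecting. The following snippet is a small illustrative sketch (the derived tensor b and the a * 2 operation are not part of the example above), showing the graph PyTorch records and the derivatives it can provide:

import torch

a = torch.tensor([1, 2, 3.], requires_grad=True)
b = (a * 2).sum()  # b is part of the graph rooted at a
print(b.grad_fn)   # <SumBackward0 ...>: the recorded operation

b.backward()       # walk the graph backwards
print(a.grad)      # tensor([2., 2., 2.]): db/da

Converting a to NumPy would sidestep this bookkeeping, which is exactly why PyTorch refuses to do it silently.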

Using detach()

You can’t call .numpy() on a tensor if that tensor is part of the computation graph. You first have to detach it from the graph.

print(a.detach().numpy())  # [1. 2. 3.]

tensor.detach() returns a new tensor that shares the same underlying storage but doesn’t track gradients (its requires_grad is False), so you can then call .numpy() safely. Just replace tensor.numpy() with tensor.detach().numpy().
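One caveat follows from that shared storage: the NumPy array returned by detach().numpy() still points at the same memory as the original tensor, so in-place changes on either side show up on the other. A small sketch (the names arr and safe are just illustrative):

import torch

a = torch.tensor([1, 2, 3.], requires_grad=True)
arr = a.detach().numpy()  # shares memory with a

arr[0] = 100.  # mutating the array mutates the tensor too
print(a)       # tensor([100., 2., 3.], requires_grad=True)

safe = a.detach().clone().numpy()  # clone first for an independent copy

If you need an array that cannot silently modify your tensor, clone it (or copy on the NumPy side) before converting.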

Using no_grad()

PyTorch also allows us to switch off autograd when we don’t need it, using the torch.no_grad context manager. Note that no_grad does not clear the requires_grad flag of an existing tensor; it only stops new operations from being tracked. Calling a.numpy() directly inside the block would therefore still raise the error, but any tensor computed inside the block does not require grad and can be converted:

with torch.no_grad():
    b = a * 2         # b is created while autograd is switched off
    print(b.numpy())  # [2. 4. 6.] works because b does not require grad

Here we perform the computation inside a no_grad context using Python’s with statement. Within the with block, the PyTorch autograd mechanism looks away: tensors created there come out with requires_grad=False, so calling .numpy() on them is safe.
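A typical place this pattern shows up is evaluation or inference code, where you run a model without gradients and hand the outputs to NumPy. A minimal sketch, with a placeholder nn.Linear model and random input standing in for real data:

import torch
import torch.nn as nn

model = nn.Linear(3, 1)  # placeholder model
x = torch.randn(4, 3)    # placeholder batch

with torch.no_grad():
    out = model(x)  # created with autograd disabled

print(out.numpy())  # safe even outside the block: out.requires_grad is False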

Related Posts

What is tensor in PyTorch?

PyTorch Contiguous Tensor

How to convert PyTorch tensor to NumPy array?

Fix: can’t convert cuda device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.

Difference between clone() vs detach() copy.deepcopy() in PyTorch

Difference between “tensor.detach()” vs “with torch.no_grad()”

Create NumPy array from PyTorch Tensor using detach().numpy()

How to copy PyTorch Tensor using clone, detach, and deepcopy?