NumPy is the most popular multidimensional array library in Python; it has arguably become the lingua franca of data science. PyTorch tensors interoperate seamlessly with NumPy: tensors can be converted to NumPy arrays and vice versa very efficiently. By doing so, we can take advantage of the huge swath of functionality in the wider Python ecosystem that has built up around the NumPy array type.
To get a NumPy array out of our tensor, we just call tensor.numpy(), which returns a NumPy multidimensional array of the right size, shape, and numerical type:

x_np = x_ten.numpy()
print(type(x_np))  # <class 'numpy.ndarray'>
The returned array shares its underlying buffer with the tensor's storage. This means the numpy method is effectively free, as long as the data sits in CPU RAM. It also means that modifying the NumPy array changes the originating tensor. If the tensor is allocated on the GPU, PyTorch instead makes a copy of the tensor's content into a NumPy array allocated on the CPU.
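We can see the buffer sharing in action with a short sketch (the variable names here are illustrative):

```python
import torch

t = torch.ones(3)
a = t.numpy()   # shares the same underlying memory as t (CPU tensors only)
a[0] = 10.0     # mutate through the NumPy view...
print(t)        # ...and the originating tensor reflects the change
```

No copy occurs in either direction here; both names refer to the same block of CPU memory.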
Conversely, we can obtain a PyTorch tensor from a NumPy array, which uses the same buffer-sharing strategy:
x_ten = torch.from_numpy(x_np)
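The sharing works the same way in this direction too. A minimal sketch (variable names are illustrative):

```python
import numpy as np
import torch

x_np = np.ones(3)
x_ten = torch.from_numpy(x_np)  # zero-copy: the tensor wraps the NumPy buffer
x_np[1] = 5.0                   # mutating the array...
print(x_ten)                    # ...is visible through the tensor
```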
While the default numeric type in PyTorch is 32-bit floating-point, for NumPy it is 64-bit. Since we usually want to use 32-bit floating-point, we need to make sure we have tensors of dtype torch.float after converting.
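A quick sketch of the dtype mismatch and one way to handle it (calling .float() here is one common option):

```python
import numpy as np
import torch

x_np = np.ones(3)               # NumPy defaults to float64
x_ten = torch.from_numpy(x_np)
print(x_ten.dtype)              # torch.float64, inherited from the array

x_ten = torch.from_numpy(x_np).float()  # convert to torch.float32
print(x_ten.dtype)              # torch.float32
```

Note that the conversion to torch.float32 allocates a new tensor, so the converted tensor no longer shares a buffer with the original NumPy array.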