NumPy is the most widely used numerical computing library in Python. It provides functionality and an API similar to PyTorch's, but it does not support GPU execution or automatic gradient calculation, both of which are critical for deep learning.
PyTorch also supports the vast majority of methods and operators supported by NumPy, but PyTorch tensors have additional capabilities.
One major capability is that tensors can live on the GPU, in which case their computations are optimized for the GPU and can run much faster. In addition, PyTorch can automatically calculate derivatives of these operations, including arbitrary combinations of operations.
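As a minimal sketch of the automatic differentiation mentioned above (the values here are illustrative, not from the original text):

```python
import torch

# PyTorch tracks operations on tensors created with requires_grad=True
# and can compute derivatives through them automatically.
x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()   # y = x0^2 + x1^2
y.backward()         # populates x.grad with dy/dx = 2*x
print(x.grad)        # tensor([4., 6.])
```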
Convert PyTorch tensor to NumPy array
PyTorch tensors can be converted to NumPy arrays and vice versa very efficiently. This zero-copy interoperability with NumPy arrays is due to the storage system working with the Python buffer protocol. To get a NumPy array out of our tensor, we just call:
import torch

a_ten = torch.tensor([1, 2, 3.], device='cuda:0')
a_np = a_ten.numpy()
TypeError: can't convert cuda:0 device type tensor to numpy. Use Tensor.cpu() to copy the tensor to host memory first.
The error message indicates that a_ten is on the GPU and cannot be directly converted to a NumPy array. To fix this, we first need to copy the tensor to the CPU.
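The fix can be sketched as follows; the snippet falls back to the CPU when no GPU is available, so it also runs on CUDA-less machines (the fallback is an addition for portability, not part of the original example):

```python
import torch

# Use the GPU if one is present; otherwise stay on the CPU.
device = 'cuda:0' if torch.cuda.is_available() else 'cpu'
a_ten = torch.tensor([1, 2, 3.], device=device)

# .cpu() copies the tensor to host memory, after which .numpy() works.
a_np = a_ten.cpu().numpy()
print(a_np)  # [1. 2. 3.]
```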
Moving tensors to the CPU
PyTorch tensors can be stored on either kind of processor: CPU or GPU. Every PyTorch tensor can be transferred from CPU to GPU or from GPU to CPU. When a tensor lives on the GPU, all operations performed on it are carried out using GPU-specific routines that ship with PyTorch.
A PyTorch tensor has the notion of a device, which is where on the computer the tensor's data is placed. Here is how we can create a tensor on the GPU by passing the corresponding argument to the constructor:
a_ten = torch.tensor([0.52,0.42,0.53],device='cuda:0')
We can copy a tensor created on the GPU onto the CPU using the to method:
a_cpu = a_ten.to(device='cpu')
Doing so returns a new tensor with the same numerical data, but stored in regular system RAM rather than in the GPU's memory. Now that the data is on the CPU, it can be converted to a NumPy array with numpy().
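The zero-copy behavior mentioned earlier can be demonstrated on a CPU tensor; because the tensor and the array share memory, writing through one is visible through the other (a small sketch, with values chosen for illustration):

```python
import torch

a_cpu = torch.tensor([0.52, 0.42, 0.53])
a_np = a_cpu.numpy()   # zero-copy: shares the tensor's underlying storage

a_np[0] = 1.0          # modify through the NumPy view...
print(a_cpu[0])        # ...and the tensor reflects the change: tensor(1.)
```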
We can also use the shorthand methods .cpu() and .cuda() instead of the to method to achieve the same goal:
ten_gpu = ten.cuda() #Defaults to GPU index 0
ten_gpu = ten.cuda(0)
ten_cpu = ten_gpu.cpu()
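Putting the pieces together, a device-agnostic round trip might look like the sketch below; the CUDA guard is an addition so the snippet also runs on machines without a GPU:

```python
import torch

ten = torch.tensor([0.52, 0.42, 0.53])

if torch.cuda.is_available():
    ten_gpu = ten.cuda()      # equivalent to ten.to(device='cuda:0')
    ten_back = ten_gpu.cpu()  # equivalent to ten_gpu.to(device='cpu')
else:
    ten_back = ten            # no GPU: the tensor is already on the CPU

print(ten_back.device)        # cpu
print(ten_back.numpy())      # safe: the tensor is in host memory
```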