Deep Learning takes data in some form, like images or text, and produces data in another form, like labels, numbers, or more images or text. It transforms the data from one representation to another. This transformation is driven by extracting commonalities from a series of examples.
You need to convert input data into floating-point numbers, and PyTorch deals with floating-point numbers using tensors. Tensors are the fundamental data structure in PyTorch: a tensor is an array, that is, a data structure that stores a collection of numbers that are individually accessible by an index and that can be indexed with multiple indices.
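As a quick sketch of that definition, here is a small 2D tensor whose elements can be reached with a single index or with a pair of indices:

```python
import torch

# A 2D tensor: 3 rows of 2 floating-point numbers each
points = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]])

# Indexing with one index returns a row...
print(points[0])      # tensor([4., 1.])
# ...while multiple indices reach an individual element
print(points[0, 1])   # tensor(1.)
```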
Before we begin the process of converting a non-contiguous tensor into a contiguous one, we must first have a solid understanding of how PyTorch handles and stores tensors in memory.
Python lists are collections of Python objects that are individually allocated in memory. PyTorch tensors, on the other hand, are views over contiguous memory blocks containing unboxed numeric types rather than Python objects. With each element a 32-bit (4-byte) float, storing a 1D tensor of 10 float numbers requires exactly 40 contiguous bytes, plus a small amount of metadata such as the dimensions and the numeric type.
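We can check the 40-byte figure directly, since PyTorch reports the per-element size and element count of a tensor:

```python
import torch

# A 1D tensor of 10 float32 values
t = torch.zeros(10, dtype=torch.float32)

print(t.element_size())                 # 4 bytes per float32 element
print(t.element_size() * t.nelement())  # 40 bytes of contiguous data
```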
Values in tensors are allocated in contiguous chunks of memory managed by torch.Storage instances. A storage is a one-dimensional array of numerical data: a contiguous block of memory holding numbers of a given type, such as float32 or int64. A PyTorch Tensor instance is a view of such a Storage instance, capable of indexing into that storage using an offset and per-dimension strides.
Because the underlying memory is allocated only once, creating alternate tensor views of the data can be done quickly regardless of the size of the data managed by the Storage instance. Let’s see how indexing into the storage works in practice with our 2D input. The storage for a given tensor is accessible using the .storage() method:
```python
input = torch.tensor([[3.0, 7.0], [6.0, 8.0], [2.0, 9.0]])
input.storage()
# 3.0 7.0 6.0 8.0 2.0 9.0
# [torch.FloatStorage of size 6]
```
Even though the tensor reports itself as having three rows and two columns, the storage under the hood is a contiguous array of size 6. In this sense, the tensor just knows “how to translate a pair of indices into a location in the storage”.
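That translation from a pair of indices to a storage location uses the tensor’s storage offset and strides. A minimal sketch of the arithmetic, using the same input tensor:

```python
import torch

input = torch.tensor([[3.0, 7.0], [6.0, 8.0], [2.0, 9.0]])

# The element at row i, column j lives in the flat storage at:
#   storage_offset + i * stride(0) + j * stride(1)
i, j = 2, 1
loc = input.storage_offset() + i * input.stride(0) + j * input.stride(1)

print(loc)                   # 5
print(input.storage()[loc])  # 9.0, the same value as input[2, 1]
```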
Let’s take our input tensor and transpose it.
```python
input_t = input.t()
input.is_contiguous()    # True
input_t.is_contiguous()  # False
```
In the example above, input is contiguous but input_t is not, because its memory layout is different. When you call t() (a shorthand for transpose() on a 2D tensor), PyTorch doesn’t generate a new tensor with a new layout; it just modifies metadata in the Tensor object so that the offset and strides describe the desired new shape.
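We can see that metadata change directly by printing the strides before and after the transpose:

```python
import torch

input = torch.tensor([[3.0, 7.0], [6.0, 8.0], [2.0, 9.0]])
input_t = input.t()

# Same storage underneath, but the per-dimension strides are swapped:
print(input.stride())    # (2, 1): moving down a row skips 2 elements
print(input_t.stride())  # (1, 2): moving along a row now jumps through storage
print(input_t.shape)     # torch.Size([2, 3])
```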
We can easily verify that the two tensors share the same storage:

```python
print(id(input.storage()) == id(input_t.storage()))  # True
```

But trying to view the transposed tensor fails, because view() requires a contiguous layout:

```python
input_t.view(3, 2)
# RuntimeError: view size is not compatible with input tensor's size and stride
```
There are a few operations on tensors, like narrow(), view(), expand(), and transpose(), that do not change the contents of a tensor but change the way the data is organized. Some of them, view() among them, only work on contiguous tensors. In that case, PyTorch will throw an informative exception, like the one above, and require us to call contiguous() explicitly. It’s worth noting that calling contiguous() will do nothing if the tensor is already contiguous.
We can obtain a new contiguous tensor from a non-contiguous one using the contiguous() method. The content of the tensor stays the same, but the stride changes, and so does the storage: the elements are reshuffled so that they are laid out row by row in the new storage, and the stride is updated to reflect the new layout.
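Continuing with the transposed tensor from before, a short sketch of what contiguous() produces:

```python
import torch

input = torch.tensor([[3.0, 7.0], [6.0, 8.0], [2.0, 9.0]])
input_t = input.t()               # non-contiguous view, stride (1, 2)

input_c = input_t.contiguous()    # copy with a row-major layout
print(input_c.is_contiguous())    # True
print(input_c.stride())           # (3, 1)
print(list(input_c.storage()))    # [3.0, 6.0, 2.0, 7.0, 8.0, 9.0]
input_c.view(3, 2)                # view() now succeeds
```

Note how the new storage holds the elements of the transposed tensor row by row, unlike the original storage.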
contiguous vs non-contiguous tensor
A tensor whose values are laid out in the storage starting from the rightmost dimension onward (that is, moving along rows for a 2D tensor) is defined as contiguous. Contiguous tensors are convenient because we can visit them efficiently in order without jumping around in the storage, which improves data locality and therefore performance, given the way memory access works on modern CPUs. How much this helps, of course, depends on the order in which an algorithm visits the data.
When you call contiguous(), it actually makes a copy of the tensor such that the order of its elements in memory is the same as if it had been created from scratch with the same data.
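We can confirm both behaviors, the copy for a non-contiguous tensor and the no-op for an already contiguous one, by comparing memory addresses:

```python
import torch

input = torch.tensor([[3.0, 7.0], [6.0, 8.0], [2.0, 9.0]])
input_t = input.t()

# Non-contiguous: contiguous() allocates fresh storage for the copy
copy = input_t.contiguous()
print(copy.data_ptr() == input_t.data_ptr())  # False: new memory

# Already contiguous: contiguous() just returns the same tensor
print(input.contiguous() is input)            # True
```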