PyTorch allocates tensor values in contiguous chunks of memory managed by torch.Storage instances. A torch.Storage is a one-dimensional array of numerical data, that is, a contiguous block of memory containing numbers of a given type, such as float32 or int64.

A PyTorch Tensor instance is a view of such a Storage instance, capable of indexing into that storage using an offset and per-dimension strides.
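To make this concrete, here is a minimal sketch (the points values are invented for illustration; on recent PyTorch versions, Tensor.storage() may emit a deprecation warning in favor of untyped_storage()):

import torch

points = torch.tensor([[4.0, 1.0],
                       [5.0, 3.0],
                       [2.0, 1.0]])

# The 3x2 tensor is backed by one flat, contiguous block of six floats.
storage = points.storage()
print(len(storage))  # 6
print(storage[0])    # 4.0 -> storage is indexed linearly, not by row and column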

PyTorch tensor storage

Multiple tensors can index the same storage even if they index into the data differently. The underlying memory is allocated only once, however, so creating alternative tensor-views of the data can be done quickly regardless of the size of the data managed by the Storage instance.
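For example, the following sketch (variable names are illustrative) builds two differently shaped views over a single allocation:

import torch

base = torch.arange(6.0)  # one allocation of six floats
v1 = base.view(2, 3)
v2 = base.view(3, 2)

# All three tensors read and write the same memory block.
print(base.data_ptr() == v1.data_ptr() == v2.data_ptr())  # True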

View into storage

tensor.view() returns a new tensor that is a view of an existing tensor. The returned tensor shares the same underlying data as its base tensor, which avoids an explicit data copy and makes reshaping and slicing fast and memory-efficient.

import torch

a = torch.rand(2, 3)
b = a.view(3, 2)

# 'a' and 'b' share the same underlying data.
print(a.storage().data_ptr() == b.storage().data_ptr())  # True

b[0][0] = 8.14

print(a[0][0])  # tensor(8.1400)

If you edit the data through the view, the change is reflected in the base tensor as well: tensor.view() does not move any data, it just changes the way the same data is interpreted. Note that taking a view of a contiguous tensor can produce a non-contiguous tensor, as transpose() shows below.

a = torch.tensor([[0, 1],
                  [2, 3]])

print(a.is_contiguous())  # True

b = a.transpose(0, 1)
# 'b' is a view of 'a'; no data movement happened here.
print(b.is_contiguous())  # False -> view tensors might be non-contiguous

# To get a contiguous tensor, call contiguous().
# It copies the data when 'b' is not contiguous.
c = b.contiguous()
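As a quick check (a sketch; data_ptr() compares the addresses of the underlying memory), you can confirm that b still aliases a while c is a fresh copy:

print(b.data_ptr() == a.data_ptr())  # True -> 'b' still aliases 'a'
print(c.data_ptr() == a.data_ptr())  # False -> contiguous() allocated new memory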

In order to index into a storage, tensors rely on a few pieces of information that, together with their storage, define them: size, offset, and stride.

PyTorch Tensor View

The size (or shape) is a tuple indicating how many elements the tensor holds along each dimension. The storage offset is the index in the storage corresponding to the tensor's first element. The stride is the number of elements in the storage that need to be skipped over to obtain the next element along each dimension. For a 2-D tensor, element [i, j] therefore lives at storage index offset + i * stride[0] + j * stride[1].

import torch

x = torch.tensor([[7, 9, 4],
                  [3, 1, 6],
                  [8, 5, 2]], dtype=torch.int)

print(x.stride())          # (3, 1) -> skip 3 elements per row, 1 per column
print(x.shape)             # torch.Size([3, 3])
print(x.storage_offset())  # 0 -> the tensor starts at the beginning of its storage
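Slicing and transposing put these three numbers to work. Continuing with x from above (a sketch; the indices are illustrative):

y = x[1:, 1]               # the column [1, 5], starting in the second row: a view
print(y.storage_offset())  # 4 -> x[1][1] sits at flat index 1 * 3 + 1 = 4
print(y.stride())          # (3,) -> step over a whole row to reach the next element

t = x.t()                  # transpose: same storage, strides swapped
print(t.stride())          # (1, 3)
print(t.storage_offset())  # 0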

Basic indexing into a tensor returns a view, while advanced indexing returns a copy. Assignment via either basic or advanced indexing is done in-place.
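A short sketch of that difference (the values are illustrative):

import torch

t = torch.zeros(4)

v = t[1:3]       # basic indexing (a slice): a view
v[0] = 7.0
print(t)         # tensor([0., 7., 0., 0.]) -> the base tensor changed

c = t[[1, 2]]    # advanced indexing (an index list): a copy
c[0] = 9.0
print(t)         # still tensor([0., 7., 0., 0.]) -> the base is untouched

t[[0, 3]] = 5.0  # assignment via advanced indexing is still in-place
print(t)         # tensor([5., 7., 0., 5.])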
