Deep learning takes data in some form, like images or text, and produces data in another form, like labels, numbers, or more images or text. The deep learning model transforms data from one representation to another.

The process begins by converting our input into floating-point numbers. In this tutorial, we learn how PyTorch deals with floating-point numbers by using tensors.

Model as Floating-Point Numbers

Floating-point numbers are the way a network deals with information. We need to encode the real-world data we want to process into floating-point numbers, and then decode the network's output back into something we can understand and use.
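As a concrete sketch, classifying an image might look like the following. The image shape and label names here are made up purely for illustration:

import torch

# Encode: a 28x28 grayscale image becomes a 2D tensor of floats in [0, 1]
image = torch.rand(28, 28)

# ...the network would transform `image` into a vector of class scores...
scores = torch.tensor([0.1, 0.7, 0.2])

# Decode: map the highest score back to a human-readable label (hypothetical labels)
labels = ["cat", "dog", "bird"]
print(labels[scores.argmax()])  # dog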

A deep neural network transforms one form of data to another in stages, which means the partially transformed data between each stage can be thought of as a sequence of intermediate representations.

PyTorch Deep Learning Model

Intermediate representations are collections of floating-point numbers that characterize the input and capture the data’s structure in a way that is instrumental for describing how inputs are mapped to the outputs of the neural network.
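We can make this concrete with a tiny sketch: each stage of the network below produces an intermediate tensor of floating-point numbers that we can inspect directly. The layer sizes are arbitrary, chosen only for illustration:

import torch
import torch.nn as nn

net = nn.Sequential(
    nn.Linear(4, 8),  # stage 1: input -> hidden representation
    nn.ReLU(),
    nn.Linear(8, 2),  # stage 2: hidden representation -> output
)

x = torch.rand(1, 4)        # input representation
hidden = net[1](net[0](x))  # intermediate representation after stage 1
output = net(x)             # final output
print(hidden.shape)  # torch.Size([1, 8])
print(output.shape)  # torch.Size([1, 2])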

A Tensor Is a Multidimensional Array

PyTorch introduces a fundamental data structure called the tensor. In the context of deep learning, tensors refer to the generalization of vectors and matrices to an arbitrary number of dimensions.

PyTorch Tensor

Another name for the same concept is multidimensional array. The dimensionality of a tensor coincides with the number of indices used to refer to a scalar value within it.
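For example, a 3-dimensional tensor needs exactly three indices to address one scalar value. The shape here is arbitrary:

import torch

t = torch.rand(2, 3, 4)  # a 3-dimensional tensor
print(t.dim())           # 3 -- one index per dimension
print(t[1, 2, 3])        # three indices pick out a single scalar value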

Create Our First Tensors

In a PyTorch deep learning model, all of your data, such as inputs, outputs, and learned weights, is expressed as tensors. A tensor is a multidimensional array that can contain floating-point, integer, or Boolean data.
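For example, torch.tensor infers the data type from the values you give it, and you can set it explicitly with the dtype argument:

import torch

f = torch.tensor([1.0, 2.0, 3.0])      # floating-point data
i = torch.tensor([1, 2, 3])            # integer data
b = torch.tensor([True, False, True])  # boolean data
print(f.dtype, i.dtype, b.dtype)       # torch.float32 torch.int64 torch.bool

d = torch.tensor([1, 2, 3], dtype=torch.float64)  # explicit dtype
print(d.dtype)  # torch.float64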

In this tutorial, we’re going to go over some of the ways to create PyTorch tensors. Let’s construct our first PyTorch tensor and see what it looks like. It won’t be a particularly meaningful tensor, for now.

The torch module has multiple factory methods that let you create tensors with or without initial values, in whatever data type you need. The most basic way to allocate a tensor is torch.empty:

import torch

x = torch.empty(2, 3)

print(type(x))  # <class 'torch.Tensor'>
print(x)
# tensor([[2.0545e+20, 4.1951e-08, 7.9871e+20],
#         [1.0529e-11, 5.2895e+22, 3.2916e-09]])

This creates a 2×3 tensor, and we can see that the object itself is of type torch.Tensor.

When you run this code, you may see random-looking values in the output. That's because torch.empty just allocates memory and does not write any values to it, so whatever happened to be in that memory when you allocated the tensor is what you see here.

More often than not, you'll want to initialize your tensor with some value, such as all zeros, all ones, or random values, and the torch module provides factory methods for all of these.

zeros = torch.zeros(2, 3)
print(zeros)

ones = torch.ones(2, 3)
print(ones)

random = torch.rand(2, 3)
print(random)

# out:
# tensor([[0., 0., 0.],
#         [0., 0., 0.]])
# tensor([[1., 1., 1.],
#         [1., 1., 1.]])
# tensor([[0.0795, 0.8243, 0.7187],
#         [0.1133, 0.1968, 0.3209]])

You get what you might expect from the method names: a 2×3 tensor full of zeros, a 2×3 tensor full of ones, and a tensor full of random values between 0 and 1.
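Two related conveniences worth knowing: each of these factory methods has a *_like variant that reuses the shape and data type of an existing tensor, and you can seed the random number generator so that torch.rand produces reproducible values:

import torch

x = torch.empty(2, 3)
print(torch.zeros_like(x).shape)  # torch.Size([2, 3])
print(torch.ones_like(x).shape)   # torch.Size([2, 3])

torch.manual_seed(42)  # seed for reproducible random values
print(torch.rand(2, 3))
torch.manual_seed(42)
print(torch.rand(2, 3))  # same values as the previous call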
