Neural networks operate on floating-point numbers, so we need a way to encode real-world data as floating-point numbers and then decode the network's output back into a form we can interpret.

Intermediate representations are collections of floating-point numbers that characterize the input and capture the data’s structure in a way that is instrumental for describing how inputs are mapped to the outputs of the neural network.

So far, we have covered the basics of what tensors are and how to create them, but we have not yet covered what kinds of numeric types we can store in a tensor. In this tutorial, we will learn how to cast tensors to another type.

## Create a tensor with the dtype attribute

To allocate a tensor of the right numeric type, we can specify the proper `dtype` as an argument to the constructor. For example:

```python
import torch

double_ten = torch.ones(120, 12, dtype=torch.double)
short_ten = torch.tensor([[11, 22], [33, 44]], dtype=torch.short)
```

We can find out about the `dtype` for a tensor by accessing the corresponding attribute:

```python
short_ten.dtype  # torch.int16
```

The `dtype` argument of tensor constructors specifies the numerical data type that will be contained in the tensor. The data type determines the possible values the tensor can hold (integers versus floating-point numbers) and the number of bytes per value.
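We can check the bytes-per-value part directly. A minimal sketch using `element_size()`, PyTorch's accessor for the size of a single element:

```python
import torch

# 16-bit integers take 2 bytes per value; 64-bit floats take 8
t_short = torch.ones(3, dtype=torch.short)
t_double = torch.ones(3, dtype=torch.double)

print(t_short.element_size())   # 2
print(t_double.element_size())  # 8
```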

The `dtype` argument is deliberately similar to the standard NumPy argument of the same name. Here's a list of the possible values for the `dtype` argument:

- `torch.bool`: Boolean
- `torch.int8`: signed 8-bit integers
- `torch.uint8`: unsigned 8-bit integers
- `torch.int16` or `torch.short`: signed 16-bit integers
- `torch.int32` or `torch.int`: signed 32-bit integers
- `torch.int64` or `torch.long`: signed 64-bit integers
- `torch.float16` or `torch.half`: 16-bit, half-precision floating-point
- `torch.float32` or `torch.float`: 32-bit floating-point
- `torch.float64` or `torch.double`: 64-bit, double-precision floating-point

The default data type for tensors is 32-bit floating-point. We can also cast the output of a tensor creation function to the right type using the corresponding casting method, such as:

```python
double_ten = torch.zeros(5, 2).double()
short_ten = torch.ones(5, 2).short()
```

The more convenient option is the `to()` method:

```python
double_ten = torch.zeros(5, 2).to(torch.double)
short_ten = torch.ones(5, 2).to(dtype=torch.short)
```

`to()` checks whether the conversion is necessary and, if so, performs it. The dtype-named casting methods such as `float()` are shorthands for `to()`, but `to()` can take additional arguments (for example, a target device).
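We can observe this no-op behavior directly: when a tensor already has the requested `dtype`, `to()` returns the tensor itself rather than making a copy.

```python
import torch

f = torch.zeros(5, 2)            # default dtype is torch.float32
same = f.to(torch.float32)       # no conversion needed
converted = f.to(torch.float64)  # conversion performed, new tensor

print(same is f)       # True: the original tensor is returned unchanged
print(converted is f)  # False: a converted copy was created
```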

PyTorch tensor also has the notion of `device`, which is where on the computer the tensor data is placed. Here is how we can create a tensor on the GPU by specifying the corresponding argument to the constructor:

```python
ten_gpu = torch.tensor([[4.0, 1.0], [5.0, 3.0], [2.0, 1.0]], device='cuda')
```

We could instead copy a tensor created on the CPU onto the GPU using the `to()` method:

```python
ten_gpu = ten.to(device='cuda')
```

When mixing input types in operations, the inputs are converted to the larger type automatically. Thus, if we want 32-bit computation, we need to make sure all our inputs are (at most) 32-bit.
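For example, adding a 64-bit floating-point tensor to a 32-bit one promotes the result to 64 bits:

```python
import torch

d = torch.ones(3, dtype=torch.float64)
f = torch.ones(3, dtype=torch.float32)

# The smaller type is promoted before the operation
print((d + f).dtype)  # torch.float64
```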

Creating a tensor with integers as arguments, such as using `torch.tensor([2, 2])`, will create a 64-bit integer tensor by default. As such, we’ll spend most of our time dealing with float32 and int64.
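We can confirm these defaults directly:

```python
import torch

print(torch.tensor([2, 2]).dtype)      # torch.int64
print(torch.tensor([2.0, 2.0]).dtype)  # torch.float32
```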

Computations in neural networks typically happen in 32-bit floating-point precision. Higher precision, such as 64-bit, will not buy improvements in the accuracy of a model and will require more memory and computing time.

The 16-bit floating-point, half-precision data type is not present natively in standard CPUs, but it is offered on modern GPUs. It is possible to switch to half-precision to decrease the footprint of a neural network model if needed, with a minor impact on accuracy.
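Even on a CPU we can inspect the memory savings of half precision (a sketch; heavy half-precision compute is best left to a GPU):

```python
import torch

full = torch.randn(1000, 1000)  # float32: 4 bytes per value
half = full.to(torch.half)      # float16: 2 bytes per value

print(full.element_size() * full.nelement())  # 4000000 bytes
print(half.element_size() * half.nelement())  # 2000000 bytes
```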
