Python is the most widely used language for scientific and numeric programming. However, it does not support using the GPU or calculating gradients, which are critical for deep learning.

Python is slow compared to many compiled languages. For this reason, libraries such as NumPy and PyTorch are largely wrappers around compiled code written in C. Operations on NumPy arrays and PyTorch tensors can be thousands of times faster than the equivalent pure Python.
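As a rough illustration of this speed difference (the workload here is a made-up sketch, not a rigorous benchmark), we can time a simple element-wise operation in pure Python against the same operation on a tensor:

```python
import timeit

import torch

n = 100_000
py_list = list(range(n))
t = torch.arange(n)

# Pure Python: the interpreter loops over every element
py_time = timeit.timeit(lambda: [x * 2 for x in py_list], number=10)

# PyTorch: one vectorized call into compiled C code
torch_time = timeit.timeit(lambda: t * 2, number=10)

print(f"pure Python: {py_time:.4f}s, torch: {torch_time:.4f}s")
```

The exact ratio depends on the machine and the operation, but the vectorized tensor version is consistently far faster.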

A PyTorch tensor is a multi-dimensional rectangular data structure, with all items of the same type. Unlike nested Python lists, which can be jagged, the inner dimensions of a tensor must all be the same size.
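Because tensors must be rectangular, passing a jagged nested list to `torch.tensor()` raises an error:

```python
import torch

raised = False
try:
    # Inner lists have different lengths, so this is not a valid tensor
    torch.tensor([[1, 2, 3], [4, 5]])
except ValueError as err:
    raised = True
    print("jagged list rejected:", err)
```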

Because all items are the same type, such as integer or float, PyTorch can store them as a compact C data structure in memory. PyTorch tensors also have an additional capability: they can live on the GPU, in which case their computations are optimized for the GPU and can run much faster.
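To see the compact storage in action, we can inspect how many bytes each item occupies using the standard tensor methods `element_size()` and `nelement()`:

```python
import torch

t = torch.tensor([1, 2, 3], dtype=torch.int32)

print(t.element_size())                 # 4 -- each int32 item takes 4 bytes
print(t.element_size() * t.nelement())  # 12 -- total bytes for the data
```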

Perhaps the most important new coding skill for a Python programmer to learn is how to use tensor APIs effectively. In this tutorial, you will learn a few different ways of creating tensors, and then see some of their properties.

## Create Tensor From Python List

We can initialize a tensor from a variety of data types. Some examples are Python lists and Python numerical primitives.

To create a tensor from a list (or a list of lists, or a list of lists of lists, etc.), pass it to the `torch.tensor()` function:

```python
import torch

data = [[1, 2, 3], [4, 5, 6]]
a_tensor = torch.tensor(data)

print(a_tensor)        # tensor([[1, 2, 3],
                       #         [4, 5, 6]])

print(a_tensor.shape)  # torch.Size([2, 3])
print(a_tensor.dtype)  # torch.int64
```

This creates `a_tensor` with shape `(2, 3)` and dtype `int64`, both inferred from the source list. We can also set the data type of a tensor explicitly during initialization:

```python
a_tensor = torch.tensor(data, dtype=torch.float32)
```

Similarly to the data type, we can set the device of a tensor upon initialization:

```python
# PyTorch will use the GPU if it's available
device = 'cuda' if torch.cuda.is_available() else 'cpu'
arr = [1, 2]
x_t = torch.tensor(arr, dtype=torch.float32, device=device)
```

PyTorch tensors can also be initialized with the argument `requires_grad`. When set to `True`, autograd records operations on the tensor, and the computed gradient is stored in its `grad` attribute after calling `backward()`.

```python
a_tensor = torch.tensor(data, dtype=torch.float32, requires_grad=True)
```
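A minimal sketch of how the `grad` attribute gets populated: compute a scalar from the tensor, call `backward()`, and the gradient appears in `grad`:

```python
import torch

x = torch.tensor([2.0, 3.0], requires_grad=True)
y = (x ** 2).sum()  # y = x0^2 + x1^2
y.backward()        # compute dy/dx

print(x.grad)  # tensor([4., 6.]) -- the derivative 2*x at each element
```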

## Create a Tensor From NumPy Arrays

Tensors can also be initialized from NumPy arrays, allowing PyTorch to be integrated easily into existing data science and machine learning workflows:

```python
import numpy as np

np_array = np.array([1, 2])
t_array = torch.from_numpy(np_array)

print(t_array)  # tensor([1, 2])
```
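Note that `torch.from_numpy()` shares memory with the source array rather than copying it, so changing the NumPy array changes the tensor as well:

```python
import numpy as np
import torch

np_array = np.array([1, 2, 3])
t_array = torch.from_numpy(np_array)

np_array[0] = 99  # modify the NumPy array in place
print(t_array)    # tensor([99,  2,  3]) -- the tensor sees the change
```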

You can select a row (note that, like lists in Python, tensors are 0-indexed, so 1 refers to the second row/column):

```python
data = [[1, 2, 3], [4, 5, 6]]
a_tensor = torch.tensor(data)

print(a_tensor[0])  # tensor([1, 2, 3])
```

Or a column, by using `:` to indicate all of the first axis (we sometimes refer to the dimensions of tensors/arrays as axes):

```python
print(a_tensor[:, 1])  # tensor([2, 5])
```

You can combine these with Python slice syntax (`[start:end]`, with `end` excluded) to select part of a row or column:

```python
print(a_tensor[1, 1:3])  # tensor([5, 6])
```
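Slicing works on every axis at once, and negative indices count from the end, just as with Python lists. A few more examples on the same data:

```python
import torch

a_tensor = torch.tensor([[1, 2, 3], [4, 5, 6]])

print(a_tensor[:, :2])  # tensor([[1, 2], [4, 5]]) -- first two columns
print(a_tensor[-1])     # tensor([4, 5, 6]) -- the last row
print(a_tensor[0, -1])  # tensor(3) -- last item of the first row
```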
