In the previous tutorial, we learned what tensors are and how to create tensors from a list in PyTorch. A tensor is an array: a data structure that stores a collection of numbers, each accessible individually using one or more indices.

PyTorch indexing is used to access an element of a tensor by giving its index value, **which starts from 0**. Slicing a PyTorch tensor means extracting the elements in a specific range; it obtains a **sub-tensor**, much like a sublist of a list.

There are two types of indexing: basic and advanced. Advanced indexing is further divided into Boolean and purely integer indexing. Negative index values start counting from the end of the array.
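As a quick sketch of the two advanced-indexing styles mentioned above (reusing the six-element tensor from the examples below), a Boolean mask and an integer index tensor each pull out a group of elements at once:

```python
import torch

ten = torch.tensor([4, 1, 5, 3, 2, 1])

# Boolean (mask) indexing: keep only the elements greater than 2
mask = ten > 2
print(mask)       # tensor([ True, False,  True,  True, False, False])
print(ten[mask])  # tensor([4, 5, 3])

# Purely integer indexing: pick the elements at positions 0, 2, and 5
print(ten[torch.tensor([0, 2, 5])])  # tensor([4, 5, 1])
```

Both forms return a new tensor containing copies of the selected elements, unlike basic slicing, which returns a view.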

## Indexing a Tensor Array

Indexing is used to access individual elements. It is also possible to extract entire rows, columns, or planes from multi-dimensional arrays. **Indexing starts from 0.** Let’s see an array example below to understand the concept of indexing:

| Index | 0 | 1 | 2 | 3 | 4 | 5 | 6 |
| --- | --- | --- | --- | --- | --- | --- | --- |
| Element of array | 0.68 | 0.12 | 0.54 | 0.45 | 0.89 | 0.11 | 0.76 |

## Indexing in 1 Dimension

Let’s see PyTorch tensor indexing in action, starting with a tensor of six numbers.

```
import torch

ten = torch.tensor([4, 1, 5, 3, 2, 1])
print("Element at index 0 :", ten[0])  # Element at index 0 : tensor(4)
print("Element at index 1 :", ten[1])  # Element at index 1 : tensor(1)
```

In the above code, a tensor of shape `(6,)` is created using the `torch.tensor()` function, and the elements at indices 0 and 1 are printed as output.

## Indexing with another tensor

In PyTorch, when a tensor is used as the index to access a group of elements at once, this is called indexing with an index array. Let’s see an example to understand how a tensor can be used as an index:

```
ten = torch.tensor([4, 1, 5, 3, 2, 1])
index_t = torch.as_tensor([-2, 4, 0, -1])
print(ten[index_t])  # tensor([2, 2, 4, 1])
```

Here, we also use negative indices: negative indexing starts from the end of the tensor, with the last element at index `-1`.
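To make the negative-index convention concrete, here is a small sketch on the same six-element tensor, counting backward from the end:

```python
import torch

ten = torch.tensor([4, 1, 5, 3, 2, 1])

# Negative indices count from the end: -1 is the last element
print(ten[-1])  # tensor(1)
print(ten[-2])  # tensor(2)
print(ten[-6])  # tensor(4)  (same as ten[0] for a 6-element tensor)
```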

## Indexing in 2 Dimensions

Let’s look at the example below to understand how PyTorch indexing is done in a 2-D array:

| 5 (0,0) | 6 (0,1) | 0 (0,2) |
| --- | --- | --- |
| 1 (1,0) | 3 (1,1) | 7 (1,2) |
| 1 (2,0) | 4 (2,1) | 8 (2,2) |

Let’s look at the example below, where two indexes are given, the first one for the row and the second one for the column.

```
ten = torch.randint(9, (9,))  # 9 random integers drawn from [0, 9)
ten = ten.reshape(3, 3)
print(ten)
print("Element at 0th row and 0th column of ten is:", ten[0, 0])
print("Element at 1st row and 2nd column of ten is:", ten[1, 2])
```

Output (the random values will differ on each run; the explanation below assumes the values shown):

```
tensor([[5, 6, 0],
        [1, 3, 7],
        [1, 4, 8]])
Element at 0th row and 0th column of ten is: tensor(5)
Element at 1st row and 2nd column of ten is: tensor(7)
```

In the above code example, a 2-D array is created using the `torch.randint()` function, which produces a 1-D array of random integers, and the `tensor.reshape()` function, which transforms that 1-D array into 3 rows and 3 columns.

**Picking a Row or Column in a 2-D Tensor**

`print("1st row :\n",ten[1]) #tensor([1, 3, 7])`

Here, a single index 1 on a 2-D tensor selects the entire row at index 1, i.e., [1, 3, 7], which is printed as output.
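Picking a column works the same way, except a full slice (`:`) is placed in the row position. A minimal sketch, using the example values shown above:

```python
import torch

ten = torch.tensor([[5, 6, 0],
                    [1, 3, 7],
                    [1, 4, 8]])

# A slice in the row position selects every row; 1 picks column index 1
print(ten[:, 1])  # tensor([6, 3, 4])

# The same pattern picks a row: ten[1] is shorthand for ten[1, :]
print(ten[1, :])  # tensor([1, 3, 7])
```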

## Indexing in 3 Dimensions

A 3-D array has three dimensions. Suppose we index it as (i, j, k), where i indexes the 1st dimension, j the 2nd, and k the 3rd. Let’s look at the example below for a better understanding. Remember: **indexing starts from zero.**

```
ten = torch.randint(12, (12,))  # 12 random integers drawn from [0, 12)
ten = ten.reshape(2, 2, 3)
print("Array:\n", ten)
print("Element:", ten[1, 0, 2])
```

Output (the random values will differ on each run; the explanation below assumes the values shown):

```
Array:
tensor([[[ 0,  1,  3],
         [10,  4,  7]],

        [[ 8,  0,  3],
         [ 7, 11,  1]]])
Element: tensor(3)
```

We want to access the element at index (1, 0, 2). The first index, 1, selects along the 1st dimension, which contains two 2-D arrays: `[[0, 1, 3], [10, 4, 7]]` and `[[8, 0, 3], [7, 11, 1]]`. Since indexing starts from 0, selecting 1 gives the 2nd array: `[[8, 0, 3], [7, 11, 1]]`.

The second index, 0, selects along the 2nd dimension, which contains the two rows `[8, 0, 3]` and `[7, 11, 1]`. Selecting 0 gives the 1st row: `[8, 0, 3]`.

The third index, 2, selects along the 3rd dimension, which holds the three values 8, 0, and 3. **As 2 is selected, 3 is the output.**
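The step-by-step selection described above can be reproduced literally in code: indexing one dimension at a time with chained brackets gives the same result as the single tuple index. A sketch using the example values shown above:

```python
import torch

ten = torch.tensor([[[0, 1, 3],
                     [10, 4, 7]],
                    [[8, 0, 3],
                     [7, 11, 1]]])

# One index per dimension in a single step...
print(ten[1, 0, 2])  # tensor(3)

# ...is equivalent to narrowing down one dimension at a time
print(ten[1][0][2])  # tensor(3)
```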

**Picking a Row or Column in a 3D Array**

`print("Row at index (0,1) :", ten[0, 1])  # Row at index (0,1) : tensor([10, 4, 7])`

Suppose we want the row at index (0, 1). The first index, 0, selects along the 1st dimension, which has two 2-D arrays: `[[0, 1, 3], [10, 4, 7]]` and `[[8, 0, 3], [7, 11, 1]]`. As we select 0, and indexing starts from 0, we get the 1st array: `[[0, 1, 3], [10, 4, 7]]`.

The second index, 1, selects along the 2nd dimension, which contains the two rows `[0, 1, 3]` and `[10, 4, 7]`. As 1 is selected, the 2nd row is the output: `[10, 4, 7]`.

**Picking a matrix in a 3D array**

```
print("Matrix at index 1 :\n", ten[1])
# output:
# Matrix at index 1 :
# tensor([[ 8,  0,  3],
#         [ 7, 11,  1]])
```

We want the matrix at index 1, i.e., `ten[1]`. The single index 1 selects along the 1st dimension, which has two matrices. As indexing starts from 0 and we ask for index 1, the 2nd matrix is selected.

## Slicing

Using basic indexing and slicing, we can access a particular element or a group of elements of a tensor. Basic indexing and slicing return views of the original tensor.

Basic indexing and slicing occur when the `index` in `arr[index]` is:

- a slice object (constructed by `start:stop:step` notation),
- an integer,
- or a tuple of slice objects and integers.
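The `start:stop:step` bracket notation is just shorthand for indexing with Python's built-in `slice` object, and each of the three parts may be omitted. A short sketch using the ten-element tensor from the example below:

```python
import torch

ten = torch.tensor([4, 1, 5, 3, 2, 1, 3, 8, 1, 3])

# ten[3:8] is shorthand for indexing with an explicit slice object
s = slice(3, 8)
print(ten[s])    # tensor([3, 2, 1, 3, 8])

# start and stop may be omitted; here only a step of 3 is given
print(ten[::3])  # every 3rd element: tensor([4, 3, 3, 3])
```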

```
ten = torch.tensor([4, 1, 5, 3, 2, 1, 3, 8, 1, 3])
print("Element from index 3 to 8 :", ten[3:8])  # tensor([3, 2, 1, 3, 8])
```

The elements are sliced from the 3rd index to the 7th index because the **last value is excluded**.
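A few more slicing variations on the same tensor show how the omitted parts default, plus one PyTorch-specific caveat: unlike NumPy, PyTorch tensors do not accept a negative step such as `ten[::-1]`, so reversing is done with `torch.flip` instead.

```python
import torch

ten = torch.tensor([4, 1, 5, 3, 2, 1, 3, 8, 1, 3])

print(ten[3:8:2])  # every 2nd element from index 3 to 7: tensor([3, 1, 8])
print(ten[:4])     # omitted start defaults to 0: tensor([4, 1, 5, 3])
print(ten[6:])     # omitted stop runs to the end: tensor([3, 8, 1, 3])

# A negative step like ten[::-1] raises an error in PyTorch;
# use torch.flip to reverse along a dimension instead
print(torch.flip(ten, dims=[0]))  # tensor([3, 1, 8, 3, 1, 2, 3, 5, 1, 4])
```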

**Ellipsis**

**Ellipsis** can also be used with basic slicing. The ellipsis is the built-in Python constant written as three dots (`...`); the name `Ellipsis` can be used instead of `...`. Let’s see the example below to understand the concept.

```
ten = torch.tensor([[[1, 2, 3],
                     [4, 5, 6]],
                    [[7, 8, 9],
                     [10, 11, 12]]])
print(ten.shape)  # torch.Size([2, 2, 3])
# Select index 0 along the last dimension
print("using simple indexing:\n", ten[:, :, 0])
print("using ellipsis:\n", ten[..., 0])
```

Output:

```
using simple indexing:
 tensor([[ 1,  4],
        [ 7, 10]])
using ellipsis:
 tensor([[ 1,  4],
        [ 7, 10]])
```

No explicit slices are written before the `0`, so the ellipsis expands to full slices (`:`) for all the leading dimensions: `ten[..., 0]` is equivalent to `ten[:, :, 0]`, selecting the 0th column of each row in both layers.
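The ellipsis can stand in for any run of dimensions, not just the leading ones. A small sketch on a 2×3×4 tensor built with `torch.arange`, verifying the equivalences with `torch.equal`:

```python
import torch

ten = torch.arange(24).reshape(2, 3, 4)

# The ellipsis expands to as many full slices (:) as needed
print(torch.equal(ten[..., 0], ten[:, :, 0]))  # True
print(torch.equal(ten[0, ...], ten[0, :, :]))  # True
```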

### Related Post

How to use PyTorch gather function for indexing?

How to reshape tensor in PyTorch?

What does Unsqueeze do in PyTorch?