Category Archives: PyTorch
Create NumPy array from PyTorch Tensor using detach().numpy()
When creating an np.ndarray from a torch.Tensor, or vice versa, both objects reference the same underlying storage in memory. Since np.ndarray does not store the computational graph associated with the tensor, this graph must be explicitly removed using detach() when both NumPy and PyTorch are meant to share the same storage.
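A minimal sketch of the shared-storage behaviour; the tensor values are made up for illustration:

```python
import torch

t = torch.ones(3, requires_grad=True)

# t.numpy() would raise a RuntimeError because t carries grad history;
# detach() returns a tensor that shares storage but has no graph attached.
a = t.detach().numpy()

# Both objects reference the same buffer, so a write through NumPy
# is visible through the tensor as well.
a[0] = 5.0
print(t)  # tensor([5., 1., 1.], requires_grad=True)
```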
Use of ‘model.eval()’ and ‘with torch.no_grad()’ in model evaluation
Using the designated modes for training, model.train(), and for evaluation, model.eval(), automatically sets the behaviour of the dropout and batch normalization layers and rescales activations appropriately, so we do not have to worry about that ourselves.
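A short sketch of the evaluation pattern; the tiny model below is just a stand-in for a real network:

```python
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 10), nn.Dropout(p=0.5), nn.BatchNorm1d(10))
x = torch.randn(4, 10)

model.eval()               # dropout becomes identity; BatchNorm uses running stats
with torch.no_grad():      # no graph is built, saving memory and compute
    out = model(x)

print(out.requires_grad)   # False: gradient tracking was disabled
model.train()              # switch the modes back before resuming training
```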
How to calculate running loss using loss.item() in PyTorch?
You can simply sum the per-batch losses and, when the epoch finishes, divide by the number of steps (the number of batches). This gives you the correct average loss for this particular epoch. This training loss is used to see “how well your model performs on the training dataset”.
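A sketch of the accumulation pattern; the toy model, optimizer, and data below are placeholders, not from the post:

```python
import torch
import torch.nn as nn

model = nn.Linear(4, 2)
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loader = [(torch.randn(8, 4), torch.randint(0, 2, (8,))) for _ in range(5)]

running_loss = 0.0
for inputs, targets in loader:
    optimizer.zero_grad()
    loss = criterion(model(inputs), targets)
    loss.backward()
    optimizer.step()
    # .item() extracts a plain Python float, so accumulating it does not
    # keep each batch's computational graph alive in memory.
    running_loss += loss.item()

epoch_loss = running_loss / len(loader)  # mean batch loss for this epoch
print(epoch_loss)
```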
Advantage of using LogSoftmax vs Softmax vs CrossEntropyLoss in PyTorch
The workaround is to use log probabilities instead of probabilities, which makes the calculation numerically stable. The reformulated log-sum-exp version allows us to evaluate softmax with only small numerical errors, even when z contains extremely large or extremely negative numbers.
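A small illustration of the instability; the logits are deliberately extreme:

```python
import torch
import torch.nn.functional as F

z = torch.tensor([1000.0, 0.0])  # exp(1000.) overflows float32 to inf

naive = torch.log(torch.exp(z) / torch.exp(z).sum())
print(naive)                     # tensor([nan, -inf])

stable = F.log_softmax(z, dim=0)
print(stable)                    # tensor([0., -1000.]) via the log-sum-exp trick
```

In the same spirit, nn.CrossEntropyLoss combines LogSoftmax and NLLLoss in a single, numerically stable operation.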
PyTorch Contiguous Tensor
Contiguous tensors are convenient because we can visit them efficiently, in order, without jumping around in the storage. This improves data locality and therefore performance, because of the way memory access works on modern CPUs. The advantage, of course, depends on the order in which an algorithm visits the data.
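A quick sketch of how a view can break contiguity and how contiguous() restores it:

```python
import torch

x = torch.arange(6).reshape(2, 3)
y = x.t()                     # transpose: a view with swapped strides, no copy

print(x.is_contiguous())      # True
print(y.is_contiguous())      # False: storage no longer matches row-major order

# view() requires contiguous memory; contiguous() copies the elements
# into a fresh, row-major layout so they can be visited in order again.
z = y.contiguous().view(-1)
print(z)                      # tensor([0, 3, 1, 4, 2, 5])
```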
Advantages of ReLU vs Tanh vs Sigmoid activation functions in deep neural networks
Saturated neurons kill the gradient: when the input is very positive or very negative, the sigmoid is flat and its gradient is essentially zero. Sigmoid outputs are also not zero-centered, which leads to inefficient, zig-zagging gradient updates. A third problem is the exponential function itself, which is somewhat computationally expensive.
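A quick numerical look at the saturation problem; the input values are arbitrary:

```python
import torch

x = torch.tensor([-10.0, 0.0, 10.0], requires_grad=True)

torch.sigmoid(x).sum().backward()
print(x.grad)   # ~tensor([4.5e-05, 0.25, 4.5e-05]): saturated ends kill the gradient

x.grad = None   # reset before the next backward pass
torch.relu(x).sum().backward()
print(x.grad)   # tensor([0., 0., 1.]): a constant gradient of 1 for positive inputs
```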
How to copy PyTorch Tensor using clone, detach, and deepcopy?
If we want to make a copy of a tensor such that operations on the copy still propagate gradients back to the original tensor, we must use clone(). We should use detach() when we do not want a tensor to be included in the resulting computational graph.
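A minimal sketch contrasting the three kinds of copies; the tensor is made up for illustration:

```python
import copy
import torch

a = torch.ones(3, requires_grad=True)

b = a.clone()          # differentiable copy: gradients flow back to a
b.sum().backward()
print(a.grad)          # tensor([1., 1., 1.])

c = a.detach()         # shares storage with a, but carries no grad history
print(c.requires_grad) # False

d = copy.deepcopy(a)   # fully independent leaf tensor with its own storage
print(d.data_ptr() == a.data_ptr())  # False
```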