Category Archives: PyTorch
How loss.backward(), optimizer.step() and optimizer.zero_grad() are related in PyTorch
When we call loss.backward(), PyTorch traverses the computational graph in the reverse direction to compute the gradients and accumulates their values in the grad attribute of the corresponding tensors (the leaf nodes of the graph).
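A minimal sketch of how the three calls fit together in one training step; the linear model, loss function, and dummy batch below are illustrative placeholders, not taken from the post:

```python
import torch
import torch.nn as nn

model = nn.Linear(10, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.MSELoss()

x = torch.randn(4, 10)  # dummy batch of inputs
y = torch.randn(4, 1)   # dummy targets

optimizer.zero_grad()        # clear gradients left over from the previous step
loss = loss_fn(model(x), y)  # forward pass builds the computational graph
loss.backward()              # reverse traversal fills each parameter's .grad
optimizer.step()             # update parameters using the accumulated .grad values
```

Note that without zero_grad(), a second backward() call would add to the existing .grad values rather than overwrite them, which is why the gradients are cleared at the start of each step.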
Access PyTorch model weights and biases by name and ‘requires_grad’ value
We can use the parameters method to ask any nn.Module for a list of the parameters owned by it or any of its submodules. Calling model.parameters() collects the weights and biases from all modules. It is instructive to inspect these parameters by printing their shapes.
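A short sketch of this inspection; the two-layer model is a hypothetical stand-in. Using named_parameters() rather than parameters() also exposes each parameter's name and requires_grad flag:

```python
import torch.nn as nn

# Hypothetical model used only for illustration.
model = nn.Sequential(
    nn.Linear(10, 5),
    nn.ReLU(),
    nn.Linear(5, 2),
)

# named_parameters() yields (name, tensor) pairs for every weight and bias
# owned by the module or any of its submodules.
for name, param in model.named_parameters():
    print(name, tuple(param.shape), param.requires_grad)
# e.g. "0.weight (5, 10) True", "0.bias (5,) True", ...
```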
Use a saved PyTorch model to predict single and multiple images.
We will focus on writing the inference code for a single sample image. This involves two parts: first, we prepare the image so that it can be fed to ResNet; then, we write the code to get the actual prediction from the model.
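A sketch of both parts, assuming the weights were saved with torch.save(model.state_dict(), ...); the file names ‘model.pth’ and ‘dog.jpg’ are hypothetical, and the preprocessing uses the standard ImageNet statistics ResNet was trained with:

```python
import torch
from torchvision import models, transforms
from PIL import Image

# Part 1: prepare the image so it matches ResNet's expected input.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])
img = Image.open('dog.jpg').convert('RGB')   # hypothetical image file
batch = preprocess(img).unsqueeze(0)         # add a batch dimension: (1, 3, 224, 224)

# Part 2: restore the saved weights and get the prediction.
model = models.resnet18()
model.load_state_dict(torch.load('model.pth'))  # hypothetical checkpoint file
model.eval()                                    # inference mode for batch norm/dropout

with torch.no_grad():                           # no graph needed for inference
    logits = model(batch)
pred = logits.argmax(dim=1).item()              # index of the predicted class
print(pred)
```

For multiple images, the same code applies after stacking the preprocessed tensors into one batch with torch.stack before the forward pass.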
PyTorch Confusion Matrix for multi-class image classification
A confusion matrix lets us see not only where the model was wrong, but also how it was wrong; that is, we can look at patterns of misclassification. For example, our model had an easy time differentiating ‘truck’ from ‘dog’, but a much more difficult time distinguishing ‘dog’ from ‘cat’.
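A minimal sketch of accumulating such a matrix in plain PyTorch; the label and prediction tensors below are dummy data standing in for a test set and the model's argmax output:

```python
import torch

num_classes = 3
# Hypothetical true labels and model predictions; in practice these come
# from the test set and model(images).argmax(dim=1).
labels = torch.tensor([0, 1, 2, 2, 1, 0, 2])
preds  = torch.tensor([0, 1, 1, 2, 1, 0, 0])

# Rows are true classes, columns are predicted classes.
cm = torch.zeros(num_classes, num_classes, dtype=torch.int64)
for t, p in zip(labels, preds):
    cm[t.item(), p.item()] += 1

print(cm)
# Diagonal entries are correct predictions; an off-diagonal entry like
# cm[2, 1] counts class-2 samples that were misclassified as class 1.
```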
Create a NumPy array from a PyTorch tensor using detach().numpy()
When creating an np.ndarray from a torch.Tensor, or vice versa, both objects reference the same underlying storage in memory. Since np.ndarray cannot store the computational graph associated with a tensor, that graph must first be removed with detach() when a tensor that requires gradients is to be shared with NumPy.
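A short sketch of both points: calling numpy() directly on a graph-carrying tensor fails, while detach().numpy() succeeds and shares storage, so in-place changes on one side are visible on the other:

```python
import torch

t = torch.ones(3, requires_grad=True)

# t.numpy() would raise a RuntimeError here, because t is part of a
# computational graph; detach() returns a tensor with the same storage
# but no graph attached.
arr = t.detach().numpy()

arr[0] = 42.0  # mutating the array...
print(t)       # ...changes the tensor too: tensor([42., 1., 1.], requires_grad=True)
```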