PyTorch Confusion Matrix for multi-class image classification
A confusion matrix lets us see not only where the model was wrong, but also how it was wrong; that is, we can look at patterns of misclassification. For example, our model had an easy time differentiating ‘truck’ from ‘dog’, but a much harder time distinguishing ‘dog’ from ‘cat’.
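A minimal sketch of building such a matrix from predicted and true labels, assuming class indices in [0, num_classes); the convention here (rows = true class, columns = predicted class) is one common choice:

```python
import torch

def confusion_matrix(preds, targets, num_classes):
    # cm[t, p] counts samples of true class t predicted as class p
    cm = torch.zeros(num_classes, num_classes, dtype=torch.long)
    for t, p in zip(targets, preds):
        cm[t, p] += 1
    return cm

targets = torch.tensor([0, 1, 2, 2, 1])  # illustrative labels, not real data
preds   = torch.tensor([0, 2, 2, 2, 1])
cm = confusion_matrix(preds, targets, num_classes=3)
print(cm)  # off-diagonal entries reveal which classes get confused
```

The diagonal holds the correct predictions; cell (1, 2) here shows one class-1 sample misclassified as class 2.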
Create NumPy array from PyTorch Tensor using detach().numpy()
When creating an np.ndarray from a torch.Tensor (or vice versa), both objects reference the same underlying storage in memory. Since np.ndarray does not store the computational graph associated with the tensor, the graph must be explicitly removed with detach() before calling .numpy() on a tensor that requires gradients, so that NumPy and PyTorch can safely share the same data.
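A short sketch of both points, that .numpy() fails on a graph-attached tensor, and that the detached array still shares storage with the tensor:

```python
import torch

t = torch.tensor([1.0, 2.0, 3.0], requires_grad=True)
# t.numpy() would raise a RuntimeError here, because t carries a graph
a = t.detach().numpy()   # detach() drops the graph; storage is still shared
a[0] = 99.0              # mutating the array also mutates the tensor
print(t)                 # tensor([99.,  2.,  3.], requires_grad=True)
```

If you need an independent copy instead of a shared view, use t.detach().numpy().copy().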
Use of ‘model.eval()’ and ‘with torch.no_grad()’ in PyTorch model evaluation
Using the designated modes for training (model.train()) and evaluation (model.eval()) automatically sets the behavior of the dropout and batch-normalization layers and rescales appropriately, so we do not have to worry about that at all. torch.no_grad(), by contrast, disables gradient tracking, which saves memory and computation during inference; the two are complementary and are typically used together when evaluating.
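A minimal evaluation sketch combining both, using a small stand-in model (the layer sizes are arbitrary):

```python
import torch
import torch.nn as nn

# toy model with a dropout layer, so that train/eval mode actually matters
model = nn.Sequential(nn.Linear(4, 8), nn.Dropout(p=0.5), nn.Linear(8, 2))

model.eval()              # dropout becomes identity; batchnorm would use running stats
with torch.no_grad():     # no autograd graph is built for these operations
    x = torch.randn(1, 4)
    out = model(x)

print(out.requires_grad)  # False: the output is detached from any graph
```

In eval mode the forward pass is also deterministic here, since dropout no longer zeroes activations at random.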
How to calculate running loss using loss.item() in PyTorch?
You can sum loss.item() over the batches and, when the epoch finishes, divide by the number of steps to get the mean. This gives you the correct average loss for that epoch (calling .item() also returns a plain Python float detached from the graph, so the running sum does not keep the computation graph alive). This training loss is used to see how well your model performs on the training dataset.
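A sketch of that accumulation pattern; the loop below stands in for iterating over a DataLoader, and the constant inputs are only there to make the result predictable:

```python
import torch

criterion = torch.nn.MSELoss()
running_loss = 0.0
num_batches = 0

for _ in range(5):                  # stand-in for: for batch in dataloader:
    pred = torch.zeros(3)           # dummy predictions
    target = torch.ones(3)          # dummy targets
    loss = criterion(pred, target)
    running_loss += loss.item()     # .item() -> Python float, no graph retained
    num_batches += 1

epoch_loss = running_loss / num_batches
print(epoch_loss)  # 1.0 here: MSE between zeros and ones is exactly 1.0
```

Accumulating `loss` itself (without .item()) would keep every batch's graph in memory until the epoch ends.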
Advantage of using LogSoftmax vs Softmax vs Crossentropyloss in PyTorch
The workaround is to use log-probabilities instead of probabilities, which makes the calculation numerically stable. The reformulated (log-sum-exp) version allows us to evaluate softmax with only small numerical errors even when z contains extremely large or extremely negative numbers.
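A small demonstration of the difference, assuming logits around 1000 (large enough to overflow exp in float32); note that nn.CrossEntropyLoss applies log-softmax internally, which is why models trained with it should output raw logits:

```python
import torch

z = torch.tensor([1000.0, 1001.0, 1002.0])

# naive softmax: exp(1000) overflows to inf, and inf/inf gives nan
naive = torch.exp(z) / torch.exp(z).sum()

# log_softmax uses the log-sum-exp trick (subtracting max(z)) internally
stable = torch.log_softmax(z, dim=0)

print(naive)         # tensor([nan, nan, nan])
print(stable.exp())  # finite probabilities that sum to 1
```

The stable result depends only on the differences between logits, so shifting z by a constant changes nothing.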