Recent Posts by Pragati
Use of ‘model.eval()’ and ‘with torch.no_grad()’ in PyTorch model evaluation
Using the designated modes for training (model.train()) and evaluation (model.eval()) automatically sets the behavior of the dropout and batch-normalization layers and rescales activations appropriately, so we do not have to handle any of that ourselves.
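A minimal sketch of what the post describes, assuming a hypothetical toy model (the layer sizes are illustrative, not from the post):

```python
import torch
import torch.nn as nn

# Hypothetical tiny model containing the two layer types whose
# behavior model.eval() actually changes: dropout and batch norm.
model = nn.Sequential(
    nn.Linear(4, 8),
    nn.BatchNorm1d(8),
    nn.ReLU(),
    nn.Dropout(p=0.5),
    nn.Linear(8, 2),
)

model.eval()            # dropout becomes a no-op; batch norm uses running stats
with torch.no_grad():   # skips autograd bookkeeping, saving memory and time
    x = torch.randn(3, 4)
    out = model(x)

print(out.requires_grad)  # False: no computational graph was built
```

Note that the two calls do different jobs: model.eval() changes layer behavior, while torch.no_grad() disables gradient tracking; during evaluation you typically want both.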
Advantages of ReLU vs Tanh vs Sigmoid activation function in deep neural networks.
Saturated neurons can kill gradients when the input is strongly positive or strongly negative. Sigmoid outputs are also not zero-centered, which leads to inefficient gradient updates. A third problem is the exponential function, which is somewhat computationally expensive.
Find Correlation between features and target using the correlation matrix.
You can evaluate the relationship between each feature and the target using correlation, and select the features that have the strongest relationship with the target variable. Correlation can also be used to remove redundant variables.
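A minimal sketch of this selection idea, using a hypothetical toy dataset (the column names, the 0.5 threshold, and the synthetic data are assumptions for illustration):

```python
import numpy as np
import pandas as pd

# Toy data: 'target' depends strongly on 'feat_a'; 'feat_b' is pure noise
rng = np.random.default_rng(0)
feat_a = rng.normal(size=200)
df = pd.DataFrame({
    "feat_a": feat_a,
    "feat_b": rng.normal(size=200),
    "target": 2 * feat_a + rng.normal(scale=0.1, size=200),
})

# Correlation of every feature with the target, from the correlation matrix
corr_with_target = df.corr()["target"].drop("target").abs()

# Keep only features above an (assumed) absolute-correlation threshold
selected = corr_with_target[corr_with_target > 0.5].index.tolist()
print(selected)  # only the strongly correlated feature survives
```

Remember that Pearson correlation only captures linear relationships; a feature with a strong nonlinear link to the target can still score near zero.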
Plot two overlaid histograms on a single chart with Pandas and Matplotlib.
We’ll look at a real-world example with data loaded from a CSV file, and we’ll learn how to draw overlapping histograms using Pandas and Matplotlib. Before we set up the chart, let’s look at what the end result will be.
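The core trick can be sketched in a few lines. Here synthetic data stands in for the CSV columns (the column names, bin count, and distributions are assumptions, not taken from the post); a reduced alpha lets the two overlapping histograms show through each other:

```python
import matplotlib
matplotlib.use("Agg")  # non-interactive backend so this runs as a script
import matplotlib.pyplot as plt
import numpy as np
import pandas as pd

# Synthetic stand-in for the CSV data: two overlapping distributions
rng = np.random.default_rng(1)
df = pd.DataFrame({
    "group_a": rng.normal(loc=0.0, scale=1.0, size=500),
    "group_b": rng.normal(loc=1.0, scale=1.0, size=500),
})

fig, ax = plt.subplots()
# alpha < 1 makes each set of bars translucent where they overlap
df["group_a"].plot.hist(ax=ax, bins=30, alpha=0.5, label="group_a")
df["group_b"].plot.hist(ax=ax, bins=30, alpha=0.5, label="group_b")
ax.legend()
fig.savefig("overlaid_histograms.png")
```

To read from a real file instead, replace the synthetic DataFrame with `pd.read_csv(...)` and plot the columns of interest the same way.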
How to copy PyTorch Tensor using clone, detach, and deepcopy?
If we want to make a copy of a tensor such that operations on the copy still propagate gradients back to the original tensor, we should use clone(). We should use detach() when we don’t want the copy to be part of the resulting computational graph.
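A short sketch contrasting the three copies (the tensor values are illustrative):

```python
import copy
import torch

x = torch.tensor([1.0, 2.0], requires_grad=True)

# clone(): new storage, but stays in the graph, so gradients reach x
y = x.clone()
y.sum().backward()
print(x.grad)           # gradients flowed back through the clone

# detach(): cut out of the graph (note: it shares storage with x)
d = x.detach()
print(d.requires_grad)  # False

# deepcopy: independent storage AND independent of the graph
z = copy.deepcopy(x)
z.data[0] = 99.0
print(x[0].item())      # x is untouched by edits to the deep copy
```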
PyTorch Linear Layer (Fully Connected Layer) Explained.
The weight matrix defines the linear function. This demonstrates how the network’s mapping changes as the weights are updated during training: when we update the weights, we change the function.
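This point can be sketched with a tiny linear layer (the layer sizes and the +1 weight shift are illustrative assumptions): the same input maps to a different output once the weights change, because the layer computes y = xWᵀ + b.

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
layer = nn.Linear(3, 2)   # weight shape (2, 3), bias shape (2,)

x = torch.ones(1, 3)
out_before = layer(x)

# Updating the weight matrix changes the function the layer computes
with torch.no_grad():
    layer.weight += 1.0
out_after = layer(x)

# With an all-ones input, adding 1 to every weight shifts each
# output by exactly 3 (one unit per input feature).
print(out_after - out_before)
```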