We can actually visualize each of those 26×26 slices of the feature map as a grayscale image, which gives us some sense of what types of patterns in the input the features in that convolutional layer are looking for.
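As a minimal sketch of the slicing itself (not the plotting), assume a hypothetical activation tensor of shape `(26, 26, n_channels)` taken from a convolutional layer; each channel is one grayscale image:

```python
import numpy as np

# Hypothetical conv-layer activations: height x width x channels.
# In a real model you would obtain this by running an input through the network.
feature_map = np.random.rand(26, 26, 32)

# Each 26x26 slice along the channel axis is one grayscale "image"
# that could be passed to an image-plotting function.
slices = [feature_map[:, :, c] for c in range(feature_map.shape[-1])]
print(len(slices), slices[0].shape)
```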

This is the logic behind deep networks in general. If you’re familiar with the argument for why deeper is better in, say, convolutional networks, this is the same idea.

Your parents told you to look both ways before you cross the street. The idea is the same: there’s useful information to the left and to the right that you’d like to know about before you do anything.

K-fold cross-validation has a single parameter, k, that refers to the number of groups (folds) a given dataset is split into. First, split the dataset into k groups; then take one group as the test set and the remaining groups as the training set. In this tutorial, we create a simple classification Keras model and train and evaluate it using k-fold cross-validation.

Download Dataset
This guide uses the Iris dataset to categorize flowers by species. It is a popular dataset for beginners working on machine-learning classification problems. Download […]
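The splitting step can be sketched in plain NumPy (scikit-learn's `KFold` does the same job); `kfold_indices` here is a hypothetical helper, not part of any library:

```python
import numpy as np

def kfold_indices(n_samples, k, seed=0):
    """Shuffle sample indices and split them into k roughly equal folds."""
    rng = np.random.default_rng(seed)
    indices = rng.permutation(n_samples)
    return np.array_split(indices, k)

# Each fold in turn serves as the test set; the remaining folds are the training set.
folds = kfold_indices(10, k=5)
for i, test_idx in enumerate(folds):
    train_idx = np.concatenate([f for j, f in enumerate(folds) if j != i])
    # train on train_idx, evaluate on test_idx
```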

It’s important to use the same vector space for training and prediction. The most common approach is to save the tokenizer after training and load the same tokenizer at prediction time using pickle.
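The save/load pattern looks roughly like the sketch below. To keep it self-contained, a plain word-to-index dict stands in for a fitted `keras.preprocessing.text.Tokenizer`; the same `pickle.dump`/`pickle.load` calls apply to the real object:

```python
import os
import pickle
import tempfile

# Stand-in for a fitted tokenizer: a vocabulary built on the training text.
vocab = {"the": 1, "cat": 2, "sat": 3}

path = os.path.join(tempfile.mkdtemp(), "tokenizer.pkl")

with open(path, "wb") as f:   # save once, right after training
    pickle.dump(vocab, f)

with open(path, "rb") as f:   # load at prediction time
    restored = pickle.load(f)

# Identical mapping, so new text is encoded into the same vector space.
assert restored == vocab
```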

We build a linear regression model to predict degrees Celsius from a given Fahrenheit temperature. TensorFlow Keras is our API for building linear regression models and for running machine-learning models.
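The relationship is exact, C = (F − 32) × 5/9, so before reaching for Keras it is worth seeing that an ordinary least-squares fit recovers the slope and intercept the network would learn; this NumPy sketch is a stand-in for the Keras model, not the tutorial's actual code:

```python
import numpy as np

# A few Fahrenheit readings and their exact Celsius equivalents.
fahrenheit = np.array([32.0, 50.0, 68.0, 86.0, 104.0])
celsius = (fahrenheit - 32.0) * 5.0 / 9.0

# Least-squares fit of celsius = slope * fahrenheit + intercept.
A = np.vstack([fahrenheit, np.ones_like(fahrenheit)]).T
slope, intercept = np.linalg.lstsq(A, celsius, rcond=None)[0]
print(slope, intercept)  # slope ≈ 5/9, intercept ≈ -160/9
```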

In this tutorial, you will discover how to develop a convolutional neural network to classify satellite images of the Amazon forest.

To compute the ROC curve, you first need a set of predicted probabilities so they can be compared to the actual targets. You could make these predictions on the validation set.
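In practice you would call `sklearn.metrics.roc_curve`, but the computation itself is a threshold sweep, sketched here with a hypothetical helper:

```python
import numpy as np

def roc_curve_points(y_true, y_score):
    """Return (fpr, tpr) arrays by sweeping the decision threshold from high to low."""
    order = np.argsort(-np.asarray(y_score))   # highest scores first
    y_true = np.asarray(y_true)[order]
    tps = np.cumsum(y_true)                    # true positives accepted so far
    fps = np.cumsum(1 - y_true)                # false positives accepted so far
    tpr = tps / y_true.sum()                   # recall at each threshold
    fpr = fps / (len(y_true) - y_true.sum())
    return fpr, tpr

fpr, tpr = roc_curve_points([0, 0, 1, 1], [0.1, 0.4, 0.35, 0.8])
```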

If you need a simple way to compare classifiers, combine precision and recall into a single metric called the F1 score.
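The F1 score is the harmonic mean of precision and recall, which a few lines make concrete (`f1_score` here is a hypothetical helper, shadowing nothing from any library):

```python
def f1_score(precision, recall):
    """Harmonic mean of precision and recall; 0.0 when both are 0."""
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)

# The harmonic mean punishes imbalance: F1 sits closer to the lower of the two.
print(f1_score(0.8, 0.5))  # ≈ 0.615, well below the arithmetic mean of 0.65
```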

Accuracy is not a reliable metric of model performance, because it yields misleading results when the validation set is imbalanced.
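A tiny example with hypothetical numbers makes the point: on a 95/5 imbalanced validation set, a classifier that never predicts the positive class still scores high accuracy.

```python
import numpy as np

# Imbalanced validation labels: 95 negatives, 5 positives (made-up numbers).
y_true = np.array([0] * 95 + [1] * 5)

# A useless classifier that always predicts the majority class.
y_pred = np.zeros_like(y_true)

accuracy = (y_pred == y_true).mean()
print(accuracy)  # 0.95, despite never detecting a single positive
```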