High-performance image models are trained on millions of examples and can classify thousands of categories. We can reuse such a model's architecture and trained weights, minus the classification layers, and add our own image classifier on top. We can then train it on our own image examples while keeping the reused weights fixed.

A handful of examples is not enough to train an entire image classification model from scratch, but what we can do is start from an existing general-purpose image classification model.

With TensorFlow Hub, you can build, share, and reuse pieces of machine learning.


You need to install or upgrade TensorFlow to version 1.7 or later to use TensorFlow Hub:
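Assuming a standard pip setup, the install looks like this:

```
# Upgrade TensorFlow to 1.7 or later, then install TensorFlow Hub
pip install "tensorflow>=1.7.0"
pip install tensorflow-hub
```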

2. TensorFlow Hub Image Module for Retraining

A module is actually a saved model: it contains pre-trained weights and a graph. It is composable, reusable, and re-trainable, packing up the algorithm in the form of a graph and weights.

What a Module Contains
You can find a list of all of the newly released image modules. Some of them include the classification layers, while others remove them and provide only a feature vector as output. We'll choose one of the feature-vector modules, Inception V1.
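Modules on tfhub.dev are addressed by URL, and the URL path distinguishes the two variants. The URLs below follow the published naming scheme but are assumptions to verify on tfhub.dev:

```python
# Full classifier: outputs ImageNet class scores (classification head included).
classification_url = (
    "https://tfhub.dev/google/imagenet/inception_v1/classification/1")

# Headless feature-vector module: outputs an embedding only, so a new
# classifier can be trained on top of it (what retraining expects).
feature_vector_url = (
    "https://tfhub.dev/google/imagenet/inception_v1/feature_vector/1")
```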

3. Creating a Training Set of Images

Before you start training, you'll need a set of images to teach the model about the new classes you want to recognize. For training to work well, you should gather at least a hundred photos of each category you want to recognize.

Organize Training Set

Create a folder containing class-named subfolders, each full of images for its label. The example folder fruits should have a structure like this:


Here’s what the folder structure of the fruits archive looks like.
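Sketching it with three hypothetical labels (apple, banana, and orange are placeholder names):

```
fruits/
├── apple/
│   ├── img001.jpg
│   └── ...
├── banana/
│   ├── img001.jpg
│   └── ...
└── orange/
    ├── img001.jpg
    └── ...
```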

The subfolder names are important, since they define the label applied to each image; the filenames themselves don't matter.

4. Retrain the Image Module

Once your images are prepared, and you have pip-installed TensorFlow Hub and a recent version (1.7+) of TensorFlow, you can run the training with a command like this:
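Assuming you have downloaded the retrain.py script from the TensorFlow Hub examples into the current directory, alongside the fruits folder, the command looks like:

```
python retrain.py --image_dir ./fruits
```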

Note: make sure your TensorFlow environment is active.

The script loads the pre-trained module and trains a new classifier on top of the fruits photos. You can replace the image_dir argument with any folder containing subfolders of images. The label for each image is taken from the name of the subfolder it's in.

The top layer receives as input a 2048-dimensional vector for each image. We train a softmax layer on top of this representation. If the softmax layer contains N labels, this corresponds to learning N + 2048*N model parameters for the biases and weights.
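As a quick sanity check of that arithmetic, with a hypothetical N = 5 fruit labels:

```python
feature_dim = 2048   # size of the feature vector produced per image
n_labels = 5         # hypothetical number of fruit categories (N)

weights = feature_dim * n_labels  # one weight per (feature, label) pair
biases = n_labels                 # one bias per label
total_params = weights + biases   # = N + 2048*N

print(total_params)  # 10245
```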


5. Visualize the Retraining with TensorBoard

You can visualize the graph and statistics, such as how the weights or accuracy varied during training.

Run this command during or after retraining.
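Assuming retrain.py wrote its summaries to the default /tmp/retrain_logs directory, launch TensorBoard with:

```
tensorboard --logdir /tmp/retrain_logs
```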

After TensorBoard is running, navigate your web browser to localhost:6006 to view the TensorBoard.

6. Testing the Model

The retrain.py script writes the new model, trained on your categories, to /tmp/output_graph.pb, and a text file containing the labels to /tmp/output_labels.txt. The new model contains the new classification layer.

Since you’ve replaced the top layer, you will need to specify the new name in the script.

Here’s an example of how to run the label_image example with your retrained graphs.
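A sketch of such an invocation, assuming label_image.py from the TensorFlow examples; the input and output layer names here (Placeholder, final_result) and the image path are assumptions to adjust for your setup:

```
python label_image.py \
    --graph=/tmp/output_graph.pb \
    --labels=/tmp/output_labels.txt \
    --input_layer=Placeholder \
    --output_layer=final_result \
    --image=fruits/apple/img001.jpg
```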

Changing the TensorFlow Hub Module

The script uses the highly accurate, but comparatively large and slow, Inception V3 model architecture. If you want to deploy on mobile platforms, you can try the --tfhub_module flag with a MobileNet model.

Run the floating-point version of MobileNet:
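For example, using a MobileNet feature-vector module (the exact module URL below is an assumption; check tfhub.dev for current names and versions):

```
python retrain.py \
    --image_dir ./fruits \
    --tfhub_module https://tfhub.dev/google/imagenet/mobilenet_v2_100_224/feature_vector/2
```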

These models can be converted to fully quantized mobile models via TensorFlow Lite.
