In this post, we’re going to train machine learning models capable of localizing and identifying multiple objects in an image. You’ll need to install TensorFlow and you’ll need to understand how to use the command line.

TensorFlow Object Detection API


The TensorFlow Object Detection API is a framework built on top of TensorFlow that makes it easy to construct, train, and deploy object detection models. This post walks through the steps required to train an object detection model locally.

1. Clone Repository


git clone https://github.com/tensorflow/models.git

Alternatively, you can download the repository directly as a ZIP file.

2. Installation


The TensorFlow Object Detection API depends on the following libraries:

  • Protobuf 2.6
  • Pillow 1.0
  • Lxml

On Ubuntu, these can be installed with:

sudo apt-get install protobuf-compiler python-pil python-lxml

The API uses Protobufs to configure model and training parameters. Before the framework can be used, the Protobuf libraries must be compiled. This should be done by running the following command from the tensorflow/models directory:

# From tensorflow/models/
protoc object_detection/protos/*.proto --python_out=.

Add Libraries to PYTHONPATH

When running locally, the tensorflow/models/ and slim directories should be appended to PYTHONPATH. This can be done by running the following from tensorflow/models/:

# From tensorflow/models/
export PYTHONPATH=$PYTHONPATH:`pwd`:`pwd`/slim

Note: This command needs to be run in every new terminal you start. If you wish to avoid running it manually, you can add it as a new line to the end of your ~/.bashrc file.
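For example, assuming the repository was cloned to /home/you/tensorflow/models (a placeholder path; use your own), the line to append to ~/.bashrc would be:

# Added to ~/.bashrc; adjust the path to your clone of tensorflow/models
export PYTHONPATH=$PYTHONPATH:/home/you/tensorflow/models:/home/you/tensorflow/models/slim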

Testing the Installation

You can test that you have correctly installed the TensorFlow Object Detection API by running the following command:

# From tensorflow/models/
python object_detection/builders/model_builder_test.py

The above command runs the model builder tests; if everything is installed correctly, they should all pass.

3. Preparing Inputs


The TensorFlow Object Detection API reads data in the TFRecord file format. Two sample scripts (create_pascal_tf_record.py and create_pet_tf_record.py) are provided to convert a dataset to TFRecords. The training input data should be organized in the following directory structure:

# From the tensorflow/models/ directory
+ annotations
    - xmls
    - trainval.txt
+ images
+ label.pbtxt
  • To prepare the input files for the sample scripts, you need two things: an RGB image encoded as JPEG or PNG, and a list of bounding boxes (xmin, ymin, xmax, ymax) for the image together with the class of the object in each bounding box.
  • I collected a subset of the pet image dataset in the images folder.

Afterward, I labeled the images manually with LabelImg. LabelImg is a graphical image annotation tool written in Python. It's super easy to use, and the annotations are saved as XML files (a trimmed sample annotation is shown after the command below). Save the annotation XML files in the annotations/xmls folder. Then create trainval.txt in the annotations folder; it lists the names of the images without their extensions. Use the following command to generate trainval.txt:

# From the tensorflow/models/ directory
ls images | grep ".jpg" | sed s/.jpg// > annotations/trainval.txt
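
For reference, a LabelImg annotation for one image looks roughly like this (trimmed to the essential fields; the filename, image size, and box coordinates are made-up examples):

<annotation>
    <folder>images</folder>
    <filename>dog_001.jpg</filename>
    <size>
        <width>640</width>
        <height>480</height>
        <depth>3</depth>
    </size>
    <object>
        <name>Dog</name>
        <bndbox>
            <xmin>48</xmin>
            <ymin>60</ymin>
            <xmax>320</xmax>
            <ymax>420</ymax>
        </bndbox>
    </object>
</annotation>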

Label Maps

Each dataset is required to have a label map associated with it. This label map defines a mapping from string class names to integer class IDs. Label maps should always start from id 1. Create a label.pbtxt file with the following label map:

item {
  id: 1
  name: 'Dog'
}
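
If your dataset has more than one class, add one item entry per class with consecutive ids. The 'Cat' entry below is only an illustration:

item {
  id: 1
  name: 'Dog'
}
item {
  id: 2
  name: 'Cat'
}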

Generating the Pet TFRecord Files

Run the following command:

# From the tensorflow/models/ directory
python object_detection/create_pet_tf_record.py \
    --label_map_path=object_detection/label.pbtxt \
    --data_dir=`pwd` \
    --output_dir=`pwd`

You should end up with two TFRecord files named pet_train.record and pet_val.record in the tensorflow/models directory.
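
As a quick sanity check (my own addition, not part of the official workflow), you can count the examples in a generated file with a few lines of Python, using the TensorFlow 1.x tf.python_io API that the rest of this post relies on:

# Count the examples in a generated TFRecord file (TensorFlow 1.x API).
import tensorflow as tf

count = sum(1 for _ in tf.python_io.tf_record_iterator('pet_train.record'))
print('pet_train.record contains %d examples' % count)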

4. Training the Model


After creating the required input files for the API, you can train your model. Start the training job with the following command:

# From the tensorflow/models/ directory
python object_detection/train.py \
    --logtostderr \
    --pipeline_config_path=/tensorflow/models/object_detection/samples/configs/ssd_mobilenet_v1_pets.config \
    --train_dir=/tensorflow/models/pet

The repository also provides sample config files for object detection training pipelines. For my training, I used ssd_mobilenet_v1_pets.config as a basis. I needed to change num_classes to one and set the paths (PATH_TO_BE_CONFIGURED) for the model checkpoint, the train and test data files, and the label map. For other settings such as the learning rate and batch size, I kept the defaults.
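
For orientation, here is a heavily trimmed sketch of the parts of ssd_mobilenet_v1_pets.config that I changed (the actual file has many more settings; the input paths shown assume the TFRecord and label map locations used earlier in this post):

model {
  ssd {
    num_classes: 1  # changed for the single 'Dog' class
    ...
  }
}
train_config {
  fine_tune_checkpoint: "PATH_TO_BE_CONFIGURED/model.ckpt"
  ...
}
train_input_reader {
  tf_record_input_reader {
    input_path: "pet_train.record"
  }
  label_map_path: "object_detection/label.pbtxt"
}
eval_input_reader {
  tf_record_input_reader {
    input_path: "pet_val.record"
  }
  label_map_path: "object_detection/label.pbtxt"
}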

5. Running the Evaluation Job


Evaluation is run as a separate job. The eval job will periodically poll the training directory for new checkpoints and evaluate them on a test dataset. The job can be run using the following command:

# From the tensorflow/models/ directory
python object_detection/eval.py \
    --logtostderr \
    --pipeline_config_path=${PATH_TO_YOUR_PIPELINE_CONFIG} \
    --checkpoint_dir=${PATH_TO_TRAIN_DIR} \
    --eval_dir=${PATH_TO_EVAL_DIR}

where ${PATH_TO_YOUR_PIPELINE_CONFIG} points to the pipeline config, ${PATH_TO_TRAIN_DIR} points to the directory in which training checkpoints were saved (the same as the training job), and ${PATH_TO_EVAL_DIR} points to the directory in which evaluation events will be saved. As with the training job, the eval job runs until terminated by default.

6. Running TensorBoard


Progress for training and eval jobs can be inspected using TensorBoard. If using the recommended directory structure, TensorBoard can be run using the following command:

tensorboard --logdir=${PATH_TO_MODEL_DIRECTORY}

where ${PATH_TO_MODEL_DIRECTORY} points to the directory that contains the train and eval directories. Please note it may take TensorBoard a couple of minutes to populate with data.
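
With the recommended layout, the model directory contains the outputs of both jobs, for example:

${PATH_TO_MODEL_DIRECTORY}
  + train   <- the --train_dir of the training job
  + eval    <- the --eval_dir of the eval job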

7. Exporting the TensorFlow Graph


After your model has been trained, you should export it to a TensorFlow graph proto. First, you need to identify a candidate checkpoint to export. The checkpoint will typically consist of three files in the pet folder:

  1. model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001
  2. model.ckpt-${CHECKPOINT_NUMBER}.index
  3. model.ckpt-${CHECKPOINT_NUMBER}.meta

Run the following command to export the TensorFlow graph. Change the checkpoint number to match the checkpoint you want to export:

# From the tensorflow/models/ directory
python object_detection/export_inference_graph.py \
    --input_type image_tensor \
    --pipeline_config_path=/tensorflow/models/object_detection/samples/configs/ssd_mobilenet_v1_pets.config \
    --trained_checkpoint_prefix pet/model.ckpt-200 \
    --output_directory parrot/exported_model_directory
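
Finally, here is a minimal inference sketch using the exported graph. It assumes the export step above wrote frozen_inference_graph.pb into parrot/exported_model_directory (the output directory used above) and that test.jpg is a placeholder for your own image; the tensor names are the standard ones in graphs exported by the Object Detection API:

# Minimal inference sketch for the exported frozen graph (TensorFlow 1.x).
import numpy as np
import tensorflow as tf
from PIL import Image

# Load the frozen graph produced by export_inference_graph.py.
graph_def = tf.GraphDef()
with tf.gfile.GFile('parrot/exported_model_directory/frozen_inference_graph.pb', 'rb') as f:
    graph_def.ParseFromString(f.read())

graph = tf.Graph()
with graph.as_default():
    tf.import_graph_def(graph_def, name='')

with tf.Session(graph=graph) as sess:
    # The model expects a batch of RGB images, shape [1, height, width, 3].
    image = np.expand_dims(np.array(Image.open('test.jpg').convert('RGB')), axis=0)
    boxes, scores, classes = sess.run(
        ['detection_boxes:0', 'detection_scores:0', 'detection_classes:0'],
        feed_dict={'image_tensor:0': image})
    # Boxes are normalized [ymin, xmin, ymax, xmax]; print the top detection.
    print(boxes[0][0], scores[0][0], classes[0][0])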

 
