Image Classify Using TensorFlow Lite

Machine learning adds great power to your mobile app. TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices, while TensorFlow Lite is designed for small ones, so models are easier, faster, and smaller to run on mobile devices.

Getting Started with an Android App


This post contains an example Android application that uses TensorFlow Lite. The app is a simple camera app that continuously classifies images using a quantized MobileNets model.

Step 1: Decide which Model to use


Depending on the use case, you may choose one of the popular open-source models such as InceptionV3 or MobileNets, re-train these models with your own custom data set, or even build your own custom model. In this example, we use a pre-trained MobileNets model.

Step 2: Add TensorFlow Lite Android AAR


Android apps need to be written in Java, while core TensorFlow is written in C++, so a JNI library is provided to interface between the two. Its interface is aimed only at inference: it provides the ability to load a graph, set up inputs, and run the model to calculate particular outputs.

This app uses a pre-compiled TensorFlow Lite Android Archive (AAR), which is hosted on JCenter.

The following lines in the app’s build.gradle file include the newest version of the AAR, from the TensorFlow Maven repository, in the project.
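A sketch of that dependency, assuming the standard TensorFlow Lite artifact coordinates (the '+' version picks up the newest published AAR):

dependencies {
    // '+' resolves to the newest published version of the TensorFlow Lite AAR
    compile 'org.tensorflow:tensorflow-lite:+'
}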

We use the following block to instruct the Android Asset Packaging Tool that .lite or .tflite assets should not be compressed. This is important because the .lite file will be memory-mapped, and memory-mapping will not work when the file is compressed.
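A sketch of that block in build.gradle:

android {
    aaptOptions {
        // Keep model files uncompressed so they can be memory-mapped at runtime
        noCompress "tflite"
        noCompress "lite"
    }
}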

Step 3: Add your model files to the project


Download the quantized MobileNet TensorFlow Lite model from here, unzip it, and copy mobilenet_quant_v1_224.tflite and label.txt to the assets directory: src/main/assets

Step 4: Load TensorFlow Lite Model


TensorFlow Lite’s Java API supports on-device inference and is provided as an Android Studio Library that allows loading models, feeding inputs, and retrieving inference outputs.

The Interpreter class drives model inference with TensorFlow Lite. In most cases, this is the only class an app developer will need. The Interpreter can be initialized with a MappedByteBuffer of the model file:
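A minimal sketch of that initialization, following the pattern of the TensorFlow Lite demo app (the ImageClassifier class name is an assumption of this sketch):

import android.app.Activity;
import android.content.res.AssetFileDescriptor;
import org.tensorflow.lite.Interpreter;

import java.io.FileInputStream;
import java.io.IOException;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;

public class ImageClassifier {
    // Model file copied into src/main/assets in Step 3.
    private static final String MODEL_PATH = "mobilenet_quant_v1_224.tflite";

    private final Interpreter tflite;

    public ImageClassifier(Activity activity) throws IOException {
        // Memory-map the model file and hand it to the Interpreter.
        tflite = new Interpreter(loadModelFile(activity));
    }

    // Memory-maps the model file from the assets directory.
    private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
        AssetFileDescriptor fileDescriptor = activity.getAssets().openFd(MODEL_PATH);
        FileInputStream inputStream = new FileInputStream(fileDescriptor.getFileDescriptor());
        FileChannel fileChannel = inputStream.getChannel();
        long startOffset = fileDescriptor.getStartOffset();
        long declaredLength = fileDescriptor.getDeclaredLength();
        return fileChannel.map(FileChannel.MapMode.READ_ONLY, startOffset, declaredLength);
    }
}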

The input image is passed in as a ByteBuffer sized to hold one frame once it has been converted into the model's input format. The interpreter can also accept float arrays directly as input, but a ByteBuffer is more efficient because it avoids extra copies inside the interpreter.
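A sketch of the allocation, using the input dimensions described in the Conclusion (the imgData field name is an assumption carried through these sketches):

// Direct ByteBuffer for one quantized 224x224 RGB frame:
// 1 image per batch * 224 * 224 pixels * 3 bytes per pixel.
imgData = ByteBuffer.allocateDirect(1 * 224 * 224 * 3);
imgData.order(ByteOrder.nativeOrder());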

The following lines load the label list and create the output buffer:
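A sketch of those lines (the label file name matches Step 3; the helper and field names are assumptions):

// Read one label per line from the label file bundled in the assets directory.
private List<String> loadLabelList(Activity activity) throws IOException {
    List<String> labels = new ArrayList<>();
    BufferedReader reader = new BufferedReader(
            new InputStreamReader(activity.getAssets().open("label.txt")));
    String line;
    while ((line = reader.readLine()) != null) {
        labels.add(line);
    }
    reader.close();
    return labels;
}

// One output slot per label; the interpreter writes a probability into each.
labelList = loadLabelList(activity);
labelProbArray = new float[1][labelList.size()];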

The output buffer is a float array with one element for each label, into which the model writes the output probabilities.

Running Model Inference

If a model takes only one input and returns only one output, the following will trigger an inference run:
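With the buffers created above, that call is simply:

// Runs inference and fills labelProbArray with one probability per label.
tflite.run(imgData, labelProbArray);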

For models with multiple inputs or multiple outputs, use:
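// inputs is an Object[] of input buffers; map_of_indices_to_outputs is a Map<Integer, Object>.
tflite.runForMultipleInputsOutputs(inputs, map_of_indices_to_outputs);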

where each entry in inputs corresponds to an input tensor and map_of_indices_to_outputs maps indices of output tensors to the corresponding output data. In both cases, the tensor indices should correspond to the values given to the TensorFlow Lite Optimized Converter when the model was created. Be aware that the order of tensors in inputs must match the order given to the TensorFlow Lite Optimized Converter.

The following method takes a Bitmap as input, runs the model, and returns the text to print in the app.
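A sketch of that method; convertBitmapToByteBuffer() and printTopKLabels() are assumed helper names (the latter is sketched in the Conclusion below):

// Classifies a frame: fills imgData from the Bitmap, runs the model,
// and returns the top results as display text.
private String classifyFrame(Bitmap bitmap) {
    convertBitmapToByteBuffer(bitmap);    // copy pixel data into the input ByteBuffer
    tflite.run(imgData, labelProbArray);  // run inference; fills labelProbArray
    return printTopKLabels();             // format the highest-probability labels
}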

This method does three things. First, it converts and copies the input Bitmap into the imgData ByteBuffer for input to the model. Then it calls the interpreter's run method, passing the input buffer and the output array as arguments. The interpreter sets the values in the output array to the probability calculated for each class. The input and output nodes are defined by the arguments to the toco conversion step that created the .lite model file earlier.

Conclusion


The app resizes each camera image frame to 224 * 224 (width * height) to match the quantized MobileNet model being used. The resized image is converted, row by row, into a ByteBuffer of 1 * 224 * 224 * 3 bytes, where 1 is the number of images in a batch, 224 * 224 is the width and height of the image, and 3 bytes represent the three color channels of a pixel. The app uses the TensorFlow Lite Java inference API for models that take a single input and provide a single output. The output is a two-dimensional array of shape [1][number of categories], holding a confidence value for each category. The MobileNet model has 1001 unique categories, and the app sorts the probabilities of all the categories and displays the top three. The quantized MobileNet model is bundled within the assets directory of the app.
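A sketch of that sorting step, implementing the printTopKLabels() helper assumed earlier (a small min-heap keeps only the three most probable labels):

// Keep the three most probable labels using a min-heap of (label, probability) pairs.
private String printTopKLabels() {
    PriorityQueue<Map.Entry<String, Float>> sortedLabels =
            new PriorityQueue<>(3, (a, b) -> Float.compare(a.getValue(), b.getValue()));
    for (int i = 0; i < labelList.size(); i++) {
        sortedLabels.add(new AbstractMap.SimpleEntry<>(labelList.get(i), labelProbArray[0][i]));
        if (sortedLabels.size() > 3) {
            sortedLabels.poll();  // drop the least probable entry so only the top three remain
        }
    }
    StringBuilder text = new StringBuilder();
    while (!sortedLabels.isEmpty()) {
        Map.Entry<String, Float> label = sortedLabels.poll();
        text.insert(0, String.format("%s: %4.2f\n", label.getKey(), label.getValue()));
    }
    return text.toString();
}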

 

Download this project from GitHub

Related Posts

TensorFlow Lite

Train Image classifier with TensorFlow

 
