Training Word Embeddings with the TensorFlow High-Level API

In this post, you will learn how to implement a skip-gram model in TensorFlow to generate word vectors and then use TensorBoard to visualize them.

TensorBoard embedding visualization

The accompanying Jupyter notebook can run locally or on Colaboratory.

One of the key ideas of word embeddings is representing words in a way that lets a model automatically understand analogies such as “man is to woman as king is to queen.” With embeddings, you’ll be able to build NLP applications even with relatively small training sets.

Word2Vec


Word2Vec uses the distributional theory of meaning, predicting between every word and its context words. There are two methods for producing word vectors: 1. the Continuous Bag-of-Words model (CBOW) and 2. the Skip-Gram model. In this post we implement the Skip-Gram method.

Skip-Gram

TensorFlow Skip-gram model

The idea of the skip-gram model is that, for each estimation step, you take one word as the center word. In the example above, “for” is the center word, and you try to predict the words in its context out to some window size. The model defines a probability distribution: the probability of a word appearing in the context given the center word.

1. Setup


This code requires Python 3 and TensorFlow v1.4+.

2. Download Dataset


The “IMDB dataset” is a set of around 50,000 positive or negative movie reviews from the Internet Movie Database. Each review consists of a series of word indexes ranging from 4 (the most frequent word in the dataset, “the”) to 4999. Index 1 marks the beginning of a sentence and index 2 marks an unknown word. TensorFlow provides a Keras API for downloading the dataset.
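Here is a minimal sketch of the download step, assuming the Keras IMDB loader with num_words=5000 (so indexes run up to 4999, matching the description above):

```python
import tensorflow as tf

# Load the IMDB reviews, keeping only the 5,000 most frequent words.
# Indexes 0-3 are reserved (padding, start-of-sequence, unknown, unused),
# so real words start at index 4.
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.imdb.load_data(
    num_words=5000)
```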

3. Pad Sequences


After loading the data into memory, pad each of the sentences with 0 to a fixed length (200). This gives you two 2-dimensional arrays of shape 25000x200, for training and testing respectively.
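A sketch of the padding step; padding='post' (zeros appended at the end) is an assumption about where the zeros go:

```python
sentence_size = 200  # fixed length used in this post

x_train_padded = tf.keras.preprocessing.sequence.pad_sequences(
    x_train, maxlen=sentence_size, padding='post', value=0)
x_test_padded = tf.keras.preprocessing.sequence.pad_sequences(
    x_test, maxlen=sentence_size, padding='post', value=0)

print(x_train_padded.shape)  # (25000, 200)
```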

4. Input Function


You need to convert the data from NumPy arrays into tensors. The tf.data module provides classes that allow you to easily load data, manipulate it, and pipe it into your model.

To construct a Dataset from some tensors in memory, you can use tf.data.Dataset.from_tensor_slices().
You might create one function to import the training set and another function to import the test set.

This input function builds an input pipeline that yields batches of (features, labels) pairs, where features is a dictionary of features.
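A sketch of the two input functions, using the padded arrays from above; the feature key 'x' and the batch size of 100 are illustrative choices:

```python
def train_input_fn():
    # Build a Dataset from the in-memory NumPy arrays.
    dataset = tf.data.Dataset.from_tensor_slices(
        ({'x': x_train_padded}, y_train))
    dataset = dataset.shuffle(buffer_size=25000).repeat().batch(100)
    return dataset.make_one_shot_iterator().get_next()

def eval_input_fn():
    dataset = tf.data.Dataset.from_tensor_slices(
        ({'x': x_test_padded}, y_test))
    dataset = dataset.batch(100)
    return dataset.make_one_shot_iterator().get_next()
```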

5. Create Feature Columns


When you build an Estimator model, you pass a list of feature columns that describe each of the features you want the model to use.

categorical_column_with_identity is the right choice for this text input. You then need to wrap the feature column in an embedding_column; the representation seen by the model is the mean of the embeddings for each token. We can then plug the embedded features into a pre-canned DNNClassifier.
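A hedged sketch of the two columns, assuming the 5,000-word vocabulary from above and an illustrative embedding dimension of 50:

```python
vocab_size = 5000
embedding_size = 50  # illustrative embedding dimension

word_id_column = tf.feature_column.categorical_column_with_identity(
    'x', num_buckets=vocab_size)
# combiner='mean' averages the token embeddings, as described above.
word_embedding_column = tf.feature_column.embedding_column(
    word_id_column, dimension=embedding_size, combiner='mean')
```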

6. Estimator


You don’t have to worry about creating the computational graph or sessions, since pre-made Estimators handle all the “plumbing” for you. DNNClassifier is a pre-made Estimator class that trains classification models through dense, feed-forward neural networks.
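A sketch of creating the classifier; the hidden_units value and model_dir are illustrative settings, not the post's exact configuration:

```python
classifier = tf.estimator.DNNClassifier(
    hidden_units=[100],                       # illustrative hidden layer size
    feature_columns=[word_embedding_column],
    model_dir='/tmp/imdb_embeddings')         # checkpoints and summaries go here
```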

7. Train Model


Estimators expect an input_fn to take no arguments.
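Since train_input_fn above takes no arguments, it can be passed directly; a lambda is the usual wrapper when parameters are needed. A sketch (the steps value is illustrative):

```python
classifier.train(input_fn=train_input_fn, steps=25000)

# With a parameterized input function, wrap it so the Estimator still
# receives a zero-argument callable:
# classifier.train(input_fn=lambda: my_input_fn(x_train_padded, y_train),
#                  steps=25000)
```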

Evaluate Model


The following code block evaluates the accuracy of the trained model on the test data.

Running this code yields the following output (or something similar):

Test set accuracy: 0.821

8. Visualizing Embeddings


TensorBoard includes the Embedding Projector, a tool for interactively visualizing embeddings. It can read embeddings from your model and render them in two or three dimensions.

Metadata

You need to attach labels to the data points. You can do this by generating a metadata file containing the labels for each point and clicking “Load data” in the data panel of the Embedding Projector.
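A hypothetical sketch of generating metadata.tsv from the Keras word index, so that row i of the file labels embedding row i; the +3 offset matches the reserved indexes described earlier:

```python
word_index = tf.keras.datasets.imdb.get_word_index()
index_to_word = {index + 3: word for word, index in word_index.items()}
index_to_word[0] = '<pad>'
index_to_word[1] = '<start>'
index_to_word[2] = '<unknown>'

# One label per line, ordered by word index (path matches model_dir above).
with open('/tmp/imdb_embeddings/metadata.tsv', 'w') as f:
    for i in range(5000):
        f.write(index_to_word.get(i, '<unused>') + '\n')
```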

 

TensorFlow Lite Object Detection in Android App

Object detection in images is an important task for applications including self-driving cars, face detection, video surveillance, and counting objects in an image. You can implement a CNN-based object detection algorithm in a mobile app.

TensorFlow Lite is a great solution for object detection with high accuracy. The SSD model is created using the TensorFlow Object Detection API; it extracts image feature maps and applies a convolutional layer to find bounding boxes for recognized objects.

Android Demo App


The demo app is available on GitHub. It is a simple camera app that demonstrates an SSD-MobileNet model trained using the TensorFlow Object Detection API, localizing and tracking objects in the camera preview in real time. To run the demo, you need a device running Android 5.0 (API 21) or higher.

Building TensorFlow Lite on Android


In this tutorial we will use Bazel to build the TensorFlow Lite mobile demo APK and deploy it with ADB on the command line.

Prerequisites

  1. Install the latest Android SDK tools (26.1.1+).
  2. Install the latest version of Bazel (0.13.0+).
  3. TensorFlow Lite requires the NDK (r16b+) to build the native (C/C++) code.
  4. Bazel requires Android Build Tools 27.0.3+.
  5. You also need to install the Android Support Repository, available through Android Studio under Android SDK Manager -> SDK Tools -> Android Support Repository.

Clone the TensorFlow repo

In the root of the TensorFlow repository, update the tensorflow/WORKSPACE file with the API level and the locations of the SDK and NDK.

Create a Bazel WORKSPACE

Every workspace must have a text file named WORKSPACE located in the top-level workspace directory. Enter the following at the command line to create a workspace:

Download pre-trained model

Download the quantized mobilenet_ssd TensorFlow Lite model, unzip it, and copy mobilenet_ssd.tflite to the assets directory: tensorflow/contrib/lite/examples/android/assets/

Build the source code

To build the demo app, run the following command at the command prompt:

Install

After building, use the following command from your workspace root to install the APK:

TensorFlow Lite object detection

The app uses a multi-box model to try to draw bounding boxes around the locations of people in the camera preview. These boxes are annotated with the confidence for each detection result.

Retrain Image Classifier Model using TensorFlow Hub

High-performance models are trained on millions of examples and can easily classify thousands of categories. We can reuse the architecture and trained weights of such a model without its classification layers. In that way, we can add our own image classifier on top, train it on our image examples, and keep the reused weights fixed.

A couple of examples is not enough to train an entire image classification model from scratch, but what we can do is start from an existing general-purpose image classification model.

With TensorFlow Hub, you can build, share and reuse pieces of machine learning.

1. Installation


You need to install or upgrade TensorFlow to 1.7+ to use TensorFlow Hub:

2. TensorFlow Hub Image Module for Retraining


A module is actually a saved model. It contains pre-trained weights and graphs; it is composable, reusable, and re-trainable. It packages the algorithm in the form of a graph and weights.

You can find a list of all of the newly released image modules. Some of them include the classification layers, and some strip them, providing just a feature vector as output. We’ll choose one of the feature vector modules, Inception V1.
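A sketch of loading a feature-vector module with the TF Hub 1.x API; the module handle below is the assumed Inception V1 feature-vector URL:

```python
import tensorflow as tf
import tensorflow_hub as hub

# Assumed handle for the Inception V1 feature-vector module.
module = hub.Module(
    "https://tfhub.dev/google/imagenet/inception_v1/feature_vector/1")
height, width = hub.get_expected_image_size(module)

images = tf.placeholder(tf.float32, [None, height, width, 3])
features = module(images)  # a batch of feature vectors
```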

3. Creating a Training Set of Images


Before you start training, you’ll need a set of images to teach the model about the new classes you want to recognize. For training to work well, you should gather at least a hundred photos of each category you want to recognize.

Organize Training Set

You have a folder containing class-named subfolders, each full of images for each label. The example folder fruits should have a structure like this:

Image classifier folder

Here’s what the folder structure of the fruits archive looks like.

The subfolder names are important since they define what label is applied to each image, but the filenames themselves don’t matter. The label for each image is taken from the name of the subfolder it’s in.

4. Retrain the Image Module

Once your images are prepared, and you have pip-installed TensorFlow Hub and a recent version (1.7+) of TensorFlow, you can run the training with a command like this:

Note: run this inside an active TensorFlow environment.

The script loads the pre-trained module and trains a new classifier on top of the fruit photos. You can replace the image_dir argument with any folder containing subfolders of images; the label for each image is taken from the name of the subfolder it’s in.

The top layer receives as input a 2048-dimensional vector for each image. We train a softmax layer on top of this representation. If the softmax layer contains N labels, this corresponds to learning N + 2048*N model parameters for the biases and weights. For example, with 5 fruit classes that is 5 + 2048*5 = 10,245 parameters.

5. TensorBoard


You can visualize the graph and statistics, such as how the weights or accuracy varied during training.

Run this command during or after retraining.

After TensorBoard is running, navigate your web browser to localhost:6006 to view the TensorBoard.

6. Testing the Model


The retrain.py script writes the new model trained on your categories to /tmp/output_graph.pb and a text file containing the labels to /tmp/output_labels.txt. The new model contains the new classification layer.

Since you’ve replaced the top layer, you will need to specify the new name in the script.

Here’s an example of how to run the label_image example with your retrained graphs.
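A rough Python sketch of the same idea: load the retrained graph and classify one image. The tensor names "Placeholder:0" and "final_result:0" and the 224x224 input size are assumptions based on the Hub version of retrain.py's defaults; adjust them to match your graph and module:

```python
import numpy as np
import tensorflow as tf

with tf.gfile.GFile('/tmp/output_graph.pb', 'rb') as f:
    graph_def = tf.GraphDef()
    graph_def.ParseFromString(f.read())
tf.import_graph_def(graph_def, name='')

labels = [line.strip() for line in tf.gfile.GFile('/tmp/output_labels.txt')]

with tf.Session() as sess:
    jpeg = tf.read_file('fruit.jpg')                 # hypothetical test image
    image = tf.image.decode_jpeg(jpeg, channels=3)
    image = tf.image.resize_images(image, [224, 224])
    image = tf.cast(image, tf.float32) / 255.0       # module expects [0, 1]
    batch = sess.run(tf.expand_dims(image, 0))
    probs = sess.run('final_result:0', feed_dict={'Placeholder:0': batch})
    print(labels[int(np.argmax(probs))])
```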

Change the TensorFlow Hub Module


By default, the script uses the highly accurate but comparatively large and slow Inception V3 model architecture. If you want to deploy on mobile platforms, you can try the --tfhub_module flag with a MobileNet model.

Run the floating-point version of MobileNet

These models can be converted to fully quantized mobile models via TensorFlow Lite.

Feeding your own data set into the CNN model in TensorFlow

I’m assuming you know about Neural Networks and Convolutional Neural Networks, as I won’t go into too much detail about their background and how they work. I am using TensorFlow as the machine learning framework. In case you are not familiar with TensorFlow, make sure to check out my recent post on getting started with TensorFlow.

Dataset


The Kaggle Dogs vs. Cats dataset consists of 25,000 color images of dogs and cats that we use for training. Each image is a different size, with pixel intensities represented as [0, 255] integer values in RGB color space.

TFRecords

You need to convert the data to the native TFRecord format. Google provides a single script for converting image data to TFRecord format.

When the script finishes you will find 2 shards for the training and validation files in the DATA_DIR. The files will match the patterns train-?????-of-00002 and validation-?????-of-00002, respectively.

Convolutional Neural Network Architecture


We use three types of layers to build ConvNet architectures: Convolutional Layer, Pooling Layer, and Fully-Connected Layer. We will stack these layers to form a full ConvNet architecture.

Building the CNN for Image Classifier

You’re inputting a 252x252x3 RGB image and trying to recognize either a dog or a cat. Let’s build a neural network to do this.

The network used in this post is inspired by and similar to one of the classic neural networks, LeNet-5.
convolution neural network architecture
The input is a 252x252x3 image. The first layer uses 32 5x5 filters with a stride of 1 and same padding, which keeps the spatial size at 252x252 with 32 channels. Next, apply max pooling with a 2x2 filter and strides=2. This reduces the height and width of the representation by a factor of 2, so 252x252x32 becomes 126x126x32; the number of channels remains the same. We’ll call this max pooling 1.

Next, take the 126x126x32 volume and apply another convolution layer, this time with 64 filters of size 5x5 and stride 1. You end up with a 126x126x64 volume called conv2. Then apply max pooling again with a 2x2 filter and strides=2, which halves the height and width of the 126x126x64 volume, giving 63x63x64.
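A sketch of these layers with the TF 1.x tf.layers API; the feature key 'x' is an assumption, and the shape comments follow the arithmetic above:

```python
import tensorflow as tf

# features['x'] is assumed to hold the flattened 252x252x3 image batch.
input_layer = tf.reshape(features['x'], [-1, 252, 252, 3])

conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[5, 5],
                         padding='same', activation=tf.nn.relu)  # 252x252x32
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2],
                                strides=2)                       # 126x126x32

conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[5, 5],
                         padding='same', activation=tf.nn.relu)  # 126x126x64
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2],
                                strides=2)                       # 63x63x64
```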

Dense Layer

Next, we want to add a dense layer (with 1,024 neurons and ReLU activation) to our CNN to perform classification on the features extracted by the convolution/pooling layers.

Before we connect the layer, we’ll flatten our feature map (max pooling 2) to shape [batch_size, features], so that our tensor has only two dimensions. Since 63x63x64 = 254,016, we flatten the output to a 254016-dimensional vector per example; you can also think of the flattened result as just a set of neurons.
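A sketch of the flatten and dense steps, continuing from the pool2 tensor above:

```python
pool2_flat = tf.reshape(pool2, [-1, 63 * 63 * 64])  # [batch_size, 254016]
dense = tf.layers.dense(inputs=pool2_flat, units=1024,
                        activation=tf.nn.relu)
```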

Logits Layer

You now have 1,024 real numbers that you can feed to a softmax unit. Since we’re classifying images as either dog or cat, this is a softmax with 2 outputs, so this is a reasonably typical example of what a convolutional network looks like.
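A one-line sketch of the logits layer, reusing the dense tensor from above:

```python
logits = tf.layers.dense(inputs=dense, units=2)  # raw scores for the 2 classes
```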

Generate Predictions

The logits layer of our model returns our predictions as raw values in a [batch_size, 2]-dimensional tensor. Let’s convert these raw values into two different formats that our model function can return:

  • The predicted class for each example: Dog or Cat
  • The probabilities for each class

Our predicted class is the element in the corresponding row of the logits tensor with the highest raw value. We can find the index of this element using the tf.argmax function:

The input argument specifies the tensor from which to extract maximum values, here logits. The axis argument specifies the axis of the input tensor along which to find the greatest value. Here, we want to find the largest value along the dimension with index 1, which corresponds to our predictions (recall that our logits tensor has shape [batch_size, 2]).

We can derive probabilities from our logits layer by applying softmax activation using tf.nn.softmax:
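A sketch of both prediction formats inside the model function; the softmax op is named 'softmax_tensor' so it can be referenced later by the logging hook:

```python
predictions = {
    'classes': tf.argmax(input=logits, axis=1),
    'probabilities': tf.nn.softmax(logits, name='softmax_tensor'),
}
if mode == tf.estimator.ModeKeys.PREDICT:
    return tf.estimator.EstimatorSpec(mode=mode, predictions=predictions)
```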

Calculate Loss

The loss measures how closely the model’s predictions match the target classes. For classification problems, cross entropy is typically used as the loss metric. The following code calculates cross entropy when the model runs in either TRAIN or EVAL mode:
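A sketch of the loss, using the sparse cross-entropy helper since the labels are integer class IDs:

```python
loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)
```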

Training Operation

We defined the loss for the model as the softmax cross-entropy of the logits layer and our labels. Let’s configure our model to optimize this loss value during training. We’ll use a learning rate of 0.001 and stochastic gradient descent as the optimization algorithm:
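A sketch of the training branch of the model function:

```python
if mode == tf.estimator.ModeKeys.TRAIN:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(loss=loss,
                                  global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)
```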

Add evaluation metrics

Define the eval_metric_ops dict in EVAL mode as follows:
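A sketch of the evaluation branch, reporting accuracy against the predicted classes:

```python
eval_metric_ops = {
    'accuracy': tf.metrics.accuracy(labels=labels,
                                    predictions=predictions['classes'])}
return tf.estimator.EstimatorSpec(mode=mode, loss=loss,
                                  eval_metric_ops=eval_metric_ops)
```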

Load Training and Test Data


Convert whatever data you have into a TFRecords-supported format. This approach makes it easier to mix and match data sets. The recommended format for TensorFlow is a TFRecords file containing tf.train.Example protocol buffers, which contain Features as a field.

To read a file of TFRecords, use tf.TFRecordReader with the tf.parse_single_example decoder. The parse_single_example op decodes the example protocol buffers into tensors.
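A sketch of the reading path using the tf.data API (which covers the same ground as the queue-based TFRecordReader); the feature keys follow the convention of Google's build_image_data.py and are assumptions if your records were written differently:

```python
def parse_record(serialized_example):
    features = tf.parse_single_example(
        serialized_example,
        features={
            'image/encoded': tf.FixedLenFeature([], tf.string),
            'image/class/label': tf.FixedLenFeature([], tf.int64),
        })
    image = tf.image.decode_jpeg(features['image/encoded'], channels=3)
    image = tf.image.resize_images(image, [252, 252])
    label = tf.cast(features['image/class/label'], tf.int32)
    return image, label

dataset = tf.data.TFRecordDataset(['train-00000-of-00002',
                                   'train-00001-of-00002'])
dataset = dataset.map(parse_record)
```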

Train a model with a different image size

The simplest solution is to artificially resize your images to 252x252 pixels. See the Images section for the many resizing, cropping, and padding methods available. Note that the entire model architecture is predicated on a 252x252 image; if you wish to change the input image size, you may need to redesign the entire model architecture.

Fused decode and crop

If inputs are JPEG images that also require cropping, use fused tf.image.decode_and_crop_jpeg to speed up preprocessing. tf.image.decode_and_crop_jpeg only decodes the part of the image within the crop window. This significantly speeds up the process if the crop window is much smaller than the full image. For image data, this approach could speed up the input pipeline by up to 30%.
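A small sketch of the fused call; the crop window format is [crop_y, crop_x, crop_height, crop_width], and the file path is illustrative:

```python
contents = tf.read_file('dog.jpg')
crop_window = tf.constant([0, 0, 252, 252], dtype=tf.int32)
# Only the pixels inside the crop window are decoded.
image = tf.image.decode_and_crop_jpeg(contents, crop_window, channels=3)
```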

Create input functions


You must create input functions to supply data for training, evaluating, and prediction.

The Dataset API can handle a lot of common cases for you. Using the Dataset API, you can easily read in records from a large collection of files in parallel and join them into a single stream.

Create the Estimator


Next, let’s create an Estimator, a TensorFlow class for performing high-level model training, evaluation, and inference. Add the following code to main():

The model_fn argument specifies the model function to use for training, evaluation, and prediction; we pass it the cnn_model_fn we have created. The model_dir argument specifies the directory where model data (checkpoints) will be saved (here, we specify the temp directory /tmp/convnet_model, but feel free to change it to another directory of your choice).
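A sketch of the Estimator creation:

```python
classifier = tf.estimator.Estimator(
    model_fn=cnn_model_fn, model_dir='/tmp/convnet_model')
```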

Set Up a Logging Hook

CNNs can take a while to train, so let’s set up some logging to track progress during training. We can use TensorFlow’s tf.train.SessionRunHook to create a tf.train.LoggingTensorHook that will log the probability values from the softmax layer of our CNN. Add the following to main().

We store a dict of the tensors we want to log in tensors_to_log. Each key is a label of our choice that will be printed in the log output, and the corresponding value is the name of a Tensor in the TensorFlow graph. Here, our probabilities can be found in softmax_tensor, the name we gave our softmax operation earlier when we generated the probabilities in cnn_model_fn.

Next, we create the LoggingTensorHook, passing tensors_to_log to the tensors argument. We set every_n_iter=50, which specifies that probabilities should be logged after every 50 steps of training.
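A sketch of the hook setup, using the softmax_tensor name given above:

```python
tensors_to_log = {'probabilities': 'softmax_tensor'}
logging_hook = tf.train.LoggingTensorHook(
    tensors=tensors_to_log, every_n_iter=50)
```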

Train the Model

Now we’re ready to train our model, which we can do by creating train_input_fn and calling train() on the Estimator we created. Add the following to main():
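A sketch of the training call; train_input_fn is assumed to be the Dataset-based input function described earlier, and the steps value is illustrative:

```python
classifier.train(input_fn=train_input_fn, steps=20000,
                 hooks=[logging_hook])
```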

Evaluate the Model

Once training is complete, we want to evaluate our model to determine its accuracy on the test set. We call the evaluate method, which evaluates the metrics we specified in the eval_metric_ops argument of cnn_model_fn. Add the following to main():
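A sketch of the evaluation call, with eval_input_fn assumed to read the validation TFRecords:

```python
eval_results = classifier.evaluate(input_fn=eval_input_fn)
print(eval_results)  # includes the 'accuracy' metric defined earlier
```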

Run the Model

We’ve coded the CNN model function, Estimator, and the training/evaluation logic; now run the Python script.

Training a CNN is quite computationally intensive; the script’s completion time will vary depending on your processor. To train more quickly, you can decrease the number of steps passed to train(), but note that this will affect accuracy.

Download this project from GitHub

 



 

Convert a directory of images to TFRecords

In this post, I’ll show you how you can convert the dataset into a TFRecord file so you can fine-tune the model.

Before you run the training script for the first time, you will need to convert the Image data to native TFRecord format. The TFRecord format consists of a set of shared files where each entry(image) is a serialized tf.Example proto. Each tf.Example proto contains the image as well as metadata such as label and bounding box information.

The TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. It is the default file format for TensorFlow.

Advantages Of Binary Format


Binary files are sometimes easier to use because you don’t have to specify different directories for images and annotations. When you store your data in a binary file, it sits in one block of memory, compared to storing each image and annotation separately.

Opening a file is a considerably time-consuming operation, especially on an HDD. Overall, using binary files makes the data easier to distribute and better aligned for efficient reading.

This file format allows you to shuffle, batch, and split datasets with its own functions. Most batch operations aren’t done directly on image files; rather, the images are first converted into a single TFRecord file.

Convert images into a TFRecord


Before you start any training, you’ll need a set of images to teach the model about the new classes you want to recognize. When you are working with an image dataset, what is the first thing you do? Split it into training and validation sets.

Here’s an example, which assumes you have a folder containing class-named subfolders, each full of images for each label. The example folder animal_photos should have a structure like this:

The subfolder names are important since they define what label is applied to each image, but the filenames themselves don’t matter. The label for each image is taken from the name of the subfolder it’s in.

The list of valid labels is held in a label file. The code assumes that the file contains entries like this:

where each line corresponds to a label. The script maps each label in the file to an integer corresponding to its line number, starting from 0.

Code Organization


The code for this tutorial resides in data/build_image_data.py. Set train_directory to the path containing the training image data, validation_directory to the path containing the validation image data, output_directory to where the TFRecord files should be written after running the script, and labels_file to the file containing the list of valid labels.

This TensorFlow script converts the training and evaluation data into a sharded data set consisting of TFRecord files, where we have selected 1024 and 128 shards for each data set. Each record within the TFRecord file is a serialized Example proto.
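A minimal sketch of what the conversion script does for a single image, writing one serialized tf.train.Example; the feature keys follow build_image_data.py's convention, and the image path and output filename are illustrative:

```python
import tensorflow as tf

def _bytes_feature(value):
    return tf.train.Feature(bytes_list=tf.train.BytesList(value=[value]))

def _int64_feature(value):
    return tf.train.Feature(int64_list=tf.train.Int64List(value=[value]))

with tf.python_io.TFRecordWriter('train-00000-of-00001') as writer:
    with open('animal_photos/cat/1.jpg', 'rb') as f:  # hypothetical image
        encoded_jpeg = f.read()
    example = tf.train.Example(features=tf.train.Features(feature={
        'image/encoded': _bytes_feature(encoded_jpeg),
        'image/class/label': _int64_feature(0),       # integer label
    }))
    writer.write(example.SerializeToString())
```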

 


 

Deep learning model for Car Price prediction using TensorFlow

In this post you will learn how to handle a variety of features, and then train and evaluate different types of models. We do that on a data set of cars.

Launching TensorBoard from Python

To run TensorBoard, use the following code. logdir points to the directory where the FileWriter serialized its data. Once TensorBoard is running, navigate your web browser to localhost:6006 to view it.

Data Set


The first thing to do is download the dataset. We’re using pandas to read the CSV file.

The CSV file does not have a header, so we have to fill in column names. We also have to specify dtypes.
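A sketch of the loading step; the column names and dtypes below are an illustrative schema, not the post's exact one, so adapt them to your CSV:

```python
import pandas as pd

COLUMNS = ['make', 'fuel-type', 'num-of-cylinders',
           'horsepower', 'city-mpg', 'price']          # illustrative schema
df = pd.read_csv('cars.csv', header=None, names=COLUMNS,
                 dtype={'horsepower': float, 'price': float})

# Majority of the examples go to training, the remainder to testing.
train = df.sample(frac=0.8, random_state=42)
test = df.drop(train.index)
```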

The training set contains the examples that we’ll use to train the model; the test set contains the examples that we’ll use to evaluate the trained model’s effectiveness.

The training set and test set started out as a single data set. Then, we split the examples, with the majority going into the training set and the remainder going into the test set.

Adding examples to the training set usually builds a better model; however, adding more examples to the test set enables us to better gauge the model’s effectiveness.

Regardless of the split, the examples in the test set must be separate from the examples in the training set. Otherwise, you can’t accurately determine the model’s effectiveness.

Feature Columns


Feature columns enable you to transform raw data into formats that Estimators can use, allowing easy experimentation.

The price predictor calls the tf.feature_column.numeric_column function for numeric input features:
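A sketch, using the illustrative column names from above:

```python
horsepower = tf.feature_column.numeric_column('horsepower')
city_mpg = tf.feature_column.numeric_column('city-mpg')
```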

Categorical column

We cannot input strings directly to a model. Instead, we must first map strings to numeric or categorical values. Categorical vocabulary columns provide a good way to represent strings as a one-hot vector.
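A sketch of a vocabulary column; the column name and vocabulary values are illustrative:

```python
fuel_type = tf.feature_column.categorical_column_with_vocabulary_list(
    'fuel-type', vocabulary_list=['gas', 'diesel'])
```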

Hashed Column

The number of categories can be so big that it’s not possible to have individual categories for each vocabulary word or integer because that would consume too much memory. For these cases, we can instead turn the question around and ask, “How many categories am I willing to have for my input?” In fact, the tf.feature_column.categorical_column_with_hash_bucket function enables you to specify the number of categories.
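A sketch of a hashed column, capping the make column at an illustrative 50 categories:

```python
make = tf.feature_column.categorical_column_with_hash_bucket(
    'make', hash_bucket_size=50)  # at most 50 hashed categories
```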

Create Input Functions


TensorFlow has off-the-shelf input pipelines for many formats. In this example, I’m reading input from a pandas data frame.

Here I’m using batches of 64, so each iteration of the algorithm will use 64 examples.

I’m also going to shuffle the input, which is always a good thing to do when you’re training. num_epochs=None means to cycle through the data indefinitely: when the data runs out, it starts again.
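A sketch of the training input function, reusing the train frame from the loading step:

```python
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    x=train, y=train.pop('price'), batch_size=64,
    shuffle=True, num_epochs=None)
```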

Instantiate an Estimator


We specify what kind of machine learning algorithm we want to apply to predict the car price. Here I’ll start with linear regression, which is about the simplest way to learn something; all I have to do is tell it to use the input features I’ve just declared.
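A sketch, passing the columns declared above (linear models accept categorical columns directly):

```python
feature_columns = [horsepower, city_mpg, fuel_type, make]
model = tf.estimator.LinearRegressor(feature_columns=feature_columns)
```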

Train, Evaluate, and Predict


Now that we have an Estimator object, we can call methods to do the following:

  • Train the model.
  • Evaluate the trained model.
  • Use the trained model to make predictions.

Train the model

Train the model by calling the Estimator’s train method as follows:


The steps argument tells the method to stop training after a number of training steps.
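A sketch of the call; the steps value is illustrative:

```python
model.train(input_fn=train_input_fn, steps=1000)
```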

Evaluate the trained model

Now that the model has been trained, we can get some statistics on its performance. The following code block evaluates the accuracy of the trained model on the test data:

Unlike our call to the train method, we did not pass the steps argument to evaluate. Our eval_input_fn only yields a single epoch of data.
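A sketch of the evaluation, with num_epochs=1 giving the single pass described above:

```python
eval_input_fn = tf.estimator.inputs.pandas_input_fn(
    x=test, y=test.pop('price'), batch_size=64,
    shuffle=False, num_epochs=1)
eval_result = model.evaluate(input_fn=eval_input_fn)
print(eval_result['average_loss'])
```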

Making predictions (inferring) from the trained model

We now have a trained model that produces good evaluation results. We can now use the trained model to predict the price of a car based on some unlabeled measurements. As with training and evaluation, we make predictions using a single function call:
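A sketch of prediction (the price column was already popped off test above, so the frame holds only features):

```python
predict_input_fn = tf.estimator.inputs.pandas_input_fn(
    x=test, shuffle=False, num_epochs=1)
for prediction in model.predict(input_fn=predict_input_fn):
    print(prediction['predictions'])  # predicted price for one example
```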

Deep Neural Network


We obviously have to change the name of the class that we’re using, and we’ll also have to adapt the inputs to something this new model can use.

A DNN model can’t use these categorical features directly. Typically, there are two things you can do to a categorical feature to make it work with a deep neural network: you either embed it, or you transform it into what’s called a one-hot or an indicator. So we simply ask for an embedding, and out of the cylinders column we make an indicator column, because there are not so many values there. Usually this is fairly complicated stuff requiring a lot of code.

Most of these more complicated models have hyperparameters. In the DNN’s case, we basically tell it to make a three-layer neural network with layer sizes 50, 30, and 10, and that’s really all you need with this very high-level interface.
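A sketch of the deep model; the embedding dimension and the cylinders vocabulary are illustrative assumptions:

```python
deep_columns = [
    horsepower, city_mpg,
    tf.feature_column.embedding_column(make, dimension=8),   # assumed dimension
    tf.feature_column.indicator_column(
        tf.feature_column.categorical_column_with_vocabulary_list(
            'num-of-cylinders',
            ['two', 'three', 'four', 'five', 'six', 'eight', 'twelve'])),
]
model = tf.estimator.DNNRegressor(
    feature_columns=deep_columns, hidden_units=[50, 30, 10])
```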

Conclusion

TensorFlow provides implementations of complete machine learning models. You can get started with them extremely quickly, and they come with all of the integrations: TensorBoard visualization, serving and production, different hardware, and different use cases.

Download this project from GitHub

 


 

Image Classify Using TensorFlow Lite

Machine learning adds power to your application. TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices, while TensorFlow Lite works really well on small ones, making it easier, faster, and smaller to work on mobile devices.

Getting Started with TensorFlow Lite


This post contains an example application using TensorFlow Lite on Android. The app is a simple camera app that classifies images continuously using a quantized MobileNets model.

Step 1: Decide which Model to use


Depending on the use case, you may choose one of the popular open-source models such as InceptionV3 or MobileNets, retrain them with your own custom data set, or even build your own custom model. In this example, we use a pre-trained MobileNets model.

Step 2: Add TensorFlow Lite Android AAR


Android apps need to be written in Java, and core TensorFlow is in C++, so a JNI library is provided to interface between the two.

Its interface is aimed only at inference, so it provides the ability to load a graph, set up inputs, and run the model to calculate particular outputs.

This app uses a pre-compiled TFLite Android Archive (AAR). This AAR is hosted on jcenter.

The following lines in the app’s build.gradle file includes the newest version of the AAR, from the TensorFlow maven repository, in the project.

We use the following block to instruct the Android Asset Packaging Tool that .lite or .tflite assets should not be compressed. This is important because the .lite file will be memory-mapped, and that does not work when the file is compressed.

Step 3: Add your model files to the project


Download the quantized Mobilenet TensorFlow Lite model from here, unzip it, and copy mobilenet_quant_v1_224.tflite and label.txt to the assets directory: src/main/assets

Step 4: Load TensorFlow Lite Model


TensorFlow Lite’s Java API supports on-device inference and is provided as an Android Studio Library that allows loading models, feeding inputs, and retrieving inference outputs.

The Interpreter.java class drives model inference with TensorFlow Lite; in most cases, this is the only class an app developer will need. The Interpreter can be initialized with a model file via a MappedByteBuffer:

This byte buffer is sized to contain the image data once converted to float. The interpreter can accept float arrays directly as input, but the ByteBuffer is more efficient as it avoids extra copies in the interpreter.

The following lines load the label list and create the output buffer:

The output buffer is a float array with one element for each label where the model will write the output probabilities.

Running Model Inference

If a model takes only one input and returns only one output, the following will trigger an inference run:

For models with multiple inputs, or multiple outputs, use:

where each entry in inputs corresponds to an input tensor and map_of_indices_to_outputs maps indices of output tensors to the corresponding output data. In both cases the tensor indices should correspond to the values given to the TensorFlow Lite Optimized Converter when the model was created. Be aware that the order of tensors in input must match the order given to the TensorFlow Lite Optimized Converter.

The following method takes a Bitmap as input, runs the model, and returns the text to print in the app.

This method does three things. First, it converts and copies the input Bitmap to the imgData ByteBuffer for input to the model. Second, it calls the interpreter’s run method, passing the input buffer and the output array as arguments. Third, the interpreter sets the values in the output array to the probability calculated for each class. The input and output nodes are defined by the arguments to the toco conversion step that created the .lite model file earlier.

 

 

Download this project from GitHub

Related Post

TensorFlow Lite

Train Image classifier with TensorFlow

 

TensorFlow Lite

If you want to bring TensorFlow into your mobile applications, there are some challenges you have to face. Neural networks are big compared with other classic machine learning models, because deep learning uses multiple layers; the total number of parameters and the amount of calculation are very large.

Freeze Graph

You can remove all the variables from the TensorFlow graph and convert them into constants. Once you have finished training, you don’t need those parameters to stay in variables; you can put everything into constants. Converting from variables to constants gives you a smaller model that loads much faster at inference time.
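A sketch of freezing with tf.graph_util.convert_variables_to_constants; the checkpoint paths and the output node name are placeholders for your own model:

```python
import tensorflow as tf

with tf.Session() as sess:
    saver = tf.train.import_meta_graph('model.ckpt.meta')  # placeholder path
    saver.restore(sess, 'model.ckpt')
    # Replace every variable reachable from 'output' with a constant.
    frozen = tf.graph_util.convert_variables_to_constants(
        sess, sess.graph_def, output_node_names=['output'])

with tf.gfile.GFile('frozen_graph.pb', 'wb') as f:
    f.write(frozen.SerializeToString())
```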

Quantization in TensorFlow

Quantization is another optimization you can apply for the mobile app. Quantization means compressing the precision of each variable (parameters, weights, and biases) into fewer bits. For example, TensorFlow uses 32-bit floating-point numbers for representing weights and biases, but with quantization you can compress them into 8-bit integers.

TensorFlow Lite


TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices and TensorFlow Lite works really well on small devices; it’s easier, faster, and smaller to work with on mobile devices.

How to build a model using TensorFlow Lite.

There are two aspects, the workstation side and the mobile side; let’s walk through the complete lifecycle.
TensorFlow Lite lifecycle

The first step is to decide what model you want to use. One option is to use a pre-trained model; another is to retrain just the last layers, as you did in an earlier post. You can also write a custom model, train it, and generate a graph; this is nothing specific to TensorFlow Lite, just standard TensorFlow, where you build a model and generate graph defs and checkpoints.

The next step, which is specific to TensorFlow Lite, is to convert the generated model into a format TensorFlow Lite understands. A prerequisite to converting is freezing the graph.

Freezing the graph is the step where you combine the graph def and the checkpoint and feed the result to your converter. The converter is provided as part of the TensorFlow Lite software; you can use it to convert your model into the format that you need. Once the conversion step is completed, you will have what is called a .lite binary file.
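A rough sketch of the conversion, assuming TensorFlow 1.7+, where the converter was exposed in Python as tf.contrib.lite.toco_convert; the tiny identity graph below is a stand-in for your real frozen model:

```python
import tensorflow as tf

img = tf.placeholder(tf.float32, [1, 224, 224, 3], name='input')
out = tf.identity(img, name='output')   # stand-in for your real model

with tf.Session() as sess:
    tflite_model = tf.contrib.lite.toco_convert(
        sess.graph_def, [img], [out])   # graph def, inputs, outputs

with open('model.lite', 'wb') as f:
    f.write(tflite_model)
```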

Move the model to the mobile side

You feed this TensorFlow Lite model into the interpreter, which executes the model using a set of operators. If the interpreter is running on a CPU, the model is executed directly on the CPU; if hardware acceleration is available, it can be executed on the accelerator instead.

Components of TensorFlow Lite

TensorFlow Lite components
The main components of TensorFlow Lite are the model file format, the interpreter for processing the graph, a set of kernels that the interpreter can invoke, and lastly an interface to the hardware acceleration layer.

 

 

1. Model File

TensorFlow Lite has a special model file format that is lightweight and has very few dependencies; most graph calculations are done using 32-bit floats.

2. Interpreter

The interpreter is engineered to work with low overhead on small devices. TensorFlow Lite has very few dependencies, and it is easy to build on simple devices. TensorFlow Lite keeps the binary size to about 70KB, or 300KB with operators included.

It uses FlatBuffers, so it can load really fast, though the speed comes at the cost of flexibility: TensorFlow Lite supports only a subset of the operators that TensorFlow has.

3. Ops/Kernels

The set of operators is smaller, so not every model will be supported. In particular, TensorFlow Lite provides a set of core built-in ops that have been optimized for ARM CPUs using NEON, and they work in both float and quantized forms.

4. Interface to Hardware Acceleration

This interface targets custom hardware through the Neural Networks API. TensorFlow Lite comes pre-loaded with hooks for the NN API: if your device supports the NN API, TensorFlow Lite delegates these operators to it, and if your device does not support the NN API, they are executed directly on the CPU.

Android Neural Network API


The Android Neural Networks API is supported on Android starting with the 8.1 release of Oreo. It supports various kinds of hardware acceleration and uses TensorFlow as a core technology.

You can use TensorFlow to write your mobile app, and your app will get the benefits of hardware acceleration through the NN API. It basically abstracts the hardware layer for ML inference; for example, if a device has an ML DSP, work can transparently map to it, and it uses NN primitives that are very similar to TensorFlow Lite’s.

android neural network architecture

The architecture for the Neural Networks API looks like this: at the top there’s an Android app. Typically there is no need for the Android app to access the Neural Networks API directly; it accesses it through the machine learning interface, which is the TensorFlow Lite interpreter and the NN runtime. The neural network runtime talks to the hardware abstraction layer, which in turn talks to the device and runs the various accelerators.

 

Related Post

Image Classify Using TensorFlow Lite

Train Image classifier with TensorFlow

Train your Object Detection model locally with TensorFlow

Android TensorFlow Machine Learning

 

Speech Recognition Using TensorFlow

This tutorial will show you how to run a simple speech recognition TensorFlow model built using audio training. The app listens for a small set of words and displays them in the UI when they are recognized.

Once you’ve completed this tutorial, you’ll have an application that tries to classify a one-second audio clip as either silence, an unknown word, “yes”, “no”, “up”, “down”, “left”, “right”, “on”, “off”, “stop”, or “go”.

TensorFlow speech recognition model

1. Preparation


You can train your model on a desktop, a laptop, or a server, and then use that pre-trained model on a mobile device. No training happens on the device; the training happens on your bigger machine, either a server or your laptop. You can download a pretrained model from tensorflow.org.

2. Adding Dependencies


The TensorFlow Inference Interface is available as a JCenter package and can be included quite simply in your android project with a couple of lines in the project’s build.gradle file:

Add the following dependency in app’s build.gradle

This will tell Gradle to use the latest version of the TensorFlow AAR that has been released to https://bintray.com/google/tensorflow/tensorflow-android. You may replace the + with an explicit version label if you wish to use a specific release of TensorFlow in your app.

3. Add the Pre-trained Model to the Project


You need the pre-trained model and label file. You can download the model from here. Unzip the file, and you will get conv_actions_labels.txt (the labels) and conv_actions_frozen.pb (the pre-trained model).

Put conv_actions_labels.txt and conv_actions_frozen.pb into android/assets directory.

4. Microphone Permission


To access the microphone, you should request the RECORD_AUDIO permission in your manifest file as below:

Since Android 6.0 Marshmallow, the application is not granted any permissions at installation time. Instead, it has to ask the user for each permission at runtime.

5. Recording Audio


The AudioRecord class manages the audio resources for Java applications to record audio from the audio input hardware of the platform. This is achieved by reading the data from the AudioRecord object. The application is responsible for polling the AudioRecord object in time using read(short[], int, int).

6. Run the TensorFlow Model


The TensorFlowInferenceInterface class provides a smaller API surface suitable for inference and for summarizing the performance of model execution.

7. Recognize Commands


The RecognizeCommands class is fed the output of running the TensorFlow model over time; it averages the signals and returns information about a label when it has enough evidence to think that a recognized word has been found. The implementation is fairly small, just keeping track of the last few predictions and averaging them.

The demo app updates its UI of results automatically based on the labels text file you copy into assets alongside your frozen graph, which means you can easily try out different models without needing to make any code changes. You will need to update LABEL_FILENAME and MODEL_FILENAME to point to the files you’ve added if you change the paths, though.

8. Conclusion


You can easily replace it with a model you’ve trained yourself. If you do this, you’ll need to make sure that the constants in the main MainActivity Java source file like SAMPLE_RATE and SAMPLE_DURATION match any changes you’ve made to the defaults while training. You’ll also see that there’s a Java version of the RecognizeCommands module that’s very similar to the C++ version in this tutorial. If you’ve tweaked parameters for that, you can also update them in MainActivity to get the same results as in your server testing.

 

Download this project from GitHub

 

Related Post

Android TensorFlow Machine Learning

Google Cloud Speech API in Android APP

 

 

 

Train your Object Detection model locally with TensorFlow

In this post, we’re going to train machine learning models capable of localizing and identifying multiple objects in an image. You’ll need to install TensorFlow and you’ll need to understand how to use the command line.

Tensorflow Object Detection API


The TensorFlow Object Detection API is built on top of TensorFlow and makes it easy to construct, train, and deploy object detection models.

This post walks through the steps required to train an object detection model locally.

1. Clone Repository


You can also download the repository directly as a ZIP file.

2. Installation


The Tensorflow Object Detection API depends on the following libraries:

  • Protobuf 2.6
  • Pillow 1.0
  • lxml

The API uses Protobufs to configure model and training parameters. Before the framework can be used, the Protobuf libraries must be compiled. This should be done by running the following command from the tensorflow/models directory:

Add Libraries to PYTHONPATH

When running locally, the tensorflow/models/ and slim directories should be appended to PYTHONPATH. This can be done by running the following from tensorflow/models/:

Note: This command needs to run from every new terminal you start. If you wish to avoid running this manually, you can add it as a new line to the end of your ~/.bashrc file.

Testing the Installation

You can test that you have correctly installed the Tensorflow Object Detection API by running the following command:

The above command generates the following output.

Install Object Detection API

3. Preparing Inputs


The Tensorflow Object Detection API reads data using the TFRecord file format. Two sample scripts (create_pascal_tf_record.py and create_pet_tf_record.py) are provided to convert a dataset to TFRecords.

Directory Structure for Training input data

  • To prepare the input file for the sample scripts you need to consider two things. Firstly, you need an RGB image which is encoded as jpg or png and secondly, you need a list of bounding boxes (xmin, ymin, xmax, ymax) for the image and the class of the object in the bounding box.
  • Here is a subset of the pet image data set that I collected in the images folder:

 

Afterward, I labeled them manually with LabelImg, a graphical image annotation tool written in Python. It’s super easy to use, and the annotations are saved as XML files. Save the image annotation XMLs in the /annotations/xmls folder.

Image Annotation

Create trainval.txt in the annotations folder; it contains the names of the images without their extensions. Use the following command to generate trainval.txt (a Python alternative is sketched below).
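A hedged Python alternative to the shell one-liner: list the image files in images/ and write their base names, without extensions, to annotations/trainval.txt:

```python
import os

with open('annotations/trainval.txt', 'w') as f:
    for filename in sorted(os.listdir('images')):
        name, ext = os.path.splitext(filename)
        if ext.lower() in ('.jpg', '.jpeg', '.png'):
            f.write(name + '\n')
```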

Label Maps

Each dataset is required to have a label map associated with it. This label map defines a mapping from string class names to integer class IDs. Label maps should always start from ID 1. Create a label.pbtxt file with the following label map:
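A small sketch that writes a one-class label map; the item format follows the Object Detection API's label-map proto, and the class name 'pet' is a placeholder for your own label:

```python
label_map = """item {
  id: 1
  name: 'pet'
}
"""
with open('label.pbtxt', 'w') as f:
    f.write(label_map)
```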

Generating the Pet TFRecord files

Run the following commands.

You should end up with two TFRecord files named pet_train.record and pet_val.record in the tensorflow/models directory.

4. Training the Model


After creating the required input files for the API, you can train your model with the following command:

Sample object detection training pipeline config files are also provided in the repo. For my training, I used ssd_mobilenet_v1_pets.config as a basis. I needed to adjust num_classes to one, and set the paths (PATH_TO_BE_CONFIGURED) for the model checkpoint, the train and test data files, and the label map. For other configuration, like the learning rate and batch size, I used the default settings.

5. Running the Evaluation Job


Evaluation is run as a separate job. The eval job will periodically poll the training directory for new checkpoints and evaluate them on a test dataset. The job can be run using the following command:

where ${PATH_TO_YOUR_PIPELINE_CONFIG} points to the pipeline config, ${PATH_TO_TRAIN_DIR} points to the directory in which training checkpoints were saved (same as for the training job), and ${PATH_TO_EVAL_DIR} points to the directory in which evaluation events will be saved. As with the training job, the eval job runs until terminated by default.

6. Running TensorBoard


Progress for training and eval jobs can be inspected using TensorBoard. If using the recommended directory structure, TensorBoard can be run using the following command:

where ${PATH_TO_MODEL_DIRECTORY} points to the directory that contains the train and eval directories. Please note it may take TensorBoard a couple minutes to populate with data.

7. Exporting the TensorFlow Graph


After your model has been trained, you should export it to a TensorFlow graph proto. First, you need to identify a candidate checkpoint to export. The checkpoint will typically consist of three files in the pet folder:

  1.  model.ckpt-${CHECKPOINT_NUMBER}.data-00000-of-00001
  2. model.ckpt-${CHECKPOINT_NUMBER}.index
  3. model.ckpt-${CHECKPOINT_NUMBER}.meta

Run the following command to export the TensorFlow graph. Change the checkpoint number accordingly.

Related Post

TensorFlow Lite

Train Image classifier with TensorFlow

Android TensorFlow Machine Learning