TensorFlow Lite is TensorFlow's lightweight solution for mobile and embedded devices. It lets you run TensorFlow models on mobile devices with low latency, without necessarily incurring a round trip to a server.
Before you can run your model with TensorFlow Lite, you need to convert the trained model into the TensorFlow Lite format. The converted model can then be deployed to mobile devices.
In this tutorial, we're going to convert a TensorFlow or Keras model into a TensorFlow Lite model for use on mobile or IoT devices. We use the Python API for the conversion, which makes it easy to convert models as part of a model development pipeline. The TensorFlow Lite converter takes a TensorFlow or Keras model and generates a TensorFlow Lite model file (.tflite).
The following code shows how to use tf.lite.TFLiteConverter with the Python API in TensorFlow 2.0.
Convert .pb to .tflite file
We have a model saved after training as a .pb file (a SavedModel). The converter turns a SavedModel directory into a .tflite file:
```python
import tensorflow as tf

converter = tf.lite.TFLiteConverter.from_saved_model('save_model')
tflite_model = converter.convert()
open("converted_model.tflite", "wb").write(tflite_model)
```
The conversion is done with TFLiteConverter's from_saved_model method: we pass it the directory containing the .pb file and its variables, then call convert() and write the result out as a .tflite file.
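To check that the conversion worked, the resulting flatbuffer can be loaded back with tf.lite.Interpreter and run on a sample input. The sketch below is self-contained: a tiny tf.Module (AddOne) stands in for a real trained model, and the 'save_model' directory name is just an example.

```python
import numpy as np
import tensorflow as tf

# A minimal tf.Module stands in for a trained model so the example is
# self-contained; the AddOne module and 'save_model' name are illustrative.
class AddOne(tf.Module):
    @tf.function(input_signature=[tf.TensorSpec([1, 4], tf.float32)])
    def __call__(self, x):
        return x + 1.0

tf.saved_model.save(AddOne(), 'save_model')

# Convert the SavedModel directory to a TensorFlow Lite flatbuffer
converter = tf.lite.TFLiteConverter.from_saved_model('save_model')
tflite_model = converter.convert()

# Sanity-check the converted model with tf.lite.Interpreter
interpreter = tf.lite.Interpreter(model_content=tflite_model)
interpreter.allocate_tensors()
inp = interpreter.get_input_details()[0]
out = interpreter.get_output_details()[0]
interpreter.set_tensor(inp['index'], np.zeros((1, 4), dtype=np.float32))
interpreter.invoke()
result = interpreter.get_tensor(out['index'])
```

Running the interpreter on a zero input should return a tensor of ones, confirming the converted model behaves like the original.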
Convert a Keras (.h5) model to a .tflite file
If you have a saved Keras (.h5) model, you need to convert it to .tflite before running it on a mobile device. In TensorFlow 2.0 you cannot convert a .h5 file to a .tflite file directly: first you load the saved Keras model, then convert it with TFLiteConverter.from_keras_model.
```python
import tensorflow as tf

# Load the saved Keras model first ('model.h5' is an example file name)
new_model = tf.keras.models.load_model('model.h5')

tflite_converter = tf.lite.TFLiteConverter.from_keras_model(new_model)
tflite_model = tflite_converter.convert()
open("tf_lite_model.tflite", "wb").write(tflite_model)
```
Finally, we have a TensorFlow Lite model, tf_lite_model.tflite, built from the Keras (.h5) model with the TensorFlow Lite converter.
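Once the basic conversion works, the converter can also apply post-training quantization to shrink the model for mobile deployment, by setting the converter's optimizations attribute. A minimal sketch, where the tiny Dense model is illustrative rather than a real trained network:

```python
import tensorflow as tf

# Build a small Keras model so the example is self-contained
# (the architecture here is illustrative, not from the tutorial).
model = tf.keras.Sequential([tf.keras.layers.Dense(2)])
model.build(input_shape=(None, 3))

converter = tf.lite.TFLiteConverter.from_keras_model(model)
# Enable default post-training quantization to reduce model size
converter.optimizations = [tf.lite.Optimize.DEFAULT]
quantized_model = converter.convert()

open("quantized_model.tflite", "wb").write(quantized_model)
```

The quantized flatbuffer is typically noticeably smaller than the float model, at the cost of a small accuracy drop, which is often a good trade-off on mobile and IoT hardware.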