Convert a directory of images to TFRecords

In this post, I’ll show you how you can convert the dataset into a TFRecord file so you can fine-tune the model.

Before you run the training script for the first time, you will need to convert the image data to the native TFRecord format. The TFRecord format consists of a set of sharded files where each entry is a serialized tf.Example proto. Each tf.Example proto contains the image as well as metadata such as label and bounding box information.

The TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data. It is the default file format for TensorFlow. Binary files are sometimes easier to use because you don’t have to specify separate directories for images and annotations. When you store your data in a binary file, you have your data in one block of memory, compared to storing each image and annotation separately, and opening a file is a considerably time-consuming operation, especially on an HDD. Overall, using binary files makes the data easier to distribute and better aligned for efficient reading.

This native TensorFlow file format lets you shuffle, batch, and split datasets with its own functions. Most batch operations aren’t done directly on images; instead, the images are first converted into a single TFRecord file.

Convert images into a TFRecord

Before you start any training, you’ll need a set of images to teach the model about the new classes you want to recognize. When you are working with an image dataset, what is the first thing you do? Split it into training and validation sets.

Here’s an example, which assumes you have a folder containing class-named subfolders, each full of images for each label. The example folder animal_photos should have a structure like this:

The subfolder names are important since they define what label is applied to each image, but the filenames themselves don’t matter. The label for each image is taken from the name of the subfolder it’s in.

The list of valid labels is held in a label file. The code assumes that the file contains entries such as:

where each line corresponds to a label. The script maps each label contained in the file to an integer corresponding to its line number, starting from 0.
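As an illustration of that mapping (a sketch, not the actual conversion script):

```python
def load_label_map(lines):
    """Map each label (one per line) to its zero-based line number."""
    return {label.strip(): index
            for index, label in enumerate(lines) if label.strip()}

# Example: a labels file listing three animal classes.
labels = ["cat", "dog", "horse"]
label_map = load_label_map(labels)
print(label_map)  # {'cat': 0, 'dog': 1, 'horse': 2}
```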

Code Organization

The code for this tutorial resides in data/ and uses the following paths:

  • train_directory: contains the training image data
  • validation_directory: contains the validation image data
  • output_directory: contains the TFRecord files after the Python script is run
  • labels_file: the file holding the list of valid labels

This TensorFlow script converts the training and evaluation data into a sharded data set consisting of TFRecord files

where we have selected 1024 and 128 shards for each data set. Each record within the TFRecord file is a serialized Example proto.
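A common naming scheme for such shards (assumed here for illustration) encodes the shard index and the total shard count in each filename:

```python
def shard_filename(split, shard_index, num_shards):
    """Build a TFRecord shard name like 'train-00000-of-01024'."""
    return "%s-%05d-of-%05d" % (split, shard_index, num_shards)

print(shard_filename("train", 0, 1024))        # train-00000-of-01024
print(shard_filename("validation", 127, 128))  # validation-00127-of-00128
```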


Deep learning model for Car Price prediction using TensorFlow

Before AI, image search looked at the metadata around images. Now, with AI, the computer looks at the images themselves. If you search for, say, Tiger, it looks at all the images in the world and returns the ones that have a tiger in them. AI is now going to be everywhere, and machine learning is everywhere. What makes this image search work, in reality, is TensorFlow. Once you’ve written one of these models in TensorFlow, you can deploy it anywhere, including mobile. TensorFlow is truly what enables apps to classify images.

TensorFlow gives you distribution out of the box, so you can run it in the cloud if you need to. It works on all of the hardware you need it to work on. It’s fast and it’s flexible, and what I’m going to tell you today is that it’s also super easy to get started.

TensorFlow programming environment

The generic thing that people used to say is: this is TensorFlow, and it’s pretty low level. So you’re thinking about multiplying matrices, adding vectors together, that kind of thing. What TensorFlow built on top of this is libraries that help you do more complex things more easily. TensorFlow built a library of layers to help you build models, and it built training infrastructure that helps you actually train a model, evaluate it, and put it into production. This you can do with Keras, or you can do with Estimators. Finally, it built models in a box, and those are really full, complete machine learning algorithms ready to run; all you have to do is instantiate one and go. That’s mostly what I’m going to talk about today.

So usually when you talk about your first model in TensorFlow, it’s usually something simple, like fitting a line to a bunch of points. But nobody is actually interested in fitting a line to a bunch of points, distributed. It doesn’t really happen all that much in reality. So we’re not going to do that. I’m going to show you instead how to handle a variety of features, and then train and evaluate different types of models. We do that on a data set of cars. So the first model today will be about predicting the price of a car from a bunch of features about the car.


The next thing I can do with TensorBoard is actually look at the model that was created and look at the lower levels of the model, what we call the graph. TensorFlow works by generating a graph; this graph is shipped to all of its distributed workers and executed there. You don’t have to worry about this too much, but it’s awfully useful to be able to inspect this graph when you’re debugging.

Launching TensorBoard from Python

To run TensorBoard, use the following code. logdir points to the directory where the FileWriter serialized its data. Once TensorBoard is running, navigate your web browser to localhost:6006 to view the TensorBoard.

Data Set

The first thing to do is download the dataset. We’re using pandas to read the CSV file. This is easy for small datasets, though it can become unwieldy for large and complex ones.

The CSV file does not have a header, so we have to fill in column names. We also have to specify dtypes.

The training set contains the examples that we’ll use to train the model; the test set contains the examples that we’ll use to evaluate the trained model’s effectiveness.

The training set and test set started out as a single data set. Then, we split the examples, with the majority going into the training set and the remainder going into the test set. Adding examples to the training set usually builds a better model; however, adding more examples to the test set enables us to better gauge the model’s effectiveness. Regardless of the split, the examples in the test set must be separate from the examples in the training set. Otherwise, you can’t accurately determine the model’s effectiveness.
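The split described above can be sketched in plain Python (an illustration, not the tutorial’s actual code):

```python
import random

def split_dataset(examples, train_fraction=0.8, seed=42):
    """Shuffle once, then send the majority to train and the rest to test."""
    rng = random.Random(seed)
    shuffled = examples[:]
    rng.shuffle(shuffled)
    cut = int(len(shuffled) * train_fraction)
    return shuffled[:cut], shuffled[cut:]

train, test = split_dataset(list(range(100)))
print(len(train), len(test))  # 80 20
```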

Feature Columns

Feature Columns are the intermediaries between raw data and Estimators. Feature columns are very rich, enabling you to transform a diverse range of raw data into formats that Estimators can use, allowing easy experimentation.

Every neuron in a neural network performs multiplication and addition operations on weights and input data. Real-life input data often contains non-numerical (categorical) data. For example, consider a fuel-type feature that can contain the following two non-numerical values:

  • gas
  • diesel

ML models generally represent categorical values as simple vectors in which a 1 represents the presence of a value and a 0 represents the absence of a value. For example, when fuel_type is set to diesel, an ML model would usually represent fuel_type as [1,0], meaning:

  • 1: diesel is present
  • 0: gas is absent

So, although raw data can be numerical or categorical, an ML model represents all features as numbers.
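To make the encoding concrete, here is a minimal one-hot encoder in plain Python (illustrative only; in TensorFlow this is handled by categorical and indicator columns):

```python
def one_hot(value, vocabulary):
    """Represent a categorical value as a one-hot vector over the vocabulary."""
    return [1 if value == item else 0 for item in vocabulary]

fuel_vocab = ["diesel", "gas"]
print(one_hot("diesel", fuel_vocab))  # [1, 0]
print(one_hot("gas", fuel_vocab))     # [0, 1]
```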

Numeric column

The price predictor calls the tf.feature_column.numeric_column function for numeric input features:

Categorical column

We cannot input strings directly to a model. Instead, we must first map strings to numeric or categorical values. Categorical vocabulary columns provide a good way to represent strings as a one-hot vector.

Hashed Column

The number of categories can be so big that it’s not possible to have individual categories for each vocabulary word or integer, because that would consume too much memory. For these cases, we can instead turn the question around and ask, “How many categories am I willing to have for my input?” The tf.feature_column.categorical_column_with_hash_bucket function enables you to specify the number of categories.
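The idea behind hash buckets can be sketched in a few lines of plain Python (the real tf.feature_column implementation uses its own fingerprint hash, so actual bucket assignments will differ):

```python
import zlib

def hash_bucket(value, num_buckets):
    """Assign a string to one of num_buckets categories.
    zlib.crc32 gives a deterministic hash, unlike Python's salted hash()."""
    return zlib.crc32(value.encode("utf-8")) % num_buckets

# Every possible string lands in one of 100 buckets, bounding memory use.
bucket = hash_bucket("alfa-romero", 100)
assert 0 <= bucket < 100
```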

Create Input Functions

I still have to give it some input data. TensorFlow has off-the-shelf input pipelines for many formats. In this example, I’m using input from pandas, so I’m going to read input from a pandas data frame.

What I’m telling it here is that I want batches of 64, so each iteration of the algorithm will use 64 input data pieces. I’m going to shuffle the input, which is always a good thing to do when you’re training. Please always shuffle the input. num_epochs=None means to cycle through the data indefinitely; if you’re done with the data, just go through it again.
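The shuffle/batch/repeat behavior described above can be sketched as a plain-Python generator (an illustration of roughly what the pandas input function does for you):

```python
import random

def input_generator(data, batch_size=64, num_epochs=None, seed=0):
    """Yield shuffled batches; num_epochs=None cycles through the data forever."""
    rng = random.Random(seed)
    epoch = 0
    while num_epochs is None or epoch < num_epochs:
        shuffled = data[:]
        rng.shuffle(shuffled)          # reshuffle at every epoch
        for i in range(0, len(shuffled), batch_size):
            yield shuffled[i:i + batch_size]
        epoch += 1

batches = input_generator(list(range(200)), batch_size=64, num_epochs=1)
print([len(b) for b in batches])  # [64, 64, 64, 8]
```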

Instantiate an Estimator

We specify what kind of machine learning algorithm we want to apply to predict the car price. In my case, I’m going to use a linear regression first, which is about the simplest way to learn something, and all I have to do is tell it: you’re going to use these input features that I’ve just declared.

Train, Evaluate, and Predict

Now that we have an Estimator object, we can call methods to do the following:

  • Train the model.
  • Evaluate the trained model.
  • Use the trained model to make predictions.

Train the model

Train the model by calling the Estimator’s train method as follows:

The steps argument tells the method to stop training after the specified number of training steps.

Evaluate the trained model

Now that the model has been trained, we can get some statistics on its performance. The following code block evaluates the accuracy of the trained model on the test data:

Unlike our call to the train method, we did not pass the steps argument to evaluate. Our eval_input_fn only yields a single epoch of data.

Making predictions (inferring) from the trained model

We now have a trained model that produces good evaluation results. We can now use the trained model to predict the price of a car based on some unlabeled measurements. As with training and evaluation, we make predictions using a single function call:

Deep Neural Network

We have to obviously change the name of the class that we’re using. Then we’ll also have to adapt the inputs to something that this new model can use. In this case, a DNN model can’t use categorical features directly; we have to do something to them. The two things you can typically do to a categorical feature to make it work with a deep neural network are to embed it or to transform it into what’s called a one-hot or indicator column. So we do this by simply saying: make me an embedding, and out of the cylinders, make an indicator column, because there are not so many values there. Usually this is fairly complicated stuff where you’d have to write a lot of code.

Most of these more complicated models also have hyperparameters. In this case, for the DNN, we basically tell it: make me a three-layer neural network with layer sizes 50, 30, and 10. That’s really all you need; this is a very high-level interface.


Estimators are TensorFlow implementations of complete machine learning models. You can get started with them extremely quickly. They come with all of the integrations: TensorBoard visualization, serving in production, different hardware, different use cases. They obviously work in distributed settings; we use them in data centers. You can use them on your home computer network if that’s what you’d like. You can use them in flocks of mobile devices. Everything is possible. They run on all kinds of different hardware. In particular, they will run on TPU, and they also always run on GPU and CPU.

Download this project from GitHub


Nearby Connections API 2.0

Houses advertise the availability and rates of their driveways, so your phone can scan for them, query all of them, and book the one that makes the most sense for you, in a completely offline fashion. Taking it one step further, if you are late getting back, why can’t these houses query your car, which will tell them that you are just a block away, so you can avoid being penalized or towed? Implementing these features entails dealing with the vagaries of Bluetooth and Wi-Fi across the range of Android OS versions and the variety of hardware out there. What these apps needed was a reliable, performant proximity platform that abstracts away all this complexity and leaves them free to focus on just adding the features that matter to their users.

Nearby Connections API

Nearby Connections enables advertising and discovery of nearby devices, as well as high-bandwidth, low-latency, encrypted data transfers between these devices in a fully-offline P2P manner. It achieves this by using a combination of classic Bluetooth, BLE, and Wi-Fi hotspots, leveraging the strengths of each while supplementing their respective weaknesses. For instance, Bluetooth has low connection latency but also provides low bandwidth; Wi-Fi hotspots have slightly higher connection latency but much higher bandwidth. So what it does is connect over Bluetooth and start transferring data instantly, but in the background it also brings up a Wi-Fi hotspot, and when that’s ready, it seamlessly transfers your connection from Bluetooth to Wi-Fi with absolutely no work required from the app developer.

Nearby API Flow


1.An advertiser calls startAdvertising(), and sometime later, a discoverer calls startDiscovery(). Soon enough, the discoverer is alerted to the advertiser’s presence by means of the onEndpointFound() callback. If the discoverer is interested, they can call requestConnection(). This is the end of the asymmetric part of the API; from here on, everything is completely symmetric.

2.Both sides get an onConnectionInitiated() callback that gives them an authentication token they can use to verify they are indeed talking to each other. Upon out-of-band verification, they can each call either acceptConnection() or rejectConnection(). When each side gets the other’s response, it invokes the ConnectionLifecycleCallback.onConnectionResult() callback. At this point, the connection is either established or it’s not.

3.From here, either side can send a payload by calling the sendPayload() method. This leads to the other side getting the PayloadCallback.onPayloadReceived() callback, followed by both sides getting a series of PayloadCallback.onPayloadTransferUpdate() callbacks until the transfer reaches its final state of success or failure.

4.Finally, either side can disconnect at any time, and that leads to the other side getting the ConnectionLifecycleCallback.onDisconnected() callback.


The Nearby API supports three kinds of payloads:
1.Bytes: byte arrays of up to 32 KB, typically used for sending metadata or control messages.
2.Files: these represent files on the device’s storage, and the Nearby API makes sure they transfer from the application to the network interface with a minimal amount of copying across process boundaries.
3.Streams: good for when you need to generate data on the fly and you don’t know the final size up front, as is the case with recorded audio or video.


As I mentioned earlier, the Nearby API uses multiple radio technologies to advertise, discover, and establish connections. The combinations and interactions of these technologies are encapsulated in its Strategies. Strategies are named for how far they’ll cast their net to find a nearby device and what kind of connection topology they will enable. Right now, there are two of them: P2P_STAR and P2P_CLUSTER.

Nearby API Strategies
As the name might suggest, P2P_STAR makes sense when you want to enforce a star network with a 1-to-N connection topology, and P2P_CLUSTER makes sense when you want to allow slightly looser M-to-N connection topologies. For example, a classroom app where the teacher wants to host a quiz for all the students would probably be best served by P2P_STAR, with the teacher as the one advertiser and the students as the discoverers. The same classroom app could have a mode that allows students to break out into ephemeral project groups. This mode, where students want to drift in and out of multiple groups, would be best served by P2P_CLUSTER.

Now let’s see how all this can be put together in an undoubtedly harmless mock application.

Get Started

Before you start to code using the Nearby Connections API:

Request Permissions

Before using Nearby Connections, your app must request the appropriate permissions for the Strategy you plan to use.

For example, in order to use the P2P_STAR Strategy, add the specified permissions to your AndroidManifest.xml:

Since ACCESS_COARSE_LOCATION is considered to be a dangerous system permission, in addition to adding it to your manifest, you must request the permission at runtime, as described in Requesting Permissions.

Advertise and Discover

Your app needs to begin to advertise and discover in order to find nearby devices.

1.On devices that will advertise, call startAdvertising() with the desired Strategy and a serviceId parameter that identifies your app.

2.On devices that will discover nearby advertisers, call startDiscovery() with the same Strategy and serviceId.

The serviceId value must uniquely identify your app. As a best practice, use the package name of your app (for example, com.example.myapp).

The following example shows how to advertise:

The ConnectionLifecycleCallback parameter is the callback that will be invoked when discoverers request to connect to the advertiser.

The following example shows how to discover:

Call stopAdvertising() when you no longer need to advertise.

Call stopDiscovery() when you no longer need to discover.

Initiate a connection

When nearby devices are found, the discoverer can initiate connections. The following example shows initiating a connection with a discovered device.

Depending on your use case, you may wish to instead display a list of discovered devices to the user, allowing them to choose which devices to connect to.

Accept or reject a connection

After the discoverer has requested a connection to an advertiser, both sides are notified of the connection initiation process via the onConnectionInitiated() method of the ConnectionLifecycleCallback callback. Note that this callback is passed in to startAdvertising() on the advertiser and requestConnection() on the discoverer, but from this point forward, the API is symmetric.

Now both sides must choose whether to accept or reject the connection via a call to acceptConnection() or rejectConnection(), respectively. The connection is fully established only when both sides have accepted. If one or both reject, the connection is discarded. Either way, the result is delivered to onConnectionResult().

The following code snippet shows how to implement this callback:

Exchange Data

Once connections are established between devices, you can exchange data by sending and receiving Payload objects. A Payload can represent a simple byte array, such as a short text message; a file, such as a photo or video; or a stream, such as the audio stream from the device’s microphone.

Payloads of the same type are guaranteed to arrive in the order they were sent, but there is no guarantee of preserving the ordering amongst payloads of different types. For example, if a sender sends a FILE payload followed by a BYTE payload, the receiver could get the BYTE payload first, followed by the FILE payload.

Send and Receive

To send payloads to a connected endpoint, call sendPayload(). To receive payloads, implement the onPayloadReceived() method of the PayloadCallback that was passed to acceptConnection().


Here are some slightly less bone-chilling uses of Nearby Connections spotted in the wild.

The Weather Channel is using Nearby Connections to build a pretty darn cool offline mesh to help spread urgent weather updates and warnings, especially in the wake of natural disasters.


Sendanywhere is a South Korean app that allows sharing files intelligently in the most efficient manner possible, regardless of whether you’re online or not. They’re using Nearby Connections for their offline modality.


Pocket Casts has been a great partner for Nearby Messages. They now want to enable people to share and discover podcasts in a completely offline manner. For example, when you’re stuck in an aircraft with no connectivity to the cloud, what are your options for entertainment?

GameInsight is a leading game developer, and they’re using Nearby Connections not only to find nearby players but also to run their games completely offline. Again, being stuck inside an aircraft comes to mind.


Hotstar is India’s fastest growing streaming network. They’re using Nearby Connections to allow offline sharing of downloaded movies and TV shows, so you don’t need access to the internet; you only need access to a friend who does.


Android TV is about to launch a new remote control app that uses Nearby Connections not only to set up your new Android TV and configure it to be on your Wi-Fi network, but also to serve interactive second-screen content in a streaming fashion.


Related Post

Bluetooth Low Energy



Room database Migrating

When upgrading your Android application, you often need to change its data model. When the model is stored in an SQLite database, its schema must be updated as well.
In a real application, migration is essential because we want to retain the user’s existing data even when they upgrade the app.

The Room persistence library allows you to write Migration classes to preserve user data in this manner. Each Migration class specifies a startVersion and endVersion. At runtime, Room runs each Migration class’s migrate() method, using the correct order to migrate the database to a later version.

A migration can handle more than one version (e.g., if you have a faster path when going from version 3 to 5 without going through version 4). If Room opens a database at version 3 and the latest version is >= 5, Room will use the migration object that can migrate from 3 to 5 instead of the ones from 3 to 4 and 4 to 5.
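The path selection described above can be illustrated with a small greedy sketch (an illustration of the rule, not Room’s actual implementation):

```python
def choose_migration_path(migrations, start, end):
    """From the current version, always take the available migration that
    jumps furthest toward the target version.
    `migrations` is a set of (startVersion, endVersion) pairs."""
    path = []
    current = start
    while current < end:
        candidates = [(s, e) for (s, e) in migrations if s == current and e <= end]
        if not candidates:
            return None  # no path: Room would clear and recreate the database
        step = max(candidates, key=lambda pair: pair[1])
        path.append(step)
        current = step[1]
    return path

migrations = {(3, 4), (4, 5), (3, 5)}
print(choose_migration_path(migrations, 3, 5))  # [(3, 5)]
```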

If there are not enough migrations provided to move from the current version to the latest version, Room will clear the database and recreate it. So even if you have no changes between two versions, you should still provide a Migration object to the builder.

Create New Entity Or Add New Columns 

The following code snippet shows how to define an entity:

The migrate() method is already called inside a transaction, and that transaction might actually be a composite transaction of all the necessary Migrations.

After the migration process finishes, Room validates the schema to ensure that the migration occurred correctly. If Room finds a problem, it throws an exception that contains the mismatched information.


Related Post

Room Persistence Library

How to use DateTime datatype in SQLite Using Room

Room: Database Relationships


Rest API Pagination with Paging Library.

Almost every REST API your app calls requires handling pagination. When calling a REST API for some resource, instead of delivering all of the results, which could be time-consuming and cumbersome to deal with, a REST API will typically make you paginate through the results. If you are trying to support pagination on a client, you need to handle it gracefully.

Let’s say that there’s an API endpoint that returns a list of users. In the database of the API, there are 100,000+ users. It would be impractical to fetch all 100,000 users from the API in one request. In fact, we would probably get an OutOfMemory exception. To avoid this, we want to paginate through our list of users when making requests to our API.
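The arithmetic of paging through such a list is simple. A sketch (the offset/limit parameter names are generic, not the GitHub API’s):

```python
def page_params(page_index, page_size):
    """Compute the offset/limit pair for a page-numbered REST request."""
    return {"offset": page_index * page_size, "limit": page_size}

def num_pages(total_items, page_size):
    """How many requests it takes to paginate through all items."""
    return (total_items + page_size - 1) // page_size

print(page_params(2, 50))     # {'offset': 100, 'limit': 50}
print(num_pages(100000, 50))  # 2000
```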

Paging Library call REST API


Paging Library

The new paging library makes it easier for your app to gradually load information as needed from a REST API, without overloading the device or waiting too long for all the results. This library contains several classes to streamline the process of requesting data as you need it. These classes also work seamlessly with existing Architecture Components, like Room.

1.Adding Components to your Project

Architecture Components are available from Google’s Maven repository. To use them, follow these steps:

Open the build.gradle file for your project and add the line as shown below:

Add Architecture Components

In this tutorial, we are using LiveData, and ViewModel.

Open the build.gradle file for your app or module and add the artifacts that you need as dependencies:

2.Setting up Retrofit for Pagination

The code samples below for making the API calls use Retrofit with GSON. We are working with the GitHub API and making a call to the GitHub User endpoint.

Now, let’s write our class that will generate our RetrofitService.

3.Create ItemKeyedDataSource

Use the ItemKeyedDataSource class to define a data source. It is used for incremental data loading for paging keyed content, where loaded content uses previously loaded items as input to the next loads.

Implement a DataSource using ItemKeyedDataSource if you need to use data from item N - 1 to load item N. This is common, for example, in sorted database queries where attributes of the item loaded just before define how to execute the next query.

To implement one, extend ItemKeyedDataSource.


4.Create DataSource Factory

A simple data source factory which also provides a way to observe the last created data source. This allows us to channel its network request status, etc., back to the UI.

5.Create ViewModel

In the ViewModel, we would extend from the Architecture Component ViewModel, and then we would keep a reference to that LiveData of our PagedList.

LivePagedListBuilder: This class generates a LiveData<PagedList> from the DataSource.Factory you provide.
PagedList: A PagedList is a List that loads its data in chunks (pages) from a DataSource. All data in a PagedList is loaded from its DataSource. Creating a PagedList loads data from the DataSource immediately and should, for this reason, be done on a background thread. The constructed PagedList may then be passed to and used on the UI thread. This is done to prevent passing a list with no loaded content to the UI thread, which should generally not be presented to the user.

6.Create a ProgressBar as a footer in a RecyclerView

Create a RecyclerView with two types of items: one is our usual item, the second is a progress bar. Then we need to listen to the NetworkState LiveData and decide whether to show the progress bar or not.


Now, set the network state and add or remove the ProgressBar in the Adapter.

hasExtraRow() checks the network state and returns true or false.
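The row-count logic can be illustrated in a few lines (item_count and the state strings are assumptions mirroring the adapter described above, not the library’s API):

```python
def item_count(list_size, network_state):
    """Adapter item count: one extra row hosts the progress bar while the
    network state is anything other than LOADED."""
    has_extra_row = network_state is not None and network_state != "LOADED"
    return list_size + (1 if has_extra_row else 0)

print(item_count(20, "LOADING"))  # 21: footer progress bar is shown
print(item_count(20, "LOADED"))   # 20: footer is removed
```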

To tell the PagedListAdapter how to compute the difference between two elements, you’ll need to implement a new class, DiffCallback. Here, you will define two things.

You will define how to compute whether the contents are the same, and how to define whether the items are the same.
Let’s look at the adapter. Our adapter extends PagedListAdapter, and it connects the user, which is the information being displayed, with the user ViewHolder.

We define the callback, the DIFF_CALLBACK, for our user objects, and then in onBindViewHolder, all we need to do is bind the item to the ViewHolder.

7.Create RecyclerView

In the onCreate, we get the reference to our ViewModel and get the reference to the RecyclerView, and we create our adapter.


Download this project from GitHub


Related Post

Architecture Components:Paging Library.



Architecture Components:Paging Library

Many applications need to load a lot of information from the database. Database queries can take a long time to run and use a lot of memory. Android has a new paging library that can help you with all of this.

Main Components of Paging Library

The main components of the paging library are the PagedListAdapter, which extends RecyclerView.Adapter, the PagedList, and the DataSource.

Component Of Paging Library

DataSource: The DataSource is an interface for a paged source to provide data gradually. You’ll need to implement one of the three DataSource subclasses: PageKeyedDataSource, ItemKeyedDataSource, or PositionalDataSource.

  • Implement a DataSource using PageKeyedDataSource if you need to use data from page N - 1 to load page N. This is common, for example, in network APIs that include a next/previous link or key with each page load.
  • Implement a DataSource using ItemKeyedDataSource if you need to use data from item N - 1 to load item N. This is common, for example, in sorted database queries where attributes of the item loaded just before define how to execute the next query.
  • Extend PositionalDataSource if you can load pages of a requested size at arbitrary positions and provide a fixed item count. If your data source can’t support loading arbitrary requested page sizes (e.g., when network page size constraints are only known at runtime), use either PageKeyedDataSource or ItemKeyedDataSource instead.


If you use the Room persistence library to manage your data, it can generate a DataSource.Factory that produces PositionalDataSources for you automatically, for example:

PagedList: The PagedList is a component that loads the data automatically and can provide update signals, for example, to the RecyclerView adapter. The data is loaded automatically from a DataSource on a background thread, but it’s consumed on the main thread. It supports both infinite-scrolling lists and countable lists.

You can configure several things: the initial load size, the page size, and also the prefetch distance.
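Prefetch distance boils down to a simple check. A sketch (illustrative only, not the library’s code):

```python
def should_load_more(last_visible, loaded_count, prefetch_distance):
    """Load the next page when the user scrolls within prefetch_distance
    items of the end of what is already loaded."""
    return loaded_count - last_visible <= prefetch_distance

print(should_load_more(140, 150, 50))  # True: within 50 items of the end
print(should_load_more(40, 150, 50))   # False: plenty of items still ahead
```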

PagedListAdapter:This class is an implementation of RecyclerView.Adapter that presents data from a PagedList. For example, when a new page is loaded, the PagedListAdapter signals the RecyclerView that the data has arrived; this lets the RecyclerView replace any placeholders with the actual items, performing the appropriate animation.

The PagedListAdapter also uses a background thread to compute changes from one PagedList to the next (for example, when a database change produces a new PagedList with updated data), and calls the notifyItem…()methods as needed to update the list’s contents. RecyclerView then performs the necessary changes. For example, if an item changes position between PagedList versions, the RecyclerView animates that item moving to the new location in the list.


Paging data flow

Let’s say that we have some data that we put in the DataSource on the background thread. The DataSource invalidates the PagedList and updates its value. Then on the main thread, the PagedList notifies its observers of the new value, so now the PagedListAdapter knows about the new value. On the background thread, the PagedListAdapter computes what’s changed, what’s the difference. Then, back on the UI thread, the View is updated in onBindViewHolder. All of this happens automatically: you just insert an item into the database, and then you see it animated in, and no UI code is required.

Paging Library Example

Architecture Components Paging Demo

1.Adding Components to your Project

Architecture Components are available from Google’s Maven repository. To use them, follow these steps:

Open the build.gradle file for your project and add the line as shown below:

Add Architecture Components

In this tutorial, we are using Room, LiveData, and ViewModel.

Open the build.gradle file for your app or module and add the artifacts that you need as dependencies:

2. Create DataSource

Live Paged List Provider

Create Entity

An entity represents a class that holds a database row. For each entity, a database table is created to hold the items. You must reference the entity class in the Database class.

Data Access Objects (DAO)

To simplify the connection between the DataSource and the RecyclerView, we can use a LivePagedListProvider. It exposes a LiveData of a PagedList of our users. All you need to do is provide a DataSource; with Room, it is generated for you in the DAO, and you don’t need to write any invalidation-handling code. You can simply bind the LiveData of a PagedList to a PagedListAdapter and get updates, invalidation, and lifecycle cleanup with a single line of binding code.

So in our user DAO, we would return a LivePagedListProvider of our user to get the users by the last name.

Create Database

The annotation defines the list of entities, and the class’s content defines the list of data access objects (DAOs) in the database. It is also the main access point for the underlying connection.

The annotated class should be an abstract class that extends RoomDatabase.

3. Create ViewModel

In the ViewModel, we extend the Architecture Components ViewModel and keep a reference to the LiveData of our PagedList. We get that reference from the DAO by calling getUsers(), and then create it using the configuration we want: for example, setting the page size to 50, the prefetch distance to 50, and so on.

In onCreate, we get a reference to our ViewModel and to the RecyclerView, and we create our adapter.

4. Create Adapter

To tell the PagedListAdapter how to compute the difference between two elements, you’ll need to implement a DiffCallback. Here, you will define two things: how to compute whether the contents are the same, and how to decide whether the items are the same.

Let’s look at the adapter. Our adapter extends PagedListAdapter and connects the User, which is the information being displayed, with the user ViewHolder.

We define the callback, the DIFF_CALLBACK, for our User objects, and then in onBindViewHolder, all we need to do is bind the item to the ViewHolder. That’s all.
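The two checks the DIFF_CALLBACK performs can be sketched in plain Java. This is only an illustration of the logic; in the real adapter they live in a DiffUtil.ItemCallback&lt;User&gt; passed to the PagedListAdapter constructor, and the User class and its fields here are assumptions for the example:

```java
public class UserDiff {
    public static class User {
        public final int id;
        public final String name;
        public User(int id, String name) { this.id = id; this.name = name; }
    }

    // areItemsTheSame: do the two objects represent the same database row?
    public static boolean areItemsTheSame(User oldItem, User newItem) {
        return oldItem.id == newItem.id;
    }

    // areContentsTheSame: is every displayed field unchanged?
    public static boolean areContentsTheSame(User oldItem, User newItem) {
        return oldItem.name.equals(newItem.name);
    }

    public static void main(String[] args) {
        User before = new User(1, "Ada");
        User after = new User(1, "Ada Lovelace"); // same row, edited name
        System.out.println(areItemsTheSame(before, after));    // true: reuse the row
        System.out.println(areContentsTheSame(before, after)); // false: rebind and animate
    }
}
```

When the item is the same but the contents differ, the adapter rebinds that row and animates the change rather than rebuilding the whole list.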


Android has a lot of new concepts and components with Architecture Components. The thing is, you can use them separately: only Lifecycle, LiveData, and PagedList, or only ViewModel, or only Room. But you can also use them together. So start using the Architecture Components to create a more testable architecture for your application.

Download this project from GitHub


Related Post

Rest API Pagination with Paging Library.


Room: Database Relationships

Most modern applications today use databases for offline storage. Luckily for us, this interaction is quite easy using the Room Persistence Library. In this tutorial, we’ll learn how to work with multiple tables that have relationships with each other. First, we will go over some core concepts, and then we will begin working with JOIN queries in SQL.


When creating a database, we create separate tables for different types of entities: for example, customers, orders, items, etc. But we also need to have relationships between these tables. For instance, customers make orders, and orders contain items. These relationships need to be represented in the database. Also, when fetching data with SQL, we need to use certain types of JOIN queries to get what we need.

There are several types of database relationships. Today we are going to cover the following:

  • One to One Relationships
  • One to Many and Many to One Relationships
  • Many to Many Relationships

When selecting data from multiple tables with relationships, we will be using the JOIN query.

Room: One-To-One Relationships

In this example, you will learn how to map a one-to-one relationship using Room. Consider the following relationship between the Customer and Address entities.
To create this relationship, you need to have a CUSTOMER and an ADDRESS table. The relational model is shown below.

Room: One-To-One mapping

Use a primary key

Each entity must define at least one field as a primary key. Even when there is only one field, you still need to annotate it with the @PrimaryKey annotation. Also, if you want Room to assign automatic IDs to entities, you can set the @PrimaryKey‘s autoGenerate property.

Define relationships between objects

You need to specify relationships between Customer and Address objects. Room allows you to define foreign key constraints between entities.

For example, given the Customer entity, you can define its relationship to the Address entity using the @ForeignKey annotation, as shown in the following code snippet.

Now we have a relationship between the Customers table and the Addresses table. If each address can belong to only one customer, this relationship is “One to One”. Keep in mind that this kind of relationship is not very common. Our initial table that included the address along with the customer could have worked fine in most cases.
Notice that now there is a field named “address_id” in the Customers table, that refers to the matching record in the Address table.

Room: One-To-Many Relationships

This is the most commonly used type of relationship. Consider an e-commerce app, with the following:

  • Customers can make many orders
  • Orders can contain many items
  • Items can have descriptions in many languages

In these cases, we would need to create “One to Many” relationships. Here is an example:
In the following example, you will learn how to map one-to-many relationship using Room. Consider the following relationship between Customer and Order entity.
According to the relationship, a Customer can have any number of Orders.
To create this relationship you need to have a Customer and Order table. The relational model is shown below.
Room: One-To-Many

Each customer may have zero, one or multiple orders. But an order can belong to only one customer.

To create the Order table, you need to create the following Java bean class.

Room: Many to Many Relationships

In some cases, you may need multiple instances on both sides of the relationship. For example, each order can contain multiple items. And each item can also be in multiple orders.

For these relationships, we need to create an extra table:
Room: Many to Many Relationships
The item_order table has only one purpose, and that is to create a “Many to Many” relationship between the items and the orders.

To create the items and item_order tables, you need to create the following Java bean classes.

Annotate indices and uniqueness

Sometimes, certain fields or groups of fields in a database must be unique. You can enforce this uniqueness property by setting the unique property of an @Index annotation to true. The following code sample prevents a table from having two rows that contain the same set of values for the order_id and item_id columns:
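What the unique (order_id, item_id) index enforces can be sketched in plain Java: a second insert with the same pair is rejected. The class and method below are illustrative stand-ins, not Room API; in Room, a violating insert raises a constraint exception instead of returning false:

```java
import java.util.HashSet;
import java.util.Set;

public class UniquePairDemo {
    private final Set<String> seen = new HashSet<>();

    // Returns false when the (order_id, item_id) pair already exists,
    // mirroring the constraint violation SQLite would raise on insert.
    public boolean insert(int orderId, int itemId) {
        return seen.add(orderId + ":" + itemId);
    }

    public static void main(String[] args) {
        UniquePairDemo table = new UniquePairDemo();
        System.out.println(table.insert(1, 7)); // true: first (1, 7)
        System.out.println(table.insert(1, 7)); // false: duplicate pair rejected
        System.out.println(table.insert(2, 7)); // true: same item, different order
    }
}
```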

Join Queries

Some of your queries might require join tables to calculate the result. Room allows you to write any query. Furthermore, if the response is an observable data type, such as Flowable or LiveData, Room watches all tables referenced in the query for invalidation.

The following code snippet shows how to perform a table join to consolidate information across tables.


Related Post

Room Persistence Library

How to use DateTime datatype in SQLite Using Room

Room database Migrating


Image Classify Using TensorFlow Lite

We know that machine learning adds great power to your mobile app. TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices, while TensorFlow Lite works really well on small devices, making it easier, faster, and smaller to run models on mobile.

Getting Started with an Android App

This post contains an example application using TensorFlow Lite for Android App. The app is a simple camera app that classifies images continuously using a quantized MobileNets model.

Step 1: Decide which Model to use

Depending on the use case, you may choose to use one of the popular open-sourced models such as InceptionV3 or MobileNets, re-train these models with your own custom dataset, or even build your own custom model. In this example, we use a pre-trained MobileNets model.

Step 2: Add TensorFlow Lite Android AAR

Android apps need to be written in Java, and core TensorFlow is in C++, so a JNI library is provided to interface between the two. Its interface is aimed only at inference, so it provides the ability to load a graph, set up inputs, and run the model to calculate particular outputs.

This app uses a pre-compiled TFLite Android Archive (AAR). This AAR is hosted on jcenter.

The following lines in the app’s build.gradle file include the newest version of the AAR, from the TensorFlow Maven repository, in the project.

We use the following block, to instruct the Android Asset Packaging Tool that .lite or .tflite assets should not be compressed. This is important as the .lite file will be memory-mapped, and that will not work when the file is compressed.

Step 3: Add your model files to the project

Download the quantized Mobilenet TensorFlow Lite model from here, unzip and copy mobilenet_quant_v1_224.tflite and label.txt to the assets directory: src/main/assets

Step 4: Load TensorFlow Lite Model

TensorFlow Lite’s Java API supports on-device inference and is provided as an Android Studio Library that allows loading models, feeding inputs, and retrieving inference outputs.

The Interpreter class drives model inference with TensorFlow Lite. In most cases, this is the only class an app developer will need. The Interpreter can be initialized with a MappedByteBuffer:
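The memory-mapping half of that initialization can be sketched with plain java.nio. The file path below is a stand-in (on Android you would map the bundled asset via an AssetFileDescriptor instead), and the Interpreter line is left as a comment since it needs the TFLite dependency:

```java
import java.io.RandomAccessFile;
import java.nio.MappedByteBuffer;
import java.nio.channels.FileChannel;
import java.nio.file.Files;
import java.nio.file.Path;

public class MapModel {
    // Map the whole model file read-only into memory.
    public static MappedByteBuffer mapFile(String path) throws Exception {
        try (RandomAccessFile f = new RandomAccessFile(path, "r");
             FileChannel ch = f.getChannel()) {
            // The mapping stays valid after the channel is closed.
            return ch.map(FileChannel.MapMode.READ_ONLY, 0, ch.size());
        }
    }

    public static void main(String[] args) throws Exception {
        Path tmp = Files.createTempFile("model", ".tflite");
        Files.write(tmp, new byte[1024]); // stand-in for real model bytes
        MappedByteBuffer model = mapFile(tmp.toString());
        System.out.println(model.isDirect() + " " + model.capacity()); // true 1024
        // Interpreter tflite = new Interpreter(model); // TFLite API call
    }
}
```

Memory-mapping is why the asset must not be compressed: the interpreter reads the model bytes in place instead of copying them onto the heap.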

This byte buffer is sized to contain the image data once converted to float. The interpreter can accept float arrays directly as input, but the ByteBuffer is more efficient as it avoids extra copies in the interpreter.

The following lines load the label list and create the output buffer:

The output buffer is a float array with one element for each label where the model will write the output probabilities.

Running Model Inference

If a model takes only one input and returns only one output, the following will trigger an inference run:

For models with multiple inputs, or multiple outputs, use:

where each entry in inputs corresponds to an input tensor and map_of_indices_to_outputs maps indices of output tensors to the corresponding output data. In both cases the tensor indices should correspond to the values given to the TensorFlow Lite Optimized Converter when the model was created. Be aware that the order of tensors in input must match the order given to the TensorFlow Lite Optimized Converter.

The following method takes a Bitmap as input, runs the model, and returns the text to print in the app.

This method does three things. First, it converts and copies the input Bitmap to the imgData ByteBuffer for input to the model. Then it calls the interpreter’s run method, passing the input buffer and the output array as arguments. The interpreter sets the values in the output array to the probability calculated for each class. The input and output nodes are defined by the arguments to the toco conversion step that created the .lite model file earlier.


The app resizes each camera image frame to 224 × 224 to match the quantized MobileNet model being used. The resized image is converted, row by row, into a ByteBuffer of size 1 * 224 * 224 * 3 bytes, where 1 is the number of images in the batch, 224 * 224 is the width and height of the image, and 3 bytes represent the three color channels of a pixel. This app uses the TensorFlow Lite Java inference API for models that take a single input and provide a single output. It outputs a two-dimensional array, with the first dimension being the category index and the second dimension being the confidence of classification. The MobileNet model has 1001 unique categories, and the app sorts the probabilities of all the categories and displays the top three. The quantized MobileNet model is bundled within the assets directory of the app.
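The packing and top-3 steps described above can be sketched in plain Java. The pixel source and the probability values are illustrative (a real app reads pixels from the resized Bitmap via getPixels()):

```java
import java.nio.ByteBuffer;
import java.util.Arrays;
import java.util.Comparator;

public class ClassifyDemo {
    static final int SIZE = 224;

    // Pack ARGB pixels into a 1 * 224 * 224 * 3 byte buffer (R, G, B per pixel),
    // the input layout the quantized model expects.
    public static ByteBuffer toBuffer(int[] pixels) {
        ByteBuffer img = ByteBuffer.allocateDirect(1 * SIZE * SIZE * 3);
        for (int p : pixels) {
            img.put((byte) ((p >> 16) & 0xFF)); // red
            img.put((byte) ((p >> 8) & 0xFF));  // green
            img.put((byte) (p & 0xFF));         // blue
        }
        return img;
    }

    // Indices of the three highest-probability categories.
    public static int[] top3(float[] probs) {
        Integer[] idx = new Integer[probs.length];
        for (int i = 0; i < idx.length; i++) idx[i] = i;
        Arrays.sort(idx, Comparator.comparingDouble(i -> -probs[i]));
        return new int[] { idx[0], idx[1], idx[2] };
    }

    public static void main(String[] args) {
        System.out.println(toBuffer(new int[SIZE * SIZE]).capacity()); // 150528
        System.out.println(Arrays.toString(top3(new float[] {0.1f, 0.6f, 0.05f, 0.25f}))); // [1, 3, 0]
    }
}
```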


Download this project from GitHub

Related Post

TensorFlow Lite

Train Image classifier with TensorFlow


TensorFlow Lite

What is TensorFlow?

To implement Machine Learning or AI-powered applications running on mobile phones, the easiest and fastest way may be to use TensorFlow, an open-source library for Machine Learning. TensorFlow is Google’s standard framework for building new ML or AI-based products; it was created by the Google Brain team and open-sourced by Google in 2015. TensorFlow is scalable and portable, so you can start by downloading the TensorFlow code on your laptop and trying out some sample code, and then move your models to production-level use cases using GPUs. After training, you can take the model, which consists of tens of megabytes of data, and port it to mobile and embedded systems.

Neural Network for Mobile

If you want to bring TensorFlow into your mobile applications, there are some challenges you have to face. A neural network is big compared with other classic machine learning models, because deep learning involves multiple layers, so the total number of parameters and the amount of calculation can be large. For example, InceptionV3, one of the popular image classification models, requires 91 MB, and using TensorFlow without any changes adds about 12 MB of binary code by default. If you want to ship your mobile application to production, you don’t want users downloading 100 MB; you may want to compress everything down to 10-20 MB. So Google had to think about optimizations for mobile applications, such as freezing the graph, quantization, memory mapping, and selective registration.

Freeze Graph

Freezing the graph means removing all the variables from the TensorFlow graph and converting them into constants. TensorFlow stores the weights and biases, the parameters inside the neural network, as variables because you want to train the network on its training data; but once you have finished training, you don’t have to keep those parameters in variables, and you can put everything into constants. By converting from variables to constants, you get a much faster loading time.

Quantization in TensorFlow

Quantization is another optimization you can apply for mobile apps. Quantization means compressing the precision of each variable in the parameters, weights, and biases into fewer bits. For example, by default TensorFlow uses 32-bit floating-point numbers to represent weights and biases, but with quantization you can compress them into 8-bit integers. Using 8-bit integers shrinks the size of the parameters much, much smaller, and especially on embedded or mobile systems it is important to use integer rather than floating-point numbers for calculations such as multiplications and additions between matrices and vectors, because floating-point hardware requires a much larger footprint in implementation. TensorFlow already provides primitive datatypes to support quantization of parameters and operations: quantizing, de-quantizing, and operations that support quantized variables.
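The idea behind 8-bit quantization can be sketched with simple affine scaling: map a float range [min, max] onto the integers 0..255, and map back when needed. This is an illustration of the concept, not TensorFlow’s actual implementation, and the range values are assumptions:

```java
public class QuantizeDemo {
    // Map a float in [min, max] onto an 8-bit value 0..255.
    public static int quantize(float x, float min, float max) {
        float scale = (max - min) / 255f;
        return Math.round((x - min) / scale);
    }

    // Recover an approximate float from the 8-bit value.
    public static float dequantize(int q, float min, float max) {
        float scale = (max - min) / 255f;
        return min + q * scale;
    }

    public static void main(String[] args) {
        int q = quantize(0.3f, -1f, 1f);
        float back = dequantize(q, -1f, 1f);
        System.out.println(q + " -> " + back);
        // The round-trip error is bounded by half a step:
        // (max - min) / 255 / 2 ≈ 0.0039 for the [-1, 1] range.
    }
}
```

Each weight now occupies one byte instead of four, which is where the roughly 4x size reduction comes from.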

What is TensorFlow Lite?

We know that machine learning adds great power to your mobile application, and with great power comes great responsibility. TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices, while TensorFlow Lite works really well on small devices, making it easier, faster, and smaller to run models on mobile.

What is the difference between TensorFlow Mobile and TensorFlow Lite?

You should view TensorFlow Lite as an evolution of TensorFlow Mobile. TensorFlow Lite is like the next generation: it was created to be really small in size and optimized for smaller devices.

TensorFlow Lite came up with three goals. It wants to have a very small memory and binary size, even without selective registration. It wants to make sure the overhead latency is also really small: you really can’t wait 30 seconds for an inference to happen by the time the model is downloaded and processed. And quantization is a first-class citizen: it supports quantization, and many of the supported models are quantized models.

TensorFlow Lite Architecture
TensorFlow Lite architecture

This is the high-level architecture. As you can see, it’s a simplified architecture that works for both Android and iOS. It is lightweight, performs better, and leverages hardware acceleration if available.

To better understand how to write a model, let’s consider how to build one using TensorFlow Lite. There are two aspects, the workstation side and the mobile side, so let’s walk through the complete lifecycle.
TensorFlow Lite lifecycle

The first step is to decide what model you want to use. If you want to use an already pre-trained model, you can skip this step because the model generation is already done. One option is to use a pre-trained model; another is to retrain just the last layers, as in the earlier post. You can also write your own custom model, train it, and generate a graph. Nothing here is specific to TensorFlow Lite; this is standard TensorFlow, where you build a model and generate graph defs and checkpoints.

The next step, specific to TensorFlow Lite, is to convert the generated model into a format TensorFlow Lite understands. A prerequisite to converting is to freeze the graph: the checkpoints have the weights, the graphdef has the variables and tensors, and freezing the graph is the step where you combine these two and feed the result to the converter, which is provided as part of the TensorFlow Lite software. You can use it to convert your model into the format needed. Once the conversion step is completed, you will have what is called a .lite binary file.

Now you have a means to move the model to the mobile side. You feed this TensorFlow Lite model into the interpreter, which executes the model using a set of operators. It supports selective operator loading: without the operators it’s only about 70KB, and with all the operators it’s about 300KB, a significant reduction from TensorFlow, which is over 1 MB at this point. You can also implement custom kernels using the API. If the interpreter is running on a CPU, the model is executed directly on the CPU; if there is hardware acceleration, it can be executed on the accelerated hardware as well.

Components of TensorFlow Lite

TensorFlow Lite Component

The main components of TensorFlow Lite are the model file format, the interpreter for processing the graph, a set of kernels the interpreter can invoke, and an interface to the hardware acceleration layer. TensorFlow Lite has a special model file format that is lightweight and has very few dependencies. Most graph calculations are done using 32-bit floats, but neural networks are trained to be robust to noise, which allows us to explore lower-precision numerics. The advantages of lower precision are lower memory use and faster computation, which is vital for mobile and embedded devices, though it can result in some loss of accuracy. Depending on the application you want to develop, you can overcome this by accounting for the quantization loss in your training, so you get better accuracy. Quantization is supported as a first-class citizen in TensorFlow Lite. TensorFlow Lite is also FlatBuffer-based, so it gains speed of execution.


FlatBuffers is an open-source Google project, comparable to protocol buffers but much faster to use and much more memory-efficient. In the past, when we developed applications, we always thought about optimizing for CPU instructions, but CPUs have moved far ahead, and writing something memory-efficient is more important today. FlatBuffers is a cross-platform serialization library, similar to protobufs but designed to be more efficient: you can access the data without unpacking it, and there is no need for a secondary representation before you access it. It is aimed at speed and efficiency, and it is strongly typed, so you can find errors at compile time.


The interpreter is engineered to work with low overhead on very small devices. TensorFlow Lite has very few dependencies and is easy to build on simple devices. It keeps the binary size to 70KB, or 300KB with operators.

It uses FlatBuffers, so it can load really fast, but the speed comes at the cost of flexibility: TensorFlow Lite supports only a subset of the operators that TensorFlow has. If you are building a mobile application and the operators are supported by TensorFlow Lite, the recommendation is to use TensorFlow Lite; if your application needs operators that are not supported by TensorFlow Lite yet, you should use TensorFlow Mobile. But going forward, developers are going to be using TensorFlow Lite as the main standard.


It has support for the operators used in common inference models, though the set of operators is smaller, so not every model will be supported. In particular, TensorFlow Lite provides a set of core built-in ops that have been optimized for ARM CPUs using NEON, and they work in both float and quantized forms. These have been used by Google apps, so they have been battle-tested. Google has hand-optimized many common patterns and fused many operations to reduce memory bandwidth. If there are ops that are unsupported, TensorFlow Lite also provides a C API, so you can write your own custom operators.

4. Interface to Hardware Acceleration

It targets custom hardware through the Neural Network API. TensorFlow Lite comes pre-loaded with hooks for the Neural Network API: if you have an Android release that supports the NN API, TensorFlow Lite will delegate these operators to the NN API, and if you have an Android release that does not support the NN API, they are executed directly on the CPU.

Android Neural Network API

The Android Neural Network API is supported on Android starting with the 8.1 release in Oreo. It will support various hardware accelerators you can get from vendors, for GPUs, DSPs, and CPUs. It uses TensorFlow as a core technology, so for now you can keep using TensorFlow to write your mobile app, and your app will get the benefits of hardware acceleration through the NN API. It basically abstracts the hardware layer for ML inference; for example, if a device has an ML DSP, it can transparently map to it, and it uses NN primitives that are very similar to TensorFlow Lite’s.

android neural network architecture

The architecture for the Neural Network API looks like this: essentially, there’s an Android app on top. Typically, there is no need for the Android app to access the Neural Network API directly; it accesses it through the machine learning interface, which is the TensorFlow Lite interpreter and the NN runtime. The neural network runtime talks to the hardware abstraction layer, which in turn talks to the device and runs on various accelerators.


Related Post

Image Classify Using TensorFlow Lite

Introduction TensorFlow Machine Learning Library

Install TensorFlow

Train Image classifier with TensorFlow

Train your Object Detection model locally with TensorFlow

Android TensorFlow Machine Learning


Location Use cases Best Practices

Location-based apps are absolutely everywhere: transportation apps, geo apps, navigation apps, weather apps, even dating apps all use location. We will dive into the common use cases that every developer has to address when writing location apps, and come up with some best practices.

1. Use cached location

Let’s start with an obvious one: do you want to know the location of a device? For example, in a weather app, you want to show the right weather, so you need to know where the phone is. Here, I would say you don’t need location updates; use the cached location. Every time a location is obtained on your device, it’s cached somewhere, and you can retrieve it with getLastLocation(). This will give you what you need in a lot of cases. The API has ways of knowing how stale or fresh the location is, and you save a ton of battery that way.

2. User-visible (foreground) updates

You have user-visible foreground updates, for example a mapping app of some kind. Here, because it is the foreground, it is okay to use high accuracy, high frequency, and low latency. It’s expensive, but that is fine because in the foreground this is pretty much tied to your activity’s lifecycle and it will end soon. So typically, in an activity, you would request location updates, but you should also remove the updates in onStop. Otherwise, location gathering will keep happening long after your activity is gone, which is obviously a very, very bad thing to do.

3. Starting updates at a specific location

Another use case is that you want to start location updates at a specific location: when you’re near home, near work, near a cricket stadium, whatever. This is a pretty good case for mixing geofencing and location updates. Typically, imagine you’ve defined a geofence around some area of interest. If the user enters or exits the geofence, location services will let you know, and at that point you can say: this is the trigger I was waiting for, I’m now going to request location updates. A common pattern is that the geofence gets triggered, you get notified, you maybe show the user a notification, the user taps on the notification, your app opens up to some activity, and at that point location updates begin.

4. Activity-recognition location updates

Another common use case is that you want location updates, but only tied to a specific user activity, maybe when the user is riding a bike or driving a car. Here, we would use the activity-recognition API and combine it with location updates. It works like the previous example: let’s say you are tracking cycling. Location services tell you when the user is likely to be on a bicycle, and you can take that signal and start location updates, for example via a notification that brings something into the foreground.


5. Awareness API

Android exposes an Awareness API. It senses and infers your context, and it manages system health for you in a battery-efficient manner. If you’re dealing with complex scenarios, the Awareness API may be exactly what you’re looking for. It tracks lots of things: What time of day is it? What is the location of the device? What places are nearby: coffee shops, stadiums, houses of worship? What is the activity of the device: is the person on a bike or in a car? Are there beacons nearby? Is the person wearing headphones? What is the weather like? You can take all of these contexts and treat them as a larger sense of a fence. Basically, you can easily react to changes in multiple aspects of the user’s context; this generalizes the idea of a fence well beyond conventional geofences, which of course are just for location.

Here is an example. You create a context fence that tracks three things: an activity fence that tracks that the user is driving, a location fence that tracks a geofence, maybe around a stadium, and a time fence that checks the time is between this time and that time. When all of these things are true, even if your app is in the background, location services will say: all the conditions you specified are true, I’m letting you know, and you can now do whatever you want, which could include location updates.

6. Snapshot API

The Snapshot API is made possible through Awareness, and again it’s a simple way to ask for multiple aspects of the user’s context. For example, you find out what the current place is and what the current activity is; if the current place is a shopping mall and the activity is walking, maybe it’s time to start location updates so that you can tell the user, as they walk, what stores are nearby, or offer some discounts.

The thing is, you are using multiple inputs and multiple contexts, and that can get pretty expensive for the battery because you’re running a lot of different things. If you use the Awareness API, you can minimize the battery cost, because the Awareness API is actually pretty battery-optimized.

7. Long-running (background) updates tied to specific locations

Say you want to find all the Starbucks or all the ATMs in a city. Location services impose a limit: you can only have 100 geofences at one time, and there are of course many more ATMs and many more Starbucks than a hundred. Also, maintaining 100 geofences is pretty expensive: that’s a lot of scanning that location services have to do, and that’s going to drain your battery. The solution is dynamic geofences. Put one geofence around the city, and when the device enters that city, dynamically register geofences for the locations inside it. If the person leaves the city, you can remove those inner dynamic geofences because you don’t need them anymore. In this way, you can, in a very battery-efficient manner, work around the 100-geofence limit and do some pretty amazing things.
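The selection step of the dynamic-geofence idea can be sketched in plain Java: when the outer city fence triggers, pick only the points of interest inside the city, capped at the 100-fence limit. The distance math is a rough equirectangular approximation and the coordinates are illustrative; this is not the Android Geofencing API itself:

```java
import java.util.ArrayList;
import java.util.List;

public class DynamicFences {
    static final int MAX_FENCES = 100; // location services limit

    // Rough distance in km between two lat/lng points; good enough
    // for city-scale filtering.
    public static double distanceKm(double lat1, double lng1, double lat2, double lng2) {
        double x = Math.toRadians(lng2 - lng1) * Math.cos(Math.toRadians((lat1 + lat2) / 2));
        double y = Math.toRadians(lat2 - lat1);
        return Math.sqrt(x * x + y * y) * 6371;
    }

    // Called when the outer city fence triggers: choose the inner POIs to register.
    public static List<double[]> fencesToRegister(double cityLat, double cityLng,
                                                  double radiusKm, List<double[]> pois) {
        List<double[]> inside = new ArrayList<>();
        for (double[] p : pois) {
            if (distanceKm(cityLat, cityLng, p[0], p[1]) <= radiusKm) {
                inside.add(p);
                if (inside.size() == MAX_FENCES) break; // stay under the API limit
            }
        }
        return inside;
    }

    public static void main(String[] args) {
        List<double[]> pois = new ArrayList<>();
        pois.add(new double[] {12.97, 77.59}); // inside the city
        pois.add(new double[] {28.61, 77.21}); // far away, skipped
        System.out.println(fencesToRegister(12.97, 77.59, 25, pois).size()); // 1
    }
}
```

On exit of the outer fence, the registered inner fences would simply be removed again.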

8. Long-running background updates

The problematic one: you want long-running background updates with no visible app component. Basically, think of an app that passively tracks your location for hours or days at a time. This is the case that keeps people up at night; it is inherently problematic, because, as mentioned earlier, background location gathering is a really major drain on the battery. But if you have to do it, how do you do it?

Solution: long-running service?

Location services expose a method for getting location updates using a PendingIntent, and that’s exactly what you should use. You request location updates with a location request and a PendingIntent, and GMS Core location services will wake up your app when an update is found.

In cases like this, what should the location request look like? What can you do in the background that doesn’t burn battery? Use moderate accuracy, low frequency, and high latency. Let’s look at each of those three now.

You do not want high accuracy, i.e. PRIORITY_HIGH_ACCURACY, for any background use case; it is bad for battery.


For frequency, a good pattern would be to request updates a few times an hour, let’s say every 15 minutes. You should also try to get more updates through passive location gathering; that’s why it’s a good idea to set the fastest interval to some small amount. This way, if other apps are gathering location, you get those locations for free, and it doesn’t cost you anything.


Latency is really, really important. Again, imagine that you set your interval to 15 minutes. If you set the max wait time to 1 hour, you will get location deliveries every hour, but each delivery carries updates calculated every 15 minutes. That’s pretty good for the background, and it will save battery.
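The arithmetic behind that batching (the interval and max-wait-time values on a LocationRequest) is simple: each delivery wakes the app once but carries maxWait / interval locations. A minimal illustration, using the numbers from the text:

```java
public class BatchingMath {
    // Locations per delivery when updates are computed every `intervalMin`
    // minutes but delivered together every `maxWaitMin` minutes.
    public static long updatesPerDelivery(long intervalMin, long maxWaitMin) {
        return maxWaitMin / intervalMin;
    }

    public static void main(String[] args) {
        // Interval 15 min, max wait time 1 hour: one wake-up per hour,
        // carrying 4 locations.
        System.out.println(updatesPerDelivery(15, 60)); // 4
    }
}
```

The battery win comes from reducing wake-ups by a factor of four while keeping the same 15-minute sampling resolution.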

9. Frequent updates while a user interacts

What if you want frequent updates while the user interacts with other apps? Imagine a fitness app or a navigation app. In this kind of case, you should use a foreground service. This is the recommendation Android has come up with, because Android believes that when potentially expensive work is being done on behalf of the user, the user should be aware of that work. A foreground service, as you know, requires a persistent notification, which the user will be able to see.


Related Posts

Creating and Monitoring Geofences

How Location API Works in Android

Understanding Battery Drain when using Location