REST API Pagination with Paging Library

Almost every REST API your app calls requires you to handle pagination. When you call a REST API for some resource, instead of delivering all of the results at once, which could be slow and cumbersome to deal with, the API will typically make you paginate through them. If you are supporting pagination on a client, you need to handle it gracefully.

Let’s say that there’s an API endpoint that returns a list of users, and the API’s database holds 100,000+ users. It would be impractical to fetch all 100,000 users in one request; in fact, we would probably get an OutOfMemoryError. To avoid this, we want to paginate through the list of users when making requests to our API.

Paging Library calling a REST API


Paging Library

The new paging library makes it easier for your app to gradually load information as needed from a REST API, without overloading the device or waiting too long for all the results. This library contains several classes to streamline the process of requesting data as you need it. These classes also work seamlessly with existing Architecture Components, like Room.

1. Adding Components to your Project

Architecture Components are available from Google’s Maven repository. To use them, follow these steps:

Open the build.gradle file for your project and add the line as shown below:

Add Architecture Components
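The line to add is Google’s Maven repository. A minimal sketch of the project-level build.gradle (your file will likely already contain most of this):

allprojects {
    repositories {
        // Google's Maven repository hosts the Architecture Components artifacts
        google()
        jcenter()
    }
}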

In this tutorial, we are using LiveData and ViewModel.

Open the build.gradle file for your app or module and add the artifacts that you need as dependencies:
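A sketch of the module-level dependencies this tutorial needs; the version numbers are assumptions, so substitute the latest releases:

dependencies {
    // Paging (PagedList, DataSource, PagedListAdapter)
    implementation "android.arch.paging:runtime:1.0.0"
    // ViewModel and LiveData
    implementation "android.arch.lifecycle:extensions:1.1.1"
    // Retrofit with the Gson converter for the REST calls below
    implementation "com.squareup.retrofit2:retrofit:2.4.0"
    implementation "com.squareup.retrofit2:converter-gson:2.4.0"
}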

2. Setting up Retrofit for Pagination

The code samples below make the API calls using Retrofit with Gson. We are working with the GitHub API and calling the GitHub users endpoint.
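A sketch of the Retrofit service for GitHub’s user-list endpoint, which pages with a since id and a per_page size (the User model class is assumed):

import java.util.List;
import retrofit2.Call;
import retrofit2.http.GET;
import retrofit2.http.Query;

public interface GitHubService {
    // https://api.github.com/users?since=<last seen id>&per_page=<page size>
    @GET("users")
    Call<List<User>> getUsers(@Query("since") long since,
                              @Query("per_page") int perPage);
}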

Now, let’s write our class that will generate our RetrofitService.
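A minimal generator class, assuming the GitHubService interface above:

import retrofit2.Retrofit;
import retrofit2.converter.gson.GsonConverterFactory;

public class RetrofitClient {

    private static final String BASE_URL = "https://api.github.com/";
    private static Retrofit retrofit;

    // Lazily build a single Retrofit instance backed by Gson.
    public static GitHubService getService() {
        if (retrofit == null) {
            retrofit = new Retrofit.Builder()
                    .baseUrl(BASE_URL)
                    .addConverterFactory(GsonConverterFactory.create())
                    .build();
        }
        return retrofit.create(GitHubService.class);
    }
}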

3. Create ItemKeyedDataSource

Use the ItemKeyedDataSource class to define a data source. It is used for incremental loading of paged, keyed content, where already-loaded items serve as input to subsequent loads.

Implement a DataSource using ItemKeyedDataSource if you need to use data from item N - 1 to load item N. This is common, for example, in sorted database queries where attributes of the most recently loaded item, such as its ID, define how to execute the next query.

To implement one, extend the ItemKeyedDataSource class.
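A sketch of such a subclass, keyed by the user id; it calls Retrofit synchronously because the load methods already run on a background executor (User.getId() and error reporting are assumptions):

import android.arch.paging.ItemKeyedDataSource;
import android.support.annotation.NonNull;
import java.io.IOException;
import java.util.List;
import retrofit2.Response;

public class UserDataSource extends ItemKeyedDataSource<Long, User> {

    private final GitHubService service = RetrofitClient.getService();

    @Override
    public void loadInitial(@NonNull LoadInitialParams<Long> params,
                            @NonNull LoadInitialCallback<User> callback) {
        try {
            long since = params.requestedInitialKey == null ? 0 : params.requestedInitialKey;
            Response<List<User>> response =
                    service.getUsers(since, params.requestedLoadSize).execute();
            callback.onResult(response.body());
        } catch (IOException e) {
            // In a real app, publish the error through a NetworkState LiveData.
        }
    }

    @Override
    public void loadAfter(@NonNull LoadParams<Long> params,
                          @NonNull LoadCallback<User> callback) {
        try {
            // params.key is derived from the last loaded item via getKey().
            Response<List<User>> response =
                    service.getUsers(params.key, params.requestedLoadSize).execute();
            callback.onResult(response.body());
        } catch (IOException e) {
            // Publish the error state.
        }
    }

    @Override
    public void loadBefore(@NonNull LoadParams<Long> params,
                           @NonNull LoadCallback<User> callback) {
        // The GitHub endpoint only pages forward, so nothing to do here.
    }

    @Override
    public Long getKey(@NonNull User item) {
        // The previously loaded item supplies the key for the next load.
        return item.getId();
    }
}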


4. Create DataSource.Factory

A simple data source factory which also provides a way to observe the last created data source. This allows us to channel its network request status back to the UI.
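A sketch of that factory:

import android.arch.lifecycle.MutableLiveData;
import android.arch.paging.DataSource;

public class UserDataSourceFactory extends DataSource.Factory<Long, User> {

    // Exposes the most recently created source so the UI can observe
    // its network state.
    public final MutableLiveData<UserDataSource> sourceLiveData = new MutableLiveData<>();

    @Override
    public DataSource<Long, User> create() {
        UserDataSource source = new UserDataSource();
        sourceLiveData.postValue(source);
        return source;
    }
}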

5. Create ViewModel

In the ViewModel, we extend the Architecture Components ViewModel and keep a reference to the LiveData of our PagedList.

  • LivePagedListBuilder: This class generates a LiveData<PagedList> from the DataSource.Factory you provide.
  • PagedList: A PagedList is a List which loads its data in chunks (pages) from a DataSource. All data in a PagedList is loaded from its DataSource. Creating a PagedList loads data from the DataSource immediately, and should, for this reason, be done on a background thread. The constructed PagedList may then be passed to and used on the UI thread. This prevents handing the UI thread a list with no loaded content, which should generally not be presented to the user.
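Putting the two together, a minimal ViewModel might look like this (the page sizes are arbitrary choices):

import android.arch.lifecycle.LiveData;
import android.arch.lifecycle.ViewModel;
import android.arch.paging.LivePagedListBuilder;
import android.arch.paging.PagedList;

public class UserViewModel extends ViewModel {

    public final LiveData<PagedList<User>> userList;

    public UserViewModel() {
        UserDataSourceFactory factory = new UserDataSourceFactory();
        PagedList.Config config = new PagedList.Config.Builder()
                .setPageSize(20)              // items per page
                .setInitialLoadSizeHint(40)   // load a bit more up front
                .setEnablePlaceholders(false) // total count is unknown
                .build();
        // Generates a LiveData<PagedList<User>> from the factory.
        userList = new LivePagedListBuilder<>(factory, config).build();
    }
}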

6. Create a progress bar as a footer in a RecyclerView

Create a RecyclerView with two types of items: one is our usual item, and the second is a progress bar. Then we need to listen to the NetworkState LiveData and decide whether to show the progress bar or not.


Now, set the network state and add or remove the ProgressBar row in the adapter.

hasExtraRow() checks the network state and returns true or false.
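A sketch of that logic inside the adapter subclass (NetworkState is an assumed class with a LOADED value; the layout ids are placeholders):

// Fields and helpers inside the PagedListAdapter subclass.
private NetworkState networkState;

private boolean hasExtraRow() {
    // Show the footer row only while a page is loading or has failed.
    return networkState != null && networkState != NetworkState.LOADED;
}

public void setNetworkState(NetworkState newState) {
    boolean hadExtraRow = hasExtraRow();
    networkState = newState;
    boolean hasExtraRow = hasExtraRow();
    if (hadExtraRow != hasExtraRow) {
        if (hadExtraRow) {
            notifyItemRemoved(super.getItemCount()); // footer removed
        } else {
            notifyItemInserted(super.getItemCount()); // footer added
        }
    } else if (hasExtraRow) {
        notifyItemChanged(getItemCount() - 1); // footer state changed
    }
}

@Override
public int getItemCount() {
    return super.getItemCount() + (hasExtraRow() ? 1 : 0);
}

@Override
public int getItemViewType(int position) {
    // The last position becomes the progress item while loading.
    return (hasExtraRow() && position == getItemCount() - 1)
            ? R.layout.item_network_state
            : R.layout.item_user;
}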

To tell the PagedListAdapter how to compute the difference between two elements, you’ll need to implement a new class, DiffCallback. Here, you will define two things.

You will define how to compute whether the contents are the same, and how to determine whether the items are the same.
Let’s look at the adapter. Our adapter extends PagedListAdapter, and it connects the user, which is the information being displayed, with the user ViewHolder.

We define the callback, the DIFF_CALLBACK, for our user objects, and then in onBindViewHolder all we need to do is bind the item to the ViewHolder.
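A sketch of the adapter and its DIFF_CALLBACK (UserViewHolder and User.getId() are assumptions):

import android.arch.paging.PagedListAdapter;
import android.support.annotation.NonNull;
import android.support.v7.util.DiffUtil;
import android.view.ViewGroup;

public class UserAdapter extends PagedListAdapter<User, UserViewHolder> {

    public UserAdapter() {
        super(DIFF_CALLBACK);
    }

    private static final DiffUtil.ItemCallback<User> DIFF_CALLBACK =
            new DiffUtil.ItemCallback<User>() {
                @Override
                public boolean areItemsTheSame(User oldUser, User newUser) {
                    // Same identity, e.g. the same API id.
                    return oldUser.getId() == newUser.getId();
                }

                @Override
                public boolean areContentsTheSame(User oldUser, User newUser) {
                    // Same displayed content.
                    return oldUser.equals(newUser);
                }
            };

    @Override
    @NonNull
    public UserViewHolder onCreateViewHolder(@NonNull ViewGroup parent, int viewType) {
        return UserViewHolder.create(parent); // assumed factory method
    }

    @Override
    public void onBindViewHolder(@NonNull UserViewHolder holder, int position) {
        // getItem() may return null if placeholders are enabled.
        holder.bind(getItem(position));
    }
}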

7. Create RecyclerView

In onCreate, we get a reference to our ViewModel, get a reference to the RecyclerView, and create our adapter.
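A sketch of that wiring (layout ids are placeholders):

// Inside MainActivity.
@Override
protected void onCreate(Bundle savedInstanceState) {
    super.onCreate(savedInstanceState);
    setContentView(R.layout.activity_main);

    UserViewModel viewModel =
            ViewModelProviders.of(this).get(UserViewModel.class);

    RecyclerView recyclerView = findViewById(R.id.recycler_view);
    UserAdapter adapter = new UserAdapter();
    recyclerView.setLayoutManager(new LinearLayoutManager(this));
    recyclerView.setAdapter(adapter);

    // Each new PagedList is diffed against the previous one on a
    // background thread and then displayed.
    viewModel.userList.observe(this, adapter::submitList);
}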


Download this project from GitHub


Related Post

Architecture Components: Paging Library



Architecture Components: Paging Library

Many applications need to load a lot of information from the database. Database queries can take a long time to run and use a lot of memory. Android has a new paging library that can help you with all of this.

Main Components of Paging Library

The main components of the paging library are PagedListAdapter, which extends RecyclerView.Adapter, PagedList, and DataSource.

Component Of Paging Library

DataSource: The DataSource is the interface a paging source implements to provide data gradually. You’ll need to implement one of the three DataSource subclasses: PageKeyedDataSource, ItemKeyedDataSource, or PositionalDataSource.

  • Implement a DataSource using PageKeyedDataSource if you need to use data from page N - 1 to load page N. This is common, for example, in network APIs that include a next/previous link or key with each page load.
  • Implement a DataSource using ItemKeyedDataSource if you need to use data from item N - 1 to load item N. This is common, for example, in sorted database queries where attributes of the most recently loaded item, such as its ID, define how to execute the next query.
  • Extend PositionalDataSource if you can load pages of a requested size at arbitrary positions, and provide a fixed item count. If your data source can’t support loading arbitrary requested page sizes (e.g. when network page size constraints are only known at runtime), use either PageKeyedDataSource or ItemKeyedDataSource instead.


If you use the Room persistence library to manage your data, it can generate a DataSource.Factory that produces PositionalDataSources for you automatically, for example:
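A sketch of such a DAO method (entity and column names are this example’s assumptions):

@Dao
public interface UserDao {
    // Room generates a PositionalDataSource-backed factory for this query.
    @Query("SELECT * FROM users ORDER BY lastName")
    DataSource.Factory<Integer, User> usersByLastName();
}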

PagedList: The PagedList is the component that loads the data automatically and can provide update signals, for example, to the PagedListAdapter. The data is loaded automatically from a DataSource on a background thread, but it’s consumed on the main thread. It supports both infinite scrolling lists and countable lists.

You can configure several things: the initial load size, the page size, and also the prefetch distance.
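For example (the numbers are arbitrary):

PagedList.Config config = new PagedList.Config.Builder()
        .setInitialLoadSizeHint(100) // items loaded on the first load
        .setPageSize(50)             // items per subsequent page
        .setPrefetchDistance(50)     // how far ahead of scrolling to load
        .build();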

PagedListAdapter: This class is an implementation of RecyclerView.Adapter that presents data from a PagedList. For example, when a new page is loaded, the PagedListAdapter signals the RecyclerView that the data has arrived; this lets the RecyclerView replace any placeholders with the actual items, performing the appropriate animation.

The PagedListAdapter also uses a background thread to compute changes from one PagedList to the next (for example, when a database change produces a new PagedList with updated data), and calls the notifyItem…() methods as needed to update the list’s contents. RecyclerView then performs the necessary changes. For example, if an item changes position between PagedList versions, the RecyclerView animates that item moving to the new location in the list.


Paging data flow

Let’s say that we have some data that we put in the DataSource on the background thread. The DataSource invalidates the PagedList and updates its value. Then, on the main thread, the PagedList notifies its observers of the new value, so now the PagedListAdapter knows about it. On a background thread, the PagedListAdapter computes what has changed, what the difference is. Then, back on the UI thread, the View is updated in onBindViewHolder. All of this happens automatically: you just insert an item into the database, and then you see it animated in, with no UI code required.

Paging Library Example

Architecture Components Paging Demo

1. Adding Components to your Project

Architecture Components are available from Google’s Maven repository. To use them, follow these steps:

Open the build.gradle file for your project and add the line as shown below:

Add Architecture Components

In this tutorial, we are using Room, LiveData, and ViewModel.

Open the build.gradle file for your app or module and add the artifacts that you need as dependencies:
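A sketch of the module-level dependencies for this post; the versions are assumptions, so use the latest releases:

dependencies {
    implementation "android.arch.persistence.room:runtime:1.1.1"
    annotationProcessor "android.arch.persistence.room:compiler:1.1.1"
    implementation "android.arch.lifecycle:extensions:1.1.1"
    implementation "android.arch.paging:runtime:1.0.0"
}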

2. Create DataSource

Live Paged List Provider

Create Entity

An entity represents a class that holds a database row. For each entity, a database table is created to hold the items. You must reference the entity class in the Database class.
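A minimal User entity for this example:

import android.arch.persistence.room.Entity;
import android.arch.persistence.room.PrimaryKey;

@Entity(tableName = "users")
public class User {
    @PrimaryKey(autoGenerate = true)
    public int id;

    public String firstName;
    public String lastName;
}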

Data Access Objects (DAO)

To simplify the connection between the DataSource and the RecyclerView, we can use a LivePagedListProvider. This exposes a LiveData of a PagedList of our users. All you need to do is provide a DataSource; if you use Room, that DataSource is generated for you in the DAO, and you don’t need to write any invalidation-handling code. You can simply bind the LiveData of a PagedList to a PagedListAdapter and get updates, invalidation, and lifecycle cleanup with a single line of binding code.

So in our user DAO, we return a LivePagedListProvider of our users to get the users by last name.
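A sketch of that DAO; note that LivePagedListProvider comes from the early paging alphas this post is based on (the stable release replaced it with DataSource.Factory):

@Dao
public interface UserDao {
    // Room generates the DataSource and the invalidation handling.
    @Query("SELECT * FROM users ORDER BY lastName")
    LivePagedListProvider<Integer, User> getUsers();
}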

Create Database

The @Database annotation defines the list of entities, and the class’s content defines the list of data access objects (DAOs) in the database. It is also the main access point for the underlying connection.

The annotated class should be an abstract class that extends RoomDatabase.
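For example:

import android.arch.persistence.room.Database;
import android.arch.persistence.room.RoomDatabase;

@Database(entities = {User.class}, version = 1)
public abstract class AppDatabase extends RoomDatabase {
    public abstract UserDao userDao();
}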

3. Create ViewModel

In the ViewModel, we extend the Architecture Components ViewModel and keep a reference to the LiveData of our PagedList. We get that reference from the DAO by calling getUsers(), and then call create() using the configuration that you want: for example, setting the page size to 50, the prefetch distance to 50, and so on.
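A sketch using the alpha-era create() call described above (how the database instance reaches the ViewModel is simplified here):

public class UserViewModel extends ViewModel {

    public final LiveData<PagedList<User>> usersList;

    public UserViewModel(AppDatabase db) {
        usersList = db.userDao().getUsers().create(
                /* initialLoadKey */ null,
                new PagedList.Config.Builder()
                        .setPageSize(50)
                        .setPrefetchDistance(50)
                        .build());
    }
}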

In onCreate, we get a reference to our ViewModel, get a reference to the RecyclerView, and create our adapter.

4. Create Adapter

To tell the PagedListAdapter how to compute the difference between two elements, you’ll need to implement a new class, DiffCallback. Here, you will define two things.

You will define how to compute whether the contents are the same, and how to determine whether the items are the same.

Let’s look at the adapter. Our adapter extends PagedListAdapter, and it connects the user, which is the information being displayed, with the user ViewHolder.

We define the callback, the DIFF_CALLBACK, for our user objects, and then in onBindViewHolder all we need to do is bind the item to the ViewHolder. That’s all.
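A sketch of that callback; the early alphas called this class DiffCallback, while the stable API uses DiffUtil.ItemCallback:

public static final DiffUtil.ItemCallback<User> DIFF_CALLBACK =
        new DiffUtil.ItemCallback<User>() {
            @Override
            public boolean areItemsTheSame(User oldUser, User newUser) {
                return oldUser.id == newUser.id;  // same database row
            }

            @Override
            public boolean areContentsTheSame(User oldUser, User newUser) {
                return oldUser.equals(newUser);   // same displayed data
            }
        };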


Android has a lot of new concepts and components with Architecture Components. But the thing is, you can use them separately. If you want, you can use only Lifecycle, LiveData, and PagedList, or only ViewModel, or only Room. But you can also use them together. So start using the Architecture Components to create a more testable architecture for your application.

Download this project from GitHub


Related Post

REST API Pagination with Paging Library

Guide to Android Architecture Components


Room: Database Relationships

Most modern applications today use databases for offline storage. Lucky for us, this interaction is quite easy using the Room persistence library. In this tutorial, we’ll learn how to work with multiple tables that have relationships with each other. First, we will go over some core concepts, and then we will begin working with JOIN queries in SQL.


When we create a database, we create separate tables for different types of entities: for example, customers, orders, items, etc. But we also need to have relationships between these tables. For instance, customers make orders, and orders contain items. These relationships need to be represented in the database. Also, when fetching data with SQL, we need to use certain types of JOIN queries to get what we need.

There are several types of database relationships. Today we are going to cover the following:

  • One to One Relationships
  • One to Many and Many to One Relationships
  • Many to Many Relationships

When selecting data from multiple tables with relationships, we will be using the JOIN query.

Room: One-To-One Relationships

In this example, you will learn how to map a one-to-one relationship using Room. Consider the following relationship between the Customer and Address entities.

Room: One-To-One mapping

To create this relationship you need to have a CUSTOMER and an ADDRESS table. The relational model is shown below.

Room: One-To-One mapping

Use a primary key

Each entity must define at least one field as a primary key. Even when there is only one field, you still need to annotate it with the @PrimaryKey annotation. Also, if you want Room to assign automatic IDs to entities, you can set the @PrimaryKey’s autoGenerate property.
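For example, the Address entity used below:

import android.arch.persistence.room.Entity;
import android.arch.persistence.room.PrimaryKey;

@Entity
public class Address {
    @PrimaryKey(autoGenerate = true) // Room assigns the id
    public int id;

    public String street;
    public String city;
}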

Define relationships between objects

You need to specify relationships between customer and address objects. Room allows you to define foreign key constraints between entities.

For example, if there’s a Customer entity, you can define its relationship to the Address entity using the @ForeignKey annotation, as shown in the following code snippet.
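A sketch of the Customer entity with the foreign key (table and column names are this example’s choices):

import android.arch.persistence.room.ColumnInfo;
import android.arch.persistence.room.Entity;
import android.arch.persistence.room.ForeignKey;
import android.arch.persistence.room.PrimaryKey;

@Entity(tableName = "customers",
        foreignKeys = @ForeignKey(
                entity = Address.class,
                parentColumns = "id",
                childColumns = "address_id"))
public class Customer {
    @PrimaryKey(autoGenerate = true)
    public int id;

    public String name;

    // Points at the matching row in the Address table.
    @ColumnInfo(name = "address_id")
    public int addressId;
}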

Now we have a relationship between the Customers table and the Addresses table. If each address can belong to only one customer, this relationship is “One to One”. Keep in mind that this kind of relationship is not very common. Our initial table that included the address along with the customer could have worked fine in most cases.
Notice that there is now a field named “address_id” in the Customers table that refers to the matching record in the Addresses table.

Room: One-To-Many Relationships

This is the most commonly used type of relationship. Consider an e-commerce app, with the following:

  • Customers can make many orders
  • Orders can contain many items
  • Items can have descriptions in many languages

In these cases, we would need to create “One to Many” relationships. In the following example, you will learn how to map a one-to-many relationship using Room. Consider the following relationship between the Customer and Order entities.

Room: One-To-Many

According to the relationship, a Customer can have any number of Orders. To create this relationship you need to have a Customer and an Order table. The relational model is shown below.

Room: One-To-Many

Each customer may have zero, one or multiple orders. But an order can belong to only one customer.

To create the Order table you need to create the following Java bean class.
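A sketch of that class; the table is named orders here because ORDER is a reserved word in SQL:

@Entity(tableName = "orders",
        foreignKeys = @ForeignKey(
                entity = Customer.class,
                parentColumns = "id",
                childColumns = "customer_id"))
public class Order {
    @PrimaryKey(autoGenerate = true)
    public int id;

    // Many orders may point at the same customer.
    @ColumnInfo(name = "customer_id")
    public int customerId;

    public String orderDate;
}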

Room: Many to Many Relationships

In some cases, you may need multiple instances on both sides of the relationship. For example, each order can contain multiple items. And each item can also be in multiple orders.

Room: Many to Many Relationships

For these relationships, we need to create an extra table:

Room: Many to Many Relationships

The item_order table has only one purpose, and that is to create a “Many to Many” relationship between the items and the orders.

To create the items and item_order tables you need to create the following Java bean classes.
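A sketch of the Item entity and the item_order join entity:

@Entity(tableName = "items")
public class Item {
    @PrimaryKey(autoGenerate = true)
    public int id;

    public String name;
}

// Each row links one order to one item, creating the many-to-many link.
@Entity(tableName = "item_order",
        primaryKeys = {"order_id", "item_id"},
        foreignKeys = {
                @ForeignKey(entity = Order.class,
                        parentColumns = "id", childColumns = "order_id"),
                @ForeignKey(entity = Item.class,
                        parentColumns = "id", childColumns = "item_id")})
public class ItemOrder {
    @ColumnInfo(name = "order_id")
    public int orderId;

    @ColumnInfo(name = "item_id")
    public int itemId;
}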

Annotate indices and uniqueness

Sometimes, certain fields or groups of fields in a database must be unique. You can enforce this uniqueness property by setting the unique property of an @Index annotation to true. The following code sample prevents a table from having two rows that contain the same set of values for the order_id and item_id columns:
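A sketch of that, as an alternative to the composite primary key shown above:

@Entity(tableName = "item_order",
        indices = {@Index(value = {"order_id", "item_id"}, unique = true)})
public class ItemOrder {
    @PrimaryKey(autoGenerate = true)
    public int id;

    @ColumnInfo(name = "order_id")
    public int orderId;

    @ColumnInfo(name = "item_id")
    public int itemId;
}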

Join Queries

Some of your queries might require you to join tables to calculate the result. Room allows you to write any query. Furthermore, if the response is an observable data type, such as Flowable or LiveData, Room watches all tables referenced in the query for invalidation.

The following code snippet shows how to perform a table join to consolidate information between tables.
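A sketch of such a join, using the tables from this example and a small POJO for the result:

public class OrderWithCustomer {
    public int orderId;
    public String customerName;
}

@Dao
public interface OrderDao {
    // Room watches both tables and re-runs the query when either changes.
    @Query("SELECT orders.id AS orderId, customers.name AS customerName "
            + "FROM orders INNER JOIN customers "
            + "ON orders.customer_id = customers.id")
    LiveData<List<OrderWithCustomer>> loadOrdersWithCustomers();
}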


Related Post

Room Persistence Library

How to use DateTime datatype in SQLite Using Room

Room database Migrating


Image Classify Using TensorFlow Lite

We know that machine learning adds great power to your mobile app. TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices, and TensorFlow Lite works really well on small devices, so it’s easier, faster, and smaller to work with on mobile.

Getting Started with an Android App

This post contains an example Android application using TensorFlow Lite. The app is a simple camera app that classifies images continuously using a quantized MobileNets model.

Step 1: Decide which Model to use

Depending on the use case, you may choose to use one of the popular open-source models such as InceptionV3 or MobileNets, re-train these models with your own custom data set, or even build your own custom model. In this example, we use a pre-trained MobileNets model.

Step 2: Add TensorFlow Lite Android AAR

Android apps need to be written in Java, and core TensorFlow is in C++, so a JNI library is provided to interface between the two. Its interface is aimed only at inference, so it provides the ability to load a graph, set up inputs, and run the model to calculate particular outputs.

This app uses a pre-compiled TFLite Android Archive (AAR). This AAR is hosted on jcenter.

The following lines in the app’s build.gradle file include the newest version of the AAR, from the TensorFlow Maven repository, in the project.
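Those lines look like this; pinning a specific version instead of "+" is recommended for reproducible builds:

dependencies {
    // Pre-compiled TensorFlow Lite AAR
    implementation 'org.tensorflow:tensorflow-lite:+'
}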

We use the following block to instruct the Android Asset Packaging Tool that .lite or .tflite assets should not be compressed. This is important because the .lite file will be memory-mapped, and that will not work when the file is compressed.
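The block:

android {
    aaptOptions {
        // Keep model files uncompressed so they can be memory-mapped.
        noCompress "tflite"
        noCompress "lite"
    }
}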

Step 3: Add your model files to the project

Download the quantized MobileNet TensorFlow Lite model from here, unzip it, and copy mobilenet_quant_v1_224.tflite and labels.txt to the assets directory: src/main/assets

Step 4: Load TensorFlow Lite Model

TensorFlow Lite’s Java API supports on-device inference and is provided as an Android Studio Library that allows loading models, feeding inputs, and retrieving inference outputs.

The Interpreter class drives model inference with TensorFlow Lite. In most cases, this is the only class an app developer will need. The Interpreter can be initialized with a MappedByteBuffer holding the model file:
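A sketch of mapping the asset and creating the Interpreter, plus the input ByteBuffer the next paragraph refers to (sizes assume the 224x224 quantized model, one byte per channel):

// Memory-map the model file stored under src/main/assets.
private MappedByteBuffer loadModelFile(Activity activity) throws IOException {
    AssetFileDescriptor fileDescriptor =
            activity.getAssets().openFd("mobilenet_quant_v1_224.tflite");
    FileInputStream inputStream =
            new FileInputStream(fileDescriptor.getFileDescriptor());
    FileChannel fileChannel = inputStream.getChannel();
    long startOffset = fileDescriptor.getStartOffset();
    long declaredLength = fileDescriptor.getDeclaredLength();
    return fileChannel.map(FileChannel.MapMode.READ_ONLY,
            startOffset, declaredLength);
}

Interpreter tflite = new Interpreter(loadModelFile(this));

// Input buffer: 1 image x 224 x 224 pixels x 3 channels, one byte each
// for the quantized model (use 4 bytes per value for a float model).
ByteBuffer imgData = ByteBuffer.allocateDirect(1 * 224 * 224 * 3);
imgData.order(ByteOrder.nativeOrder());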

This byte buffer is sized to contain the image data once converted to float. The interpreter can accept float arrays directly as input, but the ByteBuffer is more efficient as it avoids extra copies in the interpreter.

The following lines load the label list and create the output buffer:
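A sketch, assuming the label file copied into assets in step 3 (wrap the stream calls in try/catch for IOException in real code):

// One label per line.
List<String> labelList = new ArrayList<>();
BufferedReader reader = new BufferedReader(
        new InputStreamReader(getAssets().open("labels.txt")));
String line;
while ((line = reader.readLine()) != null) {
    labelList.add(line);
}
reader.close();

// One output probability per label.
float[][] labelProbArray = new float[1][labelList.size()];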

The output buffer is a float array with one element for each label where the model will write the output probabilities.

Running Model Inference

If a model takes only one input and returns only one output, the following will trigger an inference run:
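Using the buffers created above:

tflite.run(imgData, labelProbArray); // fills labelProbArray with probabilities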

For models with multiple inputs, or multiple outputs, use:
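A sketch with placeholder inputs and outputs:

Object[] inputs = {input0, input1};                 // one entry per input tensor
Map<Integer, Object> map_of_indices_to_outputs = new HashMap<>();
map_of_indices_to_outputs.put(0, output0);          // tensor index -> output buffer
map_of_indices_to_outputs.put(1, output1);
tflite.runForMultipleInputsOutputs(inputs, map_of_indices_to_outputs);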

where each entry in inputs corresponds to an input tensor and map_of_indices_to_outputs maps indices of output tensors to the corresponding output data. In both cases the tensor indices should correspond to the values given to the TensorFlow Lite Optimized Converter when the model was created. Be aware that the order of tensors in input must match the order given to the TensorFlow Lite Optimized Converter.

The following method takes a Bitmap as input, runs the model, and returns the text to print in the app.
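A sketch of its shape; convertBitmapToByteBuffer() and printTopKLabels() are assumed helpers from the demo app:

private String classifyFrame(Bitmap bitmap) {
    convertBitmapToByteBuffer(bitmap);   // fills imgData from the Bitmap
    tflite.run(imgData, labelProbArray); // writes per-class probabilities
    return printTopKLabels();            // sorts and formats the top 3
}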

This method does three things. First, it converts and copies the input Bitmap to the imgData ByteBuffer for input to the model. Then it calls the interpreter’s run method, passing the input buffer and the output array as arguments. The interpreter sets the values in the output array to the probability calculated for each class. The input and output nodes are defined by the arguments to the toco conversion step that created the .lite model file earlier.


The app resizes each camera image frame to 224 x 224 pixels to match the quantized MobileNet model being used. The resized image is converted, row by row, into a ByteBuffer of size 1 * 224 * 224 * 3 bytes, where 1 is the number of images in a batch, 224 * 224 is the width and height of the image, and 3 bytes represent the three color channels of a pixel. This app uses the TensorFlow Lite Java inference API for models which take a single input and provide a single output. It outputs a two-dimensional array, with the first dimension being the category index and the second dimension being the confidence of the classification. The MobileNet model has 1001 unique categories, and the app sorts the probabilities of all the categories and displays the top three. The MobileNet quantized model is bundled within the assets directory of the app.


Download this project from GitHub

Related Post

TensorFlow Lite

Train Image classifier with TensorFlow


TensorFlow Lite

What is TensorFlow?

If you want to implement machine learning or AI-powered applications running on mobile phones, the easiest and fastest way may be to use TensorFlow, the open-source library for machine learning. TensorFlow is Google’s standard framework for building new ML- or AI-based products: it was created by the Google Brain team and open-sourced by Google in 2015. TensorFlow is scalable and portable, so you can get started by downloading the TensorFlow code on your laptop, try out some sample code, and then move your models to production-level use cases using GPUs. After training, you can take the model, which consists of tens of megabytes of data, and port it to mobile and embedded systems.

Neural Network for Mobile

If you want to bring TensorFlow into your mobile applications, there are some challenges you have to face. Neural networks are big compared with other classic machine learning models, because deep learning uses multiple layers, so the total number of parameters and the amount of calculation can be large. For example, InceptionV3, one of the popular image classification models, requires 91 MB. If you use TensorFlow without any changes, the binary code alone consumes around 12 MB by default. If you want to bring your mobile application to production, you don’t want users downloading 100 MB; when they start to use your application, you may want to compress everything down to around 10-20 MB. So Google had to think about optimizations for mobile applications: things like freezing the graph, quantization, memory mapping, and selective registration.

Freeze Graph

Freezing the graph means that you can remove all the variables from the TensorFlow graph and convert them into constants. TensorFlow holds the weights and biases, the parameters inside the neural network, as variables, because you want to train the neural network on its training data. But once you have finished training, you don’t need those parameters as variables; you can put everything into constants. Converting from variables to constants gives you much faster loading times.

Quantization in TensorFlow

Quantization is another optimization you can apply for the mobile app. Quantization means that you compress the precision of each variable in the parameters, weights, and biases into fewer bits. For example, by default, TensorFlow uses 32-bit floating-point numbers for representing all weights and biases, but by using quantization you can compress that into an 8-bit integer. Using 8-bit integers, you can shrink the size of the parameters much, much smaller, which matters especially for embedded or mobile systems. It is also important to use integer numbers rather than floating-point numbers for calculations such as multiplications and additions between matrices and vectors, because floating-point hardware requires a much larger footprint in implementation. So TensorFlow already provides primitive data types for supporting quantization of parameters, operations for quantizing and de-quantizing, and operations that support the quantized variables.

What is TensorFlow Lite?

We know that machine learning adds great power to your mobile application, and with great power comes great responsibility. TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices, and TensorFlow Lite works really well on small devices, so it’s easier, faster, and smaller to work with on mobile.

What is the difference between TensorFlow Mobile and TensorFlow Lite?

You should view TensorFlow Lite as an evolution of TensorFlow Mobile. TensorFlow Lite is like the next generation: it is created to be really small in size and optimized for smaller devices.

TensorFlow Lite came with three goals. It wanted to have a very small memory and binary size, even without selective registration. It wanted to make sure that the overhead latency is also really small: you really can’t wait 30 seconds for an inference to happen by the time the model is downloaded and processed. And quantization is a first-class citizen: it supports quantization, and many of the supported models are quantized models.

TensorFlow Lite Architecture
TensorFlow Lite architecture

This is the high-level architecture. As you can see, it’s a simplified architecture, and it works for both Android and iOS. It is lightweight, performs better, and leverages hardware acceleration if available.

To better understand how to write a model, let’s consider how to build one using TensorFlow Lite. There are two aspects, the workstation side and the mobile side; let’s walk through the complete lifecycle.
TensorFlow Lite lifecycle

The first step is to decide what model you want to use. If you want to use an already pre-trained model, you can skip this step because you’ve already done the model generation. One option is to use a pre-trained model; another option would be to retrain just the last layers, as you did in the earlier post. You can also write your own custom model, train it, and generate a graph. This is nothing specific to TensorFlow Lite; this is standard TensorFlow, where you build a model and generate GraphDefs and checkpoints.

The next step, specific to TensorFlow Lite, is to convert the generated model into a format that TensorFlow Lite understands. A prerequisite to converting it is to freeze the graph: the checkpoints hold the weights, and the GraphDef holds the graph structure. Freezing the graph is the step where you combine these two and feed the result to the converter, which is provided as part of the TensorFlow Lite software. You use this to convert your model into the format needed. Once the conversion step is completed, you will have what is called a .lite binary file.

Now you have a means to move the model to the mobile side. You feed this TensorFlow Lite model into the interpreter. The interpreter executes the model using a set of operators, and it supports selective operator loading: without the operators it’s only about 70KB, and with all the operators it’s about 300KB. You can see how small that is; it is a significant reduction from TensorFlow, which is over 1 MB at this point. You can also implement custom kernels using the API. If there is no hardware acceleration, the model is executed directly on the CPU; otherwise, it can be executed on the accelerated hardware as well.

Components of TensorFlow Lite

TensorFlow Lite Component

The main components of TensorFlow Lite are the model file format, the interpreter for processing the graph, a set of kernels that the interpreter can invoke, and lastly an interface to the hardware acceleration layer.

TensorFlow Lite has a special model file format that is lightweight and has very few dependencies. Most graph calculations are done using 32-bit floats, but neural networks are trained to be robust to noise, and this allows us to explore lower-precision numerics. The advantages of lower precision are lower memory use and faster computation, which is vital for mobile and embedded devices. Using lower precision can result in some amount of accuracy loss, so depending on the application you want to develop, you can account for this by using quantization in your training and get better accuracy. Quantization is supported as a first-class citizen in TensorFlow Lite. TensorFlow Lite also uses a FlatBuffer-based format for speed of execution.


1. Model File Format

FlatBuffers is an open-source Google project comparable to protocol buffers, but much faster to use and much more memory-efficient. In the past, when we developed applications, we always thought about optimizing for CPU instructions, but CPUs have moved far ahead, and writing something memory-efficient is more important today. FlatBuffers is a cross-platform serialization library similar to protobufs, but designed to be more efficient: you can access the data without unpacking it, and there is no need for a secondary representation before you access it. It is aimed at speed and efficiency, and it is strongly typed, so you can find errors at compile time.


2. Interpreter

The interpreter is engineered to run with low overhead on very small devices. TensorFlow Lite has very few dependencies, and it is easy to build on simple devices. It keeps the binary size to about 70KB, or 300KB with all the operators.

It uses FlatBuffers, so it can load very quickly, but the speed comes at the cost of flexibility: TensorFlow Lite supports only a subset of the operators that TensorFlow has. If you are building a mobile application whose operators are supported by TensorFlow Lite, the recommendation is to use TensorFlow Lite; if your model is not supported by TensorFlow Lite yet, you should use TensorFlow Mobile. Going forward, though, TensorFlow Lite is meant to become the main standard for developers.


3. Ops/Kernels

It supports the operators used in common inference models, but the set of operators is smaller, so not every model will be supported. In particular, TensorFlow Lite provides a set of core built-in ops that have been optimized for ARM CPUs using NEON, and they work in both float and quantized form. These have been used by Google apps, so they have been battle-tested. Google has hand-optimized many common patterns and has fused many operations to reduce memory bandwidth. If there are ops that are unsupported, there is also a C API, so you can write your own custom operators.

4. Interface to Hardware Acceleration

It targets custom hardware through the Neural Networks API. TensorFlow Lite comes pre-loaded with hooks for the Neural Networks API: if you have an Android release that supports the NN API, TensorFlow Lite will delegate operators to it, and if you have an Android release that does not support the NN API, everything is executed directly on the CPU.

Android Neural Network API

The Android Neural Networks API is supported on Android starting with the 8.1 release in Oreo. It will support various hardware acceleration you can get from vendors, for GPUs, DSPs, and CPUs. It uses TensorFlow as a core technology, so for now you can keep using TensorFlow to write your mobile app, and your app will get the benefits of hardware acceleration through the NN API. It basically abstracts the hardware layer for ML inference; for example, if a device has an ML DSP, it can transparently map onto it, and it uses NN primitives that are very similar to TensorFlow Lite’s.

Android Neural Networks API architecture

The architecture for the Neural Networks API looks like this: essentially, there’s an Android app on top. Typically there is no need for the Android app to access the Neural Networks API directly; it accesses it through the machine learning interface, which is the TensorFlow Lite interpreter and the NN runtime. The neural network runtime can talk to the hardware abstraction layer, which then talks to the device and runs the various accelerators.


Related Post

Image Classify Using TensorFlow Lite

Introduction TensorFlow Machine Learning Library

Install TensorFlow

Train Image classifier with TensorFlow

Train your Object Detection model locally with TensorFlow

Android TensorFlow Machine Learning


Location Use Cases Best Practices

Location-based apps are absolutely everywhere: transportation apps, geo apps, navigation apps, weather apps, even dating apps all use location. We will dive into common use cases that every developer has to address when writing location apps, and come up with some best practices.

1. Use cached location

Use cached location

Let’s start with an obvious one: do you want to know the location of the device? For example, in a weather app, you want to show the right weather, so you need to know where the phone is. Here, I would say you don’t need location updates; use the cached location. Every time a location is obtained on the device, it’s cached somewhere, and you can read it with getLastLocation(). This will give you what you need in a lot of cases. The API has ways of knowing how stale or fresh the fix is, and you save a tonne of battery that way.
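A sketch (requires a granted location permission; showWeatherFor() is a hypothetical app callback):

FusedLocationProviderClient client =
        LocationServices.getFusedLocationProviderClient(this);
client.getLastLocation().addOnSuccessListener(location -> {
    // location may be null if no fix has ever been cached.
    if (location != null) {
        showWeatherFor(location);
    }
});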

2. User-visible (foreground) updates

User visible updates

You have user-visible foreground updates, for example, a mapping app of some kind. Here, because it is the foreground, it is okay to use high accuracy, high frequency, and low latency. It’s expensive, but that’s okay because, in the foreground, this is pretty much tied to your activity’s lifecycle and it will end soon. Typically, in an activity, you would request location updates, but you would also remove the updates in onStop. Otherwise, location gathering will keep happening long after your activity is gone, which is obviously a very, very bad thing to do.
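A sketch of tying the updates to the lifecycle (fusedClient and locationCallback are assumed fields):

@Override
protected void onStart() {
    super.onStart();
    LocationRequest request = LocationRequest.create()
            .setPriority(LocationRequest.PRIORITY_HIGH_ACCURACY)
            .setInterval(5000); // frequent is acceptable in the foreground
    fusedClient.requestLocationUpdates(request, locationCallback,
            Looper.getMainLooper());
}

@Override
protected void onStop() {
    super.onStop();
    // Stop gathering as soon as the UI goes away.
    fusedClient.removeLocationUpdates(locationCallback);
}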

3. Starting updates at a specific location

Starting updates at a specific location

Another use case is that you want to start location updates at a specific location: when you’re near home, near work, near a cricket stadium, whatever. This is a pretty good case for mixing geofencing and location updates. Typically, imagine you’ve defined a geofence around some area of interest. If the user enters or exits the geofence, location services will let you know, and at that point you can say: this is the trigger I was waiting for, I’m now going to request location updates. A common pattern is: the geofence gets triggered, you get notified, you maybe show the user a notification, the user taps on the notification, your app opens some activity, and at that point location updates begin.
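A sketch of registering such a trigger fence (the coordinates and the PendingIntent are assumptions):

Geofence fence = new Geofence.Builder()
        .setRequestId("home")
        .setCircularRegion(latitude, longitude, /* radius meters */ 200)
        .setExpirationDuration(Geofence.NEVER_EXPIRE)
        .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER)
        .build();

GeofencingRequest request = new GeofencingRequest.Builder()
        .addGeofence(fence)
        .build();

// geofencePendingIntent fires on entry; its handler can show a
// notification and then start location updates.
LocationServices.getGeofencingClient(this)
        .addGeofences(request, geofencePendingIntent);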

4. Activity-recognition location updates

Activity Recognition

Another common use case is where you want location updates, but you only want them tied to a specific user activity, maybe when the user is riding a bike or driving a car. Here, we would use the activity-recognition API and combine it with location updates. It works like the previous example: let’s say you were tracking cycling. Location services will tell you when the user is likely to be on a bicycle, and you can take that, show a notification, bring something into the foreground, and start location updates.
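A sketch of requesting activity updates through a PendingIntent (activityPendingIntent is an assumption):

// The PendingIntent receives ActivityRecognitionResult extras;
// start location updates when cycling is detected.
ActivityRecognition.getClient(this)
        .requestActivityUpdates(/* detectionIntervalMillis */ 30_000,
                activityPendingIntent);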


5. Awareness API

Awareness API

Android has exposed an Awareness API. It basically senses and infers your context, it manages system health for you, and it does so in a battery-efficient manner. If you’re dealing with complex scenarios, the Awareness API may be exactly what you’re looking for. It tracks lots of things: what time of day is it? What is the location of the device? What are the places nearby; are there coffee shops or stadiums nearby? Houses of worship? What is the activity of the device: is the person on a bike, is the person in a car? Are there beacons nearby? Is the person wearing headphones? What is the weather like? You can take all of these contexts and treat them as a larger sense of a fence. Basically, you can easily react to changes in multiple aspects of the user’s context, and this generalizes the idea of a fence well beyond conventional geofences, which of course are just for location.

Here is an example. You create a context fence that tracks three things: an activity fence which says track that the user is driving; a location fence which says track that the user is inside this geofence, maybe a stadium; and a time fence which says make sure the time is between this time and this time. When all of these things are true, even if your app is in the background, location services will say: all the conditions you specified are true, I’m letting you know. You can now do whatever you want, and that whatever could include location updates.
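A sketch of that combined fence using the Awareness API (coordinates, times, and the PendingIntent are assumptions):

AwarenessFence driving =
        DetectedActivityFence.during(DetectedActivityFence.IN_VEHICLE);
AwarenessFence nearStadium =
        LocationFence.in(latitude, longitude, /* radius m */ 500, /* dwell ms */ 0);
AwarenessFence evening = TimeFence.inDailyInterval(TimeZone.getDefault(),
        18 * 60 * 60 * 1000L, 22 * 60 * 60 * 1000L);

// Fires only when all three conditions hold, even in the background.
AwarenessFence combined = AwarenessFence.and(driving, nearStadium, evening);

Awareness.getFenceClient(this).updateFences(
        new FenceUpdateRequest.Builder()
                .addFence("stadium_drive", combined, fencePendingIntent)
                .build());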

6. Snapshot API

The Snapshot API is made possible through Awareness, and it’s a simple way to ask for multiple aspects of the user’s context. Again, an example: you find out what the current place is and what the current activity is. If the current place is a shopping mall and the activity is walking, maybe it’s time to start location updates so that you can tell the user, as they walk, which stores are nearby, or offer some discounts.

The thing is, you are using multiple inputs and multiple contexts, and that can get pretty expensive for battery because you’re running a lot of different things. If you use the Awareness API you can minimize the battery cost, because the Awareness API is actually pretty battery-optimized.
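A sketch of the snapshot calls (maybeStartUpdates() is hypothetical; both calls need the appropriate permissions):

SnapshotClient snapshot = Awareness.getSnapshotClient(this);

snapshot.getDetectedActivity().addOnSuccessListener(activityResponse -> {
    DetectedActivity activity = activityResponse
            .getActivityRecognitionResult().getMostProbableActivity();
    snapshot.getLocation().addOnSuccessListener(locationResponse -> {
        Location location = locationResponse.getLocation();
        maybeStartUpdates(activity, location);
    });
});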

7. Long-running (background) updates tied to specific locations

Long running updates

You want to find all the Starbucks in a city, or you want to find all the ATMs. Android has a solution that involves dynamic geofences. Location services impose a limit of 100 geofences at one time; there are of course many more ATMs and many more Starbucks than just a hundred, and maintaining 100 geofences is pretty expensive anyway: that’s a lot of scanning location services has to do, and that’s going to drain your battery. The solution is dynamic geofences: put a geofence around the city, and when the device enters that city, dynamically register geofences around the locations inside it. So you have the outer geofence, and you dynamically add the inner geofences; if the person leaves the city, you remove those inner geofences because you don’t need them anymore. This is a way to handle a lot of geofences in a very battery-efficient manner, get around the 100-geofence limit, and do some pretty amazing things.

8. Long-running background updates

The problematic one: you want long-running background updates with no visible app component. Basically, think of an app that passively tracks your location for hours or days at a time. This is the case that keeps people up at night; it is inherently problematic, and it is where you run into the problem I initially referred to: background location gathering is a really, really major drain on battery. But if you have to do it, how do you do it?

Solution: long-running service?

The fused location client exposes a method for getting location updates using a PendingIntent, and that’s exactly what you should use. You request location updates, passing a LocationRequest and a PendingIntent, and GMS Core location services will wake up your app when an update is found.

In cases like this, what should the location request look like? What can you do in the background that doesn’t burn battery? Use moderate accuracy, low frequency, and high latency. Let’s look at those three things now.

You do not want PRIORITY_HIGH_ACCURACY for any background use case; it is bad for battery.


For frequency, a good pattern would be to request updates a few times an hour, say every 15 minutes. You should also try to get more updates through passive location gathering; that’s why it’s a good idea to set the fastest interval to some small amount. That way, if other apps are gathering location, you get their locations for free; they don’t cost you anything.


Latency is really, really important. Again, imagine that you set your interval to 15 minutes. If you set the max wait time to one hour, you will get location updates every hour, but they will be updates that were computed every 15 minutes. That’s pretty good for the background, and it will save battery.
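Putting those three settings together with PendingIntent delivery (locationPendingIntent is an assumption):

LocationRequest request = LocationRequest.create()
        .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY)
        .setInterval(15 * 60 * 1000)     // compute every 15 minutes
        .setFastestInterval(60 * 1000)   // accept others' fixes up to 1/minute
        .setMaxWaitTime(60 * 60 * 1000); // deliver in hourly batches

// PendingIntent delivery lets location services wake the app up.
fusedClient.requestLocationUpdates(request, locationPendingIntent);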

9. Frequent updates while a user interacts

Frequent updates

What if you want frequent updates while the user interacts with other apps? Imagine a fitness app or a navigation app. In this kind of case, you should use a foreground service. This is the recommendation Android has come up with, because Android believes that when potentially expensive work is being done on behalf of the user, the user should be aware of that work. A foreground service, as you know, requires a persistent notification, so the user will be able to see that the work is happening.
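A sketch of such a service (CHANNEL_ID and the icon are assumptions; the actual update request is elided):

public class TrackingService extends Service {

    @Override
    public int onStartCommand(Intent intent, int flags, int startId) {
        // The persistent notification keeps the ongoing location work
        // visible to the user.
        Notification notification =
                new NotificationCompat.Builder(this, CHANNEL_ID)
                        .setContentTitle("Tracking your workout")
                        .setSmallIcon(R.drawable.ic_tracking)
                        .build();
        startForeground(1, notification);
        // ...request location updates here...
        return START_STICKY;
    }

    @Override
    public IBinder onBind(Intent intent) {
        return null;
    }
}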


Related Posts

Creating and Monitoring Geofences

How Location API Works in Android

Understanding Battery Drain when using Location


Understanding Battery Drain when using Location

Turn off location

Users simply turn off location on their devices, which means a lot of these apps either don’t work at all or work in a degraded manner.

Why do users do this?

Battery life is a huge problem

Because, fairly or not, they associate location with battery drain, and they think turning off location will help preserve battery. Location is used a lot; we know that.

The relationship between Battery drain and Location

Let’s look at the relationship between battery drain and location in a concrete way. As mentioned, with the Fused Location Provider you essentially tell it what you want: you make a location request, and it does the right thing in a battery-efficient way. So, essentially, this post is about what a good location request looks like: how do you tell the Fused Location Provider what it should do? The battery discussion can be anchored on three points: accuracy, frequency, and latency.


Accuracy

Accuracy is, of course, how accurate your location is: how fine do you want it to be? The way this works is that you take the location request you create and define a priority. There are a bunch of priorities you can choose from, and depending on what you choose, fused location will use different technologies under the hood to give you what you want.

The most accurate, state-of-the-art option is PRIORITY_HIGH_ACCURACY. This will use GPS if it is available: battery will lose, and accuracy will win. It is going to give you the most accurate location it knows how to compute. This is a good fit for the foreground, when you have a short-lived activity on screen. It is a terrible idea for the background, because it is going to be prohibitively expensive in terms of battery.

Related to that is another priority, PRIORITY_BALANCED_POWER_ACCURACY. This will rarely use GPS, relying mostly on Wi-Fi and cell instead. I would recommend that anyone writing location apps consider this the default: it gives you pretty good location without burning out your battery.

The next is PRIORITY_LOW_POWER. This will hit the cell network, not use a lot of Wi-Fi, and not use GPS at all. It gives you coarse location: you can’t say you’re within a few feet here or there, but you will be able to say you’re in this part of the city versus that part of the city. Depending on your use case, this may be all you need, in which case you should never request more expensive location updates than this.

The most interesting of all is PRIORITY_NO_POWER.

It says: give me location updates, but do not spend any power. How does this bit of magic work? In this case, what you’re saying to fused location is: don’t calculate any location for me, but if another app is computing it, let me know the results. That’s what PRIORITY_NO_POWER means, and it’s an incredibly good tool to have because it doesn’t cost your app anything.
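In code, the priorities look like this; pick the cheapest one that still serves the use case:

LocationRequest request = LocationRequest.create()
        .setPriority(LocationRequest.PRIORITY_BALANCED_POWER_ACCURACY);

// Alternatives:
//   LocationRequest.PRIORITY_HIGH_ACCURACY  // GPS; foreground only
//   LocationRequest.PRIORITY_LOW_POWER      // city-block accuracy
//   LocationRequest.PRIORITY_NO_POWER       // piggyback on other apps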


Frequency

Again, it is fairly simple to understand what this means: the more frequent your location updates, the more expensive it is for the battery. But there is a little more to it than that. Frequency is defined by setInterval(), and location services will try to honor that value. If you say give me location updates every two minutes, it will try to do that; if you ask for every 15 seconds, it will try to do that. Generally speaking, apps should pass the largest possible value to setInterval(), especially background apps. Intervals of a few seconds, 15 seconds, 30 seconds, are really something you should reserve for foreground use cases. Location services will do what you ask, so it is up to you to choose wisely. Now, if you set setInterval() to two minutes, there’s a caveat: that value is just a suggestion. Your location updates may be a little slower or a little faster. The way they can arrive faster is if another app is requesting location at a faster rate; that location will be brought to your app as well, because location data is shared between apps.

For that reason, there is another method we can call when building our location request: setFastestInterval(). It says: give me location, even if it is coming from another app, but no faster than what I’m specifying here. Here’s a little example.

You create a location request object and set its interval to five minutes. At this point, every five minutes your app will have a location computed for it. But if you also call setFastestInterval(), in this case with one minute, then whenever any other app requests location, that location will be made available to you as well, but no faster than once a minute. This is a pretty good way of not burning the battery yourself: you’re relying on other applications to do the work, and you get the locations they compute for free. It is a passive way of getting location, and a pretty powerful way of conserving battery.
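That example in code:

LocationRequest request = LocationRequest.create()
        .setInterval(5 * 60 * 1000)     // computed for us every 5 minutes
        .setFastestInterval(60 * 1000); // other apps' fixes at most 1/minute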


Latency

Latency is really about this: when location services have a location to give you, how quickly do you need it? How quickly do you want location updates delivered to you? Remember, we talked about setInterval(): when you set an interval of 30 seconds or two minutes, that’s what location services will try to use as the interval at which it computes location for you. There is also a method called setMaxWaitTime(), which is a way of having your locations delivered in batches some time after they have been computed. setInterval() is how often location is computed for you; setMaxWaitTime() is how often location is delivered to you.

Let me make this concrete with an example. Again, we create a location request and set the interval to five minutes. This means locations will be computed for you every five minutes; each time a new location is found, your app will be woken up and that location will be handed to it. If you set a max wait time of one hour, something different happens: your location will still be computed every 5 minutes, but it will be delivered to you in a batch every hour, and you will get 12 location data points, at least in theory. Instead of being woken up every five minutes, your app will be woken up every hour, which is dramatically better for battery consumption. Batching is a really, really good thing to use, especially for background cases where you don’t want your device to get woken up a lot.
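The batching example in code:

LocationRequest request = LocationRequest.create()
        .setInterval(5 * 60 * 1000)      // a fix computed every 5 minutes...
        .setMaxWaitTime(60 * 60 * 1000); // ...delivered as ~12 points per hour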

If you’re using geofencing, the equivalent knob is setNotificationResponsiveness(). If you don’t need your geofencing results to be immediate, you can allow a window before a geofencing result is given to your app. Set the responsiveness period to something high, and that is also a very good thing for battery.

This is the classic way you build a geofence: you set a circular region, set when it expires, set which transitions you want, and build it. But if to that you add setNotificationResponsiveness() with a sufficiently large value, it will make your geofencing all the more battery-efficient.
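A sketch of such a fence (coordinates are assumptions):

Geofence fence = new Geofence.Builder()
        .setRequestId("office")
        .setCircularRegion(latitude, longitude, /* radius meters */ 200)
        .setExpirationDuration(Geofence.NEVER_EXPIRE)
        .setTransitionTypes(Geofence.GEOFENCE_TRANSITION_ENTER
                | Geofence.GEOFENCE_TRANSITION_EXIT)
        // Tolerate up to 5 minutes of delay in exchange for battery savings.
        .setNotificationResponsiveness(5 * 60 * 1000)
        .build();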

To summarise that bunch of stuff: it is a fairly obvious thing, but the more frequent and more accurate your updates and the lower your latency, the more expensive it is for battery. In the foreground use case you can have it all: you can be as frequent, as accurate, and as low-latency as you want. But for everything else, you’re going to have to trade off on one or more of these, and that’s where you get to preserve battery.


Related Posts

Creating and Monitoring Geofences

How Location API Works in Android

Location Use Cases Best Practices


How Location API Works in Android

Location-based apps are absolutely everywhere: transportation apps, geo apps, navigation apps, weather apps, even dating apps all use location.

Location APIs currently allow developers to request location at virtually any time and make aggressive location requests with no barriers.

Background location has major power issues

Background location has been identified as a major contributor to battery drain and power issues. Aggressive use of background location is a major reason why people disable location on their devices. In response, the Android team, starting with Android O, put in place some fairly substantial limits on the gathering of background location.

What about pre O?

What about devices running earlier versions? The majority of devices are running Android N or lower, and for the foreseeable future that is going to be the case. This post is fundamentally about identifying best practices that you can use now in your Android apps when you use location, so that you write your apps in a battery-efficient manner. Let’s dive in.

Location APIs

Framework vs Fused location

For historical reasons, there are two ways you can get location in Android apps: framework location and fused location.

android framework location apis

Framework location is the older one; it has been there from the beginning. It is basically android.location.LocationManager, which gives you a wide API surface whereby you, as the app developer, decide: I want to use GPS, I want to use Wi-Fi, I want to use some sensor, and you get the location as you see fit. This type of location is not optimized for battery, and its direct use is discouraged. What you should use instead is the Fused Location Provider, which is available through GMS Core in com.google.android.gms.location.

Fused Location Provider

The Fused Location Provider presents a narrower surface and sits on top of the platform and hardware components. The way this works is that you tell the Fused Location Provider what kind of location you want, coarse or fine, and how frequently you want it. It figures out what underlying technologies to use and how to do this in the most battery-efficient way. This location provider is highly optimized for battery, and it is the one you should use.

How does Fused Location work?

There is a bunch of inputs that go into fused location: Wi-Fi, GPS, cell, and the accelerometer, gyroscope, and magnetometer sensors.


GPS

GPS works great outside. It has some trouble with cities and tall buildings, but under clear skies it works fantastically: super-accurate location, but terrible for battery. That is your trade-off: great location accuracy, but really bad for the battery.


Wi-Fi

The coverage for Wi-Fi is mostly indoors. The accuracy is pretty good: you can tell, using just Wi-Fi, where a person is in a building and what floor they’re on. The power consumption isn’t as bad as GPS, but Wi-Fi scans are fairly expensive. It is not free: it does cost something.


Cell

Cell is available indoors and outdoors, almost everywhere. The accuracy with cell is not so great: you’re not going to get a location which is accurate to within a few feet, but you will get the location to a neighborhood level or a city block, etcetera. But it is great for power consumption; it uses very, very little power, so it is fantastic for that.


Sensors

Then you have the sensors, which play an extremely important role in making the Fused Location Provider do the right thing, and do the right thing for battery. You have the accelerometer, which measures changes in velocity and position; the gyroscope, which measures changes in the orientation of the device; and the magnetometer, which allows you to use the device as a compass. By and large, these sensors have very, very little battery cost. The Fused Location Provider uses them in conjunction with Wi-Fi and GPS to do the best it can with minimal battery usage.

If you request fine location, accurate to within a few meters, fused location will use GPS and Wi-Fi, but GPS and Wi-Fi work better when combined with sensors. For instance, GPS is a little bit jumpy when you’re in an environment with tall buildings: imagine Hong Kong, Mumbai, New York City. Those are challenging environments for GPS. When GPS gets a little flaky, fused location, instead of making expensive GPS scans, will say, “Let me see what the sensor data tells me. What is the accelerometer telling me?” It pieces together a pretty good sense of what is happening. The same goes for Wi-Fi: it can be a bit jumpy too, and when it is, the Fused Location Provider will not do excessive Wi-Fi scans but instead look at the sensor data and at what the device might be doing.

Indoor maps work somewhat like that. There was a time when Google Maps, if you went to a shopping mall, would just say you’re in this mall; now it says you’re right here in this shopping mall, on the third floor. A lot of that is driven by sensors. If the location had to be polled constantly, if Wi-Fi scans had to be done constantly, that would be terrible for battery. It doesn’t have to do that: once it gets a Wi-Fi fix, it can look to the sensor data, in a battery-efficient way, to tell where you are and whether you are turning or moving, etcetera.

The summary of this is: where possible, given the choice between framework location and fused location, you should always use fused location.


Geofencing

There’s one higher-level API, the geofencing API, and it should be an important tool for anyone building location apps. It lets you define a circular region somewhere and say: whenever the device enters or leaves this region, or sits in this region for a certain number of hours, do something, let me know. That is basically how geofencing works. Geofencing is built on top of fused location, and it’s highly optimized for battery. The way it works is that the API monitors device proximity to a geofence; the closer you are to the geofence, the more expensive it is. It figures out: what is your speed? Are you in a car? Are you walking? How far are you from the geofence? It optimizes for battery in terms of monitoring the geofence in the background.

Related Posts

Creating and Monitoring Geofences

Understanding Battery Drain when using Location

Location Use Cases Best Practices