What’s new in MQTT 5

MQTT 5 is pretty new, and a lot of things have changed. In this blog we will walk through the most important changes in MQTT 5.

I guess most of you are already familiar with MQTT, but for those who are not quite sure what MQTT is, what it's about, and what its main principles are, let's start with a quick refresher. This is also important for understanding some of the changes in MQTT 5.

MQTT Overview

MQTT is an IoT messaging protocol that gained massive traction in the last few years. It's mostly used for device-to-cloud and cloud-to-device communication, and sometimes for direct device-to-device communication. It has a lot of features which are good for mobile network use cases, where you have devices out in the field — like cars, gateways, or other physical hardware — which need to connect via mobile networks to backend services.


MQTT has three quality of service (QoS) levels, which you define on the application level. You can send messages fire-and-forget, or you can make sure that a message arrives at least once or exactly once at the backend.

Retained Messages

One of the nice features unique to MQTT is retained messages, which essentially allow you to save messages on your MQTT message broker: the broker keeps the last retained message on a topic and delivers it to new subscribers.

Persistent offline sessions

MQTT has a feature which allows a client to come back after a disconnect: the broker remembers the client and also sends out the messages the client missed in the meantime. In general, from an application perspective, you can program with MQTT as if you never lost the connection.

Binary Protocol

MQTT is a binary protocol with minimal overhead, so it's really tiny and saves a lot of bandwidth.

MQTT Use Case

MQTT is a very good protocol for constrained devices. If you don't have much computing power or memory, then MQTT is a very good protocol choice. This typically means physical hardware with a few megabytes of memory.

Push Communication

A typical use case for MQTT is push communication: reliable communication over unreliable networks, mainly mobile networks.

Low Bandwidth and High Latency

It also scales extremely well on the backend; some MQTT brokers allow scaling to more than 10 million concurrently connected devices. Especially in a mobile network, you often have very low bandwidth and high latency, and MQTT makes sure you get the best out of it: it doesn't waste any bandwidth and performs well even under those conditions.


Publish/Subscribe Protocol

MQTT uses the publish/subscribe mechanism. We have an MQTT broker in the middle, which is responsible for distributing data across clients.

We have producing clients, like a temperature sensor here. It publishes data to the MQTT broker, and the broker distributes the data to devices — in this case a laptop or a mobile device, but it could be a car; it's really anything that can connect to the internet.

These devices work via a subscription mechanism. The laptop or mobile device subscribes to the MQTT broker and says, "hey, I'm interested in a particular data set," and the broker makes sure that only the data the clients are interested in is forwarded to them. You get a decoupling here. What is also very important to know about MQTT: you have a standing TCP connection, so all of the devices which talk to the MQTT broker are connected all the time. This is different from other protocols like HTTP, which typically close the connection after finishing their work.
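To make the routing idea concrete, here is a toy publish/subscribe broker sketched in Python. The topic and client names are invented for illustration, and a real MQTT broker additionally supports wildcards, QoS, retained messages, and standing TCP connections — this only shows the decoupling idea:

```python
class Broker:
    """Toy pub/sub broker: publishers and subscribers never talk
    to each other directly; the broker routes every message."""

    def __init__(self):
        self.subscriptions = {}  # topic -> set of client names
        self.inboxes = {}        # client name -> received messages

    def subscribe(self, client, topic):
        # The client declares interest in a topic.
        self.subscriptions.setdefault(topic, set()).add(client)
        self.inboxes.setdefault(client, [])

    def publish(self, topic, payload):
        # Only clients that subscribed to this topic get the message.
        for client in self.subscriptions.get(topic, ()):
            self.inboxes[client].append((topic, payload))


broker = Broker()
broker.subscribe("laptop", "home/livingroom/temperature")
broker.subscribe("phone", "home/livingroom/temperature")
broker.publish("home/livingroom/temperature", "21.5")
```

The temperature sensor only knows the topic it publishes to; it never learns who, if anyone, is listening.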

MQTT 5 Overview

MQTT 5 is the successor of MQTT 3.1.1. It is not backward compatible with the old specification and was officially released in January 2018. It also brought a lot of clarifications to the MQTT 3.1.1 specification, to make sure that implementers get everything right.

MQTT 5 Goals

The goals of MQTT 5 are enhanced scalability and improved error reporting. Error reporting was one of the most wished-for features by users, because MQTT 3.1.1 has some blind spots when it comes to error handling. MQTT 5 did a lot of work here, and it also formalized common patterns like request/response.

One of the most wished-for features is the extensibility of the protocol, because MQTT 3 had no headers like you know from HTTP. MQTT 3.1.1 wasn't that flexible; this has changed. Performance improvements for small clients are also a big part.

MQTT is very easy to program on the client side, but it's important to note that implementing an MQTT broker is not as easy as it sounds.

MQTT 5 has some enhancements for scalability. MQTT 3 brokers scale up to 10 million devices, which is already a lot, but we also expect that MQTT 5 allows us to scale even beyond the magic 10 million concurrent connections.

Foundational Changes

Before digging into the specific features, let's talk about the foundational changes of the MQTT protocol.

Negative acknowledgments

The first foundational change is negative acknowledgments. I already mentioned that the error reporting in MQTT 3.1.1 wasn't optimal: there was no way for the broker to tell a client "hey, you're doing something wrong." So MQTT 5 defines negative acknowledgments. A broker can now actually notify the client that something went wrong, and also what went wrong. For many use cases this is very important, especially production use cases where it's hard to debug what happened. The client can react when something weird happens, and likewise the client can send negative acknowledgments to the broker if something bad happens.

Return Code

Another foundational change is return codes for unsupported features. When a client connects to an MQTT broker that does not allow or does not implement all MQTT features — which is pretty common on typical cloud platforms like AWS — it's possible to tell the client "this feature is not available," or "this is not available for you," for example if you have permission control and do not allow certain clients to subscribe or publish.

For example, an MQTT 5 broker can now:

  • restrict the retained message feature, turning it off for a specific client,
  • define a maximum quality of service level for the client,
  • restrict the use of wildcard subscriptions,
  • restrict subscription identifiers and shared subscriptions.

A client can also be notified of the maximum message size the broker supports, and of the keep alive value the server expects, which is very important because keep alive also changes with MQTT 5.

The broker can tell the client how often it should send ping (heartbeat) messages, which the broker uses to recognize when a client is offline.

Another notable change: MQTT 5 does not allow retransmitting quality of service 1 and 2 messages on a live connection. This may come as a surprise to some people, because we noticed that many projects rely on retries which are not allowed with MQTT 5. So this is a common pitfall when upgrading.

Passwords are now allowed to be sent without a username, which may be interesting for sending tokens. Brokers are now also allowed to send disconnect packets. Traditionally, with MQTT 3, the broker had no way to disconnect a client gracefully: the client connects, and when it decides "I want to disconnect now" in a graceful way, it sends a DISCONNECT packet to the broker. Now the broker is also allowed to send a DISCONNECT packet back to the client and tell it the reason why it was disconnected. This is new, and it is used heavily for negative acknowledgments.

New Features in MQTT 5

Let's talk about some of the features in MQTT 5. We cannot dig into all of them, because there are more than 25 features available, which would take a lot of time, but I want to highlight some of them in more detail.

Session & Message Expiry

One of the most interesting features of MQTT 5 is session and message expiry.
MQTT 3.1.1 allows two kinds of sessions. We have a clean session, which ends when the client disconnects, so the broker does not need to remember anything about the client; and we have a persistent session, which the broker saves and persists while a client is offline, so when the client comes back it can just resume the session. So essentially what we get here is state on the broker side.

The problem here: if we have some clients which connect to the broker, disconnect, and never come back, the broker can have a very hard time, because it needs to persist the data until the client reconnects. When a client never reconnects, we essentially get a problem. Most brokers, like Mosquitto, allow setting a time to live for a session on an administrative level, so the broker can clean up after this time.

In MQTT 5 this feature went into the specification itself, and now all brokers must implement a cleanup, because this is needed on the broker side — otherwise it would open the door to denial of service attacks. So the client can say: "when I'm connecting to this broker, I want a session expiry interval of, let's say, 100 seconds, or 10 minutes, or one day," and the broker is allowed to delete the session after this time.

Message expiry solves a similar problem. Let's assume we have a car which is offline for one week, and when it comes back it doesn't really need to get all queued messages — perhaps it needs some messages and not others. Now the sending client can attach a message expiry to a published message and say to the broker: "if this message's time to live is over, do not send it out anymore." This allows the broker to clean up messages, especially in message queues, and it also saves a lot of bandwidth.
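The broker-side bookkeeping can be sketched in a few lines of Python. The field names below are invented for illustration, but the rule is the one described above: a queued message is dropped once its expiry interval has elapsed, and messages without an expiry are kept:

```python
import time


def expired(message, now=None):
    """Broker-side check: has this queued message outlived its
    message expiry interval (in seconds)?"""
    now = time.time() if now is None else now
    if message.get("expiry_interval") is None:
        return False  # no expiry set: keep the message indefinitely
    return now - message["queued_at"] > message["expiry_interval"]


queue = [
    {"payload": b"fresh", "queued_at": 1000.0, "expiry_interval": 60},
    {"payload": b"stale", "queued_at": 1000.0, "expiry_interval": 10},
    {"payload": b"keep",  "queued_at": 1000.0, "expiry_interval": None},
]

# 30 seconds later, the 10-second message is gone; the others survive.
deliverable = [m for m in queue if not expired(m, now=1030.0)]
```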

Negative Acknowledgment

Return codes for all acknowledgments are essential.


Now the broker and the client can say, "sorry, we have a problem here, and this is the problem." There are reason strings defined which are human-readable; the broker and the client can send them if they want, but they don't need to. They also send a return code. These return codes are specified — most of them are error codes — and every client and broker must be able to react to them. So if you are a client and the broker disconnects you because you were, let's say, idle for too long, you have the possibility on the client side to adjust the interval for sending heartbeats.
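A few of these reason codes can be sketched as a simple lookup table. The numeric values below are from the MQTT 5 specification (codes below 0x80 indicate success, 0x80 and above indicate errors); the selection is illustrative, and the spec defines many more:

```python
# A small illustrative subset of MQTT 5 reason codes.
REASON_CODES = {
    0x00: "Success",
    0x80: "Unspecified error",
    0x87: "Not authorized",
    0x89: "Server busy",
    0x8D: "Keep Alive timeout",
    0x97: "Quota exceeded",
}


def is_error(code):
    """Per the spec, reason codes of 0x80 and above are errors."""
    return code >= 0x80
```

A client that receives 0x8D ("Keep Alive timeout") in a DISCONNECT, for example, knows it should shorten its heartbeat interval before reconnecting.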

Payload format indication & Content Type

What you get here is a content type, similar to MIME types: "this is a JPEG picture" or "this is text." It can be sent as meta information, so you can indicate what kind of content a message carries. So we get two new headers in MQTT: the content type, which is the MIME type, and a payload format indicator, which tells whether the payload is text or binary and is mostly interesting for debugging purposes.

Request and Response

It is possible to send hints for request/response. A publisher can attach metadata — request/response information — to a message, and the receiver of the message can use that information to send the answer to the right topic.

Shared Subscription

Shared subscriptions are a very interesting concept for scaling out backend subscribers.
Let's assume we have a publisher which sends a lot of messages — a high-frequency topic. The problem which could arise is that the backend clients cannot process the data this fast, for example because they write to a database which is slow at the moment. What to do? How can we scale this? With MQTT 3 you cannot scale this without shared subscriptions.

MQTT 5 has a logical, virtual shared topic: a shared subscription. A client can decide "I want to share my subscription with other clients," and then the broker sends one message to one client and the next message to another client. If you have a stream of, let's say, 1,000 messages/second on a topic and a shared subscription with two clients, then each of these clients gets 500 messages/second; if you scale out to three clients, each of these backend instances gets roughly 333 messages/second, and so on. This is a way to elastically scale the backend subscribers up and down for topics which have a lot of traffic.
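The spec does not mandate how a broker balances a shared subscription group; a simple round-robin strategy, which produces exactly the 500/500 and roughly 333-per-client splits described above, can be sketched like this:

```python
from itertools import cycle


def distribute(messages, subscribers):
    """Round-robin a message stream across the members of a shared
    subscription group (one possible broker strategy; the MQTT 5
    spec leaves the balancing algorithm up to the broker)."""
    assignment = {s: [] for s in subscribers}
    for message, subscriber in zip(messages, cycle(subscribers)):
        assignment[subscriber].append(message)
    return assignment


# 1,000 messages over two backend instances: 500 each.
loads = distribute(range(1000), ["backend-1", "backend-2"])
```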

User Properties

MQTT 5 has headers, and it's possible to use user-defined properties: just like with HTTP, you can add any data you want to your MQTT packet. Let's say you have a publisher which adds some application-specific identifiers that you want to read in the backend without looking into the whole payload of the packet — the backend applications can just read the header without decoding the whole message. You can add an unlimited number of user properties, which is a bit controversial.

Other Features

Topic Alias: The client can choose to shrink topics. If you have a very long topic that is published repeatedly, clients can save a lot of bandwidth by just using a short numeric alias instead.
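The sender-side bookkeeping can be sketched like this (a toy illustration, not real wire-format code): the full topic string travels only once, together with a small integer alias, and every later publish carries just the alias:

```python
class TopicAliasCache:
    """Sketch of the sender side of MQTT 5 topic aliases."""

    def __init__(self):
        self.aliases = {}
        self.next_alias = 1

    def encode(self, topic):
        if topic in self.aliases:
            # Alias already established: omit the topic string entirely.
            return "", self.aliases[topic]
        alias = self.next_alias
        self.next_alias += 1
        self.aliases[topic] = alias
        # First use: send the full topic together with its new alias.
        return topic, alias


cache = TopicAliasCache()
first = cache.encode("very/long/topic/for/sensor/42/temperature")
repeat = cache.encode("very/long/topic/for/sensor/42/temperature")
```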

MQTT 5 has flow control, so a client can decide how many in-flight messages it can actually receive.

It has maximum message size indication and an authentication flow.

We also get the will delay: you can tell the broker, "please wait 5 seconds before sending out the last will message."

The broker can tell the client which keep alive value it expects; the broker can also overwrite the client identifier.


Unfortunately, I have not yet found any MQTT 5 broker except the Eclipse Paho test broker.


Related Post

Android MQTT Client


Calling REST API from a Flutter App

Building Flutter applications whose content is static can be a bad idea. Instead, you should consider building applications that can fetch content from a web server. That might sound hard, but with web servers exposing their resources through REST APIs, it's actually quite easy.

In this tutorial, I’m going to show you how to use the classes and methods available in the Flutter SDK to connect to remote web servers and interact with them using their REST APIs.

Calling REST API from a Flutter App

Creating an HTTP Connection

To create an HTTP Client we need to add an import.

The following code snippet shows you how to setup a connection with the GitHub API endpoint:

Note that the HTTP APIs use Dart Futures in their return values. Flutter recommends using these API calls with the async/await syntax.


Responsive UI

Network calls are slow. It doesn't matter where you are on your phone — sometimes it will just be slow, or your server might be slow, and you just don't want to show a white page, so you want to show a progress bar. The way you do that currently in Flutter is with a showLoading flag: if showLoading is true in your build function, you show a spinner animation; otherwise you show your entire widget tree. Now, I didn't want to write that over and over again, so I started looking for a solution, and what I came across was a great library called async_loader. The following code snippet shows how to use it.


To use this plugin, add async_loader as a dependency in your pubspec.yaml file.

Create instance

You need to create an AsyncLoader in your build function. It has a few parameters: initState, renderLoad, renderError, and renderSuccess. initState is basically: as the widget is initializing, what data do you want to load? renderLoad is: while the data is being loaded, what do you want to show? In renderLoad, I show a progress bar. renderError is: if something went badly wrong, what do you want to do? Here, for the sake of the demo, I just show a plain "error loading conversation" text, so you see a boring old error message on the page; what you typically want is some sort of nice little graphic that says "oh, something went wrong, please press back." Finally, when all your data is loaded, renderSuccess is called with the data you returned in initState, and you can take that data and render your entire UI.

JSON Parsing

In Android, you can use GSON for JSON parsing. There's nothing like it in Flutter that I found, mainly because Flutter doesn't have reflection. I'm a lazy developer; I did not want to write down all these single data types over and over again and build out fromJson and toMap methods by hand. Fortunately, there is a great library called json_serializable.

Setting up json_serializable

To include json_serializable, you need one regular dependency and two dev dependencies. Dev dependencies are dependencies that are not included in your app's source code.

Check the latest versions.


Click “Packages Get” in your editor to make these new dependencies available in your project.

Convert the User class to a json_serializable class.


When creating json_serializable classes for the first time, you will get errors, because the generated code for the model class does not exist yet. To resolve this, run the code generator that generates the serialization boilerplate for us.

By running flutter packages pub run build_runner build (in a command prompt) in the project root, you can generate the JSON serialization code for your models whenever needed.

Consuming json_serializable models

To deserialize a JSON string the json_serializable way, we do not actually have to make any changes to our previous code.

Same goes for serialization. The calling API is the same as before.


Download Project from GitHub


Room database Migrating

When upgrading your Android application you often need to change its data model. When the model is stored in SQLite database, then its schema must be updated as well.
However, in a real application, migration is essential, as we want to retain the users' existing data even when they upgrade the app.

The Room persistence library allows you to write Migration classes to preserve user data in this manner. Each Migration class specifies a startVersion and endVersion. At runtime, Room runs each Migration class’s migrate() method, using the correct order to migrate the database to a later version.

A migration can handle more than one version (e.g., if you have a faster path when going from version 3 to 5 without going through version 4). If Room opens a database at version 3 and the latest version is >= 5, Room will use the migration object that can migrate from 3 to 5 instead of migrating from 3 to 4 and then from 4 to 5.

If there are not enough migrations provided to move from the current version to the latest version, Room will clear the database and recreate it. So even if you have no changes between two versions, you should still provide a Migration object to the builder.
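Under the hood, a migration boils down to plain SQL against the SQLite database. As a sketch — using Python's sqlite3 module and an invented users table, not Room's actual generated code — this is roughly what a Migration(1, 2) that adds a column does:

```python
import sqlite3


def migrate_1_to_2(db):
    # Equivalent of a Room Migration(1, 2): add a column to an
    # existing table without touching the rows already stored.
    db.execute(
        "ALTER TABLE users ADD COLUMN last_update INTEGER NOT NULL DEFAULT 0")
    db.execute("PRAGMA user_version = 2")


db = sqlite3.connect(":memory:")
db.execute("PRAGMA user_version = 1")
db.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, name TEXT)")
db.execute("INSERT INTO users (name) VALUES ('alice')")

migrate_1_to_2(db)

# Existing rows survive the migration and pick up the default value.
rows = db.execute("SELECT name, last_update FROM users").fetchall()
```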

Create New Entity Or Add New Columns 

The following code snippet shows how to define an entity:

The migrate() method is already called inside a transaction, and that transaction might actually be a composite transaction of all necessary Migrations.

After the migration process finishes, Room validates the schema to ensure that the migration occurred correctly. If Room finds a problem, it throws an exception that contains the mismatched information.


Related Post

Room Persistence Library

How to use DateTime datatype in SQLite Using Room

Room: Database Relationships


TensorFlow Lite

What is TensorFlow?

The easiest and fastest way to implement machine learning or AI-powered applications running on mobile phones may be to use TensorFlow, the open source library for machine learning. TensorFlow is Google's standard framework for building new ML or AI based products. It was created by the Google Brain team, and Google open-sourced it in 2015. TensorFlow is scalable and portable: you can get started by downloading the TensorFlow code on your laptop and trying out some sample code, and then move your models to production-level use cases using GPUs. After training, you can take the model — which consists of tens of megabytes of data — and port it to mobile and embedded systems.

Neural Network for Mobile

If you want to bring TensorFlow into your mobile applications, there are some challenges you have to face. Neural networks are big compared with other classic machine learning models, because deep learning uses multiple layers, so the total number of parameters and the amount of calculation can be large. For example, Inception V3, one of the popular image classification models, requires 91 MB. And if you use TensorFlow without any changes, the binary code by itself consumes around 12 MB. If you want to bring your mobile application to production, you don't want users downloading 100 MB just to start using it; you may want to compress everything down to around 10-20 MB. So Google had to think about optimizations for mobile applications: freezing the graph, quantization, memory mapping, and selective registration.

Freeze Graph

Freezing the graph means removing all the variables from the TensorFlow graph and converting them into constants. TensorFlow keeps the weights and biases — the parameters inside the neural network — as variables, because during training you want to update them with training data. But once training is finished, you no longer need them to be variables, and you can turn everything into constants. By converting variables to constants, you get much faster loading time.

Quantization in TensorFlow

Quantization is another optimization you can apply for mobile. Quantization means compressing the precision of each variable — the parameters, weights, and biases — into fewer bits. For example, by default TensorFlow uses 32-bit floating point numbers to represent weights and biases, but with quantization you can compress them into 8-bit integers. Using 8-bit integers shrinks the size of the parameters much, much smaller, and especially on embedded or mobile systems it's important to use integer rather than floating point numbers for calculations such as multiplications and additions between matrices and vectors, because floating point hardware requires a much larger footprint in implementation. TensorFlow already provides primitive data types supporting quantization of parameters and operations: quantize and de-quantize operations, and operations that work directly on quantized variables.
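The idea behind quantization can be sketched in a few lines of plain Python. This is a simplified affine quantization, not TensorFlow's actual implementation: each float is mapped onto an integer in [0, 255] using a scale and an offset, and the round trip loses at most one quantization step of precision:

```python
def quantize(weights, bits=8):
    """Affine quantization sketch: map floats onto integers in
    [0, 2**bits - 1] using a scale and a minimum offset."""
    lo, hi = min(weights), max(weights)
    scale = (hi - lo) / (2 ** bits - 1)
    q = [round((w - lo) / scale) for w in weights]
    return q, scale, lo


def dequantize(q, scale, lo):
    """Recover approximate floats from the quantized integers."""
    return [v * scale + lo for v in q]


weights = [-1.0, -0.25, 0.0, 0.5, 1.0]
q, scale, lo = quantize(weights)
restored = dequantize(q, scale, lo)
# 'restored' is close to 'weights', within one quantization step,
# while each value now fits in a single byte instead of 4 bytes.
```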

What is TensorFlow Lite?

We know that machine learning adds great power to your mobile application — and with great power comes great responsibility. TensorFlow Lite is a lightweight ML library for mobile and embedded devices. TensorFlow works well on large devices, while TensorFlow Lite works really well on small devices: it's easier, faster, and smaller to work with on mobile.

What is different between TensorFlow mobile and TensorFlow Lite?

You should view TensorFlow Lite as an evolution of TensorFlow Mobile — it's like the next generation, created to be really small in size and optimized for smaller devices.

TensorFlow Lite came up with three goals. It wants to have a very small memory and binary size, even without selective registration. It wants to make sure that the overhead latency is really small — you really can't wait 30 seconds for an inference to happen by the time the model is downloaded and processed. And quantization is a first-class citizen: it supports quantization, and many of the supported models are quantized models.

TensorFlow Lite Architecture

This is the high-level architecture; as you can see, it's simplified, and it works for both Android and iOS. It is lightweight, performs better, and leverages hardware acceleration if available.

To better understand how to use it, let's consider how to build a model with TensorFlow Lite. There are two aspects — the workstation side and the mobile side — so let's walk through the complete lifecycle.

The first step is to decide what model you want to use. If you use an already pre-trained model, you can skip this step, because the model generation is already done. One option is to use a pre-trained model; another would be to retrain just the last layers, as in the earlier post. You can also write your own custom model, train it, and generate a graph. This is nothing specific to TensorFlow Lite — it's standard TensorFlow, where you build a model and generate graph defs and checkpoints.

The next step, specific to TensorFlow Lite, is to convert the generated model into a format TensorFlow Lite understands. A prerequisite to converting it is to freeze the graph: the checkpoints hold the weights, and the graph def holds the variables and tensors, and freezing the graph is the step where you combine these two results. The output is fed to the converter, which is provided as part of the TensorFlow Lite software. Once the conversion step is completed, you will have what is called a .lite binary file.

Now you have something you can move to the mobile side. You feed this TensorFlow Lite model into the interpreter, and the interpreter executes the model using a set of operators. It supports selective operator loading: without the operators it's only about 70 KB, and with all the operators about 300 KB — a significant reduction from TensorFlow, which is over 1 MB at this point. You can also implement custom kernels using the API. If the interpreter is running on a CPU, the model is executed directly on the CPU; otherwise, if hardware acceleration is available, it can be executed on the accelerated hardware.

Components of TensorFlow Lite

The main components of TensorFlow Lite are the model file format, the interpreter for processing the graph, a set of kernels the interpreter can invoke, and an interface to the hardware acceleration layer.

TensorFlow Lite has a special model file format, which is lightweight and has very few dependencies. Most graph calculations are done using 32-bit floats, but neural networks are trained to be robust to noise, and this allows us to explore lower-precision numerics. The advantages of lower precision are lower memory use and faster computation, which is vital for mobile and embedded devices; using lower precision can result in some loss of accuracy, but depending on the application you want to develop, you can account for the quantization loss in your training and get better accuracy. Quantization is supported as a first-class citizen in TensorFlow Lite. TensorFlow Lite is also FlatBuffer-based, for speed of execution.


FlatBuffers is an open source Google project comparable to protocol buffers, but much faster and much more memory-efficient. In the past, when we developed applications, we always thought about optimizing for CPU instructions, but CPUs have moved far ahead, and writing something memory-efficient is more important today. FlatBuffers is a cross-platform serialization library, similar to protobufs but designed to be more efficient: you can access the data without unpacking it, and there is no need for a secondary representation before you access it. It is aimed at speed and efficiency, and it is strongly typed, so you can find errors at compile time.


The interpreter is engineered to work with low overhead on very small devices. TensorFlow Lite has very few dependencies, and it is easy to build on simple devices. The binary size is kept at 70 KB, or 300 KB with all operators.

It uses FlatBuffers, so it can load really fast, but that speed comes at the cost of flexibility: TensorFlow Lite supports only a subset of the operators that TensorFlow has. So if you are building a mobile application and the operators it needs are supported by TensorFlow Lite, the recommendation is to use TensorFlow Lite; if your application is not supported by TensorFlow Lite yet, you should use TensorFlow Mobile. But going forward, TensorFlow Lite is going to be the main standard for all developers.


It has support for the operators used in common inference models, but the set of operators is smaller, so not every model will be supported. In particular, TensorFlow Lite provides a set of core built-in ops that have been optimized for ARM CPUs using NEON, and they work in both float and quantized forms. These have been used by Google apps, so they are battle-tested. Google has hand-optimized many common patterns and has fused many operations to reduce memory bandwidth. If there are ops that are unsupported, TensorFlow Lite also provides a C API, so you can write your own custom operators.

Interface to Hardware Acceleration

It targets custom hardware through the Neural Networks API. TensorFlow Lite comes pre-loaded with hooks for the Neural Networks API: if you have an Android release that supports the NN API, TensorFlow Lite will delegate the supported operators to the NN API, and if you have an Android release that does not support the NN API, everything is executed directly on the CPU.

Android Neural Network API

The Android Neural Networks API is supported on Android starting with the 8.1 release in Oreo. It supports various hardware accelerators you can get from vendors — GPUs, DSPs, and CPUs. It uses TensorFlow as a core technology, so for now you can keep using TensorFlow to write your mobile app, and your app will get the benefits of hardware acceleration through the NN API. It basically abstracts the hardware layer for ML inference: for example, if a device has an ML DSP, it can transparently map onto it, and it uses NN primitives that are very similar to TensorFlow Lite's.


The architecture for the Neural Networks API looks like this: essentially, there's an Android app on top, which typically has no need to access the Neural Networks API directly. Instead, it accesses it through the machine learning interface — the TensorFlow Lite interpreter and the NN runtime. The neural network runtime talks to the hardware abstraction layer, which in turn talks to the device and runs the various accelerators.


Related Post

Image Classify Using TensorFlow Lite

Introduction TensorFlow Machine Learning Library

Install TensorFlow

Train Image classifier with TensorFlow

Train your Object Detection model locally with TensorFlow

Android TensorFlow Machine Learning


How to use DateTime datatype in SQLite Using Room

One of the most interesting and confusing data types that SQLite does not support is Date and Time. I see more questions in online public discussion forums about this type than any other. In this article, I shed light on some very confusing issues regarding select queries using dates.

Date and Time Datatype in SQLite

SQLite does not have a storage class set aside for storing dates and/or times. Instead, the built-in Date And Time Functions of SQLite are capable of storing dates and times as TEXT, REAL, or INTEGER values:

  • TEXT as ISO8601 strings (“YYYY-MM-DD HH:MM:SS.SSS”).
  • REAL as Julian day numbers, the number of days since noon in Greenwich on November 24, 4714 B.C. according to the proleptic Gregorian calendar.
  • INTEGER as Unix Time, the number of seconds since 1970-01-01 00:00:00 UTC.

Applications can choose to store dates and times in any of these formats and freely convert between formats using the built-in date and time functions.
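The built-in functions convert between all three representations. Here is a quick demonstration using Python's sqlite3 module, with the example timestamp taken from the SQLite documentation:

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# One moment in time, expressed as all three storage classes:
# INTEGER (Unix time) -> TEXT, TEXT -> REAL (Julian day),
# and TEXT -> INTEGER (Unix time, returned as a string).
row = conn.execute(
    "SELECT datetime(1092941466, 'unixepoch'), "
    "julianday('2004-08-19 18:51:06'), "
    "strftime('%s', '2004-08-19 18:51:06')"
).fetchone()
```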

Using type converters

Sometimes, your app needs to use a custom data type, like Datetime whose value you would like to store in a single database column. To add this kind of support for custom types, you provide a TypeConverter, which converts a custom class to and from a known type that Room can persist.

For example, if we want to persist instances of Date, we can write the following TypeConverter to store the equivalent text in the database:
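A minimal sketch of such a converter, assuming dates are stored as TEXT in the "yyyy-MM-dd HH:mm:ss" format. In a real project, both methods would also carry Room's @TypeConverter annotation:

```java
import java.text.ParseException;
import java.text.SimpleDateFormat;
import java.util.Date;
import java.util.Locale;

public class DateConverter {

    // One ISO8601-style pattern shared by both conversion directions.
    private static final SimpleDateFormat FORMAT =
            new SimpleDateFormat("yyyy-MM-dd HH:mm:ss", Locale.US);

    // Date -> TEXT column. Annotate with @TypeConverter in a Room project.
    public static String fromDate(Date date) {
        return date == null ? null : FORMAT.format(date);
    }

    // TEXT column -> Date. Annotate with @TypeConverter in a Room project.
    public static Date toDate(String value) {
        if (value == null) return null;
        try {
            return FORMAT.parse(value);
        } catch (ParseException e) {
            return null; // malformed text in the column
        }
    }
}
```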

The preceding example defines two functions: one that converts a Date object to a String object, and another that performs the inverse conversion, from String to Date. Since Room already knows how to persist String objects, it can use this converter to persist values of type Date.

Next, add the @TypeConverters annotation to your database class so that Room knows about the converter you’ve defined and can apply it to every entity field of that type.

Note: You can also limit the @TypeConverters to different scopes, including individual entities, DAOs, and DAO methods.

1. SQLite query to select data between two dates

I have a start_date and an end_date. I want to get the list of rows whose dates fall between these two dates. Put those two dates between single quotes, like this:
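A sketch of such a query; the table name user and the TEXT column created_at are assumptions:

```sql
-- Rows whose created_at lies between the two dates (inclusive)
SELECT *
FROM user
WHERE created_at BETWEEN '2017-01-01' AND '2017-12-31';
```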

2. SQLite query to compare dates

3. SQLite query to group by year

4. SQLite query to select data for a specific year

5. SQLite query to get last month's data

6. SQLite query to order by date

7. SQLite query to calculate age from a birth date
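Hedged sketches for the query patterns above, again assuming a user table with TEXT columns created_at and birth_date stored in the ISO8601 format:

```sql
-- 2. Compare dates
SELECT * FROM user WHERE date(created_at) > date('2017-06-01');

-- 3. Group rows by year
SELECT strftime('%Y', created_at) AS year, COUNT(*) AS total
FROM user GROUP BY year;

-- 4. Rows for a specific year
SELECT * FROM user WHERE strftime('%Y', created_at) = '2017';

-- 5. Rows from the previous calendar month
SELECT * FROM user
WHERE created_at >= date('now', 'start of month', '-1 month')
  AND created_at <  date('now', 'start of month');

-- 6. Order by date, newest first
SELECT * FROM user ORDER BY date(created_at) DESC;

-- 7. Age in whole years from birth_date
SELECT (strftime('%Y', 'now') - strftime('%Y', birth_date))
       - (strftime('%m-%d', 'now') < strftime('%m-%d', birth_date)) AS age
FROM user;
```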


Download this project from GitHub.


Related Post

Room Persistence Library

Room: Database Relationships

Room database Migrating


ConstraintLayout 1.1.0: Circular Positioning

Android just published ConstraintLayout 1.1.0 beta 3 on the Google Maven repository. One of the more interesting additions in this release is circular positioning, which allows you to constrain a widget's center relative to another widget's center, at an angle and a distance. This lets you position a widget on a circle.

ConstraintLayout Circular constraints

Add ConstraintLayout to your project

To use ConstraintLayout in your project, proceed as follows:

1. Ensure you have the maven.google.com repository declared in your project-level build.gradle file:
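For example (a sketch of the project-level build.gradle):

```groovy
allprojects {
    repositories {
        // Google's Maven repository; on older Gradle plugin versions use
        // maven { url 'https://maven.google.com' }
        google()
    }
}
```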

2. Add the library as a dependency in your module-level build.gradle file:
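A sketch of the dependency block (the beta 3 version matches the release discussed here):

```groovy
dependencies {
    implementation 'com.android.support.constraint:constraint-layout:1.1.0-beta3'
}
```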

Example Circular positioning

The following attributes can be used:

  • layout_constraintCircle: references another widget id.
  • layout_constraintCircleRadius: the distance to the other widget's center.
  • layout_constraintCircleAngle: the angle the widget should be at (in degrees, from 0 to 360).
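A minimal layout sketch (view ids, texts, and sizes are assumptions): the second button is constrained on a circle around the first, 120dp away at a 45-degree angle.

```xml
<android.support.constraint.ConstraintLayout
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="match_parent">

    <Button
        android:id="@+id/center_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Center"
        app:layout_constraintTop_toTopOf="parent"
        app:layout_constraintBottom_toBottomOf="parent"
        app:layout_constraintStart_toStartOf="parent"
        app:layout_constraintEnd_toEndOf="parent" />

    <!-- Positioned on a circle around @id/center_button -->
    <Button
        android:id="@+id/satellite_button"
        android:layout_width="wrap_content"
        android:layout_height="wrap_content"
        android:text="Satellite"
        app:layout_constraintCircle="@id/center_button"
        app:layout_constraintCircleRadius="120dp"
        app:layout_constraintCircleAngle="45" />
</android.support.constraint.ConstraintLayout>
```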


Related Post

New features in Constraint Layout 1.1.0


Autosizing TextViews Using Support Library 26.0

Material Design recommends using dynamic type text instead of smaller type sizes or truncating large text. Android makes this much easier to implement with the introduction of TextView autosizing. With Android O and Support Library 26.0, TextView gains a new property, autoSizeTextType, which allows you to optimize the text size when working with dynamic content.

Autosizing TextViews

Adding support library dependency

The Support Library 26.0 provides full support for the autosizing TextView feature on devices running Android versions prior to Android 8.0 (API level 26). The android.support.v4.widget package contains the TextViewCompat class to access features in a backward-compatible fashion.

Support Library 26 has now been moved to Google's Maven repository; first include that in your project-level build.gradle.

Add the support library to your app-level build.gradle.
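A sketch of both changes (the 26.0.1 version is an assumption; use the latest 26.x release):

```groovy
// Project-level build.gradle
allprojects {
    repositories {
        google()
    }
}

// App-level build.gradle
dependencies {
    implementation 'com.android.support:appcompat-v7:26.0.1'
}
```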

Enable Autosizing

To enable autosizing in XML, set autoSizeTextType to uniform. This scales the text uniformly on the horizontal and vertical axes, ignoring the text size attribute. When using the Support Library, make sure you use the app namespace. Note that you shouldn't use wrap_content for the layout width or layout height of a TextView set to autosize, since it may produce unexpected results. Instead, use match_parent or a fixed size.
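For example, a uniformly autosized TextView (note the app namespace and the fixed height):

```xml
<TextView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="200dp"
    app:autoSizeTextType="uniform" />
```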

Turn off autosizing by selecting none instead of uniform. You can also set up autosizing programmatically.

Provide an instance of the TextView widget and one of the text types, such as TextViewCompat.AUTO_SIZE_TEXT_TYPE_NONE or TextViewCompat.AUTO_SIZE_TEXT_TYPE_UNIFORM.

Customize TextView

If you want to customize your TextView further, it has some extra attributes: the autosize minimum and maximum text sizes and the step granularity. The TextView will scale uniformly in the range between the minimum and maximum size, in increments of the step granularity. If you don't set these properties, default values will be used.

To define a range of text sizes and a dimension in XML, use the app namespace and set the following attributes:
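For example (the concrete sizes are assumptions):

```xml
<TextView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:layout_width="match_parent"
    android:layout_height="200dp"
    app:autoSizeTextType="uniform"
    app:autoSizeMinTextSize="12sp"
    app:autoSizeMaxTextSize="100sp"
    app:autoSizeStepGranularity="2sp" />
```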


To define a range of text sizes and a dimension programmatically, call the setAutoSizeTextTypeUniformWithConfiguration(int autoSizeMinTextSize, int autoSizeMaxTextSize, int autoSizeStepGranularity, int unit) method. Provide the maximum value, the minimum value, the granularity value, and any TypedValue dimension unit.

Preset Sizes

To have more control over the final size (for example, if your app needs to comply with specific text size design guidelines), you can provide a list of sizes, and the TextView will use the largest one that fits.

Create an array with the sizes in your resources and then set the autoSizePresetSizes attribute in the XML.
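A sketch (the array name and sizes are assumptions):

```xml
<!-- res/values/arrays.xml -->
<resources>
    <array name="autosize_text_sizes">
        <item>10sp</item>
        <item>12sp</item>
        <item>20sp</item>
        <item>40sp</item>
        <item>100sp</item>
    </array>
</resources>
```

Then reference it from the TextView with app:autoSizePresetSizes="@array/autosize_text_sizes".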

To use preset sizes to set up the autosizing of a TextView programmatically through the Support Library, call the TextViewCompat.setAutoSizeTextTypeUniformWithPresetSizes(TextView textView, int[] presetSizes, int unit) method. Provide an instance of the TextView class, an array of sizes, and any TypedValue dimension unit for the size.



How to Create Instant app from Existing App

Android Instant Apps allows Android users to run your apps instantly, without installation. Users can get your flagship Android experience from any URL, including search, social media, messaging, and other deep links, without needing to install your app first. Android Instant Apps supports the latest Android devices from Android 6.0 through Android O. Google will be rolling it out to more devices soon, including expanding support to Android 5.0 (API level 21) devices shortly.

How does Instant app work?

When Google Play receives a request for a URL that matches an instant app, it sends the necessary code files to the Android device that sent the request. The device then runs the app.

How does Instant app work

Structure of the Instant App

  • Base feature module: The fundamental module of your instant app is the base feature module. All other feature modules must depend on the base feature module. The base feature module contains shared resources, such as activities, fragments, and layout files. When built into an instant app, this module builds a feature APK. When built into an installed app, the base feature module produces an AAR file.
  • Features: At a very basic level, apps have at least one feature or thing that they do: find a location on a map, send an email, or read the daily news as examples. Many apps provide multiple features.
  • Feature modules: To provide this on-demand downloading of features, you need to break up your app into smaller modules and refactor them into feature modules.
  • Feature APKs: Each feature APK is built from a feature module in your project and can be downloaded on demand by the user and launched as an instant app.


Each feature within the instant app should have at least one Activity that acts as the entry-point for that feature. An entry-point activity hosts the UI for the feature and defines the overall user flow. When users launch the feature on their device, the entry-point activity is what they see first. A feature can have more than one entry-point activity, but it only needs one.

Structure of Instant App


As you see in the figure, both “Feature 1” and “Feature 2” depend on the base feature module. In turn, both the instant and installed app modules depend on the feature 1 and feature 2 modules. All three feature modules shown in the figure (base feature, feature 1, and feature 2) have the com.android.feature plugin applied to their build configuration files.

Upgrade Your Existing App

Android Instant Apps functionality is an upgrade to your existing Android app, not a new, separate app. It’s the same Android APIs, the same project, and the same source code. Android Studio provides the tools you need to modularize your app so that users load only the portion of the instant app that they need when they need it.

Step 1: Develop a use case for your instant App

Focus on a core user experience that completes a specific action and optimizes a key business metric besides app installs. Then review the user experience guidelines for Android Instant Apps.

Step 2: Set up your development Environment

To develop an instant app, you need the following:

Install Instant App SDK

To build an Android Instant App, we need to install the SDK. Go to Tools > Android > SDK Manager. Click on the “SDK Tools” tab and install “Instant Apps Development SDK” by checking the box and hitting “Apply”.

Install Instant App SDK

Set up your device or emulator

You can develop instant apps on the following devices and emulators:

  • Devices: Nexus 5X, Nexus 6P, Pixel, Pixel XL, Galaxy S7 running Android 6.0 or higher.
  • Emulator: Nexus 5X image running Android 6.0 (API level 23), x86, with Google APIs (you cannot use x86_64 architectures).

Step 3: Moving existing code into a feature module

In this step, we will convert the existing application module into a shareable feature module. We will then create a minimal application module that depends on the newly formed feature. Note that this feature module will be included in the Instant App build targets later.

Convert the app module into a feature module called app-base

We start with renaming the module from 'app' to 'app-base':

create base module

Change Module type

Next, we change the module type to a feature module by changing the plugin from com.android.application to com.android.feature, and also remove applicationId (because this is no longer an application module) in the app-base/build.gradle file:
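A sketch of the relevant part of app-base/build.gradle after the change (the compileSdkVersion value is an assumption):

```groovy
// was: apply plugin: 'com.android.application'
apply plugin: 'com.android.feature'

android {
    compileSdkVersion 26
    // note: no applicationId here; feature modules must not define one
}
```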

Specify the base feature in app-base/build.gradle:
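The base feature flag goes inside the android block:

```groovy
// app-base/build.gradle
android {
    baseFeature true
}
```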

Synchronize gradle files and re-build the project with Build->Rebuild Project.

Step 4: Create the topekaapk module to build the APK file

Now that we have transformed our source code into a reusable library module, we can create a minimal application module that will create the APK. Choose File -> New Module.

Create New Module

Enter the application name and leave the suggested module name (topekaapk).

If your project uses Data Binding, you need to ensure that topekaapk/build.gradle includes the following in the android { ... } section.
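That is, a sketch:

```groovy
// build.gradle of the new apk module
android {
    dataBinding {
        enabled true
    }
}
```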

Replace the compile dependencies in topekaapk/build.gradle:
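A sketch of the replacement, depending on the feature module instead of compiling sources directly:

```groovy
dependencies {
    implementation project(':app-base')
}
```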

Switch to the “Project” view and remove unused files:

Instant app remove unused folder

Switch back to the “Android” view and remove the application element from topekaapk/src/main/AndroidManifest.xml. It should contain only this single manifest element.
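A sketch; the package name is a placeholder:

```xml
<manifest xmlns:android="http://schemas.android.com/apk/res/android"
    package="com.example.topeka" />
```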

Finally, sync Gradle files, then re-build and run the project. The application should behave exactly the same despite all of our changes.

Create Feature module

We have just moved the application’s core functionality into a shareable feature module, and we are now ready to start adding the Instant App modules.

Step 5: Creating the instant app APK

Instant Apps uses feature APKs to break up an app into smaller, feature-focused modules. One way to look at an instant app is as a collection of these feature APKs. When the user launches a URL, Instant Apps delivers only the feature APKs necessary to provide the functionality for that URL.

The app-base module builds a feature APK that encompasses the full functionality of our app. We will create an Instant App module that bundles our single feature APK. At the end, we are going to have our single-feature instant app!

single feature instant app

The Instant App module is merely a wrapper for all the feature modules in your project. It should not contain any code or resources.

Create an Instant App module

Select File -> New -> New Module
Create Instant App

Next, we need to update the instant app Gradle file to depend on the base feature module. At this point we also need to add buildToolsVersion to explicitly use 26.0.1, since there is an issue in the current Android preview which makes the default 26.0.0. Since we’ve installed 26.0.1 with the project, we don’t necessarily want to fetch 26.0.0.
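A sketch of the instant app module’s build.gradle under these assumptions:

```groovy
apply plugin: 'com.android.instantapp'

android {
    buildToolsVersion "26.0.1"
}

dependencies {
    implementation project(':app-base')
}
```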


The instant app module does not hold any code or resources. It contains only a build.gradle file.

Now do a clean rebuild: Build -> Rebuild project.

Step 6: Defining App Links

App links are required in instant apps because URLs are the only way a user can launch an instant app (instant apps are not installed). Each entry-point activity in an instant app needs to be addressable: it needs to correspond to a unique URL address. If the URL addresses for the features in an instant app share a domain, each feature needs to correspond to a different path within that domain.

In order to enable the Instant App runtime to call your application, we must establish the relationship between your website and the app. To associate links, we will use a new feature built into Android Studio called the “App Links Assistant”. Invoke this tool through the Android Studio “Tools” menu:

App Links Assistant

From the sidebar, tap on the “Open URL Mapping Editor” button:

add url intent filters

From the editor, tap the “+” button to add a new app link entry:

Create a new URL mapping with the following details:

Host: http://topeka.samples.androidinstantapps.com

Path: /signin (as a pathPattern)

Activity: activity.SigninActivity (app-base)

map url to activity

Repeat this dialog for the https variation, as well as the other links:

In the end, you should have four mappings, like this:

Url to Activity mapping

Sync gradle files if required and rebuild the project.

Again, since we have not used the Android Studio wizard, the run configuration for the instant app is not valid; we need to define the URL before we can launch the instant app from the IDE.

Click the Run configuration dropdown and choose “Edit Configurations…”

Select instantapp under Android App.

Edit Configuration

Replace the text ‘<< ERROR – NO URL SET>>’ with https://topeka.samples.androidinstantapps.com/signin

To run your Instant App, select instantapp from the Run configuration dropdown and click Run:

Run Instant APP


Now, you have created and deployed an Android Instant App. You took an existing Android application and restructured it to build both a full installed APK and an Instant APK that will be loaded when the user taps on the associated URLs.


You may need to separate the code into multiple features for various reasons. The most obvious one is feature separation: letting users download only the relevant portions of the app. You could, for example, create another feature module and move all UI code there (activities and related fragments), giving you two features (app-base and app-ui).







EmojiCompat Support Library

Have you seen ☐, the blank square character called “tofu”, shown when an app can’t display an emoji? New emoji are constantly being added to the Unicode standards, but since they are bundled as a font, your phone’s emoji are set in stone with each OS release. Well, they were.

With the EmojiCompat library (part of Support Library 26), your app can get backward-compatible emoji support on devices with API level 19+ and get rid of tofu.

How does EmojiCompat work?

EmojiCompat Process

For a given char sequence, EmojiCompat can identify the emoji, replace them with EmojiSpans, and then render the glyphs. On versions prior to API level 19, you’ll still get the tofu characters.

EmojiCompat builds on the new downloadable font mechanism to make sure you always have the latest emoji available.

Downloadable fonts configuration

The downloadable fonts configuration uses the Downloadable Fonts support library feature to download an emoji font. It also updates the necessary emoji metadata that the EmojiCompat support library needs to keep up with the latest versions of the Unicode specification.

Adding support library dependency

Support Library 26 has now been moved to Google's Maven repository; first include that in your project-level build.gradle.

Add the support library to your app-level build.gradle.
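A sketch of the dependency (26.0.1 is an assumption; use the latest 26.x release):

```groovy
dependencies {
    implementation 'com.android.support:support-emoji:26.0.1'
    // or support-emoji-appcompat / support-emoji-bundled, depending on your setup
}
```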

Adding certificates

When a font provider is not preinstalled or if you are using the support library, you must declare the certificates the font provider is signed with. The system uses the certificates to verify the font provider’s identity.

Create a string array with the certificate details.
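A sketch of res/values/font_certs.xml; the actual Base64 certificate strings are long and are omitted here, so copy them from your font provider’s documentation:

```xml
<resources>
    <string-array name="com_google_android_gms_fonts_certs">
        <item>@array/com_google_android_gms_fonts_certs_dev</item>
        <item>@array/com_google_android_gms_fonts_certs_prod</item>
    </string-array>
    <string-array name="com_google_android_gms_fonts_certs_dev">
        <item><!-- dev certificate string goes here --></item>
    </string-array>
    <string-array name="com_google_android_gms_fonts_certs_prod">
        <item><!-- prod certificate string goes here --></item>
    </string-array>
</resources>
```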

Initializing the downloadable font configuration

Before using EmojiCompat, the library needs a one-time asynchronous setup (in the Application class).

When using the downloadable font configuration, create your FontRequest and the FontRequestEmojiCompatConfig object.

Depending on the way you’re using it, initialization can take at least 150 ms, even up to a few seconds, so you might want to get notified about its state. For this, use the registerInitCallback method.

Use EmojiCompat widgets in layout XMLs. If you are using AppCompat, refer to the Using EmojiCompat widgets with AppCompat section.

You can preprocess a CharSequence using the process method. You can then reuse the result, instead of the initial string, in any widget that can render Spanned instances. So, for example, if you’re doing your own custom drawing, you can use this to display emoji text.


Download this project from GitHub

Fast Scrolling in RecyclerView Using Support Library 26

In ListView, you could have a fast scroller, which allowed you to drag a scrollbar to easily scroll wherever you wished, using the fastScrollEnabled attribute. With Support Library 26, you can also easily enable fast scrolling for RecyclerView.

Fast Scrolling RecyclerView

Add Dependency

Support Library 26 has now been moved to Google's Maven repository; first include that in your project-level build.gradle.

Add Support Library 26 to your app-level build.gradle.
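A sketch of the dependency (26.0.1 is an assumption):

```groovy
dependencies {
    implementation 'com.android.support:recyclerview-v7:26.0.1'
}
```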

Enable Fast Scrolling

If the fastScrollEnabled boolean flag for RecyclerView is enabled, then fastScrollHorizontalThumbDrawable, fastScrollHorizontalTrackDrawable, fastScrollVerticalThumbDrawable, and fastScrollVerticalTrackDrawable must be set.

  • fastScrollEnabled: boolean value to enable fast scrolling. Setting this to true requires that you provide the following four properties.
  • fastScrollHorizontalThumbDrawable: a StateListDrawable that will be used to draw the thumb which can be dragged along the horizontal axis.
  • fastScrollHorizontalTrackDrawable: a StateListDrawable that will be used to draw the line that represents the scrollbar on the horizontal axis.
  • fastScrollVerticalThumbDrawable: a StateListDrawable that will be used to draw the thumb which can be dragged along the vertical axis.
  • fastScrollVerticalTrackDrawable: a StateListDrawable that will be used to draw the line that represents the scrollbar on the vertical axis.


Now, create the StateListDrawable shape resources.
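A sketch of one such drawable (file name and colors are assumptions); you can create a similar file for the track with a lighter color:

```xml
<!-- res/drawable/fast_scroll_thumb.xml -->
<selector xmlns:android="http://schemas.android.com/apk/res/android">
    <item android:state_pressed="true">
        <shape android:shape="rectangle">
            <solid android:color="#E91E63" />
            <corners android:radius="4dp" />
        </shape>
    </item>
    <item>
        <shape android:shape="rectangle">
            <solid android:color="#9E9E9E" />
            <corners android:radius="4dp" />
        </shape>
    </item>
</selector>
```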






Now create the RecyclerView and enable fastScrollEnabled.
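A layout sketch; the id and drawable names are assumptions:

```xml
<android.support.v7.widget.RecyclerView
    xmlns:android="http://schemas.android.com/apk/res/android"
    xmlns:app="http://schemas.android.com/apk/res-auto"
    android:id="@+id/recycler_view"
    android:layout_width="match_parent"
    android:layout_height="match_parent"
    app:fastScrollEnabled="true"
    app:fastScrollVerticalThumbDrawable="@drawable/fast_scroll_thumb"
    app:fastScrollVerticalTrackDrawable="@drawable/fast_scroll_track"
    app:fastScrollHorizontalThumbDrawable="@drawable/fast_scroll_thumb"
    app:fastScrollHorizontalTrackDrawable="@drawable/fast_scroll_track" />
```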


Download this project from GitHub