Transfer Learning with TensorFlow Hub Module

Machine learning is lagging behind software engineering by 15-20 years, and that creates a really interesting opportunity. We can look at some of the things that happened in software engineering and think about the impact they might have on machine learning. Looking at software engineering, there's one idea that is absolutely fundamental: sharing code.

Shared Machine Learning


sharing code
Shared code repositories make us immediately more productive. We can search for code, download it, and use it, and that has really powerful effects. It actually changes the way we write code: we refactor our code, put it into libraries, and share those libraries, which makes people even more productive.

sharing machine learning code
It's the same dynamic that TensorFlow creates for machine learning with TensorFlow Hub. With TensorFlow Hub, you can build, share, and use pieces of machine learning.

Why Shared Machine Learning?


Elements of Machine Learning
Anyone who's done machine learning from scratch knows you need a lot: an algorithm, data, compute power, and expertise. If you're missing any of these, you're out of luck.

TensorFlow Hub lets you distill all of these things down into a reusable package called a module. Those modules go into TensorFlow Hub, where people can discover them and easily use them.

TensorFlow Hub
Notice that it's a module instead of a model. A model is too big a unit for sharing: you can use a model only if you have exactly the inputs it wants and you expect exactly the outputs it provides. If there are any little differences, you're out of luck. A module is a smaller piece: think of a model like a binary and a module like a library.

Modules Contain


Modules contain pre-trained weights and graphs. A module is actually a saved model under the hood: it packages up the algorithm in the form of a graph, packages up the weights, and can carry assets and initialization logic. Modules are composable, reusable, and retrainable.

The TensorFlow Hub library makes modules very easy to instantiate in your TensorFlow code, so you can compose them in interesting ways, which makes them very reusable: you can produce one and share it. They are also retrainable: once you patch a module into your program, you can backpropagate through it just like normal. This is really powerful, because if you have enough data you can customize a TensorFlow Hub module for your own application.

Image Retraining


Let's look at a specific example of using TensorFlow Hub for image retraining. Say we're going to make an app that can classify cats in images. The problem is that we only have a couple hundred examples, probably not enough to train an entire image classification model from scratch. What we can do instead is start from an existing general-purpose image classification model.

Image Retraining

Most high-performance models are trained on millions of examples and can easily classify thousands of categories. We want to reuse the architecture and the trained weights of such a model, minus its classification layers, so that we can add our own cat classifier on top. We then train it on our cat examples while keeping the reused weights fixed.

Image Modules

On TensorFlow Hub you can find a list of newly released, state-of-the-art, well-known image modules. Some of them include the classification layers, and some remove them and just provide a feature vector as output. We'll choose one of the feature vector modules, in this case NASNet, a state-of-the-art image module whose architecture was created by neural architecture search.

So you just paste the URL of a module, and TensorFlow Hub takes care of downloading the graph and all of its weights and importing it into your model in one line. You're then ready to use the module like any function.

Here we just provide a batch of inputs and get back feature vectors. We add a classification layer on top and output our predictions. But in that one line you get a huge amount of value; in this particular case, more than 62,000 hours of GPU time went into training the module.
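As a rough sketch of what that looks like in code (the module URL and the two-class head here are illustrative; check tfhub.dev for the exact NASNet feature-vector module you want):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Load a pre-trained image feature-vector module in one line.
module = hub.Module("https://tfhub.dev/google/imagenet/nasnet_large/feature_vector/1")

height, width = hub.get_expected_image_size(module)
images = tf.placeholder(tf.float32, [None, height, width, 3])

# The module maps a batch of images to feature vectors; its weights stay fixed by default.
features = module(images)

# Add our own classification layer on top (2 classes: cat / not cat).
logits = tf.layers.dense(features, units=2)
predictions = tf.nn.softmax(logits)
```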

Available Image Modules

NASNet is available as a large module as well as a mobile-sized module, and there's also the new progressive PNASNet. There are a number of new MobileNet modules for doing on-device image classification, as well as some industry-standard ones like Inception and ResNet; the complete list is available on tfhub.dev. All of these modules are pre-trained from TF-Slim checkpoints and can be used for classification as-is or as feature vector inputs to your own model.

Text Classification


We'd like to know whether a movie review expresses positive or negative sentiment. One of the great things about TensorFlow Hub is that it packages the graph together with the data.

Text embedding modules contain all of the pre-processing, including things like normalization and tokenization. We can use a pre-trained sentence embedding module to map a full sentence to an embedding vector. So if we want to classify some movie reviews, we just take one of those sentence embedding modules, add our own classification layer on top, and train it with our reviews, keeping the sentence module's weights fixed.

TensorFlow Hub has a number of different text modules. It has neural network language models trained on Google News for English, Japanese, German, and Spanish. It has a word2vec model trained on Wikipedia, as well as a new module called ELMo that models the characteristics of word use.

Universal Sentence Encoder

The Universal Sentence Encoder is a sentence-level embedding module. It is trained on a variety of tasks and it enables a variety of tasks; in other words, it's universal. Some of the things it's good for are semantic similarity, custom text classification, clustering, and semantic search, but the best part is that it requires very little training data.

We just paste that URL and TensorFlow Hub downloads the module and inserts it into your graph. This time we're using a text embedding column, so that we can feed it into an Estimator to do the classification part. As with the image retraining example, this module can be fine-tuned along with your model by setting trainable to true. You have to lower the learning rate so that you don't ruin the existing weights, but it's something worth exploring if you have enough data.
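Here is a minimal sketch of that Estimator setup (the feature key, hidden units, and the train_df DataFrame are made up for illustration):

```python
import tensorflow as tf
import tensorflow_hub as hub

# Sentence embeddings come from the Universal Sentence Encoder module.
review_column = hub.text_embedding_column(
    key="sentence",
    module_spec="https://tfhub.dev/google/universal-sentence-encoder/1",
    trainable=False)  # set True to fine-tune, and lower the learning rate

estimator = tf.estimator.DNNClassifier(
    feature_columns=[review_column],
    hidden_units=[256, 64],
    n_classes=2)

# train_df is a hypothetical pandas DataFrame with "sentence" and "sentiment" columns.
train_input_fn = tf.estimator.inputs.pandas_input_fn(
    train_df, train_df["sentiment"], shuffle=True)
estimator.train(input_fn=train_input_fn, steps=1000)
```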

Module URL


A module is a program, so make sure that what you're executing comes from a location you trust. In this case the module comes from tfhub.dev, which provides modules like NASNet and the Universal Sentence Encoder, and which is also a place where you can publish the modules you create. In this example Google is the publisher, universal-sentence-encoder is the name of the module, and the version number is 1. TensorFlow Hub considers modules to be immutable, so you don't have to worry about the weights changing between training sessions. All of the module URLs on tfhub.dev include a version number, and you can paste a URL into a browser to see the complete documentation for any module hosted on tfhub.dev.

Conclusion

TensorFlow Hub also has modules for domains beyond text classification and image retraining, such as a generative image module, and another module based on the Deep Local Features (DELF) network, which can identify the key points of landmark images. Both of those have great Colab notebooks.

 

References

TensorFlow Hub (TensorFlow Dev Summit 2018)

 

Eager Execution: A Pythonic Way of Using TensorFlow

When you enable eager execution in TensorFlow, operations are executed immediately as they are called from Python.

Eager execution is an imperative, object-oriented, Pythonic way of using TensorFlow. At its core, TensorFlow is a graph execution engine for machine learning.

Why Graph Execution?


A really good reason is that your computation is represented as a platform-independent graph. Once you have that graph, it's very easy to do automatic differentiation on it.

If you have a platform-independent, abstract representation of your computation, you can deploy it to pretty much anything you want: you can run it on a TPU or a GPU, put it on a phone or a Raspberry Pi, all sorts of cool deployment scenarios. It's really valuable to have this kind of platform-independent view.

Compilers work with dataflow graphs internally, and they know how to do all sorts of nice optimizations that rely on having a global view of the computation, like constant folding, common subexpression elimination, and data layout.

A lot of these optimizations are deep-learning specific: for example, choosing how to lay out your channels, height, and width so that your convolutions are faster.

Another key reason is that once you have a platform-independent representation of your computation, you can deploy and distribute it across hundreds of machines, or onto a TPU.

Why Eager Execution


If graphs are so good, what made the TensorFlow team think it's now a good idea to move beyond them and let you do eager execution?

You can just build up a trace as you go and then walk back the trace to compute gradients.

You can iterate a lot more quickly, and you can play with your model as you build it.

You can inspect it, poke and prod at it, and that lets you be more productive as you make all these changes.

You can run your model through debuggers and profilers and attach all sorts of analysis tools to really understand what it's doing and how.

If you're not forced to represent your computation separately from the host programming language, you can just use all the machinery of that language for control flow, data flow, and complicated data structures, which for some models is key to making them work at all.

Enable Eager Execution


You import TensorFlow and call tf.enable_eager_execution(). Once you do that, any TensorFlow operation you run executes immediately instead of building a graph that runs the operation later when executed. For example, TensorFlow immediately runs a matrix multiplication for you and gives you the result, which you can print, slice, dice, and generally do whatever you want with.
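For example, a minimal sketch for TensorFlow 1.x, where eager execution still has to be switched on explicitly:

```python
import tensorflow as tf

tf.enable_eager_execution()

x = tf.constant([[1.0, 2.0],
                 [3.0, 4.0]])

# Runs the matrix multiplication immediately; no session or graph needed.
m = tf.matmul(x, x)
print(m)          # tf.Tensor with value [[ 7. 10.] [15. 22.]]
print(m.numpy())  # a plain NumPy array you can slice and dice
```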

Control Flow

Because things happen immediately, you can have highly dynamic control flow that depends on the actual values of the computation you're executing, for example a simple if condition, or a while loop whose condition depends on values computed along the way, and this runs just fine on whatever device you have.
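Here is a toy sketch (not the example from the original talk) of value-dependent control flow under eager execution:

```python
import tensorflow as tf

tf.enable_eager_execution()

def collatz_steps(n):
    """Ordinary Python control flow driven by concrete tensor values."""
    x = tf.constant(n)
    steps = 0
    while x > 1:                        # the comparison yields a real boolean eagerly
        if int(x) % 2 == 0:
            x = x // 2
        else:
            x = 3 * x + 1
        steps += 1
    return steps

print(collatz_steps(6))  # 8
```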

TensorFlow also gives you a few symbols that make it easier to write code that works both when building graphs and when executing eagerly.

Gradients

Because different operations can occur during each call, TensorFlow records all forward operations to a tape, which is then played backward when computing gradients. After it has computed the gradients, it discards the tape.

The gradients_function call takes a Python function square() as an argument and returns a Python callable that computes the partial derivatives of square() with respect to its inputs. So, to get the derivative of square() at 10.0, invoke grad(10.), which returns 20.
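In TensorFlow 1.x this lives under tf.contrib.eager; a minimal sketch:

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

def square(x):
    return x * x

# grad is a callable returning d(square)/dx for each input.
grad = tfe.gradients_function(square)

print(square(10.))   # 100.0
print(grad(10.))     # a list containing a tensor with value 20.0
```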

Loops

Writing loops in eager is also easy and straightforward: you can just use a Python for loop to iterate over your datasets. Datasets work fine in eager, with the same high performance you get from the graph execution engine, and inside the loop you can make your predictions, compute your gradients, and apply them, all the things you're used to doing.
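A sketch of such a training loop; the toy model, data, and loss here are stand-ins, not from the original talk:

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

# Toy data: learn y = 3x + 2 with a single dense layer.
xs = tf.random_normal([256, 1])
ys = 3.0 * xs + 2.0

dataset = tf.data.Dataset.from_tensor_slices((xs, ys)).batch(32)
model = tf.keras.layers.Dense(1)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1)

for epoch in range(5):
    for x, y in tfe.Iterator(dataset):          # plain Python iteration over a Dataset
        with tf.GradientTape() as tape:
            loss = tf.reduce_mean(tf.square(model(x) - y))
        grads = tape.gradient(loss, model.variables)
        optimizer.apply_gradients(zip(grads, model.variables))
    print("epoch", epoch, "loss", loss.numpy())
```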

Debugging

When eager execution is enabled, you can take any model code and drop into the Python debugger anywhere you want. Once you're in the debugger you have its full power available: you can print the value of anything, change the value of any tensor, and run any operation on any tensor. This will hopefully empower you to really understand what's going on in your models and fix any problems you have. You can also take eager execution code and profile it with whatever profiling tool you prefer.

Variables are Objects


A big change when programming with eager compared to graphs is how variables work. Variables in TensorFlow are usually a complicated thing to think about, but when eager execution is enabled it's much simpler: a TensorFlow variable is just a Python object. You create one and you have it; you can write to it, change its value, and read its value, and when the last reference to it goes away you get your memory back, even if it's GPU memory. If you want to share variables you just reuse those objects; you don't worry about variable scopes or any other complicated structure. And because TensorFlow has this object-oriented approach to variables, it can revisit some of its APIs and rethink them in a way that's a little nicer.
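A minimal sketch, using tf.contrib.eager.Variable, the eager-friendly variable class in TensorFlow 1.x:

```python
import tensorflow as tf

tf.enable_eager_execution()
tfe = tf.contrib.eager

# A variable is just a Python object: create it, read it, assign to it.
w = tfe.Variable(10.0, name="w")
print(w.numpy())        # 10.0

w.assign(12.0)
w.assign_add(0.5)
print(w.numpy())        # 12.5

# Sharing is plain object reuse; no variable scopes required.
also_w = w
also_w.assign(1.0)
print(w.numpy())        # 1.0
```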

Object-oriented Saving


TensorFlow also gives you a way to do object-oriented saving of models. If you've looked at TensorFlow checkpoints, you know they depend on variable names, and variable names depend not just on the name you give but on all the other variables present in your graph. This can make it hard to save and load subsets of your model and really control what's in your checkpoint. TensorFlow is introducing a completely object-oriented, Python-object-based saving API: any variable reachable from your model gets saved with your model, and you can save or load any subset of your model.
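A sketch of the object-based saving API; it is exposed as tf.train.Checkpoint in later 1.x releases (tf.contrib.eager.Checkpoint in earlier ones):

```python
import tensorflow as tf

tf.enable_eager_execution()

model = tf.keras.Sequential([tf.keras.layers.Dense(10)])
model(tf.zeros([1, 5]))                 # call once so the variables get created
optimizer = tf.train.AdamOptimizer()

# Everything reachable from the checkpoint object (model variables,
# optimizer slots) is saved under stable, object-based names.
checkpoint = tf.train.Checkpoint(model=model, optimizer=optimizer)
save_path = checkpoint.save("/tmp/ckpt/demo")

# Later (or in another program), restore any subset you like.
restored = tf.train.Checkpoint(model=tf.keras.Sequential([tf.keras.layers.Dense(10)]))
restored.restore(save_path).expect_partial()
```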

Conclusion

With eager, TensorFlow brings you a lot of new APIs that make it easier to build and execute models, and these APIs are compatible with both eager execution and graph building.

 

 

Machine Intelligence Library for the Browser

Python has been one of the mainstream programming languages for machine learning for a while, and there are a lot of tools and libraries around it. But JavaScript and the browser have a lot to offer too. TensorFlow Playground is a great example: it's an in-browser visualization of a small neural network, and it shows in real time all the internals of the network as it trains. It was a lot of fun to make and has been a huge educational success.
TensorFlow Playground

Tensorflow Playground

TensorFlow Playground was built by Google. It is powered by a small neural network library, about 300 lines of vanilla JavaScript that Google wrote as a one-off. It doesn't scale; it's just simple for loops, and it wasn't engineered to be reusable.

Why Machine Learning in the Browser?


The browser is a unique platform: the things you build can be shared with anyone with just a link, and the people who open your app don't have to install any drivers or software.

The browser is highly interactive and so the user is going to be engaged with whatever you’re building.

Another big thing is that browsers have access to sensor data, and all of these sensors are exposed through standardized APIs that work across browsers. The data that comes from these sensors never has to leave the client; you don't have to upload anything to a server, which preserves privacy.

deeplearn.js


Google released deeplearn.js, a GPU-accelerated JavaScript library. It achieves the acceleration via WebGL, the browser standard for rendering 3D graphics, and it lets you run both inference and training entirely in the browser. The community took deeplearn.js, ported existing models from Python, and built interactive things with it.

One example is style transfer.

Style Transfer

Another project ported a character-level RNN and built a novel interface that lets you explore all the different possible endings of a sentence, all generated by the model in real time.

character RNN

Another example is a font generative model that lets users explore interesting dimensions in the embedding space and see how they relate to the boldness and slant of the font.

font generative model

There was also an educational example, Teachable Machine, a fun little game that taught people how computer vision models work by letting them interact directly with the webcam.

interact directly with the webcam

All of the above examples use deeplearn.js.

deeplearn.js has become TensorFlow.js

TensorFlow.js


Google is releasing a new ecosystem of libraries and tools for machine learning with JavaScript, called TensorFlow.js.

Use cases

One use case is writing models directly in the browser, which has huge educational implications; think of the Playground.

The second, major use case is that you can take a pre-existing model (a model pre-trained in Python) and import it into the browser to do inference.

The third, related use case is that the same model you import for inference can be retrained, potentially with private data that comes from the browser's sensors, entirely in the browser itself.

Architecture


TensorFlow.js architecture

The browser utilizes WebGL to do fast linear algebra. On top of it, TensorFlow.js has two sets of APIs: the Ops API (eager), which used to be deeplearn.js and is powered by an automatic differentiation library built analogously to eager mode, and, on top of that, a high-level API that lets you use best practices and high-level building blocks to write models.

Google also released tools that can take an existing TensorFlow SavedModel or Keras model and automatically port it for execution in the browser.

Add selection support to RecyclerView: recyclerview-selection:28.0.0

recyclerview-selection provides item selection support for RecyclerView. It supports creating, modifying, inspecting, and monitoring the selection of items in a RecyclerView list, and it enables users to select items using touch or mouse input.

To add selection support to a RecyclerView instance, follow these steps:

1.Add Dependency


Add the following dependency to your app's build.gradle file. 28.0.0-alpha1 is a pre-release version that supports the Android P developer preview.

2.Implement ItemKeyProvider


Developers must decide on the key type used to identify selected items. There are three key types: Parcelable, String, and Long.

3.Implement ItemDetails


An ItemDetails implementation provides the selection library with access to information about a specific RecyclerView item. This class is a key component in controlling the behavior of the selection library in the context of a specific activity.

4.Implement ItemDetailsLookup


ItemDetailsLookup enables the selection library to access information about RecyclerView items given a MotionEvent. It is effectively a factory for ItemDetails instances that are backed by a RecyclerView.ViewHolder instance.

The selection library calls getItemDetails() when it needs access to information about the item associated with a MotionEvent. Your implementation must negotiate ViewHolder lookup with the corresponding RecyclerView instance, and the subsequent conversion of the ViewHolder instance to an ItemDetailsLookup.ItemDetails instance.

5.Reflect Selected State


When the user selects an item, the library records that in SelectionTracker and then notifies RecyclerView that the state of the item has changed. The item must then be updated to reflect the new selection status; without this, the user will not see that the item has been selected.

Update the styling of the view to represent the activated status with a color state list.

item_list.xml

The selection library does not provide a default visual decoration for the selected items. You must provide this when you implement onBindViewHolder().

6.Add Contextual Actions


ActionMode is used to provide alternative interaction modes that replace parts of the normal UI until finished. Examples of action modes include text selection and contextual actions.

7.Build SelectionTracker


In order to build a SelectionTracker instance, your app must supply the same Adapter that you used to initialize RecyclerView to SelectionTracker.Builder.

Register a SelectionTracker.SelectionObserver to be notified when selection changes. When a selection is first created, start ActionMode to represent this to the user, and provide selection specific actions.

8.Activity lifecycle events


In order to preserve state, you must handle Activity lifecycle events: your app must call the selection tracker's onSaveInstanceState() and onRestoreInstanceState() methods from the activity's onSaveInstanceState() and onRestoreInstanceState() methods, respectively.

 

Download this Project from GitHub

 

 

Detect user’s activity using Activity Recognition Transition API

Understanding what people are doing can help your app better adapt to your users' needs. The Transition API can tell whether your users are walking, running, cycling, or in a vehicle, and you can use it to improve your app experience. It combines various signals, like location and sensor data, to determine when the user has started or ended an activity such as walking or driving.

Since November 2017, the Transition API has been used to power the Driving Do-Not-Disturb feature launched on the Pixel 2. Now the Activity Recognition Transition API is available to all Android developers. The API takes care of disambiguating stillness: whether the user parked their car and ended a drive, or simply stopped at a traffic light and will continue on.

Use Case


If a user starts running, you'll get a callback indicating that the most probable activity is running. If the confidence is 75 or higher, you can act on it and show a notification asking if they want to start tracking their run.

When an app is in use and the user starts driving, you can offer a pop-up dialog to switch to car mode.

You can also use it for historical cases such as showing your users when they parked their car or how long they spent commuting to work each day.

A mileage-tracking app could start tracking miles when a user starts driving, or a messaging app could mute all conversations until the user stops driving.

1.Add Dependencies


To declare a dependency on the API, add a reference to the Google Maven repository and add an implementation entry for com.google.android.gms:play-services-location:12.0.0 to the dependencies section of your app's build.gradle file.

2.Add Permission


Specify the com.google.android.gms.permission.ACTIVITY_RECOGNITION permission in the app manifest.

3.Register for activity transition updates


To start receiving notifications about activity transitions, you must provide an ActivityTransitionRequest object that specifies the types of activity and transition, and a PendingIntent callback where your app receives notifications.

ActivityTransitionRequest

You must create a list of ActivityTransition objects, which represent the transitions that you want to receive notifications about. An ActivityTransition object includes:

1. An activity type: IN_VEHICLE, ON_BICYCLE, RUNNING, STILL, or WALKING.

2. A transition type of ACTIVITY_TRANSITION_ENTER or ACTIVITY_TRANSITION_EXIT. For more information, refer to the ActivityTransition class.

The following code shows how to create a list of ActivityTransition objects:

PendingIntent

Reducing memory consumption is important to keep the phone running smoothly. A common use case for the API is an application that wants to monitor activity in the background and perform an action when a specific activity is detected. Typically, when developing for Android, you shouldn't use a persistently running service to handle this, because it consumes a lot of resources. This API removes that burden by delivering the data via an intent: the application specifies a PendingIntent callback (typically an IntentService), which will be called with an intent when activities are detected, so there's no need to keep a service always running in the background.

After successfully registering for activity transition updates, your app receives notifications in the registered PendingIntent.

4.Receive activity transition events


When the requested activity transition occurs, your app receives an Intent callback. The events are delivered in chronological order; for example, if an app requests the IN_VEHICLE activity type for the ACTIVITY_TRANSITION_ENTER and ACTIVITY_TRANSITION_EXIT transitions, it receives an ActivityTransitionEvent object when the user starts driving, and another one when the user transitions to any other activity.

Your application will receive callbacks with extras containing an activity recognition result, which contains a list of activities that the user may be doing at that particular time.

5.Deregister for activity transition updates


Deregister for activity transition updates by calling the removeActivityTransitionUpdates() method of the ActivityRecognitionClient and passing your PendingIntent object as a parameter.

 

Download this project from GitHub

 

 

What’s new in MQTT 5

MQTT 5 is pretty new, and a lot of things have changed. We will walk through the most important changes in this post.

Most of you are probably already familiar with MQTT, but for those who are not quite sure what MQTT is, what it's about, and what its main principles are, let's start with a quick refresher; this also helps in understanding some of the changes in MQTT 5.

MQTT Overview


MQTT is an IoT messaging protocol that has gained massive traction in the last few years. It's mostly used for device-to-cloud and cloud-to-device communication, and sometimes for direct device-to-device communication. It has a lot of features that are good for mobile network use cases, where you have devices out in the field, like cars, gateways, or other physical hardware, that need to connect via mobile networks to backend services.

QoS

MQTT has three different quality of service levels, which you can choose at the application level. You can send messages fire-and-forget, or you can make sure that a message arrives at least once or exactly once at the backend.

Retained Messages

One nice feature that is unique to the protocol is retained messages, which essentially allow you to save messages on your MQTT message broker.

Persistent offline sessions

MQTT has a feature that allows a client to come back after being offline: the broker remembers the client and also sends out the messages the client missed in the meantime. In general, from an application perspective, you can program with MQTT as if you never lost the connection.

Binary Protocol

MQTT is a binary protocol with minimal overhead, so it's really tiny and saves a lot of bandwidth.

MQTT Use Case


MQTT is a very good protocol for constrained devices. If you don't have much computing power or memory, typically physical hardware with only a few megabytes of memory, then MQTT is a very good protocol choice.

Push Communication

A typical use case for MQTT is push communication: reliable communication over unreliable networks, which mainly means mobile networks.

Low Bandwidth and High Latency

It also scales extremely well on the backend: some MQTT brokers scale to more than 10 million concurrently connected devices. Especially on a mobile network, you often have very low bandwidth and high latency, and MQTT makes sure you get the best out of that: it doesn't waste any bandwidth and copes well with high latency.

Publish/Subscribe

publisher subscriber protocol

MQTT uses a publish/subscribe mechanism. There is an MQTT broker in the middle, which is responsible for distributing data across clients.

We have some producing clients, like a temperature sensor here. It publishes data to the MQTT broker, and the broker distributes the data to other devices, which could be a laptop, a mobile device, a car, really anything that can connect to the internet.

These devices work via a subscription mechanism: the laptop or mobile device subscribes to the MQTT broker, saying "I'm interested in this particular data", and the broker makes sure that only the data a client is interested in gets forwarded to it. So you have decoupling here. What's also very important to know about MQTT is that there is a standing TCP connection: all of the devices that talk to the MQTT broker are connected all the time, which is different from protocols like HTTP that typically close the connection after each exchange.

MQTT 5 Overview


MQTT 5 is the successor of MQTT 3.1.1 and is not backward compatible with the old specification. It was officially released in January 2018. It also contains a lot of clarifications over the MQTT 3.1.1 specification, to make sure that implementers get everything right.

MQTT 5 Goals

The goals of MQTT 5 are enhanced scalability and improved error reporting. Error reporting was one of the most wished-for features by users, because MQTT 3.1.1 has some blind spots when it comes to error handling. MQTT 5 did a lot of work here and also formalized common patterns like request/response.

Another of the most wished-for features is extensibility of the protocol, because MQTT 3 didn't have headers like you know from HTTP; MQTT 3.1.1 wasn't that flexible, and this has changed. Performance improvements for small clients are also a big part.

MQTT is very easy to program on the client side, but it's important to note that implementing an MQTT broker is not as easy as it sounds.

MQTT 5 has some enhancements for scalability. MQTT 3 brokers scale up to 10 million devices, which is already a lot, but we expect that MQTT 5 will allow us to scale even beyond the magic 10 million concurrent connections.

Foundational Changes


Before digging into the specific features, let's talk about the foundational changes of the MQTT protocol.

Negative acknowledgments

The first foundational change is negative acknowledgments. I already mentioned that error reporting in MQTT 3.1.1 wasn't optimal; there was no way for the broker to tell a client "hey, you're doing something wrong". So negative acknowledgments were defined: a broker can notify the client that something went wrong and also what went wrong. For many use cases this is very important, especially in production where it's hard to debug what happened. The client can react when something weird happens, and the client can likewise send negative acknowledgments to the broker if something bad happens.

Return Code

Another foundational change is return codes for unsupported features. When a client connects to an MQTT broker that does not allow or implement all MQTT features, which is pretty common on typical cloud platforms like AWS, it's now possible to tell the client that a feature is not available, or not available for that particular client, for example if you have permission controls and do not allow certain clients to subscribe or publish.

MQTT 5 can restrict the retained message feature, so we can turn it off for a specific client if we want.

We can define a maximum quality of service level for the client.

We can restrict the use of wildcard subscriptions.

We can also restrict subscription identifiers and shared subscriptions.

A client can be notified of the maximum message size the broker supports, and of the server's keep-alive value, which is also very important because it changes with MQTT 5.

The broker can tell the client how often it should send ping (heartbeat) messages, which the broker uses to recognize when a client is offline.

Another notable change is that MQTT 5 no longer allows retrying QoS 1 and 2 messages on an open connection. This may come as a surprise to some people, because many projects rely on retries that are not allowed with MQTT 5, so this is a common pitfall when upgrading.

Passwords can now be sent without a username, which is interesting for sending tokens. Traditionally, only clients were allowed to send DISCONNECT packets: the broker had no way to disconnect a client gracefully in MQTT 3. The client connects, and when it decides it wants to disconnect gracefully, it sends a DISCONNECT packet to the broker. Now the broker is also allowed to send a DISCONNECT packet back to the client and state the reason why it was disconnected. This is new, and it is used heavily for negative acknowledgments.

New Features in MQTT 5


Let's talk about some of the new features in MQTT 5. We can't dig into all of them, because there are more than 25 new features and that would take a lot of time, but I want to highlight some of them in more detail.

Session & Message Expiry

One of the most interesting features of MQTT 5 is session and message expiry.
MQTT 3.1.1 allows two kinds of sessions. There is the clean session, which ends when a client disconnects, so the broker does not need to remember anything about the client; and there is the persistent session, which the broker saves and persists while a client is offline, so that when it comes back the client can just resume the session. Essentially, that means we have state on the broker side.

The problem is that some clients connect to the broker, disconnect, and never come back. The broker can have a very hard time, because it needs to persist the data until the client reconnects; when a client never reconnects, we have a problem. Most brokers, like Mosquitto, allow setting a time-to-live for a session at an administrative level, so the broker can clean up after that time.

In MQTT 5 this feature moved into the specification itself, and now all brokers must implement such a cleanup, because it's needed on the broker side; otherwise it would open the door to denial-of-service problems. So the client can say: when I'm connecting to this broker, I want a session expiry interval of, let's say, 100 seconds or 10 minutes or one day, and the broker is allowed to delete the session after that time.

A related problem: let's assume we have a car that is offline for one week; when it comes back, it doesn't really need to get all the messages it missed. Perhaps it needs some messages and not others. Now the sending client can attach a message expiry to a published message and tell the broker "if this message's time-to-live is over, do not send out the message anymore". This allows the broker to clean up messages, especially when it queues messages, and it also saves a lot of bandwidth.
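As a sketch of both properties using the Python paho-mqtt client (version 1.5 or newer, which supports MQTT 5); the broker host, topics, and intervals are made up:

```python
import paho.mqtt.client as mqtt
from paho.mqtt.properties import Properties
from paho.mqtt.packettypes import PacketTypes

client = mqtt.Client(client_id="car-42", protocol=mqtt.MQTTv5)

# Ask the broker to keep our session for 10 minutes after we disconnect.
connect_props = Properties(PacketTypes.CONNECT)
connect_props.SessionExpiryInterval = 600
client.connect("broker.example.com", 1883, keepalive=60,
               clean_start=False, properties=connect_props)

# Messages older than 5 minutes should not be delivered to late subscribers.
publish_props = Properties(PacketTypes.PUBLISH)
publish_props.MessageExpiryInterval = 300
client.publish("car/42/telemetry", b'{"speed": 88}', qos=1,
               properties=publish_props)
```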

Negative Acknowledgment

Return codes are now defined for all acknowledgment packets:

CONNACK, PUBACK, PUBREC, PUBREL, PUBCOMP, SUBACK, UNSUBACK, DISCONNECT, and AUTH.

So now the broker and the client can say "sorry, we have a problem here, and this is the problem". There are human-readable reason strings defined, which the broker and the client can send if they want (they don't have to), in addition to a return code. These return codes are specified; most of them are error codes, and both client and broker must be able to react to them. So if you are a client and the broker disconnects you because you were, say, idle for too long, you have the possibility on the client side to adjust the interval for sending heartbeats.

Payload format indication & Content Type

We get two new headers in MQTT. The first is a content type, which is similar to MIME types ("this is a JPEG picture", "this is text") and can be sent as meta information to indicate what kind of content a message carries. The second is a payload format indicator, which is mostly interesting for debugging purposes.

Request and Response

It is possible to send hints for request/response interactions: a publisher can send a message together with metadata, such as response information, so that the receiver of the message knows which topic to send its answer to.

Shared Subscription

Shared subscriptions are a very interesting concept for scaling out backend subscribers.
share subscription mqtt 5
Let's assume we have a publisher that sends a lot of messages to a high-frequency topic. The problem that can arise is that the backend client cannot process data this fast, for example because it writes to a database that is slow at the moment. So how can we scale this? With MQTT 3 you cannot scale this without using shared subscriptions.

MQTT 5 standardizes shared subscriptions: a logical, shared topic. A client can decide "I want to share my subscription with other clients", and then the broker sends one message to one client and the next message to another. If you have a stream of, say, 1,000 messages/second on a topic and a shared subscription with two clients, each of these clients gets 500 messages/second; if you scale out to three clients, each of these backend instances gets roughly 330 messages/second, and so on. This is a way to elastically scale the backend subscribers up and down for topics that have a lot of traffic.
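With most client libraries a shared subscription is just a special topic filter. A sketch with the Python paho-mqtt client; the broker host, group name, and topic are made up:

```python
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Each message on "telemetry/#" is delivered to only ONE member
    # of the "workers" share group, so the load is spread across instances.
    print("processing", msg.topic, msg.payload)

worker = mqtt.Client(client_id="worker-1", protocol=mqtt.MQTTv5)
worker.on_message = on_message
worker.connect("broker.example.com", 1883)

# Start several processes with this same subscription to scale out.
worker.subscribe("$share/workers/telemetry/#", qos=1)
worker.loop_forever()
```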

User Properties

MQTT 5 has headers, and it's possible to use user-defined properties, so, just like with HTTP, you can add any data you want to your MQTT packet. Let's say your publisher has some application-specific identifiers that you want to inspect in the backend without looking into the whole payload of the packet: the backend application can just read the header without decoding the whole message. You can add an unlimited number of user properties, which is a bit controversial.

Other Features

Topic Alias: the client can choose to shrink topics. If you have a very long topic name that you publish to repeatedly, you save a lot of bandwidth by using a numeric alias instead.

MQTT 5 has flow control, so a client can decide how many in-flight messages it can actually receive.

It has a maximum message size indication and an enhanced authentication flow.

We also get will delay: you can tell the broker to wait, say, 5 seconds before sending out the last will and testament message.

The broker can tell the client what keep-alive value it expects, and it can also override the client identifier.

Broker

Unfortunately, I have not found any MQTT 5 broker yet, except the Eclipse Paho test broker.

 

Related Post

Android MQTT Client

 

Calling REST API from a Flutter App

Building Flutter applications whose content is entirely static can be limiting. Instead, you should consider building applications that can fetch content from a web server. That might sound hard, but with web servers exposing their resources through REST APIs, it's actually quite easy.

In this tutorial, I’m going to show you how to use the classes and methods available in the Flutter SDK to connect to remote web servers and interact with them using their REST APIs.

Calling REST API from a Flutter App

Creating an HTTP Connection


To create an HTTP client, we need to add an import.

The following code snippet shows you how to setup a connection with the GitHub API endpoint:

Note that the HTTP APIs use Dart Futures in their return values. Flutter recommends using the API calls with the async/await syntax.

 

Responsive UI


Network calls are slow. It doesn't matter where you are; sometimes the network will just be slow, or your server might be slow, and you don't want to show a blank white page, so you show a progress bar. The way you usually do that in Flutter is with a showLoading flag: in your build function, if showLoading is set you show a spinner animation, otherwise you show your entire widget tree. I didn't want to write that over and over again, so I went looking for a solution and came across a great library called async_loader. The following code snippet shows how to use it.

Usage

To use this plugin, add async_loader as a dependency in your pubspec.yaml file.

Create instance

You need to create an AsyncLoader in your build function. It has a few parameters: initState, renderLoad, renderError, and renderSuccess. initState is basically: as the widget is being created, what data do you want to load? renderLoad is: while it's loading, what do you want to show? In renderLoad I show a progress bar. renderError is: if something went badly wrong, what do you want to do? Here, for the sake of the demo, I just return a Text widget saying "error loading conversation", so you see a boring old error message on the page; what you typically want is some nice little graphic that says "oh, something went wrong, please press back". Finally, when all your data is loaded, renderSuccess is called with the data you returned from initState, and you can take that data and render your entire UI.

JSON Parsing


In Android you can use Gson for JSON parsing. I haven't found anything like it for Flutter, mainly because Flutter doesn't have reflection. I'm a lazy developer; I didn't want to write out all these individual data types over and over again and hand-build fromJson and toMap methods. Fortunately, there is a great library called json_serializable.

Setting up json_serializable

To include json_serializable, you need one regular dependency and two dev dependencies. Dev dependencies are dependencies that are not included in your app source code.

Check the latest versions.

pubspec.yaml

Click “Packages Get” in your editor to make these new dependencies available in your project.

Convert the User class to a json_serializable class.

user.dart

When creating json_serializable classes for the first time, you will get errors, because the generated code for the model class does not exist yet. To resolve this, run the code generator that generates the serialization boilerplate.

By running flutter packages pub run build_runner build (at the command prompt) in the project root, you can generate the JSON serialization code for your models whenever needed.

Consuming json_serializable models

To deserialize a JSON string the json_serializable way, we don't actually have to make any changes to our previous code.

Same goes for serialization. The calling API is the same as before.

 

Download Project from GitHub

 

Guide to Android Architecture Components

Something that early, or even intermediate, Android developers often find is that they end up putting a bunch of code in their activity class, which ends up very bloated. What this post suggests is how you might divide that code out a little more intelligently.

What are Architecture Components?

Architecture Components are a growing set of libraries for building Android apps, and the whole point of these libraries is to simplify things that have traditionally been challenging in Android development.

It started with two libraries: a library for persistence on Android and a library for lifecycles, making lifecycle management easier.

The first is Room and the second is the Lifecycle library, and both reached 1.0 stable in November, so they are production ready and you can use them in your app safely. These libraries were built so that they can be used alone or work together just fine. If you're only looking for a solution for an SQLite database, you can easily use Room alone without the other libraries, but they're also designed to work really well together. There is also a third library, the Paging library, whose purpose is to simplify lazily loading large data sets.

You can create an app with a reactive UI, which means the UI automatically stays in sync with the database; that is one of the powerful things you can do with the connection between Room and the Lifecycle library.

Design Classes


One of the big principles of the guide is encouraging a separation of responsibilities between your classes, so I'm going to go through each of the classes and talk about what its responsibility is.

android architecture components

The first class is the UI controller, an activity or fragment. The responsibility of the UI controller is to display data; basically, it tells views what to draw to the screen. Another responsibility is capturing user interactions: your activity is the one that knows when a user clicks a button.

But as soon as it gets that button click, instead of processing it itself, it passes that information on to a new class called the ViewModel. The ViewModel in turn holds another new class called LiveData. The responsibility of the ViewModel class is to hold all of the UI data needed by your UI controller.

The ViewModel class then communicates with a class known as the Repository. Creating a Repository class is a convention and a suggested best practice, but it's not part of the library; it's not a new Architecture Component. Simply put, the Repository class contains the API through which you get access to all of the app's data.

Room manages all of the local persistence, the SQLite database. Room contains a bunch of different classes that work together, including entities, DAOs, and the database class, and it's built on top of SQLite.

Avoiding Strong References

Another core principle, besides the separation of responsibilities, is that classes only reference the class directly below them. What I mean is that your UI controller, your activity, only holds a reference to the ViewModel. The ViewModel, though, does not hold a reference back up to the activity, and the activity doesn't reference the Repository or the database either. This strict rule of avoiding strong references between distant parts of your architecture, like from the activity to the database, keeps things modular and prevents your app from becoming a tangle of dependencies. It means that if at a later point you want to rewrite a portion of your app, say replace the Room database with something else, you can easily do that and only need to change references in the Repository rather than across your entire app. It also makes things more testable.

Observer Pattern

In some cases you'll want to communicate information back up from a lower level of this diagram; for example, if some data changes in your database, you'll want to communicate that back up to the UI. But the database doesn't know about the activity. So how do we do that? You use the observer pattern, and more specifically LiveData. In the past you might have used callbacks, but LiveData replaces that.

Room


room architecture components

Room is an SQLite object-mapping library that takes care of local data persistence for an app. Room has a lot of advantages over the framework classes: compared to using SQLiteOpenHelper, with Room you write a lot less boilerplate, partly because it maps database rows to objects and vice versa, so you don't have to use intermediate values like Cursors or ContentValues; you can get rid of all of those from your app. Room also does the handy thing of validating your SQLite queries at compile time: it won't let you compile invalid SQL, and it gives a helpful error message so that if you wrote your SQL incorrectly you'll know how to fix it. Finally, Room works well together with LiveData and RxJava for observability.

Repository


Repository class

The Repository class functions as a clean API for handling all data operations and as a mediator between different data sources. Imagine an example where you get data both from your network server and from a local data cache: the logic about whether to grab new data from the server or use the locally cached data, and when to do additional fetches, all of that complexity lives inside the Repository class. This means that when your UI code needs some data, it doesn't have to worry about whether it should come from the network, the local cache, or whatever else you might have; the Repository hides the complexity of where the data actually comes from, from the rest of your app.

 

Lifecycle Library Classes


There are a couple of core classes and concepts that you won't use directly but should be generally aware of. First, the Lifecycle library has an object that represents an Android lifecycle, simply called Lifecycle. Similarly, a LifecycleOwner is an object that has a lifecycle, for example an activity or a fragment. Finally, there is the concept of lifecycle observation: LifecycleObserver is an interface for observing lifecycles, so if you've ever had listeners or services that require you to write cleanup code in onStop, those listeners and services could use lifecycle observation to do that cleanup for you.

ViewModel


ViewModels provide data for the UI while surviving configuration changes; common examples of configuration changes are rotating your device or changing the language. Because ViewModels survive configuration changes, they can replace AsyncTask loaders, but more importantly they encourage this separation of responsibilities.

The suggestion is to keep all of your UI data in the ViewModel class and leave the activity class responsible only for actually drawing that UI data.
ViewModel

As you can see here, my activity data is no longer in my activity; I've moved it over to the ViewModel, and my activity gets the data it needs to display itself by communicating with the ViewModel. The ViewModel survives rotation and other configuration changes: if a configuration change happens, the activity dies and is recreated as usual, but importantly the UI data did not go with it, and all the newly recreated activity has to do is re-establish a connection with the same ViewModel, which kept on living through the change.

onSaveInstanceState

ViewModels survive configuration changes, but they don't survive the activity being finished. They're not a permanent thing that stays around forever: they're tied to an activity's lifecycle, not to the app's lifecycle. So when the activity finishes, for example when the user presses the back button or goes to the overview screen and swipes your activity away, the ViewModel is destroyed as well; a ViewModel does not replace a database or persisting your data. It's also important to realize that a ViewModel is not a replacement for onSaveInstanceState, even though they seem similar. If your device is under memory pressure and your app is in the background, it's possible that the OS will just kill your app. onSaveInstanceState is useful for surviving this total app destruction; ViewModels don't survive it, they also get destroyed. So in those cases you still need onSaveInstanceState to get your UI data and UI state back to what it was before.

LiveData


LiveData is a data holder class that is lifecycle-aware and allows its data to be observed.

Observer pattern
With the observer pattern you have an object called the subject, and that subject has a list of associated objects called observers, which register with the subject and say "hey, I'm interested in you, please tell me when you change". Then, when something causes the subject's state to change, the subject knows about its list of observers and notifies all of them, usually by calling a method inside each observer object.

LiveData follows this pattern almost exactly: in our case the LiveData is the subject, and you create Observer objects, which are the observers.

The other property of LiveData is that it is lifecycle-aware: LiveData uses lifecycle observation, observing a lifecycle to make sure it only notifies observers that are in a started or resumed state.

LiveData also knows how to clean up observers when they're no longer needed: if the activity associated with an observer is destroyed, the observer cleans itself up, which means you're never in a situation where your LiveData is updating a destroyed activity, and you don't get memory leaks.

LiveData Benefits

  • Reactive UI that updates automatically when data changes
  • Only updates the UI in the started/resumed state
  • LiveData cleans up after itself automatically
  • Allows the database to communicate with the UI without knowing about it (testability)

 

 

 Related Post

Architecture Components:Paging Library

Room Persistence Library

Lifecycle Aware Components

ViewModel

LiveData

 

 

Feeding your own data set into the CNN model in TensorFlow

I'm assuming you already know a fair bit about neural networks and convolutional neural networks, as I won't go into too much detail about their background and how they work. I am using TensorFlow as the machine learning framework; in case you are not familiar with TensorFlow, make sure to check out my recent post on getting started with TensorFlow.

Dataset


The Kaggle Dogs vs. Cats dataset consists of 25,000 color images of dogs and cats that we are supposed to use for training. Each image has a different size, with pixel intensities represented as [0, 255] integer values in RGB color space.

TFRecords

Before you run the training script for the first time, you will need to convert the data to the native TFRecord format. The TFRecord format consists of a set of sharded files where each entry is a serialized tf.Example proto. Each tf.Example proto contains the image (JPEG encoded) as well as metadata such as the label, height, width, and number of channels. Google provides a single script for converting image data to TFRecord format.

When the script finishes you will find 2 shards for the training and validation files in the DATA_DIR. The files will match the patterns train-?????-of-00002 and validation-?????-of-00002, respectively.

Convolution neural network architecture


A ConvNet is a sequence of layers, and every layer of a ConvNet transforms one volume of activations to another through a differentiable function. We use three main types of layers to build ConvNet architectures: the Convolutional Layer, the Pooling Layer, and the Fully-Connected Layer. We will stack these layers to form a full ConvNet architecture.

Building the CNN for Image Classifier

You need to know the building blocks to build a full convolutional neural network. Let's look at an example: say you're inputting a 252x252x3 RGB image and trying to recognize either a dog or a cat. Let's build a neural network to do this.
What we're going to use in this post is inspired by, and actually quite similar to, one of the classic neural networks, LeNet-5. What's shown here isn't exactly LeNet-5, but many of the parameter choices were inspired by it.
convolution neural network architecture
For the 252x252x3 input image, let's say the first layer uses 32 5x5 filters with a stride of 1 and same padding, so the output of this layer has the same height and width as the input: 252x252x32. Call this layer conv1. Next, let's apply a pooling layer; I'm going to apply max pooling with a 2x2 filter and strides=2. This reduces the height and width of the representation by a factor of 2, so 252x252x32 becomes 126x126x32; the number of channels stays the same. We'll call this max pooling 1.
Next, given the 126x126x32 volume, let's apply another convolution layer with a 5x5 filter size, stride 1, and 64 filters this time, so you end up with a 126x126x64 volume, called conv2. Then let's do max pooling again with a 2x2 filter and strides=2, which halves the height and width of the 126x126x64 volume.
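A sketch of those layers with the TensorFlow 1.x tf.layers API; input_layer is assumed to be the [batch_size, 252, 252, 3] image tensor:

```python
import tensorflow as tf

# input_layer: [batch_size, 252, 252, 3]
conv1 = tf.layers.conv2d(inputs=input_layer, filters=32, kernel_size=[5, 5],
                         padding="same", activation=tf.nn.relu)              # 252x252x32
pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2)   # 126x126x32

conv2 = tf.layers.conv2d(inputs=pool1, filters=64, kernel_size=[5, 5],
                         padding="same", activation=tf.nn.relu)              # 126x126x64
pool2 = tf.layers.max_pooling2d(inputs=conv2, pool_size=[2, 2], strides=2)   # 63x63x64
```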

Dense Layer

Next, we want to add a dense layer (with 1,024 neurons and ReLU activation) to our CNN to perform classification on the features extracted by the convolution/pooling layers. Before we connect the layer, we’ll flatten our feature map (max pooling 2) to shape [batch_size, features], so that our tensor has only two dimensions:
63x63x64 = 254,016, so let’s flatten the output into a 254,016-dimensional vector; you can also think of this flattened result as just a set of neurons. We then take these 254,016 units and build the next layer with 1,024 units. This is our first fully connected layer; I’m going to call it FC2 because we have 254,016 units densely connected to 1,024 units. A fully connected layer is just like a single standard neural network layer: you have a weight matrix, call it W3, of dimension 1024x254016, and it’s called fully connected because each of the 254,016 units is connected to each of the 1,024 units. You also have a bias parameter that is 1,024-dimensional, because there are 1,024 outputs.
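A sketch of the flatten and dense layers described above, continuing from the pool2 tensor in the previous sketch (variable names are illustrative):

# Flatten [batch_size, 63, 63, 64] -> [batch_size, 254016]
pool2_flat = tf.reshape(pool2, [-1, 63 * 63 * 64])
# Fully connected layer with 1,024 units and ReLU activation
dense = tf.layers.dense(inputs=pool2_flat, units=1024, activation=tf.nn.relu)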

Logits Layer

Finally, you now have 1,024 real numbers that you can feed to a softmax unit. If you’re classifying images as either Dog or Cat, this is a softmax with 2 outputs. This is a reasonably typical example of what a convolutional network looks like.
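A minimal sketch of the logits layer, producing one raw score per class (Dog, Cat), continuing from the dense tensor above:

logits = tf.layers.dense(inputs=dense, units=2)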

Generate Predictions

The logits layer of our model returns our predictions as raw values in a [batch_size, 2]-dimensional tensor. Let’s convert these raw values into two different formats that our model function can return:

  • The predicted class for each example: Dog or Cat
  • The probability of each possible class for each example

Our predicted class is the element in the corresponding row of the logits tensor with the highest raw value. We can find the index of this element using the tf.argmax function:
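For example, using the logits tensor from the previous section:

predicted_classes = tf.argmax(input=logits, axis=1)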

The input argument specifies the tensor from which to extract maximum values, here logits. The axis argument specifies the axis of the input tensor along which to find the greatest value. Here, we want to find the largest value along the dimension with index 1, which corresponds to our predictions (recall that our logits tensor has shape [batch_size, 2]).

We can derive probabilities from our logits layer by applying softmax activation using tf.nn.softmax:
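Putting both outputs together in a predictions dictionary (a sketch; the key names are illustrative, while softmax_tensor is the operation name referenced later by the logging hook):

predictions = {
    "classes": tf.argmax(input=logits, axis=1),
    "probabilities": tf.nn.softmax(logits, name="softmax_tensor"),
}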

Calculate Loss

For training and evaluation, we need to define a loss function that measures how closely the model’s predictions match the target classes. For classification problems, cross entropy is typically used as the loss metric. The following code calculates cross entropy when the model runs in either TRAIN or EVAL mode:
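A minimal sketch, assuming labels is a [batch_size] tensor of integer class ids (0 or 1):

loss = tf.losses.sparse_softmax_cross_entropy(labels=labels, logits=logits)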

Training Operation

We defined the loss for the model as the softmax cross-entropy of the logits layer and our labels. Let’s configure our model to optimize this loss value during training. We’ll use a learning rate of 0.001 and stochastic gradient descent as the optimization algorithm:
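A sketch of the training op, assuming this code runs inside the model function:

if mode == tf.estimator.ModeKeys.TRAIN:
    optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.001)
    train_op = optimizer.minimize(loss=loss, global_step=tf.train.get_global_step())
    return tf.estimator.EstimatorSpec(mode=mode, loss=loss, train_op=train_op)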

Add evaluation metrics

Define the eval_metric_ops dict in EVAL mode as follows:
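For example, using accuracy as the evaluation metric (predictions["classes"] comes from the predictions dictionary sketched earlier):

eval_metric_ops = {
    "accuracy": tf.metrics.accuracy(labels=labels, predictions=predictions["classes"])
}
return tf.estimator.EstimatorSpec(mode=mode, loss=loss, eval_metric_ops=eval_metric_ops)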

Load Training and Test Data


Convert whatever data you have into a TFRecords-supported format. This approach makes it easier to mix and match data sets. The recommended format for TensorFlow is a TFRecords file containing tf.train.Example protocol buffers, which contain Features as a field.

To read a file of TFRecords, use tf.TFRecordReader with the tf.parse_single_example decoder. The parse_single_example op decodes the example protocol buffers into tensors.
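A minimal parsing sketch, written as a function you can later map over a Dataset rather than with tf.TFRecordReader; the feature keys below are assumptions about how the conversion script named its fields, so adjust them to match your TFRecords:

def parse_record(serialized_example):
    features = tf.parse_single_example(
        serialized_example,
        features={
            "image/encoded": tf.FixedLenFeature([], tf.string),
            "image/class/label": tf.FixedLenFeature([], tf.int64),
        })
    # Decode the JPEG bytes and resize to the fixed input size used by the model.
    image = tf.image.decode_jpeg(features["image/encoded"], channels=3)
    image = tf.image.resize_images(image, [252, 252])
    label = tf.cast(features["image/class/label"], tf.int32)
    return image, label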

Train a model with a different image size

The simplest solution is to artificially resize your images to 252×252 pixels. See the Images section for the many resizing, cropping, and padding methods available. Note that the entire model architecture is predicated on a 252x252 image, so if you wish to change the input image size, you may need to redesign the entire model architecture.
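For example, two common ways to bring an arbitrarily sized decoded image to 252x252 (a sketch; image is assumed to be a decoded [height, width, 3] tensor):

resized = tf.image.resize_images(image, [252, 252])                # scales the image
cropped = tf.image.resize_image_with_crop_or_pad(image, 252, 252)  # center-crops or pads instead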

Fused decode and crop

If inputs are JPEG images that also require cropping, use fused tf.image.decode_and_crop_jpeg to speed up preprocessing. tf.image.decode_and_crop_jpeg only decodes the part of the image within the crop window. This significantly speeds up the process if the crop window is much smaller than the full image. For image data, this approach could speed up the input pipeline by up to 30%.
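A sketch of the fused op; crop_window is [crop_y, crop_x, crop_height, crop_width] and the values here are only illustrative:

crop_window = [20, 20, 252, 252]  # y, x, height, width
image = tf.image.decode_and_crop_jpeg(image_bytes, crop_window, channels=3)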

Create input functions


You must create input functions to supply data for training, evaluating, and prediction. An input function is a function that returns the following two-element tuple:

  • “features” – A Python dictionary in which:
    • Each key is the name of a feature.
    • Each value is an array containing all of that feature’s values.
  • “label” – An array containing the values of the label for every example.

The Dataset API can handle a lot of common cases for you. Using the Dataset API, you can easily read in records from a large collection of files in parallel and join them into a single stream.
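A sketch of a training input function built on the Dataset API, reusing the parse_record function from above; the file pattern, feature key, batch size, and shuffle buffer are assumptions:

def train_input_fn():
    filenames = tf.gfile.Glob("/tmp/data/train-?????-of-00002")  # assumed location of the shards
    dataset = tf.data.TFRecordDataset(filenames)
    dataset = dataset.map(parse_record)
    dataset = dataset.shuffle(buffer_size=1000).repeat().batch(32)
    images, labels = dataset.make_one_shot_iterator().get_next()
    return {"image": images}, labels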

Create the Estimator


Next, let’s create an Estimator, a TensorFlow class for performing high-level model training, evaluation, and inference, for our model. Add the following code to main():
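A sketch of that code; the variable name classifier is illustrative:

classifier = tf.estimator.Estimator(
    model_fn=cnn_model_fn,
    model_dir="/tmp/convnet_model")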

The model_fn argument specifies the model function to use for training, evaluation, and prediction; we pass it the cnn_model_fn that we have created. The model_dir argument specifies the directory where model data (checkpoints) will be saved (here, we specify the temp directory /tmp/convnet_model, but feel free to change it to another directory of your choice).

Set Up a Logging Hook

CNNs can take a while to train, so let’s set up some logging to track progress during training. We can use TensorFlow’s tf.train.SessionRunHook to create a tf.train.LoggingTensorHook that will log the probability values from the softmax layer of our CNN. Add the following to main().
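A sketch of the logging setup; softmax_tensor is the name given to the softmax op in the predictions sketch above:

tensors_to_log = {"probabilities": "softmax_tensor"}
logging_hook = tf.train.LoggingTensorHook(tensors=tensors_to_log, every_n_iter=50)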

We store a dict of the tensors we want to log in tensors_to_log. Each key is a label of our choice that will be printed in the log output, and the corresponding value is the name of a Tensor in the TensorFlow graph. Here, our probabilities can be found in softmax_tensor, the name we gave our softmax operation earlier when we generated the probabilities in cnn_model_fn.

Next, we create the LoggingTensorHook, passing tensors_to_log to the tensors argument. We set every_n_iter=50, which specifies that probabilities should be logged after every 50 steps of training.

Train the Model

Now we’re ready to train our model, which we can do by creating train_input_fn and calling train() on our Estimator. Add the following to main():
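A sketch, using the classifier Estimator and the train_input_fn sketched earlier; the step count is illustrative:

classifier.train(
    input_fn=train_input_fn,
    steps=20000,
    hooks=[logging_hook])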

Evaluate the Model

Once training is complete, we want to evaluate our model to determine its accuracy on the test set. We call the evaluate method, which evaluates the metrics we specified in the eval_metric_ops argument of cnn_model_fn. Add the following to main():
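A sketch, assuming an eval_input_fn built like train_input_fn but reading the validation shards without shuffling or repeating:

eval_results = classifier.evaluate(input_fn=eval_input_fn)
print(eval_results)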

Run the Model

We’ve coded the CNN model function, Estimator, and the training/evaluation logic; now run the python script.

Training CNNs is quite computationally intensive. The estimated completion time of the script will vary depending on your processor. To train more quickly, you can decrease the number of steps passed to train(), but note that this will affect accuracy.

Download this project from GitHub

 


References

http://cs231n.github.io/convolutional-networks/

https://www.tensorflow.org/tutorials/layers

 

Convert a directory of images to TFRecords

In this post, I’ll show you how you can convert the dataset into a TFRecord file so you can fine-tune the model.

Before you run the training script for the first time, you will need to convert the image data to the native TFRecord format. The TFRecord format consists of a set of sharded files where each entry is a serialized tf.Example proto. Each tf.Example proto contains the image as well as metadata such as label and bounding box information.

The TFRecord file format is a simple record-oriented binary format that many TensorFlow applications use for training data, and it is the recommended file format for TensorFlow. Binary files are sometimes easier to use because you don’t have to specify separate directories for images and annotations. When you store your data in a binary file, you have your data in one block of memory, compared to storing each image and annotation separately; opening a file is a considerably time-consuming operation, especially on an HDD. Overall, using binary files makes the data easier to distribute and better aligned for efficient reading.

This native TensorFlow file format lets you shuffle, batch, and split datasets with TensorFlow’s own functions. Most batch operations aren’t performed directly on the image files; instead, the images are first converted into a single TFRecord file (or a set of sharded TFRecord files).

Convert images into a TFRecord


Before you start any training, you’ll need a set of images to teach the model about the new classes you want to recognize. When you are working with an image dataset, what is the first thing you do? Split it into training and validation sets.

Here’s an example, which assumes you have a folder containing class-named subfolders, each full of images for each label. The example folder animal_photos should have a structure like this:
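For the Dog vs Cat example, the layout might look like this (the subfolder and file names are just an illustration):

animal_photos/
    cat/
        cat001.jpg
        cat002.jpg
        ...
    dog/
        dog001.jpg
        dog002.jpg
        ...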

The subfolder names are important since they define what label is applied to each image, but the filenames themselves don’t matter. The label for each image is taken from the name of the subfolder it’s in.

The list of valid labels is held in a labels file, with one label per line. The script maps each label contained in the file to an integer corresponding to its line number, starting from 0.
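A small Python sketch of that mapping; the file name and the label entries shown in the comment are only an example:

# labels.txt might contain, for instance:
#   cat
#   dog
with open("labels.txt") as f:
    labels = [line.strip() for line in f if line.strip()]
label_to_index = {label: index for index, label in enumerate(labels)}
# e.g. {"cat": 0, "dog": 1}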

Code Organization


The code for this tutorial resides in data/build_image_data.py. Change train_directory to the path containing the training image data, validation_directory to the path containing the validation image data, output_directory to the directory where the TFRecord files will be written when you run the script, and labels_file to the file that holds the list of valid labels.

This TensorFlow script converts the training and evaluation data into a sharded data set consisting of TFRecord files, where we have selected 1024 and 128 shards for the training and validation data sets, respectively. Each record within the TFRecord file is a serialized Example proto.
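As an illustration of what such a conversion produces, here is a minimal sketch that writes a single image to a TFRecord file; the paths and feature keys are assumptions, and the real build_image_data.py script adds more metadata (height, width, colorspace, etc.) and shards the output across many files:

writer = tf.python_io.TFRecordWriter("/tmp/output/train-00000-of-01024")
with tf.gfile.FastGFile("animal_photos/cat/cat001.jpg", "rb") as f:
    image_bytes = f.read()
example = tf.train.Example(features=tf.train.Features(feature={
    "image/encoded": tf.train.Feature(bytes_list=tf.train.BytesList(value=[image_bytes])),
    "image/class/label": tf.train.Feature(int64_list=tf.train.Int64List(value=[0])),  # 0 = first label in the labels file
}))
writer.write(example.SerializeToString())
writer.close()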

 
