Google Cloud Speech API in Android APP

Many of you have used the “Ok Google” functionality on your phone. The Speech API lets developers integrate that same functionality into their own applications, performing speech-to-text transcription in over 80 languages.

It also works in streaming mode: you can send it a continuous stream of audio, and it returns transcriptions as that audio comes in. In this tutorial, we are going to use streaming recognition to perform speech recognition.

The best way to see how the Speech API works is through a demo.

Prerequisites


  • Google Cloud Platform account (you can use the 12-month free trial)

1. Acquiring an API Key


To use Google Cloud Speech API services in the app, you need a service account key. You can get one by creating a new project in the Google Cloud Platform console.

Once the project has been created, go to API Manager > Dashboard and press the Enable API button.

Enable Google Cloud Speech API

2.Creating a New Android Project


Google provides client libraries to simplify the process of building and sending requests, and receiving and parsing responses.

Add the following compile dependencies to the app build.gradle:
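The dependency block itself was stripped from this page. As a sketch, the official gRPC-based streaming sample for Android pulls in roughly the following (artifact versions are assumptions, check for current ones; the Speech v1 proto stubs also need to be generated via the protobuf Gradle plugin):

```groovy
// Versions are assumptions -- check for the latest releases.
compile 'io.grpc:grpc-okhttp:1.4.0'
compile 'io.grpc:grpc-stub:1.4.0'
compile 'io.grpc:grpc-protobuf-lite:1.4.0'
compile 'com.google.auth:google-auth-library-oauth2-http:0.7.0'
```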

Set Up to Authenticate With Your Project’s Credentials

To get a service account key, visit the Cloud Console and navigate to: API Manager > Credentials > Create credentials > Service account key > New service account. Create a new service account and download the JSON credentials file. Put the file in the app resources as app/src/main/res/raw/credential.json.
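A minimal sketch of loading that bundled key at runtime, using GoogleCredentials from the google-auth-library dependency above:

```java
// Sketch: load the service account key bundled at res/raw/credential.json
// and scope it for Cloud Platform.
InputStream stream = getResources().openRawResource(R.raw.credential);
GoogleCredentials credentials = GoogleCredentials.fromStream(stream)
        .createScoped(Collections.singletonList(
                "https://www.googleapis.com/auth/cloud-platform"));
```

Bundling a real service account key inside an APK is fine for a demo like this one, but avoid it in production apps.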

Generate Google Cloud Service Account

Streaming Speech API Recognition Requests


A streaming Speech API recognition call is designed for real-time capture and recognition of audio within a bidirectional stream. Your application can send audio on the request stream and receive interim and final recognition results on the response stream in real time.

Interim results represent the current recognition result for a section of audio, while the final recognition result represents the last, best guess for that section of audio.

Streaming requests

Unlike a synchronous call, where you send both the configuration and the audio within a single request, calling the streaming Speech API requires sending multiple requests. The first StreamingRecognizeRequest must contain a configuration of type StreamingRecognitionConfig without any accompanying audio. Subsequent StreamingRecognizeRequests sent over the same stream then consist of consecutive frames of raw audio bytes. A sketch of this flow follows the field list below.

StreamingRecognitionConfig consists of the following fields:

  • config – (required) contains configuration information for the audio, of type RecognitionConfig.
  • single_utterance – (optional, defaults to false) indicates whether this request should automatically end after speech is no longer detected.
  • interim_results – (optional, defaults to false) indicates that this stream request should return temporary results that may be refined at a later time.
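A minimal sketch of the request flow, assuming mApi is a SpeechGrpc.SpeechStub created on a ManagedChannel to speech.googleapis.com authenticated with the credentials loaded earlier, responseObserver is the observer sketched in the next section, and audioBuffer/size come from the microphone recorder:

```java
StreamObserver<StreamingRecognizeRequest> requestObserver =
        mApi.streamingRecognize(responseObserver);

// First request: configuration only, no audio.
requestObserver.onNext(StreamingRecognizeRequest.newBuilder()
        .setStreamingConfig(StreamingRecognitionConfig.newBuilder()
                .setConfig(RecognitionConfig.newBuilder()
                        .setEncoding(RecognitionConfig.AudioEncoding.LINEAR16)
                        .setSampleRateHertz(16000)
                        .setLanguageCode("en-US")
                        .build())
                .setInterimResults(true)
                .setSingleUtterance(true)
                .build())
        .build());

// Subsequent requests: consecutive frames of raw audio bytes.
requestObserver.onNext(StreamingRecognizeRequest.newBuilder()
        .setAudioContent(ByteString.copyFrom(audioBuffer, 0, size))
        .build());

// When the user stops talking, close the request stream.
requestObserver.onCompleted();
```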

Streaming responses

Streaming speech recognition results are returned within a series of responses of type StreamingRecognizeResponse. Each response consists of the following fields:

  • speechEventType contains events of type SpeechEventType.
  • results contains the list of results, which may be either interim or final, of type
    StreamingRecognitionResult. Each result contains the following sub-fields:

    • alternatives contains a list of alternative transcriptions.
    • isFinal indicates whether the result in this list entry is interim or final.
    • stability indicates the volatility of results obtained so far, with 0.0 indicating complete instability and 1.0 indicating complete stability.
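A minimal sketch of a response observer reading these fields (types come from the generated com.google.cloud.speech.v1 package and io.grpc.stub; the logging is illustrative):

```java
StreamObserver<StreamingRecognizeResponse> responseObserver =
        new StreamObserver<StreamingRecognizeResponse>() {
            @Override
            public void onNext(StreamingRecognizeResponse response) {
                for (StreamingRecognitionResult result : response.getResultsList()) {
                    if (result.getAlternativesCount() > 0) {
                        String transcript = result.getAlternatives(0).getTranscript();
                        Log.d("Speech", (result.getIsFinal() ? "final: " : "interim: ")
                                + transcript);
                    }
                }
            }

            @Override
            public void onError(Throwable t) {
                Log.e("Speech", "Streaming error", t);
            }

            @Override
            public void onCompleted() {
                Log.d("Speech", "Stream closed");
            }
        };
```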

 

Download this project from GitHub


Google Cloud Natural Language API in Android APP

The Natural Language API lets you extract entities, sentiment, and syntax from your text. A real-world example is a customer feedback platform.

In this tutorial, I’ll introduce you to the Cloud Natural Language platform and show you how to use it to analyze text.

Prerequisites


  • Google Cloud Platform account (you can use the 12-month free trial)

 

1. Acquiring an API Key


To use Google Cloud Natural Language API services in the app, you need an API key. You can get one by creating a new project in the Google Cloud Platform console.

Once the project has been created, go to API Manager > Dashboard and press the Enable API button.

Enable Natural Language API

To get an API key, go to the Credentials tab, press the Create Credentials button, and select API key.

Google Cloud Natural Language API Key

2. Creating a New Android Project


Google provides client libraries to simplify the process of building and sending requests and receiving and parsing responses.

Add the following compile dependencies to the app build.gradle:
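The exact lines were stripped from this page; assumed coordinates for the generated client library (check Maven for an available revision):

```groovy
// Revisions are assumptions -- check Maven Central for current ones.
compile 'com.google.api-client:google-api-client-android:1.22.0'
compile 'com.google.apis:google-api-services-language:v1-rev10-1.22.0'
```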

Add INTERNET permission in the AndroidManifest.xml file.
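For reference, the manifest entry is the standard one:

```xml
<uses-permission android:name="android.permission.INTERNET"/>
```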

To interact with the API using the Google API Client library, you must create a CloudNaturalLanguage object using the CloudNaturalLanguage.Builder class. Its constructor also expects an HTTP transport and a JSON factory.

Furthermore, by assigning a CloudNaturalLanguageRequestInitializer instance to it, you can force it to include your API key in all its requests.
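The original code block was stripped from this page; a minimal sketch, with CLOUD_API_KEY standing in for the key you acquired in step 1:

```java
final CloudNaturalLanguage naturalLanguageService =
        new CloudNaturalLanguage.Builder(
                AndroidHttp.newCompatibleTransport(), // the HTTP transport
                new AndroidJsonFactory(),             // the JSON factory
                null
        ).setCloudNaturalLanguageRequestInitializer(
                new CloudNaturalLanguageRequestInitializer(CLOUD_API_KEY)
        ).build();
```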

All the text you want to analyze using the API must be placed inside a Document object. The Document object must also contain configuration information, such as the language of the text and whether it is formatted as plain text or HTML. Add the following code:
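A minimal sketch, using the restaurant review from the sentiment section below as the content:

```java
Document document = new Document();
document.setType("PLAIN_TEXT"); // or "HTML"
document.setLanguage("en-US");
document.setContent("The food at that restaurant was stale, I will not be going back.");
```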

Next, you must create a Features object specifying the features you are interested in analyzing. The following code shows you how to create a Features object that says you want to extract entities and run sentiment analysis only.
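A sketch matching that description:

```java
// Ask only for entity extraction and document-level sentiment.
Features features = new Features();
features.setExtractEntities(true);
features.setExtractDocumentSentiment(true);
```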

Use the Document and Features objects to compose an AnnotateTextRequest object, which can be passed to the annotateText() method to generate an AnnotateTextResponse object.
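A minimal sketch of composing and executing the request; like all calls in this client library, execute() performs network I/O and must run off the main thread:

```java
AnnotateTextRequest request = new AnnotateTextRequest();
request.setDocument(document);
request.setFeatures(features);

AnnotateTextResponse response =
        naturalLanguageService.documents().annotateText(request).execute();
```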

Entity Analysis


We get the name of the entity, for example, Google. The type of the entity is organization. Then we get back some metadata, such as the MID, an ID that maps to Google’s Knowledge Graph.

You can extract a list of entities from the AnnotateTextResponse object by calling its getEntities() method.
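A sketch of iterating over those entities and logging the fields discussed above:

```java
for (Entity entity : response.getEntities()) {
    Log.d("NLAPI", "Name: " + entity.getName()
            + ", type: " + entity.getType()
            + ", metadata: " + entity.getMetadata()); // e.g. the Knowledge Graph MID
}
```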

Analyze Entity

 

Sentiment Analysis


The API can also analyze the sentiment of your text. Suppose we have this restaurant review:

“The food at that restaurant was stale, I will not be going back.”

We might want to flag the most positive and most negative reviews and respond just to those. The Natural Language API gives us two numbers back to help us do this.

The first number is score, which tells us on a scale from -1 to 1 how positive or negative the text is. In this example, we get -0.8, which is almost fully negative.

Then we get magnitude, which tells us how strong the sentiment is, regardless of whether it is positive or negative. It ranges from 0 to infinity and, unlike score, it is not normalized for the length of the text, so we get a pretty small number here, 0.8, because this is just a small piece of text.

You can extract the overall sentiment of the text by calling the getDocumentSentiment() method. To get the actual score of the sentiment, however, you must also call the getScore() method, which returns a float.
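A minimal sketch of reading both numbers (the Sentiment model class comes from the same generated library):

```java
Sentiment sentiment = response.getDocumentSentiment();
float score = sentiment.getScore();         // -1.0 (negative) .. 1.0 (positive)
float magnitude = sentiment.getMagnitude(); // 0 .. infinity, grows with text length
```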

analyze sentiment

 

Download this project from GitHub


Google Cloud Vision API in Android APP

Google recently launched the Cloud Machine Learning platform, which offers neural networks that have been pre-trained to perform a variety of tasks. You can use them by simply making a few REST API calls or by using a client library.

The Google Cloud Vision API lets you understand the content of an image using machine learning models, via a REST API. It quickly classifies images into thousands of categories, detects objects and faces within images, and finds and reads printed words contained within images.

In this tutorial, I’ll introduce you to the Cloud Machine Learning platform and show you how to use it to create a smart Android app that can recognize real-world objects.

Prerequisites


  • A device running Android 4.4+
  • Google Cloud Platform account (you can use the 12-month free trial)

1. Acquiring an API Key


To use Google Cloud Vision API services in the app, you need an API key. You can get one by creating a new project in the Google Cloud Platform console.

Once the project has been created, go to API Manager > Dashboard and press the Enable API button.

Enable Google Vision API

To get an API key, go to the Credentials tab, press the Create Credentials button, and select API key.

Google Cloud Vision API

2. Creating a New Android Project


Google provides client libraries to simplify the process of building and sending requests and receiving and parsing responses.

Add the following compile dependencies to the app build.gradle:
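As with the earlier tutorials, the exact lines were stripped; assumed coordinates (check Maven for current revisions):

```groovy
// Revisions are assumptions -- check Maven Central for current ones.
compile 'com.google.api-client:google-api-client-android:1.22.0'
compile 'com.google.apis:google-api-services-vision:v1-rev16-1.22.0'
```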

Add INTERNET permission in the AndroidManifest.xml file.

Step 1: Create an Intent

By creating a new intent with the ACTION_IMAGE_CAPTURE action and passing it to the startActivityForResult() method, you can ask the default camera app of the user’s device to take a picture and pass it on to your app. Add the following code to your Activity class:
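A minimal sketch (the request code is an arbitrary constant of your choosing):

```java
static final int REQUEST_IMAGE_CAPTURE = 1; // arbitrary request code

private void dispatchTakePictureIntent() {
    Intent takePictureIntent = new Intent(MediaStore.ACTION_IMAGE_CAPTURE);
    if (takePictureIntent.resolveActivity(getPackageManager()) != null) {
        startActivityForResult(takePictureIntent, REQUEST_IMAGE_CAPTURE);
    }
}
```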

Step 2: Display the Image

Receive the image captured by the default camera app in the onActivityResult() method of your Activity class. There you’ll have access to a Bundle object containing the image data, which you can render by converting it into a Bitmap and passing it to an ImageView widget.
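A sketch, assuming an ImageView field named imageView:

```java
@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == REQUEST_IMAGE_CAPTURE && resultCode == RESULT_OK) {
        Bundle extras = data.getExtras();
        Bitmap bitmap = (Bitmap) extras.get("data"); // thumbnail-sized preview
        imageView.setImageBitmap(bitmap);
    }
}
```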

Step 3: Encode the Image

The Vision API cannot use Bitmap objects directly. It expects a Base64-encoded string of compressed image data. To compress the image data, you can use the compress() method of the Bitmap class. As its arguments, the method expects the compression format to use, the output quality desired, and a ByteArrayOutputStream object. The following code compresses the bitmap using the JPEG format.
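A minimal sketch; the Image model class’s encodeContent() method does the Base64 encoding for you:

```java
// Compress the bitmap to JPEG at 90% quality, then Base64-encode it.
ByteArrayOutputStream stream = new ByteArrayOutputStream();
bitmap.compress(Bitmap.CompressFormat.JPEG, 90, stream);
byte[] photoData = stream.toByteArray();

Image inputImage = new Image();
inputImage.encodeContent(photoData);
```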

Step 4: Create Feature

A Feature describes the type of image detection task to perform over the image. It encodes the Vision vertical to operate on and the number of top-scoring results to return.

These are Java data model classes that specify how to parse/serialize the JSON transmitted over HTTP when working with the Cloud Vision API.
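A sketch requesting up to ten label-detection results:

```java
Feature desiredFeature = new Feature();
desiredFeature.setType("LABEL_DETECTION");
desiredFeature.setMaxResults(10);
```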

Step 5: Create Request

Create the request for performing Vision tasks over a user-provided image, with user-requested features.
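A minimal sketch, combining the encoded image and the feature into a batch request:

```java
AnnotateImageRequest annotateImageRequest = new AnnotateImageRequest();
annotateImageRequest.setImage(inputImage);
annotateImageRequest.setFeatures(Collections.singletonList(desiredFeature));

BatchAnnotateImagesRequest batchRequest = new BatchAnnotateImagesRequest();
batchRequest.setRequests(Collections.singletonList(annotateImageRequest));
```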

Step 6: Process the Image

Now, you need to interact with the Vision API. Start by creating an HttpTransport and a VisionRequestInitializer that contains your API key:
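A sketch, with CLOUD_VISION_API_KEY standing in for the key from step 1; annotate().execute() is a blocking network call, so run it on a background thread:

```java
HttpTransport httpTransport = AndroidHttp.newCompatibleTransport();
JsonFactory jsonFactory = GsonFactory.getDefaultInstance();

Vision vision = new Vision.Builder(httpTransport, jsonFactory, null)
        .setVisionRequestInitializer(new VisionRequestInitializer(CLOUD_VISION_API_KEY))
        .build();

BatchAnnotateImagesResponse batchResponse =
        vision.images().annotate(batchRequest).execute();

// For a LABEL_DETECTION request, the labels are in the first response entry.
List<EntityAnnotation> labels =
        batchResponse.getResponses().get(0).getLabelAnnotations();
```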

1. Label Detection


The Vision API can detect and extract information about entities within an image. Labels can identify objects, locations, activities, animal species, products, and more.

Vision API Label Detection

2. Landmark Detection


Landmark requests detect well-known natural and human-made landmarks and return identifying information such as an entity ID, the landmark’s name and location, and the bounding box that surrounds the landmark in the image.

Vision API Landmark Detection

3. Logo Detection


Logo detection requests detect popular product and corporate logos within an image.

4. Safe Search Detection


Safe Search requests examine an image for potentially unsafe or undesirable content. The likelihood of such imagery is returned in four categories:

  • adult indicates content generally suited for 18 years plus, such as nudity, sexual activity, and pornography (including cartoons or anime).
  • spoof indicates content that has been modified from the original to make it funny or offensive.
  • medical indicates content such as surgeries or MRIs.
  • violent indicates violent content, including but not limited to the presence of blood, war images, weapons, injuries, or car crashes.

Vision API Safe Search

5. Image Properties


An image properties request returns the dominant colors in the image as RGB values, along with each color’s percentage of the total pixel count.

Vision API Image Property

 

Conclusion

In this tutorial, you learned how to use the Cloud Vision, which is part of the Google Cloud Machine Learning platform, in an Android app. There are many more such APIs offered by the platform. You can learn more about them by referring to the official documentation.

 


Download this project from GitHub

 


Phone Number Authentication with Firebase Auth

Over 90% of people who have an issue logging into an app will leave. Sign-in is an important part of your growth funnel, and a phone number is an effective identity for authentication.

People prefer to use their phone as their identity instead of an email address or another form of identification. Phone numbers are also higher quality identifiers.

Firebase Phone Auth


For the many apps that don’t have phone number authentication today, building it in the first place can be a real struggle.

Firebase enables basic sign-in and sign-up flows, and on Android it adds a few built-in enhancements, including a really cool new piece of functionality called instant verification.

All a user would need to do is enter their phone number, and you can go ahead and pass it over to Firebase.

Firebase will validate the phone number, handle any normalization logic that needs to happen, work out whether this is a new user or a returning one, and generate a one-time code associated with this request.

The user simply enters the code into your application, you pass it right back to Firebase, and Firebase verifies it. Firebase handles the complex logic, deduplication if you send multiple SMS by mistake, and any of the other edge cases that might arise. Once the code is validated, Firebase mints cryptographically signed tokens and persists the session, the whole authentication. You won’t have to worry about any of it.

Auto-retrieval


The best part of all of this is that your user never actually has to leave your application, go to an SMS app, wait for the message, and then copy and paste the code into your UI. They stay in your application the entire time. It’s a really seamless experience overall, and all this powerful functionality is also really easy to use.

Configuration


As a pre-requisite, ensure your application is configured for use with Firebase.

Then, add the FirebaseUI Auth library dependency (please check for the latest version):
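A sketch; the version number is an assumption, so check the FirebaseUI releases page:

```groovy
compile 'com.firebaseui:firebase-ui-auth:3.1.0'
```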

The second thing you need to do is just enable phone auth within your project in the Firebase console.

 

Enable Firebase Phone Auth

It’s really easy to use. All the awesome functionality, the actual ability to generate and send codes, the auto-retrieval, even instant verification, comes down to just one single API call, plus the implementation of the callbacks that you care about.
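A minimal sketch of that call, assuming phoneNumber is the E.164-formatted string collected from the user and this code runs inside an Activity (classes come from com.google.firebase.auth):

```java
PhoneAuthProvider.getInstance().verifyPhoneNumber(
        phoneNumber,
        60, TimeUnit.SECONDS,  // auto-retrieval timeout
        this,                  // the current Activity
        new PhoneAuthProvider.OnVerificationStateChangedCallbacks() {
            @Override
            public void onVerificationCompleted(PhoneAuthCredential credential) {
                // Instant verification or SMS auto-retrieval succeeded.
                FirebaseAuth.getInstance().signInWithCredential(credential);
            }

            @Override
            public void onVerificationFailed(FirebaseException e) {
                // Invalid number, quota exceeded, etc.
            }

            @Override
            public void onCodeSent(String verificationId,
                                   PhoneAuthProvider.ForceResendingToken token) {
                // Save verificationId; once the user types the code, build the
                // credential with PhoneAuthProvider.getCredential(verificationId, code).
            }
        });
```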

What about the UI?

Building authentication UI can be pretty complex. FirebaseUI provides all the code for the different phone authentication flows.

The first thing you’ll notice is that FirebaseUI integrates with the hint selector directly, without any additional code on your part. It picks up the user’s phone number from the device so it can be entered directly into the app.

Now when you hit “verify phone number”, it sends a code and reads it right off the incoming SMS. All I did was tap my phone number, and everything else was taken care of for me: the SMS was delivered, parsed, and verified, and I never left the app.

Instant Verification


Instant verification checks whether Firebase has verified this phone number on the device recently. If it has, it can instantly verify the phone number the next time around without needing to send any additional SMS. This means there is no wait time on SMS or anything else, just a simple verification.

Automatic SMS Verification with the Phone Selector and SMS Retriever API

Using SMS Retriever API, you can perform SMS-based verification in your app automatically, without requiring the user to manually type verification codes, and without requiring any extra app permissions.

The Google Play Services SDK (10.2 and newer) lets you read the phone number and the verification SMS automatically, without requiring these extra permissions.

1. Phone Selector API


The Phone Selector API provides the phone number to your app with a much better user experience and no extra permissions. Using this API, you can launch a dialog that shows the user the phone numbers on the device.

First, you create a hint request object and set the phone number identifier supported field to true.

Then, you get a pending intent from that hint request for the phone number selector dialog.

Once the user selects a phone number, it is returned to your app in onActivityResult(), as in the sketch below.
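A minimal sketch, assuming an already-connected GoogleApiClient (googleApiClient) built with Auth.CREDENTIALS_API; RC_HINT is an arbitrary request code:

```java
private static final int RC_HINT = 1000; // arbitrary request code

private void requestPhoneNumberHint() {
    HintRequest hintRequest = new HintRequest.Builder()
            .setPhoneNumberIdentifierSupported(true)
            .build();
    PendingIntent hintIntent =
            Auth.CredentialsApi.getHintPickerIntent(googleApiClient, hintRequest);
    try {
        startIntentSenderForResult(hintIntent.getIntentSender(), RC_HINT, null, 0, 0, 0);
    } catch (IntentSender.SendIntentException e) {
        // Could not launch the picker.
    }
}

@Override
protected void onActivityResult(int requestCode, int resultCode, Intent data) {
    super.onActivityResult(requestCode, resultCode, data);
    if (requestCode == RC_HINT && resultCode == RESULT_OK) {
        Credential credential = data.getParcelableExtra(Credential.EXTRA_KEY);
        String phoneNumber = credential.getId(); // the selected phone number
    }
}
```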

2. Reading the verification code automatically using the SMS Retriever API


The SMS Retriever API delivers the message content to your app without requiring any extra permissions. The key part is that it provides only messages targeted at your app: you have to place the verification code in your SMS message and include an app-specific hash. This app-specific hash is a static hash that you can simply include in the SMS template without many code changes.

SMS Retriever API

Start the SMS retriever

This makes it wait for one matching SMS, which includes the app-specific hash.
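A minimal sketch (SmsRetriever comes from com.google.android.gms.auth.api.phone):

```java
SmsRetrieverClient client = SmsRetriever.getClient(this /* Context */);
Task<Void> task = client.startSmsRetriever();
task.addOnSuccessListener(new OnSuccessListener<Void>() {
    @Override
    public void onSuccess(Void aVoid) {
        // Now waiting (for up to 5 minutes) for one matching SMS;
        // trigger your backend to send the verification message.
    }
});
```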

Once the SMS with the app-specific hash is received on the device, it is provided to your app via broadcast.

In your broadcast receiver, you can get the message content from the extras. Once you have the message content, you can extract the verification code and verify it just like you would normally do.

Register this BroadcastReceiver with the intent filter.
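A sketch of the receiver and its manifest registration (the class name is illustrative):

```java
import android.content.BroadcastReceiver;
import android.content.Context;
import android.content.Intent;
import android.os.Bundle;

import com.google.android.gms.auth.api.phone.SmsRetriever;
import com.google.android.gms.common.api.CommonStatusCodes;
import com.google.android.gms.common.api.Status;

public class MySMSBroadcastReceiver extends BroadcastReceiver {
    @Override
    public void onReceive(Context context, Intent intent) {
        if (SmsRetriever.SMS_RETRIEVED_ACTION.equals(intent.getAction())) {
            Bundle extras = intent.getExtras();
            Status status = (Status) extras.get(SmsRetriever.EXTRA_STATUS);
            if (status.getStatusCode() == CommonStatusCodes.SUCCESS) {
                String message = (String) extras.get(SmsRetriever.EXTRA_SMS_MESSAGE);
                // Parse the one-time code out of `message` and verify it as usual.
            }
        }
    }
}
```

```xml
<receiver android:name=".MySMSBroadcastReceiver" android:exported="true">
    <intent-filter>
        <action android:name="com.google.android.gms.auth.api.phone.SMS_RETRIEVED"/>
    </intent-filter>
</receiver>
```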

After starting the SMS retriever, you can just send the SMS with the verification code and the app-specific hash to the phone, using whatever backend infrastructure you like.

3. Construct a verification message


Construct the verification message that you will send to the user’s device. This message must:

  • Be no longer than 140 bytes
  • Begin with one of the following strings:
    • [#]
    • Two consecutive zero-width space characters (U+200B)
  • End with an 11-character hash string that identifies your app

Otherwise, the contents of the verification message can be whatever you choose. It is helpful to create a message from which you can easily extract the one-time code later on. For example, a valid verification message might look like the following:
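The example itself was stripped from this page; a hypothetical one, using the placeholder hash string from Google’s documentation (your real app hash will differ), might look like:

```
<#> Your ExampleApp code is: 123456
FA+9qCX9VSu
```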

Computing your app’s hash string


You can get your app’s hash string with the AppSignatureHelper class from the SMS retriever sample app. However, if you use the helper class, be sure to remove it from your app after you get the hash string. Do not use hash strings dynamically computed on the client in your verification messages.

Android TensorFlow Machine Learning

TensorFlow is an open source software library for machine learning, developed by Google and currently used in many of their projects.

In this article, we will create an Android app that can recognize five types of fruits.

Machine learning model inside your mobile app


Using machine learning on the device, you can reduce a significant amount of traffic and get much faster responses from your web server, because you can extract the meaning from the raw data on the client side.

For example, if you are using machine learning for image recognition, you can have the machine learning model running inside your mobile application so that the app can recognize what kind of object is in each image. Then you can just send the label, such as a flower or a human face, to the server. That reduces the traffic.

Build an application that is powered by machine learning

TensorFlow is very valuable for people like me because I don’t have any sophisticated mathematical background.

Implement TensorFlow in Android


Android added a Gradle integration, which makes this step easier. Just add one line to build.gradle, and Gradle takes care of the rest of the steps: the library archive holding the TensorFlow shared object is downloaded from JCenter and linked against the application automatically.

TensorFlow provides an Android inference library to integrate TensorFlow into Java applications.
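The one line in question (this is the published artifact; the “+” simply picks up the newest build):

```groovy
compile 'org.tensorflow:tensorflow-android:+'
```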

Add your model to the project


We need the pre-trained model and the label file. In the previous tutorial, we trained a model which does image classification on a given image. You can download the model from here. Unzip the file and you will get retrained_labels.txt (the labels for the objects) and rounded_graph.pb (the pre-trained model).

Put retrained_labels.txt and rounded_graph.pb into the android/assets directory.

First, create the TensorFlow inference interface, opening the model file from the assets in the APK. Then set up the input feed using the feed API.

On mobile, the input feed tends to be retrieved from various sensors, like the camera or the accelerometer. Then run the inference.

Finally, you can fetch the results using the fetch method. Note that these calls are all blocking, so you should run them on a worker thread rather than the main thread, because the API can take a long time. This is the Java API; you can use the regular C++ API as well.
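A minimal sketch of that create/feed/run/fetch cycle. The constants are assumptions for a retrained Inception v3 graph and must match your model; in real code you would create the inference interface once and reuse it:

```java
// Assumed names/sizes for a retrained Inception v3 graph; adjust for your model.
private static final String MODEL_FILE = "rounded_graph.pb"; // in android/assets
private static final String INPUT_NAME = "Mul";
private static final String OUTPUT_NAME = "final_result";
private static final int INPUT_SIZE = 299;
private static final int NUM_CLASSES = 5; // five kinds of fruit

private float[] classify(float[] pixels) {
    // Open the model from the APK assets.
    TensorFlowInferenceInterface inference =
            new TensorFlowInferenceInterface(getAssets(), MODEL_FILE);

    // Feed the preprocessed RGB values, run the graph, then fetch the scores.
    inference.feed(INPUT_NAME, pixels, 1, INPUT_SIZE, INPUT_SIZE, 3);
    inference.run(new String[] {OUTPUT_NAME});

    float[] outputs = new float[NUM_CLASSES];
    inference.fetch(OUTPUT_NAME, outputs);
    return outputs;
}
```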

Download this project from GitHub

 


 

 

Retrain TensorFlow Model for Image Classification

In this post, we’re going to retrain an Image Classifier TensorFlow Model. You’ll need to install TensorFlow and you’ll need to understand how to use the command line.

1. Collect training data


We’re going to write a function to classify images of fruit. For starters, it will take an image of a fruit as input and predict whether it’s an apple or an orange as output. The more training data you have, the better a classifier you can create (at least 50 images of each; more is better).

The example fruit images folder should have a structure like this:

Image classifier folder

We will create a ~/tf_files/fruits folder and place each set of JPEG images in subdirectories (such as ~/tf_files/fruits/apple, ~/tf_files/fruits/orange, etc.).

The subfolder names are important: they define what label is applied to each image, but the filenames themselves don’t matter.

A quick way to download multiple images at once is a Chrome extension for batch downloading.

To retrain your classifier you need to run a couple of scripts. You only need to provide one thing: training data.

Step 1: The retrain.py script is part of the TensorFlow repo. You need to download it manually to the current directory (~/tf_files):

download retrain.py
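The command itself was stripped from this page; a sketch, assuming the script’s historical location in the main TensorFlow repo (it has since moved around, so check the current path):

```bash
curl -O https://raw.githubusercontent.com/tensorflow/tensorflow/master/tensorflow/examples/image_retraining/retrain.py
```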
Now we have a trainer and we have data (images), so let’s train! We will retrain the Inception v3 network.

Step 2: Before starting the training, activate your TensorFlow environment.

active tensorflow environment
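How you activate depends on how you installed TensorFlow; assuming a virtualenv at ~/tensorflow, it would be something like:

```bash
source ~/tensorflow/bin/activate
```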

Step 3: Start your image retraining with one big command.

Train Image classifier
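A sketch of that command using retrain.py’s standard flags (run it from the directory containing tf_files/; the step count and paths are illustrative):

```bash
python retrain.py \
  --bottleneck_dir=tf_files/bottlenecks \
  --how_many_training_steps=500 \
  --model_dir=tf_files/models/ \
  --output_graph=tf_files/retrained_graph.pb \
  --output_labels=tf_files/retrained_labels.txt \
  --image_dir=tf_files/fruits
```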

This command downloads the Inception model and retrains it to classify images from ~/tf_files/fruits.

Train Image classifier accuracy

This operation can take several minutes depending on how many images you have and how many training steps you specified.

The script will generate two files: the model in a protobuf file (retrained_graph.pb) and a label list of all the objects it can recognize (retrained_labels.txt).

retrained graph files

Clone the Git repository for testing the model


The following command will clone the Git repository containing the files required for the test model.

Copy tf file
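A sketch of the clone step, assuming the tensorflow-for-poets-2 codelab repository this walkthrough appears to be based on:

```bash
git clone https://github.com/googlecodelabs/tensorflow-for-poets-2
```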

The repo contains two directories: android/ and scripts/.

1. android/: contains nearly all the files necessary to build a simple Android app that classifies images.

2. scripts/: contains the Python scripts, including scripts to prepare, test, and evaluate the model.

Now copy the tf_files directory from the first part into the tensorflow-for-poets-2 working directory.
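For example, assuming tf_files lives in your home directory:

```bash
cd tensorflow-for-poets-2
cp -r ~/tf_files .
```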

Test the Model


The scripts/ directory contains a simple command line script, label_image.py, to test the network.

test trained model
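A sketch of the test invocation, run from the repo root (the image path is a hypothetical example):

```bash
python -m scripts.label_image \
  --graph=tf_files/retrained_graph.pb \
  --labels=tf_files/retrained_labels.txt \
  --image=tf_files/fruits/apple/example.jpg
```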

Optimize model for Android


TensorFlow installation includes a tool, optimize_for_inference, that removes all nodes that aren’t needed for a given set of input and output nodes.

Optimize for inference
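A sketch of the call; the input and output node names shown ("Mul" and "final_result") are the usual ones for a retrained Inception v3 graph, but verify them for your model:

```bash
python -m tensorflow.python.tools.optimize_for_inference \
  --input=tf_files/retrained_graph.pb \
  --output=tf_files/optimized_graph.pb \
  --input_names="Mul" \
  --output_names="final_result"
```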

It creates a new file at tf_files/optimized_graph.pb.

Make the model compressible


The retrained model is still 84MB in size at this point. That large download size may be a limiting factor for any app that includes it.

Neural network operation requires a bunch of matrix calculations, which means tons of multiply and add operations. Current mobile devices are capable of doing some of them with specialized hardware.

Quantization


Quantization is one of the techniques to reduce both memory footprint and compute load.

Now use the quantize_graph script to apply changes:

quantize graph
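A sketch, using the quantize_graph script from the repo’s scripts/ directory in weights_rounded mode (which is what produces the rounded_graph.pb file used throughout this series):

```bash
python -m scripts.quantize_graph \
  --input=tf_files/optimized_graph.pb \
  --output=tf_files/rounded_graph.pb \
  --output_node_names=final_result \
  --mode=weights_rounded
```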

It does this without any changes to the structure of the network; it simply quantizes the constants in place. It creates a new file at tf_files/rounded_graph.pb.

Every mobile app distribution system compresses the package before distribution. So test how much the graph can be compressed:

Quantize compare
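A sketch of the comparison, gzipping both graphs and checking the resulting sizes:

```bash
gzip -c tf_files/optimized_graph.pb > /tmp/optimized_graph.pb.gz
gzip -c tf_files/rounded_graph.pb > /tmp/rounded_graph.pb.gz
ls -lh /tmp/*.pb.gz
```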

You should see a significant improvement; I get a model that compresses by about 73%.

 
