TensorFlow is an open source software library for machine learning, developed by Google and currently used in many of their projects.
In this article, we will create an Android app that can recognize five types of fruits.
Machine learning model inside your mobile app
Using machine learning on the client side, you can significantly reduce traffic and get much faster responses from your web server, because you can extract the meaning from the raw data on the device itself. For example, if you are using machine learning for image recognition, you can run the model inside your mobile application so that the app can recognize what kind of object is in each image. Instead of uploading the image, you then send only the label, such as "flower" or "human face", to the server, which reduces the traffic.
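As a minimal sketch of that idea (the classifier here is a hypothetical stand-in for a real on-device model like the one we build below):

```java
public class LabelUploader {
    // Hypothetical stand-in for an on-device model: a real app would run
    // TensorFlow inference on the pixels here instead of returning a constant.
    static String classify(int[] pixels) {
        return "flower";
    }

    // Instead of uploading the raw image (megabytes), the app sends only
    // the recognized label (a few bytes) to the server.
    static String payloadFor(int[] pixels) {
        return "{\"label\":\"" + classify(pixels) + "\"}";
    }

    public static void main(String[] args) {
        System.out.println(payloadFor(new int[640 * 480]));
    }
}
```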
Build an application that is powered by machine learning
TensorFlow is very valuable for people like me because I don’t have any sophisticated mathematical background.
Implement TensorFlow in Android
TensorFlow now ships an Android inference library through JCenter, which makes this step easy. Just add one line to build.gradle, and Gradle takes care of the rest: the library archive holding the TensorFlow shared object is downloaded from JCenter and linked against the application automatically.
dependencies {
    ....
    compile 'org.tensorflow:tensorflow-android:1.2.0-preview'
}
The Android release includes an inference library for integrating TensorFlow into Java applications.
Add your model to the project
We need the pre-trained model and the label file. In the previous tutorial, we trained a model that classifies a given image. You can download the model from here. Unzip the file and you will get retrained_labels.txt (the labels for the objects) and rounded_graph.pb (the pre-trained model). Put retrained_labels.txt and rounded_graph.pb into the android/assets directory.
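The label file is a plain-text list with one label per line. A minimal sketch of loading it (plain Java file I/O here; in the app itself this would read from the APK's AssetManager instead):

```java
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.ArrayList;
import java.util.List;

public class LabelLoader {
    // Read one label per line, e.g. from retrained_labels.txt.
    // The index of each label matches the index of the model's output tensor.
    static List<String> loadLabels(String path) throws IOException {
        List<String> labels = new ArrayList<>();
        try (BufferedReader reader = new BufferedReader(new FileReader(path))) {
            String line;
            while ((line = reader.readLine()) != null) {
                labels.add(line.trim());
            }
        }
        return labels;
    }
}
```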
public List<Recognition> recognizeImage(final Bitmap bitmap) {
    // Log this method so that it can be analyzed with systrace.
    Trace.beginSection("recognizeImage");

    Trace.beginSection("preprocessBitmap");
    // Preprocess the image data from 0-255 int to normalized float based
    // on the provided parameters.
    bitmap.getPixels(intValues, 0, bitmap.getWidth(), 0, 0, bitmap.getWidth(), bitmap.getHeight());
    for (int i = 0; i < intValues.length; ++i) {
        final int val = intValues[i];
        floatValues[i * 3 + 0] = (((val >> 16) & 0xFF) - imageMean) / imageStd;
        floatValues[i * 3 + 1] = (((val >> 8) & 0xFF) - imageMean) / imageStd;
        floatValues[i * 3 + 2] = ((val & 0xFF) - imageMean) / imageStd;
    }
    Trace.endSection();

    // Copy the input data into TensorFlow.
    Trace.beginSection("feed");
    inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);
    Trace.endSection();

    // Run the inference call.
    Trace.beginSection("run");
    inferenceInterface.run(outputNames, logStats);
    Trace.endSection();

    // Copy the output Tensor back into the output array.
    Trace.beginSection("fetch");
    inferenceInterface.fetch(outputName, outputs);
    Trace.endSection();

    // Find the best classifications.
    PriorityQueue<Recognition> pq =
        new PriorityQueue<Recognition>(
            3,
            new Comparator<Recognition>() {
                @Override
                public int compare(Recognition lhs, Recognition rhs) {
                    // Intentionally reversed to put high confidence at the head of the queue.
                    return Float.compare(rhs.getConfidence(), lhs.getConfidence());
                }
            });
    for (int i = 0; i < outputs.length; ++i) {
        if (outputs[i] > THRESHOLD) {
            pq.add(
                new Recognition(
                    "" + i, labels.size() > i ? labels.get(i) : "unknown", outputs[i], null));
        }
    }
    final ArrayList<Recognition> recognitions = new ArrayList<Recognition>();
    int recognitionsSize = Math.min(pq.size(), MAX_RESULTS);
    for (int i = 0; i < recognitionsSize; ++i) {
        recognitions.add(pq.poll());
    }
    Trace.endSection(); // "recognizeImage"
    return recognitions;
}
First, create the TensorFlow inference interface, opening the model file from the assets in the APK. Then set up the input using the feed API; on mobile, the input tends to come from sensors such as the camera or accelerometer. Next, run the inference. Finally, fetch the results using the fetch method. Notice that these are all blocking calls, so you should run them on a worker thread rather than the main thread, because they can take a long time. This is the Java API; you can use the regular C++ API as well.
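Since feed/run/fetch block, one common pattern is a single-threaded executor that runs recognition off the UI thread and hands the results back through a callback. A minimal sketch (the Classifier interface here is a hypothetical stand-in for the class containing recognizeImage above):

```java
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.function.Consumer;

public class BackgroundRecognizer {
    // Hypothetical stand-in for the recognizer shown above.
    interface Classifier {
        List<String> recognize(int[] pixels);
    }

    private final ExecutorService executor = Executors.newSingleThreadExecutor();
    private final Classifier classifier;

    public BackgroundRecognizer(Classifier classifier) {
        this.classifier = classifier;
    }

    // Runs the blocking feed/run/fetch sequence off the caller's thread,
    // then delivers the labels to the callback.
    public void recognizeAsync(int[] pixels, Consumer<List<String>> callback) {
        executor.submit(() -> callback.accept(classifier.recognize(pixels)));
    }

    public void shutdown() {
        executor.shutdown();
    }
}
```

On Android, the callback would typically post back to the main thread (for example with a Handler) before touching any views.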
Download this project from GitHub
Related Posts
Image Classify Using TensorFlow Lite
Google Cloud Vision API in Android App
TensorFlow Lite
When I run the project it gives me this error:
Error:(5, 0) assert file(project.ext.ASSET_DIR + "/rounded_graph.pb").exists()
| | | | | |
| | | | | false
| | | | C:\Users\User\AndroidStudioProjects\ImageClassifier-master/assets/rounded_graph.pb
| | | C:\Users\User\AndroidStudioProjects\ImageClassifier-master/assets
| | [email protected]5c34be05
| root project 'ImageClassifier-master'
C:\Users\User\AndroidStudioProjects\ImageClassifier-master\assets\rounded_graph.pb
It's because Gradle couldn't find android/assets/rounded_graph.pb or android/assets/retrained_labels.txt. You can download the model from here. Unzip the file and you will get retrained_labels.txt (the labels for the objects) and rounded_graph.pb (the pre-trained model).
Put retrained_labels.txt and rounded_graph.pb into the android/assets directory.
I'm getting this error:
Gradle sync failed: Failed to find Build Tools revision 26.0.0 rc2:
this package is not available
If I change the buildToolsVersion, it gives this error:
Gradle sync failed: This Gradle plugin requires Studio 3.0 minimum
@Maalik I made some changes to the following files, and then it compiled and ran without an error:
/ImageClassifier/build.gradle
classpath 'com.android.tools.build:gradle:3.0.0-alpha4' -> classpath 'com.android.tools.build:gradle:2.3.3'
/ImageClassifier/app/build.gradle
buildToolsVersion '26.0.0 rc2' -> buildToolsVersion '26.0.1'
dependencies {
    implementation fileTree(include: ['*.jar'], dir: 'libs')
    androidTestImplementation('com.android.support.test.espresso:espresso-core:2.2.2', {
        exclude group: 'com.android.support', module: 'support-annotations'
    })
    testImplementation 'junit:junit:4.12'
    implementation 'com.android.support.constraint:constraint-layout:1.0.2'
    compile 'org.tensorflow:tensorflow-android:1.2.0-preview'
    implementation 'com.android.support:support-v4:26.0.0-beta2'
}

->

dependencies {
    compile fileTree(include: ['*.jar'], dir: 'libs')
    androidTestCompile('com.android.support.test.espresso:espresso-core:2.2.2', {
        exclude group: 'com.android.support', module: 'support-annotations'
    })
    testCompile 'junit:junit:4.12'
    compile 'com.android.support.constraint:constraint-layout:1.0.2'
    compile 'org.tensorflow:tensorflow-android:1.2.0-preview'
    compile 'com.android.support:support-v4:26.0.0-beta2'
}
And also make sure to copy the asserts folder with all its contents into the app module, as mentioned in the earlier comments:
/ImageClassifier/asserts -> /ImageClassifier/app/src/main/assets