Machine learning (ML) is not a new field, but it has evolved considerably over time. Today it is one of the most important areas of AI, and it is needed almost everywhere. ML is helping more and more to solve problems through reliable predictions and efficient results, changing the way people live around the world.
Fortunately, there are tools that integrate machine learning, such as Azure ML, a service hosted in the Microsoft cloud; TensorFlow, Google’s open-source library that incorporates predictive models; Amazon AI; and so on. These tools make it easier to use ML in many kinds of projects.
On the other hand, the mobile market has been advancing as well, and there are frameworks that make it easy to use machine learning, especially in mobile apps. We will focus on the advantages and possibilities offered by these tools.
This article provides guidelines, information, and steps to start experimenting with and developing mobile applications using new tools that integrate machine learning. One of these tools is ML Kit, an API that works with models that find human faces, track the positions of facial landmarks in photos, videos, or live streams, and provide information about the state of facial features. If you are going to create your own models, Create ML helps you train them. Core ML can be used to easily integrate machine learning models into an app, and TensorFlow Lite helps integrate pre-trained models into mobile apps as well.
What is Machine Learning?
Machine learning is a branch of artificial intelligence that develops techniques allowing computers to learn. This learning is possible thanks to the detection of patterns within a set of data, so it is the program itself that predicts which situations could or could not happen. These calculations allow the machine to learn in order to generate reliable decisions and results.
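As a toy illustration of learning a pattern from data, the following sketch (the data and function names are invented for this example, not taken from any ML library) fits a straight line to a few points by least squares and then predicts a value it never saw:

```python
# Toy "learning": fit y = a*x + b to sample data by least squares,
# then use the learned pattern to predict an unseen value.
def fit_line(xs, ys):
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    a = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
         / sum((x - mean_x) ** 2 for x in xs))
    b = mean_y - a * mean_x
    return a, b

# Training data roughly following y = 2x + 1
xs = [1.0, 2.0, 3.0, 4.0]
ys = [3.0, 5.1, 6.9, 9.0]
a, b = fit_line(xs, ys)
prediction = a * 5.0 + b  # predict y for the unseen input x = 5
```

Even at this tiny scale, the two phases of machine learning are visible: a training step that extracts a pattern from data, and a prediction step that applies it to new input.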
What is Machine Learning for?
Machine learning opens up opportunities for creating new and engaging experiences. It has a lot of practical applications that drive real business results, such as time and money savings, that have the potential to dramatically impact the future of an organization.
Common Machine Learning Use Cases
Here are some well-known examples of machine learning that are already part of our lives or on their way to becoming one.
- Google’s self-driving cars
- Detecting credit card fraud
- Facial recognition in Google Photos and Facebook
- Apple Siri, Google Now, Amazon Echo and Windows Cortana
- Email providers using machine learning to detect and handle spam emails
- The recommendation engines used by Amazon and Netflix to show you items and movies based on your previous site interaction
- Face ID to unlock devices.
Why do inference on smartphones?
There are many reasons why running machine learning inference directly on mobile devices is a good idea. Here are some of them:
- Data privacy
- Free computing power
- Always available (offline and online)
- Optimized (CPU vs GPU) for device performance
- Minimizes memory footprint
- Minimizes power consumption
- Real-time use cases
- Low latency and fast execution
Introducing ML Kit
Face Detection
The Face API finds human faces in photos, videos, or live streams. It also finds and tracks the positions of facial landmarks such as the eyes, nose, and mouth. With these technologies, you can edit photos and video, enhance video feeds with effects and decorations, create hands-free controls for games and apps, react when a person winks or smiles, and so on.
Face Recognition
Face recognition automatically determines whether two faces are likely to correspond to the same person. However, this API only provides functionality for face detection, not face recognition, for the time being.
Face Tracking
Face tracking extends face detection to video sequences. Any face appearing in a video for any length of time can be tracked; that is, faces detected in consecutive video frames can be identified as being the same person. Note that this is not a form of face recognition: this mechanism just makes inferences based on the position and motion of the face(s) in a video sequence.
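A crude sketch of that position-based inference (the boxes and threshold here are invented for illustration; this is not the actual API’s logic): two detections in consecutive frames are treated as the same face when their bounding boxes overlap enough.

```python
# Link detections across consecutive frames by bounding-box overlap
# (intersection over union); no identity recognition is involved.
def iou(box_a, box_b):
    ax, ay, aw, ah = box_a  # (x, y, width, height)
    bx, by, bw, bh = box_b
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = ix * iy
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def same_face(prev_box, curr_box, threshold=0.5):
    return iou(prev_box, curr_box) >= threshold

frame1_face = (10, 10, 40, 40)
frame2_face = (12, 11, 40, 40)     # moved slightly: treated as the same face
frame2_other = (200, 150, 40, 40)  # far away: treated as a new face
```

This is why tracking is not recognition: the link is purely geometric, so a face that leaves the frame and comes back starts a new track.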
Barcode API
The Barcode API detects barcodes in real time, on any device, in any orientation. It can also detect and parse several 1D and 2D barcodes in different formats at the same time.
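To give a taste of what parsing a 1D barcode involves, here is a small self-contained sketch (not the ML Kit API) that validates the check digit of an EAN-13 code, one of the common formats such detectors handle:

```python
# EAN-13 check: weight digits 1, 3, 1, 3, ... from the left; the
# weighted sum, including the final check digit, must end in 0.
def is_valid_ean13(code):
    if len(code) != 13 or not code.isdigit():
        return False
    total = sum(int(d) * (3 if i % 2 else 1) for i, d in enumerate(code))
    return total % 10 == 0

# "4006381333931" is a commonly used valid EAN-13 example; changing
# any single digit breaks the checksum.
```

A real detector does far more (locating and decoding the bars from pixels), but the checksum step shows how a parsed result can be verified.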
Text Recognition API
The Text Recognition API recognizes text in many languages. It also represents the structure of the recognized text, including paragraphs and lines. It can automate tedious data entry for credit cards, receipts, and business cards, as well as help organize photos, translate documents, or increase accessibility.
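The structure of recognized text can be pictured as a small hierarchy (a simplified sketch with invented class names, not the actual ML Kit types): blocks contain lines, and each line carries its own text and position.

```python
# Simplified sketch of a recognized-text hierarchy: a block is made
# of lines, and the block's full text is recoverable from them.
class Line:
    def __init__(self, text, frame):
        self.text = text    # the recognized characters
        self.frame = frame  # (x, y, width, height) in the image

class Block:
    def __init__(self, lines):
        self.lines = lines

    @property
    def text(self):
        return "\n".join(line.text for line in self.lines)

receipt = Block([
    Line("Coffee  2.50", (10, 10, 180, 20)),
    Line("Total   2.50", (10, 34, 180, 20)),
])
```

Keeping positions alongside text is what makes use cases like automated data entry possible: an app can tell which line of a receipt or card a value came from.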
TensorFlow Lite to use models in mobile apps
This framework helps pre-trained models run in mobile apps. For instance, you can convert a custom trained model to the TensorFlow Lite file format (.tflite) using the TensorFlow Lite Converter, and then use that converted file in your mobile application.
Introducing Create ML and Core ML
Create ML is a framework that creates and trains custom machine learning models on Macs. It is important to mention that Create ML was introduced this year at WWDC, Apple’s annual developer conference. For instance, you can train a model to recognize or classify fish by showing it many images of different fish. When the model is performing well enough, you are ready to integrate it into your app using Core ML.
Core ML, in turn, is a framework used to easily integrate machine learning models into an app (macOS, iOS, watchOS, and tvOS). Among the model types it supports are deep neural networks, recurrent neural networks, support vector machines, tree ensembles, linear models, and so on. In Image 1, we can see the integration of a model into an app.
Image 1: Machine learning models into an app (source: developer.apple.com).
A trained model is the result of applying a machine learning algorithm to a set of training data. The model makes predictions based on new input data. For instance, a model that’s been trained on a region’s historical house prices may be able to predict a house’s price when given the number of bedrooms and bathrooms. Core ML is optimized for on-device performance, which minimizes memory footprint and power consumption. Running strictly on the device ensures the privacy of user data and guarantees that your app remains functional and responsive when a network connection is unavailable. Some of the types of models that Core ML supports are:
- Sentiment Analysis
- Handwriting Recognition
- Scene classification
- Style transfer
- Music Tagging
- Predicting text
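The house-price example above can be sketched in a few lines (the coefficients and helper name are invented for illustration; a real Core ML model would be trained, not hand-written):

```python
# Toy stand-in for a trained model: a linear function whose
# coefficients were "learned" from one region's historical sales.
LEARNED = {"base": 50_000.0, "per_bedroom": 30_000.0, "per_bathroom": 15_000.0}

def predict_price(bedrooms, bathrooms, params=LEARNED):
    return (params["base"]
            + params["per_bedroom"] * bedrooms
            + params["per_bathroom"] * bathrooms)

estimate = predict_price(3, 2)  # 50,000 + 90,000 + 30,000
```

The point is the shape of the workflow: the parameters are fixed at training time, so prediction is cheap, fully offline, and keeps the input data on the device.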
As you can see in Image 2, it supports Vision for image analysis, Foundation for natural language processing, and GameplayKit for evaluating learned decision trees. Core ML itself builds on top of low-level primitives like Accelerate and BNNS, as well as Metal Performance Shaders.
Image 2: General structure of ML services (source: developer.apple.com).
Vision is a new and powerful framework that provides solutions to computer vision challenges through a consistent interface. You can use the Vision API to detect faces, compute facial landmarks, track objects, and more. It takes things even further by letting you use custom Core ML machine learning models for Vision tasks.
Face Detection and Recognition
This feature detects faces and facial features (such as the eyes, mouth, and so on) in an image. In Image 3, nine faces are detected.
Image 3: Face detection (source: Fabiola Ramirez).
Barcode Detection
This feature finds and recognizes barcodes in an image through an image analysis request, and detects information about them as well.
Text Detection
This function finds regions of visible text in an image through an image analysis request, and detects information about the detected text regions as well.
Object Detection and Tracking
This feature tracks the movement of a previously identified rectangular object across multiple images or video frames. It also provides the position and extent of a detected image feature.
Natural Language API (NSLinguisticTagger)
This tool provides a uniform interface to a variety of natural language processing functionality with support for many different languages and scripts. It can be used to segment natural language text into paragraphs, sentences, or words, and tag information about those tokens, such as part of speech, lexical class, lemma, script, and language.
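What segmenting text into sentences and words looks like can be sketched in plain Python (a crude regex approximation covering only the segmentation part, nothing like the real tagger’s linguistic models):

```python
import re

# Split text into sentences at terminal punctuation, then split each
# sentence into word tokens.
def sentences(text):
    return [s.strip() for s in re.split(r"(?<=[.!?])\s+", text) if s.strip()]

def words(sentence):
    return re.findall(r"[A-Za-z']+", sentence)

text = "Machine learning is useful. It powers many apps!"
```

NSLinguisticTagger goes much further, attaching part of speech, lemma, script, and language to each token, but segmentation is the first step it performs.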
GameplayKit
This is an object-oriented framework that provides foundational tools and technologies for building games. It helps architect and organize game logic, and incorporates common gameplay behaviors such as random number generation, artificial intelligence, pathfinding, and agent behavior.
Where do I get the models?
Build your apps with ready-to-use Core ML models, or use Core ML Tools to easily convert custom models into the Core ML format. Some models are ready to use, such as:
- Inception v3
Custom models from machine learning packages
Fortunately, Apple has developed a tool that converts models to the Core ML format. At the moment, these models have to come from one of the supported machine learning libraries.
Use Core ML Tools to convert trained models to Core ML
Core ML Tools is a Python package that can be used to convert models from machine learning toolboxes into the Core ML format. In particular, it can be used to:
- Convert existing models to .mlmodel format from popular machine learning tools including Keras, Caffe, scikit-learn, libsvm, and XGBoost.
- Express models in .mlmodel format through a simple API.
- Make predictions with an .mlmodel (on select platforms, for testing purposes).
```python
# Download and install the python package first: pip install coremltools
import coremltools
caffe_model = ('flowers.caffemodel', 'flowers.prototxt')
coreml_model = coremltools.converters.caffe.convert(caffe_model)
coreml_model.save('flowers.mlmodel')
```
Image 4: Convert popular ML libraries to Core ML (source: developer.apple.com).
Integrating a Core ML Model into an app
Core ML lets you integrate a broad variety of machine learning model types into your app. In addition to supporting extensive deep learning with over 30 layer types, it also supports standard models such as tree ensembles, SVMs, and generalized linear models. First of all, there are some requirements that should be considered before beginning:
- iOS 11.0+ Beta
- Xcode 9.0 Beta
- Swift 3.0+
Image 5: Integrating Core ML into a project (source: developer.apple.com).
As you can see in Image 5, the model just has to be dragged into the Xcode project. The model should then be instantiated as an object and its prediction function called. For instance, in the snippet below the input is an image and the output is a string.
```swift
let flowerModel = FlowerClassifier()
if let prediction = try? flowerModel.prediction(flowerImage: image) {
    return prediction.flowerType  // the label property generated for this model
}
```
There are just a few steps to an easy integration of machine learning models, which enables you to build apps with intelligent new features using just a few lines of code. You just have to try it.
Currently, machine learning is helping not only to find solutions to difficult problems, but also to change the way people live worldwide. Therefore, it is important to stay up to date and to take advantage of tools and libraries that already offer machine learning services. As everybody knows, the mobile market has been growing as well, so there are frameworks with machine learning that can be incorporated into mobile apps, and they can work online or offline.
If you are looking for a software partner who will work towards your own business goals and success, then Avantica is your solution. We offer dedicated teams, team augmentation, and individual projects to our clients, and are constantly looking for the best methodologies in order to give you the best results.
Let’s start a project together