
Visual Search for Mobile eCommerce Applications and UX

Tuesday, June 12, 2018 / Reading Time: 3 minutes / By Avantica

Changing Search for eCommerce

2.5 billion people will have smartphones by 2019. Combine this with the fact that 80% of Americans shop online. What is the result? A lot of people shopping online from their mobile devices.

If you are designing and developing eCommerce applications or any kind of social media application, then you need to make sure you are taking UX-friendly visual search into account.

What is Visual Search?

Visual search, or image search, is a growing trend as shoppers move out of stores and onto mobile devices. It’s the idea that, instead of searching for a product by typing in a description of what you’re looking for, you take a picture of it, upload it, and use it to find similar, or even identical, items.

As you can imagine, this trend has huge implications for the eCommerce world. It will no longer be sufficient for meta-descriptions and the like to pick up keywords in text searches. Instead, eCommerce companies should plan to support visual search functionality. If your target audience can’t find you through image search, but they can find your competitor, then guess who will be getting their business.

This trend is especially exciting from a user experience (UX) point of view. UX is all about making sure a digital platform, product, or service is good for the user. UX is about solving pains, gaining user acceptance, and being user-friendly. If you think about it this way, then image search is the ultimate UX solution. It solves the pain of searching for specific products using text, users have already indicated that they love it, and it is really easy to do with a smartphone, which billions of people already have.

Visual Search Technology

Visual search is made possible with a deep learning model that allows the software to analyze the pixels in the user’s image and compare them to the pixels in the available images to identify which are most similar.

In order to do this, you need specific artificial intelligence technologies, such as computer vision and machine learning. The software needs to be able to identify, first, what it is looking at, and second, to match that with similar images.

One way to make this work is to use a deep learning model called a convolutional neural network (CNN) to process the images. These networks are made of neurons with learnable weights and biases: as neurons receive inputs, they perform a dot product and follow it with a non-linearity. This kind of model assumes that the input is an image, so it is specialized for this task. Through an analysis that runs the image through many layers and compares it against other pre-identified images, the model is able to tell you, with a degree of certainty, what is in the image.
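To make the idea concrete, here is a minimal sketch of such a convolutional network in PyTorch. The layer sizes, the 224x224 input, and the 1,000 product categories are illustrative assumptions, not a description of any particular production model.

```python
# A minimal, illustrative CNN sketch in PyTorch (assumed 224x224 RGB input,
# hypothetical 1,000 product categories) -- not any specific production model.
import torch
import torch.nn as nn

class SimpleProductCNN(nn.Module):
    def __init__(self, num_classes=1000):
        super().__init__()
        # Convolutional layers: each neuron takes a dot product of its weights
        # with a patch of the input, followed by a non-linearity (ReLU).
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 224 -> 112
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 112 -> 56
            nn.Conv2d(64, 128, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),              # global average pooling
        )
        # Final layer maps the learned features to class scores.
        self.classifier = nn.Linear(128, num_classes)

    def forward(self, x):
        x = self.features(x)          # (batch, 128, 1, 1)
        x = torch.flatten(x, 1)       # (batch, 128)
        return self.classifier(x)     # raw scores; softmax gives "certainty"

model = SimpleProductCNN()
scores = model(torch.randn(1, 3, 224, 224))   # one random stand-in image
probs = torch.softmax(scores, dim=1)          # degree of certainty per class
```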

And with that output, the software can compare the user’s image to other pre-listed images. If those are your images, then you are on track to get that sale.
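One simple way to do that comparison, sketched below, is to treat the network’s output as a feature vector (an embedding) for each image and rank the catalog by cosine similarity. The array names and sizes are placeholders, not part of any specific product.

```python
# Illustrative matching of a query image against a catalog, assuming both
# have already been turned into feature vectors by a CNN like the one above.
import numpy as np

def top_matches(query_embedding, catalog_embeddings, k=5):
    """Return indices of the k catalog images most similar to the query."""
    # Cosine similarity: normalize each vector, then take dot products.
    q = query_embedding / np.linalg.norm(query_embedding)
    c = catalog_embeddings / np.linalg.norm(catalog_embeddings, axis=1, keepdims=True)
    similarities = c @ q
    return np.argsort(similarities)[::-1][:k]

catalog_embeddings = np.random.rand(10_000, 128)   # placeholder: 10,000 listed products
query_embedding = np.random.rand(128)              # placeholder: the shopper's photo
print(top_matches(query_embedding, catalog_embeddings))
```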

Who is using visual search?

Google Image Search

Google offers a visual search option that identifies the image and shows you similar images from the web. It can also find the original image, show whether it exists in other sizes, and tell you whether anyone else has used your image. You can search with another image from the web or upload your own.

Additionally, Google Cloud has a Vision API that analyzes your photo and identifies what is in it, with a percentage of certainty. It can also detect whether there is any inappropriate material and extract text within the image.
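For illustration, a call to that Vision API from Python with the google-cloud-vision client library might look roughly like the sketch below. It assumes credentials are already configured and uses a placeholder file name; the exact helpers can vary between client versions.

```python
# Hedged sketch of the Google Cloud Vision API from Python; "product.jpg"
# is an illustrative placeholder and credentials are assumed to be set up.
from google.cloud import vision

client = vision.ImageAnnotatorClient()
with open("product.jpg", "rb") as f:
    image = vision.Image(content=f.read())

# Label detection: what is in the photo, with a confidence score.
for label in client.label_detection(image=image).label_annotations:
    print(label.description, round(label.score, 2))

# Safe-search detection flags potentially inappropriate material.
safe = client.safe_search_detection(image=image).safe_search_annotation
print(safe.adult, safe.violence)

# Text detection extracts any text found within the image.
texts = client.text_detection(image=image).text_annotations
if texts:
    print(texts[0].description)
```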

Find it on eBay and Image Search

Find it on eBay allows you to use links to images from the web, or from any social media platform, to find the same, or similar, products on eBay. Image Search allows you to do the same, but with your own picture from your phone.

We tested it out. And it works. We found a glass, candle, jacket, speaker, and even an apple using eBay image search.

Microsoft Azure Cognitive Services Vision APIs

This API analyzes images and recognizes what is in them. There is also an API that can identify faces, including facial expressions and characteristics, and determine whether the faces in two separate images belong to the same person or to different people.
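As a rough illustration, the image-analysis part of this service can be called over plain REST. The sketch below uses placeholder endpoint, key, and image URL values, and the API version string may differ from the one available to your subscription.

```python
# Hedged sketch of Azure's Computer Vision "analyze" endpoint over REST;
# the endpoint, key, image URL, and API version are placeholders/assumptions.
import requests

endpoint = "https://<your-resource>.cognitiveservices.azure.com"
key = "<your-subscription-key>"

response = requests.post(
    f"{endpoint}/vision/v3.2/analyze",
    params={"visualFeatures": "Description,Tags,Faces"},
    headers={
        "Ocp-Apim-Subscription-Key": key,
        "Content-Type": "application/json",
    },
    json={"url": "https://example.com/product.jpg"},
)
analysis = response.json()
print(analysis.get("description", {}).get("captions"))  # what is in the image
print(analysis.get("tags"))                              # recognized objects
```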

Avantica Human Parts

Avantica implemented a neural network that we pre-trained to identify certain body parts. The application uses the cellphone’s hardware to view the user’s physical body, identifies with 99.89% certainty the body part indicated, then lays a 3D interface over the skeleton with the identified body parts.

About Avantica

If you are looking for a software partner who will work towards your own business goals and success, then Avantica is your solution. We offer dedicated teams, team augmentation, and individual projects to our clients, and are constantly looking for the best methodologies in order to give you the best results.
