Building the Magic Mirror

Ross Malpass
May 22, 2018

The start of May marked an important stage in the evolution of Ombori, as we unveiled our latest creation in the heart of Manhattan. It's an interactive voice-controlled installation in H&M's flagship Times Square store which offers shoppers a truly unique experience. We call it the Magic Mirror.

Here's a quick summary of what the Magic Mirror does.

It's an in-store screen that appears to display traditional promotional material. However, when a customer stops and looks at it, it asks them if they'd like to take a selfie or get fashion information. If they ask for fashion information, it asks them to pick a category, then shows them highlights from the H&M range. If they choose to take a selfie, it uses the photo to create a selection of images that look like magazine covers. The customer can then download their chosen selfie to their phone using a QR code. Whichever option they choose, customers can also sign up for the H&M newsletter and get a discount on their next purchase.

It's all voice-controlled - no touch screen, no typing - and customers love it.

I have great memories of helping a 70-year-old New York woman and her friend figure out how to use a QR code so they could save their selfies and send them to their families. Best of all, they were excited to come back the following weekend and show their friends!


QR codes work - proved!

Early results show that it's been a huge success. 86% of customers who use the mirror to take a selfie use the QR code to download the image, which is far above our initial estimates.

It completely disproves the myth that QR codes are dead - if you give customers a good reason to use a QR code, they'll do so. I have to admit I was surprised at how well they worked. There was a lot of resistance to using QR codes, because although they were cool a few years ago, nobody had really made them work well. But the world changed last September, when Apple built a QR code reader into the iPhone, and consumers are now ready to adopt them. Just to put that in perspective, the vast majority of those QR downloads were to Apple devices.

And more importantly from H&M's perspective, 10% of the people who used the QR code signed up for the newsletter. That gives H&M a direct connection with 8% of everyone who used the mirror, which is a huge win.

And, don't forget, all this is achieved without any involvement from store staff. It's an entirely self-service experience which draws the customer in, gives them something fun to do for a minute or so, and then encourages them to spend more time - and money - in the store and then again later, online.

Making it work

Building this was a great challenge for our team and our partners at Microsoft and Visual Art in Stockholm, who were responsible for the visual design and the installation. Our initial tech demo in March mostly did what we expected, but it revealed a number of technical and UX issues that needed addressing before it was ready for public use. These required some extensive rebuilds, and in some cases major rethinking of the underlying software architecture and components. The Microsoft Azure stack was critical to making the Mirror work, but Microsoft were very open to bringing in third-party tools and software, and they gave us the freedom to use whatever we needed to make the project a success.

Face recognition

Face recognition is a core part of the UX.

We wanted a smooth and stylish way for the device to switch from passive display mode to interactive mode. We didn't want the user to have to touch the device or talk to it - we wanted it to self-activate without the user having to do anything. So we had the camera - a 4K LG camera - constantly scan the area in front of the device. When it detected a human face looking at it for more than a second, the device assumed that someone was actively interested, not just passing by. This would wake it and begin the interactive process.
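To give a flavour of that wake-up logic, here's a minimal sketch in TypeScript. The one-second dwell time is the behaviour described above; the class and callback names are purely illustrative.

```ts
// Illustrative sketch of the "look at it for a second to wake it" logic.
// onFrame() is called once per analysed camera frame with the detection result.
class DwellDetector {
  private faceSince: number | null = null;

  constructor(
    private dwellMs: number,       // e.g. 1000 ms, as described above
    private onWake: () => void,    // switches the mirror into interactive mode
  ) {}

  onFrame(faceVisible: boolean): void {
    const now = Date.now();
    if (!faceVisible) {
      this.faceSince = null;       // face lost: reset the timer
      return;
    }
    if (this.faceSince === null) {
      this.faceSince = now;        // first frame with a face in view
    } else if (now - this.faceSince >= this.dwellMs) {
      this.faceSince = null;
      this.onWake();               // someone has been looking long enough
    }
  }
}

// Usage: wake the mirror after one second of continuous attention.
const dwell = new DwellDetector(1000, () => console.log("wake up"));
```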

The problem with our initial implementation was that it was too resource-intensive to run effectively on the machine, and it didn't respond fast enough. People would look at it, and when nothing happened, they walked away. We wanted the device to be stand-alone, not reliant on a remote server, so we needed to find a more efficient solution. We couldn't make it work in JavaScript, so instead we used WebAssembly and the OpenCV C++ library to create a faster, slicker face recognition system.
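I won't paste the production code here, but the shape of the WebAssembly approach looks roughly like this sketch using opencv.js (OpenCV compiled to WebAssembly) with a Haar cascade. It assumes the cascade file has already been loaded into OpenCV's virtual file system, and it feeds each frame's result into the dwell detector from the sketch above; the element lookups, frame size and sampling rate are illustrative.

```ts
// Rough sketch: per-frame Haar-cascade face detection via opencv.js.
// Assumes opencv.js is loaded as the global `cv` and the cascade XML has
// already been written into OpenCV's virtual file system.
declare const cv: any;

const videoEl = document.querySelector("video")!;        // camera preview element
const scratchCanvas = document.createElement("canvas");  // downscaled working frame
scratchCanvas.width = 320;
scratchCanvas.height = 240;

const classifier = new cv.CascadeClassifier();
classifier.load("haarcascade_frontalface_default.xml");

function faceInFrame(): boolean {
  // Draw the current camera frame onto a canvas so OpenCV can read it.
  const ctx = scratchCanvas.getContext("2d")!;
  ctx.drawImage(videoEl, 0, 0, scratchCanvas.width, scratchCanvas.height);

  const src = cv.imread(scratchCanvas);      // RGBA Mat from the canvas
  const gray = new cv.Mat();
  cv.cvtColor(src, gray, cv.COLOR_RGBA2GRAY);

  const faces = new cv.RectVector();
  classifier.detectMultiScale(gray, faces, 1.1, 3, 0);   // runs inside WASM

  const found = faces.size() > 0;
  src.delete(); gray.delete(); faces.delete();           // free WASM-side memory
  return found;
}

// Sample a few frames per second and let the dwell detector decide when to wake.
setInterval(() => dwell.onFrame(faceInFrame()), 200);
```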

Audio interface

Working in an audio environment presents a lot of potential problems.

After our initial trials, we decided we didn't like the voice, and we had to try several different voices before we found one that actually worked. We also realized that audio alone is not enough. In a noisy environment it can be hard to hear the device, and customers who have hearing or language difficulties can have trouble understanding what is said. So as well as speech, we added subtitles.

Another thing we found was that people weren't always sure when they were supposed to speak. So we added a microphone icon to the screen whenever it was time for the customer to say something. This simple change made the interaction much more intuitive, and we saw a huge increase in user engagement.
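I'm not going to reproduce the production voice pipeline here, but the prompt pattern is easy to sketch with the browser's built-in SpeechSynthesis API: speak the prompt, show the same text as a subtitle, and only reveal the microphone icon once the prompt has finished and the device is actually listening. The element IDs and prompt text below are made up for the example.

```ts
// Sketch of the prompt pattern: spoken audio, a matching subtitle, and a
// microphone icon that appears only when it is the customer's turn to speak.
const subtitleEl = document.getElementById("subtitle")!;   // illustrative IDs
const micIconEl = document.getElementById("mic-icon")!;

function askCustomer(prompt: string, startListening: () => void): void {
  subtitleEl.textContent = prompt;          // subtitle mirrors the spoken prompt
  micIconEl.hidden = true;                  // don't invite speech while we talk

  const utterance = new SpeechSynthesisUtterance(prompt);
  utterance.onend = () => {
    micIconEl.hidden = false;               // now it's the customer's turn
    startListening();                       // hand over to speech recognition
  };
  speechSynthesis.speak(utterance);
}

// Usage (prompt text is illustrative):
askCustomer("Would you like to take a selfie, or get fashion info?", () => {
  /* start speech recognition here */
});
```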

State machine

Our initial design was structured very much like a website: when a user made a selection, it triggered a transition to a new "page". However, we soon found that this was hard to work with and hard to modify. One change in the flow involved a lot of recoding, and the process was prone to error.

So we created a finite state machine that tracks the user's progress and triggers all the transitions and options off the current state. This made it easy to adjust the workflow, add new options, and so on, in response to the client's feedback.
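To give an idea of the shape this took, here's a heavily simplified sketch. The state names, events and transition table are invented for the example - the production flow has more states and richer payloads.

```ts
// Minimal finite state machine sketch. Every input (voice command, face
// detection, timer) becomes an event dispatched against the current state.
type State = "idle" | "greeting" | "menu" | "selfie" | "fashion" | "newsletter";
type Event = "faceDetected" | "selfieChosen" | "fashionChosen" | "done" | "timeout";

const transitions: Record<State, Partial<Record<Event, State>>> = {
  idle:       { faceDetected: "greeting" },
  greeting:   { done: "menu", timeout: "idle" },
  menu:       { selfieChosen: "selfie", fashionChosen: "fashion", timeout: "idle" },
  selfie:     { done: "newsletter", timeout: "idle" },
  fashion:    { done: "newsletter", timeout: "idle" },
  newsletter: { done: "idle", timeout: "idle" },
};

class MirrorStateMachine {
  constructor(private state: State = "idle",
              private onEnter: (s: State) => void = () => {}) {}

  dispatch(event: Event): State {
    const next = transitions[this.state][event];
    if (next) {                 // ignore events that make no sense in this state
      this.state = next;
      this.onEnter(next);       // trigger screens, prompts, camera, etc.
    }
    return this.state;
  }
}

// Usage:
const fsm = new MirrorStateMachine("idle", s => console.log("entered", s));
fsm.dispatch("faceDetected");   // idle -> greeting
```

Because the whole flow lives in one transition table, adding a step or changing the order is a small, local edit rather than a recode of several "pages".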

Performance

Performance was a big issue all the way through. As I mentioned above, we wanted to minimize bandwidth and run as much as possible locally, so we used service workers wherever we could to keep the experience working offline.
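As a sketch of the idea (the cache name and asset list are made up for the example), a service worker pre-caches the heavy assets when it installs and then answers requests cache-first, so the experience keeps running even if the store's connection is slow or drops:

```ts
// sw.ts - illustrative service worker: pre-cache assets, then serve cache-first.
const CACHE = "magic-mirror-v1";
const ASSETS = ["/", "/app.js", "/styles.css", "/voices/intro.mp3"];

self.addEventListener("install", (event: any) => {
  // Download everything the experience needs while we still have connectivity.
  event.waitUntil(caches.open(CACHE).then(cache => cache.addAll(ASSETS)));
});

self.addEventListener("fetch", (event: any) => {
  // Cache-first: answer from local storage, fall back to the network.
  event.respondWith(
    caches.match(event.request).then(cached => cached ?? fetch(event.request))
  );
});
```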



Ombori's take on omnichannel

I'm really happy with the way this project turned out. It's an exciting step forwards for Ombori as we move away from just providing mobile commerce apps to offering truly omnichannel retail experiences to our clients.

In our view, omnichannel isn't just about having an app and a website. It's about finding new, exciting ways to engage customers, using all the technologies available to us. For many brick-and-mortar retailers, surviving the next few years will mean offering customers unique in-store experiences that make shopping fun, as well as fully integrated online experiences that keep them coming back instead of turning to easy options such as Amazon. Installations like the Magic Mirror are set to become a lot more common.

Working with Visual Art was a lot of fun too: between us, we transformed the concept of digital signage into something truly innovative and memorable. Their expertise in design and hardware meshed perfectly with our skills in interactive software: together, we make a great team.

From a technical point of view, this project allowed us to fold many different components into our existing Grid system. Computer vision and speech interfaces have really opened up new possibilities for us, and in the process of development we came up with a lot of new ideas for ways in which customers can interact with brands in-store and elsewhere. I'm looking forward to telling you about some of the new projects we're working on over the coming months.


Let us help

Improving your Omnichannel journeys, Visitor Management or Customer Experiences?

Looking to deploy IoT, Digital Signage or Mobile apps?

Reach out by e-mail hello@ombori.com or use the form here and we'll be happy to help!
