
This is a 3D AR virtual dressing application designed to improve the dressing experience. By combining motion-sensing technology with innovative UI design, it lets users effortlessly try on different clothes against different backgrounds.

Method: Journey map, user scenarios

Team: Lingzhi Li, Ting Liu

landing vmirror.png

OVERVIEW

Virtual Mirror

Category: AR Software
Role: UX Designer & Developer
Time: 3 Months


Background

Shopping is enjoyable for most people, especially women, but the excitement can be quickly dampened by a tedious dressing process. People often wait a long time for a fitting room; while changing, they have to be careful that their makeup does not stain the new clothes; and in winter, customers have to take off layers of clothing just to try on a new item. All of this leads to a frustrating dressing experience, so I began to ask: how can the dressing experience be improved?

Empathize & Define

To understand which factors contribute to an unsatisfying dressing experience, I spent a whole afternoon in a shopping mall observing and interviewing customers. By synthesizing what I found, I identified the real problem and the real needs of customers.

INTERVIEW

First, I polled some of my friends. From their feedback, I found that price, type of clothes, weather, and service quality all seemed superficially connected to a bad dressing experience.

interview.png

JOURNEY MAP

Failing to get a clear definition of the problem from these interviews, I went to a shopping mall and stayed for a whole afternoon to observe how people behave in a real context and to interview some frustrated customers. I visualized my observations and analysis in this journey map.

Asset 1-min.png

CONCLUSION

By analyzing the findings from the observations and interviews, I finally defined the real pain point of the dressing experience and the underlying needs of users.

Problem

Tedious Dressing Process

Goal

Virtual Dressing

Brainstorm & Ideate

With a clear definition of the problem and a goal to achieve, we brainstormed solutions and settled on AR technology, using a Kinect to realize the core function of virtual try-on. I also sketched a special device to further optimize the user experience and augment reality.

SPECIAL DEVICE

This idea was inspired by how people look in a mirror to check the fit, and by Microsoft Kinect's motion-sensing games. When users stand in front of this special device, they feel as if they are standing in front of a mirror. The only difference is that they can interact with this "mirror" to choose different clothes and check the fitting results.

USER SCENARIOS

After deciding on the technical solution, I began to design the features and flows of the software. In this process, I reviewed the earlier interviews and constructed user scenarios to help me organize the information architecture of the software.

use scenario.png

Based on the results of the user scenarios, I organized the features into four parts, which also serve as the four categories of the navigation.

Clothes: Try on different clothes

Background: Check the fitting result against different backgrounds

Recommendation: Try on recommended clothes

Help: Instructions & customer service

UI FLOW

With the navigation components defined, I made the UI flow to simulate the operation flow. This helped me check whether the process is logical and coherent.

Onboarding

onboarding_2.png

Task 1: Change clothes

clothing.png

Task 2: Change the background

bg.png

Task 3: Try recommendations

rec.png

Interaction & Interface

The most challenging but also most interesting part of designing interfaces for an AR product is that there are no official design guidelines to follow, so I had to be very creative.

LAYOUT

To understand how people interact with a motion-sensitive screen, I did a literature review and learned from other VR products such as Veer. I concluded the following interface design guidelines:

  • Take advantage of intuitive gestures

  • Make controls usable with either hand

  • Not every control should be a button or handle

  • Make controls distinguishable from the environment

  • Avoid small buttons that require precise control

layout.png

" UI widget too small "

" not balanced "

" occupy much space "

" UI widget too small "

" occupy much space "

Bingo!

CUBE ICON

Because most of the space in the "mirror" is used to show the fitting result, the interface area is limited, and I had to convey enough information within it. These are some of the iterations.

logo1.png
icon2.png
logo3.png
menu icon.png
Asset 6.png

Inspired by card design, I came up with a cube-style navigation that takes advantage of the motion-sensitive screen. Users can intuitively rotate the cube by waving a hand to browse, and screen space is saved at the same time.
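The rotate-by-waving behavior can be sketched as a simple gesture check. This is an illustrative Python version (the product itself was built in C# with Unity), and the distance threshold is a hypothetical placeholder, not the shipped value:

```python
def wave_direction(x_start, x_end, threshold=0.15):
    """Classify a horizontal hand sweep: +1 (right), -1 (left), 0 (too small)."""
    dx = x_end - x_start
    if dx > threshold:
        return 1
    if dx < -threshold:
        return -1
    return 0

def rotate_cube(face, direction, faces=4):
    """Advance the active face of the 4-sided cube menu, wrapping around."""
    return (face + direction) % faces

face = 0
face = rotate_cube(face, wave_direction(0.10, 0.40))  # sweep right -> next face
```

Sweeps shorter than the threshold are ignored, which keeps small hand jitters from spinning the menu.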

Implement & Programming

We used Unity 3D as the development engine, with C# as the programming language. In the implementation, my contribution was realizing the background-changing and height-estimator features. We have completed three of the modes; the recommendation mode is unfinished due to the lack of large-scale data on users' figures and clothes. We also collaborated on a paper about the underlying technology.
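As a rough illustration of the height-estimator idea, one can sum the lengths of the tracked skeleton segments from head to ankle. This is a minimal Python sketch (the real feature was written in C# inside Unity); the joint names and coordinates below are hypothetical:

```python
import math

def segment_length(a, b):
    """Euclidean distance between two 3D joint positions."""
    return math.dist(a, b)

def estimate_height(joints, chain=("head", "neck", "spine", "hip", "knee", "ankle")):
    """Sum successive segment lengths along the head-to-ankle chain."""
    return sum(segment_length(joints[chain[i]], joints[chain[i + 1]])
               for i in range(len(chain) - 1))

# hypothetical tracked joint positions (metres), standing upright
joints = {
    "head":  (0.0, 1.70, 0.0),
    "neck":  (0.0, 1.45, 0.0),
    "spine": (0.0, 1.10, 0.0),
    "hip":   (0.0, 0.90, 0.0),
    "knee":  (0.0, 0.45, 0.0),
    "ankle": (0.0, 0.05, 0.0),
}
height = estimate_height(joints)
```

Summing segment lengths rather than taking the raw head-to-ankle distance keeps the estimate reasonable even when the user is not standing perfectly straight.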

MULTI-MODAL INTERACTION

Try to imagine interacting with a screen without touching it. Gestures, voice, and a cursor are three ways to interact in VR/AR, and the most exciting part of VR/AR is that it can provide a multi-modal experience: users can combine all three to interact smoothly and intuitively. For this product, each of the three methods has both merits and weaknesses.

channels of interaction.png

After weighing the strengths and weaknesses of each method, I decided to combine gestures with a cursor as the main interaction. This combination is more efficient and fulfills the needs.

Asset 8.png

Waving is the most effortless interaction

Cursor is the most habitual interaction
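One way to realize the cursor half of this combination is to map the hand's position inside a comfortable interaction zone in front of the sensor linearly to screen pixels. The zone bounds and screen size below are assumptions for illustration, not the project's actual values, and the sketch is in Python rather than the project's C#:

```python
def to_cursor(hand_x, hand_y, zone=((-0.3, 0.3), (0.9, 1.5)),
              screen=(1920, 1080)):
    """Linearly map a hand position (metres) to clamped pixel coordinates."""
    (x0, x1), (y0, y1) = zone
    u = min(max((hand_x - x0) / (x1 - x0), 0.0), 1.0)
    v = min(max((y1 - hand_y) / (y1 - y0), 0.0), 1.0)  # screen y grows downward
    return (round(u * (screen[0] - 1)), round(v * (screen[1] - 1)))

cursor = to_cursor(0.0, 1.2)  # hand centred in the zone -> near screen centre
```

Clamping to the zone keeps the cursor on screen even when the hand strays outside the comfortable range.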

IMPLEMENT

The technical solution uses users' skeletons to drive a synchronous transformation of the 3D clothes. Our work mainly includes three parts: USER, CLOTHES, and FITTING.

User: Extraction of Users' Skeletons

Users' skeletons and movements are extracted and tracked in real time by the Microsoft Kinect.

黑白1.jpg
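The write-up does not describe filtering, but tracked joint positions are typically noisy from frame to frame; a common remedy (and an assumption here, though Kinect SDKs ship comparable joint-smoothing options) is an exponential moving average before the positions drive the avatar. A minimal Python sketch, with a hypothetical smoothing factor:

```python
def smooth(prev, raw, alpha=0.5):
    """Exponential moving average: blend the previous smoothed position
    with the newest raw joint sample (higher alpha = more responsive)."""
    return tuple(alpha * r + (1 - alpha) * p for p, r in zip(prev, raw))

# a joint whose x coordinate jitters between frames
frames = [(0.0, 1.0, 2.0), (0.2, 1.0, 2.0), (0.0, 1.0, 2.0)]
pos = frames[0]
for raw in frames[1:]:
    pos = smooth(pos, raw)
# pos[0] settles between the raw extremes, damping the jitter
```

The trade-off is latency: a smaller alpha damps jitter more but makes the avatar lag behind fast movements.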

Clothes: 3D Modeling of Clothes

I constructed a hierarchical structure of human skeletons and skinned meshes of clothes in 3ds Max. The clothes models are rigged to the user's skeleton by manually assigning a weight to each vertex of the mesh. The vertex weights are allocated according to the joint-to-mesh distance and the flexibility of the bones: the closer a mesh vertex is to a key bone joint, the more it is affected by the motion of that bone.

Weight allocation based on a skeleton
Clothes’_skeletons_are_rigged._.jpg
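The distance-based allocation rule described above can be sketched as follows. This is an illustrative Python version (the actual weights were painted manually in 3ds Max), and the joint positions and flexibility factors are hypothetical:

```python
import math

def skin_weights(vertex, joints, flexibility, eps=1e-6):
    """Per-bone vertex weights: inversely proportional to the
    vertex-to-joint distance, scaled by the bone's flexibility,
    then normalized so the weights sum to 1."""
    raw = {name: flexibility[name] / (math.dist(vertex, pos) + eps)
           for name, pos in joints.items()}
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

joints = {"shoulder": (0.0, 1.4, 0.0), "elbow": (0.0, 1.1, 0.0)}
flexibility = {"shoulder": 1.0, "elbow": 1.0}

# a sleeve vertex closer to the shoulder is dominated by the shoulder bone
w = skin_weights((0.0, 1.3, 0.0), joints, flexibility)
```

Doubling a bone's flexibility doubles its raw pull on every vertex before normalization, mirroring the idea that more flexible bones influence the nearby cloth more.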

Fitting: Synchronous Transformation

The synchronous transformation between the clothes and the user is realized through an avatar in Unity 3D. The user's skeleton extracted by the Kinect and the rigged clothes models are both mapped to this avatar. Through this medium, the user's movements drive the transformation and deformation of the clothes.

黑白2.jpg
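The avatar-driven deformation amounts to linear blend skinning: each clothed vertex follows a weighted blend of its bones' transforms. Below is a translation-only Python sketch under that simplification (real bones also rotate, and the project ran this inside Unity's skinning pipeline):

```python
def blend_vertex(vertex, weights, bone_offsets):
    """Translation-only linear blend skinning:
    new position = vertex + sum_j weight_j * offset_j."""
    x, y, z = vertex
    for bone, w in weights.items():
        dx, dy, dz = bone_offsets[bone]
        x, y, z = x + w * dx, y + w * dy, z + w * dz
    return (x, y, z)

# the tracked arm moves 0.2 m along +x, so both arm bones carry that offset
offsets = {"shoulder": (0.2, 0.0, 0.0), "elbow": (0.2, 0.0, 0.0)}
weights = {"shoulder": 0.7, "elbow": 0.3}
moved = blend_vertex((0.0, 1.3, 0.0), weights, offsets)
# moved[0] is ~0.2: the sleeve vertex follows the arm
```

Because the weights sum to 1, a vertex whose bones all move together moves by exactly the same amount, while vertices between diverging bones stretch smoothly.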

Visual

Mode 1: Clothes

mode-clothes.png

Users can try on different clothes, or the same clothes in different colors.

When a user turns around, the software automatically takes a photo so that they can check the fit from the back.

Mode 2: Background

mode-bg.png

Users can check whether the clothes suit a certain occasion in this mode.

When users switch from another mode to this one, the clothes they are wearing do not disappear, and the clothes' information is shown at the same time.

Mode 3: Recommendation

mode-rec_1.png

Users can ask for recommendations according to discount, figure, and occasion in this mode.

When users switch from another mode to this one, the clothes they are wearing do not disappear, and the clothes' information is shown at the same time.

WHAT I LEARNED

Empathizing with users' underlying needs and concerns is very important for reframing the problem.

 

AR/VR is a completely new area with infinite possibilities for new interactions.

 

The lack of haptic feedback from clothes, such as weight and texture, is the main obstacle keeping the software from becoming popular in real life.
