This is a 3D AR virtual dressing application designed to streamline the fitting experience. Combining motion-sensing technology with an innovative UI, it lets users effortlessly try on different clothes virtually against different backgrounds.
O V E R V I E W
Shopping is enjoyable for most people, especially for women. However, the excitement can be quickly dampened by a tedious fitting process. People have to wait a long time for a fitting room; while changing, they have to be careful that their makeup doesn't stain the new clothes; and in winter, customers have to take off layers of clothing just to try on a new piece. All of this leads to a frustrating dressing experience. So I began to ask: how could the dressing experience be improved?
Empathize & Define
To understand what factors contribute to an unsatisfying dressing experience, I observed and interviewed customers in a real shopping context. By synthesizing my findings, I identified customers' real problem and real needs.
First, I polled some of my friends. From their feedback, I found that price, type of clothing, weather, and service quality all seemed superficially connected to a bad dressing experience.
Failing to get a clear definition of the problem from the poll, I went to a shopping mall and stayed for a whole afternoon to observe how people behave in a real context and to interview some frustrated customers. I visualized my observations and analysis in this journey map.
By analyzing the findings from observation and interviews, I finally defined the real pain point of the dressing experience and the underlying needs of users.
Tedious Dressing Process
With a clear definition of the problem and a goal to achieve, we brainstormed solutions and decided to use AR technology, with a Kinect providing the core virtual try-on function. I also sketched a dedicated device to further optimize the user experience and augment reality.
The idea was inspired by the way people check a fit in a mirror, and by Microsoft Kinect motion-sensing games. When users stand in front of this device, they feel as if they are standing in front of a mirror; the only difference is that they can interact with this “mirror” to choose different clothes and check the fitting results.
After deciding on the technical solution, I began to design the features and flows of the software. In this process, I reviewed the earlier interviews and constructed user scenarios to help me organize the software's architecture.
Based on the user scenarios, I organized the features into four parts, which also form the four categories of the navigation.
Try on different clothes
Check fitting result in different backgrounds
Try on recommended clothes
Instructions & Customer Service
With the navigation components defined, I made a UI flow to simulate the operation flow. This helped me check whether the process was logical and coherent.
Task 1: Change clothes
Task 2: Change the background
Task 3: Try recommendations
The most challenging, but also most interesting, part of designing interfaces for an AR product is that there are no official design guidelines to follow, so I had to be very creative.
To understand how people interact with a motion-sensitive screen, I did a literature review and learned from other VR products such as Veer. I distilled a few interface design guidelines:
Take advantage of intuitive gestures
Applicable to both hands
Not every control needs to be a button or handle
Distinguish controls from the environment
Avoid small buttons which require precise control
"UI widget too small"
"not balanced"
"occupies too much space"
"UI widget too small"
"occupies too much space"
Because most of the “mirror” is used to show the fitting result, the interface area is limited, and I had to cover enough information in that limited space. Here are some of the iterations.
Inspired by card design, I came up with a cube-style navigation that takes advantage of the motion-sensitive screen: users intuitively rotate the cube by waving a hand to browse, while saving screen space.
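To make the interaction concrete, here is a minimal sketch (in Python for readability; the actual build is C# in Unity) of how horizontal hand motion could drive the cube rotation. The gain and the snap-to-face step are illustrative assumptions, not values from our implementation.

```python
# Sketch of wave-to-rotate cube navigation: horizontal hand movement
# between frames is mapped to a yaw rotation of the navigation cube,
# which then snaps to the nearest face when the wave ends.

def update_cube_yaw(yaw_deg, hand_dx_m, gain_deg_per_m=300.0):
    """Rotate the cube in proportion to the hand's horizontal motion."""
    return (yaw_deg + hand_dx_m * gain_deg_per_m) % 360

def snap_to_face(yaw_deg, faces=4):
    """Snap to the nearest of the cube's vertical faces (every 90 degrees)."""
    step = 360 / faces
    return (round(yaw_deg / step) * step) % 360

yaw = 0.0
for dx in (0.1, 0.1, 0.05):  # a wave to the right, across three frames
    yaw = update_cube_yaw(yaw, dx)
print(snap_to_face(yaw))  # 90.0
```

Snapping to a face after each wave keeps one navigation category squarely in view, which matters on a screen where precise hand positioning is hard.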
We used Unity 3D as the development engine, with C# as the programming language. In the implementation, my contribution was the background-changing and height-estimation features. We completed three of the four modes; the recommendation mode is unfinished due to the lack of large-scale data on users' figures and clothing information. We also collaborated on a paper about the development technology.
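As one example of these features, a Kinect-style height estimator can be sketched as follows (Python for illustration; the joint names, coordinate convention, and head-top offset are assumptions for the sketch, not our actual C# implementation):

```python
# Sketch: estimate standing height from tracked joint positions.
# The tracker reports the head joint below the top of the skull,
# so a small offset is added (value assumed for illustration).

def estimate_height(joints, head_offset_m=0.12):
    """joints maps joint names to (x, y, z) positions in meters."""
    head_y = joints["head"][1]
    foot_y = min(joints["foot_left"][1], joints["foot_right"][1])
    return (head_y - foot_y) + head_offset_m

joints = {
    "head": (0.0, 1.60, 2.0),
    "foot_left": (-0.1, 0.02, 2.0),
    "foot_right": (0.1, 0.03, 2.0),
}
print(round(estimate_height(joints), 2))  # 1.70
```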
Try to imagine interacting with a screen without touching it. Gestures, voice, and a cursor are three ways to interact in VR/AR. The most exciting part of VR/AR is the multi-modal experience it can provide: users can combine all three to interact smoothly and intuitively. For this product, each of the three methods has its own merits and weaknesses.
After weighing the strengths and weaknesses of each method, I decided to combine gestures with a cursor as the main interaction. This combination is more efficient and fulfills users' needs.
Wave is the most
Cursor is the most habitual interaction
The technical solution uses the user's tracked skeleton to drive a synchronous transformation of the 3D clothes. Our work mainly consists of three parts: USER, CLOTHES, and FITTING.
User: Extraction of Users' Skeletons
The user's skeleton and movements are extracted and tracked in real time by the Microsoft Kinect.
Clothes: 3D Modeling of Clothes
I constructed a hierarchical human skeleton and skinned meshes for the clothes in 3ds Max. The clothes models are rigged to the user's skeleton by manually assigning a weight to each vertex of the mesh. Vertex weights are allocated according to joint-to-mesh distance and the flexibility of the bones: the closer a mesh vertex is to a key joint, the more it is affected by the motion of that bone.
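The distance-based weighting idea can be sketched like this (Python for illustration; in practice the weights were painted manually in 3ds Max, and the inverse-distance falloff here is an assumption, not the exact rule we used):

```python
import math

# Sketch: each mesh vertex is influenced by nearby bones in inverse
# proportion to its distance from each bone's joint, with the weights
# normalized so they sum to 1 (as skinning requires).

def vertex_weights(vertex, bones, falloff=2.0):
    """bones maps bone names to joint positions (x, y, z)."""
    raw = {}
    for name, joint in bones.items():
        d = math.dist(vertex, joint)
        raw[name] = 1.0 / (d ** falloff + 1e-6)  # closer bone -> larger weight
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

bones = {"upper_arm": (0.0, 1.4, 0.0), "forearm": (0.0, 1.1, 0.0)}
w = vertex_weights((0.0, 1.35, 0.0), bones)
# A vertex near the upper-arm joint is dominated by that bone.
assert w["upper_arm"] > w["forearm"]
assert abs(sum(w.values()) - 1.0) < 1e-9
```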
Fitting: Synchronous Transformation
The synchronous transformation between the clothes and the user is realized through an avatar in Unity 3D. The user's skeleton, extracted by the Kinect, and the rigged clothes models are both mapped onto this avatar. Through this medium, the user's movements drive the transformation and deformation of the clothes.
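The per-frame retargeting step can be sketched as follows (Python for illustration; bone names are assumptions, and in Unity the engine's skinning performs the actual mesh deformation):

```python
# Sketch: every frame, rotations tracked on the user's skeleton are
# copied onto the matching bones of the rigged clothes model; skinning
# then deforms the clothes mesh to follow.

def retarget(tracked_rotations, clothes_bones):
    """Copy each tracked joint rotation onto the clothes bone of the same name."""
    for bone, rotation in tracked_rotations.items():
        if bone in clothes_bones:
            clothes_bones[bone] = rotation

clothes = {"spine": (0, 0, 0), "left_arm": (0, 0, 0)}
tracked = {"spine": (0, 10, 0), "left_arm": (45, 0, 0), "head": (5, 0, 0)}
retarget(tracked, clothes)
print(clothes)  # {'spine': (0, 10, 0), 'left_arm': (45, 0, 0)}
```

Because both the tracked skeleton and the clothes share one avatar rig, no explicit correspondence table is needed beyond matching bone names.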
Mode 2: Background
In this mode, users can check whether the clothes suit a particular occasion.
When users switch to this mode from another, the clothes they are wearing don't disappear, and the clothing information is shown at the same time.
Mode 1: Clothes
Users can try on different clothes, or the same garment in different colors.
When a user turns around, the software automatically takes a photo so they can check the fit from the back.
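One way to trigger that back-view photo is to watch the tracked shoulder joints: their left-to-right order flips when the user turns their back to the sensor. A sketch (Python for illustration; the coordinate convention is an assumption):

```python
# Sketch: detect a 180-degree turn from the shoulder joints and fire
# the photo capture only while the user's back is to the sensor.

def turned_around(left_shoulder, right_shoulder):
    """Assumed convention: facing the sensor, the left shoulder has larger x;
    after a half turn the order flips."""
    return right_shoulder[0] > left_shoulder[0]

def maybe_capture(left_shoulder, right_shoulder, capture):
    """Invoke the capture callback only when the user faces away."""
    if turned_around(left_shoulder, right_shoulder):
        capture()

shots = []
maybe_capture((0.2, 1.4, 2.0), (-0.2, 1.4, 2.0), lambda: shots.append("back"))  # facing sensor
maybe_capture((-0.2, 1.4, 2.0), (0.2, 1.4, 2.0), lambda: shots.append("back"))  # turned around
print(shots)  # ['back']
```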
Mode 3: Recommendation
In this mode, users can ask for recommendations based on discounts, their figure, and the occasion.
When users switch to this mode from another, the clothes they are wearing don't disappear, and the clothing information is shown at the same time.
WHAT I LEARNED
Empathizing with users' underlying needs and concerns is essential for reframing the problem.
AR/VR is a completely new area with infinite possibilities for new interactions.
The lack of haptic feedback from clothes, such as weight and texture, is the main obstacle to this kind of software becoming popular in real life.