CN104854623A - Avatar-based virtual dressing room - Google Patents


Info

Publication number
CN104854623A
CN104854623A (application CN201380040978.4A)
Authority
CN
China
Prior art keywords
user
wearable article
avatar
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201380040978.4A
Other languages
Chinese (zh)
Inventor
J. Kapur
S. Jones
K. Tsunoda
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Microsoft Technology Licensing LLC
Original Assignee
Microsoft Technology Licensing LLC
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US13/565,586 (published as US 9,646,340 B2)
Application filed by Microsoft Technology Licensing LLC
Publication of CN104854623A


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00: Three-dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T19/00: Manipulating 3D models or images for computer graphics
    • G06T19/20: Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T2210/00: Indexing scheme for image generation or computer graphics
    • G06T2210/16: Cloth
    • G06T2219/00: Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20: Indexing scheme for editing of 3D models
    • G06T2219/2021: Shape modification

Abstract

A method to help a user visualize how a wearable article will look on the user's body. Enacted on a computing system, the method includes receiving an image of the user's body from an image-capture component. Based on the image, a posable, three-dimensional, virtual avatar is constructed to substantially resemble the user. In this example method, data is obtained that identifies the wearable article as being selected for the user. This data includes a plurality of metrics that at least partly define the wearable article. Then, a virtualized form of the wearable article is attached to the avatar, which is provided to a display component for the user to review.

Description

Avatar-based virtual dressing room
Background
Despite their ever-closer ties to technology, people in the modern world are increasingly busy. For many, however, attention to personal appearance remains a high priority. Many people continue to devote time to maintaining and expanding their wardrobes, purchasing accessories, and so on. In some cases, time must be set aside to visit a retail store to try on and purchase clothing and accessories. Selecting an appropriate article of the right size by trying on a series of candidates can be very time-consuming. Online shopping offers a faster alternative to the conventional retail scenario. Despite its advantages, however, online shopping presents certain drawbacks. One drawback is that a person may find it difficult to visualize how a given article will look when worn, owing to the wide variation in body size and shape, hair and skin color, and so on across the population.
Summary of the invention
One embodiment of this disclosure provides a method to help a user visualize how a wearable article will look on the user's body. Enacted on a computing system, the method includes receiving an image of the user's body from an image-capture component. Based on the image, a posable, three-dimensional virtual avatar is constructed to substantially resemble the user. In this example method, data is obtained that identifies the wearable article as being selected for the user. The data includes a plurality of metrics that at least partly define the wearable article. A virtualized form of the wearable article is then attached to the avatar, which is provided to a display component for the user to review.
This Summary is provided to introduce, in simplified form, a selection of concepts further described in the Detailed Description. This Summary is not intended to identify key features or essential features of the claimed subject matter, nor is it intended to limit the scope of the claimed subject matter. Furthermore, the claimed subject matter is not limited to implementations that solve any disadvantages noted in any part of this disclosure.
Brief description of the drawings
Fig. 1 shows an environment for helping a user visualize how a wearable article will look on the user's body, in accordance with an embodiment of this disclosure.
Fig. 2 illustrates a method to help a user visualize how a wearable article will look on the user's body, in accordance with an embodiment of this disclosure.
Fig. 3 illustrates a method for constructing an avatar of a human subject, in accordance with an embodiment of this disclosure.
Fig. 4 shows aspects of a virtual skeleton, in accordance with an embodiment of this disclosure.
Fig. 5 shows aspects of a virtual head mesh connected to a virtual body mesh, in accordance with an embodiment of this disclosure.
Figs. 6 and 7 illustrate example methods for obtaining data that identifies a wearable article as being selected for a user, in accordance with an embodiment of this disclosure.
Fig. 8 shows the presentation and review of an avatar with at least one attached wearable article, in accordance with an embodiment of this disclosure.
Fig. 9 illustrates a method for animating an avatar, in accordance with an embodiment of this disclosure.
Fig. 10 schematically shows aspects of a computing system, in accordance with an embodiment of this disclosure.
Detailed description
As used in this disclosure, an "avatar" is a posable, three-dimensional computer representation of a human subject. An avatar may be used in a variety of applications, such as video games, interactive training software, physical therapy, or retail scenarios. More generally, an avatar may be used wherever a virtual representation of the subject is desired.
One example application of an avatar in a retail scenario is to enable a customer to virtually "try on" various wearable articles. Such articles may include, by way of example, clothing, eyewear, footwear, accessories, prosthetics, jewelry, tattoos, and/or make-up. By augmenting his or her avatar with such articles in virtualized form, the customer may be able to predict how he or she might look when wearing the corresponding actual articles. This approach can be used to pre-screen articles before an actual visit to the fitting room, to save time. In addition, the customer may elect to share with others images of his or her avatar with the virtual articles attached. In some scenarios, the sharing can be done remotely, with friends or family members who are not physically present, for example via email or cell phone. In this way, the customer may benefit from another person's counsel before committing to a purchase. In an avatar-based online retail experience, the entire process of selecting an article, trying the article on, and then purchasing the article can be conducted in the privacy of the customer's home or workplace.
Fig. 1 shows, in one embodiment, an environment 10 for constructing an avatar of a human subject 12, who is a user of the environment. The system includes a vision-based user-input device 14A aimed at the subject. In the illustrated embodiment, the user-input device is operatively coupled to a personal computer 16A, which is operatively coupled to a monitor 18A. In one non-limiting embodiment, the user-input device may be a Kinect system from Microsoft Corporation of Redmond, Washington, and the personal computer may be a video-game system, such as an XBOX 360, also from Microsoft Corporation. The personal computer may include a logic subsystem 20 with an associated storage subsystem 22, as described in further detail hereinafter. The storage subsystem may include instructions that cause the logic subsystem to enact aspects of the methods described herein.
In the embodiment illustrated in Fig. 1, user-input device 14A includes a depth camera 24 configured to acquire a depth map of subject 12. The user-input device also includes a color camera 26 configured to acquire a color image of at least the subject's face. More generally, the nature and number of cameras in the user-input device may differ in the various embodiments of this disclosure. Such cameras may be configured to acquire image data from which a depth map is obtained via downstream processing. As used herein, the term "depth map" refers to an array of pixels registered to corresponding regions of an imaged scene, with the depth value of each pixel indicating the depth of the corresponding region. "Depth" is defined as a coordinate parallel to the optical axis of the depth camera, which increases with increasing distance from the user-input device.
In one embodiment, image data from a pair of stereoscopic cameras may be co-registered and mathematically combined to yield a depth map. In other embodiments, user-input device 14A may be configured to project structured infrared illumination comprising numerous discrete features (e.g., lines or dots). The depth camera may be configured to image the structured illumination reflected from the subject. Based on the spacings between adjacent features in the various regions of the imaged subject, a depth map of the subject may be constructed.
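As a hedged illustration of the stereoscopic approach described above (not code from this disclosure), depth for a co-registered stereo pair is commonly recovered from disparity via the standard pinhole relation depth = focal length × baseline / disparity. The camera parameters below are assumed values chosen for the sketch.

```python
def depth_from_disparity(disparity_px, focal_length_px, baseline_m):
    """Classic pinhole-stereo relation: depth = f * B / d.

    disparity_px    -- horizontal pixel offset of a feature between the two views
    focal_length_px -- focal length expressed in pixels
    baseline_m      -- distance between the two camera centers, in meters
    """
    if disparity_px <= 0:
        raise ValueError("feature must have positive disparity")
    return focal_length_px * baseline_m / disparity_px

# Example with assumed camera parameters: f = 580 px, baseline = 7.5 cm.
# A feature shifted 29 px between the two views lies 1.5 m from the cameras.
depth = depth_from_disparity(29.0, 580.0, 0.075)
```

In practice the combination step also requires rectifying the two views so that disparities are purely horizontal; that rectification is omitted here.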
In still other embodiments, user-input device 14A may be configured to project pulsed infrared illumination. A pair of cameras may be configured to detect the pulsed illumination reflected from the subject. Both cameras may include an electronic shutter synchronized to the pulsed illumination, but the integration times of the cameras may differ, such that the pixel-resolved time of flight of the pulsed illumination, from the source to the subject and then to the cameras, is discernible from the relative amounts of light received in the corresponding pixels of the two cameras.
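A minimal sketch of the gated time-of-flight idea just described, under simplifying assumptions not stated in this disclosure: a rectangular pulse, ideal shuttering, one camera integrating the entire return pulse, and the other closing its shutter at a fixed gate time so that the ratio of the two signals encodes the pulse's arrival time. All constants are illustrative.

```python
C = 3.0e8  # speed of light, m/s

def tof_depth(energy_full, energy_gated, pulse_s, gate_close_s):
    """Recover depth from the ratio of light collected by two synchronized
    cameras, as in the pulsed-illumination scheme sketched above.

    energy_full  -- signal from the camera integrating the whole return pulse
    energy_gated -- signal from the camera whose shutter closes at gate_close_s
    Assumes a rectangular pulse whose return straddles the gate closing.
    """
    ratio = energy_gated / energy_full          # fraction of pulse seen before gate close
    round_trip = gate_close_s - ratio * pulse_s # arrival delay of the pulse front
    return C * round_trip / 2.0

# Illustrative numbers: 40 ns pulse, gate closes at 50 ns. If the gated
# camera collects half the pulse energy, the round trip took 30 ns, placing
# the reflecting surface 4.5 m away.
d = tof_depth(1.0, 0.5, 40e-9, 50e-9)
```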
The configurations described above enable various methods to help a user visualize how a wearable article will look on the user's body, to facilitate an online retail experience, and so on. Some such methods are now described, by way of example, with continued reference to the above configurations. It will be understood, however, that the methods here described, and others within the scope of this disclosure, may be enabled by different configurations as well. The methods may be entered upon at any time system 10 is operating, and may be executed repeatedly.
Fig. 2 illustrates an example method 28 to help a user visualize how a wearable article will look on the user's body. The method may be enacted on a computing system, such as system 16A of Fig. 1.
At 30 of method 28, an image of the user's body is received from an image-capture component, such as user-input device 14A. The image includes at least a portion of the user's body, and may also include the user's head and/or face. In embodiments contemplated herein, the image may be one of a set of images of the user received from the image-capture component. The set may include one or more depth maps.
At 32, a posable, three-dimensional virtual avatar is constructed to substantially resemble the user. The avatar may be constructed based, at least in part, on the image(s) received at 30.
Fig. 3 illustrates an example method 34 for constructing an avatar of a human subject. At 36, a virtual skeleton of the subject is obtained based on one or more of the depth maps received. Fig. 4 shows an example virtual skeleton 38 in one embodiment. The virtual skeleton includes a plurality of skeletal segments 40 pivotally coupled at a plurality of joints 42. In some embodiments, a body-part designation may be assigned to each skeletal segment and/or each joint. In Fig. 4, the body-part designation of each skeletal segment 40 is represented by an appended letter: A for the head, B for the clavicle, C for the upper arm, D for the forearm, E for the hand, F for the torso, G for the pelvis, H for the thigh, J for the lower leg, and K for the foot. Likewise, the body-part designation of each joint 42 is represented by an appended letter: A for the neck, B for the shoulder, C for the elbow, D for the wrist, E for the lower back, F for the hip, G for the knee, and H for the ankle. Naturally, the arrangement of skeletal segments and joints shown in Fig. 4 is in no way limiting. A virtual skeleton consistent with this disclosure may include virtually any type and number of skeletal segments and joints.
In one embodiment, each joint may be assigned various parameters, such as Cartesian coordinates specifying the joint's position, angles specifying the joint's rotation, and additional parameters specifying the conformation of the corresponding body part (hand open, hand closed, etc.). The virtual skeleton may take the form of a data structure including any or all of these parameters for each joint. In this manner, the metrical data defining the virtual skeleton (its size, shape, orientation, position, and so on) may be assigned to the joints.
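The per-joint parameterization described above might be sketched as a simple data structure. This is a hedged illustration only; the field names and the particular joints are assumptions, not taken from this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class Joint:
    name: str                          # body-part designation, e.g. "elbow"
    position: tuple                    # Cartesian coordinates (x, y, z)
    rotation: tuple = (0.0, 0.0, 0.0)  # joint rotation angles, in degrees
    conformation: str = ""             # e.g. "hand open" / "hand closed"

@dataclass
class VirtualSkeleton:
    joints: dict = field(default_factory=dict)  # name -> Joint

    def add(self, joint: Joint):
        self.joints[joint.name] = joint

skeleton = VirtualSkeleton()
skeleton.add(Joint("neck", (0.0, 1.50, 0.0)))
skeleton.add(Joint("shoulder", (0.18, 1.45, 0.0)))
skeleton.add(Joint("elbow", (0.45, 1.45, 0.0), rotation=(0.0, 0.0, 15.0)))
skeleton.add(Joint("wrist", (0.70, 1.45, 0.0), conformation="hand open"))
```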
Returning now to Fig. 3, at 36 of method 34 the skeletal segments and/or joints of the virtual skeleton may be fit to the depth map. This action may determine the appropriate positions, rotation angles, and other parameter values of the various joints of the skeleton. Via any suitable minimization approach, the lengths of the skeletal segments and the positions and rotation angles of the joints may be adjusted for agreement with the various contours of the depth map. In some embodiments, the act of fitting the skeletal segments may include assigning body-part designations to a plurality of contours of the depth map. Optionally, the body-part designations may be assigned in advance of the minimization. As such, the fitting procedure may be informed by, or based partly on, the body-part designations. For example, a previously trained collection of body models may be used to label certain pixels from the depth map as belonging to a given body part; a skeletal segment appropriate for that body part may then be fit to the pixels so labeled. If a given contour is designated as the subject's head, then the fitting procedure may seek to fit to that contour a skeletal segment pivotally coupled to a single joint, viz., the neck. If the contour is designated as a forearm, then the fitting procedure may seek to fit a skeletal segment coupled to two joints, one at each end of the segment. Furthermore, if it is determined that a given contour is unlikely to correspond to any body part of the subject, then that contour may be masked or otherwise eliminated from subsequent skeletal fitting. The foregoing description should not be construed to limit the range of approaches usable to construct a virtual skeleton, for a virtual skeleton may be derived from a depth map in any suitable manner without departing from the scope of this disclosure.
Continuing in Fig. 3, at 44, a set of characteristic metrics is harvested from the virtual skeleton obtained at 36. Such metrics may correspond to distances between predetermined points of the virtual skeleton. In some embodiments, the set of characteristic metrics may relate intelligibly to the size and shape of the subject's body. For example, the characteristic metrics may include one or more of the subject's height, thigh length, arm length, shoulder width, chest radius, waist radius, buttocks radius, arm radius, and thigh radius.
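The harvesting of distance-based metrics from joint positions can be sketched as follows. The particular joint layout and the Euclidean-distance definitions are assumptions made for illustration; this disclosure does not specify the formulas.

```python
import math

def dist(a, b):
    """Euclidean distance between two 3-D points."""
    return math.sqrt(sum((p - q) ** 2 for p, q in zip(a, b)))

def harvest_metrics(joints):
    """Derive characteristic metrics as distances between predetermined
    points of the virtual skeleton. `joints` maps names to (x, y, z)."""
    return {
        "height": dist(joints["head"], joints["ankle"]),
        "arm_length": dist(joints["shoulder"], joints["elbow"])
                      + dist(joints["elbow"], joints["wrist"]),
        "shoulder_width": dist(joints["left_shoulder"], joints["right_shoulder"]),
    }

# Toy joint positions, in meters.
joints = {
    "head": (0.0, 1.75, 0.0), "ankle": (0.0, 0.05, 0.0),
    "shoulder": (0.20, 1.45, 0.0), "elbow": (0.20, 1.15, 0.0),
    "wrist": (0.20, 0.90, 0.0),
    "left_shoulder": (-0.20, 1.45, 0.0), "right_shoulder": (0.20, 1.45, 0.0),
}
metrics = harvest_metrics(joints)
```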
At 46, the set of characteristic metrics so harvested is provided as input to an algorithm configured to output a virtual body mesh resembling the subject's body. In other words, the algorithm computes the virtual body mesh as a function of the characteristic metrics. In some embodiments, the algorithm used for this purpose may be an algorithm trained using machine learning. More particularly, the algorithm may be one that has been trained using a range of actual human models in a range of poses, and a range of human models in a single pose, as described in further detail in PCT Application PCT/CN2012/077303, which is hereby incorporated by reference in its entirety.
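This disclosure does not reveal the trained model itself. As an illustrative stand-in for "mesh as a function of metrics," a linear model mapping the metric vector to vertex offsets from a mean mesh captures the shape of the computation; the matrix below holds random placeholders, not learned weights.

```python
import random

random.seed(7)

N_VERTS = 5      # tiny toy mesh; real body meshes have thousands of vertices
N_METRICS = 3    # e.g. height, arm length, shoulder width

# Mean mesh (flattened x, y, z per vertex) plus a placeholder "learned" matrix W.
mean_mesh = [0.0] * (3 * N_VERTS)
W = [[random.uniform(-0.1, 0.1) for _ in range(N_METRICS)]
     for _ in range(3 * N_VERTS)]

def body_mesh_from_metrics(metrics):
    """Compute mesh vertices as mean + W @ metrics, a linear stand-in for
    the machine-learned mapping described above."""
    return [m + sum(w * x for w, x in zip(row, metrics))
            for m, row in zip(mean_mesh, W)]

mesh = body_mesh_from_metrics([1.70, 0.55, 0.40])
```

A real system would learn W (or a nonlinear equivalent) by regression against scanned human models, as the cited PCT application describes.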
Continuing in Fig. 3, at 48 of method 34, a virtual head mesh distinct from the virtual body mesh is constructed. The virtual head mesh may be constructed based on a second depth map, different from the first depth map referred to above. The second depth map may be acquired when the subject is closer to the depth camera than during acquisition of the first depth map. In one embodiment, the second depth map may be a composite of three different image captures of the subject's head: a front view, a view turned thirty degrees to the right, and a view turned thirty degrees to the left. In the second depth map, the subject's facial features may be resolved more finely than in the first depth map. In other embodiments, the subject's head may be rotated by angles greater or less than thirty degrees between successive image captures.
To construct the virtual head mesh, a virtual head-mesh template may be deformed to minimize the distances between points on the second depth map and corresponding points on the virtual head mesh. The virtual head mesh may then be augmented with color and texture derived from one or more image captures from the color camera of the user-input device. In this manner, the virtual head mesh may be personalized to resemble the actual human subject; that is, it may present facial features resembling those of the subject in both shape and skin color/texture.
At 50, the virtual body mesh is connected to the virtual head mesh. In this step, the head of the virtual body template mesh is first deleted, and the remaining mesh is then connected to the virtual head template mesh by triangulating the two open boundaries of the template meshes. When the virtual body mesh and virtual head mesh are ready, the connected model is stored in the system and loaded. The two template meshes are replaced by the two respective virtual meshes, which have the same connectivity. The scale of the virtual head mesh is adjusted to a proportion consistent with the virtual body mesh. The vertices around the neck are also smoothed, while the other vertices are held fixed. In this manner, a geometrically realistic and seamless head/body mesh may be constructed.
At 52, the virtual body mesh is augmented with a skin color and/or skin texture appropriate for the subject. In some embodiments, the skin color and/or skin texture may be selected based on color-image data from user-input device 14A, such as color-image data from a region that includes the subject's face. In other words, the skin color and/or skin texture applied to the virtual body mesh may be synthesized to match that of the subject's face. In one embodiment, the system first selects a body-texture image from a pre-designed database and then modulates its low-frequency color components so that its overall color is consistent with the facial skin color. Fig. 5 shows an example virtual head mesh 54 and a combined head/body mesh 56. Pursuant to the approach set forth herein, the head and body model can accurately represent the size and shape of the user's body.
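A hedged sketch of the low-frequency color modulation just described: one crude but serviceable version shifts the body texture's overall (mean) color to the skin color sampled from the face. Real systems would modulate only low spatial frequencies and work in a perceptual color space; those refinements are omitted here.

```python
def modulate_texture(texture, face_color):
    """Shift a body texture's mean color to a sampled facial skin color,
    a simple stand-in for the low-frequency color modulation described
    above. `texture` is a list of (r, g, b) pixels with channels in 0..255."""
    n = len(texture)
    mean = [sum(px[c] for px in texture) / n for c in range(3)]
    shift = [face_color[c] - mean[c] for c in range(3)]
    clamp = lambda v: max(0, min(255, round(v)))
    return [tuple(clamp(px[c] + shift[c]) for c in range(3)) for px in texture]

# Database texture averaging (200, 160, 140); face sampled at (180, 140, 120).
tex = [(210, 170, 150), (190, 150, 130)]
adjusted = modulate_texture(tex, (180, 140, 120))
```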
Returning now to Fig. 2, at 58 of method 28, data is obtained that identifies a particular wearable article as being selected for the user. Such data may include a plurality of metrics that at least partly define the wearable article. In one embodiment, the metrics may define the configuration of the article; they may define a stitching pattern, for example. The data may also include data identifying the material(s) of which the wearable article is made, such as the fabric(s), and the color(s) of the material(s). Figs. 6 and 7 illustrate example methods 58A and 58B, respectively, that may be used to obtain such data.
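The article-defining data described above might be structured as a simple record. The field names are assumptions for illustration, not drawn from this disclosure.

```python
from dataclasses import dataclass, field

@dataclass
class WearableArticle:
    """Record for a wearable article: metrics that at least partly define
    it, plus material and color data, per the description above."""
    name: str
    metrics: dict = field(default_factory=dict)    # e.g. measurements in cm
    stitch_pattern: str = ""
    materials: list = field(default_factory=list)  # fabrics the article is made of
    colors: list = field(default_factory=list)

jacket = WearableArticle(
    name="jacket",
    metrics={"chest_cm": 100, "sleeve_cm": 64},
    stitch_pattern="double-needle",
    materials=["wool"],
    colors=["charcoal"],
)
```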
In method 58A, the wearable article is selected for the user via input from another person. In one embodiment, the other person may be a friend or relative of the user. In another embodiment, the other person may be selected, based on suitable criteria, by a local or remote computing system, such as computing system 16A. For example, the other person may be selected because he or she exhibits buying behavior similar to the user's, such as a history of purchasing the same actual articles, or articles of the same style or price range.
At 60, a mechanism is provided, via computing system 16A, for the user to request such input from the other person. The user may, in any convenient manner, open a channel to receive a suggestion from the other person identifying one or more wearable articles. The input from the other person may take several different forms. One example form of input may include an indication of one or more outfits worn by the other person and suggested for the user. When a suggestion is received over the open channel, the suggested wearable article may, at 62, be selected for the user by computing system 16A. At 64, the relevant data defining the wearable article, such as metric data, material data, color data, and so on, is retrieved. In one embodiment, the data may be retrieved from a database maintained by the supplier or manufacturer of the article. The method returns from 64.
In some embodiments, the various channels for conveying a suggestion of a wearable article, and/or for requesting a suggestion, may be email channels, text- or image-messaging channels, tweets, and/or the communication channels of one or more social networks.
Turning now to Fig. 7, in method 58B, the wearable article is selected by invoking a selection engine of a local or remote computing system, such as computing system 16A. In the embodiment here illustrated, the selection engine is configured to select the wearable article from an inventory of articles accessible to the user. In one embodiment, the inventory may comprise articles for sale to the user, such as the inventory of a single supplier or the combined inventories of multiple suppliers. In another embodiment, the inventory may comprise a "celebrity wardrobe" of articles owned or endorsed by a particular celebrity. In other embodiments, the inventory may include (or be limited to) articles already in the user's wardrobe. In such embodiments, the inventory may be populated in any suitable manner. For example, the tags of purchased articles may be scanned into a data file of computing system 16A. In other examples, user-input device 14A may be used to acquire one or more images of the actual articles, worn or not worn during image acquisition. Once populated, the inventory of wearable articles may be queried for various data, such as a record of the number of clothing articles of a particular color, category, pattern, and/or brand. The methods here described may also provide various ways for the user's inventory to be reduced. For example, unwanted items in the wardrobe may be identified by the user, directly or indirectly (e.g., by their disuse), and posted for sale through an internet service.
Returning now to the method as illustrated, at 66, the inventory accessible to the user is parsed for a wearable article. In the subsequent decision blocks of method 58B, the wearable article is evaluated against selected criteria. It will be understood that the selection criteria included in Fig. 7 are illustrative, and not limiting in any sense. Naturally, any of the illustrated decision blocks may be omitted, different blocks included, and so on.
At 68 of method 58B, it is determined whether the article is appropriate for current or forecast weather conditions. If the wearable article is appropriate for the weather conditions, then the method advances to 70. However, if the wearable article is not appropriate, then the method returns to 66, where the inventory is parsed again for the next wearable article. Accordingly, the selection engine may be configured to select the wearable article based, at least in part, on current or forecast weather in the user's locale.
At 70, it is determined whether the wearable article is available to wear. If the article is available to wear, then the method advances to 72. However, if the article is not available to wear, then the method returns to 66. In this manner, the inventory of articles accessible to the user may exclude articles currently unavailable to wear. The illustrated method may therefore cooperate with an associated method that tracks the various articles in the user's wardrobe to determine which items are currently available. An article may be temporarily unavailable because it is at the laundry or dry cleaner, on loan to another person, and so on.
In a related embodiment, the associated method may track when each article in the user's wardrobe was last cleaned, and thereby exclude articles in need of cleaning. In another related embodiment, the associated method may track the articles in the user's wardrobe to determine how frequently each item is worn. Accordingly, the inventory of articles accessible to the user may exclude articles worn by the user above a threshold frequency. This feature is embodied at 72 of method 58B, where it is determined whether the wearable article has been worn too often. If the wearable article has not been worn too often, then the method advances to 74. However, if the article has been worn too often, then the method returns to 66.
At 74, it is determined whether the wearable article satisfies a selected matching criterion. Thus, the wearable article referred to above may be a first wearable article, and the selection engine may be configured to select the first article based on a match to a second wearable article with respect to one or more of color, pattern, and/or brand. In one embodiment, the second wearable article may be an article worn and/or purchased by the user. In another embodiment, the second wearable article may be an article in the user's wardrobe. Accordingly, the selection engine may be configured to select, from an inventory of articles for sale to the user, a first article that matches, with respect to one or more of color, pattern, and/or brand, a second article already in the user's wardrobe.
Moreover, the second article may be one of a plurality of articles in the user's wardrobe that the first article matches. In a sense, method 58B may be used to fill gaps in the user's wardrobe. This feature may be enacted according to an algorithm that searches for ensembles of articles found in the user's wardrobe. The target field of the search may include the wardrobes of other selected users on a network. If an ensemble is found in the wardrobes of a significant number of other users, together with an additional article not included in the user's own ensemble, then the algorithm may return that additional article as the first wearable article selected by method 58B.
Continuing in method 58B, if the wearable article satisfies the matching criterion, then the method advances to 64; otherwise, the method returns to 66. At 64, data on the wearable article is retrieved in the manner described above for method 58A. The method returns from 64.
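The decision blocks of method 58B (weather suitability at 68, availability at 70, wear frequency at 72, and the matching criterion at 74) can be sketched as a simple filter chain. The predicates, field names, and thresholds below are illustrative assumptions, not specifics from this disclosure.

```python
def select_article(inventory, weather, wear_counts, wear_limit, matches):
    """Walk the inventory as in method 58B: skip articles unsuited to the
    weather, unavailable, worn too often, or failing the matching criterion;
    return the first article that passes every block, else None."""
    for article in inventory:
        if weather not in article["suitable_weather"]:
            continue                                   # block 68: weather check
        if not article["available"]:
            continue                                   # block 70: availability check
        if wear_counts.get(article["name"], 0) > wear_limit:
            continue                                   # block 72: worn too often
        if not matches(article):
            continue                                   # block 74: matching criterion
        return article
    return None

inventory = [
    {"name": "parka", "suitable_weather": {"cold"}, "available": True, "color": "navy"},
    {"name": "blazer", "suitable_weather": {"mild"}, "available": False, "color": "navy"},
    {"name": "cardigan", "suitable_weather": {"mild"}, "available": True, "color": "navy"},
]
pick = select_article(inventory, "mild", {"cardigan": 1}, wear_limit=5,
                      matches=lambda a: a["color"] == "navy")
```

Here the parka fails the weather check and the blazer is unavailable, so the cardigan is selected.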
No aspect of method 58A or 58B should be understood in a limiting sense, for a wearable article may be selected for the user in other ways as well. For instance, the wearable article may be selected by an automated service of a supplier of wearable articles. In some embodiments, the user may choose from among a plurality of such services offered by the supplier, such as services associated with wearable articles endorsed by different celebrities (e.g., a Joe Hammerswing line, a Bonnie Belle line, etc.).
Returning again to Fig. 2, at 76, the virtual form of the wearable article is resized to fit the user's body. In embodiments contemplated herein, the size of the user's body may be estimated based on the avatar as constructed, for example, or on any data used to construct the avatar. For instance, the size of the user's body may be extracted from the virtual skeleton, or from the depth map(s) on which the virtual skeleton is based.
At 78, the virtual form of the wearable article is attached to the avatar. At this stage of processing, the metric data of the selected article may be apportioned based on the underlying topology of the avatar. The material data of the wearable article may be used to determine how the wearable article will conform to that topology. For example, the elasticity of a fabric may be used as an indication of the degree to which the fabric will stretch to fit the user's body. In embodiments in which a plurality of wearable articles are retrieved concurrently, the virtual forms of the first, second, third, etc., wearable articles may each be attached to the avatar.
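As a hedged sketch of the elasticity idea above, one might cap how far a garment measurement stretches toward the corresponding body measurement. The linear stretch model and the ten-percent elasticity figure are assumptions for illustration; real cloth fitting would simulate the full garment.

```python
def fitted_measure(garment_cm, body_cm, elasticity):
    """Return the garment measurement after draping on the body.

    If the body is larger than the garment, the fabric stretches toward the
    body measurement, but by no more than `elasticity` (fractional stretch
    limit, e.g. 0.10 for a fabric that stretches up to 10%)."""
    if body_cm <= garment_cm:
        return garment_cm                # garment hangs at its own size
    max_stretched = garment_cm * (1.0 + elasticity)
    return min(body_cm, max_stretched)   # stretch, up to the fabric's limit

# A 96 cm chest panel on a 100 cm chest: the 10% limit allows up to 105.6 cm,
# so the fabric conforms fully to the body here.
chest = fitted_measure(96.0, 100.0, 0.10)
```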
At 80, the avatar, with the virtual form of the wearable article(s) attached, is provided to a display component, such as display 18A of Fig. 1, for review by the user. Fig. 8 illustrates the presentation and review of an avatar 82 of user 12 in one example scenario. Here, virtual forms of boots 84 and a jacket 86 are attached to the avatar. In some embodiments, the avatar may be presented together with the avatar of another person, for example someone with whom the user wishes to be seen, for display on the display component. In one embodiment, the avatar may be provided in an array of different poses, for example as selected by the user, or in animated form expressing the different poses.
Fig. 9 illustrates an example method 88 for animating an avatar. At 34 of method 88, an avatar is constructed. In one embodiment, the avatar may be constructed according to the methods illustrated above. At 90, an initial desired pose of the animated avatar is specified. The initial desired pose may be a standing pose with an arm outstretched, with the right hand raised, with the left leg lifted, and so on. At 92, the virtual skeleton is repositioned based on the desired pose. At 94, as described above, a deformation of the virtual body mesh is computed based on the repositioned virtual skeleton. In one embodiment, computing the deformation may include applying a linear skinning model to the virtual body mesh. In another embodiment, computing the deformation may include applying a mechanical skinning simulation to the virtual body mesh. In a mechanical skinning simulation, the motion of the user's flesh and skin is simulated, according to Newtonian physics, as a function of the displacement of the virtual skeleton. From 94, the method may return to 90, with a subsequent desired pose specified. The subsequent desired pose may be, for example, an incrementally changed pose. Accordingly, the steps of repositioning the virtual skeleton and computing the deformation of the virtual body mesh may be repeated in order to animate the avatar.
In the methods contemplated herein, virtually any input mechanism may be used to specify the initial and subsequent desired poses of the avatar. Such mechanisms may include spoken commands directing body movement, or selection from among a menu of body movements and/or gestures via a user interface. In yet another embodiment, real-time skeletal tracking with user-input device 14A may guide the movement of the animated avatar. More particularly, the user may specify the movement of the avatar simply by moving his or her own body in the manner desired for the avatar. The user-input device may be configured to track the movement of the user's body and to provide a stream of gesture data to the personal computer on which method 88 is enacted.
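As a rough sketch of the tracking-driven embodiment (again purely illustrative, with a hypothetical data format for the gesture stream), each tracked frame repositions the virtual skeleton and recomputes the mesh deformation, mirroring steps 92 and 94 of method 88:

```python
import numpy as np

def animate_from_tracking(gesture_stream, skeleton, body_mesh, deform):
    """Drive the avatar from a stream of tracked user poses.

    gesture_stream: iterable of per-frame joint positions from the
                    user-input device (assumed format: (B, 3) arrays)
    skeleton:       the avatar's virtual skeleton, as (B, 3) joint positions
    deform:         function computing the body-mesh deformation for a
                    repositioned skeleton (e.g., a skinning model)
    """
    frames = []
    for tracked_pose in gesture_stream:
        skeleton = tracked_pose                      # reposition the skeleton (92)
        frames.append(deform(body_mesh, skeleton))   # recompute deformation (94)
    return frames
```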
In some embodiments, the user may optionally authorize sharing of the presentation of his or her avatar, with the wearable item(s) attached, with at least one other person. The user may then ask the other person to comment on how the avatar representing the user looks when wearing the item. By following the other person's comments, the user may benefit from that person's advice.
Returning again to FIG. 2, at 96 of method 28, the user is provided a mechanism to purchase the wearable item selected for, and reviewed by, the user. In one embodiment, the mechanism may include a channel from computing system 16A to an online retail service of the vendor of the wearable item selected for purchase. In this manner, method 28 provides basic functionality for guiding the user through the selection and purchase of wearable items.
Once the user has agreed to the selection, received the selected wearable item (if the item was purchased), and put the item on, subsequent steps of method 28 may be enacted. At 98, one or more images of the user are received while the user is wearing the actual item received. At 100, these images are saved for subsequent review by the user. In this manner, the user can maintain a record of how he or she looks in the various items of his or her wardrobe.
Aspects of this disclosure are set forth by example, with reference to the illustrated embodiments described above. Components, process steps, and other elements that may be substantially the same in one or more embodiments are identified coordinately and described with minimal repetition. It will be noted, however, that elements identified coordinately may also differ to some degree. It will be further noted that the drawings included in this disclosure are schematic and generally not drawn to scale. Rather, the various drawing scales, aspect ratios, and numbers of components shown in the drawings may be purposely distorted to make certain features or relationships easier to see.
In the methods illustrated and/or described herein, some of the indicated process steps may be omitted without departing from the scope of this disclosure. Likewise, the indicated sequence of the process steps may not always be required to achieve the intended results, but is provided for ease of illustration and description. One or more of the illustrated actions, functions, or operations may be performed repeatedly, depending on the particular strategy being used.
In some embodiments, the methods and processes described above may be tied to a computing system of one or more computing devices. In particular, such methods and processes may be implemented as a computer-application program or service, an application-programming interface (API), a library, and/or other computer-program product.
FIG. 10 schematically shows a non-limiting embodiment of a computing system 16 that can enact one or more of the methods and processes described above. Computing system 16 is shown in simplified form. It will be understood that virtually any computer architecture may be used without departing from the scope of this disclosure. In different embodiments, computing system 16 may take the form of a mainframe computer, server computer, desktop computer, laptop computer, tablet computer, home-entertainment computer, network computing device, gaming device, mobile computing device, mobile communication device (e.g., smart phone), etc.
Computing system 16 includes a logic subsystem 20 and a storage subsystem 22. Computing system 16 may optionally include a display subsystem 18, input subsystem 14, communication subsystem 102, and/or other components not shown in FIG. 10. Computing system 16 may also optionally include or interface with one or more user-input devices, such as a keyboard, mouse, game controller, camera, microphone, and/or touch screen. Such user-input devices may form part of input subsystem 14 or may interface with input subsystem 14.
Logic subsystem 20 includes one or more physical devices configured to execute instructions. For example, the logic subsystem may be configured to execute instructions that are part of one or more applications, services, programs, routines, libraries, objects, components, data structures, or other logical constructs. Such instructions may be implemented to perform a task, implement a data type, transform the state of one or more components, or otherwise arrive at a desired result.
The logic subsystem may include one or more processors configured to execute software instructions. Additionally or alternatively, the logic subsystem may include one or more hardware or firmware logic machines configured to execute hardware or firmware instructions. Processors of the logic subsystem may be single-core or multi-core, and the programs executed thereon may be configured for sequential, parallel, or distributed processing. The logic subsystem may optionally include individual components that are distributed among two or more devices, which may be remotely located and/or configured for coordinated processing. Aspects of the logic subsystem may be virtualized and executed by remotely accessible, networked computing devices configured in a cloud-computing configuration.
Storage subsystem 22 includes one or more physical, non-transitory devices configured to hold data and/or instructions executable by the logic subsystem to implement the methods and processes described herein. When such methods and processes are implemented, the state of storage subsystem 22 may be transformed, e.g., to hold different data.
Storage subsystem 22 may include removable media and/or built-in devices. Storage subsystem 22 may include optical memory devices (e.g., CD, DVD, HD-DVD, Blu-ray Disc, etc.), semiconductor memory devices (e.g., RAM, EPROM, EEPROM, etc.), and/or magnetic memory devices (e.g., hard-disk drive, floppy-disk drive, tape drive, MRAM, etc.), among others. Storage subsystem 22 may include volatile, nonvolatile, dynamic, static, read/write, read-only, random-access, sequential-access, location-addressable, file-addressable, and/or content-addressable devices.
It will be appreciated that storage subsystem 22 includes one or more physical, non-transitory devices. However, in some embodiments, aspects of the instructions described herein may be propagated in a transitory fashion by a pure signal (e.g., an electromagnetic signal, an optical signal, etc.) that is not held by a physical device for a finite duration. Furthermore, data and/or other forms of information pertaining to the present disclosure may be propagated by a pure signal.
In some embodiments, aspects of logic subsystem 20 and of storage subsystem 22 may be integrated together into one or more hardware-logic components through which the functionality described herein may be enacted, at least in part. Such hardware-logic components may include, for example, field-programmable gate arrays (FPGAs), program- and application-specific integrated circuits (PASICs/ASICs), program- and application-specific standard products (PSSPs/ASSPs), system-on-a-chip (SOC) systems, and complex programmable logic devices (CPLDs).
The terms "module", "program", and "engine" may be used to describe an aspect of computing system 16 implemented to perform a particular function. In some cases, a module, program, or engine may be instantiated via logic subsystem 20 executing instructions held by storage subsystem 22. Accordingly, FIG. 10 shows engines 104, which include the selection engine described above, among others. It will be understood that different modules, programs, and/or engines may be instantiated from the same application, service, code block, object, library, routine, API, function, etc. Likewise, the same module, program, and/or engine may be instantiated by different applications, services, code blocks, objects, routines, APIs, functions, etc. The terms "module", "program", and "engine" may encompass individual or groups of executable files, data files, libraries, drivers, scripts, database records, etc.
It will be appreciated that a "service", as used herein, is an application program executable across multiple user sessions. A service may be available to one or more system components, programs, and/or other services. In some implementations, a service may run on one or more server computing devices.
When included, display subsystem 18 may be used to present a visual representation of data held by storage subsystem 22. This visual representation may take the form of a graphical user interface (GUI). As the herein-described methods and processes change the data held by the storage subsystem, and thus transform the state of the storage subsystem, the state of display subsystem 18 may likewise be transformed to visually represent changes in the underlying data. Display subsystem 18 may include one or more display devices utilizing virtually any type of technology. Such display devices may be combined with logic subsystem 20 and/or storage subsystem 22 in a shared enclosure, or such display devices may be peripheral display devices.
When included, communication subsystem 102 may be configured to communicatively couple computing system 16 with one or more other computing devices. Communication subsystem 102 may include wired and/or wireless communication devices compatible with one or more different communication protocols. As non-limiting examples, the communication subsystem may be configured for communication via a wireless telephone network, or a wired or wireless local- or wide-area network. In some embodiments, the communication subsystem may allow computing system 16 to send and/or receive messages to and/or from other devices via a network such as the Internet.
Finally, it will be noted that the subject matter of the present disclosure includes all novel and non-obvious combinations and sub-combinations of the various processes, systems, and configurations, and other features, functions, acts, and/or properties disclosed herein, as well as any and all equivalents thereof.

Claims (10)

1. Enacted on a computing system, a method to help a user visualize how a wearable item will look on the body of the user, the method comprising:
receiving an image of the body of the user from an image-capture component;
based on the image, constructing a three-dimensional avatar posed substantially like the user;
obtaining data identifying the wearable item as selected for the user, the data including, at least in part, a plurality of metrics that define the wearable item;
attaching a virtual form of the wearable item to the avatar; and
providing the avatar, with the virtual form of the wearable item attached, to a display component for review by the user.
2. The method of claim 1, wherein receiving the image includes receiving one or more depth maps, wherein constructing the avatar includes constructing a head and body model of the user based on the one or more depth maps, and wherein the head and body model accurately represent the size and shape of the body of the user.
3. The method of claim 1, wherein the wearable item is selected for the user via input from another person.
4. The method of claim 3, further comprising selecting the other person as one who exhibits purchasing behavior similar to that of the user with respect to wearable items.
5. The method of claim 1, wherein the wearable item is selected by an automated service of a vendor of the wearable item.
6. The method of claim 1, wherein the wearable item is selected via a selection engine of the computing system, the selection engine being configured to select the wearable item from an inventory of items accessible to the user.
7. The method of claim 6, wherein the wearable item is a first wearable item, and wherein the selection engine is configured to select the first wearable item based on a match to a second wearable item in one or more of color, pattern, and/or brand.
8. The method of claim 7, wherein the second wearable item is an item that the user has selected to wear and/or purchase.
9. The method of claim 1, wherein providing the avatar to the display component includes providing the avatar of the user together with an avatar of another person.
10. The method of claim 1, further comprising receiving an image of the user while the user is wearing the wearable item, and saving the image for review by the user.
CN201380040978.4A 2012-08-02 2013-08-01 Avatar-based virtual dressing room Pending CN104854623A (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US13/565586 2012-08-02
US13/565,586 US9646340B2 (en) 2010-04-01 2012-08-02 Avatar-based virtual dressing room
PCT/US2013/053115 WO2014022608A2 (en) 2012-08-02 2013-08-01 Avatar-based virtual dressing room

Publications (1)

Publication Number Publication Date
CN104854623A true CN104854623A (en) 2015-08-19

Family

ID=48948553

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201380040978.4A Pending CN104854623A (en) 2012-08-02 2013-08-01 Avatar-based virtual dressing room

Country Status (3)

Country Link
EP (1) EP2880637A2 (en)
CN (1) CN104854623A (en)
WO (1) WO2014022608A2 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105336005B (en) * 2014-06-27 2018-12-14 华为技术有限公司 A kind of method, apparatus and terminal obtaining target object sign data
US9754410B2 (en) 2017-02-15 2017-09-05 StyleMe Limited System and method for three-dimensional garment mesh deformation and layering for garment fit visualization
US10242498B1 (en) 2017-11-07 2019-03-26 StyleMe Limited Physics based garment simulation systems and methods
US10373373B2 (en) 2017-11-07 2019-08-06 StyleMe Limited Systems and methods for reducing the stimulation time of physics based garment simulations
US10776979B2 (en) 2018-05-31 2020-09-15 Microsoft Technology Licensing, Llc Virtual skeleton based on computing device capability profile
WO2022234240A1 (en) * 2021-05-05 2022-11-10 Retail Social Limited Systems and methods for the display of virtual clothing

Citations (4)

Publication number Priority date Publication date Assignee Title
US6901379B1 (en) * 2000-07-07 2005-05-31 4-D Networks, Inc. Online shopping with virtual modeling and peer review
US20100030578A1 (en) * 2008-03-21 2010-02-04 Siddique M A Sami System and method for collaborative shopping, business and entertainment
US20110022965A1 (en) * 2009-07-23 2011-01-27 Apple Inc. Personalized shopping avatar
CN102201099A (en) * 2010-04-01 2011-09-28 微软公司 Motion-based interactive shopping environment

Family Cites Families (1)

Publication number Priority date Publication date Assignee Title
US7194428B2 (en) * 2001-03-02 2007-03-20 Accenture Global Services Gmbh Online wardrobe

Non-Patent Citations (1)

Title
HAUSWIESNER S., STRAKA M., REITMAYR G.: "Free viewpoint virtual try-on with commodity depth cameras", Proceedings of the 10th International Conference on Virtual Reality Continuum and Its Applications in Industry *

Cited By (4)

Publication number Priority date Publication date Assignee Title
CN106097442A (en) * 2016-08-24 2016-11-09 广东华邦云计算股份有限公司 A kind of intelligent simulation dressing system and application process thereof
TWI625687B (en) * 2016-11-01 2018-06-01 緯創資通股份有限公司 Interactive clothes and accessories fitting method, display system and computer-readable recording medium thereof
US10373333B2 (en) 2016-11-01 2019-08-06 Wistron Corporation Interactive clothes and accessories fitting method and display system thereof
CN111837152A (en) * 2018-01-24 2020-10-27 耐克创新有限合伙公司 System, platform and method for personalized shopping using virtual shopping assistant

Also Published As

Publication number Publication date
WO2014022608A3 (en) 2015-04-02
EP2880637A2 (en) 2015-06-10
WO2014022608A2 (en) 2014-02-06

Similar Documents

Publication Publication Date Title
KR101911133B1 (en) Avatar construction using depth camera
US9646340B2 (en) Avatar-based virtual dressing room
CN104854623A (en) Avatar-based virtual dressing room
US9098873B2 (en) Motion-based interactive shopping environment
CN102470274B (en) Auto-generating a visual representation
JP2015531098A5 (en)
Asteriadis et al. Estimating human motion from multiple kinect sensors
CN104508709B (en) Animation is carried out to object using human body
US11868515B2 (en) Generating textured polygon strip hair from strand-based hair for a virtual character
Chen et al. KinÊtre: animating the world with the human body
JP5865357B2 (en) Avatar / gesture display restrictions
US11557076B2 (en) Computer generated hair groom transfer tool
WO2018208477A1 (en) Creating a mixed-reality video based upon tracked skeletal features
Gültepe et al. Real-time virtual fitting with body measurement and motion smoothing
CN102622774A (en) Living room movie creation
Vitali et al. Acquisition of customer’s tailor measurements for 3D clothing design using virtual reality devices
CN115244495A (en) Real-time styling for virtual environment motion
Parger et al. UNOC: Understanding occlusion for embodied presence in virtual reality
US20200013232A1 (en) Method and apparatus for converting 3d scanned objects to avatars
Thalmann A new generation of synthetic actors: the real-time and interactive perceptive actors
US11386615B2 (en) Creating a custom three-dimensional body shape model
Vladimirov et al. Overview of Methods for 3D Reconstruction of Human Models with Applications in Fashion E-Commerce
Shen et al. Automatic pose tracking and motion transfer to arbitrary 3d characters
US20200013233A1 (en) Method and apparatus for fitting an accessory object to an avatar
Treepong et al. The development of an augmented virtuality for interactive face makeup system

Legal Events

Date Code Title Description
PB01 Publication
EXSB Decision made by sipo to initiate substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20150819