WO2007148855A1 - A system for simulating a cyber-character which a user feature is reflected - Google Patents


Info

Publication number
WO2007148855A1
Authority
WO
WIPO (PCT)
Prior art keywords
simulation
user
image
face image
accessory
Prior art date
Application number
PCT/KR2006/004490
Other languages
French (fr)
Inventor
Chang-Hwan Lee
Hyun-Jin Kim
Original Assignee
Maxuracy Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Maxuracy Co., Ltd. filed Critical Maxuracy Co., Ltd.
Publication of WO2007148855A1 publication Critical patent/WO2007148855A1/en

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G06Q30/0601 - Electronic shopping [e-shopping]
    • G06Q30/0603 - Catalogue ordering
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 - Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/10 - Services
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06Q - INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q30/00 - Commerce
    • G06Q30/06 - Buying, selling or leasing transactions
    • G06Q30/0601 - Electronic shopping [e-shopping]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T13/00 - Animation
    • G06T13/20 - 3D [Three Dimensional] animation
    • G06T13/40 - 3D [Three Dimensional] animation of characters, e.g. humans, animals or virtual beings

Definitions

  • the invention relates to a system for simulating a cyber-character in which a user feature is reflected, and more particularly, to such a system which flexibly link-disposes, based on a wired/wireless on-line communication network such as the internet, a computation module capable of creating a CU (a Cyber-character in which a User feature, i.e., an individual feature of the user such as face shape and body size, is reflected) and storing the CU in the system or distributing the CU to an exterior contents server; a computation module capable of complexly applying/providing a product/service such as clothes, an accessory, a makeup product, a plastic operation or a hair beauty service to the CU to carry out a simulation; and a computation module capable of selling/guiding such a product/service in real time in accordance with the needs of the user, thereby enabling the user to be provided with a high-quality service, which is substantially useful in buying the fashion/beauty product or fashion/beauty service, in a one-stop manner.
  • the shops/stores which individually sell/handle/provide the clothes, accessory, makeup product, plastic operation, hair beauty service and the like, are dispersed in all directions. Accordingly, even if a user has a desire for carrying out virtual simulations of the various products/services at one time, the user cannot realize the desire unless the user visits the corresponding shops/ stores (or web sites managed by the shops/stores) one by one.
  • An object of the invention is to provide a system for simulating a cyber-character in which the user feature is reflected, which flexibly link-disposes, based on a wired/wireless on-line communication network such as the internet, a computation module capable of creating a cyber-character in which a user feature (face shape and body size) is reflected (hereinafter abbreviated to "CU") and storing the CU in the system or distributing the CU to an exterior contents server; a computation module capable of complexly applying/providing a product/service such as clothes, an accessory, a makeup product, a plastic operation or a hair beauty service to the CU to carry out a simulation; and a computation module capable of selling/guiding such a product/service in real time in accordance with the needs of the user, thereby enabling the user to be provided with a high-quality service, which is substantially useful in buying the fashion/beauty product or fashion/beauty service, in a one-stop manner, without the difficulty of visiting the corresponding shops/stores one by one.
  • a CU simulation system comprising: a CU simulation operating server that is selectively signal-connected to a capture image acquiring system based on an on-line communication network and collectively controls a process of creating a cyber-character in which the user feature is reflected (CU) and a process of virtually applying a beauty/fashion related image to the CU to carry out a simulation, while notifying/operating a guide window for guiding an information input and observation in a part of the capture image acquiring system; a CU creating unit that is controlled by the CU simulation operating server, converts a predetermined standard face image to be matched to a user face image created/transmitted by the capture image acquiring system so as to create a CU face image, converts a predetermined standard body image to be matched to a user body feature created/transmitted by the capture image acquiring system so as to create a CU body image, and combines the CU face image and the CU body image to create a CU; and a CU simulation unit that is controlled by the CU simulation operating server and selectively combines a plurality of beauty/fashion related images with the CU to carry out a simulation.
  • FIG. 1 conceptually shows a general structure of a CU simulation system according to an embodiment of the invention
  • FIGS. 2 and 9 conceptually show a noticed state of a guide window according to an embodiment of the invention
  • FIG. 3 conceptually shows a general operating process of a CU creating unit according to an embodiment of the invention
  • FIG. 4 conceptually shows a detailed structure of a CU creating unit according to an embodiment of the invention
  • FIG. 5 conceptually shows an operating process of a similar standard face image loading module according to an embodiment of the invention
  • FIG. 6 conceptually shows an operating process of a CU face image creating engine according to an embodiment of the invention
  • FIG. 7 conceptually shows an operating process of a CU body image loading engine according to an embodiment of the invention
  • FIG. 8 conceptually shows an operating process of a CU creating engine according to an embodiment of the invention
  • FIG. 10 conceptually shows an operating process of a clothes simulation module according to an embodiment of the invention
  • FIG. 11 conceptually shows a detailed structure of a clothes simulation module according to an embodiment of the invention.
  • FIG. 12 conceptually shows a noticed state of a clothes simulation guide platform according to an embodiment of the invention
  • FIG. 13 conceptually shows a detailed structure of a plastic operation simulation module according to an embodiment of the invention
  • FIGS. 14 and 16 conceptually show a noticed state of a plastic operation simulation guide platform according to an embodiment of the invention
  • FIG. 15 conceptually shows an operating process of a plastic operation simulation engine according to an embodiment of the invention
  • FIG. 17 conceptually shows a detailed structure of a makeup simulation engine according to an embodiment of the invention
  • FIG. 18 conceptually shows a noticed state of a makeup simulation guide platform according to an embodiment of the invention
  • FIG. 19 conceptually shows an operating process of a makeup simulation engine ac cording to an embodiment of the invention
  • FIG. 20 conceptually shows an operating process of an accessory simulation module according to an embodiment of the invention
  • FIG. 21 conceptually shows a detailed structure of an accessory simulation module according to an embodiment of the invention
  • FIG. 22 conceptually shows a noticed state of an accessory simulation guide platform according to an embodiment of the invention
  • FIG. 23 conceptually shows an operating process of a hair simulation module according to an embodiment of the invention
  • FIG. 24 conceptually shows a detailed structure of a hair simulation module according to an embodiment of the invention
  • FIGS. 25 and 26 conceptually show a noticed state of a hair simulation guide platform according to an embodiment of the invention.
  • a CU simulation system 100 is selectively signal-connected to capture image acquiring systems 2 that are widely distributed/arranged at various places, based on a series of wired/wireless communication network 30 such as internet network.
  • the capture image acquiring system 2 is provided in a booth B that is distributed at each place or provided in a shop/store M that sells/handles/provides clothes, accessory, makeup product, plastic operation, hair beauty service and the like.
  • the capture image acquiring system 2 comprises a photographing set that has a capture camera, a pattern generator and an illumination device and link-operates the capture camera, the pattern generator and the illumination device to generate a capture image for a virtual plastic operation having reflected a plastic operation target part while going up and down depending on an outside computation control, the capture camera, the pattern generator and the illumination device being provided to one place and dispersion-disposed depending on functions thereof; and a device driving control tool that belongs to an information processing device 1 electrically connected to the photographing set and selectively communicates with the photographing set to control driving states of the capture camera, the pattern generator and the illumination device and the up and down states of the photographing set, correspondingly to photographing environments.
  • a detailed structure of the capture image acquiring system 2 is specifically described in a Korean Patent Application No. 2005-101592 entitled with "SYSTEM OF ACQUIRING CAPTURE IMAGE FOR VIRTUAL PLASTIC OPERATION", which is filed by the applicant.
  • After visiting the booth B, a user can personally operate the capture image acquiring system 2.
  • Alternatively, after visiting the shop/store M that sells/handles/provides clothes, an accessory, a makeup product, a plastic operation, a hair beauty service and the like, a user can use the capture image acquiring system 2 in accordance with the guidance of the shop/store.
  • the user can stably transmit a face image, body information (height, age, weight and the like) and the like of the user to the CU simulation system 100 of the invention.
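As a minimal illustration of the information handed over at this step, the transmitted data can be pictured as a face image bundled with the body information fields the patent lists (height, age, weight and the like). All field names below are assumptions for illustration, not part of the patent.

```python
# Hypothetical sketch of the payload the capture image acquiring system
# might transmit to the CU simulation system 100: a captured face image
# plus basic body information. Field names are illustrative only.

def build_capture_payload(face_image_bytes, height_cm, weight_kg, age, sex):
    """Bundle a captured face image with the user's body information."""
    body_info = {"height_cm": height_cm, "weight_kg": weight_kg,
                 "age": age, "sex": sex}
    return {"face_image": face_image_bytes, "body_info": body_info}

payload = build_capture_payload(b"<jpeg bytes>", 172, 65, 29, "F")
```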
  • the CU simulation system 100 comprises a CU simulation operating server 101 that is selectively signal-connected to the capture image acquiring system 2, an exterior contents service server 40, a product selling server 20, a product distributor server 10, a communication terminal 50 (for example, a mobile communication terminal) of a plastic operation/beauty service provider and the like via an interface unit 102, a wired/wireless on-line communication network 30 and the like, and a CU creating unit 110, a CU simulation unit 200, an operating aiding unit 130 and a product selling guide unit 150, which are collectively controlled by the CU simulation operating server 101.
  • the CU simulation operating server 101 collectively controls a process of creating a CU, a process of virtually applying a beauty/fashion related image to the CU to carry out a simulation, and a process of guiding a product that is selected by the user, while notifying/operating a guide window 301 for guiding an information input and observation in a part of the capture image acquiring system 2, as shown in Fig. 2 (or Fig. 9).
  • the operating aiding unit 130 which is controlled by the CU simulation operating server 101, extracts various simulation data stored in a simulation operating information D/B 170 (for example, user registration data, shop/store registration data, text/video/link/image/setting data for notifying/operating the guide window, exterior server registration data, exterior communication terminal registration data, simulation creation data, CU related creation data and the like) or stores new data in the simulation operating information D/B 170 in accordance with an event progress of each computation unit disposed in the system 100, thereby enabling a series of simulation service processes by the CU simulation operating server 101 to be normally progressed without specific problems.
  • the CU creating unit 110 which is controlled by the CU simulation operating server 101, converts a predetermined/pre- stored standard face image to be matched to a user face image 401 created/transmitted by the capture image acquiring system 2 so as to create a CU face image 407, converts a predetermined standard body image 404 to be matched to a user body feature information 402 created/transmitted by the capture image acquiring system 2 so as to create a CU body image 406 and combines the CU face image 407 and the CU body image 406 to create a CU 408.
  • the CU simulation unit 200 which is controlled by the CU simulation operating server 101, selectively combines a plurality of beauty/fashion related images with the CU 408 or CU face image 407 or modifies the CU 408 or CU face image, 407 with which the plurality of beauty/fashion related images (clothes images, hair images, accessory images, makeup images and the like) are selectively combined, to carry out a simulation, in accordance with a computation event of the user through the guide window 301.
  • the product selling guide unit 150 which is controlled by the CU simulation operating server 101, progresses a series of communication processes with the product distributor server 10, the product selling server 20, the communication terminal 50 of the plastic operation/beauty service provider and the like via the interface unit 102 in accordance with a request of the user having completed the simulation, thereby enabling the product/service selected by the user, such as clothes, accessory, makeup product, plastic operation and hair beauty service, to be sold real time.
  • the CU creating unit 110 comprises a CU creation control module 111 that collectively controls a general CU creating process while using/operating a processing buffer 111a for supporting a processing progress of self-control computation modules, and a user face image receiving module 113, a user face image preprocessing module 114, a specific point appointing module 117, a similar standard face image loading module 115, a CU face image creating engine 118, a user body information receiving module 119, a CU body image loading engine 120, a CU creating engine 121, a CU exterior-providing module 123 and an information output module 122, which are collectively controlled by the CU creation control module 111.
  • the user face image receiving module 113 forms a selective communication relation with the capture image acquiring system 2 via an information exchange module 112 and receives the user face image 401 that is transmitted from the capture image acquiring system 2.
  • the user face image preprocessing module 114 appropriately carries out a preprocess of the user face image 401, which is received by the user face image receiving module 113 (for example, a process of removing a background of the user face image, a process of recovering a lost part of the user face image and the like).
  • the specific point appointing module 117 gives/appoints a series of specific points to a main part (for example, vicinity of eye, philtrum, nose and the like) of the user face image that has been preprocessed by the user face image preprocessing module 114, thereby enabling a feature of the user face image to be easily recognized (needless to say, the process of appointing the specific points may be carried out in advance by the capture image acquiring system).
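The appointment of specific points can be sketched as a simple mapping from detected feature positions to named landmarks on the main parts of the face. The point names and coordinates below are illustrative assumptions; a real module would detect these positions automatically.

```python
# Minimal sketch of appointing "specific points" to the main parts
# (eye vicinity, philtrum, nose and the like) of a preprocessed face
# image. Names and (x, y) pixel positions are illustrative.

SPECIFIC_POINT_NAMES = ["left_eye", "right_eye", "nose_tip", "philtrum"]

def appoint_specific_points(detections):
    """Map detected feature positions (name -> (x, y)) to the named
    specific points; a missing main part raises an error."""
    points = {}
    for name in SPECIFIC_POINT_NAMES:
        if name not in detections:
            raise ValueError(f"missing specific point: {name}")
        points[name] = detections[name]
    return points
```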
  • the similar standard face image loading module 115 forms a series of communication relations with a standard face/body image storing library 116, as shown in Fig. 5, and selectively loads a similar standard face image 405, which most closely resembles the user face image 401, from the standard face images that are predetermined/pre-stored in the library.
  • the CU face image creating engine 118 progresses a process of acquiring "a difference degree (for example, a degree indicating how different two positions are) between the position m_k of the k-th specific point appointed to the main part of the user face image 401 and the position v_k of the vertex constituting a polygon mesh (PS) of the similar standard face image 405."
  • the CU face image creating engine 118 analyzes/acquires the difference between the two positions under the limited condition shown in equation 1, i.e., the condition that "there is little difference between the position m_k of the specific point appointed to the main part of the user face image 401 and the position v_k of the vertex of the similar standard face image 405," thereby enabling the process of acquiring the CU face image 407 to be progressed more rapidly while minimizing deformation of the similar standard face image 405.
  • v_k : position of the k-th vertex constituting a polygon mesh of the similar standard face image.
  • The positions V_i of the vertexes that will constitute the polygon mesh of the CU face image 407 are estimated through a least-squares method, as shown in equation 2. Therefore, in the invention, the vertexes v_i constituting the polygon mesh PS of the similar standard face image 405 exhibit, within a minimum deformation range, a transition pattern that becomes optimally similar to the features of the vertexes constituting the polygon mesh PT of the user face image 401 (i.e., the feature error of the two vertex sets is minimized).
  • the similar standard face image 405 can be finally changed into the CU face image 407 having optimally reflected the feature of the user face image 401, while minimizing the deformation of the similar standard face image.
  • V_i : position of the i-th vertex that will constitute a polygon mesh of the CU face image
  • T_i : transform matrix of the i-th triangle constituting a polygon mesh of the similar standard face image
  • T_j : transform matrix of the j-th triangle neighboring T_i
  • the term Σ_i Σ_{j ∈ adj(i)} ||T_i − T_j||² included in equation 2 is a term that estimates the V values so that the transform matrix T_i of the i-th triangle P1 constituting the polygon mesh PS of the similar standard face image 405 is transformed while having a value as similar as possible to the transform matrix T_j of the j-th triangle P2 neighboring it, in the situation that v_i is converted into V_i to form the polygon mesh PN of the CU face image 407 in earnest (refer to Fig. 6).
  • the CU face image 407 can maintain an optimized very smooth shape due to an increase in a similarity of the polygon meshes neighboring to each other.
  • the term Σ_i ||V_i − c_i||² included in equation 2 is a term that estimates the V values so that the position v_i of the i-th vertex constituting the polygon mesh PS of the similar standard face image 405 is transformed while minimizing, as far as possible, the difference from the position c_i of the vertex that constitutes the polygon mesh PT of the user face image 401 and nearest-corresponds to v_i, in the situation that v_i is converted into V_i to form the polygon mesh PN of the CU face image 407 in earnest (refer to Fig. 6).
  • V_i is finally estimated by the calculation of equation 2, which includes "an adjusting term that brings the vertex positions of the similar standard face image as close as possible to the vertex positions of the user face image 401," and v_i is transformed into V_i to constitute the polygon mesh of the CU face image 407
  • the CU face image 407 can naturally form a shape that is closest to the feature of the user face image 401.
  • w_s, w_m, w_d and the like included in the respective terms of equation 2 are weight factors of the corresponding terms.
  • the CU face image creating engine 118 differently sets the weight factors of the respective terms depending on conditions (for example, sets w_s = 0.01, w_m = 0.1 and w_d = 0.2) in the calculation of equation 2, thereby enabling the CU face image 407, which will be finally completed, to have a shape matched to the user face image 401 more efficiently.
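The weighted least-squares estimation described above can be sketched as follows. Note that the patent's smoothness term compares triangle transform matrices, whereas this simplified sketch substitutes an edge-preserving term over vertex positions, so it illustrates the structure of equation 2 (marker term w_m, data term w_d, smoothness term w_s) rather than reproducing it exactly.

```python
import numpy as np

# Simplified sketch of the least-squares vertex estimation: new
# positions V are found so that marker vertices match the appointed
# user specific points (w_m), every vertex stays near its nearest
# user-face correspondence (w_d), and edges deform smoothly (w_s).

def deform_vertices(v, edges, markers, c, w_s=0.01, w_m=0.1, w_d=0.2):
    """v: (n,2) standard-face vertices; edges: (i,j) neighbor pairs;
    markers: {vertex index: target (x,y)}; c: (n,2) nearest user-face
    vertices. Returns (n,2) deformed positions V."""
    v = np.asarray(v, float)
    c = np.asarray(c, float)
    n = len(v)
    rows, rhs = [], []
    for i, j in edges:              # smoothness: preserve edge vectors
        r = np.zeros(n)
        r[i], r[j] = w_s, -w_s
        rows.append(r)
        rhs.append(w_s * (v[i] - v[j]))
    for k, m in markers.items():    # markers: hit user specific points
        r = np.zeros(n)
        r[k] = w_m
        rows.append(r)
        rhs.append(w_m * np.asarray(m, float))
    for i in range(n):              # data: stay near correspondences c_i
        r = np.zeros(n)
        r[i] = w_d
        rows.append(r)
        rhs.append(w_d * c[i])
    V, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
    return V
```

When the marker targets and correspondences coincide with the standard-face vertices, the solver leaves the mesh unchanged, which is the expected zero-deformation baseline.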
  • the user body information receiving module 119 which is controlled by the CU creation control module 111, forms a selective communication relation with the capture image acquiring system 2 via the information exchange module 112 and receives the user body information (height information, weight information, sex information, age information and the like) that is transmitted from the capture image acquiring system 2.
  • the CU body image loading engine 120 forms a communication relation with the standard face/body image storing library 116, as shown in Fig. 7, and selectively loads the CU body image 406 conforming with the body feature of the user, from the standard body images 404 that are predetermined/pre-stored.
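The loading of a conforming CU body image can be pictured as a nearest match over the pre-stored standard body images. The library records and their fields below are assumptions for illustration; the patent only states that a stored standard body image conforming with the user's body feature is selected.

```python
# Hedged sketch of the CU body image loading engine's selection step:
# pick the pre-stored standard body image closest to the user's body
# information. Entry IDs and fields are illustrative.

STANDARD_BODY_LIBRARY = [
    {"id": "male_170_60",   "sex": "M", "height_cm": 170, "weight_kg": 60},
    {"id": "male_180_75",   "sex": "M", "height_cm": 180, "weight_kg": 75},
    {"id": "female_160_50", "sex": "F", "height_cm": 160, "weight_kg": 50},
    {"id": "female_170_58", "sex": "F", "height_cm": 170, "weight_kg": 58},
]

def load_cu_body_image(sex, height_cm, weight_kg):
    """Return the stored standard body entry closest to the user."""
    candidates = [e for e in STANDARD_BODY_LIBRARY if e["sex"] == sex]
    return min(candidates, key=lambda e: abs(e["height_cm"] - height_cm)
                                       + abs(e["weight_kg"] - weight_kg))
```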
  • When the creation (loading) of the CU face image 407 and the CU body image 406, in which the user feature is reflected, is completed as the computation modules carry out their functions, the CU creating engine 121 immediately combines the CU face image 407 and the CU body image 406, for example combines the CU face image 407 with a face part 406a of the CU body image 406, thereby creating a completed CU 408.
  • the CU 408 in which the individual features of the user (face feature, body feature and the like) are appropriately reflected, can be stably created.
  • the information output module 122 which is controlled by the CU creation control module 111, outputs/transmits the completed CU (or CU face image and the like) to the CU simulation operating server 101 via the information exchange module 112, thereby enabling the corresponding CU (or CU face image and the like) to be stably stored/preserved in the simulation operating information D/B 170 and to be stably provided to the user through the guide window 302 as shown in Fig. 9.
  • the CU exterior-providing module 123, which is controlled by the CU creation control module 111, communicates with the exterior contents service server 40 via the information exchange module 112 and the interface module 102 to transmit/distribute the CU to the exterior contents service server 40, thereby enabling the cyber character, which is managed by the exterior contents service server 40 (for example, an exterior contents service server having a cooperation relation with the system of the invention), to be batch-replaced with the CU 408.
  • the user can have an advantage in that the cyber character of various web-sites that the user uses (for example, chatting web-site, blog, mini homepage and the like) is batch-replaced with the CU 408 in which the face and body features of the user are reflected in an appropriate manner.
  • the CU simulation unit 200 of the invention comprises a clothes simulation module 210, a plastic operation simulation module 220, a makeup simulation module 230, an accessory simulation module 240 and a hair simulation module 250.
  • the clothes simulation module 210 loads a CU 408a with which a beauty/fashion related image (for example, hair image, accessory image, makeup image and the like) is selectively combined (alternatively, original CU having not combined with a separate image), as shown in Fig. 10, and then combines a clothes image 410 having reflected the selected item and the body feature of the user with the CU 408a, thereby creating a CU 411 wearing the clothes and simulating the created CU.
  • the clothes simulation module 210 comprises a clothes simulation control section 211 that uses/operates a processing buffer 211a for supporting a processing progress of self-control computation modules and collectively controls an overall clothes simulation process, and a clothes simulation guide platform operating section 215, a user body information acquiring section 213, a user selection clothes information acquiring section 214, a user matching clothes image loading section 219, a CU loading section 216, a clothes simulation engine 217 and a clothes buying guide section 212, which are collectively controlled by the clothes simulation control section 211.
  • the clothes simulation guide platform operating section 215 extracts a variety of operating information stored in the self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "clothes simulation guide platform 308 for guiding a clothes simulation process" as shown in Fig. 12 based on the extracted information and notifies/operates the clothes simulation guide platform 308 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
  • the user body information acquiring section 213 communicates with the CU creating unit 110 via an information exchange section 210a in accordance with a computation event of a user through the clothes simulation guide platform 308, thereby acquiring the user body information transmitted from the capture image acquiring system 2.
  • the user selection clothes information acquiring section 214 communicates with the clothes simulation guide platform operating section 215, thereby acquiring the information about the clothes selected by the user.
  • the CU loading section 216 communicates with the operating aiding unit 130 via the information exchange section 210a, thereby loading the CU 408a stored in the simulation operating information D/B 170 (refer to Fig. 10).
  • the CU 408a may take a shape in which a hair image, an accessory image, a makeup image and the like are selectively combined or may, alternatively, take an original image shape in which a separate image is not combined, depending on the situations.
  • the user matching clothes image loading section 219 which is controlled by the clothes simulation control section 211, communicates with a clothes image library 218 to load a series of user matching clothes images 410, in which the body feature of the user and the information about the clothes selected by the user are reflected, from the clothes images 409 that have been already stored in the library (refer to Fig. 10).
  • the clothes simulation engine 217 immediately progresses a series of image combining routines to combine the user matching clothes image 410 with the CU 408a with which the beauty/fashion related image is selectively combined (or original CU), thereby creating the CU 411 wearing the clothes and simulating the created CU through the clothes simulation guide platform 308.
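The image combining routine of the clothes simulation engine can be sketched as alpha-blending a clothes image onto the CU image at a chosen offset. Array-based compositing here stands in for whatever rendering the engine actually performs; the function name and layout are assumptions.

```python
import numpy as np

# Minimal sketch of combining a user matching clothes image (RGBA)
# with a CU image (RGB): per-pixel alpha blending over a sub-region.

def combine_clothes(cu_rgb, clothes_rgba, top, left):
    """Blend clothes_rgba (h,w,4) onto cu_rgb (H,W,3) at (top, left)."""
    out = cu_rgb.astype(float).copy()
    h, w = clothes_rgba.shape[:2]
    alpha = clothes_rgba[:, :, 3:4] / 255.0      # (h,w,1) opacity
    region = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (alpha * clothes_rgba[:, :, :3]
                                       + (1 - alpha) * region)
    return out.astype(np.uint8)
```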
  • the user can efficiently buy/select the clothes through the clothes wearing shape of the CU 411 similar to a real figure of the user.
  • the user can progress the clothes simulation process with a greater sense of reality while comparing it with the other aspects of the user (the hair aspect, accessory aspect, makeup aspect and the like of the user). Accordingly, the user naturally has the advantage that the clothes buying/selection efficiency is maximized.
  • the clothes buying guide section 212 which is controlled by the clothes simulation control section 211, immediately extracts the clothes related information stored in an information storage area 212a thereof (for example, ID information of the clothes selected by the user, distributor information of the clothes selected by the user and the like), creates a series of buying request information based on the extracted information and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like.
  • the product selling guide unit 150 selectively transmits the corresponding buying request information to the product distributor server 10 or the product selling server 20 (in this case, a product selling server that is directly managed by the system of the invention).
  • the user can real-time buy the clothes that the user wants, simultaneously with the series of clothes simulations, without separate complex processes (needless to say, a series of subsequent processes, for example a settlement process may be progressed by the product distributor server and the product selling server).
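The selective transmission of the buying request can be pictured as a simple routing decision: forward to the directly managed product selling server when the product is handled there, else to the product distributor server. The routing rule and field names below are assumptions for illustration; the patent only states that the request is selectively transmitted to one of the two servers.

```python
# Hedged sketch of the product selling guide unit's routing step for
# a buying request created from the stored clothes information.

def route_buying_request(request, directly_managed_ids):
    """Return the name of the destination server for a buying request."""
    if request["product_id"] in directly_managed_ids:
        return "product_selling_server"      # server 20, directly managed
    return "product_distributor_server"      # server 10
```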
  • the plastic operation simulation module 220 which communicates with the CU simulation operating server 101 together with the clothes simulation module 210, loads a CU face image 407a with which the beauty/fashion related image (for example, hair image, accessory image, makeup image and the like) is selectively combined (or original CU face image having not combined with a separate image) and then modifies the CU face image 407a in accordance with a plastic operation selection item of the user, thereby creating a plastic operation face image 412 and simulating the created image (refer to Fig. 15).
  • the plastic operation simulation module 220 comprises a plastic operation simulation control section 221 that uses/operates a processing buffer 221a for supporting a processing progress of self-control computation modules and collectively controls an overall plastic operation simulation process, and a plastic operation simulation guide platform operating section 222, a CU face image acquiring section 223, a plastic operation simulation engine 224 and a plastic operation product buying guide section 225, which are collectively controlled by the plastic operation simulation control section 221.
  • the plastic operation simulation guide platform operating section 222 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "plastic operation simulation guide platform 310 for guiding a plastic operation simulation process" as shown in Fig. 14 based on the extracted information and notifies/operates the plastic operation simulation guide platform 310 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
  • the CU face image acquiring section 223 communicates with the operating aiding unit 130 via an information exchange section 220a in accordance with a computation event of the user through the plastic operation simulation guide platform 310, thereby loading a CU face image 407a stored in the simulation operating information D/B 170.
  • the CU face image 407a may take a shape in which a hair image, an accessory image, a makeup image, a clothes image and the like are selectively combined or may, alternatively, take an original image shape in which a separate image is not combined, depending on the situations.
  • the plastic operation simulation engine 224 progresses a series of image conversion routines to modify constitutional elements (for example, polygon mesh, specific point coordinates, color and the like) of the CU face image 407a with which the beauty/fashion related image is selectively combined (or the original CU face image), thereby creating a series of plastic operation face images 412 and simulating the created images through the plastic operation simulation guide platform 310, as shown in Fig. 15.
  • the user can progress the plastic operation simulation process with a greater sense of reality while comparing it with the other aspects of the user (hair aspect of the user, accessory aspect, makeup aspect and the like). Accordingly, the user naturally gains the advantage that the plastic operation selection efficiency is maximized.
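The image-conversion routine described for the plastic operation simulation engine 224 (modifying specific point coordinates of the face image according to the selected operation) can be pictured, under heavy simplification, as coordinate offsets applied to labeled landmark points. The point names, the pattern table and the offset values below are illustrative assumptions, not data from the patent.

```python
# The face model is reduced to a few labeled 2D points, and each "plastic
# operation pattern" is a table of coordinate offsets applied to those points.
FACE_POINTS = {"nose_tip": (50.0, 60.0), "chin": (50.0, 90.0)}

OPERATION_PATTERNS = {
    # pattern id -> {point name: (dx, dy)}
    "nose_lift": {"nose_tip": (0.0, -3.0)},
    "chin_reduction": {"chin": (0.0, -5.0)},
}

def apply_operation(points, pattern_id):
    """Return a new point set with the selected pattern's offsets applied."""
    offsets = OPERATION_PATTERNS[pattern_id]
    result = {}
    for name, (x, y) in points.items():
        dx, dy = offsets.get(name, (0.0, 0.0))
        result[name] = (x + dx, y + dy)
    return result

modified = apply_operation(FACE_POINTS, "nose_lift")
print(modified["nose_tip"])  # (50.0, 57.0)
```

A real engine would deform a polygon mesh and re-render; this sketch only shows the lookup-and-offset structure of such a routine.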
  • the plastic operation product buying guide section 225 which is controlled by the plastic operation simulation control section 221, immediately extracts the plastic operation related information stored in a plastic operation product related information storage area 226 (for example, ID information of the plastic operation pattern selected by the user, information of a medical institution capable of operating a plastic operation pattern selected by the user and the like), creates a series of medical institution lists 312 based on the extracted information, as shown in Fig. 16, and displays the created medical institution lists 312 through a part or all of the plastic operation simulation guide platform 310.
  • the plastic operation product buying guide section 225 immediately creates buying request information, in which the ID information of the plastic operation pattern selected by the user, the medical institution information capable of operating the plastic operation pattern selected by the user and the like are reflected, and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like.
  • when the product selling guide unit 150 selectively transmits the corresponding buying request information to the communication terminal (for example, a mobile communication terminal) of the plastic operation/beauty service provider in a text message and/or voice message type, the user can commit (reserve) a specific plastic operation (or plastic operation consultation) to the medical institution that the user wants in real time, simultaneously with the series of plastic operation simulations, without separate complex processes.
  • the makeup simulation module 230, which communicates with the CU simulation operating server 101 together with the respective simulation modules, loads the CU face image 407a with which the beauty/fashion related image (for example, hair image, accessory image, clothes image, plastic operation image and the like) is selectively combined (modified) (or the original CU face image not combined with a separate image) and then modifies the CU face image 407a in accordance with a makeup product selection item of the user, thereby creating a makeup face image 413 and simulating the created image (refer to Fig. 19).
  • the makeup simulation module 230 comprises a makeup simulation control section 231 that uses/operates a processing buffer 231a for supporting a processing progress of self-control computation modules and collectively controls an overall makeup simulation process, and a makeup simulation guide platform operating section 232, a CU face image acquiring section 233, a makeup simulation engine 234 and a makeup product buying guide section 235, which are collectively controlled by the makeup simulation control section 231.
  • the makeup simulation guide platform operating section 232 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "makeup simulation guide platform 314 for guiding a makeup simulation process" as shown in Fig. 18 based on the extracted information and notifies/operates the makeup simulation guide platform 314 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
  • the CU face image acquiring section 233 communicates with the operating aiding unit 130 via an information exchange section 230a in accordance with a computation event of the user through the makeup simulation guide platform 314, thereby loading a CU face image 407a stored in the simulation operating information D/B 170.
  • the CU face image 407a may take a shape in which a hair image, an accessory image, a clothes image, a plastic operation image and the like are selectively combined (or modified) or may, alternatively, take an original image shape in which a separate image is not combined, depending on the situations.
  • the makeup simulation engine 234 progresses a series of image conversion routines to modify constitutional elements (for example, polygon mesh, specific point coordinates, color and the like) of the CU face image 407a with which the beauty/fashion related image is selectively combined (or the original CU face image), in accordance with features of the makeup product selected by the user (in this case, index information about how to change the polygon mesh, coordinates, colors and the like of the face image in accordance with the features of the respective makeup products may be stored/managed in a part of the makeup simulation engine in advance), thereby creating a series of makeup face images 413 and simulating the created images through the makeup simulation guide platform 314, as shown in Fig. 19.
  • in this case, the makeup product selection item of the user may include, for example, the type of a makeup product, the color of a makeup product and the like.
  • the user can progress the makeup simulation process with a greater sense of reality while comparing it with the other aspects of the user (hair aspect of the user, accessory aspect, clothes-wearing aspect and the like). Accordingly, the user naturally gains the advantage that the makeup product buying/selection efficiency is maximized.
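The "index information" kept by the makeup simulation engine 234 (how each makeup product changes the colors of the face image) can be sketched as a lookup table driving a per-region color blend. The products, face regions, tint colors and blend strengths below are invented for illustration and are not part of the patent.

```python
def blend(base, tint, alpha):
    """Blend an RGB base color toward a tint color; alpha in [0, 1]."""
    return tuple(round(b * (1 - alpha) + t * alpha) for b, t in zip(base, tint))

# index information: product -> (face region, tint color, strength)
MAKEUP_INDEX = {
    "lipstick_red": ("lips", (200, 30, 60), 0.8),
    "blush_pink": ("cheeks", (240, 150, 170), 0.3),
}

def apply_makeup(face_regions, product_id):
    """Return a copy of the region colors with one product applied."""
    region, tint, alpha = MAKEUP_INDEX[product_id]
    out = dict(face_regions)
    out[region] = blend(face_regions[region], tint, alpha)
    return out

face = {"lips": (180, 120, 120), "cheeks": (220, 190, 180)}
made_up = apply_makeup(face, "lipstick_red")
print(made_up["lips"])  # (196, 48, 72)
```

Applying products one after another (lipstick, then blush, and so on) composes naturally, which matches the patent's idea of accumulating beauty/fashion changes on one CU face image.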
  • the makeup product buying guide section 235 which is controlled by the makeup simulation control section 231, immediately extracts the makeup product related information stored in a makeup product related information storage area 236 (for example, ID information of the makeup product selected by the user, distributor information of the makeup product selected by the user and the like), creates a series of buying request information based on the extracted information, and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like.
  • when the product selling guide unit 150 selectively transmits the corresponding buying request information to the product distributor server 10, the product selling server 20 and the like, the user can buy a makeup product that the user wants in real time, simultaneously with the makeup simulations, without separate complex processes (needless to say, a series of subsequent processes, for example a settlement process, may be progressed by the product distributor server and the product selling server).
  • the accessory simulation module 240, which communicates with the CU simulation operating server 101 together with the respective simulation modules, loads a CU 408a with which the beauty/fashion related image (for example, hair image, clothes image, makeup image and the like) is selectively combined (or the original CU not combined with a separate image) and then combines an accessory image, in which the selection item of the user is reflected, with the CU 408a, thereby creating a CU 415 wearing an accessory and simulating the created CU.
  • the accessory simulation module 240 comprises an accessory simulation control section 241 that uses/operates a processing buffer 241a for supporting a processing progress of self-control computation modules and collectively controls an overall accessory simulation process, and an accessory simulation guide platform operating section 244, a user selection accessory information acquiring section 243, a user selection accessory image loading section 249, a CU loading section 245, an accessory simulation engine 247 and an accessory buying guide section 242, which are collectively controlled by the accessory simulation control section 241.
  • the accessory simulation guide platform operating section 244 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates an "accessory simulation guide platform 316 for guiding an accessory simulation process" as shown in Fig. 22 based on the extracted information and notifies/operates the accessory simulation guide platform 316 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
  • the user selection accessory information acquiring section 243 communicates with the accessory simulation guide platform operating section 244 in accordance with a computation event of the user through the guide platform 316, thereby acquiring the information about the accessory selected by the user.
  • the CU loading section 245 communicates with the operating aiding unit 130 via an information exchange section 240a, thereby loading a CU 408a stored in the simulation operating information D/B 170.
  • the CU 408a may take a shape in which a hair image, a clothes image, a makeup image and the like are selectively combined or may, alternatively, take an original image shape in which a separate image is not combined, depending on the situations.
  • the user selection accessory image loading section 249 which is controlled by the accessory simulation control section 241, communicates with an accessory image library 248, thereby loading a series of user selection accessory images 414, in which the user accessory selection information is reflected, from accessory images 414a that have been previously stored in the library (refer to Fig. 20).
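The loading step above (pulling user selection accessory images 414 out of the accessory images 414a stored in the library, according to the user's accessory selection information) amounts to filtering a library of records. The record fields and values in this sketch are purely illustrative assumptions.

```python
# The accessory image library 248 sketched as a list of records; selecting by
# the user's accessory information is then a simple filter over the records.
LIBRARY = [
    {"id": "ring-01", "category": "ring", "image": "<ring-01 pixels>"},
    {"id": "ring-02", "category": "ring", "image": "<ring-02 pixels>"},
    {"id": "hat-01", "category": "hat", "image": "<hat-01 pixels>"},
]

def load_selected(selection):
    """Return library records matching every key/value pair in `selection`."""
    return [rec for rec in LIBRARY
            if all(rec.get(k) == v for k, v in selection.items())]

rings = load_selected({"category": "ring"})
print([r["id"] for r in rings])  # ['ring-01', 'ring-02']
```

A production library would of course be a database query rather than an in-memory list, but the selection-information-to-image mapping has the same shape.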
  • the accessory simulation engine 247 immediately progresses a series of image combining routines to combine the user selection accessory image with the CU 408a with which the beauty/fashion related image is selectively combined (or original CU), thereby creating a CU 415 wearing the accessory and simulating the created CU through the accessory simulation guide platform 316.
  • the user can efficiently buy/select the accessory product through the accessory-wearing figure of the CU 415, which is similar to the real figure of the user.
  • the user can progress the accessory simulation process with a greater sense of reality while comparing it with the other aspects of the user (hair aspect of the user, clothes-wearing aspect, makeup aspect and the like). Accordingly, the user naturally gains the advantage that the accessory product buying/selection efficiency is maximized.
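The image-combining routine of the accessory simulation engine 247 (overlaying the selected accessory image onto the CU) is, at its simplest, per-pixel alpha compositing. The tiny "images" below are invented to keep the sketch self-contained; the patent does not specify a compositing formula.

```python
def composite_over(base_px, overlay_px):
    """Alpha-over: overlay (r, g, b, a) onto an opaque base (r, g, b); a in [0, 1]."""
    r, g, b, a = overlay_px
    br, bg, bb = base_px
    return (round(br * (1 - a) + r * a),
            round(bg * (1 - a) + g * a),
            round(bb * (1 - a) + b * a))

# 1x2 "images": CU pixels plus an accessory layer whose second pixel is transparent
cu_row = [(100, 100, 100), (100, 100, 100)]
accessory_row = [(250, 210, 40, 1.0), (0, 0, 0, 0.0)]  # gold earring over pixel 0

combined = [composite_over(b, o) for b, o in zip(cu_row, accessory_row)]
print(combined)  # [(250, 210, 40), (100, 100, 100)]
```

Transparent accessory pixels leave the CU untouched, which is what lets one CU accumulate hair, clothes, makeup and accessory layers independently.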
  • the accessory buying guide section 242 which is controlled by the accessory simulation control section 241, immediately extracts the accessory related information stored in an information storage area 242a thereof (for example, ID information of the accessory selected by the user, distributor information of the accessory selected by the user and the like), creates a series of buying request information based on the extracted information and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like.
  • when the product selling guide unit 150 selectively transmits the corresponding buying request information to the product distributor server 10, the product selling server 20 and the like, the user can buy the accessory that the user wants in real time, simultaneously with the series of accessory simulations, without separate complex processes (needless to say, a series of subsequent processes, for example a settlement process, may be progressed by the product distributor server and the product selling server).
  • the hair simulation module 250, which communicates with the CU simulation operating server together with the respective simulation modules, loads a CU face image 407a with which the beauty/fashion related image (for example, accessory image, clothes image, makeup image and the like) is selectively combined (or the original CU face image not combined with a separate image) and then combines a hair image 416, in which the selection item of the user is reflected, with the loaded CU face image 407a, thereby creating a CU face image 417 taking a specific hair pattern and simulating the created image, as shown in Fig. 23.
  • the hair simulation module 250 comprises a hair simulation control section 251 that uses/operates a processing buffer 251a for supporting a processing progress of self-control computation modules and collectively controls an overall hair simulation process, and a hair simulation guide platform operating section 254, a user selection hair information acquiring section 253, a user selection hair image loading section 259, a CU face image loading section 255, a hair simulation engine 257 and a hair beauty product buying guide section 252, which are collectively controlled by the hair simulation control section 251.
  • the hair simulation guide platform operating section 254 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "hair simulation guide platform 318 for guiding a hair simulation process" as shown in Fig. 25 based on the extracted information and notifies/operates the hair simulation guide platform 318 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
  • the user selection hair information acquiring section 253 communicates with the hair simulation guide platform operating section 254 in accordance with a computation event of the user through the guide platform 318, thereby acquiring the information about the hair selected by the user.
  • the CU face image loading section 255 communicates with the operating aiding unit 130 via an information exchange section 250a, thereby loading a CU face image 407a stored in the simulation operating information D/B 170 (refer to Fig. 23).
  • the CU face image 407a may take a shape in which an accessory image, a clothes image, a makeup image and the like are selectively combined or may, alternatively, take an original image shape in which a separate image is not combined, depending on the situations.
  • the user selection hair image loading section 259, which is controlled by the hair simulation control section 251, communicates with a hair image library 258, thereby loading a series of user selection hair images 416, in which the user selection hair information is reflected, from hair images 416a that have been previously stored in the library (refer to Fig. 23).
  • the hair simulation engine 257 immediately progresses a series of image combining routines to combine the user selection hair image with the CU face image 407a with which the beauty/fashion related image is selectively combined (or original CU face image), thereby creating a CU face image 417 having a specific hair image pattern and simulating the created CU face image through the hair simulation guide platform 318.
  • the user can efficiently determine how to change his or her hair through the hair figure of the CU face image 417, which is similar to the real figure of the user.
  • the user can progress the hair simulation process with a greater sense of reality while comparing it with the other aspects of the user (clothes aspect of the user, accessory aspect, makeup aspect and the like). Accordingly, the user naturally gains the advantage that the hair beauty selection efficiency is maximized.
  • the hair beauty product buying guide section 252 which is controlled by the hair simulation control section 251, immediately extracts the hair beauty product related information stored in an information storage area 252a thereof (for example, ID information of the hair pattern selected by the user, information about a shop capable of treating the hair pattern selected by the user and the like), creates a series of hair shop lists 320 as shown in Fig. 26 based on the extracted information and displays the created hair shop lists through a part or all of the hair simulation guide platform 318.
  • the hair beauty product buying guide section 252 immediately creates buying request information, in which the ID information of the hair pattern selected by the user, information about a hair shop capable of treating the hair pattern selected by the user and the like are reflected, and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like.
  • when the product selling guide unit 150 selectively transmits the corresponding buying request information to the communication terminal (for example, a mobile communication terminal) of the plastic operation/beauty service provider in a text message and/or voice message type, the user can commit (reserve) a specific hair beauty treatment (or hair beauty consultation) to the hair shop that the user wants in real time, simultaneously with the series of hair simulations, without separate complex processes.
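The reservation step (packaging the selected hair pattern and shop into a text and/or voice message for the provider's terminal) can be sketched as a small dispatcher. The field names, channels and message format here are assumptions for illustration; the patent does not define a message schema.

```python
def dispatch_reservation(request, channel="text"):
    """Return a (channel, payload) tuple for the provider's mobile terminal.

    `request` maps field names to values; the field names are illustrative.
    """
    if channel not in ("text", "voice"):
        raise ValueError("unsupported channel")
    payload = (f"Reservation: pattern {request['pattern_id']} "
               f"at {request['shop']} for {request['user']}")
    return (channel, payload)

channel, payload = dispatch_reservation(
    {"user": "user01", "pattern_id": "bob-cut-03", "shop": "Style Salon"})
print(channel)  # text
print(payload)
```

A voice-message channel would feed the same payload into text-to-speech; the dispatcher only decides the delivery type.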
  • in this manner, the invention provides a system for simulating a cyber-character in which the user feature is reflected, which flexibly link-disposes a computation module capable of creating a cyber-character in which a user feature (face shape and body size) is reflected (CU) and storing the CU in the system or distributing the CU to an exterior contents server, based on a wireless/wired on-line communication network such as an internet network, a computation module capable of complexly applying/providing a product/service such as clothes, accessory, makeup product, plastic operation and hair beauty service to the CU to carry out a simulation, and a computation module capable of selling/guiding, in real time, the product/service such as clothes, accessory, makeup product, plastic operation and hair beauty service in accordance with needs of a user, thereby enabling the user to possess a high-quality service, which is substantially useful in buying the fashion/beauty product or fashion/beauty service, in a one-stop manner, without the difficulty of visiting various sites or shops/stores.

Abstract

Disclosed is a system for simulating a cyber-character in which the user feature is reflected. According to the invention, it is possible to provide a system for simulating a cyber-character in which the user feature is reflected (CU), which flexibly link-disposes a computation module capable of creating the CU and storing the CU in the system or distributing the CU to an exterior contents server, based on a wireless/wired on-line communication network, a computation module capable of complexly applying/providing a product/service to the CU to carry out a simulation, and a computation module capable of selling/guiding the product/service in real time in accordance with needs of a user, thereby enabling the user to possess a high-quality service.

Description

A SYSTEM FOR SIMULATING A CYBER-CHARACTER WHICH A USER FEATURE IS REFLECTED
Technical Field
[1] The invention relates to a system for simulating a cyber-character in which a user feature is reflected, and more particularly, to a system for simulating a cyber-character in which the user feature is reflected, which flexibly link-disposes a computation module capable of creating a CU (a Cyber-character in which a User feature, i.e., an individual feature of a user such as face shape and body size, is reflected) and storing the CU in the system or distributing the CU to an exterior contents server, based on a wireless/wired on-line communication network such as an internet network, a computation module capable of complexly applying/providing a product/service such as clothes, accessory, makeup product, plastic operation and hair beauty service to the CU to carry out a simulation, and a computation module capable of selling/guiding, in real time, the product/service such as clothes, accessory, makeup product, plastic operation and hair beauty service in accordance with needs of a user, thereby enabling the user to possess a high-quality service, which is substantially useful in buying the fashion/beauty product or fashion/beauty service, in a one-stop manner, without the difficulty of visiting various sites or shops/stores.
Background Art
[2] In recent years, as living conditions have become prosperous, social interest in beauty and fashion management has also increased. Following this social environment, the class of users who want to buy a variety of visible/invisible beauty/fashion products/services relating to clothes, accessory, makeup product, plastic operation, hair beauty service and the like has also grown.
[3] Recently, as various software infrastructures have rapidly expanded, the shops/stores, which sell, handle and provide the clothes, accessory, makeup product, plastic operation, hair beauty service and the like, are equipped with a computation tool, installed in an information processing device such as a computer, capable of applying/providing a variety of products/services selected by a user to a cyber-character to carry out a simulation, thereby enabling the user to smoothly buy the product/service.
[4] However, most of the computation tools according to the prior art stick to a "classic mechanism that applies/provides the product/service to a body of a cyber-character having a typical face shape", irrespective of the individual face shapes and body types of the respective users. Accordingly, there is a very high probability that, even when a product/service that satisfies a user's taste in a cyber-character oriented simulation is actually applied/provided to the body of the user, the user will not be satisfied with the product/service.
[5] Moreover, according to the above prior art, the shops/stores, which individually sell/handle/provide the clothes, accessory, makeup product, plastic operation, hair beauty service and the like, are dispersed in all directions. Accordingly, even if a user has a desire to carry out virtual simulations of the various products/services at one time, the user cannot realize the desire unless the user visits the corresponding shops/stores (or web sites managed by the shops/stores) one by one. As a result, the user should put up with the inconvenience (since the products/services such as clothes, accessory, makeup product, plastic operation, hair beauty service and the like form a close relationship with each other, it is very important, from the viewpoint of the user who buys the products/services, to carry out the virtual simulations of the products/services at one time).
Disclosure of Invention
Technical Problem
[6] The invention has been made to solve the above problems occurring in the prior art.
An object of the invention is to provide a system for simulating a cyber-character in which the user feature is reflected, which flexibly link-disposes a computation module capable of creating a cyber-character in which a user feature (face shape and body size) is reflected (hereinafter, abbreviated to "CU") and storing the CU in the system or distributing the CU to an exterior contents server, based on a wireless/wired on-line communication network such as an internet network, a computation module capable of complexly applying/providing a product/service such as clothes, accessory, makeup product, plastic operation and hair beauty service to the CU to carry out a simulation, and a computation module capable of selling/guiding, in real time, the product/service such as clothes, accessory, makeup product, plastic operation and hair beauty service in accordance with needs of a user, thereby enabling the user to possess a high-quality service, which is substantially useful in buying the fashion/beauty product or fashion/beauty service, in a one-stop manner, without the difficulty of visiting various sites or shops/stores.
Technical Solution
[7] In order to achieve the above object, there is provided a CU simulation system comprising: a CU simulation operating server that is selectively signal-connected to a capture image acquiring system based on an on-line communication network and collectively controls a process of creating a cyber-character in which the user feature is reflected (CU) and a process of virtually applying a beauty/fashion related image to the CU to carry out a simulation, while notifying/operating a guide window for guiding an information input and observation in a part of the capture image acquiring system; a CU creating unit that is controlled by the CU simulation operating server, converts a predetermined standard face image to be matched to a user face image created/transmitted by the capture image acquiring system so as to create a CU face image, converts a predetermined standard body image to be matched to a user body feature created/transmitted by the capture image acquiring system so as to create a CU body image, and combines the CU face image and the CU body image to create a CU; and a CU simulation unit that is controlled by the CU simulation operating server, and selectively combines a plurality of beauty/fashion related images with the CU or CU face image, or modifies the CU or CU face image with which the plurality of beauty/fashion related images are selectively combined, to carry out a simulation, in accordance with a computation event of the user through the guide window.
Brief Description of the Drawings
[8] The above and other objects, features and advantages of the present invention will be more apparent from the following detailed description taken in conjunction with the accompanying drawings, in which:
[9] FIG. 1 conceptually shows a general structure of a CU simulation system according to an embodiment of the invention;
[10] FIGS. 2 and 9 conceptually show a noticed state of a guide window according to an embodiment of the invention;
[11] FIG. 3 conceptually shows a general operating process of a CU creating unit according to an embodiment of the invention;
[12] FIG. 4 conceptually shows a detailed structure of a CU creating unit according to an embodiment of the invention;
[13] FIG. 5 conceptually shows an operating process of a similar standard face image loading module according to an embodiment of the invention;
[14] FIG. 6 conceptually shows an operating process of a CU face image creating engine according to an embodiment of the invention;
[15] FIG. 7 conceptually shows an operating process of a CU body image loading engine according to an embodiment of the invention;
[16] FIG. 8 conceptually shows an operating process of a CU creating engine according to an embodiment of the invention;
[17] FIG. 10 conceptually shows an operating process of a clothes simulation module according to an embodiment of the invention;
[18] FIG. 11 conceptually shows a detailed structure of a clothes simulation module according to an embodiment of the invention;
[19] FIG. 12 conceptually shows a noticed state of a clothes simulation guide platform according to an embodiment of the invention;
[20] FIG. 13 conceptually shows a detailed structure of a plastic operation simulation module according to an embodiment of the invention;
[21] FIGS. 14 and 16 conceptually show a noticed state of a plastic operation simulation guide platform according to an embodiment of the invention;
[22] FIG. 15 conceptually shows an operating process of a plastic operation simulation engine according to an embodiment of the invention;
[23] FIG. 17 conceptually shows a detailed structure of a makeup simulation engine according to an embodiment of the invention;
[24] FIG. 18 conceptually shows a noticed state of a makeup simulation guide platform according to an embodiment of the invention;
[25] FIG. 19 conceptually shows an operating process of a makeup simulation engine according to an embodiment of the invention;
[26] FIG. 20 conceptually shows an operating process of an accessory simulation module according to an embodiment of the invention;
[27] FIG. 21 conceptually shows a detailed structure of an accessory simulation module according to an embodiment of the invention;
[28] FIG. 22 conceptually shows a noticed state of an accessory simulation guide platform according to an embodiment of the invention;
[29] FIG. 23 conceptually shows an operating process of a hair simulation module according to an embodiment of the invention;
[30] FIG. 24 conceptually shows a detailed structure of a hair simulation module according to an embodiment of the invention; and
[31] FIGS. 25 and 26 conceptually show a noticed state of a hair simulation guide platform according to an embodiment of the invention.
Mode for the Invention
[32] Hereinafter, a preferred embodiment of the present invention will be described with reference to the accompanying drawings. In the following description of the present invention, a detailed description of known functions and configurations incorporated herein will be omitted when it may make the subject matter of the present invention rather unclear.
[33]
[34] As shown in Fig. 1, a CU simulation system 100 according to an embodiment of the invention is selectively signal-connected to capture image acquiring systems 2 that are widely distributed/arranged at various places, based on a series of wired/wireless communication networks 30 such as the Internet.
[35] In this case, the capture image acquiring system 2 is provided in a booth B that is distributed at each place or in a shop/store M that sells/handles/provides clothes, accessories, makeup products, plastic operations, hair beauty services and the like.
[36] The capture image acquiring system 2 comprises a photographing set that has a capture camera, a pattern generator and an illumination device, and link-operates the capture camera, the pattern generator and the illumination device to generate a capture image for a virtual plastic operation in which the plastic operation target part is reflected, while moving up and down under outside computation control, the capture camera, the pattern generator and the illumination device being provided in one place and dispersion-disposed depending on their functions; and a device driving control tool that belongs to an information processing device 1 electrically connected to the photographing set and selectively communicates with the photographing set to control the driving states of the capture camera, the pattern generator and the illumination device and the up/down state of the photographing set, corresponding to the photographing environments. A detailed structure of the capture image acquiring system 2 is specifically described in Korean Patent Application No. 2005-101592, entitled "SYSTEM OF ACQUIRING CAPTURE IMAGE FOR VIRTUAL PLASTIC OPERATION", filed by the applicant.
[37] Here, after visiting the booth B, a user can personally operate the capture image acquiring system 2. Alternatively, after visiting the shop/store M that sells/handles/provides clothes, accessories, makeup products, plastic operations, hair beauty services and the like, a user can use the capture image acquiring system 2 in accordance with the guidance of the shop/store. As a result, the user can stably transmit his or her face image, body information (height, age, weight and the like) and the like to the CU simulation system 100 of the invention.
[38] At this time, as shown in the drawings, the CU simulation system 100 comprises a
CU simulation operating server 101 that is selectively signal-connected to the capture image acquiring system 2, an exterior contents service server 40, a product selling server 20, a product distributor server 10, a communication terminal 50 (for example, mobile communication terminal) of a plastic operation/beauty service provider and the like via an interface unit 102, a wired/wireless on-line communication network 30 and the like, and a CU creating unit 110, a CU simulation unit 200, an operating aiding unit 130 and a product selling guide unit 150, which are collectively controlled by the CU simulation operating server 101.
[39] In this case, the CU simulation operating server 101 collectively controls a process of creating a CU, a process of virtually applying a beauty/fashion related image to the CU to carry out a simulation, and a process of guiding a product that is selected by the user, while notifying/operating a guide window 301 for guiding an information input and observation in a part of the capture image acquiring system 2, as shown in Fig. 2 (or Fig. 9).
[40] At this time, the operating aiding unit 130, which is controlled by the CU simulation operating server 101, extracts various simulation data stored in a simulation operating information D/B 170 (for example, user registration data, shop/store registration data, text/video/link/image/setting data for notifying/operating the guide window, exterior server registration data, exterior communication terminal registration data, simulation creation data, CU related creation data and the like) or stores new data in the simulation operating information D/B 170 in accordance with an event progress of each computation unit disposed in the system 100, thereby enabling a series of simulation service processes by the CU simulation operating server 101 to be normally progressed without specific problems.
[41] Under such infra structure, the CU creating unit 110, which is controlled by the CU simulation operating server 101, converts a predetermined/pre- stored standard face image to be matched to a user face image 401 created/transmitted by the capture image acquiring system 2 so as to create a CU face image 407, converts a predetermined standard body image 404 to be matched to a user body feature information 402 created/transmitted by the capture image acquiring system 2 so as to create a CU body image 406 and combines the CU face image 407 and the CU body image 406 to create a CU 408.
[42] In addition, the CU simulation unit 200, which is controlled by the CU simulation operating server 101, selectively combines a plurality of beauty/fashion related images with the CU 408 or CU face image 407 or modifies the CU 408 or CU face image, 407 with which the plurality of beauty/fashion related images (clothes images, hair images, accessory images, makeup images and the like) are selectively combined, to carry out a simulation, in accordance with a computation event of the user through the guide window 301.
[43] Furthermore, the product selling guide unit 150, which is controlled by the CU simulation operating server 101, progresses a series of communication processes with the product distributor server 10, the product selling server 20, the communication terminal 50 of the plastic operation/beauty service provider and the like via the interface unit 102 in accordance with a request of the user having completed the simulation, thereby enabling the product/service selected by the user, such as clothes, accessory, makeup product, plastic operation and hair beauty service, to be sold real time.
[44] Here, as shown in Fig. 4, the CU creating unit 110 comprises a CU creation control module 111 that collectively controls a general CU creating process while using/operating a processing buffer 111a for supporting a processing progress of self-control computation modules, and a user face image receiving module 113, a user face image preprocessing module 114, a specific point appointing module 117, a similar standard face image loading module 115, a CU face image creating engine 118, a user body information receiving module 119, a CU body image loading engine 120, a CU creating engine 121, a CU exterior-providing module 123 and an information output module 122, which are collectively controlled by the CU creation control module 111.
[45] The user face image receiving module 113 forms a selective communication relation with the capture image acquiring system 2 via an information exchange module 112 and receives the user face image 401 that is transmitted from the capture image acquiring system 2. The user face image preprocessing module 114 appropriately carries out a preprocess of the user face image 401, which is received by the user face image receiving module 113 (for example, a process of removing a background of the user face image, a process of recovering a lost part of the user face image and the like).
[46] In addition, the specific point appointing module 117 gives/appoints a series of specific points to a main part (for example, vicinity of eye, philtrum, nose and the like) of the user face image that has been preprocessed by the user face image preprocessing module 114, thereby enabling a feature of the user face image to be easily recognized (needless to say, the process of appointing the specific points may be carried out in advance by the capture image acquiring system). The similar standard face image loading module 115 forms a series of communication relations with a standard face/body image storing library 116, as shown in Fig. 5, and selectively loads a similar standard face image 405 that is closest to the user face image 401 (for example, an image whose coordinates of the specific points are similar to the user face image), from the predetermined/pre-stored standard face images 403 (refer to Fig. 3).
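The closest-match lookup performed by the similar standard face image loading module 115 can be sketched as below. This is a minimal illustration that assumes the specific points are stored as coordinate arrays and that "closest" means the smallest summed Euclidean distance between corresponding points; the patent does not fix a particular metric, and the function and field names are hypothetical.

```python
import numpy as np

def load_similar_standard_face(user_points, standard_faces):
    """Pick the standard face whose specific-point (landmark) coordinates
    are closest to the user's, by summed Euclidean distance.

    user_points:    (K, 2) array of specific-point coordinates of the user face image.
    standard_faces: dict mapping a standard-face id -> (K, 2) landmark array.
    Returns the id of the most similar standard face image.
    """
    best_id, best_dist = None, float("inf")
    for face_id, pts in standard_faces.items():
        # Sum of per-landmark distances between the two faces.
        dist = np.linalg.norm(user_points - pts, axis=1).sum()
        if dist < best_dist:
            best_id, best_dist = face_id, dist
    return best_id
```

In practice the library (116 in the patent) could hold many candidates; the loop simply returns the argmin over them.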
[47] Under such structure, as shown in Fig. 6, the CU face image creating engine 118 progresses a process of acquiring "a difference degree (for example, a degree indicating how different two positions are) between the position (for example, m_k) of the k-th specific point appointed to the main part of the user face image 401 and the position (for example, v_k) of the vertex constituting the polygon mesh (PS) of the similar standard face image 405."
[48] In this case, the CU face image creating engine 118 analyzes/acquires the difference between the two positions under the limiting condition shown in equation 1, i.e., the condition that "there is little difference between the position m_k of the specific point appointed to the main part of the user face image 401 and the position v_k of the vertex of the similar standard face image 405," thereby enabling the process of acquiring the CU face image 407 to progress more rapidly while minimizing deformation of the similar standard face image 405. [49] <equation 1> [50] v_k ≅ m_k
[51] where m_k: position of the k-th specific point appointed to the main part of the user face image, and
[52] v_k: position of the k-th vertex constituting a polygon mesh of the similar standard face image. [53] When "the difference degree between the position m_k of the specific point appointed to the main part of the user face image 401 and the position v_k of the vertex constituting a polygon mesh of the similar standard face image 405" is acquired through the above process, the CU face image creating engine 118 calculates equation 2 below based on the difference degree to obtain a summation of the respective items, thereby progressing a process of estimating the positions V_l of the vertexes that will constitute the polygon mesh PN of the CU face image 407.
[54] At this time, the positions V_l of the vertexes that will constitute the polygon mesh PN of the CU face image 407 are estimated through a least-squares method, as shown in equation 2. Therefore, in the invention, the vertexes v_i constituting the polygon mesh PS of the similar standard face image 405 exhibit, within a minimum deformation range, a transition pattern that becomes optimally similar to the features of the vertexes constituting the polygon mesh PT of the user face image 401 (i.e., the feature error of the two sets of vertexes is minimized). As a result, in "the process of converting the positions v_i of the vertexes constituting the polygon mesh PS of the similar standard face image 405 into the positions V_l of the vertexes that will constitute the polygon mesh PN of the CU face image 407", which will be progressed later, the similar standard face image 405 can finally be changed into the CU face image 407 having optimally reflected the feature of the user face image 401, while minimizing the deformation of the similar standard face image.
[55] <equation 2>
[56]

\min_{V_1,\dots,V_n}\Big[\; w_s \sum_{i=1}^{N}\sum_{j\in\mathrm{adj}(i)} \|T_i - T_j\|_F^2 \;+\; w_m \sum_{i=1}^{N} \|T_i - I\|_F^2 \;+\; w_d \sum_{i=1}^{n} \|v_i - c_i\|^2 \;\Big]

[57]
[58] where V_l: position of the l-th vertex that will constitute the polygon mesh of the CU face image,
[59] T_i: transform matrix of the i-th triangle constituting the polygon mesh of the similar standard face image,
[60] T_j: transform matrix of the j-th triangle neighboring T_i,
[61] I: ideal transform matrix that is almost the same as T_i,
[62] v_i: position of the i-th vertex constituting the polygon mesh of the similar standard face image,
[63] c_i: position of the vertex constituting the polygon mesh of the user face image that nearest-corresponds to v_i, and
[64] \|\cdot\|_F: Frobenius matrix norm.
[65] At this time, the item w_s \sum_{i}\sum_{j\in\mathrm{adj}(i)} \|T_i - T_j\|_F^2 included in equation 2 estimates the values V_l so that the transform matrix T_i of the i-th triangle P1 constituting the polygon mesh PS of the similar standard face image 405 is transformed to a value as similar as possible to the transform matrix T_j of the j-th triangle P2 neighboring T_i, in the situation in which v_i is converted into V_l to form the polygon mesh PN of the CU face image 407 in earnest (refer to Fig. 6). Needless to say, when V_l is finally estimated by the calculation of equation 2 including this "adjusting item that minimizes the transform matrix difference between neighboring triangles of the polygon mesh of the similar standard face image 405" and v_i is transformed into V_l to constitute the polygon mesh PN of the CU face image 407, the CU face image 407 can maintain an optimized, very smooth shape due to the increased similarity of the neighboring polygon-mesh triangles.
[66] In addition, the item w_m \sum_{i=1}^{N} \|T_i - I\|_F^2 included in equation 2 estimates the values V_l so that the transform matrix T_i of the i-th triangle P1 constituting the polygon mesh PS of the similar standard face image 405 is transformed to a value as close as possible to the ideal transform matrix I that is almost the same as T_i, in the situation in which v_i is converted into V_l to form the polygon mesh PN of the CU face image 407 in earnest (refer to Fig. 6). Needless to say, when V_l is finally estimated by the calculation of equation 2 including this "adjusting item that minimizes the deformed degree of the polygon mesh PS of the source character" and v_i is transformed into V_l to constitute the polygon mesh PN of the CU face image 407, the vertexes constituting the polygon mesh PS of the similar standard face image 405 can naturally form the CU face image 407 having optimally reflected the feature of the user face image 401, even within the minimum deformation range.
[67] Furthermore, the item w_d \sum_{i=1}^{n} \|v_i - c_i\|^2 included in equation 2 estimates the values V_l so that the position v_i of the i-th vertex constituting the polygon mesh PS of the similar standard face image 405 is transformed while minimizing, as far as possible, its difference from the position c_i of the vertex constituting the polygon mesh PT of the user face image 401 that nearest-corresponds to v_i, in the situation in which v_i is converted into V_l to form the polygon mesh PN of the CU face image 407 in earnest (refer to Fig. 6). Needless to say, when V_l is finally estimated by the calculation of equation 2 including this "adjusting item that makes the vertex positions of the similar standard face image as close as possible to the vertex positions of the user face image 401" and v_i is transformed into V_l to constitute the polygon mesh of the CU face image 407, the CU face image 407 can naturally form the shape that is closest to the feature of the user face image 401.
[68] Here, w_s, w_m and w_d included in the respective items of equation 2 are the weight factors of the corresponding items. The CU face image creating engine 118 sets the weight factors of the respective items differently depending on conditions (for example, w_s = 0.01, w_m = 0.1 and w_d = 0.2) when calculating equation 2, thereby enabling the finally completed CU face image 407 to have a shape matched to the user face image 401 more efficiently.
[69] In the meantime, in the above processes, the user body information receiving module 119, which is controlled by the CU creation control module 111, forms a selective communication relation with the capture image acquiring system 2 via the information exchange module 112 and receives the user body information (height information, weight information, sex information, age information and the like) that is transmitted from the capture image acquiring system 2. The CU body image loading engine 120 forms a communication relation with the standard face/body image storing library 116, as shown in Fig. 7, and selectively loads the CU body image 406 conforming to the body feature of the user, from the standard body images 404 that are predetermined/pre-stored.
[70] When the creation (loading) of the CU face image 407 and the CU body image 406, in which the user feature is reflected, is completed as the computation modules carry out the functions thereof, the CU creating engine 121 immediately combines the CU face image 407 and the CU body image 406, for example combines the CU face image 407 with a face part 406a of the CU body image 406, thereby creating a completed CU 408. As a result, when the above processes are completed, the CU 408, in which the individual features of the user (face feature, body feature and the like) are appropriately reflected, can be stably created. [71] Under such circumstances, the information output module 122, which is controlled by the CU creation control module 111, outputs/transmits the completed CU (or CU face image and the like) to the CU simulation operating server 101 via the information exchange module 112, thereby enabling the corresponding CU (or CU face image and the like) to be stably stored/preserved in the simulation operating information D/B 170 and to be stably provided to the user through the guide window 302 as shown in Fig. 9.
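The combination of the CU face image 407 with the face part 406a of the CU body image 406, described above, amounts to pasting one image into a region of another. A minimal alpha-compositing sketch (the standard "over" operator; the patent does not specify a blending method, and the RGBA array layout is an assumption):

```python
import numpy as np

def combine_face_with_body(body_rgba, face_rgba, top, left):
    """Paste the CU face image onto the face part of the CU body image
    by straightforward "over" alpha compositing.

    body_rgba: (H, W, 4) float array in [0, 1] -- the CU body image
    face_rgba: (h, w, 4) float array in [0, 1] -- the CU face image
    (top, left): where the face region (406a) sits inside the body image
    """
    out = body_rgba.copy()
    h, w = face_rgba.shape[:2]
    region = out[top:top + h, left:left + w]     # view into the output
    a = face_rgba[..., 3:4]                      # face alpha channel
    region[..., :3] = a * face_rgba[..., :3] + (1 - a) * region[..., :3]
    region[..., 3:4] = a + (1 - a) * region[..., 3:4]
    return out
```

Fully opaque face pixels replace the body pixels; transparent ones leave the body untouched, so the seam blends wherever the face image's alpha feathers out.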
[72] At this time, when the user selects a specific item 302a of the guide window 302, the CU exterior-providing module 123, which is controlled by the CU creation control module 111, communicates with the exterior contents service server 40 via the information exchange module 112 and the interface unit 102 to transmit/distribute the CU to the exterior contents service server 40, thereby enabling the cyber character, which is managed by the exterior contents service server 40 (for example, an exterior contents service server having a cooperation relation with the system of the invention), to be batch-replaced with the CU 408. As a result, the user can have an advantage in that the cyber characters of the various web-sites that the user uses (for example, chatting web-sites, blogs, mini homepages and the like) are batch-replaced with the CU 408, in which the face and body features of the user are appropriately reflected.
[73] In the meantime, as shown in Fig. 1, the CU simulation unit 200 of the invention comprises a clothes simulation module 210, a plastic operation simulation module 220, a makeup simulation module 230, an accessory simulation module 240 and a hair simulation module 250.
[74] Herein, when the user clicks on a clothes simulation item 303 of the various items notified in the guide window 302 as shown in Fig. 9, the clothes simulation module 210, while communicating with the CU simulation operating server 101, loads a CU 408a with which a beauty/fashion related image (for example, hair image, accessory image, makeup image and the like) is selectively combined (or the original CU not combined with a separate image), as shown in Fig. 10, and then combines a clothes image 410, in which the selected item and the body feature of the user are reflected, with the CU 408a, thereby creating a CU 411 wearing the clothes and simulating the created CU.
[75] In this case, as shown in Fig. 11, the clothes simulation module 210 comprises a clothes simulation control section 211 that uses/operates a processing buffer 211a for supporting a processing progress of self-control computation modules and collectively controls an overall clothes simulation process, and a clothes simulation guide platform operating section 215, a user body information acquiring section 213, a user selection clothes information acquiring section 214, a user matching clothes image loading section 219, a CU loading section 216, a clothes simulation engine 217 and a clothes buying guide section 212, which are collectively controlled by the clothes simulation control section 211.
[76] The clothes simulation guide platform operating section 215 extracts a variety of operating information stored in the self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "clothes simulation guide platform 308 for guiding a clothes simulation process" as shown in Fig. 12 based on the extracted information and notifies/operates the clothes simulation guide platform 308 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
[77] Here, the user body information acquiring section 213 communicates with the CU creating unit 110 via an information exchange section 210a in accordance with a computation event of a user through the clothes simulation guide platform 308, thereby acquiring the user body information transmitted from the capture image acquiring system 2. The user selection clothes information acquiring section 214 communicates with the clothes simulation guide platform operating section 215, thereby acquiring the information about the clothes selected by the user.
[78] In addition, the CU loading section 216 communicates with the operating aiding unit 130 via the information exchange section 210a, thereby loading the CU 408a stored in the simulation operating information D/B 170 (refer to Fig. 10). In this case, depending on the processes of the other computation modules, which have been already progressed, the CU 408a may take a shape in which a hair image, an accessory image, a makeup image and the like are selectively combined or may, alternatively, take an original image shape in which a separate image is not combined, depending on the situations.
[79] Furthermore, the user matching clothes image loading section 219, which is controlled by the clothes simulation control section 211, communicates with a clothes image library 218 to load a series of user matching clothes images 410, in which the body feature of the user and the information about the clothes selected by the user are reflected, from the clothes images 409 that have been already stored in the library (refer to Fig. 10).
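Selecting a user matching clothes image 410 from the pre-stored clothes images 409 by the item the user chose and the user's body information could, in the simplest case, look like the filter below. The record schema (`item_id`, `height_range_cm`, `image_file`) is entirely illustrative, not the patent's data model.

```python
def load_user_matching_clothes(library, selection, user_body):
    """From the pre-stored clothes images, pick the variants that match both
    the item the user selected and the user's body features -- here reduced
    to choosing sizes whose height range contains the user's height.

    library:   list of clothes-image records (dicts)
    selection: dict with the user's chosen item, e.g. {"item_id": ...}
    user_body: dict with body information, e.g. {"height_cm": ...}
    """
    matches = []
    for record in library:
        if record["item_id"] != selection["item_id"]:
            continue  # not the clothes the user selected
        lo, hi = record["height_range_cm"]
        if lo <= user_body["height_cm"] <= hi:
            matches.append(record["image_file"])
    return matches
```

A real system would key on more of the received body information (weight, sex, age) in the same way.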
[80] When the CU 408a and the user matching clothes image 410 are completely acquired through the above process, the clothes simulation engine 217 immediately progresses a series of image combining routines to combine the user matching clothes image 410 with the CU 408a with which the beauty/fashion related image is selectively combined (or original CU), thereby creating the CU 411 wearing the clothes and simulating the created CU through the clothes simulation guide platform 308. As a result, the user can efficiently buy/select the clothes through the clothes wearing shape of the CU 411 similar to a real figure of the user. [81] Herein, when the CU 411 takes a shape having combined the hair image, the accessory image, the makeup image and the like in advance, the user can progress the clothes simulation process with more sense for the real while comparing the clothes simulation process with the other aspects of the user (hair aspect of the user, accessory aspect, makeup aspect and the like). Accordingly, the user can naturally have an advantage in that the clothes buying/selection efficiency is maximized at the optimized state.
[82] At this time, when the user having observed the clothes simulation process determines to buy the corresponding clothes and clicks on a buying item 309 of the clothes simulation guide platform 308, the clothes buying guide section 212, which is controlled by the clothes simulation control section 211, immediately extracts the clothes related information stored in an information storage area 212a thereof (for example, ID information of the clothes selected by the user, distributor information of the clothes selected by the user and the like), creates a series of buying request information based on the extracted information and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like. Correspondingly to this process, when the product selling guide unit 150 selectively transmits the corresponding buying request information to the product distributor server 10 or the product selling server 20 (in this case, a product selling server that is directly managed by the system of the invention), the user can buy the clothes that the user wants in real time, simultaneously with the series of clothes simulations, without separate complex processes (needless to say, a series of subsequent processes, for example a settlement process, may be progressed by the product distributor server and the product selling server).
[83] In the meantime, when the user clicks on a plastic operation simulation item 304 of the various items notified in the guide window 302 as shown in Fig. 9, the plastic operation simulation module 220, which communicates with the CU simulation operating server 101 together with the clothes simulation module 210, loads a CU face image 407a with which the beauty/fashion related image (for example, hair image, accessory image, makeup image and the like) is selectively combined (or the original CU face image not combined with a separate image) and then modifies the CU face image 407a in accordance with a plastic operation selection item of the user, thereby creating a plastic operation face image 412 and simulating the created image (refer to Fig. 15).
[84] In this case, as shown in Fig. 13, the plastic operation simulation module 220 comprises a plastic operation simulation control section 221 that uses/operates a processing buffer 221a for supporting a processing progress of self-control computation modules and collectively controls an overall plastic operation simulation process, and a plastic operation simulation guide platform operating section 222, a CU face image acquiring section 223, a plastic operation simulation engine 224 and a plastic operation product buying guide section 225, which are collectively controlled by the plastic operation simulation control section 221.
[85] Here, the plastic operation simulation guide platform operating section 222 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "plastic operation simulation guide platform 310 for guiding a plastic operation simulation process" as shown in Fig. 14 based on the extracted information and notifies/operates the plastic operation simulation guide platform 310 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
[86] Here, the CU face image acquiring section 223 communicates with the operating aiding unit 130 via an information exchange section 220a in accordance with a computation event of the user through the plastic operation simulation guide platform 310, thereby loading a CU face image 407a stored in the simulation operating information D/B 170. In this case, depending on the processes of the other computation modules, which have been already progressed, the CU face image 407a may take a shape in which a hair image, an accessory image, a makeup image, a clothes image and the like are selectively combined or may, alternatively, take an original image shape in which a separate image is not combined, depending on the situations.
[87] When the CU face image 407a is completely acquired through the above process and a plastic operation selection item of the user (for example, part selected for a plastic operation, plastic operation pattern and the like) is transmitted through the plastic operation simulation guide platform operating section 222, the plastic operation simulation engine 224 progresses a series of image conversion routines to modify the constitutional elements (for example, polygon mesh, specific point coordinates, color and the like) of the CU face image 407a with which the beauty/fashion related image is selectively combined (or the original CU face image), thereby creating a series of plastic operation face images 412 and simulating the created images through the plastic operation simulation guide platform 310, as shown in Fig. 15. As a result, the user can efficiently determine which part of the face to operate on, and how, through the plastic operation figure of the CU 412 similar to the real figure of the user.
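Modifying the polygon mesh of the CU face image for a virtual plastic operation can be illustrated by displacing the vertices around the selected part with a smooth falloff, so the edit blends into the untouched face. The Gaussian falloff below is an illustrative choice of the image conversion routine, not taken from the patent.

```python
import numpy as np

def apply_virtual_surgery(vertices, center, displacement, radius):
    """Shift mesh vertices near `center` by `displacement`, with a Gaussian
    falloff: full displacement at the center of the selected part, smoothly
    decaying to no change for vertices far from it.

    vertices:     (n, d) vertex positions of the CU face image mesh
    center:       (d,) coordinates of the plastic operation target part
    displacement: (d,) direction/amount of the requested change
    radius:       controls how wide an area the operation affects
    """
    d = np.linalg.norm(vertices - center, axis=1)
    weight = np.exp(-(d / radius) ** 2)          # 1 at center, -> 0 far away
    return vertices + weight[:, None] * displacement
```

Different plastic operation patterns would map to different centers, displacements and radii (e.g. raising the nose bridge vs. narrowing the jaw line).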
[88] Herein, when the CU face image 407a takes a shape having combined the hair image, the accessory image, the makeup image and the like in advance, the user can progress the plastic operation simulation process with more sense for the real while comparing the plastic operation simulation process with the other aspects of the user (hair aspect of the user, accessory aspect, makeup aspect and the like). Accordingly, the user can naturally have an advantage in that the plastic operation selection efficiency is maximized at the optimized state.
[89] At this time, when the user having observed the plastic operation simulation process determines the corresponding plastic operation (or consultation) and clicks on a buying item 311 of the plastic operation simulation guide platform 310, the plastic operation product buying guide section 225, which is controlled by the plastic operation simulation control section 221, immediately extracts the plastic operation related information stored in a plastic operation product related information storage area 226 (for example, ID information of the plastic operation pattern selected by the user, information of a medical institution capable of operating a plastic operation pattern selected by the user and the like), creates a series of medical institution lists 312 based on the extracted information, as shown in Fig. 16, and displays the created medical institution lists 312 through a part or all of the plastic operation simulation guide platform 310.
[90] Here, when the user selects a plastic operation medical institution that the user wants and clicks on a related item 313, the plastic operation product buying guide section 225 immediately creates buying request information, in which the ID information of the plastic operation pattern selected by the user, the information of a medical institution capable of operating the selected plastic operation pattern and the like are reflected, and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like. In correspondence with this process, when the product selling guide unit 150 selectively transmits the corresponding buying request information to the communication terminal (for example, mobile communication terminal) of the plastic operation/beauty service provider in the form of a text message and/or voice message, the user can commit (reserve) a specific plastic operation (or plastic operation consultation) to the medical institution that the user wants in real time, simultaneously with the series of plastic operation simulations, without separate complex processes.
[91] In the meantime, when the user clicks on a makeup simulation item 305 of the various items notified in the guide window 302 as shown in Fig. 9, the makeup simulation module 230, which communicates with the CU simulation operating server 101 together with the respective simulation modules, loads the CU face image 407a with which the beauty/fashion related image (for example, hair image, accessory image, clothes image, plastic operation image and the like) is selectively combined (modified) (or the original CU face image with which no separate image is combined) and then modifies the CU face image 407a in accordance with a makeup product selection item of the user, thereby creating a makeup face image 413 and simulating the created image (refer to Fig. 19).
[92] In this case, as shown in Fig. 17, the makeup simulation module 230 comprises a makeup simulation control section 231 that uses/operates a processing buffer 231a for supporting the processing progress of self-control computation modules and collectively controls the overall makeup simulation process, and a makeup simulation guide platform operating section 232, a CU face image acquiring section 233, a makeup simulation engine 234 and a makeup product buying guide section 235, which are collectively controlled by the makeup simulation control section 231.
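Each simulation module in the disclosure repeats the same structure: a control section that owns a processing buffer and collectively drives its subordinate sections. A minimal sketch of that pattern (all names invented for illustration) might look like:

```python
class SimulationModule:
    """Sketch of the control-section pattern repeated by every module:
    one controller owns a processing buffer and dispatches work to the
    sections registered under it."""

    def __init__(self):
        self.buffer = []        # stands in for the processing buffer (e.g. 231a)
        self.sections = {}      # collectively controlled sections

    def register(self, name, fn):
        self.sections[name] = fn

    def run(self, name, *args):
        result = self.sections[name](*args)
        self.buffer.append((name, result))   # the controller logs each step
        return result
```

A makeup module, for instance, would register its image-acquiring section, simulation engine and buying guide section under one controller of this shape.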
[93] Here, the makeup simulation guide platform operating section 232 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "makeup simulation guide platform 314 for guiding a makeup simulation process" as shown in Fig. 18 based on the extracted information and notifies/operates the makeup simulation guide platform 314 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
[94] Here, the CU face image acquiring section 233 communicates with the operating aiding unit 130 via an information exchange section 230a in accordance with a computation event of the user through the makeup simulation guide platform 314, thereby loading a CU face image 407a stored in the simulation operating information D/B 170. In this case, depending on the processes of the other computation modules that have already progressed, the CU face image 407a may take a shape in which a hair image, an accessory image, a clothes image, a plastic operation image and the like are selectively combined (or modified) or may, alternatively, take the original image shape in which no separate image is combined.
[95] When the CU face image 407a is completely acquired through the above process and a makeup product selection item of the user (for example, the type of a makeup product, the color of a makeup product and the like) is transmitted through the makeup simulation guide platform operating section 232, the makeup simulation engine 234 progresses a series of image conversion routines to modify constitutional elements (for example, polygon mesh, specific point coordinates, color and the like) of the CU face image 407a with which the beauty/fashion related image is selectively combined (or of the original CU face image), in accordance with the features of the makeup product selected by the user (in this case, index information about how to change the polygon mesh, coordinates, colors and the like of the face image in accordance with the features of the respective makeup products may be stored/managed in a part of the makeup simulation engine in advance), thereby creating a series of makeup face images 413 and simulating the created images through the makeup simulation guide platform 314, as shown in Fig. 19. As a result, the user can efficiently buy/select a makeup product through a makeup figure of the CU face image 413 that is similar to the real figure of the user.
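The index information mentioned above could, for instance, map each makeup product to an image region, a colour and a blend strength. The sketch below applies one such entry to a toy RGB image; the table contents and field names are assumptions for illustration, not the patent's actual data.

```python
import numpy as np

# Hypothetical per-product index information: which region of the face
# image to touch and how strongly to blend the product colour.
MAKEUP_INDEX = {
    "lipstick-rose": {"region": (slice(2, 4), slice(0, 4)),
                      "color": (200, 60, 90), "alpha": 0.6},
}

def apply_makeup(face, product_id):
    """Blend the product colour into its indexed region of an RGB face image."""
    face = face.astype(float).copy()
    info = MAKEUP_INDEX[product_id]
    region, alpha = info["region"], info["alpha"]
    face[region] = (1 - alpha) * face[region] + alpha * np.array(info["color"], dtype=float)
    return face.astype(np.uint8)

skin = np.full((4, 4, 3), 180, dtype=np.uint8)   # a flat 4x4 "face"
made_up = apply_makeup(skin, "lipstick-rose")
```

A production engine would additionally adjust mesh coordinates for products that change facial geometry, but the per-product lookup-then-modify flow is the same idea.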
[96] Herein, when the CU face image 407a takes a shape with which the hair image, the accessory image, the clothes image and the like have been combined in advance, the user can progress the makeup simulation with a greater sense of realism while comparing the simulated result with the other aspects of the user (the hair aspect, accessory aspect, clothes aspect and the like). Accordingly, the user naturally has the advantage that the makeup product buying/selection efficiency is maximized.
[97] At this time, when the user having observed the makeup simulation process determines to buy the corresponding makeup product and clicks on a buying item 315 of the makeup simulation guide platform 314, the makeup product buying guide section 235, which is controlled by the makeup simulation control section 231, immediately extracts the makeup product related information stored in a makeup product related information storage area 236 (for example, ID information of the makeup product selected by the user, distributor information of the makeup product selected by the user and the like), creates a series of buying request information based on the extracted information, and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like. In correspondence with this process, when the product selling guide unit selectively transmits the corresponding buying request information to the product distributor server 10, the product selling server 20 and the like, the user can buy a makeup product that the user wants in real time, simultaneously with the makeup simulations, without separate complex processes (needless to say, a series of subsequent processes, for example a settlement process, may be progressed by the product distributor server and the product selling server).
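A buying request of the kind described is essentially a small record assembled from the stored product information and fanned out to several receivers. The sketch below shows one way to build and dispatch it; the field names, IDs and queue representation are invented, not the disclosed message format.

```python
import json

def create_buying_request(product_id, distributor_id, user_id):
    """Assemble the buying request the guide section would forward to the
    operating server and product selling guide unit (field names invented)."""
    return {"user": user_id, "product": product_id,
            "distributor": distributor_id, "action": "buy"}

def transmit(request, targets):
    """Serialize once and hand the same payload to every target queue."""
    payload = json.dumps(request, sort_keys=True)
    for queue in targets:
        queue.append(payload)
    return payload
```

Downstream steps such as settlement would then be handled by the distributor and selling servers, as the paragraph notes.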
[98] In the meantime, when the user clicks on an accessory simulation item 306 of the various items notified in the guide window 302 as shown in Fig. 9, the accessory simulation module 240, which communicates with the CU simulation operating server 101 together with the respective simulation modules, loads a CU 408a with which the beauty/fashion related image (for example, hair image, clothes image, makeup image and the like) is selectively combined (or the original CU with which no separate image is combined) and then combines an accessory image, in which the selection item of the user is reflected, with the CU 408a, thereby creating a CU 415 wearing an accessory and simulating the created CU.
[99] In this case, as shown in Fig. 21, the accessory simulation module 240 comprises an accessory simulation control section 241 that uses/operates a processing buffer 241a for supporting the processing progress of self-control computation modules and collectively controls the overall accessory simulation process, and an accessory simulation guide platform operating section 244, a user selection accessory information acquiring section 243, a user selection accessory image loading section 249, a CU loading section 245, an accessory simulation engine 247 and an accessory buying guide section 242, which are collectively controlled by the accessory simulation control section 241.
[100] Here, the accessory simulation guide platform operating section 244 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates an "accessory simulation guide platform 316 for guiding an accessory simulation process" as shown in Fig. 22 based on the extracted information and notifies/operates the accessory simulation guide platform 316 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
[101] Under such circumstances, the user selection accessory information acquiring section 243 communicates with the accessory simulation guide platform operating section 244 in accordance with a computation event of the user through the guide platform 316, thereby acquiring the information about the accessory selected by the user.
[102] In addition, the CU loading section 245 communicates with the operating aiding unit 130 via an information exchange section 240a, thereby loading a CU 408a stored in the simulation operating information D/B 170. In this case, depending on the processes of the other computation modules that have already progressed, the CU 408a may take a shape in which a hair image, a clothes image, a makeup image and the like are selectively combined or may, alternatively, take the original image shape in which no separate image is combined.
[103] Furthermore, the user selection accessory image loading section 249, which is controlled by the accessory simulation control section 241, communicates with an accessory image library 248, thereby loading a series of user selection accessory images 414, in which the user accessory selection information is reflected, from accessory images 414a that have been previously stored in the library (refer to Fig. 20).
[104] When the CU 408a and the user selection accessory image 414 are completely acquired through the above process, the accessory simulation engine 247 immediately progresses a series of image combining routines to combine the user selection accessory image with the CU 408a with which the beauty/fashion related image is selectively combined (or with the original CU), thereby creating a CU 415 wearing the accessory and simulating the created CU through the accessory simulation guide platform 316. As a result, the user can efficiently buy/select the accessory product through the accessory wearing figure of the CU 415 that is similar to the real figure of the user.
[105] Herein, when the CU 408a takes a shape with which the hair image, the clothes image, the makeup image and the like have been combined in advance, the user can progress the accessory simulation with a greater sense of realism while comparing the simulated result with the other aspects of the user (the hair aspect, clothes wearing aspect, makeup aspect and the like). Accordingly, the user naturally has the advantage that the accessory product buying/selection efficiency is maximized.
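An image combining routine of this kind typically pastes the accessory sprite onto the character image using an alpha mask. The following is one conventional way to do it; the array shapes, uint8 convention and function name are assumptions, not the disclosed routine.

```python
import numpy as np

def composite(base, accessory_rgba, top, left):
    """Paste an RGBA accessory sprite onto an RGB base image at (top, left),
    using the sprite's alpha channel as the blend mask."""
    out = base.astype(float).copy()
    h, w = accessory_rgba.shape[:2]
    rgb = accessory_rgba[..., :3].astype(float)
    a = accessory_rgba[..., 3:4].astype(float) / 255.0   # keep a trailing axis for broadcasting
    patch = out[top:top + h, left:left + w]
    out[top:top + h, left:left + w] = (1 - a) * patch + a * rgb
    return out.astype(np.uint8)
```

For a full system the accessory would first be positioned relative to facial landmarks of the CU; the blending step itself stays as above.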
[106] At this time, when the user having observed the accessory simulation process determines to buy the corresponding accessory and clicks on a buying item 317 of the accessory simulation guide platform 316, the accessory buying guide section 242, which is controlled by the accessory simulation control section 241, immediately extracts the accessory related information stored in an information storage area 242a thereof (for example, ID information of the accessory selected by the user, distributor information of the accessory selected by the user and the like), creates a series of buying request information based on the extracted information and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like. In correspondence with this process, when the product selling guide unit 150 selectively transmits the corresponding buying request information to the product distributor server 10, the product selling server 20 and the like, the user can buy the accessory that the user wants in real time, simultaneously with the series of accessory simulations, without separate complex processes (needless to say, a series of subsequent processes, for example a settlement process, may be progressed by the product distributor server and the product selling server).
[107] In the meantime, when the user clicks on a hair simulation item 307 of the various items notified in the guide window 302 as shown in Fig. 9, the hair simulation module 250, which communicates with the CU simulation operating server together with the respective simulation modules, loads a CU face image 407a with which the beauty/fashion related image (for example, accessory image, clothes image, makeup image and the like) is selectively combined (or the original CU face image with which no separate image is combined) and then combines a hair image 416, in which the selection item of the user is reflected, with the loaded CU face image 407a, thereby creating a CU face image 417 taking a specific hair pattern and simulating the created image, as shown in Fig. 23.
[108] In this case, as shown in Fig. 24, the hair simulation module 250 comprises a hair simulation control section 251 that uses/operates a processing buffer 251a for supporting the processing progress of self-control computation modules and collectively controls the overall hair simulation process, and a hair simulation guide platform operating section 254, a user selection hair information acquiring section 253, a user selection hair image loading section 259, a CU face image loading section 255, a hair simulation engine 257 and a hair beauty product buying guide section 252, which are collectively controlled by the hair simulation control section 251.
[109] Here, the hair simulation guide platform operating section 254 extracts a variety of operating information stored in a self-information storage area, for example skin information for creating a guide platform, image information, text information, link information and setting information, creates a "hair simulation guide platform 318 for guiding a hair simulation process" as shown in Fig. 25 based on the extracted information and notifies/operates the hair simulation guide platform 318 through a part or all of the guide window 302 shown in Fig. 9 while communicating with the CU simulation operating server 101.
[110] Under such circumstances, the user selection hair information acquiring section 253 communicates with the hair simulation guide platform operating section 254 in accordance with a computation event of the user through the guide platform 318, thereby acquiring the information about the hair selected by the user.
[111] In addition, the CU face image loading section 255 communicates with the operating aiding unit 130 via an information exchange section 250a, thereby loading a CU face image 407a stored in the simulation operating information D/B 170 (refer to Fig. 23). In this case, depending on the processes of the other computation modules that have already progressed, the CU face image 407a may take a shape in which an accessory image, a clothes image, a makeup image and the like are selectively combined or may, alternatively, take the original image shape in which no separate image is combined.
[112] Furthermore, the user selection hair image loading section 259, which is controlled by the hair simulation control section 251, communicates with a hair image library 258, thereby loading a series of user selection hair images 416, in which the user selection hair information is reflected, from hair images 416a that have been previously stored in the library (refer to Fig. 23).
[113] When the CU face image 407a and the user selection hair image 416 are completely acquired through the above process, the hair simulation engine 257 immediately progresses a series of image combining routines to combine the user selection hair image with the CU face image 407a with which the beauty/fashion related image is selectively combined (or original CU face image), thereby creating a CU face image 417 having a specific hair image pattern and simulating the created CU face image through the hair simulation guide platform 318. As a result, the user can efficiently determine "how to change the hair of the user?" through the hair figure of the CU face image 417 similar to a real figure of the user.
[114] Herein, when the CU face image 407a takes a shape with which the clothes image, the accessory image, the makeup image and the like have been combined in advance, the user can progress the hair simulation with a greater sense of realism while comparing the simulated result with the other aspects of the user (the clothes aspect, accessory aspect, makeup aspect and the like). Accordingly, the user naturally has the advantage that the hair beauty selection efficiency is maximized.
[115] At this time, when the user having observed the hair simulation process determines a beauty treatment (or beauty consultation) of the corresponding hair and clicks on a buying item 319 of the hair simulation guide platform 318, the hair beauty product buying guide section 252, which is controlled by the hair simulation control section 251, immediately extracts the hair beauty product related information stored in an information storage area 252a thereof (for example, ID information of the hair pattern selected by the user, information about a shop capable of treating the hair pattern selected by the user and the like), creates a series of hair shop lists 320 as shown in Fig. 26 based on the extracted information and displays the created hair shop lists through a part or all of the hair simulation guide platform 318.
[116] Here, when the user selects a hair shop that the user wants and clicks on a related item 321, the hair beauty product buying guide section 252 immediately creates buying request information, in which the ID information of the hair pattern selected by the user, the information about a hair shop capable of treating the selected hair pattern and the like are reflected, and transmits the created buying request information to the CU simulation operating server 101, the product selling guide unit 150 and the like. In correspondence with this process, when the product selling guide unit 150 selectively transmits the corresponding buying request information to the communication terminal (for example, mobile communication terminal) of the plastic operation/beauty service provider in the form of a text message and/or voice message, the user can commit (reserve) a specific hair beauty treatment (or hair beauty consultation) to the hair shop that the user wants in real time, simultaneously with the series of hair simulations, without separate complex processes.
Industrial Applicability
[117] As described above, according to the invention, it is possible to provide a system for simulating a cyber-character in which the user feature is reflected. The system flexibly link-disposes, based on a wired/wireless on-line communication network such as the internet: a computation module capable of creating a cyber-character (CU) in which a user feature (face shape and body size) is reflected and of storing the CU in the system or distributing the CU to an exterior contents server; a computation module capable of complexly applying/providing a product/service such as clothes, an accessory, a makeup product, a plastic operation or a hair beauty service to the CU to carry out a simulation; and a computation module capable of selling/guiding such a product/service in real time in accordance with the needs of a user. The user is thereby enabled to possess a high-quality service, which is substantially useful in buying the fashion/beauty product or fashion/beauty service, in a one-stop manner, without the difficulty of visiting various sites or shops/stores. While the invention has been shown and described with reference to certain preferred embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made thereto without departing from the spirit and scope of the invention as defined by the appended claims.

Claims

[1] A CU simulation system comprising: a CU simulation operating server that is selectively signal-connected to a capture image acquiring system based on an on-line communication network and collectively controls a process of creating a cyber-character in which the user feature is reflected (CU) and a process of virtually applying a beauty/fashion related image to the CU to carry out a simulation, while notifying/operating a guide window for guiding an information input and observation in a part of the capture image acquiring system; a CU creating unit that is controlled by the CU simulation operating server, converts a predetermined standard face image to be matched to a user face image created/transmitted by the capture image acquiring system so as to create a CU face image, converts a predetermined standard body image to be matched to a user body feature created/transmitted by the capture image acquiring system so as to create a CU body image and combines the CU face image and the CU body image to create a CU; and a CU simulation unit that is controlled by the CU simulation operating server, and selectively combines a plurality of beauty/fashion related images with the CU or CU face image or modifies the CU or CU face image with which the plurality of beauty/fashion related images are selectively combined, to carry out a simulation, in accordance with a computation event of the user through the guide window.
[2] The CU simulation system according to claim 1, wherein the CU creating unit comprises: a CU creation control unit that collectively controls an overall CU creating process; a similar standard face image loading module that is controlled by the CU creation control unit and selectively loads a similar standard face image closest to the user face image from predetermined standard face images; a CU face image creating engine that is controlled by the CU creation control unit, estimates positions of vertexes that will constitute a polygon mesh of the CU face image on the basis of a difference degree between a position of a specific point appointed to the user face image and a position of a vertex that will constitute a polygon mesh of the similar standard face image, and converts the positions of the vertexes constituting the polygon mesh of the similar standard face image into positions of the vertexes constituting the polygon mesh of the CU face image, thereby creating a CU face image matched to the user face image; a CU body image loading engine that is controlled by the CU creation control unit and selectively loads a CU body image conforming with the user body feature from predetermined standard body images; and a CU creating engine that is controlled by the CU creation control unit and combines the CU face image and the CU body image to create a completed CU.
[3] The CU simulation system according to claim 2, wherein the CU face image creating engine calculates the following equation to estimate the positions of the vertexes that will constitute the polygon mesh of the CU face image.
<equation 2>

$$\tilde{V}_i \;=\; \underset{v_1,\dots,v_n}{\arg\min}\;\sum_{i}\sum_{j \in \mathrm{adj}(i)} \bigl\| T_i - T_j \bigr\|_F^{2} \;+\; \sum_{i} \bigl\| T_i - I \bigr\|_F^{2} \;+\; \sum_{i} \bigl\| v_i - c_i \bigr\|^{2}$$

where,
$\tilde{V}_i$ : position of the $i$-th vertex that will constitute the polygon mesh of the CU face image,
$T_i$ : transform matrix of the $i$-th triangle constituting the polygon mesh of the similar standard face image,
$T_j$ : transform matrix of the $j$-th triangle neighboring $T_i$,
$I$ : ideal transform matrix that is almost the same as $T_i$,
$v_i$ : position of the $i$-th vertex constituting the polygon mesh of the similar standard face image,
$c_i$ : position of the $i$-th vertex constituting the polygon mesh of the user face image that nearest-corresponds to $v_i$, and
$\|\cdot\|_F$ : Frobenius norm.
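The minimization in claim 3 is a linear least-squares problem. As an illustrative toy (a 1-D analogue with assumed unit weights, where preserving the edge differences of the standard mesh stands in for the triangle-transform smoothness terms), the closest-point and smoothness equations can be stacked and solved with an ordinary least-squares solver:

```python
import numpy as np

std = np.array([0.0, 1.0, 2.0, 3.0])   # standard-mesh vertex positions v_i
usr = np.array([0.0, 1.2, 2.4, 3.6])   # corresponding user-face points c_i
w_c, w_s = 1.0, 1.0                    # weights (assumed; not in the claim)

rows, rhs = [], []
for i in range(4):                     # closest-point equations: v_i ~ c_i
    e = np.zeros(4); e[i] = w_c
    rows.append(e); rhs.append(w_c * usr[i])
for i in range(3):                     # smoothness: keep standard edge lengths
    e = np.zeros(4); e[i], e[i + 1] = -w_s, w_s
    rows.append(e); rhs.append(w_s * (std[i + 1] - std[i]))

# Solve the stacked overdetermined system in the least-squares sense.
V, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)
```

The solution lands between the standard-mesh spacing and the user points, exactly the compromise the energy describes; the actual engine solves the same kind of system over 3-D triangle transforms.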
[4] The CU simulation system according to claim 2, further comprising a CU exterior-providing module that is controlled by the CU creation control unit, transmits/distributes the CU to an exterior contents service server in accordance with a computation event of a user through the guide window under the state that the CU is completely created by the CU creating engine, and enables a cyber-character that is managed by the exterior contents service server to be batch-replaced with the CU.
[5] The CU simulation system according to claim 1, wherein the CU simulation unit comprises: a clothes simulation module that communicates with the CU simulation operating server, loads a CU with which a beauty/fashion related image is selectively combined, in accordance with a computation event of a user through the guide window, and then combines the CU with a clothes image in which a selection item and a body feature of the user are reflected, thereby carrying out a simulation; a plastic operation simulation module that communicates with the CU simulation operating server, loads a CU face image with which a beauty/fashion related image is selectively combined, in accordance with a computation event of a user through the guide window, and then modifies the CU face image in accordance with a plastic operation selection item of the user, thereby carrying out a simulation; a makeup simulation module that communicates with the CU simulation operating server, loads a CU face image with which a beauty/fashion related image is selectively combined, in accordance with a computation event of a user through the guide window, and then modifies the CU face image in accordance with a makeup product selection item of the user, thereby carrying out a simulation; an accessory simulation module that communicates with the CU simulation operating server, loads a CU with which a beauty/fashion related image is selectively combined, in accordance with a computation event of a user through the guide window, and then combines the CU with an accessory image in which a selection item of the user is reflected, thereby carrying out a simulation; and a hair simulation module that communicates with the CU simulation operating server, loads a CU face image with which a beauty/fashion related image is selectively combined, in accordance with a computation event of a user through the guide window, and then combines the CU face image with a hair image in which a selection item of the user is
reflected, thereby carrying out a simulation.
[6] The CU simulation system according to claim 5, wherein the clothes simulation module comprises: a clothes simulation control section that collectively controls an overall clothes simulation process; a clothes simulation guide platform operating section that is controlled by the clothes simulation control section, creates a clothes simulation guide platform for guiding the clothes simulation process and notifies/operates the clothes simulation guide platform through a part or all of the guide window while communicating with the CU simulation operating server; a user matching clothes image loading section that is controlled by the clothes simulation control section and loads a user matching clothes image in which the body feature of the user and the information about the clothes selected by the user are reflected, from pre-stored clothes images in accordance with a computation event of the user through the clothes simulation guide platform; and a clothes simulation engine that is controlled by the clothes simulation control section and combines the user matching clothes image with the CU with which a beauty/fashion related image is selectively combined, thereby carrying out a simulation, in accordance with a computation event of the user through the clothes simulation guide platform.
[7] The CU simulation system according to claim 5, wherein the plastic operation simulation module comprises: a plastic operation simulation control section that collectively controls an overall plastic operation simulation process; a plastic operation simulation guide platform operating section that is controlled by the plastic operation simulation control section, creates a plastic operation simulation guide platform for guiding the plastic operation simulation process and notifies/operates the plastic operation simulation guide platform through a part or all of the guide window while communicating with the CU simulation operating server; and a plastic operation simulation engine that is controlled by the plastic operation simulation control section and modifies a feature of the CU face image with which a beauty/fashion related image is selectively combined, thereby carrying out a simulation, in accordance with a plastic operation selection event of the user through the plastic operation simulation guide platform.
[8] The CU simulation system according to claim 5, wherein the makeup simulation module comprises: a makeup simulation control section that collectively controls an overall makeup simulation process; a makeup simulation guide platform operating section that is controlled by the makeup simulation control section, creates a makeup simulation guide platform for guiding the makeup simulation process and notifies/operates the makeup simulation guide platform through a part or all of the guide window while communicating with the CU simulation operating server; and a makeup simulation engine that is controlled by the makeup simulation control section, loads feature information corresponding to a specific makeup product appointed by the user from pre-stored makeup product feature information, and then modifies a feature of the CU face image with which a beauty/fashion related image is selectively combined, thereby carrying out a simulation, in accordance with the loaded feature information of the makeup product.
[9] The CU simulation system according to claim 5, wherein the accessory simulation module comprises: an accessory simulation control section that collectively controls an overall accessory simulation process; an accessory simulation guide platform operating section that is controlled by the accessory simulation control section, creates an accessory simulation guide platform for guiding the accessory simulation process and notifies/operates the accessory simulation guide platform through a part or all of the guide window while communicating with the CU simulation operating server; a user selection accessory image loading section that is controlled by the accessory simulation control section and loads an accessory image selected by the user from pre-stored accessory images, in accordance with a computation event of the user through the accessory simulation guide platform; and an accessory simulation engine that is controlled by the accessory simulation control section and combines the accessory image selected by the user with the CU with which a beauty/fashion related image is selectively combined, thereby carrying out a simulation, in accordance with a computation event of the user through the accessory simulation guide platform.
[10] The CU simulation system according to claim 5, wherein the hair simulation module comprises: a hair simulation control section that collectively controls an overall hair simulation process; a hair simulation guide platform operating section that is controlled by the hair simulation control section, creates a hair simulation guide platform for guiding the hair simulation process and notifies/operates the hair simulation guide platform through a part or all of the guide window while communicating with the
CU simulation operating server; a user selection hair image loading section that is controlled by the hair simulation control section and loads a hair image selected by the user from pre-stored hair images, in accordance with a computation event of the user through the hair simulation guide platform; and a hair simulation engine that is controlled by the hair simulation control section and combines the hair image selected by the user with the CU face image with which a beauty/fashion related image is selectively combined, thereby carrying out a simulation, in accordance with a computation event of the user through the hair simulation guide platform.
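Both the accessory engine (claim 9) and the hair engine (claim 10) perform the same combination step: paste a selected overlay image onto the CU face image. The patent does not specify how; the grid representation, the `None`-as-transparency convention, and the function name `combine_layer` below are illustrative assumptions.

```python
# Minimal sketch of the image-combination step shared by the accessory and
# hair simulation engines: overlay a layer onto the CU face image at an offset.

def combine_layer(base, layer, top, left):
    """Paste `layer` onto `base` at (top, left); None marks a transparent pixel."""
    out = [row[:] for row in base]          # copy so the CU face image is untouched
    for r, row in enumerate(layer):
        for c, px in enumerate(row):
            in_bounds = 0 <= top + r < len(out) and 0 <= left + c < len(out[0])
            if px is not None and in_bounds:
                out[top + r][left + c] = px
    return out

face = [["skin"] * 4 for _ in range(4)]     # toy 4x4 CU face image
hair = [["hair", "hair", None]]             # one-row hair layer with transparency
combined = combine_layer(face, hair, 0, 1)
```

Opaque layer pixels replace the face pixels beneath them, transparent ones let the face show through, and out-of-bounds pixels are clipped, so the same routine serves glasses, earrings, or a hair style.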
PCT/KR2006/004490 2006-06-23 2006-10-31 A system for simulating a cyber-character which a user feature is reflected WO2007148855A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2006-0057000 2006-06-23
KR1020060057000A KR100830673B1 (en) 2006-06-23 2006-06-23 The system which simulates the cyber-character which the user feature is reflected

Publications (1)

Publication Number Publication Date
WO2007148855A1 true WO2007148855A1 (en) 2007-12-27

Family

ID=38833577

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2006/004490 WO2007148855A1 (en) 2006-06-23 2006-10-31 A system for simulating a cyber-character which a user feature is reflected

Country Status (2)

Country Link
KR (1) KR100830673B1 (en)
WO (1) WO2007148855A1 (en)


Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR101158453B1 (en) * 2010-11-15 2012-06-19 주식회사 영우씨엔아이 Apparatus and Method for coordinating a simulated clothes with the three dimensional effect at plane using the two dimensions image data
KR101431521B1 (en) * 2013-12-10 2014-08-21 이병환 Method for operating beauty information portal site
KR101905501B1 (en) * 2016-12-29 2018-10-08 주식회사 카카오 Method and apparatus of recommending contents
KR102209888B1 (en) * 2018-08-17 2021-01-29 동의대학교 산학협력단 A method for inputting body shape information on a terminal and a method for wearing virtual clothing based on inputted body shape information and a system therefor

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20010073961A (en) * 2000-01-24 2001-08-03 윤덕용 Human Head/Face Modeller Generation Method
KR20010105511A (en) * 2000-05-12 2001-11-29 장일수 electronic commerce system using virtual reality simulation and method thereof
US20020052805A1 (en) * 2000-10-31 2002-05-02 Junji Seki Sales transaction support method, sales transaction support apparatus
US20020072974A1 (en) * 2000-04-03 2002-06-13 Pugliese Anthony V. System and method for displaying and selling goods and services in a retail environment employing electronic shopper aids
US20020156703A1 (en) * 2000-06-09 2002-10-24 Katsuyoshi Abekawa Product displaying/selling system
US20030065255A1 (en) * 2001-10-01 2003-04-03 Daniela Giacchetti Simulation of an aesthetic feature on a facial image

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20040020184A (en) * 2002-08-30 2004-03-09 주식회사데이콤 A method for Internet application service using characters based on the real time video image


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10079244B2 (en) 2008-01-15 2018-09-18 Micron Technology, Inc. Semiconductor constructions and NAND unit cells
WO2015003081A1 (en) * 2013-07-03 2015-01-08 Glasses.Com, Inc. Systems and methods for recommending products via crowdsourcing and detecting user characteristics
CN110263795A (en) * 2019-06-04 2019-09-20 华东师范大学 One kind is based on implicit shape and schemes matched object detection method
CN110263795B (en) * 2019-06-04 2023-02-03 华东师范大学 Target detection method based on implicit shape model and graph matching

Also Published As

Publication number Publication date
KR20070122042A (en) 2007-12-28
KR100830673B1 (en) 2008-05-20

Similar Documents

Publication Publication Date Title
US20210177124A1 (en) Information processing apparatus, information processing method, and computer-readable storage medium
JP7253017B2 (en) AUGMENTED REALITY SYSTEM AND METHOD USING REFLECTION
US11682155B2 (en) Skeletal systems for animating virtual avatars
US11763510B2 (en) Avatar animation using markov decision process policies
WO2007148855A1 (en) A system for simulating a cyber-character which a user feature is reflected
US11868515B2 (en) Generating textured polygon strip hair from strand-based hair for a virtual character
JP4359784B2 (en) Face image synthesis method and face image synthesis apparatus
US20120299912A1 (en) Avatar-based virtual dressing room
JP7278724B2 (en) Information processing device, information processing method, and information processing program
EP1436800A1 (en) Industrial augmented reality
JP6470438B1 (en) Mirror device and program
TWI780919B (en) Method and apparatus for processing face image, electronic device and storage medium
US20130120425A1 (en) Character generating system, character generating method, and program
CN110637324B (en) Three-dimensional data system and three-dimensional data processing method
CN113313072A (en) Method, device and equipment for constructing intelligent dynamic page and storage medium
WO2014022608A2 (en) Avatar-based virtual dressing room
CN114638929A (en) Online virtual fitting method and device, electronic equipment and storage medium
KR102136137B1 (en) Customized LED mask pack manufacturing apparatus thereof
CN114270402A (en) Method and apparatus for constructing three-dimensional scanning human body model
KR100950053B1 (en) The system which provide a specialized advertisement contents where the data which the user designates is reflected
KR102509296B1 (en) Method for Matching Nail Tips
JP2020077270A (en) Method for generating 3D object arranged in virtual reality space
CN110288446A (en) A kind of custom made clothing method and relevant device
JP2005107960A (en) Wardrobe providing method and wardrobe providing program
WO2023189838A1 (en) Information processing device, information processing method, and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 06812329

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: LOSS OF RIGHTS COMMUNICATION (EPO F1205A OF 030309)

122 Ep: pct application non-entry in european phase

Ref document number: 06812329

Country of ref document: EP

Kind code of ref document: A1