WO2013177448A1 - Systems and methods for feature tracking - Google Patents
Systems and methods for feature tracking
- Publication number
- WO2013177448A1 (PCT/US2013/042504)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- gpu
- images
- variance
- patch
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/04—Texture mapping
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T15/00—3D [Three Dimensional] image rendering
- G06T15/08—Volume rendering
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
- G06T17/30—Polynomial surface description
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T19/00—Manipulating 3D models or images for computer graphics
- G06T19/006—Mixed reality
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
-
- G—PHYSICS
- G02—OPTICS
- G02C—SPECTACLES; SUNGLASSES OR GOGGLES INSOFAR AS THEY HAVE THE SAME FEATURES AS SPECTACLES; CONTACT LENSES
- G02C13/00—Assembling; Repairing; Cleaning
- G02C13/003—Measuring during assembly or fitting of spectacles
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2200/00—Indexing scheme for image data processing or generation, in general
- G06T2200/04—Indexing scheme for image data processing or generation, in general involving 3D image data
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/16—Cloth
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2210/00—Indexing scheme for image generation or computer graphics
- G06T2210/61—Scene description
Definitions
- a computer-implemented method for processing, by a graphical processor unit (GPU), a plurality of images of a user is described.
- a plurality of features detected by the GPU in a first image of the plurality of images of the user may be selected.
- Each selected feature may include one or more pixels.
- in a second image of the plurality of images of the user, a search may be performed for the plurality of features selected in the first image.
- a variance may be calculated, on the GPU, for each selected feature found in the second image.
- the variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image.
- the calculated variance may be stored in a variance file.
- one or more patches may be selected from among the selected features in the first image based on the calculated variance of each selected feature.
- Each patch may include a square area of pixels centered on one of the selected features of the first image.
- the first image may be removed from memory.
- the one or more patches may be selected based on a predetermined threshold of calculated variance.
- the one or more patches may be selected based on a predetermined number of patches.
- each variance may be divided into first and second elements. The first element may be stored in a first file and the second element may be stored in a second file.
- a cross-correlation algorithm may be performed on a GPU to determine how a first patch, selected among the one or more patches, is positioned in first and second sample images of the plurality of images of the user.
- performing the cross-correlation algorithm on the GPU may include determining a pose of the user in the first and second sample images, performing a fast Fourier transform (FFT) on the first patch, and performing the FFT on the first and second sample images.
- the first sample image may be placed in the real element of a complex number and the second sample image may be placed in the imaginary element of the complex number.
- the FFT of the first patch may be stored in a third file.
- performing the cross-correlation algorithm on the GPU may include multiplying element-wise the FFT of the first patch by the FFT of the first and second sample images, calculating an inverse FFT of the multiplied FFTs, resulting in a first score for the first sample image and a second score for the second sample image, and normalizing the result of the cross-correlation by dividing both first and second scores by the calculated variance stored in the variance file.
- the cross-correlation of each selected patch may be performed on the GPU simultaneously.
- a second cross-correlation algorithm may be performed on third and fourth sample images of the plurality of images of the user using the FFT of the first patch stored in the third file to determine how the first patch is positioned in the third and fourth sample images.
- a position of the selected feature of the first patch may be determined as a point in a virtual three-dimensional (3-D) space.
- a computing device configured to process, by a graphical processor unit (GPU), a plurality of images of a user.
- the device may include a processor and memory in electronic communication with the processor.
- the memory may store instructions that are executable by the GPU to select a plurality of features detected by the GPU in a first image of the plurality of images of the user. Each selected feature may include one or more pixels.
- the instructions may be executable by the GPU to search, in a second image of the plurality of images of the user, for the plurality of features selected in the first image.
- the instructions may be executable by the GPU to calculate variance, on the GPU, for each selected feature found in the second image.
- the variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image.
- the instructions may be executable by the GPU to store the calculated variance in a variance file.
- a computer-program product to process, by a graphical processor unit (GPU), a plurality of images of a user is also described.
- the computer-program product may include a non-transitory computer-readable medium that stores instruc- tions.
- the instructions may be executable by the GPU to select a plurality of features detected by the GPU in a first image of the plurality of images of the user. Each selected feature may include one or more pixels.
- the instructions may be executable by the GPU to search, in a second image of the plurality of images of the user, for the plurality of features selected in the first image.
- the instructions may be executable by the GPU to calculate variance, on the GPU, for each selected feature found in the second image.
- the variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image.
- the instructions may be executable by the GPU to store the calculated variance in a variance file.
- a computer-implemented method for processing, by a graphical processor unit (GPU), a plurality of images of a user is described.
- a plurality of features detected by the GPU in a first image of the plurality of images of the user may be selected.
- Each selected feature may include one or more pixels.
- in a second image of the plurality of images of the user, a search may be performed for the plurality of features selected in the first image.
- a variance may be calculated, on the GPU, for each selected feature found in the second image.
- the variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image.
- the calculated variance may be stored in a variance file.
- FIG. 1 is a block diagram illustrating one embodiment of an environment in which the present systems and methods may be implemented;
- FIG. 2 is a block diagram illustrating another embodiment of an environment in which the present systems and methods may be implemented;
- FIG. 3 is a block diagram illustrating one example of a graphical processor unit (GPU);
- FIG. 4 is a block diagram illustrating one example of a feature detection module;
- FIG. 5 is a block diagram illustrating one example of a cross correlation module;
- FIG. 6 is a diagram illustrating an example of a device for capturing an image of a user;
- FIG. 7 illustrates an example arrangement of features detected in the depicted images of a user;
- FIG. 8 is a flow diagram illustrating one embodiment of a method for detecting features in images;
- FIG. 9 is a flow diagram illustrating one embodiment of a method for performing cross-correlation algorithms on a GPU;
- FIG. 10 is a flow diagram illustrating one embodiment of a method for performing a cross-correlation algorithm on two images simultaneously; and
- FIG. 11 depicts a block diagram of a computer system suitable for implementing the present systems and methods.
- the systems and methods described herein relate to processing, by a graphical processor unit (GPU), a plurality of images of a user.
- the systems and methods described herein relate to feature detection and normalized cross-correlation (i.e., template matching) of a set of images.
- Feature detection may include performing a "computer vision algorithm" to detect an "interesting" part of an image.
- Features may be used as a starting point in a computer vision algorithm.
- a desirable property of a feature detector may be repeatability, i.e., whether the same feature may be detected in one or more different images of the same scene.
- the image data may be copied from the GPU to the central processing unit (CPU), back to the GPU, and so forth. Copying image data back and forth between the GPU and CPU causes a bottleneck on the CPU, wasting valuable resources and computing cycles.
- feature detection of each image, as well as the cross-correlation of each interest point (i.e., corners, blobs, and/or points used in image analysis to detect distinguishable features of an image) on an image may be performed one at a time, one after the other.
- feature detection and cross-correlation may be performed on the GPU simultaneously.
- using a GPU instead of the CPU, the fast Fourier transform (FFT), element-wise multiplication, inverse FFT, and normalization may be performed simultaneously, all in parallel, for every interest point of a given image.
- the variance (degree of variation between pixels of two or more images) may not be calculated because of the computing costs involved in calculating variance on the CPU.
- calculating variance on a GPU may be relatively fast compared to the CPU, and the results of the variance may be stored and used later in cross-correlation.
- rounding errors in the cross-correlation may be most affected when the variance of an interest point, or an extracted patch of the interest point, is low.
- the most "trackable" features in an image may be those that have the highest variance.
- FIG. 1 is a block diagram illustrating one embodiment of an environment 100 in which the present systems and methods may be implemented.
- the systems and methods described herein may be performed on a single device (e.g., device 102).
- a GPU 104 may be located on the device 102.
- devices 102 include mobile devices, smart phones, tablet computing devices, personal computing devices, computers, servers, etc.
- a device 102 may include a GPU 104, a camera 106, and a display 108.
- the device 102 may be coupled to a database 110.
- the database 110 may be internal to the device 102.
- the database 110 may be external to the device 102.
- the database 110 may include variance data 112 and FFT patch data 114.
- the GPU 104 may enable feature detection and normalized cross-correlation to be performed in efficient, parallel operations.
- the GPU 104 may obtain multiple images of the user.
- the GPU 104 may capture multiple images of a user via the camera 106.
- the GPU 104 may capture a video (e.g., a 5 second video) via the camera 106.
- the GPU 104 may use variance data 112 and FFT patch data 114 in relation to feature detection, cross-correlation, and 3-D modeling of a user.
- the GPU may detect a degree of variation between a feature, or interest point, detected in a first image, and the same feature detected in a second image.
- the GPU 104 may store the detected variance in the variance data 112.
- the GPU 104 may generate a patch of an interest point.
- a patch may be a square set of pixels (e.g., a 10px by 10px square of pixels) centered on the interest point.
- the GPU 104 may perform an FFT algorithm on the patch.
- the GPU 104 may store the FFT of the patch in the FFT patch data 114.
- the GPU 104 may use, and reuse, the stored variance and FFT of the patch in relation to performing feature detection and feature tracking cross-correlation of one or more images.
- FIG. 2 is a block diagram illustrating another embodiment of an environment 200 in which the present systems and methods may be implemented.
- a device 102-a may communicate with a server 206 via a network 204.
- Examples of networks 204 include local area networks (LAN), wide area networks (WAN), virtual private networks (VPN), wireless networks (using 802.11, for example), cellular networks (using 3G and/or LTE, for example), etc.
- the network 204 may include the internet.
- the device 102-a may be one example of the device 102 illustrated in FIG. 1.
- the device 102-a may include the camera 106, the display 108, and an application 202.
- the device 102-a may not include a GPU 104.
- both a device 102-a and a server 206 may include a GPU 104 where at least a portion of the functions of the GPU 104 are performed separately and/or concurrently on both the device 102-a and the server 206.
- the server 206 may include the GPU 104 and may be coupled to the database 110.
- the GPU 104 may access the variance data 112 and the FFT patch data 114 in the database 110 via the server 206.
- the database 110 may be internal or external to the server 206. In some embodiments, the database 110 may be accessible by the device 102-a and/or the server 206 over the network 204.
- the application 202 may capture multiple images via the camera 106. For example, the application 202 may use the camera 106 to capture a video. Upon capturing the multiple images, the application 202 may process the multiple images to generate image data. In some embodiments, the application 202 may transmit one or more images to the server 206. Additionally or alternatively, the application 202 may transmit to the server 206 the image data or at least one file associated with the image data.
- the GPU 104 may process multiple images of a user to detect features in an image, track the same features among multiple images, and determine a point in a virtual 3-D space corresponding to the tracked features.
- the application 202 may process one or more images captured by the camera 106 in order to generate a 3-D model of a user.
- FIG. 3 is a block diagram illustrating one example of a GPU 104-a.
- the GPU 104-a may be one example of the GPU 104 depicted in FIGS. 1 and/or 2.
- the GPU 104-a may include a feature detection module 302 and a cross-correlation module 304.
- the feature detection module 302 may examine a pixel of an image to determine whether the pixel includes a feature of interest. In some embodiments, the feature detection module 302 detects a face and/or head of a user in an image. In some embodiments, the feature detection module 302 detects features of the user's head and/or face. In some embodiments, the feature detection module 302 may detect an edge, corner, interest point, blob, and/or ridge in an image of a user. An edge may be points of an image where there is a boundary (or an edge) between two image regions, or a set of points in the image which have a relatively strong gradient magnitude. Corners and interest points may be used interchangeably.
- An interest point may refer to a point-like feature in an image, which has a local two dimensional structure.
- the feature detection module 302 may search for relatively high levels of curvature in an image gradient to detect an interest point and/or corner (e.g., corner of an eye, corner of a mouth).
- the feature detection module 302 may detect in an image of a user's face the corners of the eyes, eye centers, pupils, eye brows, point of the nose, nostrils, corners of the mouth, lips, center of the mouth, chin, ears, forehead, cheeks, and the like.
- a blob may include a complementary description of image structures in terms of regions, as opposed to corners that may be point-like in comparison.
- the feature detection module 302 may detect a smooth, non-point-like area (i.e., blob) in an image. Additionally, or alternatively, in some embodiments, the feature detection module 302 may detect a ridge of points in the image. In some embodiments, the feature detection module 302 may extract a local image patch around a detected feature in order to track the feature in other images.
- the cross-correlation module 304 may process a feature detected by the feature detection module 302.
- the cross-correlation module 304 may track a feature detected by the feature detection module 302 and determine a position of a point in a virtual 3-D space corresponding to the tracked feature. Operations of the respective feature detection and cross-correlation modules 302 and 304 are discussed in further detail below.
- FIG. 4 is a block diagram illustrating one example of a feature detection module 302-a.
- the feature detection module 302-a may be one example of the feature detection module 302 illustrated in FIG. 3.
- the feature detection module 302 may include a selection module 402, a comparing module 404, a variance module 406, and a patch module 408.
- the selection module 402 may select a feature detected by the GPU 104 in a first image.
- the first image may be one of multiple images.
- the multiple images may include images of a user.
- Each detected feature may include one or more pixels.
- the detected features may include edges, interest points, corners, blobs, ridges, and/or any other visual or pixel feature contained in an image (e.g., corners of the eyes, eye centers, point of the nose, corners of the mouth, etc.).
- the comparing module 404 may search a second image of the plurality of images of the user to find a match for the feature detected in the first image.
- the selection module 402 may select the detected corner of the user's eye as a feature to be tracked in subsequent images.
- the comparing module 404 may compare the detected corner of the user's eye in image 1 to the corresponding detected corner of the user's eye in image 2.
- the variance module 406 may calculate a variance for each selected feature found in the second image.
- the variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image. For example, upon comparing the detected corner of the user's eye in image 1 to the corresponding detected corner of the user's eye in image 2, the variance module 406 may determine the degree of variation between the corner of the user's eye in image 2 and the corner of the user's eye in image 1. In some embodiments, the variance module 406 may write the calculated variance to a file and store the file in a storage medium. In some embodiments, the variance module 406 may store the variance in the variance data 112.
- the variance module 406 may divide a calculated variance into first and second elements.
- the variance module 406 may store the first element in a first file and the second element in a second file within the variance data 112.
- the variance module 406 divides the calculated variance based on a low order element of the variance and a high order element of the variance. For example, if the variance were calculated to be a decimal number equal to 123,456, then the variance module 406 may divide the hundreds element (i.e., low order element) from the thousands element (i.e., high order element).
- the variance module 406 may store the thousands element "123" in a first high order element file in the variance data 112, and may store the hundreds element "456" in a low order element file in the variance data 112 (a sketch of this split appears below).
- the GPU 104 and the elements of the GPU 104 may access the high and low order element files stored in the variance data 112 in order to implement the calculated variance in one or more processes related to feature detection, detection tracking, cross-correlation, and generation of a 3-D model.
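A minimal sketch of this high/low order split, using the 123,456 example above; the three-digit element width and the file names are illustrative assumptions rather than values from the disclosure.

```python
def split_variance(variance, width=3):
    """Split an integer variance into high-order and low-order decimal elements."""
    base = 10 ** width                      # 1000 for three-digit elements
    return divmod(int(variance), base)      # e.g., 123456 -> (123, 456)

high, low = split_variance(123456)
with open("variance_high.txt", "w") as f:   # first file: high order element
    f.write(str(high))                      # "123" (thousands element)
with open("variance_low.txt", "w") as f:    # second file: low order element
    f.write(str(low))                       # "456" (hundreds element)
```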
- the patch module 408 may select, or extract, from among the selected features in the first image one or more patches based on the calculated variance of each selected feature.
- Each extracted patch may include a square area of pixels centered on one of the selected features, or interest points, of the first image.
- the patch module 408 may remove the first image from memory.
- the patch module 408 may select and extract the one or more patches based on a predetermined threshold of calculated variance. Because a relatively low variance may cause a rounding error in the normalization of a cross-correlation process, in some embodiments, the patch module 408 may extract only those patches where the associated variance is above a certain level of variance. In some embodiments, the patch module 408 may select the one or more patches based on a predetermined number of patches.
- FIG. 5 is a block diagram illustrating one example of a cross correlation module 304-a.
- the cross-correlation module 304-a may be one example of the cross-correlation module 304 illustrated in FIG. 3.
- the cross-correlation module 304-a may include a pose detection module 502, a FFT module 504, a multiplication module 506, and a normalizing module 508.
- the cross-correlation module 304-a may perform a cross-correlation algorithm to determine how a patch, selected among the one or more patches, is positioned in first and second sample images of the plurality of images of the user. In some embodiments, the cross-correlation module 304-a may determine a position of the selected feature of the first patch as a point in a virtual three-dimensional (3-D) space. In some configurations, as part of the process to perform the cross-correlation algorithm, the pose detection module 502 may determine a pose of the user in the first and second sample images. The FFT module 504 may perform a FFT on the first patch. In some embodiments, the FFT of the first patch may be written to a file and stored in the FFT patch data 114.
- the FFT module 504 may perform a FFT on the first and second sample images.
- the FFT module 504 may place the first sample image in a real element of a complex number (e.g., "a" in (a + bi)), and may place the second sample image in an imaginary element of the complex number (e.g., "b" in (a + bi)).
- the multiplication module 506 may multiply element-wise the FFT of the first patch by the FFT of the first and second sample images.
- the FFT module 504 may calculate an inverse FFT of the multiplied FFTs, resulting in a first score for the first sample image and a second score for the second sample image.
- the normalization module 508 may normalize the result of the cross-correlation by dividing both first and second scores by the calculated variance stored in the variance file. For example, the normalization module 508 may access the high and low order element files stored in the variance data 112 to divide the first and second scores by the calculated variance.
- the cross correlation module 304-a may perform the cross-correlation of every selected patch of a given image simultaneously.
- the patch module 408 may extract 32 patches from image 1. The 32 patches may be cross-correlated simultaneously with images 2 and 3 by placing images 2 and 3 in the elements of a complex number, performing the FFT of each patch, and normalizing the result by the stored variance.
- the FFTs of each patch may be stored by the patch module 408 in the FFT patch data 114 for subsequent cross-correlation operations.
- the cross-correlation module 304-a may perform a second cross-correlation algorithm on images 4 and 5 using the same FFT patches stored in the FFT patch data 114.
- FIG. 6 is a diagram 600 illustrating an example of a device 102-b for capturing an image 604 of a user 602.
- the device 102-b may be one example of the device 102 illustrated in FIGS. 1 and/or 2.
- the device 102-b may include a camera 106-b and a display 108-b.
- the camera 106-b and display 108-b may be examples of the respective camera 106 and display 108 illustrated in FIGS. 1 and/or 2.
- the user may operate the device 102-b.
- the application 202 may allow the user to interact with and/or operate the device 102-b.
- the camera 106-b may allow the user to capture an image 604 of the user 602.
- the GPU 104 may perform feature detection and feature tracking in relation to the images of the user captured by the device 102-b. Additionally, or alternatively, the GPU 104 may perform a normalized cross-correlation algorithm on one or more of the images of the user to track one or more features detected in each image.
- the GPU 104 may determine a position of the selected feature as one or more points in a virtual three-dimensional (3-D) space to enable the generation of a 3-D model of the user.
- FIG. 7 illustrates an example arrangement 700 of a feature 706 detected in the depicted images 702 and 704.
- the example arrangement 700 may include a first image of the user 702 and a second image of the user 704.
- the feature detection module 302 may detect a feature 708 on the first image of the user 702 (e.g., an eye of the user).
- the selection module 402 may select the feature 708.
- the patch module 408 may extract the selected feature 708 as a patch 706.
- the patch module 408 may remove the image 702 from memory.
- the comparing module 404 may compare the patch 706 to the second image of the user 704.
- the variance module 406 may measure the degree of variation between the patch 706 and the detected feature 710. For example, the variance module 406 may detect that the detected feature 710 has shifted in one or more directions and/or changed shape (e.g., change in the shape of the eye due to rotation of the face) in relation to the patch 706.
- the variance module 406 may store the variance in the variance data 112. The stored variance may be used to normalize a cross-correlation of the patch 706 with one or more images.
- the FFT module 504 may take the FFT of the patch 706 and store the FFT of the patch 706 in the FFT patch data 114.
- the cross-correlation module 304 may use the stored FFT of the patch 706 in the performance of multiple cross-correlation algorithms.
- FIG. 8 is a flow diagram illustrating one embodiment of a method 800 for detecting features in images.
- the method 800 may be implemented by the GPU 104 illustrated in FIGS. 1, 2, and/or 3.
- the method 800 may be implemented by the application 202 illustrated in FIG. 2.
- a plurality of features detected by the GPU in a first image of the plurality of images of the user may be selected. Each selected feature may include one or more pixels.
- in a second image of the plurality of images of the user, a search may be performed for the plurality of features selected in the first image.
- a variance may be calculated, on the GPU, for each selected feature found in the second image.
- the variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image.
- the calculated variance may be stored in a variance file.
- FIG. 9 is a flow diagram illustrating one embodiment of a method 900 for performing cross-correlation algorithms on a GPU.
- the method 900 may be implemented by the GPU 104 illustrated in FIGS. 1, 2, and/or 3. In some configurations, the method 900 may be implemented by the application 202 illustrated in FIG. 2.
- one or more patches may be selected from among selected features in a first image based on a calculated variance of each selected feature.
- a first cross-correlation algorithm may be performed on a GPU using a FFT of a first patch stored in a file to determine how the first patch is positioned in first and second sample images.
- a second cross-correlation algorithm may be performed on the GPU on third and fourth sample images using the FFT of the first patch stored in the file to determine how the first patch is positioned in third and fourth sample images.
- FIG. 10 is a flow diagram illustrating one embodiment of a method 1000 for performing a cross-correlation algorithm on two images simultaneously.
- the method 1000 may be implemented by the GPU 104 illustrated in FIGS. 1, 2, and/or 3.
- the method 1000 may be implemented by the application 202 illustrated in FIG. 2.
- a pose of the user in the first and second sample images may be determined.
- a FFT may be performed on a patch extracted from an image.
- a FFT may be performed on the first and second sample images.
- the first sample image may be placed in a real element of a complex number and the second sample image may be placed in an imaginary element of the complex number.
- the FFT of the first patch may be multiplied element-wise by the FFT of the first and second sample images.
- an inverse FFT of the multiplied FFTs may be calculated, resulting in a first score for the first sample image and a second score for the second sample image.
- the result of the cross-correlation may be normalized by dividing both first and second scores by the calculated variance stored in the variance file.
- FIG. 11 depicts a block diagram of a computer system 1100 suitable for implementing the present systems and methods.
- the depicted computer system 1100 may be one example of a server 206 depicted in FIG. 2.
- the system 1100 may be one example of a device 102 depicted in FIGS. 1, 2, and/or 6.
- Computer system 1100 includes a bus 1102 which interconnects major subsystems of computer system 1100, such as a GPU 1104, a system memory 1106 (typically RAM, but which may also include ROM, flash RAM, or the like), an input/output controller 1108, an external audio device, such as a speaker system 1110 via an audio output interface 1112, an external device, such as a display screen 1114 via display adapter 1116, serial ports 1118 and mouse 1146, a keyboard 1122 (interfaced with a keyboard controller 1124), multiple USB devices 1126 (interfaced with a USB controller 1128), a storage interface 1130, a host bus adapter (HBA) interface card 1136A operative to connect with a Fibre Channel network 1138, a host bus adapter (HBA) interface card 1136B operative to connect to a SCSI bus 1140, and an optical disk drive 1142 operative to receive an optical disk 1144.
- the GPU 1104 may be one example of the GPU 104 depicted in FIGS. 1, 2, and/or 3. Also included are a mouse 1146 (or other point-and-click device, coupled to bus 1102 via serial port 1118), a modem 1148 (coupled to bus 1102 via serial port 1120), and a network interface 1150 (coupled directly to bus 1102).
- Bus 1102 allows data communication between GPU 1104 and system memory 1106, which may include read-only memory (ROM) or flash memory (neither shown), and random access memory (RAM) (not shown), as previously noted.
- the RAM is generally the main memory into which the operating system and application programs are loaded.
- the ROM or flash memory can contain, among other code, the Basic Input-Output system (BIOS) which controls basic hardware operation such as the interaction with peripheral components or devices.
- one or more instructions related to the operations of the GPU 1104 to implement the present systems and methods may be stored within the system memory 1106.
- Applications resident with computer system 1100 are generally stored on and accessed via a non-transitory computer readable medium, such as a hard disk drive (e.g., fixed disk drive 1152).
- applications can be in the form of electronic signals modulated in accordance with the application and data communication technology when accessed via network modem 1148 or interface 1150.
- Storage interface 1130 can connect to a standard computer readable medium for storage and/or retrieval of information, such as a fixed disk drive 1152.
- Fixed disk drive 1152 may be a part of computer system 1100 or may be separate and accessed through other interface systems.
- Modem 1148 may provide a direct connection to a remote server via a telephone link or to the Internet via an internet service provider (ISP).
- Network interface 1150 may provide a direct connection to a remote server via a direct network link to the Internet via a POP (point of presence).
- Network interface 1150 may provide such connection using wireless techniques, including digital cellular telephone connection, Cellular Digital Packet Data (CDPD) connection, digital satellite data connection or the like.
- Many other devices or subsystems (not shown) may be connected in a similar manner (e.g., document scanners, digital cameras and so on). Conversely, all of the devices shown in FIG. 11 need not be present to practice the present systems and methods.
- the devices and subsystems can be interconnected in different ways from that shown in FIG. 11.
- the operation of at least some of the computer system 1100 such as that shown in FIG. 11 is readily known in the art and is not discussed in detail in this application.
- Code to implement the present disclosure can be stored in a non-transitory computer-readable medium such as one or more of system memory 1106, fixed disk 1152, or optical disk 1144.
- the operating system provided on computer system 1100 may be MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, Linux®, or another known operating system.
- a signal can be directly transmitted from a first block to a second block, or a signal can be modified (e.g., amplified, attenuated, delayed, latched, buffered, inverted, filtered, or otherwise modified) between the blocks.
- a signal input at a second block can be conceptualized as a second signal derived from a first signal output from a first block due to physical limitations of the circuitry involved (e.g., there will inevitably be some attenuation and delay). Therefore, as used herein, a second signal derived from a first signal includes the first signal or any modifications to the first signal, whether due to circuit limitations or due to passage through other circuit elements which do not change the informational and/or final functional aspect of the first signal.
- each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations.
- any disclosure of components contained within other components should be considered exemplary in nature since many other architectures can be implemented to achieve the same functionality.
Abstract
A computer-implemented method for processing, by a graphical processor unit (GPU), a plurality of images of a user is described. A plurality of features detected by the GPU in a first image of the plurality of images of the user is selected. Each selected feature includes one or more pixels. In a second image of the plurality of images of the user, a search is performed for the plurality of features selected in the first image. A variance is calculated, on the GPU, for each selected feature found in the second image. The variance indicates a degree to which a portion of the second image varies from a corresponding portion of the first image. The calculated variance is stored in a variance file.
Description
SYSTEMS AND METHODS FOR FEATURE TRACKING
RELATED APPLICATIONS
[0001] This application claims priority to U.S. Provisional Application No. 61/650,983, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on May 23, 2012; U.S. Provisional Application No. 61/735,951, entitled SYSTEMS AND METHODS TO VIRTUALLY TRY-ON PRODUCTS, filed on December 11, 2012; and U.S. Application No. 13/775,764, filed on 25 February 2013, entitled SYSTEMS AND METHODS FOR FEATURE TRACKING, all of which are incorporated herein in their entirety by this reference.
BACKGROUND
[0002] The use of computer systems and computer-related technologies continues to increase at a rapid pace. This increased use of computer systems has influenced the advances made to computer-related technologies. Indeed, computer systems have increasingly become an integral part of the business world and the activities of individual consumers. Computers have opened up an entire industry of internet shopping. In many ways, online shopping has changed the way consumers purchase products. For example, a consumer may want to know what they will look like in and/or with a product. On the webpage of a certain product, a photograph of a model with the particular product may be shown. However, users may want to see more accurate depictions of themselves in relation to various products.
DISCLOSURE OF THE INVENTION
[0003] According to at least one embodiment, a computer-implemented method for processing, by a graphical processor unit (GPU), a plurality of images of a user is described. A plurality of features detected by the GPU in a first image of the plurality of images of the user may be selected. Each selected feature may include one or more pixels. In a second image of the plurality of images of the user, a search may be performed for the plurality of features selected in the first image. A variance may be calculated, on the GPU, for each selected feature found in the second image. The variance may indicate a degree to which a portion of the second image varies from a
corresponding portion of the first image. The calculated variance may be stored in a variance file.
[0004] In one embodiment, one or more patches may be selected from among the selected features in the first image based on the calculated variance of each selected feature. Each patch may include a square area of pixels centered on one of the selected features of the first image. In some configurations, upon selecting the one or more patches in the first image, the first image may be removed from memory. In some embodiments, the one or more patches may be selected based on a predetermined threshold of calculated variance. In one embodiment, the one or more patches may be selected based on a predetermined number of patches. In some configurations, each variance may be divided into first and second elements. The first element may be stored in a first file and the second element may be stored in a second file.
[0005] In one embodiment, a cross-correlation algorithm may be performed on a GPU to determine how a first patch, selected among the one or more patches, is positioned in first and second sample images of the plurality of images of the user. In some embodiments, performing the cross-correlation algorithm on the GPU may include determining a pose of the user in the first and second sample images, performing a fast Fourier transform (FFT) on the first patch, and performing the FFT on the first and second sample images. The first sample image may be placed in the real element of a complex number and the second sample image may be placed in the imaginary element of the complex number. The FFT of the first patch may be stored in a third file. In some configurations, performing the cross-correlation algorithm on the GPU may include multiplying element-wise the FFT of the first patch by the FFT of the first and second sample images, calculating an inverse FFT of the multiplied FFTs, resulting in a first score for the first sample image and a second score for the second sample image, and normalizing the result of the cross-correlation by dividing both first and second scores by the calculated variance stored in the variance file.
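As a rough illustration of this packing step, the sketch below uses NumPy on the CPU, whereas the disclosure performs the equivalent work on the GPU. The conjugate of the patch spectrum is used so that the element-wise product corresponds to correlation rather than convolution (a detail the text does not spell out), the patch is assumed to have been zero-padded to the sample-image size, and the function and variable names are illustrative.

```python
import numpy as np

def packed_cross_correlation(patch_fft, sample_a, sample_b, variance):
    """Correlate one patch against two sample images with a single FFT pass.

    patch_fft : FFT of the patch, zero-padded to the sample-image size.
    sample_a, sample_b : two real-valued sample images of identical shape.
    variance : stored variance used to normalize the correlation scores.
    """
    # Pack both real images into one complex array: a + i*b.
    packed = sample_a.astype(np.float64) + 1j * sample_b.astype(np.float64)
    packed_fft = np.fft.fft2(packed)

    # Element-wise multiply by the conjugated patch spectrum and invert.
    corr = np.fft.ifft2(np.conj(patch_fft) * packed_fft)

    # The real part carries the score map for sample_a, the imaginary part for sample_b.
    score_a = corr.real / variance
    score_b = corr.imag / variance
    return score_a, score_b
```

Because both sample images ride through a single FFT, one element-wise multiplication, and one inverse FFT, the per-patch cost is roughly halved relative to correlating against each sample image separately.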
[0006] In one embodiment, the cross-correlation of each selected patch may be performed on the GPU simultaneously. In some configurations, on the GPU, a second cross-correlation algorithm may be performed on third and fourth sample images of the plurality of images of the user using the FFT of the first patch stored
in the third file to determine how the first patch is positioned in the third and fourth sample images. In some embodiments, a position of the selected feature of the first patch may be determined as a point in a virtual three-dimensional (3-D) space.
[0007] A computing device configured to process, by a graphical processor unit (GPU), a plurality of images of a user is also described. The device may include a processor and memory in electronic communication with the processor. The memory may store instructions that are executable by the GPU to select a plurality of features detected by the GPU in a first image of the plurality of images of the user. Each selected feature may include one or more pixels. In one embodiment, the instructions may be executable by the GPU to search, in a second image of the plurality of images of the user, for the plurality of features selected in the first image. In some configurations, the instructions may be executable by the GPU to calculate variance, on the GPU, for each selected feature found in the second image. The variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image. In one embodiment, the instructions may be executable by the GPU to store the calculated variance in a variance file.
[0008] A computer-program product to process, by a graphical processor unit (GPU), a plurality of images of a user is also described. The computer-program product may include a non-transitory computer-readable medium that stores instructions. The instructions may be executable by the GPU to select a plurality of features detected by the GPU in a first image of the plurality of images of the user. Each selected feature may include one or more pixels. In one embodiment, the instructions may be executable by the GPU to search, in a second image of the plurality of images of the user, for the plurality of features selected in the first image. In some configurations, the instructions may be executable by the GPU to calculate variance, on the GPU, for each selected feature found in the second image. The variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image. In one embodiment, the instructions may be executable by the GPU to store the calculated variance in a variance file.
[0009] According to at least one embodiment, a computer-implemented method for processing, by a graphical processor unit (GPU), a plurality of images of a user is described. A plurality of features detected by the GPU in a first image of the plurality
of images of the user may be selected. Each selected feature may include one or more pixels. In a second image of the plurality of images of the user, a search may be performed for the plurality of features selected in the first image. A variance may be calculated, on the GPU, for each selected feature found in the second image. The variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image. The calculated variance may be stored in a variance file.
[0010] Features from any of the above-mentioned embodiments may be used in combination with one another in accordance with the general principles described herein. These and other embodiments, features, and advantages will be more fully understood upon reading the following detailed description in conjunction with the accompanying drawings and claims.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] The accompanying drawings illustrate a number of exemplary embodiments and are a part of the specification. Together with the following description, these drawings demonstrate and explain various principles of the instant disclosure.
[0012] FIG. 1 is a block diagram illustrating one embodiment of an environment in which the present systems and methods may be implemented;
[0013] FIG. 2 is a block diagram illustrating another embodiment of an environment in which the present systems and methods may be implemented;
[0014] FIG. 3 is a block diagram illustrating one example of a graphical processor unit (GPU);
[0015] FIG. 4 is a block diagram illustrating one example of a feature detection module;
[0016] FIG. 5 is a block diagram illustrating one example of a cross correlation module;
[0017] FIG. 6 is a diagram illustrating an example of a device for capturing an image of a user;
[0018] FIG. 7 illustrates an example arrangement of features detected in the depicted images of a user;
[0019] FIG. 8 is a flow diagram illustrating one embodiment of a method for detecting features in images;
[0020] FIG. 9 is a flow diagram illustrating one embodiment of a method for performing cross-correlation algorithms on a GPU;
[0021] FIG. 10 is a flow diagram illustrating one embodiment of a method for performing a cross-correlation algorithm on two images simultaneously; and
[0022] FIG. 11 depicts a block diagram of a computer system suitable for implementing the present systems and methods.
[0023] While the embodiments described herein are susceptible to various modifications and alternative forms, specific embodiments have been shown by way of example in the drawings and will be described in detail herein. However, the exemplary embodiments described herein are not intended to be limited to the particular forms disclosed. Rather, the instant disclosure covers all modifications, equivalents, and alternatives falling within the scope of the appended claims.
BEST MODE(S) FOR CARRYING OUT THE INVENTION
[0024] The systems and methods described herein relate to processing, by a graphical processor unit (GPU), a plurality of images of a user. Specifically, the systems and methods described herein relate to feature detection and normalized cross-correlation (i.e., template matching) of a set of images. Feature detection may include performing a "computer vision algorithm" to detect an "interesting" part of an image. Features may be used as a starting point in a computer vision algorithm. A desirable property of a feature detector may be repeatability, i.e., whether the same feature may be detected in one or more different images of the same scene. Typically, when working with feature detection and template matching of images, the image data may be copied from the GPU to the central processing unit (CPU), back to the GPU, and so forth. Copying image data back and forth between the GPU and CPU causes a bottleneck on the CPU, wasting valuable resources and computing cycles. On the CPU, feature detection of each image, as well as the cross-correlation of each interest point (i.e., corners, blobs, and/or points used in image analysis to detect distinguishable features of an image) on an image, may be performed one at a time, one after the other. However, because the results of one cross correlation do
not depend on the results from another, feature detection and cross-correlation may be performed on the GPU simultaneously. Using a GPU instead of the CPU, the fast Fourier transform (FFT), element-wise multiplication, inverse FFT, and normalization may be performed simultaneously, all in parallel, for every interest point of a given image.
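To make the "all in parallel" idea concrete, the sketch below batches the FFT, element-wise multiplication, inverse FFT, and normalization over a whole stack of interest-point patches with NumPy; on a GPU each patch would map onto its own group of threads. The array shapes and names are assumptions, and the patch FFTs are presumed to have been zero-padded to the image size.

```python
import numpy as np

def correlate_all_patches(patch_ffts, image, variances):
    """Cross-correlate every extracted patch with one image in a single batched pass.

    patch_ffts : (N, H, W) complex array of precomputed, zero-padded patch FFTs.
    image      : (H, W) real-valued sample image.
    variances  : (N,) stored variance of each patch, used for normalization.
    """
    image_fft = np.fft.fft2(image)                                   # (H, W)
    corr = np.fft.ifft2(np.conj(patch_ffts) * image_fft, axes=(-2, -1))
    return corr.real / variances[:, None, None]                      # (N, H, W) score maps
```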
[0025] Additionally, in a typical implementation, the variance (degree of variation between pixels of two or more images) may not be calculated because of the computing costs involved in calculating variance on the CPU. However, calculating variance on a GPU may be relatively fast compared to the CPU, and the results of the variance may be stored and used later in cross-correlation. However, rounding errors in the cross-correlation may be most affected when the variance of an interest point, or an extracted patch of the interest point, is low. Thus, in a GPU implementation, the most "trackable" features in an image may be those that have the highest variance.
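A sketch of one way to favor high-variance features when choosing patches; the patch size, variance threshold, and patch cap are illustrative values, not parameters taken from the disclosure.

```python
import numpy as np

def select_trackable_patches(image, points, patch_size=10, min_variance=25.0, max_patches=32):
    """Keep the interest points whose surrounding patches vary the most.

    image      : 2-D grayscale array.
    points     : iterable of (row, col) interest-point coordinates.
    patch_size : side length of the square patch centered on each point.
    """
    half = patch_size // 2
    candidates = []
    for r, c in points:
        patch = image[r - half:r + half, c - half:c + half]
        if patch.shape != (patch_size, patch_size):
            continue                              # skip points too close to the border
        variance = float(np.var(patch))
        if variance >= min_variance:              # low variance invites rounding error
            candidates.append((variance, (r, c), patch))

    # Highest-variance patches are the most "trackable"; keep at most max_patches of them.
    candidates.sort(key=lambda item: item[0], reverse=True)
    return candidates[:max_patches]
```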
[0026] FIG. 1 is a block diagram illustrating one embodiment of an environment 100 in which the present systems and methods may be implemented. In some embodiments, the systems and methods described herein may be performed on a single device (e.g., device 102). For example, a GPU 104 may be located on the device 102. Examples of devices 102 include mobile devices, smart phones, tablet computing devices, personal computing devices, computers, servers, etc.
[0027] In some configurations, a device 102 may include a GPU 104, a camera 106, and a display 108. In one example, the device 102 may be coupled to a database 110. In one embodiment, the database 110 may be internal to the device 102. In another embodiment, the database 110 may be external to the device 102. In some configurations, the database 110 may include variance data 112 and FFT patch data 114.
[0028] In one embodiment, the GPU 104 may enable feature detection and normalized cross-correlation to be performed in efficient, parallel operations. In some configurations, the GPU 104 may obtain multiple images of the user. For example, the GPU 104 may capture multiple images of a user via the camera 106. For instance, the GPU 104 may capture a video (e.g., a 5 second video) via the camera 106. In some configurations, the GPU 104 may use variance data 112 and FFT patch
data 114 in relation to feature detection, cross-correlation, and 3-D modeling of a user. For example, the GPU may detect a degree of variation between a feature, or interest point, detected in a first image, and the same feature detected in a second image. The GPU 104 may store the detected variance in the variance data 112. In some configurations, the GPU 104 may generate a patch of an interest point. A patch may be a square set of pixels (e.g., a 10px by 10px square of pixels) centered on the interest point. In some embodiments, the GPU 104 may perform an FFT algorithm on the patch. The GPU 104 may store the FFT of the patch in the FFT patch data 114. The GPU 104 may use, and reuse, the stored variance and FFT of the patch in relation to performing feature detection and feature tracking cross-correlation of one or more images.
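A minimal sketch of generating a patch around an interest point, precomputing its FFT, and caching it for reuse across later cross-correlation passes; the .npy file stands in for the FFT patch data 114 store, and the zero-padding to the image size is an assumption needed for the later element-wise multiplication.

```python
import numpy as np

def extract_patch(image, point, patch_size=10):
    """Square set of pixels (patch_size x patch_size) centered on the interest point."""
    half = patch_size // 2
    r, c = point
    return image[r - half:r + half, c - half:c + half]

def precompute_patch_fft(patch, image_shape, path="fft_patch_data.npy"):
    """Zero-pad the patch to the sample-image size, take its FFT, and cache it."""
    padded = np.zeros(image_shape, dtype=np.float64)
    padded[:patch.shape[0], :patch.shape[1]] = patch
    patch_fft = np.fft.fft2(padded)
    np.save(path, patch_fft)                      # reused for later image pairs
    return patch_fft

# Later passes can reload the cached spectrum instead of recomputing it:
# patch_fft = np.load("fft_patch_data.npy")
```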
[0029] FIG. 2 is a block diagram illustrating another embodiment of an environment 200 in which the present systems and methods may be implemented. In some embodiments, a device 102-a may communicate with a server 206 via a network 204. Examples of networks 204 include local area networks (LAN), wide area networks (WAN), virtual private networks (VPN), wireless networks (using 802.11, for example), cellular networks (using 3G and/or LTE, for example), etc. In some configurations, the network 204 may include the internet. In some configurations, the device 102-a may be one example of the device 102 illustrated in FIG. 1. For example, the device 102-a may include the camera 106, the display 108, and an application 202. It is noted that in some embodiments, the device 102-a may not include a GPU 104. In some embodiments, both a device 102-a and a server 206 may include a GPU 104 where at least a portion of the functions of the GPU 104 are performed separately and/or concurrently on both the device 102-a and the server 206.
[0030] In some embodiments, the server 206 may include the GPU 104 and may be coupled to the database 110. For example, the GPU 104 may access the variance data 112 and the FFT patch data 114 in the database 110 via the server 206. The database 110 may be internal or external to the server 206. In some embodiments, the database 110 may be accessible by the device 102-a and/or the server 206 over the network 204.
[0031] In some configurations, the application 202 may capture multiple images via the camera 106. For example, the application 202 may use the camera
106 to capture a video. Upon capturing the multiple images, the application 202 may process the multiple images to generate image data. In some embodiments, the application 202 may transmit one or more images to the server 206. Additionally or alternatively, the application 202 may transmit to the server 206 the image data or at least one file associated with the image data.
[0032] In some configurations, the GPU 104 may process multiple images of a user to detect features in an image, track the same features among multiple images, and determine a point in a virtual 3-D space corresponding to the tracked features. In some embodiments, the application 202 may process one or more images captured by the camera 106 in order to generate a 3-D model of a user.
[0033] FIG. 3 is a block diagram illustrating one example of a GPU 104-a. The GPU 104-a may be one example of the GPU 104 depicted in FIGS. 1 and/or 2. As depicted, the GPU 104-a may include a feature detection module 302 and a cross-correlation module 304.
[0034] In some configurations, the feature detection module 302 may examine a pixel of an image to determine whether the pixel includes a feature of interest. In some embodiments, the feature detection module 302 detects a face and/or head of a user in an image. In some embodiments, the feature detection module 302 detects features of the user's head and/or face. In some embodiments, the feature detection module 302 may detect an edge, corner, interest point, blob, and/or ridge in an image of a user. An edge may be points of an image where there is a boundary (or an edge) between two image regions, or a set of points in the image which have a relatively strong gradient magnitude. Corners and interest points may be used interchangeably. An interest point may refer to a point-like feature in an image, which has a local two dimensional structure. In some embodiments, the feature detection module 302 may search for relatively high levels of curvature in an image gradient to detect an interest point and/or corner (e.g., corner of an eye, corner of a mouth). Thus, the feature detection module 302 may detect in an image of a user's face the corners of the eyes, eye centers, pupils, eye brows, point of the nose, nostrils, corners of the mouth, lips, center of the mouth, chin, ears, forehead, cheeks, and the like. A blob may include a complementary description of image structures in terms of regions, as opposed to corners that may be point-like in comparison. Thus, in
some embodiments, the feature detection module 302 may detect a smooth, non-point-like area (i.e., blob) in an image. Additionally, or alternatively, in some embodiments, the feature detection module 302 may detect a ridge of points in the image. In some embodiments, the feature detection module 302 may extract a local image patch around a detected feature in order to track the feature in other images.
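The disclosure does not name a particular detector; as one common way to find "relatively high levels of curvature in an image gradient," the sketch below computes a Harris-style corner response with NumPy and SciPy. The smoothing sigma and the constant k are conventional defaults, not values from the disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def harris_response(image, sigma=1.5, k=0.04):
    """Corner response map: large values where the local gradient has high curvature."""
    gy, gx = np.gradient(image.astype(np.float64))
    sxx = gaussian_filter(gx * gx, sigma)
    syy = gaussian_filter(gy * gy, sigma)
    sxy = gaussian_filter(gx * gy, sigma)
    det = sxx * syy - sxy * sxy                   # determinant of the structure tensor
    trace = sxx + syy
    return det - k * trace * trace                # peaks mark corner-like interest points
```

Thresholding this response and keeping local maxima yields candidate interest points (eye corners, mouth corners, and so on) that the patch-extraction and tracking steps can consume.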
[0035] In some embodiments, the cross-correlation module 304 may process a feature detected by the feature detection module 302. The cross-correlation module 304 may track a feature detected by the feature detection module 302 and determine a position of a point in a virtual 3-D space corresponding to the tracked feature. Operations of the respective feature detection and cross-correlation modules 302 and 304 are discussed in further detail below.
[0036] FIG. 4 is a block diagram illustrating one example of a feature detection module 302-a. The feature detection module 302-a may be one example of the feature detection module 302 illustrated in FIG. 3. As depicted, the feature detection module 302 may include a selection module 402, a comparing module 404, a variance module 406, and a patch module 408.
[0037] In one embodiment, the selection module 402 may select a feature detected by the GPU 104 in a first image. The first image may be one of multiple images. The multiple images may include images of a user. Each detected feature may include one or more pixels. For example, the detected features may include edges, interest points, corners, blobs, ridges, and/or any other visual or pixel feature contained in an image (e.g., corners of the eyes, eye centers, point of the nose, corners of the mouth, etc.). In some configurations, the comparing module 404 may search a second image of the plurality of images of the user to find a match for the feature detected in the first image. For example, upon detecting the corner of a user's eye, the selection module 402 may select the detected corner of the user's eye as a feature to be tracked in subsequent images. The comparing module 404 may compare the detected corner of the user's eye in image 1 to the corresponding detected corner of the user's eye in image 2.
[0038] In some embodiments, the variance module 406 may calculate a variance for each selected feature found in the second image. The variance may indicate the degree to which a portion of the second image varies from a corresponding portion of
the first image. For example, upon comparing the detected corner of the user's eye in image 1 to the corresponding detected corner of the user's eye in image 2, the variance module 406 may determine the degree of variation between the corner of the user's eye in image 2 and the corner of the user's eye in image 1. In some embodiments, the variance module 406 may write the calculated variance to a file and store the file in a storage medium. In some embodiments, the variance module 406 may store the variance in the variance data 112.
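The disclosure does not fix a single formula for this variance. As one non-limiting, illustrative reading, the sketch below measures the mean squared difference between the matched region of the second image and the corresponding region of the first image; the function name, window size, and formula are assumptions introduced only for illustration.

```python
import numpy as np

def region_variance(first_image, second_image, center, half=8):
    """Mean squared difference between corresponding square regions.

    `center` is the (row, col) of the matched feature in the second image;
    the same location in the first image is taken as the reference region.
    """
    r, c = center
    a = first_image[r - half:r + half + 1, c - half:c + half + 1].astype(np.float64)
    b = second_image[r - half:r + half + 1, c - half:c + half + 1].astype(np.float64)
    return float(np.mean((b - a) ** 2))
```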
[0039] In some configurations, the variance module 406 may divide a calculated variance into first and second elements. The variance module 406 may store the first element in a first file and the second element in a second file within the variance data 112. In some embodiments, the variance module 406 divides the calculated variance based on a low order element of the variance and a high order element of the variance. For example, if the variance were calculated to be a decimal number equal to 123,456, then the variance module 406 may divide the hundreds element (i.e., low order element) from the thousands element (i.e., high order element). Storing the calculated variance as 3-digit decimal numbers, the variance module 406 may store the thousands element "123" in a first high order element file in the variance data 112, and may store the hundreds element "456" in a low order element file in the variance data 112. In some embodiments, the GPU 104 and the elements of the GPU 104 may access the high and low order element files stored in the variance data 112 in order to implement the calculated variance in one or more processes related to feature detection, feature tracking, cross-correlation, and generation of a 3-D model.
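A minimal, illustrative sketch of the high/low order split described above follows; the hypothetical value 123,456, the 3-digit split, and the file names are assumptions used only to make the example concrete.

```python
# Splitting a hypothetical variance of 123,456 into 3-digit high- and
# low-order decimal elements and writing each to its own file.
variance = 123456
high_order, low_order = divmod(variance, 1000)   # 123 and 456

with open("variance_high.txt", "w") as f:        # hypothetical file names
    f.write(str(high_order))
with open("variance_low.txt", "w") as f:
    f.write(str(low_order))

# The full value can later be recombined as high_order * 1000 + low_order.
```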
[0040] In one embodiment, the patch module 408 may select, or extract, from among the selected features in the first image one or more patches based on the calculated variance of each selected feature. Each extracted patch may include a square area of pixels centered on one of the selected features, or interest points, of the first image. Upon selecting and extracting the one or more patches in the first image, in some embodiments, the patch module 408 may remove the first image from memory. In some configurations, the patch module 408 may select and extract the one or more patches based on a predetermined threshold of calculated variance. Because a relatively low variance may cause a rounding error in the normalization of
a cross-correlation process, in some embodiments, the patch module 408 may extract only those patches where the associated variance is above a certain level of variance. In some embodiments, the patch module 408 may select the one or more patches based on a predetermined number of patches.
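A non-limiting sketch of such variance-thresholded patch extraction is shown below; the function name, the (row, col) feature representation, and the default patch size and patch count are illustrative assumptions.

```python
import numpy as np

def extract_patches(image, features, variances, min_variance, max_patches=32, half=8):
    """Cut square pixel patches centered on high-variance features.

    `features` is a list of (row, col) interest points and `variances` the
    per-feature values computed earlier. Features below `min_variance` are
    dropped, and at most `max_patches` of the strongest remaining features
    are kept.
    """
    keep = [(v, rc) for v, rc in zip(variances, features) if v >= min_variance]
    keep.sort(key=lambda t: t[0], reverse=True)          # strongest first
    patches = []
    for _, (r, c) in keep[:max_patches]:
        patch = image[r - half:r + half + 1, c - half:c + half + 1]
        patches.append(np.array(patch, dtype=np.float64))
    return patches
```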
[0041] FIG. 5 is a block diagram illustrating one example of a cross-correlation module 304-a. The cross-correlation module 304-a may be one example of the cross-correlation module 304 illustrated in FIG. 3. As depicted, the cross-correlation module 304-a may include a pose detection module 502, a FFT module 504, a multiplication module 506, and a normalizing module 508.
[0042] In some configurations, the cross-correlation module 304-a may perform a cross-correlation algorithm to determine how a patch, selected among the one or more patches, is positioned in first and second sample images of the plurality of images of the user. In some embodiments, the cross-correlation module 304-a may determine a position of the selected feature of the first patch as a point in a virtual three-dimensional (3-D) space. In some configurations, as part of the process to perform the cross-correlation algorithm, the pose detection module 502 may determine a pose of the user in the first and second sample images. The FFT module 504 may perform a FFT on the first patch. In some embodiments, the FFT of the first patch may be written to a file and stored in the FFT patch data 114.
[0043] In some embodiments, the FFT module 504 may perform a FFT on the first and second sample images. The FFT module 504 may place the first sample image in a real element of a complex number (e.g., "a" in (a + bi)), and may place the second sample image in an imaginary element of the complex number (e.g., "b" in (a + bi)). In some configurations, the multiplication module 506 may multiply element-wise the FFT of the first patch by the FFT of the first and second sample images. The FFT module 504 may calculate an inverse FFT of the multiplied FFTs, resulting in a first score for the first sample image and a second score for the second sample image. In some embodiments, the normalizing module 508 may normalize the result of the cross-correlation by dividing both first and second scores by the calculated variance stored in the variance file. For example, the normalizing module 508 may access the high and low order element files stored in the variance data 112 to divide the first and second scores by the calculated variance.
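One non-limiting way to read the two paragraphs above is as a packed, FFT-based cross-correlation, sketched below in Python/NumPy. Packing the two sample images into the real and imaginary parts of a single complex array lets one inverse FFT yield a score map for each image. The conjugation of the patch spectrum (the standard way to obtain correlation rather than convolution), the simple division by the stored variance, and the function name are illustrative assumptions not spelled out in this disclosure.

```python
import numpy as np

def packed_cross_correlation(patch_fft, image_a, image_b, variance):
    """Correlate one patch against two images with a single FFT pass.

    `patch_fft` is the patch spectrum already zero-padded to the image size
    (e.g., np.fft.fft2(patch, s=image_a.shape)). image_a rides in the real
    part and image_b in the imaginary part of one complex array, so the
    real/imaginary parts of the inverse FFT give the two score maps.
    """
    packed = image_a.astype(np.float64) + 1j * image_b.astype(np.float64)
    packed_fft = np.fft.fft2(packed)

    # Element-wise multiply by the conjugated patch spectrum, then invert.
    scores = np.fft.ifft2(np.conj(patch_fft) * packed_fft)

    # Normalize both correlation surfaces by the stored variance.
    return np.real(scores) / variance, np.imag(scores) / variance
```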
[0044] In some embodiments, the cross-correlation module 304-a may perform the cross-correlation of every selected patch of a given image simultaneously. For example, the patch module 408 may extract 32 patches from image 1. The 32 patches may be cross-correlated simultaneously with image 2 and image 3 by placing images 2 and 3 in the elements of a complex number, performing the FFT of each patch, and normalizing the result by the stored variance. The FFTs of each patch may be stored by the patch module 408 in the FFT patch data 114 for subsequent cross-correlation operations. In some embodiments, the cross-correlation module 304-a may perform a second cross-correlation algorithm on images 4 and 5 using the same FFT patches stored in the FFT patch data 114.
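A short usage sketch of how the stored patch spectra might be reused across cross-correlation passes follows. The image and patch arrays are random stand-ins, and packed_cross_correlation refers to the sketch above; every name and size here is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
image2, image3, image4, image5 = (rng.random((480, 640)) for _ in range(4))
patches = [rng.random((17, 17)) for _ in range(32)]    # stand-ins for extracted patches
variances = [float(np.var(p)) for p in patches]        # stand-in stored variances

# Compute each patch spectrum once, zero-padded to the image size ...
patch_ffts = [np.fft.fft2(p, s=image2.shape) for p in patches]

# ... then reuse the same spectra for both cross-correlation passes.
scores_23 = [packed_cross_correlation(pf, image2, image3, v)
             for pf, v in zip(patch_ffts, variances)]
scores_45 = [packed_cross_correlation(pf, image4, image5, v)
             for pf, v in zip(patch_ffts, variances)]
```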
[0045] FIG. 6 is a diagram 600 illustrating an example of a device 102-b for capturing an image 604 of a user 602. The device 102-b may be one example of the device 102 illustrated in FIGS. 1 and/or 2. As depicted, the device 102-b may include a camera 106-b and a display 108-b. The camera 106-b and display 108-b may be examples of the respective camera 106 and display 108 illustrated in FIGS. 1 and/or 2.
[0046] In one embodiment, the user may operate the device 102-b. For example, the application 202 may allow the user to interact with and/or operate the device 102-b. In one embodiment, the camera 106-b may allow the user to capture an image 604 of the user 602. The GPU 104 may perform feature detection and feature tracking in relation to the images of the user captured by the device 102-b. Additionally, or alternatively, the GPU 104 may perform a normalized cross-correlation algorithm on one or more of the images of the user to track one or more features detected in each image. The GPU 104 may determine a position of the selected feature as one or more points in a virtual three-dimensional (3-D) space to enable the generation of a 3-D model of the user.
[0047] FIG. 7 illustrates an example arrangement 700 of a feature 706 detected in the depicted images 702 and 704. As depicted, the example arrangement 700 may include a first image of the user 702 and a second image of the user 704. In some embodiments, the feature detection module 302 may detect a feature 708 on the first image of the user 702 (e.g., an eye of the user). The selection module 402 may select the feature 708. The patch module 408 may extract the selected feature
708 as a patch 706. Upon extracting the patch 706, in some embodiments, the patch module 408 may remove the image 702 from memory. The comparing module 404 may compare the patch 706 to the second image of the user 704. Upon finding a match between the patch 706 and the detected feature 710 of the second image 704, the variance module 406 may measure the degree of variation between the patch 706 and the detected feature 710. For example, the variance module 406 may detect that the detected feature 710 has shifted in one or more directions and/or changed shape (e.g., change in the shape of the eye due to rotation of the face) in relation to the patch 706. In some embodiments, the variance module 406 may store the variance in the variance data 112. The stored variance may be used to normalize a cross-correlation of the patch 706 with one or more images. In some embodiments, the FFT module 504 may take the FFT of the patch 706 and store the FFT of the patch 706 in the FFT patch data 114. In some embodiments, the cross-correlation module 304 may use the stored FFT of the patch 706 in the performance of multiple cross-correlation algorithms.
[0048] FIG. 8 is a flow diagram illustrating one embodiment of a method 800 for detecting features in images. In some configurations, the method 800 may be implemented by the GPU 104 illustrated in FIGS. 1, 2, and/or 3. In some configurations, the method 800 may be implemented by the application 202 illustrated in FIG. 2.
[0049] At block 802, a plurality of features detected by the GPU in a first image of the plurality of images of the user may be selected. Each selected feature may include one or more pixels. At block 804, in a second image of the plurality of images of the user, a search may be performed for the plurality of features selected in the first image.
[0050] At block 806, a variance may be calculated, on the GPU, for each selected feature found in the second image. The variance may indicate a degree to which a portion of the second image varies from a corresponding portion of the first image. At block 808, the calculated variance may be stored in a variance file.
[0051] FIG. 9 is a flow diagram illustrating one embodiment of a method
900 for performing cross-correlation algorithms on a GPU. In some configurations, the method 900 may be implemented by the GPU 104 illustrated in FIGS. 1, 2,
and/or 3. In some configurations, the method 900 may be implemented by the application 202 illustrated in FIG. 2.
[0052] At block 902, one or more patches may be selected from among selected features in a first image based on a calculated variance of each selected feature. At block 904, a first cross-correlation algorithm may be performed on a GPU using a FFT of a first patch stored in a file to determine how the first patch is positioned in first and second sample images. At block 906, a second cross-correlation algorithm may be performed on the GPU on third and fourth sample images using the FFT of the first patch stored in the file to determine how the first patch is positioned in third and fourth sample images.
[0053] FIG. 10 is a flow diagram illustrating one embodiment of a method 1000 for performing a cross-correlation algorithm on two images simultaneously. In some configurations, the method 1000 may be implemented by the GPU 104 illustrated in FIGS. 1, 2, and/or 3. In some configurations, the method 1000 may be implemented by the application 202 illustrated in FIG. 2.
[0054] At block 1002, a pose of the user in the first and second sample images may be determined. At block 1004, a FFT may be performed on a patch extracted from an image. At block 1006, a FFT may be performed on the first and second sample images. The first sample image may be placed in a real element of a complex number and the second sample image may be placed in an imaginary element of the complex number.
[0055] At block 1008, the FFT of the first patch may be multiplied element-wise by the FFT of the first and second sample images. At block 1010, an inverse FFT of the multiplied FFTs may be calculated, resulting in a first score for the first sample image and a second score for the second sample image. At block 1012, the result of the cross-correlation may be normalized by dividing both first and second scores by the calculated variance stored in the variance file.
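As a small illustrative companion to block 1012 and the earlier high/low order split, the sketch below recombines the variance from its two element files before dividing the scores. The file names, the 3-digit split, and the function name are assumptions carried over from the earlier examples, not requirements of this disclosure.

```python
def normalize_scores(raw_first, raw_second,
                     high_path="variance_high.txt", low_path="variance_low.txt"):
    """Divide both correlation scores by the variance recombined from its
    high- and low-order element files (file names are assumptions)."""
    with open(high_path) as f:
        high_order = int(f.read())
    with open(low_path) as f:
        low_order = int(f.read())
    variance = high_order * 1000 + low_order
    return raw_first / variance, raw_second / variance
```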
[0056] FIG. 11 depicts a block diagram of a computer system 1100 suitable for implementing the present systems and methods. The depicted computer system 1100 may be one example of a server 206 depicted in FIG. 2. Alternatively, the system 1100 may be one example of a device 102 depicted in FIGS. 1, 2, and/or 6. Computer system 1100 includes a bus 1102 which interconnects major subsystems of
computer system 1100, such as a GPU 1104, a system memory 1106 (typically RAM, but which may also include ROM, flash RAM, or the like), an input/output controller 1108, an external audio device, such as a speaker system 1110 via an audio output interface 1112, an external device, such as a display screen 1114 via display adapter 1116, serial ports 1118 and 1120, a keyboard 1122 (interfaced with a keyboard controller 1124), multiple USB devices 1126 (interfaced with a USB controller 1128), a storage interface 1130, a host bus adapter (HBA) interface card 1136A operative to connect with a Fibre Channel network 1138, a host bus adapter (HBA) interface card 1136B operative to connect to a SCSI bus 1140, and an optical disk drive 1142 operative to receive an optical disk 1144. The GPU 1104 may be one example of the GPU 104 depicted in FIGS. 1, 2, and/or 3. Also included are a mouse 1146 (or other point-and-click device, coupled to bus 1102 via serial port 1118), a modem 1148 (coupled to bus 1102 via serial port 1120), and a network interface 1150 (coupled directly to bus 1102).
[0057] Bus 1102 allows data communication between GPU 1104 and system memory 1106, which may include read-only memory (ROM) or flash memory (neither shown), and random access memory (RAM) (not shown), as previously noted. The RAM is generally the main memory into which the operating system and application programs are loaded. The ROM or flash memory can contain, among other code, the Basic Input-Output System (BIOS) which controls basic hardware operation such as the interaction with peripheral components or devices. For example, one or more instructions related to the operations of the GPU 1104 to implement the present systems and methods may be stored within the system memory 1106. Applications resident with computer system 1100 are generally stored on and accessed via a non-transitory computer readable medium, such as a hard disk drive (e.g., fixed disk 1152), an optical drive (e.g., optical drive 1142), or other storage medium. Additionally, applications can be in the form of electronic signals modulated in accordance with the application and data communication technology when accessed via network modem 1148 or interface 1150.
[0058] Storage interface 1130, as with the other storage interfaces of computer system 1100, can connect to a standard computer readable medium for storage and/or retrieval of information, such as a fixed disk drive 1152. Fixed disk drive
1152 may be a part of computer system 1100 or may be separate and accessed through other interface systems. Modem 1148 may provide a direct connection to a remote server via a telephone link or to the Internet via an internet service provider (ISP). Network interface 1150 may provide a direct connection to a remote server via a direct network link to the Internet via a POP (point of presence). Network interface 1150 may provide such connection using wireless techniques, including digital cellular telephone connection, Cellular Digital Packet Data (CDPD) connection, digital satellite data connection or the like.
[0059] Many other devices or subsystems (not shown) may be connected in a similar manner (e.g., document scanners, digital cameras and so on). Conversely, all of the devices shown in FIG. 11 need not be present to practice the present systems and methods. The devices and subsystems can be interconnected in different ways from that shown in FIG. 11. The operation of at least some of the computer system 1100 such as that shown in FIG. 11 is readily known in the art and is not discussed in detail in this application. Code to implement the present disclosure can be stored in a non-transitory computer-readable medium such as one or more of system memory 1106, fixed disk 1152, or optical disk 1144. The operating system provided on computer system 1100 may be MS-DOS®, MS-WINDOWS®, OS/2®, UNIX®, Linux®, or another known operating system.
[0060] Moreover, regarding the signals described herein, those skilled in the art will recognize that a signal can be directly transmitted from a first block to a second block, or a signal can be modified (e.g., amplified, attenuated, delayed, latched, buffered, inverted, filtered, or otherwise modified) between the blocks. Although the signals of the above described embodiment are characterized as transmitted from one block to the next, other embodiments of the present systems and methods may include modified signals in place of such directly transmitted signals as long as the informational and/or functional aspect of the signal is transmitted between blocks. To some extent, a signal input at a second block can be conceptualized as a second signal derived from a first signal output from a first block due to physical limitations of the circuitry involved (e.g., there will inevitably be some attenuation and delay). Therefore, as used herein, a second signal derived from a first signal includes the first signal or any modifications to the first signal, whether due to
circuit limitations or due to passage through other circuit elements which do not change the informational and/or final functional aspect of the first signal.
[0061] While the foregoing disclosure sets forth various embodiments using specific block diagrams, flowcharts, and examples, each block diagram component, flowchart step, operation, and/or component described and/or illustrated herein may be implemented, individually and/or collectively, using a wide range of hardware, software, or firmware (or any combination thereof) configurations. In addition, any disclosure of components contained within other components should be considered exemplary in nature since many other architectures can be implemented to achieve the same functionality.
[0062] The process parameters and sequence of steps described and/or illustrated herein are given by way of example only and can be varied as desired. For example, while the steps illustrated and/or described herein may be shown or discussed in a particular order, these steps do not necessarily need to be performed in the order illustrated or discussed. The various exemplary methods described and/or illustrated herein may also omit one or more of the steps described or illustrated herein or include additional steps in addition to those disclosed.
[0063] Furthermore, while various embodiments have been described and/or illustrated herein in the context of fully functional computing systems, one or more of these exemplary embodiments may be distributed as a program product in a variety of forms, regardless of the particular type of computer-readable media used to actually carry out the distribution. The embodiments disclosed herein may also be implemented using software modules that perform certain tasks. These software modules may include script, batch, or other executable files that may be stored on a computer-readable storage medium or in a computing system. In some embodiments, these software modules may configure a computing system to perform one or more of the exemplary embodiments disclosed herein.
[0064] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the invention to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain the principles of the present systems and methods and their practical applications, to thereby enable others skilled in the art to best utilize the present systems and methods and various embodiments with various modifications as may be suited to the particular use contemplated.
[0065] Unless otherwise noted, the terms "a" or "an," as used in the specification and claims, are to be construed as meaning "at least one of." In addition, for ease of use, the words "including" and "having," as used in the specification and claims, are interchangeable with and have the same meaning as the word "comprising." In addition, the term "based on" as used in the specification and the claims is to be construed as meaning "based at least upon."
Claims
1. A computer-implemented method for processing, by a graphical processor unit (GPU), a plurality of images of a user, the method comprising:
selecting a plurality of features detected by the GPU in a first image of the plurality of images of the user, wherein each feature comprises one or more pixels;
in a second image of the plurality of images of the user, searching for the plurality of features selected in the first image;
calculating, on the GPU, a variance for each selected feature found in the second image, wherein the variance indicates a degree a portion of the second image varies from a corresponding portion of the first image; and
storing the calculated variance in a variance file.
2. The method of claim 1, further comprising:
selecting from among the selected features in the first image one or more
patches based on the calculated variance of each selected feature, wherein each patch comprises a square area of pixels centered on one of the selected features of the first image.
3. The method of claim 2, further comprising:
upon selecting the one or more patches in the first image, removing the first image from memory.
4. The method of claim 2, wherein selecting the one or more patches further comprises:
selecting the one or more patches based on a predetermined threshold of calculated variance.
5. The method of claim 2, wherein selecting the one or more patches further comprises:
selecting the one or more patches based on a predetermined number of patches.
6. The method of claim 1, further comprising:
dividing each variance into first and second elements; and
storing the first element in a first file and the second element in a second file.
7. The method of claim 1, further comprising:
performing on the GPU a cross-correlation algorithm to determine how a first patch, selected among the one or more patches, is positioned in first and second sample images of the plurality of images of the user.
8. The method of claim 7, wherein performing the cross-correlation algorithm on the GPU comprises:
determining a pose of the user in the first and second sample images;
performing a fast Fourier transform (FFT) on the first patch, wherein the FFT of the first patch is stored in a third file;
performing the FFT on the first and second sample images, placing the first sample image in a real element of a complex number and placing the second sample image in an imaginary element of the complex number;
multiplying element-wise the FFT of the first patch by the FFT of the first and second sample images;
calculating an inverse FFT of the multiplied FFTs, resulting in a first score for the first sample image and a second score for the second sample image; and
normalizing the result of the cross-correlation by dividing both first and second scores by the calculated variance stored in the variance file.
9. The method of claim 8, further comprising:
performing on the GPU a second cross-correlation algorithm on third and fourth sample images of the plurality of images of the user using the FFT of the first patch stored in the third file to determine how the first patch is positioned in the third and fourth sample images.
10. The method of claim 7, wherein performing on the GPU the cross-correlation algorithm further comprises:
performing on the GPU the cross-correlation of each selected patch simultaneously.
11. The method of claim 1, further comprising:
determining a position of the selected feature of the first patch as a point in a virtual three-dimensional (3-D) space.
12. A computing device configured to process, by a graphical processor unit (GPU), a plurality of images of a user, comprising:
the GPU;
memory in electronic communication with the GPU;
instructions stored in the memory, the instructions being executable by the
GPU to:
select a plurality of features detected by the GPU in a first image of the plurality of images of the user, wherein each feature comprises one or more pixels;
in a second image of the plurality of images of the user, search for the plurality of features selected in the first image;
calculate, on the GPU, a variance for each selected feature found in the second image, wherein the variance indicates a degree a portion of the second image varies from a corresponding portion of the first image; and
store the calculated variance in a variance file.
13. The computing device of claim 12, wherein the instructions are executable by the GPU to:
select from among the selected features in the first image one or more patches based on the calculated variance of each selected feature, wherein each patch comprises a square area of pixels centered on one of the selected features of the first image;
upon selecting the one or more patches in the first image, remove the first image from memory.
14. The computing device of claim 13, wherein the instructions are executable by the GPU to:
select the one or more patches based on a predetermined threshold of calculated variance.
15. The computing device of claim 12, wherein the instructions are executable by the GPU to:
perform on the GPU a cross-correlation algorithm to determine how a first patch, selected among the one or more patches, is positioned in first and second sample images of the plurality of images of the user.
16. The computing device of claim 15, wherein performing the cross-correlation algorithm on the GPU comprises instructions executable by the GPU to:
determine a pose of the user in the first and second sample images;
perform a fast Fourier transform (FFT) on the first patch, wherein the FFT of the first patch is stored in a third file;
perform the FFT on the first and second sample images, placing the first sample image in a real element of a complex number and placing the second sample image in an imaginary element of the complex number;
multiply element-wise the FFT of the first patch by the FFT of the first and second sample images;
calculate an inverse FFT of the multiplied FFTs, resulting in a first score for the first sample image and a second score for the second sample image; and
normalize the result of the cross-correlation by dividing both first and second scores by the calculated variance stored in the variance file.
17. The computing device of claim 16, wherein the instructions are executable by the GPU to:
perform on the GPU a second cross-correlation algorithm on third and fourth sample images of the plurality of images of the user using the FFT of the first patch stored in the third file to determine how the first patch is positioned in the third and fourth sample images.
18. The computing device of claim 15, wherein the instructions are executable by the GPU to:
perform on the GPU the cross-correlation of each selected patch simultaneously.
19. A computer-program product for processing, by a graphical processor unit (GPU), a plurality of images of a user, the computer-program product comprising a non-transitory computer-readable medium storing instructions thereon, the instructions being executable by the GPU to:
select a plurality of features detected by the GPU in a first image of the plurality of images of the user, wherein each feature comprises one or more pixels;
in a second image of the plurality of images of the user, search for the plurality of features selected in the first image;
calculate, on the GPU, a variance for each selected feature found in the second image, wherein the variance indicates a degree a portion of the second image varies from a corresponding portion of the first image; and
store the calculated variance in a variance file.
20. The computer-program product of claim 19, wherein the instructions are executable by the GPU to:
perform on the GPU a first cross-correlation algorithm of each selected patch simultaneously to determine how each patch is positioned in first and second sample images of the plurality of images of the user;
perform on the GPU a second cross-correlation algorithm of each selected patch simultaneously to determine how each patch is positioned in third and fourth sample images of the plurality of images of the user; and
normalize the results of the first and second cross-correlations by dividing the results by the calculated variance stored in the variance file.
Applications Claiming Priority (6)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US201261650983P | 2012-05-23 | 2012-05-23 | |
US61/650,983 | 2012-05-23 | ||
US201261735951P | 2012-12-11 | 2012-12-11 | |
US61/735,951 | 2012-12-11 | ||
US13/775,764 US9208608B2 (en) | 2012-05-23 | 2013-02-25 | Systems and methods for feature tracking |
US13/775,764 | 2013-02-25 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2013177448A1 true WO2013177448A1 (en) | 2013-11-28 |
Family
ID=49621242
Family Applications (5)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2013/042517 WO2013177459A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for rendering virtual try-on products |
PCT/US2013/042509 WO2013177453A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for efficiently processing virtual 3-d data |
PCT/US2013/042520 WO2013177462A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for generating a 3-d model of a virtual try-on product |
PCT/US2013/042514 WO2013177457A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for generating a 3-d model of a user for a virtual try-on product |
PCT/US2013/042504 WO2013177448A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for feature tracking |
Family Applications Before (4)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/US2013/042517 WO2013177459A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for rendering virtual try-on products |
PCT/US2013/042509 WO2013177453A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for efficiently processing virtual 3-d data |
PCT/US2013/042520 WO2013177462A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for generating a 3-d model of a virtual try-on product |
PCT/US2013/042514 WO2013177457A1 (en) | 2012-05-23 | 2013-05-23 | Systems and methods for generating a 3-d model of a user for a virtual try-on product |
Country Status (5)
Country | Link |
---|---|
US (6) | US9311746B2 (en) |
EP (2) | EP2852934B1 (en) |
AU (1) | AU2013266187B2 (en) |
CA (1) | CA2874531C (en) |
WO (5) | WO2013177459A1 (en) |
Families Citing this family (74)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104021590A (en) * | 2013-02-28 | 2014-09-03 | 北京三星通信技术研究有限公司 | Virtual try-on system and virtual try-on method |
US20140240354A1 (en) * | 2013-02-28 | 2014-08-28 | Samsung Electronics Co., Ltd. | Augmented reality apparatus and method |
US9965887B2 (en) * | 2013-03-05 | 2018-05-08 | Autodesk, Inc. | Technique for mapping a texture onto a three-dimensional model |
US9338440B2 (en) * | 2013-06-17 | 2016-05-10 | Microsoft Technology Licensing, Llc | User interface for three-dimensional modeling |
US9304332B2 (en) | 2013-08-22 | 2016-04-05 | Bespoke, Inc. | Method and system to create custom, user-specific eyewear |
EP2919450B1 (en) * | 2014-03-11 | 2020-09-09 | Wipro Limited | A method and a guided imaging unit for guiding a user to capture an image |
US10366487B2 (en) * | 2014-03-14 | 2019-07-30 | Samsung Electronics Co., Ltd. | Electronic apparatus for providing health status information, method of controlling the same, and computer-readable storage medium |
SG11201607055QA (en) * | 2014-03-25 | 2016-09-29 | Dws Srl | Improved computer-implemented method for defining the points of development of supporting elements of an object made by means of a stereolithography process |
KR102225620B1 (en) * | 2014-04-03 | 2021-03-12 | 한화테크윈 주식회사 | Camera modeling system |
AU2015255730A1 (en) * | 2014-05-08 | 2016-11-24 | Luxottica Retail North America Inc. | Systems and methods for scaling an object |
CN104217350B (en) * | 2014-06-17 | 2017-03-22 | 北京京东尚科信息技术有限公司 | Virtual try-on realization method and device |
US9665984B2 (en) * | 2014-07-31 | 2017-05-30 | Ulsee Inc. | 2D image-based 3D glasses virtual try-on system |
US10176625B2 (en) | 2014-09-25 | 2019-01-08 | Faro Technologies, Inc. | Augmented reality camera for use with 3D metrology equipment in forming 3D images from 2D camera images |
KR20160046399A (en) * | 2014-10-20 | 2016-04-29 | 삼성에스디에스 주식회사 | Method and Apparatus for Generation Texture Map, and Database Generation Method |
FR3027505B1 (en) * | 2014-10-27 | 2022-05-06 | H 43 | METHOD FOR CONTROLLING THE POSITIONING OF TEETH |
KR102272310B1 (en) * | 2014-11-18 | 2021-07-02 | 삼성전자주식회사 | Method of processing images, Computer readable storage medium of recording the method and an electronic apparatus |
US9506744B2 (en) * | 2014-12-16 | 2016-11-29 | Faro Technologies, Inc. | Triangulation scanner and camera for augmented reality |
KR102296820B1 (en) * | 2015-01-27 | 2021-09-02 | 삼성전자주식회사 | Method and apparatus for forming 2d texture map of facial image |
CN107431801A (en) | 2015-03-01 | 2017-12-01 | 奈克斯特Vr股份有限公司 | The method and apparatus for support content generation, sending and/or resetting |
US10373343B1 (en) | 2015-05-28 | 2019-08-06 | Certainteed Corporation | System for visualization of a building material |
MA51504A (en) * | 2016-01-25 | 2020-11-11 | Opticas Claravision S L | PROCESS FOR THE MANUFACTURING OF GLASSES AND FRAMES, WITH CUSTOM DESIGN AND REMOTE TRIMMING OF OPHTHALMIC LENSES, AND OPTICAL EQUIPMENT FOR MORPHOLOGICAL MEASUREMENT FOR IMPLEMENTING THIS PROCESS |
CN105913496B (en) * | 2016-04-06 | 2018-09-28 | 成都景和千城科技有限公司 | It is a kind of by true dress ornament rapid translating be three-dimensional dress ornament method and system |
US10672180B2 (en) | 2016-05-02 | 2020-06-02 | Samsung Electronics Co., Ltd. | Method, apparatus, and recording medium for processing image |
CA3024874A1 (en) * | 2016-06-01 | 2017-12-07 | Vidi Pty Ltd | An optical measuring and scanning system and methods of use |
US10008024B2 (en) | 2016-06-08 | 2018-06-26 | Qualcomm Incorporated | Material-aware three-dimensional scanning |
FR3053509B1 (en) * | 2016-06-30 | 2019-08-16 | Fittingbox | METHOD FOR OCCULATING AN OBJECT IN AN IMAGE OR A VIDEO AND ASSOCIATED AUGMENTED REALITY METHOD |
US10217275B2 (en) * | 2016-07-07 | 2019-02-26 | Disney Enterprises, Inc. | Methods and systems of performing eye reconstruction using a parametric model |
US10217265B2 (en) | 2016-07-07 | 2019-02-26 | Disney Enterprises, Inc. | Methods and systems of generating a parametric eye model |
US10762717B2 (en) * | 2016-09-29 | 2020-09-01 | Sony Interactive Entertainment America, LLC | Blend shape system with dynamic partitioning |
US10275925B2 (en) | 2016-09-29 | 2019-04-30 | Sony Interactive Entertainment America, LLC | Blend shape system with texture coordinate blending |
EP3305176A1 (en) * | 2016-10-04 | 2018-04-11 | Essilor International | Method for determining a geometrical parameter of an eye of a subject |
KR102246841B1 (en) * | 2016-10-05 | 2021-05-03 | 매직 립, 인코포레이티드 | Surface modeling systems and methods |
US10043317B2 (en) | 2016-11-18 | 2018-08-07 | International Business Machines Corporation | Virtual trial of products and appearance guidance in display device |
US10943100B2 (en) | 2017-01-19 | 2021-03-09 | Mindmaze Holding Sa | Systems, methods, devices and apparatuses for detecting facial expression |
US10679408B2 (en) * | 2017-02-02 | 2020-06-09 | Adobe Inc. | Generating a three-dimensional model from a scanned object |
US11367198B2 (en) * | 2017-02-07 | 2022-06-21 | Mindmaze Holding Sa | Systems, methods, and apparatuses for tracking a body or portions thereof |
CN109035381B (en) * | 2017-06-08 | 2021-11-09 | 福建天晴数码有限公司 | Cartoon picture hair rendering method and storage medium based on UE4 platform |
CN107481082A (en) * | 2017-06-26 | 2017-12-15 | 珠海格力电器股份有限公司 | A kind of virtual fit method and its device, electronic equipment and virtual fitting system |
CN107610239B (en) * | 2017-09-14 | 2020-11-03 | 广州帕克西软件开发有限公司 | Virtual try-on method and device for facial makeup |
JP2019070872A (en) * | 2017-10-05 | 2019-05-09 | カシオ計算機株式会社 | Image processing device, image processing method, and program |
US10613710B2 (en) * | 2017-10-22 | 2020-04-07 | SWATCHBOOK, Inc. | Product simulation and control system for user navigation and interaction |
GB2569546B (en) * | 2017-12-19 | 2020-10-14 | Sony Interactive Entertainment Inc | Determining pixel values using reference images |
CN109979013B (en) * | 2017-12-27 | 2021-03-02 | Tcl科技集团股份有限公司 | Three-dimensional face mapping method and terminal equipment |
US11328533B1 (en) | 2018-01-09 | 2022-05-10 | Mindmaze Holding Sa | System, method and apparatus for detecting facial expression for motion capture |
CN108629837A (en) * | 2018-01-09 | 2018-10-09 | 南京大学 | A kind of cloth real-time emulation method for virtual fitting |
US20190228580A1 (en) * | 2018-01-24 | 2019-07-25 | Facebook, Inc. | Dynamic Creation of Augmented Reality Effects |
US10839578B2 (en) * | 2018-02-14 | 2020-11-17 | Smarter Reality, LLC | Artificial-intelligence enhanced visualization of non-invasive, minimally-invasive and surgical aesthetic medical procedures |
US11126160B1 (en) * | 2018-04-29 | 2021-09-21 | Dustin Kyle Nolen | Method for producing a scaled-up solid model of microscopic features of a surface |
US10769851B1 (en) * | 2018-04-29 | 2020-09-08 | Dustin Kyle Nolen | Method for producing a scaled-up solid model of microscopic features of a surface |
US10789696B2 (en) * | 2018-05-24 | 2020-09-29 | Tfi Digital Media Limited | Patch selection for neural network based no-reference image quality assessment |
US11195324B1 (en) | 2018-08-14 | 2021-12-07 | Certainteed Llc | Systems and methods for visualization of building structures |
US10997396B2 (en) * | 2019-04-05 | 2021-05-04 | Realnetworks, Inc. | Face liveness detection systems and methods |
CN110175897A (en) * | 2019-06-03 | 2019-08-27 | 广东元一科技实业有限公司 | A kind of 3D synthesis fitting method and system |
CN114762319A (en) * | 2019-12-09 | 2022-07-15 | 索尼集团公司 | Data processing apparatus, data processing method, and program |
US11430142B2 (en) * | 2020-04-28 | 2022-08-30 | Snap Inc. | Photometric-based 3D object modeling |
CN111743273B (en) * | 2020-05-21 | 2022-09-30 | 中国地质大学(北京) | Polarizing jewelry and identity recognition method using same |
CN111773707A (en) * | 2020-08-11 | 2020-10-16 | 网易(杭州)网络有限公司 | Rendering processing method and device, electronic equipment and storage medium |
US11551421B1 (en) | 2020-10-16 | 2023-01-10 | Splunk Inc. | Mesh updates via mesh frustum cutting |
US11546437B1 (en) | 2020-10-16 | 2023-01-03 | Splunk Inc. | Playback of a stored networked remote collaboration session |
US11544904B1 (en) * | 2020-10-16 | 2023-01-03 | Splunk Inc. | Mesh updates in an extended reality environment |
US11798235B1 (en) | 2020-10-16 | 2023-10-24 | Splunk Inc. | Interactions in networked remote collaboration environments |
US11727643B1 (en) | 2020-10-16 | 2023-08-15 | Splunk Inc. | Multi-environment networked remote collaboration system |
US11127223B1 (en) | 2020-10-16 | 2021-09-21 | Splunkinc. | Mesh updates via mesh splitting |
US11776218B1 (en) | 2020-10-16 | 2023-10-03 | Splunk Inc. | Networked remote collaboration system |
US11563813B1 (en) | 2020-10-16 | 2023-01-24 | Splunk Inc. | Presentation of collaboration environments for a networked remote collaboration session |
US11494732B2 (en) | 2020-10-30 | 2022-11-08 | International Business Machines Corporation | Need-based inventory |
US11663764B2 (en) | 2021-01-27 | 2023-05-30 | Spree3D Corporation | Automatic creation of a photorealistic customized animated garmented avatar |
US20220207828A1 (en) * | 2020-12-30 | 2022-06-30 | Spree3D Corporation | Systems and methods of three-dimensional modeling for use in generating a realistic computer avatar and garments |
US20220343601A1 (en) * | 2021-04-21 | 2022-10-27 | Fyusion, Inc. | Weak multi-view supervision for surface mapping estimation |
US11769346B2 (en) | 2021-06-03 | 2023-09-26 | Spree3D Corporation | Video reenactment with hair shape and motion transfer |
US11836905B2 (en) | 2021-06-03 | 2023-12-05 | Spree3D Corporation | Image reenactment with illumination disentanglement |
US11854579B2 (en) | 2021-06-03 | 2023-12-26 | Spree3D Corporation | Video reenactment taking into account temporal information |
WO2023278488A1 (en) * | 2021-06-29 | 2023-01-05 | Variable Technologies Corporation | Systems and methods for volumizing and encoding two-dimensional images |
US20230027519A1 (en) * | 2021-07-13 | 2023-01-26 | Tencent America LLC | Image based sampling metric for quality assessment |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010026272A1 (en) * | 2000-04-03 | 2001-10-04 | Avihay Feld | System and method for simulation of virtual wear articles on virtual models |
US20030110099A1 (en) * | 2001-12-11 | 2003-06-12 | Philips Electronics North America Corporation | Virtual wearing of clothes |
US20090310861A1 (en) * | 2005-10-31 | 2009-12-17 | Sony United Kingdom Limited | Image processing |
US20110029561A1 (en) * | 2009-07-31 | 2011-02-03 | Malcolm Slaney | Image similarity from disparate sources |
US20110040539A1 (en) * | 2009-08-12 | 2011-02-17 | Szymczyk Matthew | Providing a simulation of wearing items such as garments and/or accessories |
Family Cites Families (476)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US3927933A (en) | 1973-08-06 | 1975-12-23 | Humphrey Instruments Inc | Apparatus for opthalmological prescription readout |
DE2934263C3 (en) | 1979-08-24 | 1982-03-25 | Fa. Carl Zeiss, 7920 Heidenheim | Method and device for the automatic measurement of the vertex power in the main sections of toric spectacle lenses |
US4698564A (en) | 1980-05-20 | 1987-10-06 | Slavin Sidney H | Spinning optics device |
US4522474A (en) | 1980-05-20 | 1985-06-11 | Slavin Sidney H | Spinning optics device |
US4534650A (en) | 1981-04-27 | 1985-08-13 | Inria Institut National De Recherche En Informatique Et En Automatique | Device for the determination of the position of points on the surface of a body |
US4539585A (en) | 1981-07-10 | 1985-09-03 | Spackova Daniela S | Previewer |
US4467349A (en) | 1982-04-07 | 1984-08-21 | Maloomian Laurence G | System and method for composite display |
EP0092364A1 (en) | 1982-04-14 | 1983-10-26 | The Hanwell Optical Co. Limited | A method of and apparatus for dimensioning a lens to fit a spectacle frame |
JPS5955411A (en) | 1982-09-24 | 1984-03-30 | Hoya Corp | Determination method of optimum thickness for spectacle lens |
US4613219A (en) | 1984-03-05 | 1986-09-23 | Burke Marketing Services, Inc. | Eye movement recording apparatus |
JPS6180222A (en) | 1984-09-28 | 1986-04-23 | Asahi Glass Co Ltd | Method and apparatus for adjusting spectacles |
US4781452A (en) | 1984-11-07 | 1988-11-01 | Ace Ronald S | Modular optical manufacturing system |
US5281957A (en) | 1984-11-14 | 1994-01-25 | Schoolman Scientific Corp. | Portable computer and head mounted display |
DE3517321A1 (en) | 1985-05-14 | 1986-11-20 | Fa. Carl Zeiss, 7920 Heidenheim | MULTIFOCAL EYEWEAR LENS WITH AT LEAST ONE SPECTACLE |
US5139373A (en) | 1986-08-14 | 1992-08-18 | Gerber Optical, Inc. | Optical lens pattern making system and method |
US4724617A (en) | 1986-08-14 | 1988-02-16 | Gerber Scientific Products, Inc. | Apparatus for tracing the lens opening in an eyeglass frame |
EP0260710A3 (en) | 1986-09-19 | 1989-12-06 | Hoya Corporation | Method of forming a synthetic image in simulation system for attachment of spectacles |
KR910000591B1 (en) | 1986-10-30 | 1991-01-26 | 가부시기가이샤 도시바 | Glasses frame picture process record method and it's system |
FR2636143B1 (en) | 1988-09-08 | 1990-11-02 | Briot Int | DATA TRANSMISSION DEVICE FOR FACILITATING AND ACCELERATING THE MANUFACTURE OF GLASSES |
US4957369A (en) | 1989-01-23 | 1990-09-18 | California Institute Of Technology | Apparatus for measuring three-dimensional surface geometries |
US5255352A (en) | 1989-08-03 | 1993-10-19 | Computer Design, Inc. | Mapping of two-dimensional surface detail on three-dimensional surfaces |
IE67140B1 (en) | 1990-02-27 | 1996-03-06 | Bausch & Lomb | Lens edging system |
FR2678405B1 (en) | 1991-06-25 | 1993-09-24 | Chainet Patrice | INSTALLATION FOR ASSISTING THE CASH MANAGEMENT OF A DISTRIBUTION CENTER, AS WELL AS THE PROCESS IMPLEMENTED FOR SAID INSTALLATION. |
US5257198A (en) | 1991-12-18 | 1993-10-26 | Schoyck Carol G Van | Method of transmitting edger information to a remote numerically controlled edger |
DE69332650T2 (en) | 1992-06-24 | 2003-08-21 | Hoya Corp | Manufacture of eyeglass lenses |
SE470440B (en) | 1992-08-12 | 1994-03-14 | Jan Erik Juto | Method and apparatus for rhino-osteometric measurement |
US5280570A (en) | 1992-09-11 | 1994-01-18 | Jordan Arthur J | Spectacle imaging and lens simulating system and method |
US5428448A (en) | 1993-10-20 | 1995-06-27 | Augen Wecken Plasticos S.R.L. De C.V. | Method and apparatus for non-contact digitazation of frames and lenses |
DE4427071A1 (en) | 1994-08-01 | 1996-02-08 | Wernicke & Co Gmbh | Procedure for determining boundary data |
US5550602A (en) | 1994-11-09 | 1996-08-27 | Johannes Braeuning | Apparatus and method for examining visual functions |
JP3543395B2 (en) | 1994-11-17 | 2004-07-14 | 株式会社日立製作所 | Service provision and usage |
WO1996024877A2 (en) | 1995-02-01 | 1996-08-15 | Bausch & Lomb Incorporated | Temple for eyewear |
US5844573A (en) | 1995-06-07 | 1998-12-01 | Massachusetts Institute Of Technology | Image compression by pointwise prototype correspondence using shape and texture information |
US5774129A (en) | 1995-06-07 | 1998-06-30 | Massachusetts Institute Of Technology | Image analysis and synthesis networks using shape and texture information |
US6016150A (en) * | 1995-08-04 | 2000-01-18 | Microsoft Corporation | Sprite compositor and method for performing lighting and shading operations using a compositor to combine factored image layers |
US5592248A (en) | 1995-11-16 | 1997-01-07 | Norton; Ross A. | Computerized method for fitting eyeglasses |
US5682210A (en) | 1995-12-08 | 1997-10-28 | Weirich; John | Eye contact lens video display system |
US5720649A (en) | 1995-12-22 | 1998-02-24 | Gerber Optical, Inc. | Optical lens or lap blank surfacing machine, related method and cutting tool for use therewith |
US5988862A (en) | 1996-04-24 | 1999-11-23 | Cyra Technologies, Inc. | Integrated system for quickly and accurately imaging and modeling three dimensional objects |
DE19616526A1 (en) | 1996-04-25 | 1997-11-06 | Rainer Jung | Machine for the machining of optical materials for the production of optical parts |
AU6358798A (en) | 1996-09-26 | 1998-04-17 | Mcdonnell Douglas Corporation | Head mounted display with fibre optic image transfer from flat panel |
US5809580A (en) | 1996-12-20 | 1998-09-22 | Bausch & Lomb Incorporated | Multi-sport goggle with interchangeable strap and tear-off lens system |
DE19702287C2 (en) | 1997-01-23 | 1999-02-11 | Wernicke & Co Gmbh | Method for determining the course of the facets on the edge of spectacle lenses to be processed and for controlling the processing of shapes in accordance with the determined course of the facets |
DE69810855T2 (en) | 1997-02-06 | 2003-10-16 | Luxottica Leasing S P A | CONFIGURATION OF AN ELECTRICAL CONNECTION FOR AN ELECTROOPTICAL DEVICE |
US5983201A (en) | 1997-03-28 | 1999-11-09 | Fay; Pierre N. | System and method enabling shopping from home for fitted eyeglass frames |
US6420698B1 (en) | 1997-04-24 | 2002-07-16 | Cyra Technologies, Inc. | Integrated system for quickly and accurately imaging and modeling three-dimensional objects |
CA2289533A1 (en) | 1997-05-15 | 1998-11-19 | Palantir Software, Inc. | Multimedia supplement for pc accessible recorded media |
AU753161B2 (en) | 1997-05-16 | 2002-10-10 | Hoya Corporation | System for making spectacles to order |
IT1293363B1 (en) | 1997-05-29 | 1999-02-25 | Killer Loop Eyewear Srl | INTERCONNECTION DEVICE, PARTICULARLY FOR GLASSES |
US6492986B1 (en) | 1997-06-02 | 2002-12-10 | The Trustees Of The University Of Pennsylvania | Method for human face shape and motion estimation based on integrating optical flow and deformable models |
US6208347B1 (en) | 1997-06-23 | 2001-03-27 | Real-Time Geometry Corporation | System and method for computer modeling of 3D objects and 2D images by mesh constructions that incorporate non-spatial data such as color or texture |
US6647146B1 (en) | 1997-08-05 | 2003-11-11 | Canon Kabushiki Kaisha | Image processing apparatus |
EP0901105A1 (en) | 1997-08-05 | 1999-03-10 | Canon Kabushiki Kaisha | Image processing apparatus |
DE69823116D1 (en) | 1997-08-05 | 2004-05-19 | Canon Kk | Image processing method and device |
US6018339A (en) | 1997-08-15 | 2000-01-25 | Stevens; Susan | Automatic visual correction for computer screen |
WO1999015945A2 (en) | 1997-09-23 | 1999-04-01 | Enroute, Inc. | Generating three-dimensional models of objects defined by two-dimensional image data |
USD426847S (en) | 1997-09-30 | 2000-06-20 | Bausch & Lomb Incorporated | Eyewear |
US5880806A (en) | 1997-10-16 | 1999-03-09 | Bausch & Lomb Incorporated | Eyewear frame construction |
US6249600B1 (en) | 1997-11-07 | 2001-06-19 | The Trustees Of Columbia University In The City Of New York | System and method for generation of a three-dimensional solid model |
US6139143A (en) | 1997-12-11 | 2000-10-31 | Bausch & Lomb Incorporated | Temple for eyewear having an integrally formed serpentine hinge |
US6310627B1 (en) | 1998-01-20 | 2001-10-30 | Toyo Boseki Kabushiki Kaisha | Method and system for generating a stereoscopic image of a garment |
AU753506B2 (en) | 1998-02-03 | 2002-10-17 | Tsuyoshi Saigo | Simulation system for wearing glasses |
DE19804428A1 (en) | 1998-02-05 | 1999-08-19 | Wernicke & Co Gmbh | Method for marking or drilling holes in spectacle lenses and device for carrying out the method |
US6356271B1 (en) | 1998-02-17 | 2002-03-12 | Silicon Graphics, Inc. | Computer generated paint stamp seaming compensation |
US6144388A (en) | 1998-03-06 | 2000-11-07 | Bornstein; Raanan | Process for displaying articles of clothing on an image of a person |
US6233049B1 (en) | 1998-03-25 | 2001-05-15 | Minolta Co., Ltd. | Three-dimensional measurement apparatus |
USD433052S (en) | 1998-05-06 | 2000-10-31 | Luxottica Leasing S.P.A. | Eyewear |
AU3976999A (en) | 1998-05-07 | 1999-11-23 | Bausch & Lomb Incorporated | Eyewear frame construction |
US6139141A (en) | 1998-05-20 | 2000-10-31 | Altair Holding Company | Auxiliary eyeglasses with magnetic clips |
US6072496A (en) | 1998-06-08 | 2000-06-06 | Microsoft Corporation | Method and system for capturing and representing 3D geometry, color and shading of facial expressions and other animated objects |
AU4558299A (en) | 1998-06-12 | 1999-12-30 | Bausch & Lomb Incorporated | Eyewear with replaceable lens system |
US5926248A (en) | 1998-06-26 | 1999-07-20 | Bausch & Lomb, Incorporated | Sunglass lens laminate |
US6563499B1 (en) | 1998-07-20 | 2003-05-13 | Geometrix, Inc. | Method and apparatus for generating a 3D region from a surrounding imagery |
US6999073B1 (en) | 1998-07-20 | 2006-02-14 | Geometrix, Inc. | Method and system for generating fully-textured 3D |
USD420380S (en) | 1998-07-31 | 2000-02-08 | Luxottica Leasing S.P.A. | Eyewear temple |
USD425543S (en) | 1998-08-21 | 2000-05-23 | Luxottica Leasing S.P.A. | Eyewear |
US6095650A (en) | 1998-09-22 | 2000-08-01 | Virtual Visual Devices, Llc | Interactive eyewear selection system |
KR100294923B1 (en) | 1998-10-02 | 2001-09-07 | 윤종용 | 3-D mesh coding/decoding method and apparatus for error resilience and incremental rendering |
US6222621B1 (en) | 1998-10-12 | 2001-04-24 | Hoyo Corporation | Spectacle lens evaluation method and evaluation device |
JP4086429B2 (en) | 1998-10-12 | 2008-05-14 | Hoya株式会社 | Evaluation method and apparatus for spectacle lens |
US6307568B1 (en) | 1998-10-28 | 2001-10-23 | Imaginarix Ltd. | Virtual dressing over the internet |
USD420037S (en) | 1998-11-19 | 2000-02-01 | Luxottica Leasing S.P.A. | Eyewear |
USD432156S (en) | 1998-11-19 | 2000-10-17 | Luxottica Leasing S.P.A. | Eyewear |
US6466205B2 (en) | 1998-11-19 | 2002-10-15 | Push Entertainment, Inc. | System and method for creating 3D models from 2D sequential image data |
US6132044A (en) | 1998-11-20 | 2000-10-17 | Luxottica Leasing S.P.A | Filter for a special purpose lens and method of making filter |
JP4025442B2 (en) | 1998-12-01 | 2007-12-19 | 富士通株式会社 | 3D model conversion apparatus and method |
WO2000033240A1 (en) | 1998-12-02 | 2000-06-08 | The Victoria University Of Manchester | Face sub-space determination |
US6281903B1 (en) | 1998-12-04 | 2001-08-28 | International Business Machines Corporation | Methods and apparatus for embedding 2D image content into 3D models |
US6024444A (en) | 1998-12-18 | 2000-02-15 | Luxottica Leasing S.P.A. | Eyewear lens retention apparatus and method |
KR100317138B1 (en) * | 1999-01-19 | 2001-12-22 | 윤덕용 | Three-dimensional face synthesis method using facial texture image from several views |
KR100292837B1 (en) | 1999-01-28 | 2001-06-15 | 장두순 | online ticket sales system and method for the same |
US6456287B1 (en) | 1999-02-03 | 2002-09-24 | Isurftv | Method and apparatus for 3D model creation based on 2D images |
USD424095S (en) | 1999-02-03 | 2000-05-02 | Luxottica Leasing S.P.A. | Eyewear front |
USD427227S (en) | 1999-02-09 | 2000-06-27 | Luxottica Leasing S.P.A. | Eyewear |
KR100608406B1 (en) | 1999-02-12 | 2006-08-02 | 호야 가부시키가이샤 | Eyeglass and its Manufacturing Method |
USD423034S (en) | 1999-02-19 | 2000-04-18 | Luxottica Leasing S.P.A. | Eyewear |
US6305656B1 (en) | 1999-02-26 | 2001-10-23 | Dash-It Usa Inc. | Magnetic coupler and various embodiments thereof |
USD420379S (en) | 1999-03-01 | 2000-02-08 | Luxottica Leasing S.P.A. | Eyewear front |
USD426568S (en) | 1999-03-01 | 2000-06-13 | Luxottica Leasing S.P.A. | Eyewear |
USD417883S (en) | 1999-03-03 | 1999-12-21 | Luxottica Leasing S.P.A. | Eyewear temple |
JP3599268B2 (en) | 1999-03-08 | 2004-12-08 | 株式会社ソニー・コンピュータエンタテインメント | Image processing method, image processing apparatus, and recording medium |
US20110026606A1 (en) | 1999-03-11 | 2011-02-03 | Thomson Licensing | System and method for enhancing the visibility of an object in a digital picture |
DE69934478T2 (en) | 1999-03-19 | 2007-09-27 | MAX-PLANCK-Gesellschaft zur Förderung der Wissenschaften e.V. | Method and apparatus for image processing based on metamorphosis models |
USD421764S (en) | 1999-04-05 | 2000-03-21 | Luxottica Leasing S.P.A. | Eyewear front |
USD427225S (en) | 1999-04-08 | 2000-06-27 | Luxottica Leasing S.P.A. | Goggle |
IT1312246B1 (en) | 1999-04-09 | 2002-04-09 | Francesco Lauricella | APPARATUS WITH PHOTOSENSITIVE CHESSBOARD FOR THE CONTROL OF A PERSONAL COMPUTER BY CONCENTRATED LIGHT EMISSION, APPLIED TO THE HEAD |
AU4362000A (en) | 1999-04-19 | 2000-11-02 | I Pyxidis Llc | Methods and apparatus for delivering and viewing distributed entertainment broadcast objects as a personalized interactive telecast |
USD422011S (en) | 1999-04-29 | 2000-03-28 | Luxottica Leasing S.P.A. | Eyewear front |
USD434788S (en) | 1999-12-15 | 2000-12-05 | Luxottica Leasing S.P.A. | Eyewear |
USD424094S (en) | 1999-04-30 | 2000-05-02 | Luxottica Leasing S.P.A. | Eyewear |
USD423556S (en) | 1999-04-30 | 2000-04-25 | Luxottica Leasing S.P.A. | Eyewear |
USD423552S (en) | 1999-04-30 | 2000-04-25 | Luxottica Leasing S.P.A. | Eyewear |
USD424096S (en) | 1999-05-12 | 2000-05-02 | Luxottica Leasing S.P.A. | Eyewear front |
USD425542S (en) | 1999-05-27 | 2000-05-23 | Luxottica Leasing S.P.A. | Eyewear |
USD423553S (en) | 1999-06-24 | 2000-04-25 | Luxottica Leasing S.P.A. | Eyewear |
US6415051B1 (en) | 1999-06-24 | 2002-07-02 | Geometrix, Inc. | Generating 3-D models using a manually operated structured light source |
USD423554S (en) | 1999-06-24 | 2000-04-25 | Luxottica Leasing S.P.A. | Eyewear front |
US6760488B1 (en) | 1999-07-12 | 2004-07-06 | Carnegie Mellon University | System and method for generating a three-dimensional model from a two-dimensional image sequence |
USD423557S (en) | 1999-09-01 | 2000-04-25 | Luxottica Leasing S.P.A. | Eyewear temple |
US6922494B1 (en) | 1999-09-24 | 2005-07-26 | Eye Web, Inc. | Automated image scaling |
US6650324B1 (en) | 1999-10-29 | 2003-11-18 | Intel Corporation | Defining surface normals in a 3D surface mesh |
USD430591S (en) | 1999-11-01 | 2000-09-05 | Luxottica Leasing S.P.A. | Eyewear temple |
WO2001032074A1 (en) | 1999-11-04 | 2001-05-10 | Stefano Soatto | System for selecting and designing eyeglass frames |
US6583792B1 (en) | 1999-11-09 | 2003-06-24 | Newag Digital, Llc | System and method for accurately displaying superimposed images |
EP1228479B1 (en) | 1999-11-09 | 2005-04-27 | The University of Manchester | Object class identification, verification or object image synthesis |
US7663648B1 (en) | 1999-11-12 | 2010-02-16 | My Virtual Model Inc. | System and method for displaying selected garments on a computer-simulated mannequin |
US6671538B1 (en) | 1999-11-26 | 2003-12-30 | Koninklijke Philips Electronics, N.V. | Interface system for use with imaging devices to facilitate visualization of image-guided interventional procedure planning |
US7234937B2 (en) | 1999-11-30 | 2007-06-26 | Orametrix, Inc. | Unified workstation for virtual craniofacial diagnosis, treatment planning and therapeutics |
WO2001045029A2 (en) | 1999-12-10 | 2001-06-21 | Lennon Jerry W | Customer image capture and use thereof in a retailing system |
US6980690B1 (en) | 2000-01-20 | 2005-12-27 | Canon Kabushiki Kaisha | Image processing apparatus |
US6377281B1 (en) | 2000-02-17 | 2002-04-23 | The Jim Henson Company | Live performance control of computer graphic characters |
DE10007705A1 (en) | 2000-02-19 | 2001-09-06 | Keune Thomas | Method for matching spectacles to potential wearer via Internet, in which wearer records images of themselves wearing reference marker, using digital camera connected to computer and these are transmitted to server |
US6419549B2 (en) | 2000-02-29 | 2002-07-16 | Asahi Kogaku Kogyo Kabushiki Kaisha | Manufacturing method of spectacle lenses and system thereof |
JP4898055B2 (en) | 2000-03-08 | 2012-03-14 | Pia Corporation | Ticket transfer system |
US6807290B2 (en) | 2000-03-09 | 2004-10-19 | Microsoft Corporation | Rapid computer modeling of faces for animation |
JP3853756B2 (en) | 2000-03-17 | 2006-12-06 | Topcon Corporation | Eyeglass lens processing simulation equipment |
JP2001330806A (en) | 2000-03-17 | 2001-11-30 | Topcon Corp | Spectacle frame synthesizing system and spectacle frame sales method |
EP1136869A1 (en) | 2000-03-17 | 2001-09-26 | Kabushiki Kaisha TOPCON | Eyeglass frame selecting system |
JP2001340296A (en) | 2000-03-30 | 2001-12-11 | Topcon Corp | Optometric system |
EP1268173B1 (en) | 2000-03-30 | 2005-12-07 | Q2100, Inc. | Apparatus and system for the production of plastic lenses |
AU2000263275A1 (en) | 2000-04-14 | 2001-10-30 | Pavel Efimovich Golikov | Method for increasing visual working capacity when one is working with display facilities, light-filter devices for performing said method and method for producing these devices |
DE10033983A1 (en) | 2000-04-27 | 2001-10-31 | Frank Mothes | Appliance for determining eyeglass centering data with the aid of video camera and computer |
US7224357B2 (en) | 2000-05-03 | 2007-05-29 | University Of Southern California | Three-dimensional modeling based on photographic images |
US6968075B1 (en) | 2000-05-09 | 2005-11-22 | Chang Kurt C | System and method for three-dimensional shape and size measurement |
AU6056301A (en) | 2000-05-18 | 2001-11-26 | Visionix Ltd. | Spectacles fitting system and fitting methods useful therein |
WO2001091016A1 (en) | 2000-05-25 | 2001-11-29 | Realitybuy, Inc. | A real time, three-dimensional, configurable, interactive product display system and method |
US6535223B1 (en) | 2000-05-31 | 2003-03-18 | Schmidt Laboratories, Inc. | Method and system for determining pupillary distance and element height |
US7489768B1 (en) | 2000-06-01 | 2009-02-10 | Jonathan Strietzel | Method and apparatus for telecommunications advertising |
FR2810770B1 (en) | 2000-06-23 | 2003-01-03 | France Telecom | REFINING A TRIANGULAR MESH REPRESENTATIVE OF AN OBJECT IN THREE DIMENSIONS |
DE10031042A1 (en) | 2000-06-26 | 2002-01-03 | Autodesk Inc | A method for generating a 2D view of a 3D model, the 3D model comprising at least one object |
FI20001688A (en) | 2000-07-20 | 2002-01-21 | Juhani Wahlgren | Method and apparatus for simultaneous text display of theater, opera, etc. performances or event or fiction information |
US7523411B2 (en) | 2000-08-22 | 2009-04-21 | Bruce Carlin | Network-linked interactive three-dimensional composition and display of saleable objects in situ in viewer-selected scenes for purposes of object promotion and procurement, and generation of object advertisements |
US7062722B1 (en) | 2000-08-22 | 2006-06-13 | Bruce Carlin | Network-linked interactive three-dimensional composition and display of saleable objects in situ in viewer-selected scenes for purposes of promotion and procurement |
US6791584B1 (en) | 2000-09-05 | 2004-09-14 | Yiling Xie | Method of scaling face image with spectacle frame image through computer |
US6386562B1 (en) | 2000-09-11 | 2002-05-14 | Hui Shan Kuo | Scooter having changeable steering mechanism |
US6664956B1 (en) | 2000-10-12 | 2003-12-16 | Momentum Bilgisayar, Yazilim, Danismanlik, Ticaret A. S. | Method for generating a personalized 3-D face model |
WO2002035280A1 (en) | 2000-10-27 | 2002-05-02 | Hoya Corporation | Production method for spectacle lens and supply system for spectacle lens |
US7736147B2 (en) | 2000-10-30 | 2010-06-15 | Align Technology, Inc. | Systems and methods for bite-setting teeth models |
US6792401B1 (en) | 2000-10-31 | 2004-09-14 | Diamond Visionics Company | Internet-based modeling kiosk and method for fitting and selling prescription eyeglasses |
US6661433B1 (en) | 2000-11-03 | 2003-12-09 | Gateway, Inc. | Portable wardrobe previewing device |
CA2326087A1 (en) | 2000-11-16 | 2002-05-16 | Craig Summers | Inward-looking imaging system |
US6493073B2 (en) | 2000-12-11 | 2002-12-10 | Sheldon L. Epstein | System and method for measuring properties of an optical component |
IL140494A0 (en) | 2000-12-22 | 2002-02-10 | | Pneumatic control system for a biopsy device |
GB0101371D0 (en) | 2001-01-19 | 2001-03-07 | Virtual Mirrors Ltd | Production and visualisation of garments |
US7016824B2 (en) * | 2001-02-06 | 2006-03-21 | Geometrix, Inc. | Interactive try-on platform for eyeglasses |
GB2372165A (en) | 2001-02-10 | 2002-08-14 | Hewlett Packard Co | A method of selectively storing images |
DE10106562B4 (en) | 2001-02-13 | 2008-07-03 | Rodenstock Gmbh | Method for demonstrating the influence of a particular spectacle frame and the optical glasses used in this spectacle frame |
US6726463B2 (en) | 2001-02-20 | 2004-04-27 | Q2100, Inc. | Apparatus for preparing an eyeglass lens having a dual computer system controller |
US6893245B2 (en) | 2001-02-20 | 2005-05-17 | Q2100, Inc. | Apparatus for preparing an eyeglass lens having a computer system controller |
US7051290B2 (en) | 2001-02-20 | 2006-05-23 | Q2100, Inc. | Graphical interface for receiving eyeglass prescription information |
US6808381B2 (en) | 2001-02-20 | 2004-10-26 | Q2100, Inc. | Apparatus for preparing an eyeglass lens having a controller |
US6950804B2 (en) | 2001-02-26 | 2005-09-27 | Pika Media | Systems and methods for distributing targeted multimedia content and advertising |
US7034818B2 (en) | 2001-03-16 | 2006-04-25 | Mitsubishi Electric Research Laboratories, Inc. | System and method for converting range data to 3D models |
US6943789B2 (en) | 2001-03-16 | 2005-09-13 | Mitsubishi Electric Research Labs, Inc | Conversion of adaptively sampled distance fields to triangles |
US6873720B2 (en) | 2001-03-20 | 2005-03-29 | Synopsys, Inc. | System and method of providing mask defect printability analysis |
ITBO20010218A1 (en) | 2001-04-13 | 2002-10-13 | Luxottica S P A | FRAME FOR GLASSES WITH PERFECTED STRATIFORM COATING |
US7156655B2 (en) | 2001-04-13 | 2007-01-02 | Orametrix, Inc. | Method and system for comprehensive evaluation of orthodontic treatment using unified workstation |
US7717708B2 (en) | 2001-04-13 | 2010-05-18 | Orametrix, Inc. | Method and system for integrated orthodontic treatment planning using unified workstation |
US7003515B1 (en) | 2001-05-16 | 2006-02-21 | Pandora Media, Inc. | Consumer item matching method and system |
JP3527489B2 (en) | 2001-08-03 | 2004-05-17 | Sony Computer Entertainment Inc. | Drawing processing method and apparatus, recording medium storing drawing processing program, drawing processing program |
US20030030904A1 (en) | 2001-08-13 | 2003-02-13 | Mei-Ling Huang | Stereoscopic viewing assembly with adjustable fields of vision in two dimensions |
US7123263B2 (en) | 2001-08-14 | 2006-10-17 | Pulse Entertainment, Inc. | Automatic 3D modeling system and method |
DE10140656A1 (en) | 2001-08-24 | 2003-03-13 | Rodenstock Optik G | Process for designing and optimizing an individual lens |
WO2003021394A2 (en) | 2001-08-31 | 2003-03-13 | Solidworks Corporation | Simultaneous use of 2d and 3d modeling data |
US7103211B1 (en) | 2001-09-04 | 2006-09-05 | Geometrix, Inc. | Method and apparatus for generating 3D face models from one camera |
US6961439B2 (en) | 2001-09-26 | 2005-11-01 | The United States Of America As Represented By The Secretary Of The Navy | Method and apparatus for producing spatialized audio signals |
US7634103B2 (en) | 2001-10-01 | 2009-12-15 | L'oreal S.A. | Analysis using a three-dimensional facial image |
US7081893B2 (en) * | 2001-10-10 | 2006-07-25 | Sony Computer Entertainment America Inc. | System and method for point pushing to render polygons in environments with changing levels of detail |
US7209557B2 (en) | 2001-10-18 | 2007-04-24 | Lenovo Singapore Pte, Ltd | Apparatus and method for computer screen security |
US6682195B2 (en) | 2001-10-25 | 2004-01-27 | Ophthonix, Inc. | Custom eyeglass manufacturing method |
US7434931B2 (en) | 2001-10-25 | 2008-10-14 | Ophthonix | Custom eyeglass manufacturing method |
KR100446635B1 (en) | 2001-11-27 | 2004-09-04 | Samsung Electronics Co., Ltd. | Apparatus and method for depth image-based representation of 3-dimensional object |
US7221809B2 (en) | 2001-12-17 | 2007-05-22 | Genex Technologies, Inc. | Face recognition system and method |
US20040217956A1 (en) | 2002-02-28 | 2004-11-04 | Paul Besl | Method and system for processing, compressing, streaming, and interactive rendering of 3D color image data |
JP2003271965A (en) | 2002-03-19 | 2003-09-26 | Fujitsu Ltd | Program, method and device for authentication of handwritten signature |
EP1495447A1 (en) * | 2002-03-26 | 2005-01-12 | KIM, So-Woon | System and method for 3-dimension simulation of glasses |
WO2003084448A1 (en) | 2002-04-11 | 2003-10-16 | Sendo Co., Ltd. | Color-blindness correcting eyeglass and method for manufacturing color-blindness correcting eyeglass |
US20040004633A1 (en) | 2002-07-03 | 2004-01-08 | Perry James N. | Web-based system and method for ordering and fitting prescription lens eyewear |
WO2004010384A2 (en) | 2002-07-23 | 2004-01-29 | Imagecom, Inc. | System and method for creating and updating a three-dimensional model and creating a related neutral file format |
JP2004062565A (en) | 2002-07-30 | 2004-02-26 | Canon Inc | Image processor and image processing method, and program storage medium |
US6775128B2 (en) | 2002-10-03 | 2004-08-10 | Julio Leitao | Protective cover sleeve for laptop computer screens |
US6825838B2 (en) | 2002-10-11 | 2004-11-30 | Sonocine, Inc. | 3D modeling system |
EP1450201B1 (en) | 2003-02-22 | 2009-09-16 | Hans-Joachim Ollendorf | Method for determining interpupillery distance |
JP2006522411A (en) | 2003-03-06 | 2006-09-28 | Animetrics, Inc. | Generating an image database of objects containing multiple features |
JP3742394B2 (en) | 2003-03-07 | 2006-02-01 | Digital Fashion Ltd. | Virtual try-on display device, virtual try-on display method, virtual try-on display program, and computer-readable recording medium storing the program |
US7711155B1 (en) | 2003-04-14 | 2010-05-04 | Videomining Corporation | Method and system for enhancing three dimensional face modeling using demographic classification |
US7242807B2 (en) | 2003-05-05 | 2007-07-10 | Fish & Richardson P.C. | Imaging of biometric information based on three-dimensional shapes |
US20040223631A1 (en) | 2003-05-07 | 2004-11-11 | Roman Waupotitsch | Face recognition based on obtaining two dimensional information from three-dimensional face shapes |
TW594594B (en) | 2003-05-16 | 2004-06-21 | Ind Tech Res Inst | A multilevel texture processing method for mapping multiple images onto 3D models |
US7421097B2 (en) | 2003-05-27 | 2008-09-02 | Honeywell International Inc. | Face identification verification using 3 dimensional modeling |
US20040257364A1 (en) | 2003-06-18 | 2004-12-23 | Basler Gregory A. | Shadow casting within a virtual three-dimensional terrain model |
GB2403883B (en) | 2003-07-08 | 2007-08-22 | Delcam Plc | Method and system for the modelling of 3D objects |
US7814436B2 (en) | 2003-07-28 | 2010-10-12 | Autodesk, Inc. | 3D scene orientation indicator system with scene orientation change capability |
US7151545B2 (en) | 2003-08-06 | 2006-12-19 | Landmark Graphics Corporation | System and method for applying accurate three-dimensional volume textures to arbitrary triangulated surfaces |
US20060161474A1 (en) | 2003-08-06 | 2006-07-20 | David Diamond | Delivery of targeted offers for movie theaters and other retail stores |
US7426292B2 (en) | 2003-08-07 | 2008-09-16 | Mitsubishi Electric Research Laboratories, Inc. | Method for determining optimal viewpoints for 3D face modeling and face recognition |
US7212664B2 (en) | 2003-08-07 | 2007-05-01 | Mitsubishi Electric Research Laboratories, Inc. | Constructing heads from 3D models and 2D silhouettes |
US20050111705A1 (en) | 2003-08-26 | 2005-05-26 | Roman Waupotitsch | Passive stereo sensing for 3D facial shape biometrics |
KR100682889B1 (en) * | 2003-08-29 | 2007-02-15 | Samsung Electronics Co., Ltd. | Method and Apparatus for image-based photorealistic 3D face modeling |
JP2005100367A (en) | 2003-09-02 | 2005-04-14 | Fuji Photo Film Co Ltd | Image generating apparatus, image generating method and image generating program |
WO2005029158A2 (en) | 2003-09-12 | 2005-03-31 | Neal Michael R | Method of interactive system for previewing and selecting eyewear |
EP1671483B1 (en) | 2003-10-06 | 2014-04-09 | Disney Enterprises, Inc. | System and method of playback and feature control for video players |
JP4860472B2 (en) | 2003-10-09 | 2012-01-25 | University of York | Image recognition |
US7290201B1 (en) | 2003-11-12 | 2007-10-30 | Xilinx, Inc. | Scheme for eliminating the effects of duty cycle asymmetry in clock-forwarded double data rate interface applications |
US7889209B2 (en) * | 2003-12-10 | 2011-02-15 | Sensable Technologies, Inc. | Apparatus and methods for wrapping texture onto the surface of a virtual object |
JP3828538B2 (en) | 2003-12-25 | 2006-10-04 | Toshiba Corporation | Semiconductor integrated circuit device and differential small amplitude data transmission device |
US20050208457A1 (en) | 2004-01-05 | 2005-09-22 | Wolfgang Fink | Digital object recognition audio-assistant for the visually impaired |
WO2005076210A1 (en) | 2004-02-05 | 2005-08-18 | Vodafone K.K. | Image processing method, image processing apparatus, and mobile communication terminal apparatus |
FR2866718B1 (en) | 2004-02-24 | 2006-05-05 | Essilor Int | CENTERING-BLOCKING DEVICE OF AN OPHTHALMIC SPECTACLE LENS, AUTOMATIC DETECTION METHOD AND ASSOCIATED MANUAL CENTERING METHODS |
US7154529B2 (en) | 2004-03-12 | 2006-12-26 | Hoke Donald G | System and method for enabling a person to view images of the person wearing an accessory before purchasing the accessory |
JP2005269022A (en) | 2004-03-17 | 2005-09-29 | Ricoh Co Ltd | Encoder and encoding method, encoded data editor and editing method and program, and recording medium |
EP1728467A4 (en) | 2004-03-26 | 2009-09-16 | Hoya Corp | Spectacle lens supply system, spectacle wearing parameter measurement device, spectacle wearing inspection system, spectacle lens, and spectacle |
US7441895B2 (en) | 2004-03-26 | 2008-10-28 | Hoya Corporation | Spectacle lens supply system, spectacle wearing parameter measurement apparatus, spectacle wearing test system, spectacle lens, and spectacle |
WO2006078265A2 (en) | 2004-03-30 | 2006-07-27 | Geometrix | Efficient classification of three dimensional face models for human identification and other applications |
US20070013873A9 (en) | 2004-04-29 | 2007-01-18 | Jacobson Joseph M | Low cost portable computing device |
US7630580B1 (en) | 2004-05-04 | 2009-12-08 | AgentSheets, Inc. | Diffusion-based interactive extrusion of 2D images into 3D models |
US7436988B2 (en) | 2004-06-03 | 2008-10-14 | Arizona Board Of Regents | 3D face authentication and recognition based on bilateral symmetry analysis |
US7804997B2 (en) | 2004-06-10 | 2010-09-28 | Technest Holdings, Inc. | Method and system for a three dimensional facial recognition system |
US7133048B2 (en) | 2004-06-30 | 2006-11-07 | Mitsubishi Electric Research Laboratories, Inc. | Variable multilinear models for facial synthesis |
DE102004032191A1 (en) | 2004-07-02 | 2006-01-19 | Scanbull Software Gmbh | Three-dimensional object representing method for e.g. digital camera, involves transferring object representation to mobile terminal via communications network, and representing a view of the object based on received representation |
US20060012748A1 (en) | 2004-07-15 | 2006-01-19 | Parikumar Periasamy | Dynamic multifocal spectacle frame |
US7218323B1 (en) | 2004-08-13 | 2007-05-15 | Ngrain (Canada) Corporation | Method and system for rendering voxel data while addressing multiple voxel set interpenetration |
SE528068C2 (en) | 2004-08-19 | 2006-08-22 | Jan Erik Solem Med Jsolutions | Three dimensional object recognizing method for e.g. aircraft, involves detecting image features in obtained two dimensional representation, and comparing recovered three dimensional shape with reference representation of object |
US7219995B2 (en) | 2004-08-25 | 2007-05-22 | Hans-Joachim Ollendorf | Apparatus for determining the distance between pupils |
EP1794703A4 (en) | 2004-09-17 | 2012-02-29 | Cyberextruder Com Inc | System, method, and apparatus for generating a three-dimensional representation from one or more two-dimensional images |
DE102004059448A1 (en) | 2004-11-19 | 2006-06-01 | Rodenstock Gmbh | Method and apparatus for manufacturing a spectacle lens; System and computer program product for manufacturing a spectacle lens |
WO2006062908A2 (en) | 2004-12-09 | 2006-06-15 | Image Metrics Limited | Optical motion capturing with fill-in of missing marker points using a human model derived from principal component analysis |
US20060127852A1 (en) | 2004-12-14 | 2006-06-15 | Huafeng Wen | Image based orthodontic treatment viewing system |
KR100511210B1 (en) | 2004-12-27 | 2005-08-30 | G&G Commerce Co., Ltd. | Method for converting 2D image into pseudo 3D image and user-adapted total coordination method using artificial intelligence, and service business method thereof |
US20110102553A1 (en) | 2007-02-28 | 2011-05-05 | Tessera Technologies Ireland Limited | Enhanced real-time face models from stereo imaging |
US7533453B2 (en) | 2005-01-24 | 2009-05-19 | Yancy Virgil T | E-facet optical lens |
WO2006084385A1 (en) | 2005-02-11 | 2006-08-17 | Macdonald Dettwiler & Associates Inc. | 3d imaging system |
US20120021835A1 (en) | 2005-02-11 | 2012-01-26 | Iprd Labs Llc | Systems and methods for server based video gaming |
US20060212150A1 (en) | 2005-02-18 | 2006-09-21 | Sims William Jr | Method of providing 3D models |
CN101208723A (en) | 2005-02-23 | 2008-06-25 | Craig Summers | Automatic scene modeling for the 3D camera and 3D video |
JP4473754B2 (en) | 2005-03-11 | 2010-06-02 | Toshiba Corporation | Virtual fitting device |
US20060216680A1 (en) | 2005-03-24 | 2006-09-28 | Eharmony.Com | Selection of relationship improvement content for users in a relationship |
EP1861821A2 (en) | 2005-03-24 | 2007-12-05 | Image Metrics Limited | Method and system for characterization of knee joint morphology |
DE102005014775B4 (en) | 2005-03-31 | 2008-12-11 | Nokia Siemens Networks Gmbh & Co.Kg | Method, communication arrangement and communication device for controlling access to at least one communication device |
US9001215B2 (en) | 2005-06-02 | 2015-04-07 | The Invention Science Fund I, Llc | Estimating shared image device operational capabilities or resources |
US7830384B1 (en) | 2005-04-27 | 2010-11-09 | Image Metrics Limited | Animating graphical objects using input video |
US7415152B2 (en) | 2005-04-29 | 2008-08-19 | Microsoft Corporation | Method and system for constructing a 3D representation of a face from a 2D representation |
US7609859B2 (en) | 2005-06-14 | 2009-10-27 | Mitsubishi Electric Research Laboratories, Inc. | Method and system for generating bi-linear models for faces |
WO2006138525A2 (en) | 2005-06-16 | 2006-12-28 | Strider Labs | System and method for recognition in 2d images using 3d class models |
US7756325B2 (en) | 2005-06-20 | 2010-07-13 | University Of Basel | Estimating 3D shape and texture of a 3D object based on a 2D image of the 3D object |
US7953675B2 (en) | 2005-07-01 | 2011-05-31 | University Of Southern California | Tensor voting in N dimensional spaces |
JP2007013768A (en) * | 2005-07-01 | 2007-01-18 | Konica Minolta Photo Imaging Inc | Imaging apparatus |
US7961914B1 (en) | 2005-07-12 | 2011-06-14 | Smith Robert J D | Portable storage apparatus with integral biometric-based access control system |
CN100403974C (en) | 2005-07-27 | 2008-07-23 | 李信亮 | Method for preparing eyeglass based on photograph of customer's head uploaded and optometry data |
ITBO20050524A1 (en) | 2005-08-05 | 2007-02-06 | Luxottica Srl | LENS FOR MASKS AND GLASSES |
JP4659554B2 (en) | 2005-08-09 | 2011-03-30 | Menicon Co., Ltd. | Ophthalmic lens manufacturing system and manufacturing method |
DE102005038859A1 (en) | 2005-08-17 | 2007-03-01 | Rodenstock Gmbh | Tool for calculating the performance of progressive lenses |
US8218836B2 (en) | 2005-09-12 | 2012-07-10 | Rutgers, The State University Of New Jersey | System and methods for generating three-dimensional images from two-dimensional bioluminescence images and visualizing tumor shapes and locations |
US7563975B2 (en) | 2005-09-14 | 2009-07-21 | Mattel, Inc. | Music production system |
DE102005048436B3 (en) | 2005-10-07 | 2007-03-29 | Buchmann Deutschland Gmbh | Blank lenses adapting method, for spectacle frame, involves testing whether adjustment of one lens requires corresponding concurrent adjustment of another lens to maintain preset tolerances corresponding to vertical amplitude of fusion |
US7755619B2 (en) | 2005-10-13 | 2010-07-13 | Microsoft Corporation | Automatic 3D face-modeling from video |
US7768528B1 (en) | 2005-11-09 | 2010-08-03 | Image Metrics Limited | Replacement of faces in existing video |
US20070104360A1 (en) | 2005-11-09 | 2007-05-10 | Smedia Technology Corporation | System and method for capturing 3D face |
JP2009515578A (en) | 2005-11-10 | 2009-04-16 | Bracco Research S.A. | Immediate visualization of contrast agent concentration for imaging applications |
KR100735564B1 (en) | 2005-12-02 | 2007-07-04 | Samsung Electronics Co., Ltd. | Apparatus, system, and method for mapping information |
JP3962803B2 (en) | 2005-12-16 | 2007-08-22 | International Business Machines Corporation | Head detection device, head detection method, and head detection program |
WO2007074912A1 (en) * | 2005-12-27 | 2007-07-05 | Nec Corporation | Data compression method and device, data decompression method and device and program |
KR101334173B1 (en) | 2006-01-11 | 2013-11-28 | Samsung Electronics Co., Ltd. | Method and apparatus for encoding/decoding graphic data |
US7856125B2 (en) | 2006-01-31 | 2010-12-21 | University Of Southern California | 3D face reconstruction from 2D images |
US7587082B1 (en) | 2006-02-17 | 2009-09-08 | Cognitech, Inc. | Object recognition based on 2D images and 3D models |
US8145545B2 (en) | 2006-02-23 | 2012-03-27 | Nainesh B Rathod | Method of enabling a user to draw a component part as input for searching component parts in a database |
EP2030171A1 (en) * | 2006-04-10 | 2009-03-04 | Avaworks Incorporated | Do-it-yourself photo realistic talking head creation system and method |
US8026917B1 (en) | 2006-05-01 | 2011-09-27 | Image Metrics Ltd | Development tools for animated character rigging |
US8433157B2 (en) | 2006-05-04 | 2013-04-30 | Thomson Licensing | System and method for three-dimensional object reconstruction from two-dimensional images |
US20070262988A1 (en) | 2006-05-09 | 2007-11-15 | Pixar Animation Studios | Method and apparatus for using voxel mip maps and brick maps as geometric primitives in image rendering process |
EP1862110A1 (en) | 2006-05-29 | 2007-12-05 | Essilor International (Compagnie Generale D'optique) | Method for optimizing eyeglass lenses |
US7573489B2 (en) | 2006-06-01 | 2009-08-11 | Industrial Light & Magic | Infilling for 2D to 3D image conversion |
US7573475B2 (en) | 2006-06-01 | 2009-08-11 | Industrial Light & Magic | 2D to 3D image conversion |
WO2008002630A2 (en) | 2006-06-26 | 2008-01-03 | University Of Southern California | Seamless image integration into 3d models |
US8204334B2 (en) | 2006-06-29 | 2012-06-19 | Thomson Licensing | Adaptive pixel-based filtering |
DE102006030204A1 (en) | 2006-06-30 | 2008-01-03 | Rodenstock Gmbh | Pair of spectacle lenses in anisometropia |
DE102006033491A1 (en) | 2006-07-19 | 2008-01-31 | Rodenstock Gmbh | Device and method for determining a wearing position of spectacles, computer program device |
DE102006033490A1 (en) | 2006-07-19 | 2008-01-31 | Rodenstock Gmbh | Apparatus and method for determining a position of a spectacle lens relative to a spectacle frame, computer program device |
EP1881457B1 (en) | 2006-07-21 | 2017-09-13 | Dassault Systèmes | Method for creating a parametric surface symmetric with respect to a given symmetry operation |
US20080136814A1 (en) | 2006-09-17 | 2008-06-12 | Chang Woo Chu | System and method for generating 3-d facial model and animation using one video camera |
KR101478669B1 (en) | 2006-09-29 | 2015-01-02 | Thomson Licensing | Automatic parameter estimation for adaptive pixel-based filtering |
US8073196B2 (en) | 2006-10-16 | 2011-12-06 | University Of Southern California | Detection and tracking of moving objects from a moving platform in presence of strong parallax |
US20080112610A1 (en) | 2006-11-14 | 2008-05-15 | S2, Inc. | System and method for 3d model generation |
US7656402B2 (en) | 2006-11-15 | 2010-02-02 | Tahg, Llc | Method for creating, manufacturing, and distributing three-dimensional models |
US8330801B2 (en) | 2006-12-22 | 2012-12-11 | Qualcomm Incorporated | Complexity-adaptive 2D-to-3D video sequence conversion |
KR100800804B1 (en) | 2006-12-27 | 2008-02-04 | Samsung Electronics Co., Ltd. | Method for photographing panorama picture |
TW200828043A (en) | 2006-12-29 | 2008-07-01 | Cheng-Hsien Yang | Terminal try-on simulation system and operating and applying method thereof |
KR20080086945A (en) | 2006-12-29 | 2008-09-29 | 이상민 | Apparatus and method for coordination simulation for on-line shopping mall |
EP2126841A2 (en) | 2007-01-16 | 2009-12-02 | Optasia Medical, Limited | Image processing systems and methods |
US8199152B2 (en) | 2007-01-16 | 2012-06-12 | Lucasfilm Entertainment Company Ltd. | Combining multiple session content for animation libraries |
US8542236B2 (en) | 2007-01-16 | 2013-09-24 | Lucasfilm Entertainment Company Ltd. | Generating animation libraries |
US8130225B2 (en) | 2007-01-16 | 2012-03-06 | Lucasfilm Entertainment Company Ltd. | Using animation libraries for object identification |
CA2675758C (en) | 2007-01-19 | 2015-05-19 | Thomson Licensing | Reducing contours in digital images |
WO2008089999A1 (en) | 2007-01-25 | 2008-07-31 | Rodenstock Gmbh | Method for optimising a spectacle lens |
US8789946B2 (en) | 2007-01-25 | 2014-07-29 | Rodenstock Gmbh | Reference points for ortho position |
WO2008089995A1 (en) | 2007-01-25 | 2008-07-31 | Rodenstock Gmbh | Method for calculating a spectacle lens having a variable position of the reference points |
US7699300B2 (en) | 2007-02-01 | 2010-04-20 | Toshiba Tec Kabushiki Kaisha | Sheet post-processing apparatus |
US8200502B2 (en) | 2007-02-14 | 2012-06-12 | Optivision, Inc. | Frame tracer web browser component |
US7665843B2 (en) | 2007-02-21 | 2010-02-23 | Yiling Xie | Method and the associate mechanism for stored-image database-driven spectacle frame fitting services over public network |
US20110071804A1 (en) | 2007-02-21 | 2011-03-24 | Yiling Xie | Method And The Associated Mechanism For 3-D Simulation Stored-Image Database-Driven Spectacle Frame Fitting Services Over Public Network |
US20080201641A1 (en) | 2007-02-21 | 2008-08-21 | Yiling Xie | Method And The Associated Mechanism For 3-D Simulation Stored-Image Database-Driven Spectacle Frame Fitting Services Over Public Network |
ATE472140T1 (en) | 2007-02-28 | 2010-07-15 | Fotonation Vision Ltd | SEPARATION OF DIRECTIONAL ILLUMINATION VARIABILITY IN STATISTICAL FACIAL MODELING BASED ON TEXTURE SPACE DECOMPOSITIONS |
US8286083B2 (en) | 2007-03-13 | 2012-10-09 | Ricoh Co., Ltd. | Copying documents from electronic displays |
JP4938093B2 (en) | 2007-03-23 | 2012-05-23 | Thomson Licensing | System and method for region classification of 2D images for 2D-to-3D conversion |
WO2008118147A2 (en) | 2007-03-26 | 2008-10-02 | Thomson Licensing | Method and apparatus for detecting objects of interest in soccer video by color segmentation and shape analysis |
US20080240588A1 (en) | 2007-03-29 | 2008-10-02 | Mikhail Tsoupko-Sitnikov | Image processing method and image processing apparatus |
DE102007020031A1 (en) | 2007-04-27 | 2008-10-30 | Rodenstock Gmbh | Glasses, method of making glasses and computer program product |
US8059917B2 (en) | 2007-04-30 | 2011-11-15 | Texas Instruments Incorporated | 3-D modeling |
US20080271078A1 (en) | 2007-04-30 | 2008-10-30 | Google Inc. | Momentary Electronic Program Guide |
US20080279478A1 (en) | 2007-05-09 | 2008-11-13 | Mikhail Tsoupko-Sitnikov | Image processing method and image processing apparatus |
US20080278633A1 (en) | 2007-05-09 | 2008-11-13 | Mikhail Tsoupko-Sitnikov | Image processing method and image processing apparatus |
US8009880B2 (en) | 2007-05-11 | 2011-08-30 | Microsoft Corporation | Recovering parameters from a sub-optimal image |
US8212812B2 (en) | 2007-05-21 | 2012-07-03 | Siemens Corporation | Active shape model for vehicle modeling and re-identification |
WO2008147809A1 (en) | 2007-05-24 | 2008-12-04 | Schlumberger Canada Limited | Near surface layer modeling |
US20080297503A1 (en) | 2007-05-30 | 2008-12-04 | John Dickinson | System and method for reconstructing a 3D solid model from a 2D line drawing |
GB2449855A (en) | 2007-06-05 | 2008-12-10 | Steven Harbutt | System and method for measuring pupillary distance |
US7848548B1 (en) | 2007-06-11 | 2010-12-07 | Videomining Corporation | Method and system for robust demographic classification using pose independent model from sequence of face images |
US20080310757A1 (en) | 2007-06-15 | 2008-12-18 | George Wolberg | System and related methods for automatically aligning 2D images of a scene to a 3D model of the scene |
US20090010507A1 (en) | 2007-07-02 | 2009-01-08 | Zheng Jason Geng | System and method for generating a 3d model of anatomical structure using a plurality of 2d images |
FI120325B (en) | 2007-07-04 | 2009-09-15 | Theta Optics Ltd Oy | Method of making glasses |
DE102007032564A1 (en) | 2007-07-12 | 2009-01-15 | Rodenstock Gmbh | Method for checking and / or determining user data, computer program product and device |
WO2009023012A1 (en) | 2007-08-16 | 2009-02-19 | Nasir Wajihuddin | Interactive custom design and building of toy vehicle |
BRPI0818693A8 (en) | 2007-10-05 | 2018-08-14 | Essilor Int | METHOD FOR PROVIDING AN OPHTHALMIC LENS FOR EYEGLASSES BY CALCULATING OR SELECTING A DESIGN |
US8090160B2 (en) | 2007-10-12 | 2012-01-03 | The University Of Houston System | Automated method for human face modeling and relighting with application to face recognition |
WO2009067560A1 (en) | 2007-11-20 | 2009-05-28 | Big Stage Entertainment, Inc. | Systems and methods for generating 3d head models and for using the same |
US8144153B1 (en) | 2007-11-20 | 2012-03-27 | Lucasfilm Entertainment Company Ltd. | Model production for animation libraries |
US20090129402A1 (en) | 2007-11-21 | 2009-05-21 | Simple Star, Inc. | Method and System For Scheduling Multimedia Shows |
KR100914847B1 (en) | 2007-12-15 | 2009-09-02 | Electronics and Telecommunications Research Institute | Method and apparatus for creating 3D face model by using multi-view image information |
KR100914845B1 (en) | 2007-12-15 | 2009-09-02 | Electronics and Telecommunications Research Institute | Method and apparatus for 3D reconstruction of an object by using multi-view image information |
KR100940862B1 (en) | 2007-12-17 | 2010-02-09 | Electronics and Telecommunications Research Institute | Head motion tracking method for 3D facial model animation from a video stream |
US8160345B2 (en) | 2008-04-30 | 2012-04-17 | Otismed Corporation | System and method for image segmentation in generating computer models of a joint to undergo arthroplasty |
EP2031434B1 (en) | 2007-12-28 | 2022-10-19 | Essilor International | An asynchronous method for obtaining spectacle features to order |
EP2037314B1 (en) | 2007-12-28 | 2021-12-01 | Essilor International | A method and computer means for choosing spectacle lenses adapted to a frame |
KR101432177B1 (en) | 2008-01-21 | 2014-08-22 | Samsung Electronics Co., Ltd. | Portable device and method for processing the photography the same, and photography processing system having it |
US8217934B2 (en) | 2008-01-23 | 2012-07-10 | Adobe Systems Incorporated | System and methods for rendering transparent surfaces in high depth complexity scenes using hybrid and coherent layer peeling |
US9305389B2 (en) | 2008-02-28 | 2016-04-05 | Autodesk, Inc. | Reducing seam artifacts when applying a texture to a three-dimensional (3D) model |
US8260006B1 (en) | 2008-03-14 | 2012-09-04 | Google Inc. | System and method of aligning images |
DE102008015189A1 (en) | 2008-03-20 | 2009-10-01 | Rodenstock Gmbh | Rescaling the target astigmatism for other additions |
GB2458388A (en) | 2008-03-21 | 2009-09-23 | Dressbot Inc | A collaborative online shopping environment, virtual mall, store, etc. in which payments may be shared, products recommended and users modelled. |
WO2009124151A2 (en) | 2008-04-01 | 2009-10-08 | University Of Southern California | Video feed target tracking |
JP2011517228A (en) | 2008-04-11 | 2011-05-26 | トムソン ライセンシング | System and method for improving visibility of objects in digital images |
US20110227923A1 (en) | 2008-04-14 | 2011-09-22 | Xid Technologies Pte Ltd | Image synthesis method |
WO2009128784A1 (en) | 2008-04-14 | 2009-10-22 | Xid Technologies Pte Ltd | Face expressions identification |
US8274506B1 (en) | 2008-04-28 | 2012-09-25 | Adobe Systems Incorporated | System and methods for creating a three-dimensional view of a two-dimensional map |
KR101085390B1 (en) | 2008-04-30 | 2011-11-21 | Core Logic Inc. | Image presenting method and apparatus for 3D navigation, and mobile apparatus comprising the same apparatus |
WO2009135183A1 (en) | 2008-05-02 | 2009-11-05 | Zentech, Inc. | Automated generation of 3d models from 2d computer-aided design (cad) drawings |
US8737721B2 (en) | 2008-05-07 | 2014-05-27 | Microsoft Corporation | Procedural authoring |
US8199988B2 (en) | 2008-05-16 | 2012-06-12 | Geodigm Corporation | Method and apparatus for combining 3D dental scans with other 3D data sets |
US8126249B2 (en) | 2008-05-30 | 2012-02-28 | Optasia Medical Limited | Methods of and system for detection and tracking of osteoporosis |
US8204299B2 (en) | 2008-06-12 | 2012-06-19 | Microsoft Corporation | 3D content aggregation built into devices |
US20090316945A1 (en) | 2008-06-20 | 2009-12-24 | Akansu Ali N | Transportable Sensor Devices |
US8284190B2 (en) | 2008-06-25 | 2012-10-09 | Microsoft Corporation | Registration of street-level imagery to 3D building models |
US8600121B2 (en) | 2008-07-02 | 2013-12-03 | C-True Ltd. | Face recognition system and method |
US8131063B2 (en) | 2008-07-16 | 2012-03-06 | Seiko Epson Corporation | Model-based object image processing |
US8155411B2 (en) | 2008-07-22 | 2012-04-10 | Pie Medical Imaging B.V. | Method, apparatus and computer program for quantitative bifurcation analysis in 3D using multiple 2D angiographic images |
US8248417B1 (en) | 2008-08-28 | 2012-08-21 | Adobe Systems Incorporated | Flattening 3D images |
TWI463864B (en) | 2008-08-29 | 2014-12-01 | Thomson Licensing | View synthesis with heuristic view merging |
EP2161611A1 (en) | 2008-09-04 | 2010-03-10 | Essilor International (Compagnie Générale D'Optique) | Method for optimizing the settings of an ophtalmic system |
JP5599798B2 (en) | 2008-09-04 | 2014-10-01 | Essilor International (Compagnie Générale d'Optique) | Method for providing finishing parameters |
WO2010039976A1 (en) | 2008-10-03 | 2010-04-08 | 3M Innovative Properties Company | Systems and methods for multi-perspective scene analysis |
US8160325B2 (en) | 2008-10-08 | 2012-04-17 | Fujifilm Medical Systems Usa, Inc. | Method and system for surgical planning |
US20120075296A1 (en) | 2008-10-08 | 2012-03-29 | Strider Labs, Inc. | System and Method for Constructing a 3D Scene Model From an Image |
WO2010042990A1 (en) | 2008-10-16 | 2010-04-22 | Seeing Machines Limited | Online marketing of facial products using real-time face tracking |
KR20100050052A (en) | 2008-11-05 | 2010-05-13 | 김영준 | Virtual glasses wearing method |
EP3964163B1 (en) | 2008-11-20 | 2023-08-02 | Align Technology, Inc. | Orthodontic systems and methods including parametric attachments |
US8982122B2 (en) | 2008-11-24 | 2015-03-17 | Mixamo, Inc. | Real time concurrent design of shape, texture, and motion for 3D character animation |
TW201023092A (en) | 2008-12-02 | 2010-06-16 | Nat Univ Tsing Hua | 3D face model construction method |
TWI382354B (en) | 2008-12-02 | 2013-01-11 | Nat Univ Tsing Hua | Face recognition method |
IT1392623B1 (en) | 2008-12-23 | 2012-03-16 | Luxottica Srl | DEVICE VISUALIZER OF CRYPTED IMAGES VISIBLE ONLY THROUGH A POLARIZED FILTER AND PROCEDURE TO REALIZE IT. |
EP2202560A1 (en) | 2008-12-23 | 2010-06-30 | Essilor International (Compagnie Générale D'Optique) | A method for providing a spectacle ophthalmic lens by calculating or selecting a design |
IT1392435B1 (en) | 2008-12-23 | 2012-03-09 | Luxottica Srl | MULTILAYER FILM DEPICTING A BIDIMENSIONAL COLORED IMAGE ONLY VISIBLE THROUGH A POLARIZED FILTER AND PROCEDURE TO REALIZE IT. |
IT1392436B1 (en) | 2008-12-23 | 2012-03-09 | Luxottica Srl | MULTILAYER FILM DEPICTING A BIDIMENSIONAL COLORED IMAGE ONLY VISIBLE THROUGH A POLARIZED FILTER AND PROCEDURE TO REALIZE IT. |
DE102009005214A1 (en) | 2009-01-20 | 2010-07-22 | Rodenstock Gmbh | Automatic progressive lens design modification |
DE102009005206A1 (en) | 2009-01-20 | 2010-07-22 | Rodenstock Gmbh | Variable progressive lens design |
JP5710500B2 (en) | 2009-01-23 | 2015-04-30 | Rodenstock GmbH | Design control using polygonal design |
KR101670282B1 (en) | 2009-02-10 | 2016-10-28 | Thomson Licensing | Video matting based on foreground-background constraint propagation |
US8355079B2 (en) | 2009-02-10 | 2013-01-15 | Thomson Licensing | Temporally consistent caption detection on videos using a 3D spatiotemporal method |
US8605989B2 (en) | 2009-02-13 | 2013-12-10 | Cognitech, Inc. | Registration and comparison of three dimensional objects in facial imaging |
US8204301B2 (en) | 2009-02-25 | 2012-06-19 | Seiko Epson Corporation | Iterative data reweighting for balanced model learning |
US8260038B2 (en) | 2009-02-25 | 2012-09-04 | Seiko Epson Corporation | Subdivision weighting for robust object model fitting |
US8260039B2 (en) | 2009-02-25 | 2012-09-04 | Seiko Epson Corporation | Object model fitting using manifold constraints |
US8208717B2 (en) | 2009-02-25 | 2012-06-26 | Seiko Epson Corporation | Combining subcomponent models for object image modeling |
US8605942B2 (en) | 2009-02-26 | 2013-12-10 | Nikon Corporation | Subject tracking apparatus, imaging apparatus and subject tracking method |
US8860723B2 (en) | 2009-03-09 | 2014-10-14 | Donya Labs Ab | Bounded simplification of geometrical computer data |
USD616918S1 (en) | 2009-03-27 | 2010-06-01 | Luxottica Group S.P.A. | Eyeglass |
US8372319B2 (en) | 2009-06-25 | 2013-02-12 | Liguori Management | Ophthalmic eyewear with lenses cast into a frame and methods of fabrication |
US8290769B2 (en) | 2009-06-30 | 2012-10-16 | Museami, Inc. | Vocal and instrumental audio effects |
US20110001791A1 (en) | 2009-07-02 | 2011-01-06 | Emaze Imaging Techonolgies Ltd. | Method and system for generating and displaying a three-dimensional model of physical objects |
US8553973B2 (en) | 2009-07-07 | 2013-10-08 | University Of Basel | Modeling methods and systems |
EP2457214B1 (en) | 2009-07-20 | 2015-04-29 | Thomson Licensing | A method for detecting and adapting video processing for far-view scenes in sports video |
WO2011011059A1 (en) | 2009-07-21 | 2011-01-27 | Thomson Licensing | A trajectory-based method to detect and enhance a moving object in a video sequence |
WO2011013079A1 (en) | 2009-07-30 | 2011-02-03 | Primesense Ltd. | Depth mapping based on pattern matching and stereoscopic information |
ES2356457B1 (en) | 2009-07-31 | 2012-09-07 | Innovaciones Via Solar, S.L | ORIENTABLE LAUNCHER OF PYROTECHNIC SHELLS. |
US8803950B2 (en) | 2009-08-24 | 2014-08-12 | Samsung Electronics Co., Ltd. | Three-dimensional face capturing apparatus and method and computer-readable medium thereof |
US8537200B2 (en) | 2009-10-23 | 2013-09-17 | Qualcomm Incorporated | Depth map generation techniques for conversion of 2D video data to 3D video data |
CN102713728A (en) | 2009-11-13 | 2012-10-03 | 依视路国际集团(光学总公司) | A method for providing a spectacle ophthalmic lens by calculating or selecting a design |
JP5463866B2 (en) | 2009-11-16 | 2014-04-09 | Sony Corporation | Image processing apparatus, image processing method, and program |
JP5969389B2 (en) | 2009-12-14 | 2016-08-17 | Thomson Licensing | Object recognition video coding strategy |
WO2011075082A1 (en) | 2009-12-14 | 2011-06-23 | Agency For Science, Technology And Research | Method and system for single view image 3 d face synthesis |
KR20120099072A (en) | 2009-12-16 | 2012-09-06 | Thomson Licensing | Human interaction trajectory-based system |
GB2476968B (en) | 2010-01-15 | 2011-12-14 | Gareth Edwards | Golf grip training aid |
FR2955409B1 (en) * | 2010-01-18 | 2015-07-03 | Fittingbox | METHOD FOR INTEGRATING A VIRTUAL OBJECT IN REAL TIME VIDEO OR PHOTOGRAPHS |
KR101789845B1 (en) | 2010-01-22 | 2017-11-20 | Thomson Licensing | Methods and apparatus for sampling-based super resolution video encoding and decoding |
KR101791919B1 (en) | 2010-01-22 | 2017-11-02 | Thomson Licensing | Data pruning for video compression using example-based super-resolution |
WO2011090789A1 (en) | 2010-01-22 | 2011-07-28 | Thomson Licensing | Method and apparatus for video object segmentation |
WO2011097306A1 (en) | 2010-02-04 | 2011-08-11 | Sony Corporation | 2d to 3d image conversion based on image content |
IT1397832B1 (en) | 2010-02-08 | 2013-02-04 | Luxottica Srl | APPARATUS AND PROCEDURE FOR REALIZING A FRAME OF GLASSES IN THERMOPLASTIC MATERIAL. |
US20110211816A1 (en) | 2010-02-22 | 2011-09-01 | Richard Edwin Goedeken | Method and apparatus for synchronized workstation with two-dimensional and three-dimensional outputs |
CN102771113A (en) | 2010-02-24 | 2012-11-07 | 汤姆逊许可证公司 | Split screen for 3D |
US20120320153A1 (en) | 2010-02-25 | 2012-12-20 | Jesus Barcons-Palau | Disparity estimation for stereoscopic subtitling |
US20110227934A1 (en) | 2010-03-19 | 2011-09-22 | Microsoft Corporation | Architecture for Volume Rendering |
CA2793855A1 (en) | 2010-03-22 | 2011-09-29 | Luxottica Us Holdings Corporation | Ion beam assisted deposition of ophthalmic lens coatings |
US20110234591A1 (en) | 2010-03-26 | 2011-09-29 | Microsoft Corporation | Personalized Apparel and Accessories Inventory and Display |
US8194072B2 (en) | 2010-03-26 | 2012-06-05 | Mitsubishi Electric Research Laboratories, Inc. | Method for synthetically relighting images of objects |
US9959453B2 (en) | 2010-03-28 | 2018-05-01 | AR (ES) Technologies Ltd. | Methods and systems for three-dimensional rendering of a virtual augmented replica of a product image merged with a model image of a human-body feature |
WO2011127269A1 (en) | 2010-04-08 | 2011-10-13 | Delta Vidyo, Inc. | Remote gaze control system and method |
US8459792B2 (en) | 2010-04-26 | 2013-06-11 | Hal E. Wilson | Method and systems for measuring interpupillary distance |
DE102010018549B4 (en) | 2010-04-28 | 2022-08-18 | Rodenstock Gmbh | Computer-implemented method for calculating a spectacle lens taking into account the rotation of the eye, device for calculating or optimizing a spectacle lens, computer program product, storage medium, method for manufacturing a spectacle lens, device for manufacturing a spectacle lens and use of a spectacle lens |
DE102011009473B4 (en) | 2010-04-28 | 2022-03-17 | Rodenstock Gmbh | Computer-implemented method for calculating a spectacle lens with viewing-angle-dependent prescription data, device for calculating or optimizing a spectacle lens, computer program product, storage medium, method for manufacturing a spectacle lens, and use of a spectacle lens |
US9041765B2 (en) | 2010-05-12 | 2015-05-26 | Blue Jeans Network | Systems and methods for security and privacy controls for videoconferencing |
US8295589B2 (en) | 2010-05-20 | 2012-10-23 | Microsoft Corporation | Spatially registering user photographs |
DE102010021763A1 (en) | 2010-05-27 | 2011-12-01 | Carl Zeiss Vision Gmbh | Method for producing a spectacle lens and spectacle lens |
US8411092B2 (en) | 2010-06-14 | 2013-04-02 | Nintendo Co., Ltd. | 2D imposters for simplifying processing of plural animation objects in computer graphics generation |
US8861800B2 (en) | 2010-07-19 | 2014-10-14 | Carnegie Mellon University | Rapid 3D face reconstruction from a 2D image and methods using such rapid 3D face reconstruction |
KR101654777B1 (en) | 2010-07-19 | 2016-09-06 | Samsung Electronics Co., Ltd. | Apparatus and method for scalable encoding 3D mesh, and apparatus and method for scalable decoding 3D mesh |
US20120038665A1 (en) | 2010-08-14 | 2012-02-16 | H8it Inc. | Systems and methods for graphing user interactions through user generated content |
US9519396B2 (en) | 2010-09-28 | 2016-12-13 | Apple Inc. | Systems, methods, and computer-readable media for placing an asset on a three-dimensional model |
US20120256906A1 (en) | 2010-09-30 | 2012-10-11 | Trident Microsystems (Far East) Ltd. | System and method to render 3d images from a 2d source |
US8307560B2 (en) | 2010-10-08 | 2012-11-13 | Levi Strauss & Co. | Shaped fit sizing system |
FR2966038B1 (en) | 2010-10-14 | 2012-12-14 | Magellan Interoptic | METHOD FOR MEASURING THE PUPILLARY GAP OF A PERSON AND ASSOCIATED DEVICE |
WO2012051654A1 (en) | 2010-10-20 | 2012-04-26 | Luxottica Retail Australia Pty Ltd | An equipment testing apparatus |
WO2012054972A1 (en) | 2010-10-26 | 2012-05-03 | Luxottica Retail Australia Pty Ltd | A merchandise retailing structure |
WO2012054983A1 (en) | 2010-10-29 | 2012-05-03 | Luxottica Retail Australia Pty Ltd | Eyewear selection system |
TWI476729B (en) | 2010-11-26 | 2015-03-11 | Inst Information Industry | System for combining two-dimensional images and three-dimensional models, and computer program product thereof |
US9529939B2 (en) | 2010-12-16 | 2016-12-27 | Autodesk, Inc. | Surfacing algorithm for designing and manufacturing 3D models |
KR101796190B1 (en) | 2010-12-23 | 2017-11-13 | Electronics and Telecommunications Research Institute | Apparatus and method for generating digital clone |
US8533187B2 (en) | 2010-12-23 | 2013-09-10 | Google Inc. | Augmentation of place ranking using 3D model activity in an area |
US20120176380A1 (en) | 2011-01-11 | 2012-07-12 | Sen Wang | Forming 3d models using periodic illumination patterns |
US8447099B2 (en) | 2011-01-11 | 2013-05-21 | Eastman Kodak Company | Forming 3D models using two images |
US8861836B2 (en) | 2011-01-14 | 2014-10-14 | Sony Corporation | Methods and systems for 2D to 3D conversion from a portrait image |
US9129438B2 (en) | 2011-01-18 | 2015-09-08 | NedSense Loft B.V. | 3D modeling and rendering from 2D images |
US8885050B2 (en) | 2011-02-11 | 2014-11-11 | Dialogic (Us) Inc. | Video quality monitoring |
US8553956B2 (en) | 2011-02-28 | 2013-10-08 | Seiko Epson Corporation | 3D current reconstruction from 2D dense MCG images |
US20130088490A1 (en) * | 2011-04-04 | 2013-04-11 | Aaron Rasmussen | Method for eyewear fitting, recommendation, and customization using collision detection |
US9070208B2 (en) | 2011-05-27 | 2015-06-30 | Lucasfilm Entertainment Company Ltd. | Accelerated subsurface scattering determination for rendering 3D objects |
US8520075B2 (en) | 2011-06-02 | 2013-08-27 | Dialogic Inc. | Method and apparatus for reduced reference video quality measurement |
US8442308B2 (en) * | 2011-08-04 | 2013-05-14 | Cranial Technologies, Inc | Method and apparatus for preparing image representative data |
CN103765479A (en) | 2011-08-09 | 2014-04-30 | 英特尔公司 | Image-based multi-view 3D face generation |
US20130271451A1 (en) | 2011-08-09 | 2013-10-17 | Xiaofeng Tong | Parameterized 3d face generation |
KR101381439B1 (en) * | 2011-09-15 | 2014-04-04 | Toshiba Corporation | Face recognition apparatus, and face recognition method |
US8743051B1 (en) | 2011-09-20 | 2014-06-03 | Amazon Technologies, Inc. | Mirror detection-based device functionality |
AU2012340573A1 (en) * | 2011-11-21 | 2014-07-17 | Icheck Health Connection, Inc. | Video game to monitor retinal diseases |
EP2615583B1 (en) * | 2012-01-12 | 2016-04-20 | Alcatel Lucent | Method and arrangement for 3D model morphing |
US8766979B2 (en) | 2012-01-20 | 2014-07-01 | Vangogh Imaging, Inc. | Three dimensional data compression |
US8813378B2 (en) | 2012-05-17 | 2014-08-26 | Carol S. Grove | System and method for drafting garment patterns from photographs and style drawings |
KR20150116641A (en) * | 2014-04-08 | 2015-10-16 | Korea Institute of Science and Technology | Apparatus for recognizing image, method for recognizing image thereof, and method for generating face image thereof |
- 2013
- 2013-02-22 US US13/774,985 patent/US9311746B2/en active Active
- 2013-02-22 US US13/774,978 patent/US9235929B2/en active Active
- 2013-02-22 US US13/774,983 patent/US20130314401A1/en not_active Abandoned
- 2013-02-22 US US13/774,958 patent/US9378584B2/en active Active
- 2013-02-25 US US13/775,764 patent/US9208608B2/en active Active
- 2013-05-23 CA CA2874531A patent/CA2874531C/en active Active
- 2013-05-23 WO PCT/US2013/042517 patent/WO2013177459A1/en active Application Filing
- 2013-05-23 AU AU2013266187A patent/AU2013266187B2/en active Active
- 2013-05-23 WO PCT/US2013/042509 patent/WO2013177453A1/en active Application Filing
- 2013-05-23 EP EP13793686.0A patent/EP2852934B1/en active Active
- 2013-05-23 WO PCT/US2013/042520 patent/WO2013177462A1/en active Application Filing
- 2013-05-23 WO PCT/US2013/042514 patent/WO2013177457A1/en active Application Filing
- 2013-05-23 EP EP13793957.5A patent/EP2852935B1/en active Active
- 2013-05-23 WO PCT/US2013/042504 patent/WO2013177448A1/en active Application Filing
- 2015
- 2015-04-28 US US14/698,655 patent/US10147233B2/en active Active
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20010026272A1 (en) * | 2000-04-03 | 2001-10-04 | Avihay Feld | System and method for simulation of virtual wear articles on virtual models |
US20030110099A1 (en) * | 2001-12-11 | 2003-06-12 | Philips Electronics North America Corporation | Virtual wearing of clothes |
US20090310861A1 (en) * | 2005-10-31 | 2009-12-17 | Sony United Kingdom Limited | Image processing |
US20110029561A1 (en) * | 2009-07-31 | 2011-02-03 | Malcolm Slaney | Image similarity from disparate sources |
US20110040539A1 (en) * | 2009-08-12 | 2011-02-17 | Szymczyk Matthew | Providing a simulation of wearing items such as garments and/or accessories |
Also Published As
Publication number | Publication date |
---|---|
EP2852934A1 (en) | 2015-04-01 |
US20130314401A1 (en) | 2013-11-28 |
EP2852934A4 (en) | 2016-04-13 |
US10147233B2 (en) | 2018-12-04 |
US20130315487A1 (en) | 2013-11-28 |
WO2013177453A1 (en) | 2013-11-28 |
CA2874531C (en) | 2020-09-22 |
WO2013177462A1 (en) | 2013-11-28 |
US9208608B2 (en) | 2015-12-08 |
US20130314411A1 (en) | 2013-11-28 |
EP2852934B1 (en) | 2018-02-14 |
US9378584B2 (en) | 2016-06-28 |
US9311746B2 (en) | 2016-04-12 |
WO2013177457A1 (en) | 2013-11-28 |
WO2013177459A1 (en) | 2013-11-28 |
AU2013266187A1 (en) | 2014-12-18 |
AU2013266187B2 (en) | 2018-02-15 |
US20130314410A1 (en) | 2013-11-28 |
EP2852935A1 (en) | 2015-04-01 |
US20150235428A1 (en) | 2015-08-20 |
EP2852935A4 (en) | 2016-04-13 |
US20130314412A1 (en) | 2013-11-28 |
US9235929B2 (en) | 2016-01-12 |
CA2874531A1 (en) | 2013-11-28 |
EP2852935B1 (en) | 2020-08-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US9208608B2 (en) | Systems and methods for feature tracking | |
US9679212B2 (en) | Liveness testing methods and apparatuses and image processing methods and apparatuses | |
CN109284733B (en) | Shopping guide negative behavior monitoring method based on yolo and multitask convolutional neural network | |
WO2016124103A1 (en) | Picture detection method and device | |
US11423633B2 (en) | Image processing to detect a rectangular object | |
US9846808B2 (en) | Image integration search based on human visual pathway model | |
CN108664897A (en) | Bank slip recognition method, apparatus and storage medium | |
US9977950B2 (en) | Decoy-based matching system for facial recognition | |
WO2020029466A1 (en) | Image processing method and apparatus | |
WO2019019595A1 (en) | Image matching method, electronic device method, apparatus, electronic device and medium | |
US10133955B2 (en) | Systems and methods for object recognition based on human visual pathway | |
EP4024270A1 (en) | Gesture recognition method, electronic device, computer-readable storage medium, and chip | |
US10586335B2 (en) | Hand segmentation in a 3-dimensional image | |
WO2019119396A1 (en) | Facial expression recognition method and device | |
EP2786314A1 (en) | Method and device for following an object in a sequence of at least two images | |
CN109815823B (en) | Data processing method and related product | |
US20140267793A1 (en) | System and method for vehicle recognition in a dynamic setting | |
CN110378328A (en) | A kind of certificate image processing method and processing device | |
CN110188602A (en) | Face identification method and device in video | |
CN112991555B (en) | Data display method, device, equipment and storage medium | |
Prakas et al. | Fast and economical object tracking using Raspberry pi 3.0 | |
CN114972146A (en) | Image fusion method and device based on generation countermeasure type double-channel weight distribution | |
WO2024077785A1 (en) | Image recognition method and apparatus based on convolutional neural network model, and terminal device | |
CN112565601B (en) | Image processing method, image processing device, mobile terminal and storage medium | |
CN114170665A (en) | Image detection method, image detection device, electronic apparatus, and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 13794668 Country of ref document: EP Kind code of ref document: A1 |
NENP | Non-entry into the national phase |
Ref country code: DE |
122 | Ep: pct application non-entry in european phase |
Ref document number: 13794668 Country of ref document: EP Kind code of ref document: A1 |