WO2014085573A1 - Line depth augmentation system and method for conversion of 2d images to 3d images - Google Patents

Line depth augmentation system and method for conversion of 2d images to 3d images

Info

Publication number
WO2014085573A1
Authority
WO
WIPO (PCT)
Prior art keywords
depth
mask
dimensional
image
area
Application number
PCT/US2013/072208
Other languages
French (fr)
Inventor
Jared Sandrew
Jill Hunt
Barry Sandrew
Tony Baldridge
James Prola
Original Assignee
Legend3D, Inc.
Application filed by Legend3D, Inc.
Publication of WO2014085573A1

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20: Image signal generators
    • H04N 13/261: Image signal generators with monoscopic-to-stereoscopic image conversion


Abstract

A line depth augmentation system and method for conversion of 2D images to 3D images. Enables adding depth to regions by altering the depth of lines in the regions, for example in cell animation images or regions of limited color range. Eliminates creation of wireframe or other depth models and complex modeling of regions to match the depth of lines therein. Enables rapid conversion of two-dimensional images to three-dimensional images by enabling stereographers to quickly add or alter line depth without artifacts, for example for lines in monochrome regions. Embodiments may output a stereoscopic image pair with lines having the desired depth, or any other three-dimensional viewing enabled image, such as an anaglyph image. Although the lines may be of a different depth than the region they appear in, the human mind interprets the monochromatic region as having the depth associated with the line.

Description

LINE DEPTH AUGMENTATION SYSTEM AND METHOD FOR CONVERSION OF 2D IMAGES TO 3D IMAGES
BACKGROUND OF THE INVENTION
FIELD OF THE INVENTION
[001] One or more embodiments of the invention are related to the field of image analysis, image enhancement, and computer graphics processing of two-dimensional images into three-dimensional images. More particularly, but not by way of limitation, one or more embodiments of the invention enable a line depth augmentation system and method for conversion of 2D images to 3D images. Embodiments enable adding or augmenting lines with depth, for example in cell animation images or other images having limited color range, on a region-by-region basis in which the lines appear, for example without modeling the lines as part of a wireframe or other depth model. This enables rapid conversion of two-dimensional images to three-dimensional images by enabling stereographers to quickly add depth to lines in images without modeling, or without precisely matching the depth of a line to the region in which the line appears. Embodiments may output a stereoscopic image, e.g., a three-dimensional viewing enabled image such as an anaglyph image, or a pair of images for left and right eye viewing of different horizontally offset viewpoints, for example with lines having the desired depth.
DESCRIPTION OF THE RELATED ART
[002] Three-dimensional images include any type of image or images that provide different left and right eye views to encode depth. Some types of three-dimensional images require special glasses to ensure that the left eye viewpoint is shown to the left eye and the right eye viewpoint is shown to the right eye of an observer. Existing systems that are utilized to convert two-dimensional images to three-dimensional images typically require rotoscoping of images to create outlines of regions in the images. The rotoscoped regions are then individually depth adjusted by hand to produce a left and right eye image, a single anaglyph image, or another three-dimensionally viewable image, such as a polarized three-dimensional image viewed with left and right lenses having different polarization angles, for example.
[003] Current methods of adding depth to regions that include lines require stereographers to add depth to the region in which each line occurs. This is the case since all regions in an image that are to be depth modified are modeled in known systems, regardless of whether or not they contain limited colors that give no visual indication of depth. In other words, areas where the lines exist must generally be modeled as part of a three-dimensional shape, e.g., a sphere or other volume. Thus, time is required to model areas in cell animation that do not necessarily need to be precisely modeled, since the areas surrounding the lines may be of a single color, or of a limited color range, for example.
[004] In addition, typical methods for converting movies from 2D to 3D, which may include hundreds of thousands of frames, require a tremendous amount of labor for modeling, and generally utilize an iterative workflow for correcting errors. The iterative workflow includes rotoscoping or modeling objects in each frame, adding depth, and then rendering the frame into left and right viewpoints forming an anaglyph image or a left and right image pair. If there are errors in the edges of the masked objects, for example, then the typical workflow involves an "iteration", i.e., sending the frames back to the workgroup responsible for masking the objects (which can be in a country with cheap unskilled labor halfway around the world), after which the masks are sent to the workgroup responsible for rendering the images (again potentially in another country). Rendering is accomplished either by shifting input pixels left and right, for cell animation images for example, or by ray tracing the path of light through each pixel in the left and right images to simulate the light effects along each path, for example light bouncing off of or passing through objects, which is computationally extremely expensive. After rendering, the rendered image pair is sent back to the quality assurance group. It is not uncommon in this workflow environment for many iterations of a complicated frame to take place. This is known as a "throw it over the fence" workflow, since the different workgroups work independently to minimize their current workload and not as a team with overall efficiency in mind. With hundreds of thousands of frames in a movie, the amount of time that it takes to iterate back through frames containing artifacts can become high, causing delays in the overall project. Even if the re-rendering process takes place locally, the amount of time to re-render or ray-trace all of the images of a scene can cause significant processing and hence delays on the order of at least hours. Each iteration may take a long period of time to complete, as the work may be performed by groups in disparate locations having shifted work hours. Eliminating much of the modeling of objects that do not need to be modeled, due to their lack of visual indications of depth as is generally the case in cell animation, would provide a huge savings in wall-time, or end-to-end time, that a conversion project takes, thereby increasing profits and minimizing the workforce needed to implement the workflow.
[005] Hence there is a need for a line depth augmentation system and method for conversion of 2D images to 3D.
BRIEF SUMMARY OF THE INVENTION
[006] Embodiments of the invention accept input, for example from a person responsible for masking, to mask lines that may be depth augmented using an embodiment of the system. In one or more embodiments, the mask associated with a line may be automatically or semi-automatically created by the system. The system for example may accept an input for the mask, search for high contrast areas, and build masks iteratively, pixel by pixel, through a fill algorithm or operation. In one or more embodiments, the mask may be made slightly larger than the line, since the color surrounding the line is generally monochrome. Hence, when depth is added to the line there are no artifacts on the edges, since the left and right offsets of the area surrounding the line place image data of the same color in a horizontal offset over the same color. If the person who is responsible for masking creates masks first, the masked lines then may be viewed by a stereographer who indicates the depths at which to place the lines that exist in a two-dimensional image to convert the image to a three-dimensional image. Alternatively or in combination, the person responsible for masking and the stereographer may work in either order or together on a particular image, in parallel or serially as desired. In one or more embodiments of the invention, the input for the depth of a line is accepted by the system and displayed at the indicated depth on the three-dimensional version of the two-dimensional input image. In one or more embodiments, the depth may be specified using a graphical input device, such as a graphics drawing tablet. In other embodiments or in combination, depths may be input via a keyboard, or via voice commands while drawing annotation information or symbols for example. After the depth is associated with the respective line, the line is offset by the proper horizontal left and right amount to make a stereoscopic image, or pair of images, that enables three-dimensional viewing. The line may be adjusted in depth without altering the shape or depth of the region in which the line occurs, since there are generally no visual indications of depth in a cell animation region having a single color or small variation of color, for example.
[007] Embodiments of the invention make this process extremely intuitive, as the depth to apply to lines is input easily, avoiding modeling and the associated labor required to model objects that have lines. Since animated images generally have no visual color variations on monochrome characters or homogenous color portions of objects or regions, the lines may be depth adjusted without applying varying depth to the objects to match the depth applied to the lines in those objects, which saves a great deal of time in specifying and adjusting the depth of lines, for example without complex modeling or re-rendering.
[008] When rendering an image pair, left and right viewpoint images and left and right absolute translation files, or a single relative translation file, may be generated and/or utilized by one or more embodiments of the invention. The translation files specify the pixel offsets for each source pixel in the original 2D image, in relative or absolute form respectively. These files are generally related to an alpha mask for each layer, for example a layer for an actress, a layer for a door, a layer for a background, etc. These translation files, or maps, are passed from the depth augmentation group that renders 3D images to the quality assurance workgroup. This allows the quality assurance workgroup (or another workgroup such as the depth augmentation group) to perform real-time editing of 3D images without re-rendering, for example to alter layers/colors/masks and/or remove artifacts such as masking errors, without the delays associated with processing time/re-rendering and/or an iterative workflow that requires such re-rendering or sending the masks back to the mask group for rework, wherein the mask group may be in a third world country with unskilled labor on the other side of the globe. In addition, when rendering the left and right images, i.e., 3D images, the Z depth of regions within the image, such as actors for example, may also be passed along with the alpha mask to the quality assurance group, who may then adjust depth as well without re-rendering with the original rendering software. This may be performed, for example, with generated missing background data from any layer so as to allow "downstream" real-time editing without re-rendering or ray-tracing.
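By way of a non-limiting illustration, the effect of a relative translation map can be sketched as follows; this is a minimal sketch assuming the map is held as a per-pixel array of signed horizontal offsets, and all function and parameter names are illustrative rather than part of the original disclosure:

    import numpy as np

    def apply_relative_translation(image, offset_map, direction=+1):
        """Forward-map each source pixel horizontally by its per-pixel
        offset.  image: H x W x 3 frame; offset_map: H x W signed offsets
        in pixels; direction: +1 for the right-eye view, -1 for the left.
        A relative map encodes one offset reused (negated) per eye."""
        h, w = offset_map.shape
        out = np.zeros_like(image)
        ys, xs = np.indices((h, w))
        # clip target columns to the frame so border pixels stay in bounds
        tx = np.clip(xs + direction * offset_map.astype(int), 0, w - 1)
        out[ys, tx] = image[ys, xs]
        return out

Forward-mapping exposes previously occluded pixels, which is one reason the generated missing background data mentioned above is useful: it can be composited into any holes left behind without a re-render.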
[009] Quality assurance may give feedback to the masking group or depth augmentation group, or to individuals within them, so that these individuals may be instructed to produce work product as desired for the given project, without waiting for, or requiring, the upstream groups to rework anything for the current project. This allows for feedback yet eliminates the iterative delays involved with sending work product back for rework and waiting for the reworked work product. Elimination of iterations such as this provides a huge savings in wall-time, or end-to-end time, that a conversion project takes, thereby increasing profits and minimizing the workforce needed to implement the workflow.
[0010] In summary, embodiments of the invention minimize the time to augment lines with depth, for example for cell animation, by accepting or automatically generating masks for lines so as to avoid modeling objects, or to minimize modeling of objects, in which the lines appear. Embodiments of the system also save time by eliminating re-rendering by other work groups, and allow depth to be correctly input locally to a work group.
BRIEF DESCRIPTION OF THE DRAWINGS
[0011] Figure 1 shows an architectural view of an embodiment of the system.
[0012] Figure 2 shows an input two-dimensional image having lines at which depth is to be applied without requiring precise modeling of the object in which the lines appear.
[0013] Figure 3 shows a masked version of the two-dimensional image showing masked lines in an object to apply depth to and various regions of the object in which depth can be applied if desired.
[0014] Figure 4 shows annotations for desired depth of one of the lines shown in Figure 3, wherein the annotations may be viewed in three-dimensional depth with anaglyph glasses.
[0015] Figure 5 shows the input image converted to three-dimensional image in anaglyph format, which may be viewed in three-dimensional depth with anaglyph glasses.
[0016] Figure 6 illustrates a close up view of a portion of a line on the region wherein the line does not match the depth of the region but rather implies a depth for the human mind to interpret for the region.
[0017] Figure 7 illustrates a side view of the depth of the region and line shown in Figures 4-6.
[0018] Figure 8 illustrates an example of a style guide for use in the creation of lines and other regions.
DETAILED DESCRIPTION OF THE INVENTION
[0019] Figure 1 shows an architectural view of an embodiment of the system 100. As illustrated, computer 101 is coupled with any combination of input devices, including graphics tablet 102a, keyboard 102b, mouse 102c and/or microphone 102d. Computer 101 may obtain a two-dimensional image and display the image on screen 103. The image may be obtained from any local or remote memory device associated with or accessible by the computer, for example. Screen 103 may display a single image that may be viewed at depth, for example as an anaglyph using two different colors shifted left and right, which may be viewed with glasses having lenses of two different colors, e.g., red and blue, to view the image as a three-dimensional image. In general, the two-dimensional image may have multiple lines within regions that are to be converted to different depths, for example first line 151a, e.g., an eye or eyebrow, and second line 151b, a nose, for ease of illustration. Any lines representing any object may be depth augmented using embodiments of the invention.
[0020] Embodiments of the system accept a mask having an area associated with a line in the two-dimensional source image, for example in a monochrome region or a region that varies little in color, such as line 151a or line 151b in the cartoon character shown as a face; such a region may be of a monochrome color, or limited color range, that does not provide any visual indication of depth or depth variation. The methods for building the mask are detailed further below. Embodiments of the system also accept a depth associated with the line or lines within the region, for example a monochrome region, in the two-dimensional source image via an input device coupled with the computer. Embodiments apply the depth to the area of the two-dimensional image defined by the mask to create a three-dimensional image, for example without altering the depth of the remainder of the monochrome region where said mask does not occur. Other embodiments enable accepting a second depth for at least a portion of the mask and changing the depth of the output image without any modeling of the region in which the line occurs and without re-rendering or re-ray-tracing.
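As a non-limiting sketch of this operation, the following assumes an H x W x 3 image and a boolean mask slightly larger than the line, and displaces only the masked area to build the two viewpoints; the names and the numpy representation are illustrative only, and the sign convention (nearer versus farther) is a per-studio choice as noted later:

    import numpy as np

    def shift_masked_line(image, mask, depth_px):
        """Displace only the masked line area horizontally to create a
        left/right viewpoint pair, leaving the rest of the region at its
        original depth.  Assumes the mask margin around the line is at
        least depth_px wide, so the shifted patch (line plus same-color
        surround) fully covers the line's original footprint and no edge
        artifacts remain on the near-monochrome background."""
        w = image.shape[1]
        left, right = image.copy(), image.copy()
        ys, xs = np.nonzero(mask)
        left[ys, np.clip(xs - depth_px, 0, w - 1)] = image[ys, xs]
        right[ys, np.clip(xs + depth_px, 0, w - 1)] = image[ys, xs]
        return left, right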
[0021] Embodiments accept the mask or the depth or both by accepting an input from the graphics tablet, the mouse, the keyboard, or the microphone, or any combination thereof. In one or more embodiments the system accepts an input location within the two-dimensional source image via one of the input devices and obtains the color of the pixel at the input location. The area associated with the mask is increased to include all of the contiguous pixels that are within a predetermined range of the color of that pixel, analogous to a paint fill operation for example. The predetermined range of the color may be set to a predetermined percentage of a volume of a color space, or a predetermined threshold of luminance associated with the color, or set to zero to exactly match the selected pixel's color as a seed for the mask, for example. Embodiments optionally may increase the size of the mask by a predetermined amount, for example a predetermined number of pixels, a percentage of the thickness of the line, or any other method. This provides for a more robust or tolerant depth setting operation that results in fewer artifacts, since the region surrounding the line is primarily or totally of one color. In other words, shifting the line and a small portion of the single color around the line left and right to create left and right viewing angles causes the line and the single color near the line to be shifted over the same color when altering the depth of the line. Thus, the area surrounding the line does not have to be remodeled or re-rendered to change the depth of the line, and since there are no visual indications of depth in the surrounding area the lines potentially float above the area; the human eye cannot detect this, however, since the area surrounding the line is of one color. Hence, a great deal of modeling is bypassed, with the depth of an area interpreted by the human mind as the depth of the line at its differing depth on the single color area.
[0022] One or more embodiments of the system accept the depth through analysis of motion data obtained from the graphics tablet or mouse, i.e., if the mouse or graphics tablet is sending data indicative of motion away from the user, then the depth may increase, or by moving the mouse or pen on the graphics tablet closer to the user, the depth may decrease. Alternatively or in combination, the acceptance of depth may be performed by parsing alphanumeric data from the keyboard to determine the depth. This enables a user to type in a positive or negative number and set the depth to apply to the area associated with the mask through numeric input, for example. In one or more embodiments an annotation file may be associated with the two-dimensional source image, and that image may be analyzed for script or text via optical character recognition software to obtain depth values. Alternatively or in combination, voice recognition software may be utilized to input values for depth that are accepted by the system, for example with a particular mask selected via the mouse or graphics tablet, etc. Positive or negative numbers may be utilized to indicate further or nearer depth depending on the particular studio or organization that is adding depth, and embodiments of the invention may utilize any scale, range, or units of measure to indicate depth.
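The paint-fill mask construction described above can be sketched as follows, assuming an RGB image, a clicked seed location, a tolerance corresponding to the predetermined color range (zero for an exact match), and a growth amount in pixels; everything here is illustrative, not the patent's required implementation:

    from collections import deque
    import numpy as np

    def build_line_mask(image, seed, tol=0, grow_px=2):
        """Flood-fill a mask from a (y, x) seed pixel over contiguous
        pixels whose color is within tol of the seed color, then enlarge
        the mask by grow_px so it overlaps the same-color surround."""
        h, w = image.shape[:2]
        seed_color = image[seed].astype(int)
        mask = np.zeros((h, w), dtype=bool)
        queue = deque([seed])
        while queue:
            y, x = queue.popleft()
            if mask[y, x]:
                continue
            if np.abs(image[y, x].astype(int) - seed_color).max() > tol:
                continue                        # outside the color range
            mask[y, x] = True
            for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    queue.append((ny, nx))
        for _ in range(grow_px):                # simple binary dilation
            g = mask.copy()
            mask[1:, :] |= g[:-1, :]
            mask[:-1, :] |= g[1:, :]
            mask[:, 1:] |= g[:, :-1]
            mask[:, :-1] |= g[:, 1:]
        return mask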
[0023] Any method of creating the resulting three-dimensional image may be utilized, including generating a pair of images that includes one image each for viewing with the left and right eye respectively. Alternatively, the system may generate a single anaglyph image for each input two-dimensional image for viewing with glasses having lenses of two different colors. In addition, any other type of single image that encodes left and right eye information, such as a polarized image, is in keeping with the spirit of the invention. Regardless of the type of output image technology utilized to view the image, the area of the mask is displaced left and right relative to its original location in the two-dimensional source image, based on the depth, to create the three-dimensional image.
[0024] Figure 2 shows an input two-dimensional image. As shown, the image contains a character having a monochrome region, e.g., the face of the character, with lines, here an eyebrow and a nose for example. Figure 3 shows a masked version of the two-dimensional image showing regions within each object to optionally apply depth or shapes to. This enables a basic application of depth to an area without highly complex modeling to match the depth shape of the lines in the image. The underlying source colors associated with the particular regions 360a and 360b are of one color or of a limited color range (see Figure 2), although the region masks 360a-b are shown in different colors to indicate that potentially different depths or depth shapes may be applied. The underlying source color for regions 360c-e is also one color (see Figure 2). Since regions 360c-e are masks for underlying areas of one color, or at least close to it, and the lines to be depth adjusted occur thereon, left and right translations of the lines may be performed by masking the lines and generously translating around the lines with slightly enlarged masks to ensure no artifacts occur. For example, in one or more embodiments, mask 351a for the underlying line associated with the eyebrow and mask 351b for the underlying line associated with the nose are optionally of a predetermined thickness greater than the underlying line. Moving the lines, with a portion of the surrounding color obtained by the slightly larger masks 351a-b, to the left and right to provide two viewpoints for stereoscopic viewing occurs without complex matching of the line shape to the underlying shape of the region, and without artifacts, since the color surrounding the shifted foreground line is displaced onto the same color when generating a left or right eye translated viewpoint. This allows a different depth, or a contour that varies in depth, to be applied to the lines to indicate depth for the human mind to fill in for an object that otherwise has no visual indication of depth. In other words, the mask region 360d for example may have a basic depth contour applied, but has no color or shading variations, and hence the lines on that area may float over it to persuade the human mind that the underlying region actually has depth. Again, minimal application of depth to the background, with depth applied to the lines on that limited color range region, provides a visual indication of depth to the human mind without requiring complex modeling of the regions that contain the lines.
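For the anaglyph output mentioned above, a minimal sketch (assuming RGB channel order and red/cyan glasses; the composition rule and names are illustrative) is:

    import numpy as np

    def anaglyph(left, right):
        """Compose a red/cyan anaglyph: the red channel comes from the
        left-eye image, green and blue from the right-eye image."""
        out = right.copy()
        out[..., 0] = left[..., 0]
        return out

Using the earlier sketches, the 10 pixel shift annotated in Figure 4 could be realized as, e.g., left, right = shift_masked_line(image, build_line_mask(image, seed, grow_px=10), depth_px=10), followed by anaglyph(left, right); growing the mask by at least the shift amount preserves the no-artifact property described above.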
[0025] In one or more embodiments, the lines may be animated over time, for example enlarged, morphed or otherwise reshaped, moved, or given a varying color, to enhance the expression of the line. This saves a tremendous amount of time, since the background region or regions that share common boundaries are not required to be remodeled or rendered again simply to change the apparent, human-recognizable depth of the region: the depth of the region can be implied to the human mind by changing the depth profile of the line in the region. Hence, the lines may float over the underlying region, and the human mind does not interpret the region and line as having distinct depths. In addition, only the lines that need to be depth altered to indicate depth for a region are masked and/or depth modified.
[0026] Another great benefit of one or more embodiments is that in traditional cell animation, lines drawn by hand move slightly from one frame to the next. In this scenario, using a mask slightly larger than the line requires less tweening, mask reshaping, or mask moving to follow the exact shape of the line. Hence, a slightly larger mask enables far less work in tracking the mask through a scene.
[0027] In one or more embodiments, lines may be automatically selected by the system based on contrast, luminance, color, or any other visual component or characteristic, such as the thickness of a color area. For example, in one or more embodiments, line isolation processing may include a dilation of color and/or a blur of the image, followed by a subtraction of the result from the original image to detect lines, which works well in cell animation where objects may be of a homogenous color. One or more embodiments may highlight all detected lines, or lines over a certain length, thickness, or curvature, or having any other feature associated with the lines. Embodiments may show the suggested lines for depth application by highlighting a suggested mask, inverting a color, or in any other manner. In one or more embodiments, the lines selected may have boundaries over a certain distance from a different background color, so that lines further away from the edge of a character are suggested for depth enhancement, for example. Embodiments of the invention accept an input to the suggested mask, or accept inputs for drawing masks as well.
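A minimal sketch of the dilate-and-subtract line isolation described above, assuming dark ink lines on flat-color RGB artwork (the dilation radius and threshold are arbitrary illustrative values, not parameters from the disclosure):

    import numpy as np

    def isolate_lines(image, radius=3, thresh=40):
        """Detect thin dark lines: a max-filter (color dilation) erases
        them on a flat background; subtracting the original then leaves
        large differences only at line pixels."""
        img = image.astype(int)
        dilated = img.copy()
        for _ in range(radius):                 # one dilation step per pass
            d = dilated.copy()
            dilated[1:, :] = np.maximum(dilated[1:, :], d[:-1, :])
            dilated[:-1, :] = np.maximum(dilated[:-1, :], d[1:, :])
            dilated[:, 1:] = np.maximum(dilated[:, 1:], d[:, :-1])
            dilated[:, :-1] = np.maximum(dilated[:, :-1], d[:, 1:])
        diff = np.abs(dilated - img).max(axis=-1)
        return diff > thresh                    # boolean line mask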
[0028] Figure 4 shows optional annotations for desired depth, placed at a specific depth for general messages or at the depth of the desired region for example, wherein the annotations may be viewed in three-dimensional depth with anaglyph glasses. As shown, the two-dimensional image is still in two dimensions, i.e., the depth across the entire image does not vary. In other words, the two-dimensional image along with the three-dimensional annotations specifies the depths to apply to particular areas or regions, and is used as an input to the depth augmentation group for example. The depth group then moves the associated regions in depth to match the annotations, in an intuitive manner that is extremely fast and provides a built-in sanity check for depth. Using this method, it is inherently verifiable whether the depth of a region is at or about at the depth of the associated annotation. As shown, line 451a, which represents an eyebrow on a monochrome region, is shown without an associated annotation, while line 451b, which represents a nose, is shown with an annotation that suggests a 10 pixel forward depth shift. In one or more embodiments, line 451a may float over the region in which the line appears; however, the human mind interprets the area where the line occurs as having the depth contour associated with the line, even if the region does not have this contour or depth shape. This is because the limited color range of the region in which the line appears otherwise gives no visual indication as to the depth of the region. Hence, the human mind fills in the depth of the region according to the depth of the line.
[0029] Figure 5 shows the input image converted to a three-dimensional image in anaglyph format, which may be viewed in three-dimensional depth with anaglyph glasses. As shown, the individual lines 151a and 151b shown in Figure 2, having corresponding areas of masks 351a and 351b respectively as shown in Figure 3, are shifted in depth to produce an output three-dimensional image. The lines are generally not at the depth of the region in which they appear, but are rather the elements in the figure that give the appearance of depth to the region in which the lines appear. This enables simple modeling of depth for the regions in which the lines appear, as opposed to the creation of a 3D wire model or other sophisticated volumetric model to be applied to each region. As the cell animation moves around from frame to frame, for example based on individual artistic hand drawn lines, the generous masking applied to the lines minimizes the tracking of the lines from frame to frame while still providing the appearance of depth applied to the line, and thus implied to the human mind for the whole region.
[0030] Figure 6 illustrates a close up view of a portion of a line on the region, wherein the line does not match the depth of the region but rather implies a depth for the human mind to interpret for the region. As shown in the close up, line 451a is at a depth that does not necessarily correspond to the shape of, or the depth of, the region in which the line resides.
[0031] Figure 7 illustrates a side view of the depth of the region and line shown in Figures 4-6, wherein line 451a has had depth added to portions of the line to offset line 451a from regions 360f and 360g in which the line appears. As shown by dotted depth indicators 701, included for illustrative purposes, the depth of line 451a has been offset from regions 360f and 360g in order to give regions 360f and 360g an implied depth without requiring modeling of the underlying region with wireframe models, etc. This method may be applied to a line that lies in one region or, as described here for exemplary purposes, in multiple regions. This enables an extremely fast method for providing one or more regions with implied depth without requiring extensive depth modeling for the region or regions. For example, region 360f may have a simple curve or other depth offset applied, as is shown in the exemplary side view of Figure 7, while the depth of line 451a is adjusted with a slightly larger mask than the line so as to imply depth to the region. Given underlying regions of a single or nearly single color, any masking errors are basically undetectable to the human eye, as the shifted over-masked area falls on the same color background when shifted horizontally for left and right eye stereoscopic viewing. The human eye believes the entire region to be at the depth contour provided by the line; however, the amount of labor required to add depth to the image is greatly lowered with embodiments of the invention. See also Figure 6 with anaglyph glasses to view the lines at a different depth than the background.
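To illustrate the side view of Figure 7, the region's simple curve and the line's forward offset can be combined into one per-pixel offset map, compatible with the translation-map sketch given earlier; the parabolic contour and all names here are hypothetical choices, not the patent's prescription:

    import numpy as np

    def region_and_line_offsets(region_mask, line_mask,
                                region_max_px=4, line_px=10):
        """Build a per-pixel horizontal offset map: a gentle parabolic
        bulge across the region's width (its simple-curve contour), plus
        a constant forward shift for the line floating in front of it."""
        h, w = region_mask.shape
        offsets = np.zeros((h, w))
        xs = np.indices((h, w))[1]
        cols = np.nonzero(region_mask.any(axis=0))[0]
        if cols.size:
            mid = (cols.min() + cols.max()) / 2.0
            half = max((cols.max() - cols.min()) / 2.0, 1.0)
            bulge = region_max_px * (1.0 - ((xs - mid) / half) ** 2)
            offsets[region_mask] = bulge[region_mask]
        offsets[line_mask] = line_px            # line sits in front of curve
        return offsets

The resulting map could then feed the apply_relative_translation sketch above to produce the two viewpoints without any wireframe model of the region.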
[0032] Figure 8 illustrates an example of a style guide for use in the creation of lines and other regions, e.g., the eyelashes, the mole, the chin, etc. As shown, the instructions 801 for augmenting lines with depth are given, for example, in a key frame. The instructions are then utilized to focus work on the lines to which depth may be added, in order to minimize work, for example to minimize modeling of the regions having lines. This enables consistent work product to be generated for a particular scene and further lowers the amount of effort required to augment a scene with depth.
[0033] As illustrated, embodiments of the invention minimize the amount of work required to generate depth associated with lines for areas without requiring complex modeling of the areas and enable the lines to be altered without complex remodeling or re-rendering.
[0034] While the invention herein disclosed has been described by means of specific embodiments and applications thereof, numerous modifications and variations could be made thereto by those skilled in the art without departing from the scope of the invention set forth in the claims.

Claims

CLAIMS
What is claimed is:
1. A three-dimensional annotation method for conversion of two-dimensional images to three-dimensional images comprising:
obtaining a two-dimensional source image;
displaying said two-dimensional source image on a screen associated with a first computer;
accepting a mask having an area associated with a line in a monochrome region in said two-dimensional source image;
accepting a depth associated with said line within said monochrome region in said two-dimensional source image via an input device coupled with said first computer;
applying said depth to said area of said mask in said two-dimensional image to create a three-dimensional image without altering a depth of a remainder of the monochrome region where said mask does not occur.
2. The method of claim 1 wherein said input device comprises a graphics tablet, mouse, keyboard or microphone or any combination thereof and wherein said accepting said mask or accepting said depth or both, comprises accepting input from said graphics tablet, or said mouse, or said keyboard, or said microphone or any combination thereof.
3. The method of claim 1 wherein said accepting said mask further comprises:
accepting an input location within said two-dimensional source image;
obtaining a color of a pixel at said input location;
increasing said area of said mask to include all contiguous pixels that are within a predetermined range of said color of said pixel.
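Purely as an illustrative sketch of the mask growth recited in claim 3, the following Python fragment flood-fills from an input location, keeping contiguous pixels within a per-channel tolerance of the seed color; setting the tolerance to zero yields the exact-color behavior of claim 6. The names build_line_mask, seed and tolerance are hypothetical.

```python
import numpy as np
from collections import deque

def build_line_mask(image, seed, tolerance):
    """Grow a mask from `seed`, including every contiguous pixel whose
    color lies within `tolerance` of the seed color (per channel)."""
    h, w = image.shape[:2]
    seed_color = image[seed].astype(int)
    mask = np.zeros((h, w), dtype=bool)
    queue = deque([seed])
    while queue:
        y, x = queue.popleft()
        if mask[y, x]:
            continue  # already accepted
        if np.abs(image[y, x].astype(int) - seed_color).max() > tolerance:
            continue  # outside the predetermined color range
        mask[y, x] = True
        # 4-connected neighbours keep the grown area contiguous
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if 0 <= ny < h and 0 <= nx < w:
                queue.append((ny, nx))
    return mask
```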
4. The method of claim 3 further comprising setting said predetermined range of said color to a predetermined percentage of a volume of a color space.
5. The method of claim 3 further comprising setting said predetermined range of said color to a predetermined threshold of luminance associated with said color.
6. The method of claim 3 further comprising setting said predetermined range of said color to zero so that only contiguous pixels having said color are included in said area of said mask.
7. The method of claim 3 wherein said increasing said area of said mask further comprises:
increasing said area of said mask by a predetermined number of pixels.
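A minimal sketch of the enlargement step in claim 7, assuming SciPy is available; over-masking by a few pixels is what lets the shifted edges fall on same-color background, as noted in paragraph [0031]. The name over_mask is hypothetical.

```python
from scipy import ndimage

def over_mask(mask, pixels):
    """Enlarge a boolean line mask by a predetermined number of
    pixels via binary dilation (illustrative sketch)."""
    return ndimage.binary_dilation(mask, iterations=pixels)
```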
8. The method of claim 2 wherein obtaining said depth comprises analyzing motion data obtained from said graphics tablet or mouse, or parsing alphanumeric data from said keyboard to determine said depth.
9. The method of claim 1 wherein obtaining said depth comprises analyzing annotations associated with said mask.
10. The method of claim 1 wherein obtaining said depth comprises utilizing voice recognition software.
11. The method of claim 1 wherein said applying said depth to said area of said mask in said two-dimensional image to create a three-dimensional image comprises generating a pair of images comprising images for viewing with a left eye and a right eye, respectively.
12. The method of claim 1 wherein said applying said depth to said area of said mask in said two-dimensional image to create a three-dimensional image comprises generating an anaglyph image.
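For claim 12, one common way to realize an anaglyph image, shown here only as an illustrative sketch, is to take the red channel from the left-eye view and the green and blue channels from the right-eye view; to_anaglyph is a hypothetical name, and RGB channel order and NumPy arrays are assumed.

```python
def to_anaglyph(left, right):
    """Compose a red/cyan anaglyph from a left/right image pair
    (both HxWx3 RGB NumPy arrays of identical shape)."""
    out = right.copy()          # green and blue from the right-eye view
    out[..., 0] = left[..., 0]  # red from the left-eye view
    return out
```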
13. The method of claim 1 wherein said applying said depth to said area of said mask in said two-dimensional image to create a three-dimensional image comprises generating a polarized image.
14. The method of claim 1 wherein said applying said depth to said area of said mask in said two-dimensional image to create a three-dimensional image comprises generating a single image capable of displaying differing depths.
15. The method of claim 1 further comprising:
displacing at least said area of said mask in said two-dimensional source image left and right based on said depth to create said three-dimensional image.
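Tying the sketches above together, a hypothetical end-to-end use corresponding to claims 1, 3, 7, 12 and 15 might read as follows; the blank test image and every numeric value are placeholders, not values from the disclosure.

```python
import numpy as np

# Stand-in source frame; a real pipeline would load a scanned cell image.
image = np.zeros((480, 640, 3), dtype=np.uint8)

mask = build_line_mask(image, seed=(240, 320), tolerance=8)  # claim 3
mask = over_mask(mask, pixels=2)                             # claim 7
left, right = shift_masked_pixels(image, mask, depth_px=4)   # claim 15
anaglyph = to_anaglyph(left, right)                          # claim 12
```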
16. A three-dimensional annotation method for conversion of two-dimensional images to three-dimensional images comprising:
obtaining a two-dimensional source image;
displaying said two-dimensional source image on a screen associated with a first computer;
accepting a mask having an area associated with a line in a monochrome region in said two-dimensional source image via an input device coupled with said first computer, wherein said input device comprises any combination of graphics tablet, mouse, keyboard or microphone;
accepting a depth associated with said line within said monochrome region in said two-dimensional source image via an input device coupled with said first computer, wherein said accepting said depth comprises accepting input from said graphics tablet, or said mouse, or said keyboard, or said microphone, or analyzing annotations associated with said mask, or any combination thereof;
applying said depth to said area of said mask in said two-dimensional image to create a three-dimensional image without altering a depth of a remainder of the monochrome region where said mask does not occur.
17. The method of claim 16 wherein said accepting said mask further comprises:
accepting an input location within said two-dimensional source image;
obtaining a color of a pixel at said input location;
increasing said area of said mask to include all contiguous pixels that are within a predetermined range of said color of said pixel.
18. The method of claim 17 further comprising setting said predetermined range of said color to a predetermined percentage of a volume of a color space or a predetermined threshold of luminance associated with said color.
19. The method of claim 17 wherein said increasing said area of said mask further comprises:
increasing said area of said mask by a predetermined number of pixels.
20. The method of claim 16 further comprising:
displacing at least said area of said mask in said two-dimensional source image left and right based on said depth to create said three-dimensional image.
PCT/US2013/072208 2012-11-27 2013-11-27 Line depth augmentation system and method for conversion of 2d images to 3d images WO2014085573A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US13/686,907 2012-11-27
US13/686,907 US9007365B2 (en) 2012-11-27 2012-11-27 Line depth augmentation system and method for conversion of 2D images to 3D images

Publications (1)

Publication Number Publication Date
WO2014085573A1 (en) 2014-06-05

Family

ID=50772872

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/072208 WO2014085573A1 (en) 2012-11-27 2013-11-27 Line depth augmentation system and method for conversion of 2d images to 3d images

Country Status (2)

Country Link
US (1) US9007365B2 (en)
WO (1) WO2014085573A1 (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106033418B (en) 2015-03-10 2020-01-31 阿里巴巴集团控股有限公司 Voice adding and playing method and device, and picture classifying and retrieving method and device
US10735707B2 (en) * 2017-08-15 2020-08-04 International Business Machines Corporation Generating three-dimensional imagery
US11928833B2 (en) * 2019-04-30 2024-03-12 Hewlett-Packard Development Company, L.P. Generation of points identifying displaced portions of an image

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090002368A1 (en) * 2007-06-26 2009-01-01 Nokia Corporation Method, apparatus and a computer program product for utilizing a graphical processing unit to provide depth information for autostereoscopic display
US20090219383A1 (en) * 2007-12-21 2009-09-03 Charles Gregory Passmore Image depth augmentation system and method
US20110188773A1 (en) * 2010-02-04 2011-08-04 Jianing Wei Fast Depth Map Generation for 2D to 3D Conversion
KR20120095059A (en) * 2011-02-18 2012-08-28 (주)스튜디오 로프트 Method of converting 2d images to 3d images



Also Published As

Publication number Publication date
US9007365B2 (en) 2015-04-14
US20140146037A1 (en) 2014-05-29


Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application
    Ref document number: 13857868; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 EP: PCT application non-entry in European phase
    Ref document number: 13857868; Country of ref document: EP; Kind code of ref document: A1