US20060176319A1 - Image editing method and image editing apparatus - Google Patents

Image editing method and image editing apparatus

Info

Publication number
US20060176319A1
US20060176319A1 (Application No. US 11/396,466)
Authority
US
United States
Prior art keywords
image
composite image
display
original
display region
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/396,466
Inventor
Takashi Ida
Osamu Hori
Nobuyuki Matsumoto
Hidenori Takeshima
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Individual
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual filed Critical Individual
Priority to US11/396,466 priority Critical patent/US20060176319A1/en
Publication of US20060176319A1 publication Critical patent/US20060176319A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/387Composing, repositioning or otherwise geometrically modifying originals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T11/002D [Two Dimensional] image generation
    • G06T11/60Editing figures and text; Combining figures or text
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T15/003D [Three Dimensional] image rendering
    • G06T15/50Lighting effects
    • G06T15/503Blending, e.g. for anti-aliasing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • H04N1/56Processing of colour picture signals
    • H04N1/60Colour correction or control
    • H04N1/6011Colour correction or control with simulation on a subsidiary picture reproducer

Definitions

  • the present invention relates to an image editing method and an image editing apparatus which perform image processing such as image combination and editing.
  • An alpha mask is often used when clipping an object (a subject such as a person or a thing appearing in a picture) out of an image and combining the clipped object with another image on a personal computer.
  • An alpha mask is a binary image of the same size as the original image. For example, a region having a pixel value of 0 is a background, and a region having a pixel value of 1 is an object. When the original image is synthesized with another image, only the background is replaced: referring to the alpha mask, only pixels whose mask value is 0 are substituted with pixels of the other image. In such an image synthesis, it is necessary to set the alpha mask precisely.
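The pixel substitution just described can be sketched in a few lines. The following NumPy snippet is an illustrative sketch under assumed conventions (mask value 1 = object, 0 = background); the function name and test images are not from the patent.

```python
# Illustrative sketch (not patent code) of compositing with a binary alpha
# mask: pixels whose mask value is 0 (background) are replaced with pixels
# from another image, while pixels whose mask value is 1 (object) are kept.
import numpy as np

def composite_with_mask(original, other, alpha_mask):
    """Replace the background (mask == 0) of `original` with `other`."""
    out = original.copy()
    background = alpha_mask == 0
    out[background] = other[background]
    return out

# 2x2 RGB example: left column is object (1), right column is background (0).
original = np.full((2, 2, 3), 200, dtype=np.uint8)
backdrop = np.zeros((2, 2, 3), dtype=np.uint8)
mask = np.array([[1, 0], [1, 0]], dtype=np.uint8)
result = composite_with_mask(original, backdrop, mask)
```

Boolean indexing with the 2-D mask broadcasts over the color channels, so the same sketch works for grayscale and RGB images.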
  • the original image from which an object is to be clipped and combined is selected.
  • the pointer of a mouse is moved on this original image to set an object region.
  • a background region is displayed with a given pattern (in this case the whole area is white) like the object image 3-1 of FIG. 6.
  • the alpha mask in this case is shown in the alpha mask 3 - 1 .
  • the object image 3 - 3 must be modified with respect to its object region.
  • the object image 3 - 2 requires no modification of the object region, because the error in the object image 3 - 2 does not appear on the composite image.
  • in the case of the extraction tool of Photoshop (Trademark), the composite image cannot be confirmed until the extraction work is finished. For this reason, even for the object image 3-2, the modification work cannot be omitted. As a result, the user must carry out useless work.
  • as thus described, the conventional method must set the object region more accurately than necessary, resulting in an increased working time.
  • an image editing method comprising: storing a plurality of original images; storing a plurality of alpha masks corresponding to the plurality of original images, respectively; superposing in turn a plurality of object images that are extracted from the plurality of original images by the plurality of alpha masks, to display a composite image on a first display region of a display screen; modifying selectively the alpha masks on a second display region of the display screen to modify the composite image; and displaying the composite image reflecting modification of at least one of the alpha masks on the second display region.
  • FIG. 1 shows a flowchart to explain an image editing method according to an embodiment of the present invention
  • FIG. 2 shows a screen of a personal computer used for executing the image editing method according to the embodiment
  • FIG. 3 shows a block diagram of an image editing apparatus to carry out the image editing method according to the embodiment
  • FIG. 4 shows an example of an original image
  • FIG. 5 shows an example of a composite image
  • FIG. 6 shows an example of an object image and an alpha mask
  • FIG. 7 shows an example of a time sequential original image
  • FIG. 8 shows an example of a composite image with the use of a time sequential original image
  • FIG. 9 shows a screen of a personal computer used for an image editing apparatus according to an embodiment of this invention.
  • FIG. 10 shows a screen of a personal computer used for an image editing apparatus according to an embodiment of this invention.
  • FIG. 11 shows a block diagram of an image editing apparatus according to another embodiment of the present invention.
  • FIG. 12 shows a screen of a personal computer used for an image editing apparatus according to another embodiment of the present invention.
  • FIG. 13 shows a flow chart of a process for editing an object region by a plurality of workers.
  • FIG. 14 shows an example in which a part of a screen is painted over in black.
  • FIG. 1 shows a flowchart to carry out an image editing method according to one embodiment of the present invention. Referring to this flowchart, an embodiment in which setting of an object region and display of a composite image are performed at the same time will now be described.
  • first, an object image and a composite image are displayed (S11). Using a plurality of original images and alpha masks which are already stored, the image specified by the user is displayed as an object image on a window 28 in a personal computer screen 26 as shown in FIG. 2. The composite image is displayed on the other window 27.
  • the alpha mask is set according to the input of the mouse pointer (S12). For example, an object pen is selected from a toolbar 29 as shown in FIG. 2, and the pointer is moved in the window 28 to set an object region. The user may request a change of the original image in which an object region is to be set.
  • Whether the change of the original image is requested is determined (S13). When this determination is YES, the object image is changed and the process returns to step S12 (S18). When the determination is NO, whether the alpha mask is changed is determined (S14). When the alpha mask is not changed, the process returns to step S12. When the alpha mask is changed, the object image is updated (S15), and the composite image is updated according to the changed alpha mask (S16).
  • in step S17, it is determined whether the user requests to end the setting of the object region and the display of the composite image (S17).
  • when the determination is NO, the process returns to step S12.
  • when the determination is YES, the process ends.
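The loop of steps S12 through S18 above can be sketched as follows. The `PointerUI` and `MaskEditor` classes are hypothetical stand-ins for the user interface and for the alpha mask setter / composite image generator of the embodiment; every name here is an assumption for illustration, not structure from the patent.

```python
# Hypothetical sketch of the S12-S18 editing loop of FIG. 1.
class PointerUI:
    """Fake UI that requests the end after a fixed number of iterations."""
    def __init__(self, iterations):
        self._left = iterations
    def pointer_input(self):
        return None                       # pen strokes would arrive here
    def change_of_original_requested(self):
        return False                      # S13: stay on the same image
    def end_requested(self):              # S17
        self._left -= 1
        return self._left <= 0

class MaskEditor:
    def __init__(self):
        self.composite_updates = 0
    def set_alpha_mask(self, stroke):     # S12
        pass
    def alpha_mask_changed(self):         # S14
        return True
    def update_object_image(self):        # S15
        pass
    def update_composite_image(self):     # S16
        self.composite_updates += 1
    def change_object_image(self):        # S18
        pass

def edit_loop(ui, editor):
    while True:
        editor.set_alpha_mask(ui.pointer_input())   # S12
        if ui.change_of_original_requested():       # S13
            editor.change_object_image()            # S18
            continue
        if editor.alpha_mask_changed():             # S14
            editor.update_object_image()            # S15
            editor.update_composite_image()         # S16
        if ui.end_requested():                      # S17
            break

editor = MaskEditor()
edit_loop(PointerUI(iterations=3), editor)
```

The point of the structure is that every pass through the loop refreshes the composite image (S16) whenever the mask changed, which is what lets the user judge a modification immediately.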
  • a block diagram of an image editing system for executing the image editing method of the flowchart of FIG. 1 is shown in FIG. 3.
  • a storage unit 101 is configured by a hard disk of a personal computer, a semiconductor memory, or a magnetic storage medium.
  • the storage unit 101 stores files of a plurality of original images beforehand. Assume that the original images are the original images 1, 2, 3 and 4 shown in FIG. 4.
  • the original image data 102 read from the storage unit 101 is input to an object image generator 103 .
  • the original image data is a part of the original images stored in the storage unit 101 .
  • the object image generator 103 generates an object image expressing the setting situation of the object region of the original image. In a personal computer, the object image generator 103 need not be exclusive hardware; it may be realized by a CPU.
  • An alpha mask setter 109 and a composite image generator 111 described later may also be constructed by a CPU.
  • the object images include, for example, an image whose background region is painted over with a given color, and an image whose background region is replaced with a given pattern. If the background region is colored with semitransparency, the pattern of the background can be confirmed, and the setting situation of the alpha mask can be understood. Similarly, the object region may be colored with semitransparency, or the background region and the object region may be colored with different semitransparent colors.
  • as an initial mask, a given alpha mask, for example, one designating the whole image as object or as background, is used.
  • the object image data 104 is sent to a memory 105 , and stored therein.
  • Object image data 106 is sent from the memory 105 to a display 107 at constant time intervals or at the timing for refreshing the screen, and displayed thereon.
  • This display 107 assumes the screen of a personal computer as shown in FIG. 2 .
  • This personal computer screen 26 displays a composite image display window 27 for displaying a composite image, an object image display window 28 on which the object region is set and a toolbar 29 used for the object region setting. Icons representing an object pen and a background pen are arranged on the toolbar 29 .
  • the object pen or the background pen can be selected by clicking the corresponding icon.
  • this modification is promptly reflected on the composite image displayed on the window 27.
  • the menu used for reading and saving an image file is prepared on the window 28 .
  • the user confirms the setting situation of the object region and modifies the image appropriately while watching the image on the screen 26.
  • the user selects an object pen from the toolbar 29 , and traces a part that is an object in the original image but a background in the current alpha mask to change the part to the object region.
  • if the part is a background in the original image but an object in the current alpha mask, the part is modified with the background pen.
  • operation information 108 of the user regarding movement of the pointer of a pointing device 115 such as a mouse is input to an alpha mask setter 109.
  • the alpha mask 110 changed according to the operation information 108 is sent to an object image generator 103 to update the object image.
  • the pointing device 115 may be a touch panel or a pen tablet.
  • a plurality of original image data 102 used for composition, that is, the original image data of the original images 1-4 shown in FIG. 4, are input to the composite image generator 111.
  • Each alpha mask of the original image data 102 of the original images 1 - 4 is sent to the composite image generator 111 from the storage unit 101 or alpha mask setter 109 .
  • the composite image generator 111 combines a plurality of original images 1 - 4 in a given order referring to the alpha mask to generate a composite image A ( FIG. 5 ).
  • the original image 1 is a lower layer image.
  • the original images 2 , 3 and 4 are overwritten on the lower layer image to generate a composite image A.
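The layer-order overwrite just described, where original image 1 forms the lowest layer and the object pixels of original images 2, 3 and 4 are overwritten on it in turn, might look like the following sketch. It is an illustrative NumPy sketch under assumed names, not code from the patent.

```python
# Illustrative sketch of layer-order compositing: the lowest image is the
# base, and each upper layer overwrites it only where its alpha mask marks
# an object pixel (value 1).
import numpy as np

def composite_in_order(layers):
    """layers: (image, alpha_mask) pairs, lowest layer first."""
    base_image, _ = layers[0]
    out = base_image.copy()
    for image, mask in layers[1:]:
        obj = mask == 1
        out[obj] = image[obj]
    return out

# Two 1x3 grayscale layers: the upper layer's middle pixel is object.
lower = np.array([[10, 10, 10]], dtype=np.uint8)
upper = np.array([[99, 99, 99]], dtype=np.uint8)
upper_mask = np.array([[0, 1, 0]], dtype=np.uint8)
img = composite_in_order([(lower, np.ones_like(lower)), (upper, upper_mask)])
```

Because later layers overwrite earlier ones, a mask error in a lower layer is harmless wherever an upper layer's object covers it, which is exactly the situation of object image 3-2 discussed above.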
  • the data of the object image is stored in the memory 113 and sent to the display 107 therefrom.
  • the object image of the original image 1 is displayed on the composite image display window 27 .
  • the data of the object image is stored in the memory 113 .
  • the object image of the original image 2 is displayed on the window 27, superposed on the object image of the original image 1.
  • a part protruding from the object image of the original image 2, for example, a part of the road appearing behind the vehicle body as shown in the object image 3-3 of FIG. 6, is displayed superposed on the front part of the object of the original image 1, that is, the vehicle body
  • the user modifies, with the object pen, the alpha mask corresponding to the original image 2 in the window 28 displaying the original image 2, to delete the protruding part.
  • This deletion can be confirmed in the composite image window 27 .
  • the data of the object image is stored in the memory 113 .
  • the object image of the original image 3 is displayed on the window 27, superposed on the composite image of the original images 1 and 2.
  • a part protruding from the object image of the original image 3, for example, a part of the road appearing on the front wheel of the vehicle body as shown in the object image 3-3 of FIG. 6, appears on the composite image of the three original images as a protruding part.
  • if this protrusion is a part of the road, the protrusion can be incorporated in the composite image without incongruity even if the object image of the original image 4 is not superposed on that of the original image 3. If the protrusion is not a part of the road, the object image of the original image 4 is superposed once on that of the original image 3 to confirm whether the protrusion is hidden by the object image of the original image 4. If it is not hidden, the corresponding alpha mask is modified with the object pen in the window 28 displaying the original image 3 to erase the protrusion. This erasure can be confirmed on the composite image window 27. The object image of the original image 4, that is, the vehicle body, is extracted by the corresponding alpha mask and superposed on the previous image while the alpha mask is modified as necessary. As a result, the composite image A shown in FIG. 5 is generated.
  • the composite image display window 27 and the object image display window 28 are displayed on the personal computer screen.
  • a plurality of original images allocated to hierarchy layers and alpha masks corresponding to the original images are taken in sequentially.
  • the object region of the original image is extracted using the corresponding alpha mask on the object image display window 28 .
  • the object regions are superposed while watching the composite image window 27 .
  • the alpha mask is modified as necessary to generate a suitable composite image.
  • the alpha mask 110 whose modification is finished is sent to the storage unit 101.
  • a portion surrounded by a dotted line is called an alpha mask generator 116 .
  • the protrusion does not appear on the composite image because it is overwritten by the car of the original image 4. Therefore, it is found that the object image 3-2 does not need to be modified. Because a modification of the alpha mask is reflected in the composite image promptly, whether the modification is sufficient can be conveniently confirmed.
  • Updating of the composite image is repeated with a comparatively short time interval.
  • the composite image is updated at the timing when movement of the pen stops or when the pen leaves the screen.
  • the composite image is updated. If only the part of the composite image that is changed in the alpha mask is overwritten, the processing load of the update can be reduced.
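The partial-update idea in the last sentence can be sketched as below: only the bounding box of the edited mask pixels is recomposited instead of the whole image. This is an illustrative NumPy sketch with assumed names, not the patent's implementation.

```python
# Illustrative sketch of recompositing only the region whose alpha mask
# changed, to reduce the cost of each composite-image update.
import numpy as np

def update_changed_region(composite, layers, changed):
    """Recomposite only the bounding box of `changed`, a boolean map of
    the alpha-mask pixels that were edited."""
    ys, xs = np.nonzero(changed)
    if ys.size == 0:
        return composite            # nothing to redo
    y0, y1 = ys.min(), ys.max() + 1
    x0, x1 = xs.min(), xs.max() + 1
    base, _ = layers[0]
    patch = base[y0:y1, x0:x1].copy()
    for image, mask in layers[1:]:
        obj = mask[y0:y1, x0:x1] == 1
        patch[obj] = image[y0:y1, x0:x1][obj]
    composite[y0:y1, x0:x1] = patch
    return composite

lower = np.zeros((4, 4), dtype=np.uint8)
upper = np.full((4, 4), 7, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=np.uint8)
mask[1, 1] = 1                      # the user just set this pixel to object
changed = mask == 1
stale = lower.copy()                # composite before the edit
fresh = update_changed_region(stale, [(lower, None), (upper, mask)], changed)
```

A pen stroke typically touches a small region, so recompositing only its bounding box keeps the prompt-feedback loop cheap even for large images.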
  • FIGS. 7 and 8 show original images and composite images in the embodiment of the present invention that is applied to a video image.
  • FIG. 7 shows a scene where a car of interest gradually approaches from a distance.
  • Such a moving image and an alpha mask of each frame are stored in the storage unit 101 .
  • the image which the user finally wants to make is the composite image B4 of FIG. 8.
  • the original image is a video image
  • the composite image is a still image obtained by superposing the cars of the original images 6, 7 and 8 on the original image 5. Even in this case, if the alpha mask is modified while confirming the composite image, the alpha mask need not be made more accurate than necessary.
  • the composite image has a plurality of frames to be displayed in a video image, and object regions corresponding to the frames are set.
  • the composite image B 1 of FIG. 8 is the same image as the original image 5 of FIG. 7
  • the composite image B 2 is an image obtained by combining the original image 5 with a car of the original image 6
  • the composite image B3 is an image obtained by combining the cars of the original images 6 and 7 with the original image 5. Displaying the composite images B1, B2, B3 and B4 sequentially generates a moving image. In this case, the quality of the composite images B2 and B3 is as important as that of the composite image B4.
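The sequence B1-B4 described above, where frame k additionally superposes the object of one more source frame on the base image, can be sketched as follows. This is an illustrative NumPy sketch with assumed names, not code from the patent.

```python
# Illustrative sketch of the cumulative composites B1..Bn: B1 is the base
# image alone, and each later frame superposes one more (image, mask) layer.
import numpy as np

def cumulative_composites(base, object_layers):
    frames = [base.copy()]
    current = base.copy()
    for image, mask in object_layers:
        obj = mask == 1
        current[obj] = image[obj]
        frames.append(current.copy())
    return frames

# Tiny 1x3 stand-ins for original image 5 and the cars of images 6 and 7.
base = np.zeros((1, 3), dtype=np.uint8)
car6 = (np.full((1, 3), 6, np.uint8), np.array([[1, 0, 0]], np.uint8))
car7 = (np.full((1, 3), 7, np.uint8), np.array([[0, 1, 0]], np.uint8))
frames = cumulative_composites(base, [car6, car7])
```

Playing the returned frames in order reproduces the moving-image display, so a mask error can be judged in the same form the viewer will finally see.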
  • if the composite image displayed when the alpha mask is modified is displayed in the form of a moving image, the processing load increases. However, the composite image can be confirmed in its final display form. For example, even if a certain frame includes an error, it can be determined that the frame need not be corrected if the error is displayed only for a moment.
  • if the frames of the composite image are displayed side by side on the screen, that is, the four images of FIG. 8 are displayed side by side, all the frames can be conveniently confirmed at a time.
  • if the modified object is colored thinly, surrounded with a dotted line, or marked with a pattern such as slanted lines, the modified object can be distinguished easily.
  • a user can easily change the original image to which an object region is set by selecting an image file name.
  • when a point on the composite image is clicked, the original image whose object region includes the point or lies near the point is automatically displayed. In this case, simply clicking a portion of the composite image whose composite quality is not good conveniently displays the original image to be modified.
  • another example of a personal computer screen is shown in FIG. 9.
  • Thumbnail images 203 corresponding to the original images are displayed on an original image display window 202 in a screen 201. Owing to this, the images used for composition can be confirmed easily.
  • the composite image is displayed on the composite image window 204 .
  • the frame of the composite image can be moved backward and forward by operation of a slide bar 205 in the case of the moving image.
  • an object image is displayed on the object image window 207 , to permit editing of the image.
  • the image is modified using a pen tool in a toolbar 208 .
  • the thumbnail image 206 is displayed so that it can be seen that the image is being edited, for example, by being colored or surrounded by a thick frame. If the original image is a video image, the display frame of the object image can be moved backward and forward by operating the slide bar 209. When only some frames of the original image are to be used for combination, a start frame pointer 210 and an end frame pointer 211 are provided.
  • the pointers 210 and 211 are placed on the start frame and the end frame of the frames used for combination, respectively, so that only the frames between them are used for combination. Accordingly, when the number of composite frames is larger than the number of set frames, the original image is repeated, or the last frame is held. Alternatively, the image is reproduced backward in time from the last frame.
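The three overrun behaviors just listed, repeating the source range, holding the last frame, or playing backward from the last frame, can be sketched as one index-mapping helper. The names and mode strings below are illustrative assumptions, not from the patent.

```python
# Illustrative sketch: map a composite-frame index to a source-frame index
# inside the [start, end] range set by the start/end frame pointers.
def source_frame(i, start, end, mode="repeat"):
    """i is the 0-based composite-frame index. Past the range:
    'repeat' loops from the start frame, 'hold' stays on the end
    frame, and 'backward' plays in reverse and forward again."""
    n = end - start + 1
    if mode == "hold":
        return min(start + i, end)
    if mode == "repeat":
        return start + i % n
    if mode == "backward":
        period = 2 * n - 2 if n > 1 else 1
        j = i % period
        return start + (j if j < n else period - j)
    raise ValueError(mode)
```

For a range of frames 2 through 5, "repeat" wraps index 4 back to frame 2, "hold" pins every later index to frame 5, and "backward" walks 2, 3, 4, 5, 4, 3, 2, ... indefinitely.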
  • when the thumbnail image 212 is clicked, the object image window 213 is opened and editing is enabled.
  • this figure shows an example in which three images can be edited at the same time.
  • another embodiment of a personal computer screen is shown in FIG. 10. Since the composite image window 302 and the object image window 303 in a personal computer screen 301 are the same as those of the previous embodiment, a detailed explanation is omitted.
  • the thumbnails of all original images stored in a hard disk are displayed on an image list window 304.
  • the thumbnail image corresponding to the image moves to a composite list window 305 , and simultaneously the composite image is displayed on the composite image window 302 .
  • a thumbnail image on the image list window 304 is dragged to the composite list window 305 or the composite image window 302 to move the image to the composite list.
  • the original image used for the composite image is displayed with a thumbnail image on the composite list window 305 .
  • this alpha mask can be edited.
  • when the thumbnail image of the composite list window 305 is dragged to the image list window 304, the image is removed from the composite image.
  • an embodiment for setting object regions for original image files located at one place, using a plurality of computers, will be described referring to FIG. 11.
  • the original image 403 stored in the storage unit 402 of a server 401 is input to a client 404 corresponding to another computer.
  • an alpha mask 406 is generated by each of alpha mask generators 405 shown in FIG. 11 , and returned to the storage unit 402 of the server 401 .
  • alpha masks are generated with a plurality of clients. According to this embodiment, many operators are necessary, but many alpha masks are generated in a short period of time by parallel operation.
  • a LAN or a public communication line connects the server and each client.
  • Each client notifies the server of the original image it is editing so that a plurality of editing tasks are not executed for the same original image at the same time.
  • the server holds information on the original image being edited, and each client reads the information to prevent an original image that is being edited from being edited further.
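The bookkeeping described in the last two bullets, where the server records which original image each client is editing and other clients consult that record before starting, might look like this minimal sketch. All class, method, and identifier names are assumptions for illustration.

```python
# Illustrative sketch of the server-side record preventing two clients
# from editing the alpha mask of the same original image at once.
class EditRegistry:
    def __init__(self):
        self._editing = {}          # image_id -> client_id

    def try_start_editing(self, image_id, client_id):
        """Return True and record the claim, or False if another
        client is already editing this original image."""
        if image_id in self._editing:
            return False
        self._editing[image_id] = client_id
        return True

    def finish_editing(self, image_id, client_id):
        if self._editing.get(image_id) == client_id:
            del self._editing[image_id]

    def images_in_editing(self):
        """Snapshot the record; clients read this to gray out images."""
        return dict(self._editing)

registry = EditRegistry()
ok_a = registry.try_start_editing("original_3", "client_A")
ok_b = registry.try_start_editing("original_3", "client_B")   # refused
registry.finish_editing("original_3", "client_A")
ok_b2 = registry.try_start_editing("original_3", "client_B")  # now allowed
```

A real multi-client system would need the acquire/release operations to be atomic on the server side; the sketch only shows the bookkeeping, not the transport or locking.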
  • the screen of the client 404 shown in FIG. 11 is shown in FIG. 12 .
  • the composite image is displayed on the composite image window 502 in the screen 501 .
  • the server sends the data of the original image and alpha mask.
  • the thumbnail images of the original image are displayed side by side on an original image display window 503 .
  • the object image is displayed on the object image window 507 to make editing possible.
  • the process on the object image window 507 is similar to the above embodiment.
  • another user edits an alpha mask in another client at the same time.
  • a monitor window 508 is provided for confirming the work situation of another user and displays the object image being edited in another client. Also, the object image being edited in still another client is displayed on a monitor window 509.
  • the thumbnail image 505 being edited in another client is colored with a color different from that of the thumbnail image 504.
  • An original image to be edited next is selected from the non-edited original images.
  • the alpha masks being edited are held in the respective clients and transferred to the server sequentially, so that the clients monitor one another's editing situations.
  • Each client reads the alpha masks to generate the object images for monitoring and to update the composite image. If the transfer frequency is decreased, or the transfer is performed only when the operator saves the alpha mask, the monitoring frame rate decreases, but the quantity of transferred data can be reduced.
  • the original image and alpha mask are read from the server 401 , the object image and composite image are displayed and the object image in another client is displayed as a monitor image (S 21 ).
  • the alpha mask is set according to the input of the mouse pointer.
  • the user changes the original image in order to set the object region.
  • the monitor information is updated (S 22 ).
  • the monitor information indicates which original image is being edited and the modification situation of the alpha mask.
  • in step S23, it is determined whether the monitoring is to be updated after step S22 (S23). In this determination, when the original image being edited is changed in another client, or when the alpha mask is modified there, the process advances to step S29 to update the monitoring image. Otherwise, the process advances to step S24.
  • whether the change of the original image is requested is determined in step S24. If the determination is YES, the process advances to step S31. If the determination is NO, the process advances to step S25. In step S25, whether the alpha mask is changed is determined. When the determination is NO, the process returns to step S22. When the determination is YES, the process advances to step S26. In step S26, the object image is updated according to the changed alpha mask, and the process advances to step S27. In step S27, the composite image is updated according to the changed alpha mask. Thereafter, it is determined whether the user requests to end the setting of the object region and the display of the composite image. When the determination is NO, the process returns to step S22. When the determination is YES, the process is ended.
  • in step S29, the monitoring image is updated based on the monitoring information, and the process advances to step S30.
  • the composite image is updated according to the modification.
  • in step S31, the object image is changed to the image that the user requires, and the process returns to step S22.
  • the embodiments for clipping an image have been described above.
  • an image effect may also be applied, such as modifying the color of the image, or shading a license plate of a car in black or blurring it as shown in FIG. 14.
  • a function by which the brightness value or color of an original image can be modified, or by which a modified result can be saved in the original image file, is convenient.
  • when the image effect is to be added to a partial region of an image, there is a method of using a second alpha mask for specifying the region.
  • the alpha mask described above expresses an object region that is clipped and combined with another object region.
  • the second alpha mask is used, for example, for a process of blurring only the region whose pixel value is "1". If the second alpha mask is used, the data of the original image is not changed. Thus, the image effect can be switched ON or OFF as necessary.
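The second-mask idea can be sketched as below, with a simple 3x3 box blur standing in for any image effect. Everything here is an illustrative assumption: the effect output is computed separately and copied in only where the second mask is 1, so the original image data stays untouched and the effect can be toggled freely.

```python
# Illustrative sketch of applying an effect only inside a second alpha
# mask (value 1 = apply effect), without modifying the original data.
import numpy as np

def apply_masked_effect(image, effect_mask, effect_on=True):
    out = image.copy()
    if not effect_on:
        return out                  # effect toggled OFF: original survives
    # 3x3 box blur over the whole image, used only inside the mask region.
    h, w = image.shape
    padded = np.pad(image.astype(np.float32), 1, mode="edge")
    blurred = sum(padded[dy:dy + h, dx:dx + w]
                  for dy in range(3) for dx in range(3)) / 9.0
    region = effect_mask == 1
    out[region] = blurred[region].astype(image.dtype)
    return out

# Tiny grayscale "plate": blur only the top-left pixel.
plate = np.array([[0, 90], [0, 0]], dtype=np.uint8)
mask2 = np.array([[1, 0], [0, 0]], dtype=np.uint8)
hidden = apply_masked_effect(plate, mask2)
shown = apply_masked_effect(plate, mask2, effect_on=False)
```

Because the blur is written into a copy, turning the effect OFF simply returns the unmodified data, which mirrors the ON/OFF behavior described above.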
  • a region to which an image effect is to be added is colored with semitransparency to facilitate distinction of the region, and the second alpha mask is edited while watching the object image window. The image to which the image effect is actually applied is displayed on the composite image window.
  • since the composite image can always be confirmed, it is not necessary to set the object region more accurately than needed, which reduces the working time.

Abstract

An image editing method comprises storing original images, storing alpha masks corresponding to the original images, respectively, superposing in turn object images that are extracted from the original images by the alpha masks, to display a composite image on a first display region of a display screen, modifying selectively the alpha masks on a second display region of the display screen to modify the composite image as necessary, and displaying the composite image reflecting modification of at least one of the alpha masks on the second display region.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application is based upon and claims the benefit of priority from the prior Japanese Patent Application No. 2002-197228, filed Jul. 5, 2002, the entire contents of which are incorporated herein by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to an image editing method and an image editing apparatus which perform image processing such as image combination and editing.
  • 2. Description of the Related Art
  • An alpha mask is often used when clipping an object (a subject such as a person or a thing appearing in a picture) out of an image and combining the clipped object with another image on a personal computer. An alpha mask is a binary image of the same size as the original image. For example, a region having a pixel value of 0 is a background, and a region having a pixel value of 1 is an object. When the original image is synthesized with another image, only the background is replaced: referring to the alpha mask, only pixels whose mask value is 0 are substituted with pixels of the other image. In such an image synthesis, it is necessary to set the alpha mask precisely. Photoshop of Adobe Systems Incorporated exists as software for performing such image combination. There will now be described an extraction tool adopted in Photoshop version 6 (Adobe Photoshop 6.0 Japanese edition user guide, p. 154 to 157). Original images 1, 2, 3 and 4 shown in FIG. 4, as an example, are combined to generate a composite image A shown in FIG. 5. In this case, a plurality of original images are arranged in a plurality of layers, respectively. The layer decides the overwrite order of the original images. The composite image is generated by overwriting the upper layers in turn on the image of the lowest layer, which is used as the base.
  • In other words, the original image from which an object is to be clipped and combined is selected. The pointer of a mouse is moved on this original image to set an object region. When a preview is carried out during the setting process, the background region is displayed with a given pattern (in this case the whole area is white) like the object image 3-1 of FIG. 6. Thus, the background region and the object region can be confirmed. The alpha mask in this case is shown as the alpha mask 3-1. When the border of the object region deviates from the contour of the actual object, as with the object images 3-2 and 3-3 of FIG. 6, it is possible to modify the image with a pen tool again. When the "Enter" button is pressed, the extraction dialogue closes, with the result that the background region of the original image is erased. When no lower layer image is disposed with respect to the original image beforehand, the background is displayed replaced with another pattern. However, if a lower layer image is disposed, a composite image is displayed.
  • In the example of the above-mentioned composite image A, the left side of the original image 3 appears on the composite image, but the lower side is covered by the car in front. Therefore, the object image 3-3 must be modified with respect to its object region. However, the object image 3-2 requires no modification of the object region, because the error in the object image 3-2 does not appear on the composite image. Nevertheless, in the case of the extraction tool of Photoshop (Trademark), the composite image cannot be confirmed until the extraction work is finished. For this reason, even for the object image 3-2, the modification work cannot be omitted. As a result, a user must carry out useless work. As thus described, the conventional method must set the object region more accurately than necessary, resulting in an increased working time.
  • BRIEF SUMMARY OF THE INVENTION
  • It is an object of the present invention to provide an image editing method and an image editing system which can edit an image without needlessly setting an object region at the time of creating an alpha mask.
  • According to an aspect of the present invention, there is provided an image editing method comprising: storing a plurality of original images; storing a plurality of alpha masks corresponding to the plurality of original images, respectively; superposing in turn a plurality of object images that are extracted from the plurality of original images by the plurality of alpha masks, to display a composite image on a first display region of a display screen; modifying selectively the alpha masks on a second display region of the display screen to modify the composite image; and displaying the composite image reflecting modification of at least one of the alpha masks on the second display region.
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • FIG. 1 shows a flowchart to explain an image editing method according to an embodiment of the present invention;
  • FIG. 2 shows a screen of a personal computer used for executing the image editing method according to the embodiment;
  • FIG. 3 shows a block diagram of an image editing apparatus to carry out the image editing method according to the embodiment;
  • FIG. 4 shows an example of an original image;
  • FIG. 5 shows an example of a composite image;
  • FIG. 6 shows an example of an object image and an alpha mask;
  • FIG. 7 shows an example of a time sequential original image;
  • FIG. 8 shows an example of a composite image with the use of a time sequential original image;
  • FIG. 9 shows a screen of a personal computer used for an image editing apparatus according to an embodiment of this invention;
  • FIG. 10 shows a screen of a personal computer used for an image editing apparatus according to an embodiment of this invention;
  • FIG. 11 shows a block diagram of an image editing apparatus according to another embodiment of the present invention;
  • FIG. 12 shows a screen of a personal computer used for an image editing apparatus according to another embodiment of the present invention;
  • FIG. 13 shows a flow chart of a process for editing an object region by a plurality of workers; and
  • FIG. 14 shows an example in which a part of a screen is painted over in black.
  • DETAILED DESCRIPTION OF THE INVENTION
  • There will now be described an embodiment of the present invention in conjunction with the drawings. FIG. 1 shows a flowchart for carrying out an image editing method according to one embodiment of the present invention. Referring to this flowchart, there will now be described an embodiment in which setting of an object region and display of a composite image are performed at the same time.
  • At first, an object image and a composite image are displayed (S11). Using a plurality of original images and alpha masks which are already stored, the display image specified by a user is displayed as an object image on a window 28 in a personal computer screen 26 as shown in FIG. 2. A composite image is displayed on the other window 27. The alpha mask is set according to the input of a mouse pointer (S12). For example, an object pen is selected from a toolbar 29 as shown in FIG. 2, and the pointer is moved in the window 28 to set an object region. A change of the original image in which an object region is to be set may be requested by the user.
  • Whether the change of the original image is requested is determined (S13). When this determination is YES, the object image is changed (S18) and the process returns to step S12. When the determination is NO, whether the alpha mask is changed is determined (S14). When the alpha mask is not changed, the process returns to step S12. When the alpha mask is changed, the object image is updated (S15). The composite image is then updated according to the changed alpha mask (S16).
  • Next, it is determined whether the user requests an end of the object region setting and of the composite image display (S17). When the determination is NO, the process returns to step S12. When the determination is YES, the process ends.
  • A block circuit of an image editing system for executing the image editing method of the flowchart of FIG. 1 is shown in FIG. 3. In this block circuit, a storage unit 101 is configured by a hard disk of a personal computer, a semiconductor memory, or a magnetic storage medium. The storage unit 101 stores files of a plurality of original images beforehand. Assume that the original images are the original images 1, 2, 3 and 4 shown in FIG. 4.
  • The original image data 102 read from the storage unit 101 is input to an object image generator 103. The original image data is a part of the original images stored in the storage unit 101. The object image generator 103 generates an object image expressing the setting situation of the object region of the original image. If the apparatus is a personal computer, the object image generator 103 need not be dedicated hardware but may be a CPU. An alpha mask setter 109 and a composite image generator 111, described later, may also be constructed by a CPU.
  • The object images include, for example, an image whose background region is painted over with a given color and an image whose background region is replaced with a given pattern. If the background region is colored semitransparently, the pattern of the background can be confirmed, and the setting situation of the alpha mask can be understood. Similarly, the object region may be colored semitransparently, or the background region and the object region may be colored with different semitransparent colors.
  • In the initial state of the object region setting, a given alpha mask in which, for example, the whole image is the object or the whole image is the background is used. The object image data 104 is sent to a memory 105 and stored therein. Object image data 106 is sent from the memory 105 to a display 107 at a constant time interval or at a timing for refreshing a screen, and is displayed thereon. This display 107 is assumed to be the screen of a personal computer as shown in FIG. 2. This personal computer screen 26 displays a composite image display window 27 for displaying a composite image, an object image display window 28 on which the object region is set, and a toolbar 29 used for the object region setting. Icons representing an object pen and a background pen are arranged on the toolbar 29. The object pen or background pen can be selected by clicking the corresponding icon. When the display image of the window 28 is modified by the object pen or background pen, this modification is reflected promptly on the composite image displayed on the window 27. A menu used for reading and saving an image file is prepared on the window 28.
  • The user confirms the setting situation of the object region and modifies the image appropriately while he or she watches the image on the screen 26. For example, the user selects the object pen from the toolbar 29 and traces a part that is an object in the original image but a background in the current alpha mask to change the part to the object region. When a part is a background in the original image but an object in the current alpha mask, the part is modified with the background pen. As described above, operation information 108 of the user regarding the movement of the pointer of a pointing device 115, such as a mouse, is input to an alpha mask setter 109. The alpha mask 110 changed according to the operation information 108 is sent to the object image generator 103 to update the object image. The pointing device 115 may be a touch panel or a pen tablet.
  • A plurality of original image data 102 used for composition, that is, the original image data of the original images 1-4 shown in FIG. 4, are input to the composite image generator 111. Each alpha mask of the original image data 102 of the original images 1-4 is sent to the composite image generator 111 from the storage unit 101 or the alpha mask setter 109. The composite image generator 111 combines the plurality of original images 1-4 in a given order with reference to the alpha masks to generate a composite image A (FIG. 5).
  • In the example of FIGS. 4 and 5, the original image 1 is the lowest layer image. The original images 2, 3 and 4 are overwritten on the lowest layer image to generate the composite image A. In other words, at first, the object image is extracted from the original image 1 by the corresponding alpha mask; the data of the object image is stored in the memory 113 and sent therefrom to the display 107. As a result, the object image of the original image 1 is displayed on the composite image display window 27.
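The layer-by-layer overwrite performed by the composite image generator can be sketched as follows. This is an illustrative sketch under the assumption that each layer is an (image, alpha mask) pair ordered from the lowest layer upward; the nested-list representation and function names are not from the patent.

```python
def overlay(base, layer, mask):
    """Overwrite on `base` the pixels of `layer` whose alpha-mask
    value is 1 (the object region); where the mask is 0, the base
    pixel shows through."""
    return [[l if m == 1 else b
             for b, l, m in zip(brow, lrow, mrow)]
            for brow, lrow, mrow in zip(base, layer, mask)]

def compose_layers(layers):
    """`layers` is a list of (image, alpha_mask) pairs, lowest layer
    first. The lowest image serves as the base; the object region of
    each upper layer is overwritten on it in turn."""
    composite, _ = layers[0]          # the base layer's mask is unused
    for image, mask in layers[1:]:
        composite = overlay(composite, image, mask)
    return composite

# Tiny 1x3 example: a background and one "car" layer whose frame
# also contains background clutter ('x') masked out by the alpha mask.
background = [['.', '.', '.']]
car_image  = [['C', 'C', 'x']]
car_mask   = [[1, 1, 0]]
print(compose_layers([(background, None), (car_image, car_mask)]))
# [['C', 'C', '.']]
```

Because upper layers are applied last, a mask error in a lower layer only matters if no upper layer's object happens to cover it, which is exactly the situation the windows 27 and 28 let the user judge.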
  • When the object image of the original image 2 is extracted using the corresponding alpha mask, the data of the object image is stored in the memory 113. The object image of the original image 2 is displayed on the window 27 superposed on the object image of the original image 1. At this time, when a part protruding from the object image of the original image 2, for example, a part of the road shown behind the vehicle body as in the object image 3-3 of FIG. 6, is displayed superposed on the front part of the object of the original image 1, that is, the vehicle body, the user modifies with the object pen the alpha mask corresponding to the original image 2 in the window 28 displaying the original image 2 to delete the protruding part. This deletion can be confirmed in the composite image window 27. Similarly, when the object image of the original image 3 is extracted using the corresponding alpha mask, the data of the object image is stored in the memory 113. The object image of the original image 3 is displayed on the window 27 superposed on the composite image of the original images 1 and 2. Then, a part protruding from the object image of the original image 3, for example, a part of the road indicated on the front wheel of the vehicle body as in the object image 3-3 of FIG. 6, appears on the composite image of the three original images as a protrusion.
  • However, since this protrusion is a part of the road, even if the object image of the original image 4 is not superposed on the object image of the original image 3, the protrusion can be incorporated in the composite image without a sense of incongruity. If this protrusion is not a part of the road, the object image of the original image 4 is superposed once on the object image of the original image 3 to confirm whether the protrusion is hidden by the object image of the original image 4. If the protrusion is not hidden by the object image of the original image 4, the corresponding alpha mask is modified with the object pen in the window 28 displaying the original image 3 to erase the protrusion. This erasure of the protrusion can be confirmed on the composite image window 27. The object image of the original image 4, that is, the vehicle body, is extracted by the corresponding alpha mask and superposed on the previous image while the alpha mask is modified as necessary. As a result, the composite image A shown in FIG. 5 is generated.
  • As discussed above, according to the present embodiment, the composite image display window 27 and the object image display window 28 are displayed on the personal computer screen. A plurality of original images allocated to hierarchical layers, and alpha masks corresponding to the original images, are taken in sequentially. The object region of each original image is extracted using the corresponding alpha mask on the object image display window 28. The object regions are superposed while the composite image window 27 is watched. The alpha mask is modified as necessary to generate a suitable composite image.
  • When the composite image is generated as described above, the alpha mask 110 finished by modification is sent to the storage unit 101. In FIG. 3, a portion surrounded by a dotted line is called an alpha mask generator 116.
  • With the above constitution, it is possible to set the object region while confirming the composite image, and only a part that deteriorates the quality of the composite image is modified. As described above, when the composite image A of FIG. 5 is made up, in the case of the object image 3-3 of FIG. 6, the car behind is overwritten by the protrusion on the left-hand side of that car, with the result that a good composite image is not provided. For this reason, while the composite image is confirmed on the composite image window 27, this protrusion is deleted on the object image window 28.
  • On the other hand, in the object image 3-2, the protrusion does not appear on the composite image because it is overwritten by the car of the original image 4. Therefore, it is found that the object image 3-2 does not need to be modified. Because a modification of the alpha mask is reflected in the composite image promptly, whether the modification is sufficient can be confirmed conveniently.
  • Updating of the composite image is repeated at a comparatively short time interval. When the great throughput due to this repetitive process is to be avoided, the composite image is updated at the timing when the movement of the pen stops or when the pen leaves the screen. Alternatively, the composite image is updated whenever the alpha mask is changed. If only the part of the composite image corresponding to the change in the alpha mask is overwritten, the throughput of the updating can be reduced.
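The partial-update idea in the last sentence can be sketched as follows: instead of recomposing every pixel, only the pixels touched by the pen are recomputed by scanning the layer stack at those positions. This is an illustrative sketch; the function names and the (image, mask) pair representation are assumptions, not the patent's implementation.

```python
def recompute_pixel(layers, x, y):
    """Scan the upper layers from top to bottom; the first layer whose
    alpha mask marks (x, y) as object (value 1) supplies the pixel.
    The lowest layer is the base and supplies it as a fallback."""
    for image, mask in reversed(layers[1:]):
        if mask[y][x] == 1:
            return image[y][x]
    return layers[0][0][y][x]

def apply_mask_edit(composite, layers, edited_pixels):
    """Overwrite only the pixels whose alpha-mask values were changed
    by the pen, instead of recomposing the whole composite image."""
    for x, y in edited_pixels:
        composite[y][x] = recompute_pixel(layers, x, y)
    return composite

base  = [['.', '.'], ['.', '.']]
img1  = [['A', 'A'], ['A', 'A']]
mask1 = [[1, 0], [0, 0]]
comp  = [['A', '.'], ['.', '.']]   # full composite before the edit
mask1[0][1] = 1                     # the pen adds one object pixel
print(apply_mask_edit(comp, [(base, None), (img1, mask1)], [(1, 0)]))
# [['A', 'A'], ['.', '.']]
```

The work per update is proportional to the number of edited pixels rather than to the full image size, which is why this reduces the throughput of the repetitive updating.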
  • FIGS. 7 and 8 show original images and composite images in an embodiment of the present invention applied to a video image. FIG. 7 shows a scene in which a car of interest gradually approaches from a distant place. Such a moving image and an alpha mask of each frame are stored in the storage unit 101. The image which the user finally wants to make is the composite image B4 of FIG. 8. In this example, the original image is a video image, but the composite image is a still image obtained by superposing the cars of the original images 6, 7 and 8 on the original image 5. Even in this case, if the alpha mask is modified while the composite image is confirmed, no alpha mask more accurate than required needs to be formed.
  • There is a case in which the composite image has a plurality of frames to be displayed as a video image, and object regions corresponding to the frames are set. The composite image B1 of FIG. 8 is the same image as the original image 5 of FIG. 7, and the composite image B2 is an image obtained by combining the original image 5 with the car of the original image 6. The composite image B3 is an image obtained by combining the cars of the original images 6 and 7 with the original image 5. Displaying the composite images B1, B2, B3 and B4 sequentially generates a moving image. In this case, the quality of the composite images B2 and B3, as well as that of the composite image B4, is important. Consequently, when the alpha mask of the original image 7, for example, is modified, an image corresponding to the composite image B3 made up using only the original images 5, 6 and 7 is displayed. For example, even if a part of the road of the original image 7 is combined as a protrusion of the object region, if the part is indistinctive in the composite image, it can be confirmed that the original image need not be modified. On the contrary, when the car behind is overwritten by the protrusion, the modification is necessary. This situation can be understood from FIG. 8.
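The construction of the frame sequence B1, B2, B3, B4 can be sketched as follows: each later frame is the base image with one more object layer overwritten. This is an illustrative sketch (nested-list images, hypothetical names), not the patent's implementation.

```python
def composite_frames(base, object_layers):
    """Build the frame sequence B1, B2, ...: frame k is the base image
    with the first k-1 object layers overwritten in turn, so each new
    frame shows one more extracted object. A copy is kept per frame."""
    frame = [row[:] for row in base]
    frames = [[row[:] for row in frame]]           # B1 is the base alone
    for image, mask in object_layers:
        for y, (irow, mrow) in enumerate(zip(image, mask)):
            for x, m in enumerate(mrow):
                if m == 1:                          # object pixel: overwrite
                    frame[y][x] = irow[x]
        frames.append([row[:] for row in frame])
    return frames

road = [['.', '.', '.']]
cars = [([['6', '6', '6']], [[1, 0, 0]]),   # car of original image 6
        ([['7', '7', '7']], [[0, 1, 0]])]   # car of original image 7
print(composite_frames(road, cars))
# [[['.', '.', '.']], [['6', '.', '.']], [['6', '7', '.']]]
```

Editing the mask of, say, the second layer and rebuilding only the frames from that layer onward would reproduce the behavior described above, where the intermediate composite B3 is displayed when the alpha mask of the original image 7 is modified.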
  • If the composite image to be displayed when the alpha mask is modified is displayed in the form of a moving image, the throughput increases. However, the composite image can be confirmed in its final display form. For example, even if a certain frame includes an error, it can be determined that the frame need not be corrected if the error is displayed only for a moment.
  • If the frames of the composite image are displayed in juxtaposition on the screen, that is, if the four images of FIG. 8 are displayed in juxtaposition, all frames can conveniently be confirmed at a time. When many objects are combined, as in the composite images A and B4, which object is being modified cannot be determined instantly. For this reason, if, in the composite image, the modified object is colored thinly, surrounded with a dotted line, or marked with a pattern such as slanted lines, the modified object can be distinguished easily.
  • The user can easily change the original image in which an object region is set by selecting an image file name. Alternatively, when a point of the composite image is clicked, the original image in which the point is included in the object region, or is near the object region, is automatically displayed. In this case, if a portion where the quality of the composite image is poor is simply clicked, the original image to be modified is conveniently displayed.
  • When the composite image is saved in the storage unit, there is a situation in which the file size should not be made too big. In such a case, it is convenient to display, on the window 27 or the window 28, the file size the composite image will occupy when stored in the storage unit. If the file size is too big, the user can contrive to reduce the number of objects and the number of frames while watching the numeric value of the file size.
  • Another example of a personal computer screen is shown in FIG. 9. Thumbnail images 203 corresponding to the original images are displayed on an original image display window 202 in a screen 201. Thanks to this, the images used for composition can be confirmed easily. The composite image is displayed on the composite image window 204. In the case of a moving image, the frame of the composite image can be moved backward and forward by operation of a slide bar 205.
  • The user clicks the thumbnail image 206 corresponding to the image including an object region to be modified, or drags the thumbnail image 206 onto the object image window 207. As a result, an object image is displayed on the object image window 207 to permit editing of the image. The image is modified using a pen tool in a toolbar 208. The thumbnail image 206 is displayed, for example colored or surrounded by a thick frame, so that it can be seen that the image is under edition. If the original image is a video image, the display frame of the object image can be moved backward and forward by operation of the slide bar 209. When only some frames of the original image are to be used for combination, a start frame pointer 210 and an end frame pointer 211 are prepared. The pointers 210 and 211 are arranged on the start frame and the end frame of the frames used for combination, respectively, so that only the frames between the start and end frames are used for combination. When the number of composite frames is larger than the number of set frames, the original image is repeated or the last frame is held. Alternatively, the image is reproduced backward in time from the last frame. When another image is to be edited simultaneously with the image 206, the thumbnail image 212 is clicked. Then, the object image window 213 is opened and editing is enabled. In this way, the figure shows an example in which editing of three images is possible at the same time.
  • Another embodiment of a personal computer screen is shown in FIG. 10. Since a composite image window 302 and an object image window 303 in a personal computer screen 301 are the same as those of the previous embodiment, a detailed explanation is omitted. The thumbnails of all the original images stored in a hard disk are displayed on an image list window 304. When an image actually used for combination is clicked, the thumbnail image corresponding to the image moves to a composite list window 305, and simultaneously the composite image is displayed on the composite image window 302. Alternatively, the thumbnail image on the image list window 304 is dragged to the composite list window 305 or the composite image window 302 to move the image to the composite list.
  • The original images used for the composite image are displayed as thumbnail images on the composite list window 305. When any one of the thumbnail images is clicked or dragged to the object image window 303, its alpha mask can be edited. When a thumbnail image of the composite list window 305 is dragged to the image list window 304, the image is removed from the composite image.
  • An embodiment for setting object regions for original image files located at one place, using a plurality of computers, will be described referring to FIG. 11. In this embodiment, the original image 403 stored in the storage unit 402 of a server 401 is input to a client 404 corresponding to another computer. In the client 404, an alpha mask 406 is generated by each of the alpha mask generators 405 shown in FIG. 11 and returned to the storage unit 402 of the server 401. Similarly, alpha masks are generated by a plurality of clients. This embodiment requires many operators, but many alpha masks are generated in a short period of time by parallel operation.
  • A LAN or a public communication line connects the server and each client. Each client notifies the server of the original image it is editing so that a plurality of editing tasks are not executed on the same original image at the same time. The server holds information on the original images being edited, and each client reads the information to prevent an original image already being edited from being edited further.
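The server-side bookkeeping described above amounts to a simple per-image lock. The sketch below is illustrative only; the class and method names are hypothetical and not from the patent, and real deployments would add networking and failure handling.

```python
class EditLockServer:
    """Records which original image each client is editing so that
    two clients never edit the same image at the same time."""

    def __init__(self):
        self._editing = {}          # image name -> client id

    def acquire(self, image, client):
        """Grant the edit right unless another client holds it."""
        owner = self._editing.get(image)
        if owner is not None and owner != client:
            return False            # already being edited in another client
        self._editing[image] = client
        return True

    def release(self, image, client):
        """Give up the edit right when the client finishes editing."""
        if self._editing.get(image) == client:
            del self._editing[image]

server = EditLockServer()
print(server.acquire("original_3", "client_A"))   # True: lock granted
print(server.acquire("original_3", "client_B"))   # False: image in editing
server.release("original_3", "client_A")
print(server.acquire("original_3", "client_B"))   # True: lock released
```

Each client would query this state to color the thumbnails of images being edited elsewhere, as described for the window 503 below.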
  • The screen of the client 404 shown in FIG. 11 is shown in FIG. 12. The composite image is displayed on the composite image window 502 in the screen 501. The server sends the data of the original images and alpha masks. The thumbnail images of the original images are displayed side by side on an original image display window 503. When the thumbnail image 504 of the image corresponding to an alpha mask to be edited is selected and clicked, the object image is displayed on the object image window 507 to make editing possible.
  • The process on the object image window 507 is similar to that of the above embodiment. In the present embodiment, another user may edit an alpha mask in another client at the same time. In this case, a monitor window 508 is provided for confirming the work situation of the other user and displays the object image being edited in the other client. Likewise, an object image being edited in yet another client is displayed on a monitor window 509.
  • In the original image display window 503, the thumbnail image 505 of an image being edited in another client is colored with a color different from that of the thumbnail image 504. As a result, the original images that are not being edited can be discriminated easily. An original image to be edited next is selected from the non-edited original images. The alpha masks being edited are held in the respective clients and transferred to the server sequentially, so that the respective editing situations can be monitored by one another. Each client reads the alpha masks to generate the object images for monitoring and to update the composite image. If the transfer frequency is decreased, or the transfer is performed only when the operator saves the alpha mask, the monitoring frame rate decreases, but the quantity of transferred data can be reduced.
  • The process in the embodiment of FIG. 11 will be described referring to a flow chart of FIG. 13.
  • At first, the original image and alpha mask are read from the server 401, the object image and the composite image are displayed, and the object image in another client is displayed as a monitor image (S21). The alpha mask is set according to the input of the mouse pointer, and the user changes the original image in order to set the object region. The monitor information is updated (S22). The monitor information indicates which original image is being edited and the modification situation of the alpha mask.
  • It is determined whether the monitoring information has been updated after step S22 (S23). In this determination, when the original image to be edited is changed in another client, or when the alpha mask is modified there, the process advances to step S29 to update the monitoring image. Otherwise, the process advances to step S24.
  • Whether a change of the original image is requested is determined in step S24. If the determination is YES, the process advances to step S31. If the determination is NO, the process advances to step S25. In step S25, whether the alpha mask is changed is determined. When the determination is NO, the process returns to step S22. When the determination is YES, the process advances to step S26. In step S26, the object image is updated according to the changed alpha mask, and the process advances to step S27. In step S27, the composite image is updated according to the changed alpha mask. Thereafter, it is determined whether the user requests an end of the setting of the object region and of the display of the composite image. When the determination is NO, the process returns to step S22. When the determination is YES, the process is ended.
  • In step S29, the monitoring image is updated based on the monitoring information, and the process advances to step S30. In step S30, when the alpha mask has been modified in another client, the composite image is updated according to the modification. In step S31, the object image is changed to the image that the user requires, and the process returns to step S22.
  • The embodiments described hereinbefore concern clipping an image. There are also cases in which an image effect, such as a modification of the color of the image, or shading a license plate of a car with black or blurring it as shown in FIG. 14, is to be added to some region of an image. For this purpose, a function with which the brightness value or color of the original image can be modified, and a function with which the modified result can be saved in the original image file, are convenient. When the image effect is to be added to only a partial region of the image, there is a method of using a second alpha mask for specifying the region. The alpha mask described above expresses an object region that is clipped and combined with another image. The second alpha mask is used, for example, for a process of blurring only a region whose pixel value is "1". If the second alpha mask is used, the data of the original image is not changed. Thus, the image effect can be turned ON or OFF as necessary. Similarly to the above embodiments, the region to which the image effect is to be added is colored semitransparently to facilitate distinguishing the region, and the second alpha mask is edited while the object image window is watched. The image to which the image effect is actually applied is displayed on the composite image window.
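The second-alpha-mask mechanism can be sketched as follows: the effect is applied only where the mask value is 1, and the original image data is left untouched, so the effect can be toggled on and off. This is an illustrative sketch with hypothetical names; the "effect" here is a simple blackout standing in for blurring or shading.

```python
def apply_effect(image, effect_mask, effect):
    """Apply `effect` (e.g. a blackout or blur function) only to the
    pixels whose value in the second alpha mask is 1. The original
    `image` is not modified, so the effect can be switched ON or OFF
    as necessary by simply recomputing the display image."""
    return [[effect(p) if m == 1 else p
             for p, m in zip(prow, mrow)]
            for prow, mrow in zip(image, effect_mask)]

# Black out a "license plate" region (marked 1 in the second mask).
image = [[10, 20, 30], [40, 50, 60]]
mask  = [[0,  1,  1],  [0,  0,  0]]
print(apply_effect(image, mask, lambda p: 0))
# [[10, 0, 0], [40, 50, 60]]
print(image)        # the original pixel data is unchanged
# [[10, 20, 30], [40, 50, 60]]
```

Swapping the `effect` function, for instance for a local averaging that blurs, changes the rendered result without ever altering the stored original image file.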
  • As described above, according to the invention, since the composite image can always be confirmed, it is not necessary to set the object region more accurately than necessary, resulting in a decreased working time.
  • Additional advantages and modifications will readily occur to those skilled in the art. Therefore, the invention in its broader aspects is not limited to the specific details and representative embodiments shown and described herein. Accordingly, various modifications may be made without departing from the spirit or scope of the general inventive concept as defined by the appended claims and their equivalents.

Claims (4)

1-19. (canceled)
20. An image editing apparatus comprising:
a memory unit configured to store a plurality of original images including still pictures;
a generator configured to generate a plurality of alpha masks corresponding respectively to object images extracted from the plurality of original images, respectively;
a superposing unit configured to superpose in turn the object images that are extracted from the plurality of original images by the plurality of alpha masks, to display a composite image on a first display region of a display screen;
a modifying unit configured to modify selectively the alpha masks on a second display region of the display screen to modify the composite image, when superposing in turn the object images, which alpha masks are selectively displayed on the second display region simultaneously with the composite image being displayed on the first display region; and
a display unit configured to display the composite image reflecting modification of at least one of the alpha masks on the second display region.
21. A computer system for video editing, comprising:
means for storing a plurality of original images including still pictures;
means for generating a plurality of alpha masks corresponding respectively to object images extracted from the plurality of original images, respectively;
means for superposing in turn the object images that are extracted from the plurality of original images by the plurality of alpha masks, to display a composite image on a first display region of a display screen;
means for modifying selectively the alpha masks on a second display region of the display screen to modify the composite image, when superposing in turn the object images, which alpha masks are selectively displayed on the second display region simultaneously with the composite image being displayed on the first display region; and
means for displaying the composite image reflecting modification of at least one of the alpha masks on the second display region.
22. A video edition program stored in a computer readable medium, comprising:
means for instructing a computer to store a plurality of original images including still pictures;
means for instructing the computer to generate a plurality of alpha masks corresponding respectively to object images extracted from the plurality of original images, respectively;
means for instructing the computer to superpose in turn the object images that are extracted from the plurality of original images by the plurality of alpha masks, to display a composite image on a first display region of a display screen;
means for instructing the computer to modify selectively the alpha masks on a second display region of the display screen to modify the composite image, when superposing in turn the object images, which alpha masks are selectively displayed on the second display region simultaneously with the composite image being displayed on the first display region; and
means for instructing the computer to display the composite image reflecting modification of at least one of the alpha masks on the second display region.
US11/396,466 2002-07-05 2006-04-04 Image editing method and image editing apparatus Abandoned US20060176319A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US11/396,466 US20060176319A1 (en) 2002-07-05 2006-04-04 Image editing method and image editing apparatus

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
JP2002197228A JP2004038746A (en) 2002-07-05 2002-07-05 Image editing method and image editing system
JP2002-197228 2002-07-05
US10/610,563 US7050070B2 (en) 2002-07-05 2003-07-02 Image editing method and image editing apparatus
US11/396,466 US20060176319A1 (en) 2002-07-05 2006-04-04 Image editing method and image editing apparatus

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US10/610,563 Division US7050070B2 (en) 2002-07-05 2003-07-02 Image editing method and image editing apparatus

Publications (1)

Publication Number Publication Date
US20060176319A1 true US20060176319A1 (en) 2006-08-10

Family

ID=29997068

Family Applications (2)

Application Number Title Priority Date Filing Date
US10/610,563 Expired - Fee Related US7050070B2 (en) 2002-07-05 2003-07-02 Image editing method and image editing apparatus
US11/396,466 Abandoned US20060176319A1 (en) 2002-07-05 2006-04-04 Image editing method and image editing apparatus

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US10/610,563 Expired - Fee Related US7050070B2 (en) 2002-07-05 2003-07-02 Image editing method and image editing apparatus

Country Status (2)

Country Link
US (2) US7050070B2 (en)
JP (1) JP2004038746A (en)

Families Citing this family (61)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP3782368B2 (en) * 2002-03-29 2006-06-07 株式会社東芝 Object image clipping method and program, and object image clipping device
JP4068485B2 (en) * 2002-09-30 2008-03-26 株式会社東芝 Image composition method, image composition apparatus, and image composition program
US7283277B2 (en) * 2002-12-18 2007-10-16 Hewlett-Packard Development Company, L.P. Image borders
JP4211460B2 (en) * 2003-03-31 2009-01-21 コニカミノルタホールディングス株式会社 Image editing service system and screen information generation method
GB2405067B (en) * 2003-08-01 2008-03-12 Caladrius Ltd Blending a digital image cut from a source image into a target image
JP2005091430A (en) * 2003-09-12 2005-04-07 Fuji Photo Film Co Ltd Image comparison display method, device therefor and image comparison display program
KR100617702B1 (en) * 2004-01-13 2006-08-28 삼성전자주식회사 Portable terminal capable of editing image and image edition method using that
US7280117B2 (en) * 2004-09-10 2007-10-09 Avid Technology, Inc. Graphical user interface for a keyer
US7752548B2 (en) * 2004-10-29 2010-07-06 Microsoft Corporation Features such as titles, transitions, and/or effects which vary according to positions
ITBO20040749A1 (en) * 2004-12-02 2005-03-02 Bieffebi Spa MACHINE FOR THE ASSEMBLY OF A FLEXOGRAPHIC CLICHE REGISTER WITH A VIRTUAL INFORMATION SYSTEM
US7526725B2 (en) * 2005-04-08 2009-04-28 Mitsubishi Electric Research Laboratories, Inc. Context aware video conversion method and playback system
GB0508073D0 (en) * 2005-04-21 2005-06-01 Bourbay Ltd Automated batch generation of image masks for compositing
AU2005201930B2 (en) * 2005-05-06 2009-02-19 Canon Kabushiki Kaisha Simplification of alpha compositing in the presence of transfer functions
GB0510793D0 (en) * 2005-05-26 2005-06-29 Bourbay Ltd Segmentation of digital images
GB0510792D0 (en) * 2005-05-26 2005-06-29 Bourbay Ltd Assisted selections with automatic indication of blending areas
EP1889471B1 (en) 2005-06-08 2010-08-04 Thomson Licensing Method and apparatus for alternate image/video insertion
US7353143B1 (en) * 2005-08-19 2008-04-01 Apple Inc. Reviewing and changing the outcome of a digital signal processing operation
US7644364B2 (en) * 2005-10-14 2010-01-05 Microsoft Corporation Photo and video collage effects
US7487465B2 (en) * 2006-01-06 2009-02-03 International Business Machines Corporation Application clippings
US9349219B2 (en) * 2006-01-09 2016-05-24 Autodesk, Inc. 3D scene object switching system
KR101351091B1 (en) * 2006-12-22 2014-01-14 삼성전자주식회사 Image forming apparatus and control method of consecutive photographing image
JP4309920B2 (en) * 2007-01-29 2009-08-05 株式会社東芝 Car navigation system, road marking identification program, and road marking identification method
US20090003698A1 (en) * 2007-04-26 2009-01-01 Heligon Limited Segmentaton of digital images
WO2009078957A1 (en) * 2007-12-14 2009-06-25 Flashfoto, Inc. Systems and methods for rule-based segmentation for objects with full or partial frontal view in color images
US20090213140A1 (en) * 2008-02-26 2009-08-27 Masaru Ito Medical support control system
JP4513903B2 (en) * 2008-06-25 2010-07-28 ソニー株式会社 Image processing apparatus and image processing method
JP5171713B2 (en) * 2009-03-27 2013-03-27 パナソニック株式会社 Information display device and information display method
US8243098B2 (en) * 2009-06-16 2012-08-14 Mitre Corporation Authoritative display for critical systems
US8502834B2 (en) * 2009-06-29 2013-08-06 Vistaprint Technologies Limited Representing a printed product using pixel opacity and color modification
US9535599B2 (en) * 2009-08-18 2017-01-03 Adobe Systems Incorporated Methods and apparatus for image editing using multitouch gestures
US8670615B2 (en) * 2009-09-30 2014-03-11 Flashfoto, Inc. Refinement of segmentation markup
US20110113361A1 (en) * 2009-11-06 2011-05-12 Apple Inc. Adjustment presets for digital images
US9628722B2 (en) 2010-03-30 2017-04-18 Personify, Inc. Systems and methods for embedding a foreground video into a background feed based on a control input
US9311567B2 (en) 2010-05-10 2016-04-12 Kuang-chih Lee Manifold learning and matting
US20110314412A1 (en) * 2010-06-17 2011-12-22 Microsoft Corporation Compositing application content and system content for display
US8649592B2 (en) 2010-08-30 2014-02-11 University Of Illinois At Urbana-Champaign System for background subtraction with 3D camera
US8806340B2 (en) * 2010-09-01 2014-08-12 Hulu, LLC Method and apparatus for embedding media programs having custom user selectable thumbnails
JP5565227B2 (en) * 2010-09-13 2014-08-06 カシオ計算機株式会社 Image processing apparatus, image processing method, and program
JP5833822B2 (en) * 2010-11-25 2015-12-16 パナソニックIpマネジメント株式会社 Electronics
JP5506864B2 (en) * 2011-06-20 2014-05-28 富士フイルム株式会社 Image processing apparatus, image processing method, and image processing program
KR101901910B1 (en) * 2011-12-23 2018-09-27 삼성전자주식회사 Method and apparatus for generating or storing resultant image which changes selected area
US9311756B2 (en) * 2013-02-01 2016-04-12 Apple Inc. Image group processing and visualization
US10115431B2 (en) * 2013-03-26 2018-10-30 Sony Corporation Image processing device and image processing method
JP6330990B2 (en) * 2013-08-29 2018-05-30 セイコーエプソン株式会社 Image display device, recording device, program, and display method
US9485433B2 (en) 2013-12-31 2016-11-01 Personify, Inc. Systems and methods for iterative adjustment of video-capture settings based on identified persona
US9414016B2 (en) 2013-12-31 2016-08-09 Personify, Inc. System and methods for persona identification using combined probability maps
US9693023B2 (en) * 2014-02-05 2017-06-27 Panasonic Intellectual Property Management Co., Ltd. Monitoring apparatus, monitoring system, and monitoring method
JP2014212560A (en) * 2014-07-01 2014-11-13 京セラ株式会社 Image transmitter, image transmission method, and image transmission program
US10417801B2 (en) 2014-11-13 2019-09-17 Hewlett-Packard Development Company, L.P. Image projection
US9916668B2 (en) 2015-05-19 2018-03-13 Personify, Inc. Methods and systems for identifying background in video data using geometric primitives
US9563962B2 (en) 2015-05-19 2017-02-07 Personify, Inc. Methods and systems for assigning pixels distance-cost values using a flood fill technique
US10269155B1 (en) * 2015-06-29 2019-04-23 Amazon Technologies, Inc. Image artifact masking
US10235032B2 (en) * 2015-08-05 2019-03-19 Htc Corporation Method for optimizing a captured photo or a recorded multi-media and system and electric device therefor
US9883155B2 (en) 2016-06-14 2018-01-30 Personify, Inc. Methods and systems for combining foreground video and background video using chromatic matching
US9881207B1 (en) 2016-10-25 2018-01-30 Personify, Inc. Methods and systems for real-time user extraction using deep learning networks
GB2559759B (en) * 2017-02-16 2020-07-29 Jaguar Land Rover Ltd Apparatus and method for displaying information
JP6845322B2 (en) 2017-06-06 2021-03-17 マクセル株式会社 Mixed reality display system
JP7364381B2 (en) * 2019-07-19 2023-10-18 ファナック株式会社 Image processing device
CN110636365B (en) * 2019-09-30 2022-01-25 北京金山安全软件有限公司 Video character adding method and device, electronic equipment and storage medium
US11800056B2 (en) 2021-02-11 2023-10-24 Logitech Europe S.A. Smart webcam system
US11800048B2 (en) 2021-02-24 2023-10-24 Logitech Europe S.A. Image generating system with background replacement or modification capabilities

Patent Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040100482A1 (en) * 1997-08-01 2004-05-27 Claude Cajolet Method and system for editing or modifying 3D animations in a non-linear editing environment
US6285381B1 (en) * 1997-11-20 2001-09-04 Nintendo Co. Ltd. Device for capturing video image data and combining with original image data
US6677967B2 (en) * 1997-11-20 2004-01-13 Nintendo Co., Ltd. Video game system for capturing images and applying the captured images to animated game play characters
US6335985B1 (en) * 1998-01-07 2002-01-01 Kabushiki Kaisha Toshiba Object extraction apparatus
US20050219265A1 (en) * 1998-02-03 2005-10-06 Seiko Epson Corporation Projection display apparatus, display method for same and image display apparatus
US6563960B1 (en) * 1999-09-28 2003-05-13 Hewlett-Packard Company Method for merging images
US20020051009A1 (en) * 2000-07-26 2002-05-02 Takashi Ida Method and apparatus for extracting object from video image
US6999103B2 (en) * 2002-03-29 2006-02-14 Kabushiki Kaisha Toshiba Video object clipping method and apparatus
US20040096085A1 (en) * 2002-09-26 2004-05-20 Nobuyuki Matsumoto Image analysis method, apparatus and program
US20040125115A1 (en) * 2002-09-30 2004-07-01 Hidenori Takeshima Strobe image composition method, apparatus, computer, and program product
US20050046729A1 (en) * 2003-08-28 2005-03-03 Kabushiki Kaisha Toshiba Apparatus and method for processing a photographic image

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7587065B2 (en) 2002-09-26 2009-09-08 Kabushiki Kaisha Toshiba Image analysis method, analyzing movement of an object in image data
US20070188659A1 (en) * 2006-02-13 2007-08-16 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US8701006B2 (en) * 2006-02-13 2014-04-15 Canon Kabushiki Kaisha Image processing apparatus and image processing method
US20070296736A1 (en) * 2006-06-26 2007-12-27 Agfa Inc. System and method for scaling overlay images
US8072472B2 (en) * 2006-06-26 2011-12-06 Agfa Healthcare Inc. System and method for scaling overlay images
US20100328337A1 (en) * 2008-03-03 2010-12-30 Itaru Furukawa Line drawing processing apparatus and program
US8593477B2 (en) * 2008-03-03 2013-11-26 Dainippon Screen Mfg. Co., Ltd. Line drawing processing apparatus and computer-readable recording medium
US20100080489A1 (en) * 2008-09-30 2010-04-01 Microsoft Corporation Hybrid Interface for Interactively Registering Images to Digital Models
US20110196578A1 (en) * 2010-02-05 2011-08-11 Strohmaier Jason M Method for operating a vehicle display and a vehicle display system
CN102742265A (en) * 2010-02-05 2012-10-17 本田技研工业株式会社 Method for operating a vehicle display and a vehicle display system
US8494723B2 (en) * 2010-02-05 2013-07-23 Honda Motor Co., Ltd. Method for operating a vehicle display and a vehicle display system
US11823305B2 (en) 2019-04-01 2023-11-21 Volkswagen Aktiengesellschaft Method and device for masking objects contained in an image

Also Published As

Publication number Publication date
US7050070B2 (en) 2006-05-23
JP2004038746A (en) 2004-02-05
US20040004626A1 (en) 2004-01-08

Similar Documents

Publication Publication Date Title
US7050070B2 (en) Image editing method and image editing apparatus
US7561160B2 (en) Data editing program, data editing method, data editing apparatus and storage medium
AU2006235850B2 (en) Reviewing editing operations
EP0635808B1 (en) Method and apparatus for operating on the model data structure on an image to produce human perceptible output in the context of the image
JP2965119B2 (en) Image processing system and image processing method
US6373499B1 (en) Automated emphasizing of an object in a digital photograph
US7095413B2 (en) Animation producing method and device, and recorded medium on which program is recorded
AU2873392A (en) A compositer interface for arranging the components of special effects for a motion picture production
US7755644B1 (en) Revealing clipped portion of image object
US20120229501A1 (en) Method and a Computer System for Displaying and Selecting Images
US8645870B2 (en) Preview cursor for image editing
US7890866B2 (en) Assistant editing display method for media clips
US6404434B1 (en) Curve generating apparatus and method, storage medium storing curve generating program, and method of setting associate points
WO2001060060A1 (en) Control of sequence of video modifying operations
US6128020A (en) Computer system supporting design operation of three-dimensional object and method therefor
US6122069A (en) Efficient method of modifying an image
JP2720807B2 (en) Scenario editing device
US6462750B1 (en) Enhanced image editing through an object building viewport
JPH1027258A (en) Image editing system, document editing system and image editing method
JP2006121481A (en) Image display method, image display program and editing device
JP5471417B2 (en) Information processing apparatus, information processing method, and program thereof
JP2565049B2 (en) Dynamic scenario presentation method
JP3205016B2 (en) Electronic device and control method thereof
US20190294437A1 (en) Image edit apparatus, image edit method, and recording medium
JP3638999B2 (en) Image data creation device

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION