US20040046896A1 - Image processing apparatus and method - Google Patents

Image processing apparatus and method

Info

Publication number
US20040046896A1
US20040046896A1 (application US10/656,128)
Authority
US
United States
Prior art keywords
image
change
image signal
input
pixel
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/656,128
Inventor
Shinichiro Koga
Yoshihiro Ishida
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Canon Inc
Original Assignee
Canon Inc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from JP7128378A external-priority patent/JPH08320935A/en
Priority claimed from JP8024337A external-priority patent/JPH09219853A/en
Application filed by Canon Inc filed Critical Canon Inc
Priority to US10/656,128 priority Critical patent/US20040046896A1/en
Publication of US20040046896A1 publication Critical patent/US20040046896A1/en
Abandoned legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/147Scene change detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/14Picture signal circuitry for video frequency region
    • H04N5/144Movement detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/188Capturing isolated or intermittent images triggered by the occurrence of a predetermined event, e.g. an object reaching a predetermined position

Definitions

  • the present invention relates to a motion image processing apparatus and method and, more particularly, to an image processing apparatus and method which can reduce transmission, storage, and display data amounts by detecting an image change and can be suitably used for an image transmission apparatus, an image storage device, and a motion image display device. That is, the present invention can be used for a monitoring apparatus for transmitting and storing images in a monitoring area, a video conference apparatus, a general-purpose computer or video server which handles motion images, and the like.
  • a relatively easy method is a method of detecting a change area by obtaining a difference between two images. According to this easy method, even a general-purpose computer can execute detection processing at a sufficient processing speed.
  • typical methods are a method of detecting a change area on the basis of a difference between a background image photographed in advance and a currently photographed image, and a method of detecting a change area on the basis of a difference between two frames adjacent to each other along the time base.
  • the present invention has been made in consideration of the above situation, and has as its object to easily and reliably detect an image change at a sufficiently high processing speed without using any dedicated device.
  • an image processing apparatus including input means (step) for inputting a continuous image signal, detection means (step) for detecting a frame change in an image by comparing the image signal input by the input means (step) with a reference image signal, and storage means (step) for updating/storing the image signal input by the input means (step) as the reference image signal in units of frames in accordance with an output from the detection means (step).
  • an image processing apparatus including input means (step) for inputting a continuous image signal, change component extraction means (step) for extracting change components between images by comparing the image signal input by the input means (step) with a reference image signal, erroneous extraction correction means (step) for detecting and removing an erroneously extracted change component from the change components extracted by the change component extraction means (step), and image change discrimination means (step) for discriminating an image change in the image signal on the basis of the change component corrected by the erroneous extraction correction means (step).
  • FIG. 1 is a flow chart showing the contents of an image processing method according to the first to fourth embodiments of the present invention
  • FIG. 2 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 1;
  • FIG. 3 is a block diagram showing an example of the actual hardware arrangement for realizing the functions in FIG. 2;
  • FIG. 4 is a block diagram showing another example of the arrangement of the motion image processing apparatus for performing the processing in FIG. 1;
  • FIG. 5 is a view for explaining a pixel processing order in detecting an image change in the first and second embodiments
  • FIG. 6 is a flow chart showing the processing in the image change detection step in the first embodiment
  • FIG. 7 is a flow chart showing the processing in the image change detection step in the second embodiment
  • FIG. 8 is a flow chart showing the processing in the image change detection step in the third embodiment
  • FIGS. 9A and 9B are views for explaining an example of how an image is divided into blocks, a pixel processing order, and a block processing order in the third and fourth embodiments;
  • FIG. 10 is a flow chart showing another example of the processing in the image change detection step in the third embodiment.
  • FIG. 11 is a flow chart showing the processing in the image change detection step in the fourth embodiment.
  • FIG. 12 is a flow chart showing another example of the processing in the image change detection step in the fourth embodiment.
  • FIG. 13 is a flow chart showing the contents of an image processing method according to the fifth embodiment of the present invention.
  • FIG. 14 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 13;
  • FIG. 15 is a block diagram showing another example of the arrangement of the motion image processing apparatus for performing the processing in FIG. 13;
  • FIG. 16 is a view showing an outline of a motion image processing method to which the present invention is applied.
  • FIG. 17 is a flow chart showing the contents of an image processing method according to the sixth to eighth embodiments of the present invention.
  • FIG. 18 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 17;
  • FIG. 19 is a block diagram showing another example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 17;
  • FIG. 20 is a view for explaining a pixel processing order in detecting an image change in the sixth to ninth embodiments.
  • FIG. 21 is a flow chart showing the detailed processing in the change component extraction step, the erroneous extraction correction step, and the image change discrimination step in the sixth embodiment;
  • FIG. 22 is a flow chart showing the detailed initialization processing in step S 501 in FIG. 21;
  • FIG. 23 is a flow chart showing the detailed processing in the change component extraction step, the erroneous extraction correction step, and the image change discrimination step in the seventh embodiment;
  • FIG. 24 is a flow chart showing the contents of an image processing method according to the eighth embodiment.
  • FIG. 25 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 24.
  • FIG. 26 is a block diagram showing another example of the arrangement of the motion image processing apparatus for performing the processing in FIG. 24.
  • FIG. 1 is a flow chart showing the contents of a motion image processing method according to the first embodiment.
  • In image input step 1 , a still image is extracted from an image obtained by performing a photographing operation with a video camera or the like (not shown), from a digital motion image stored in a disk unit (not shown), or the like, and the extracted image is input (this input still image will be referred to as an input image hereinafter).
  • In change detection step 2 , the input image input in image input step 1 is compared with the image obtained at the newest of the several image changes caused in the past (to be referred to as a newest change image hereinafter) to detect a current image change.
  • If an image change is detected, the flow advances to newest change image storage step 3 . If no image change is detected, the flow returns to image input step 1 .
  • In newest change image storage step 3 , the input image that was the object of comparison when the image change was detected in change detection step 2 is stored as the newest change image.
  • In image output step 4 , the input image obtained when the image change is detected in change detection step 2 is output to a communication path (not shown) (e.g., a WAN or LAN) for transmission.
  • Alternatively, the input image is output to an image storage device (not shown) to be stored, or to an image display device (not shown) to be displayed.
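The four steps above can be sketched as follows. This is a minimal illustration, not the patented implementation: the function names (`detect_change`, `process_frames`) and the flat-list image representation are assumptions made for the example.

```python
# Sketch of the FIG. 1 flow (steps 1-4). Images are represented here as
# flat lists of pixel values; detect_change() is a hypothetical stand-in
# for change detection step 2 (any of the embodiments could be used).
def detect_change(input_image, newest_change_image, threshold=10):
    """Return True when the total absolute pixel difference
    between input image and newest change image reaches threshold."""
    total = sum(abs(a - b) for a, b in zip(input_image, newest_change_image))
    return total >= threshold

def process_frames(frames, first_reference, threshold=10):
    """Return only the frames in which an image change is detected,
    updating the newest change image each time (steps 2-4)."""
    newest = first_reference
    outputs = []
    for frame in frames:                               # image input step 1
        if detect_change(frame, newest, threshold):    # change detection step 2
            newest = frame                             # newest change image storage step 3
            outputs.append(frame)                      # image output step 4
    return outputs
```

Note that frames identical (or nearly identical) to the newest change image are simply dropped, which is where the transmission, storage, or display data reduction comes from.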
  • FIG. 2 is a block diagram showing an example of the arrangement of an image processing apparatus for performing the processing in FIG. 1.
  • FIG. 3 shows an example of the arrangement of hardware for realizing the functions in FIG. 2.
  • a CPU (central processing unit) 21 controls the overall operation of the motion image processing apparatus of this embodiment.
  • a ROM (read-only memory) 22 stores various processing programs.
  • a RAM (random access memory) 23 stores various kinds of information during processing.
  • a disk unit 24 stores information such as digital motion image information.
  • a disk input/output device (disk I/O) 25 inputs/outputs digital motion image information stored in the disk unit 24 .
  • a video camera 26 acquires image information by photographing an object to be photographed.
  • a video capture device 27 captures a still image from an image obtained by the video camera 26 .
  • a communication device 28 transmits a still image input from the disk unit 24 via the disk I/O 25 , or transmits a still image, input from the video camera 26 via the video capture device 27 , via a communication path such as a WAN or LAN.
  • the CPU 21 , the ROM 22 , the RAM 23 , the disk I/O 25 , the video capture device 27 , and the communication device 28 are connected to a bus 29 .
  • an image input unit 11 acquires a still image from an image obtained by the video camera, a digital motion image stored in the disk unit, or the like.
  • An input image storage unit 14 stores the still image.
  • the image input unit 11 may be constituted by a known image input unit which causes the video capture device 27 to acquire a still image (input image) from an image, obtained by performing a photographing operation with the video camera 26 , and stores the still image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image input unit 11 may be constituted by a known image input unit which reads, via the disk I/O 25 , a still image (input image) from a digital motion image stored in the disk unit 24 , and stores the read image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image input unit 11 may be constituted by a known image input unit which causes a digital still camera to photograph a still image, and stores the image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image input unit 11 may be constituted by a combination of the above plurality of image input units.
  • a change detection unit 12 compares an input image stored in the input image storage unit 14 with a newest change image stored in a newest change image storage unit 15 to detect a current image change. When an image change is detected, the input image stored in the input image storage unit 14 is stored as a newest change image in the newest change image storage unit 15 .
  • the change detection unit 12 can be constituted by the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 , and the RAM 23 used as a work memory or the disk unit 24 .
  • the change detection unit 12 may be constituted by a dedicated CPU, a dedicated RAM, and a dedicated disk unit, or dedicated hardware.
  • An image output unit 13 handles an input image which is stored in the input image storage unit 14 when the change detection unit 12 detects an image change: it outputs and transmits the image to a communication path (a WAN or LAN), outputs and stores the image in the image storage device, or outputs the image to the image display device to display it.
  • the image output unit 13 can be constituted by a known image output unit which transmits an image to a communication path such as a WAN or LAN via the communication device 28 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image output unit 13 may be constituted by a known image output unit which stores part of a digital motion image in the disk unit 24 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the disk unit 24 may be a unit which can be used via a network such as a LAN.
  • the image output unit 13 may be constituted by a known image output unit which continuously displays still images on the same portion on a display to display a motion image. As is apparent, the image output unit 13 may be constituted by a combination of the above plurality of image output units.
  • the input image storage unit 14 and the newest change image storage unit 15 may be constituted by the RAM 23 or the disk unit 24 .
  • each unit may be constituted by a dedicated storage device.
  • the input image storage unit 14 and the newest change image storage unit 15 may be constituted by two image storage units 16 a and 16 b which are selectively used upon a switching operation, as shown in FIG. 4.
  • the roles of the image storage units 16 a and 16 b are switched in newest change image storage step 3 in FIG. 1: the image storage unit 16 a is switched to serve as a storage unit for storing a newest change image, and the image storage unit 16 b is switched to serve as a storage unit for storing an input image.
  • the input image stored in the image storage unit 16 a when the image change is detected is used as a newest change image afterward, and the processing of copying an input image from the input image storage unit 14 to the newest change image storage unit 15 in the case shown in FIG. 2 can be omitted.
  • image storage units 16 a and 16 b need not be used as separate memories, but the above function can be realized by one memory with a bank switching operation.
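The buffer-swapping idea can be sketched as follows. The `FrameStore` class and its method names are hypothetical; the point is that committing a change swaps the roles of the two storage units instead of copying image data.

```python
# Sketch of the two-buffer arrangement of FIG. 4: one buffer holds the
# input image, the other the newest change image, and a role swap
# replaces the copy from input image storage to newest change image storage.
class FrameStore:
    def __init__(self):
        self.buffers = [None, None]   # image storage units 16a and 16b
        self.input_idx = 0            # index of the buffer holding the input image

    def store_input(self, image):
        self.buffers[self.input_idx] = image

    @property
    def input_image(self):
        return self.buffers[self.input_idx]

    @property
    def newest_change_image(self):
        return self.buffers[1 - self.input_idx]

    def commit_change(self):
        # newest change image storage step 3: no copy, just swap roles
        self.input_idx = 1 - self.input_idx
```

As the text notes, the same effect can be obtained with a single memory and bank switching; the swap of `input_idx` here plays the role of the bank switch.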
  • the image input unit 11 acquires a still image from an image obtained by the video camera 26 or a digital motion image stored in the disk unit 24 .
  • the image input unit 11 photographs a still image using a digital still camera (not shown), and stores the still image in the input image storage unit 14 .
  • In change detection step 2 , the input image stored in the input image storage unit 14 is compared with the newest change image stored in the newest change image storage unit 15 to detect a current image change.
  • FIG. 5 shows an example of the contents of processing executed by the CPU 21 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3 in change detection step 2 .
  • The pixel value difference (absolute value) between each pair of corresponding pixels is calculated using the input image and the newest change image. If the total pixel value difference of the entire image is equal to or larger than a given value, it is determined that the input image has changed from the newest change image. That is, it is determined that an image change has occurred.
  • pixel value differences are calculated by processing the respective pixels of an image in the raster scan order, as shown in FIG. 5.
  • the total pixel value difference during processing will be referred to as a total change amount.
  • As is apparent, pixels may be processed concurrently.
  • In step S 101 , the total change amount is initialized to 0.
  • In step S 102 , the pixel value difference between the currently processed pixels (to be referred to as subject pixels hereinafter) is calculated.
  • When the input image is a binary image, the pixel value difference is “1” if the subject pixels have different values, and “0” if the subject pixels have the same value.
  • When the input image is a multilevel (grayscale) image, the pixel value difference is the absolute value of the difference between the subject pixel values.
  • When the input image is a color image, the pixel value difference is obtained by calculating the absolute values of the differences between the R, G, and B values of the subject pixels, and calculating the sum of these absolute values.
  • In step S 103 , the pixel value difference calculated in step S 102 is added to the total change amount.
  • In step S 104 , it is checked whether the total change amount calculated in step S 103 is larger than a predetermined value (threshold value). If YES in step S 104 , it is determined that an image change has occurred, and the processing shown in FIG. 6 is terminated. If NO in step S 104 , the flow advances to step S 105 .
  • In step S 105 , it is checked whether all pixels have been processed. If YES in step S 105 , it is determined that no image change has occurred, and the processing in FIG. 6 is terminated. If NO in step S 105 , the flow advances to step S 106 . In step S 106 , processing is shifted to the next pixel (the next pixel is set as a subject pixel), and the flow returns to step S 102 .
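A minimal sketch of the FIG. 6 processing, assuming images are flat Python lists of grayscale values or RGB tuples; the function names are illustrative, not from the patent.

```python
# Sketch of steps S101-S106: accumulate pixel value differences in
# raster-scan order and exit early once the total change amount exceeds
# the threshold.
def pixel_difference(a, b):
    """Absolute difference for scalar pixels; for RGB tuples, the sum of
    the per-channel absolute differences, as described for color images."""
    if isinstance(a, tuple):
        return sum(abs(x - y) for x, y in zip(a, b))
    return abs(a - b)

def image_changed(input_image, newest_change_image, threshold):
    total_change = 0                                    # step S101
    for a, b in zip(input_image, newest_change_image):
        total_change += pixel_difference(a, b)          # steps S102-S103
        if total_change > threshold:                    # step S104
            return True                                 # image change detected
    return False                                        # step S105: all pixels processed
```

The early return mirrors the flow chart: once the threshold is crossed, the remaining pixels need not be examined.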
  • In newest change image storage step 3 , when an image change is detected in change detection step 2 , the input image stored in the input image storage unit 14 in image input step 1 is stored as a newest change image in the newest change image storage unit 15 .
  • In image output step 4 , when an image change is detected in change detection step 2 , the input image stored in the input image storage unit 14 is output and transmitted to a communication path (a WAN or LAN) via the communication device 28 or the like, output to the disk unit 24 or the like to be stored, or output to an image display device (not shown) to be displayed.
  • Even a moderate image change relative to an image which undergoes no change (e.g., a background image) can be reliably detected.
  • An image change can therefore be easily and reliably detected at a sufficiently high speed without any dedicated device.
  • When an image change is detected, the corresponding image data is output, so the image data amount can be reduced in outputting the image data. For example, only an image corresponding to a detected change is output to a communication path, a storage unit, or a display device to reduce the transmission, storage, or display data amount.
  • FIG. 16 shows an outline of the motion image processing method of the present invention.
  • the second embodiment is another embodiment of the processing in change detection step 2 in the first embodiment.
  • FIG. 7 shows an example of the contents of processing executed by the CPU 21 in change detection step 2 in this embodiment in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3.
  • the pixel value difference (absolute value) between each pair of corresponding pixels is calculated using an input image and a newest change image. If the value of each pixel value difference is equal to or larger than a given value (first threshold value), it is determined that the subject pixel has changed (a pixel having undergone a change will be referred to as a change pixel hereinafter). If the number of change pixels of the entire input image is equal to or larger than a given value (second threshold value), it is determined that the input image has changed as compared with the newest change image. That is, it is determined that an image change has occurred.
  • the above determination of a change pixel is performed by processing the respective pixels of an image in the raster scan order. Note that the total number of change pixels during processing will be referred to as the number of change pixels. As is apparent, pixels may be processed concurrently.
  • In step S 201 , the number of change pixels is initialized to 0.
  • In step S 202 , the pixel value difference between the subject pixels which are currently processed is calculated by the same method as in the first embodiment.
  • In step S 203 , it is checked whether the pixel value difference calculated in step S 202 is larger than a predetermined value (first threshold value). If YES in step S 203 , it is determined that the subject pixel has undergone a change, and the flow advances to step S 204 to increase the number of change pixels by one. If NO in step S 203 , it is determined that the subject pixel has undergone no change, and the flow jumps to step S 206 .
  • In step S 205 , after step S 204 , it is checked whether the number of change pixels calculated in step S 204 is larger than a predetermined value (second threshold value). If YES in step S 205 , it is determined that an image change has occurred, and the processing in FIG. 7 is terminated. If NO in step S 205 , the flow advances to step S 206 .
  • In step S 206 , it is checked whether all pixels have been processed. If YES in step S 206 , it is determined that no image change has occurred, and the processing in FIG. 7 is terminated. If NO in step S 206 , the flow advances to step S 207 . In step S 207 , processing is shifted to the next pixel (the next pixel is set as a subject pixel), and the flow returns to step S 202 .
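Under the same flat-list image assumption as before, the two-threshold scheme of FIG. 7 might be sketched as follows (illustrative function name, grayscale pixels):

```python
# Sketch of steps S201-S207: a pixel is a change pixel when its
# difference exceeds the first threshold; an image change is declared
# when the number of change pixels exceeds the second threshold.
def image_changed_by_pixel_count(input_image, newest_change_image,
                                 pixel_threshold, count_threshold):
    change_pixels = 0                                  # step S201
    for a, b in zip(input_image, newest_change_image):
        if abs(a - b) > pixel_threshold:               # steps S202-S203
            change_pixels += 1                         # step S204
            if change_pixels > count_threshold:        # step S205
                return True                            # image change detected
    return False                                       # step S206: all pixels processed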
  • an image change can be easily and reliably detected at a sufficiently high speed even without any dedicated device.
  • the corresponding image data is output to a communication path, a storage unit, or a display device. With this operation, the transmission, storage, or display data amount can be reduced.
  • the third embodiment is still another embodiment of the processing in change detection step 2 in the first embodiment described above.
  • FIG. 8 shows an example of the contents of processing executed by the CPU 21 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3 in change detection step 2 in this embodiment.
  • an input image and a newest change image are divided into a plurality of blocks, and an image change is detected in units of blocks. Similar to the first embodiment, the pixel value difference between each pair of corresponding pixels is calculated using the input image and the newest change image. If the total pixel value difference in a block is larger than a given value (first threshold value), it is determined that an image change has occurred in that block (such a block will be referred to as a change block hereinafter). If the number of change blocks in the entire image is larger than a given value (second threshold value), it is determined that an image change has occurred.
  • pixel value differences in each block are calculated by processing the respective pixels in the raster scan order, as shown in FIG. 9A. Assume also that the respective blocks are processed in the raster scan order, as shown in FIG. 9B. Note that the total pixel value difference in a block which is being processed will be referred to as a total change amount; and the total number of change blocks, the number of change blocks. As is apparent, pixels or blocks may be processed concurrently.
  • In step S 301 , the number of change blocks is initialized to 0.
  • In step S 302 , the total change amount is initialized to 0.
  • In step S 303 , the pixel value difference between the subject pixels is calculated by the same method as that in the first embodiment.
  • In step S 304 , the pixel value difference calculated in step S 303 is added to the total change amount.
  • In step S 305 , it is checked whether the total change amount calculated in step S 304 is larger than a predetermined value (first threshold value). If YES in step S 305 , it is determined that a change has occurred in the subject block, and the flow advances to step S 307 to add one to the number of change blocks. If NO in step S 305 , the flow advances to step S 306 .
  • In step S 306 , it is checked whether all the pixels in the subject block have been processed. If YES in step S 306 , it is determined that no change has occurred in the subject block, and the flow advances to step S 311 to shift processing to the next block (the next block is set as a subject block) and returns to step S 302 . If NO in step S 306 , the flow advances to step S 310 to shift processing to the next pixel (the next pixel is set as a subject pixel) and returns to step S 303 .
  • In step S 308 , following step S 307 , it is checked whether the number of change blocks calculated in step S 307 is larger than a predetermined value (second threshold value). If YES in step S 308 , it is determined that an image change has occurred, and the processing in FIG. 8 is terminated. If NO in step S 308 , the flow advances to step S 309 .
  • In step S 309 , it is checked whether all blocks have been processed. If YES in step S 309 , it is determined that no image change has occurred, and the processing in FIG. 8 is terminated. If NO in step S 309 , the flow advances to step S 311 to shift processing to the next block, and the flow returns to step S 302 .
  • The purpose of this processing is to detect the presence/absence of an image change. For this reason, when the number of change blocks exceeds the second threshold value, the processing is terminated. However, as shown in the flow chart of FIG. 10, processing may be continued up to the last block.
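Assuming each image is pre-divided into blocks (represented here as lists of flat pixel lists, an assumption for the example), the block-based scheme of FIG. 8 might look like:

```python
# Sketch of steps S301-S311: a block becomes a change block when its
# total change amount exceeds the first threshold; an image change is
# declared when the number of change blocks exceeds the second threshold.
def image_changed_by_blocks(input_blocks, reference_blocks,
                            block_threshold, block_count_threshold):
    change_blocks = 0                                   # step S301
    for blk_a, blk_b in zip(input_blocks, reference_blocks):
        total_change = 0                                # step S302
        for a, b in zip(blk_a, blk_b):
            total_change += abs(a - b)                  # steps S303-S304
            if total_change > block_threshold:          # step S305
                change_blocks += 1                      # step S307
                break                                   # remaining pixels of the block are skipped
        if change_blocks > block_count_threshold:       # step S308
            return True                                 # image change detected
    return False                                        # step S309: all blocks processed
```

Requiring the change to be concentrated in whole blocks, rather than spread thinly over the image, is what lets only the changed blocks be transmitted, stored, or displayed.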
  • the fourth embodiment is still another embodiment of the processing in change detection step 2 in the first embodiment described above.
  • FIG. 11 shows an example of the contents of processing executed by the CPU 21 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3 in change detection step 2 in this embodiment.
  • an input image and a newest change image are divided into a plurality of blocks, and an image change is detected in units of blocks.
  • the pixel value difference between each pair of corresponding pixels is calculated using the input image and the newest change image. If the value of each pixel value difference is larger than a given value (first threshold value), it is determined that the subject pixels have undergone a change.
  • If the number of change pixels in each block is larger than a given value (second threshold value), it is determined that the corresponding block has undergone a change. If the number of change blocks in the entire input image is larger than a given value (third threshold value), it is determined that the input image has changed as compared with the newest change image. That is, it is determined that an image change has occurred.
  • pixel value differences in each block are calculated by processing the respective pixels in the raster scan order, as shown in FIG. 9A, and the respective blocks are also processed in the raster scan order, as shown in FIG. 9B.
  • the total number of change pixels in a block which is being processed will be referred to as the number of change pixels; and the total number of change blocks, the number of change blocks.
  • pixels or blocks may be processed concurrently.
  • In step S 401 , the number of change blocks is initialized to 0.
  • In step S 402 , the number of change pixels is initialized to 0.
  • In step S 403 , the pixel value difference between the subject pixels is calculated by the same method as that in the first embodiment.
  • In step S 404 , it is checked whether the pixel value difference calculated in step S 403 is larger than a predetermined value (first threshold value). If YES in step S 404 , it is determined that the subject pixels have undergone a change, and the flow advances to step S 405 to add one to the number of change pixels. If NO in step S 404 , it is determined that the subject pixels have undergone no change, and the flow jumps to step S 407 .
  • In step S 406 , following step S 405 , it is checked whether the number of change pixels calculated in step S 405 is larger than a predetermined value (second threshold value). If YES in step S 406 , it is determined that the subject block has undergone a change, and the flow jumps to step S 408 to add one to the number of change blocks. If NO in step S 406 , the flow advances to step S 407 .
  • In step S 407 , it is checked whether all the pixels in the subject block have been processed. If YES in step S 407 , it is determined that the subject block has undergone no change, and the flow advances to step S 412 to shift processing to the next block (the next block is set to be a subject block) and returns to step S 402 . If NO in step S 407 , the flow advances to step S 411 to shift processing to the next pixel (the next pixel is set to be a subject pixel) and returns to step S 403 .
  • In step S 409 , following step S 408 , it is checked whether the number of change blocks calculated in step S 408 is larger than a predetermined value (third threshold value). If YES in step S 409 , it is determined that an image change has occurred, and the processing in FIG. 11 is terminated. If NO in step S 409 , the flow advances to step S 410 .
  • In step S 410 , it is checked whether all blocks have been processed. If YES in step S 410 , it is determined that no image change has occurred, and the processing in FIG. 11 is terminated. If NO in step S 410 , the flow advances to step S 412 to shift processing to the next block, and the flow returns to step S 402 .
  • the purpose of this processing is to detect the presence/absence of an image change, as in the third embodiment. For this reason, when the number of change blocks exceeds the third threshold value, the processing is terminated. However, as shown in the flow chart of FIG. 12, the processing can be continued up to the last block. By transmitting, storing, or displaying only detected change blocks, the transmission, storage, or display data amount can be reduced as in the third embodiment.
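A sketch of the three-threshold scheme of FIG. 11 under the same assumed block representation (lists of flat pixel lists; illustrative function name):

```python
# Sketch of steps S401-S412 with three thresholds: per-pixel difference,
# per-block change-pixel count, and image-wide change-block count.
def image_changed_three_thresholds(input_blocks, reference_blocks,
                                   t_pixel, t_pixels_per_block, t_blocks):
    change_blocks = 0                                    # step S401
    for blk_a, blk_b in zip(input_blocks, reference_blocks):
        change_pixels = 0                                # step S402
        for a, b in zip(blk_a, blk_b):
            if abs(a - b) > t_pixel:                     # steps S403-S404
                change_pixels += 1                       # step S405
                if change_pixels > t_pixels_per_block:   # step S406
                    change_blocks += 1                   # step S408
                    break                                # block decided; skip its remaining pixels
        if change_blocks > t_blocks:                     # step S409
            return True                                  # image change detected
    return False                                         # step S410: all blocks processed
```

Compared with the third embodiment, the extra per-pixel threshold means a block is flagged only when enough individual pixels change clearly, not when many pixels change slightly.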
  • Differential image forming step 5 is inserted between image input step 1 and change detection step 2 in the procedure in FIG. 1 in the first to fourth embodiments.
  • In differential image forming step 5, differential processing using, e.g., a Sobel operator, is performed for an input image input in image input step 1 (an image having undergone differential processing will be referred to as a differential image hereinafter).
  • The other processing steps 1 to 4 are performed in the same manner as in the first to fourth embodiments.
  • In change detection step 2, the same processing as that in the first to fourth embodiments is performed not for an input image but for a differential image.
  • In newest change image storage step 3, a differential image is stored instead of an input image.
  • the apparatus can be designed not to detect an image change due to a change in illumination.
  • a similar effect can be expected in the first to fourth embodiments as well depending on a method of setting threshold values. In this embodiment, however, there is no need to consider a change in illumination in setting a threshold value.
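The differential preprocessing can be illustrated with a standard 3×3 Sobel operator. This is a sketch under our own conventions (the patent names the Sobel operator but not this exact magnitude formula, and border handling here is simply zero): because adding a constant brightness to every pixel leaves the gradients unchanged, change detection on the differential image is insensitive to uniform illumination changes.

```python
def sobel_magnitude(img):
    """Approximate gradient magnitude (|Gx| + |Gy|) using 3x3 Sobel
    operators; border pixels are left at zero for simplicity."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = (img[y-1][x+1] + 2*img[y][x+1] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y][x-1] - img[y+1][x-1])
            gy = (img[y+1][x-1] + 2*img[y+1][x] + img[y+1][x+1]
                  - img[y-1][x-1] - 2*img[y-1][x] - img[y-1][x+1])
            out[y][x] = abs(gx) + abs(gy)
    return out
```

Running the change detection of the earlier embodiments on `sobel_magnitude(input_image)` rather than on `input_image` realizes the fifth embodiment's behavior.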
  • FIG. 14 is a block diagram showing an example of a motion image processing apparatus for performing the processing in FIG. 13. Note that the arrangement shown in FIG. 14 can be realized by the hardware shown in FIG. 3, as in the first to fourth embodiments.
  • a differential image forming unit 17 in FIG. 14 can be constituted by the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3, and the RAM 23 used as a work memory or the disk unit 24 .
  • the differential image forming unit 17 performs differential processing for an input image stored in an input image storage unit 14 , and stores the resultant differential image in a differential image storage unit 18 .
  • the differential image storage unit 18 can be constituted by the RAM 23 or the disk unit 24 in FIG. 3.
  • a change detection unit 12 performs the same processing as that in the first to fourth embodiments with respect to a differential image stored in the differential image storage unit 18 instead of an input image stored in the input image storage unit 14 .
  • a newest change image storage unit 15 does not store an input image stored in the input image storage unit 14 as a newest change image but stores a differential image stored in the differential image storage unit 18 .
  • the arrangement shown in FIG. 14 may be replaced with the arrangement shown in FIG. 15, as in the first to fourth embodiments. More specifically, the operation mode of the differential image storage unit 18 and the newest change image storage unit 15 in FIG. 14 can be changed such that two image storage units 16 a and 16 b are selectively used upon a switching operation, as shown in FIG. 15. With this arrangement, the processing of copying a differential image from the differential image storage unit 18 to the newest change image storage unit 15 , which is performed in the case shown in FIG. 14, can be omitted.
  • FIG. 17 is a flow chart showing the contents of a motion image processing method according to the sixth embodiment of the present invention.
  • In image input step 101, a still image is extracted from an image obtained by performing a photographing operation with a video camera or the like (not shown), a digital motion image stored in a disk unit or the like (not shown), or the like, and the extracted image is input (this input still image will be referred to as an input image hereinafter).
  • In change component extraction step 102, the input image input in image input step 101 and an image obtained upon detection of the newest change of several image changes in the past are compared with each other to detect and extract current image change components.
  • Image change components are extracted by using a newest change image for the following reason.
  • In the method of detecting a change area on the basis of a difference between a background image photographed in advance and a current photographed image, for example, a background image having no moving object must be prepared.
  • a changeless image such as a background image need not be prepared as a reference image to be compared with the input image, and even a small change in a moderate image change can be reliably detected and extracted.
  • In erroneous extraction correction step 103, a component, of the change components extracted in change component extraction step 102, which is considered to have been erroneously extracted because of, e.g., flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like, is detected and corrected.
  • In image change discrimination step 104, it is discriminated, on the basis of the change components corrected in erroneous extraction correction step 103, whether an image change has occurred. If YES in step 104, the flow advances to newest change image storage step 105. If NO in step 104, the flow returns to image input step 101.
  • In newest change image storage step 105, the input image from which the change components have been extracted in change component extraction step 102 is stored as a newest change image.
  • In image output step 106, the input image for which “CHANGE” is discriminated in image change discrimination step 104 is output to a communication path (e.g., a WAN or LAN) (not shown) to be transmitted, output to an image storage unit (not shown) to be stored, or output to an image display device to be displayed.
  • an image change can be detected with simple processing, i.e., differential processing.
  • An image change can therefore be detected at a sufficiently high speed without any dedicated device for processing an image of a large data amount.
  • Since an image change is discriminated upon removal of erroneously extracted components of the image change, an image change can be accurately detected.
  • the corresponding image can be output after its data amount is reduced. For example, by outputting only an image for which an image change is detected to a communication path, a storage unit, or a display device, the transmission, storage, or display data amount can be reduced.
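The overall loop of FIG. 17 (steps 101 to 106) might be organized as below. The driver function and its callable parameters `extract`, `correct`, `changed`, and `output` are placeholders of our own standing in for steps 102, 103, 104, and 106; only frames for which a change is discriminated are stored as the newest change image and output.

```python
def monitor(frames, extract, correct, changed, output):
    """Hypothetical driver for the loop in FIG. 17."""
    newest = None
    for frame in frames:                       # image input step 101
        if newest is None:
            newest = frame                     # first frame becomes the reference
            continue
        components = extract(frame, newest)    # change component extraction (102)
        components = correct(components)       # erroneous extraction correction (103)
        if changed(components):                # image change discrimination (104)
            newest = frame                     # newest change image storage (105)
            output(frame)                      # image output (106)
```

Because `output` is called only on change frames, the amount of transmitted, stored, or displayed data is reduced exactly as the text describes.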
  • FIG. 18 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 17.
  • an image input unit 111 acquires a still image from an image obtained by a video camera, a digital motion image stored in a disk unit, or the like, and stores it in an input image storage unit 112 .
  • the image input unit 111 may be constituted by a known image input unit which causes the video capture device 27 to acquire a still image (input image) from an image, obtained by performing a photographing operation with the video camera 26 , and stores the still image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image input unit 111 may be constituted by a known image input unit which reads, via the disk I/O 25 , a still image (input image) from a digital motion image stored in the disk unit 24 , and stores the read image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image input unit 111 may be constituted by a known image input unit which causes a digital still camera to photograph a still image, and stores the image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image input unit 111 may be constituted by a combination of the above plurality of image input units.
  • a change component extraction unit 113 compares an input image stored in the input image storage unit 112 with a newest change image stored in a newest change image storage unit 116 to extract current image change components.
  • An erroneous extraction correction unit 114 detects and corrects a component, of the change components extracted by the change component extraction unit 113 , which is considered as a component which has been erroneously extracted because of, e.g., flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like.
  • An image change discrimination unit 115 receives the image change components which have been corrected by the erroneous extraction correction unit 114 , and discriminates whether an image change has occurred. If the unit 115 discriminates that an image change has occurred, a newest change image storage unit 116 reads out the corresponding input image stored in the input image storage unit 112 , and stores it as a newest change image.
  • Each of the change component extraction unit 113 , the erroneous extraction correction unit 114 , and the image change discrimination unit 115 can be constituted by the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3, and the RAM 23 used as a work memory or the disk unit 24 .
  • each unit may be constituted by a dedicated CPU and a dedicated RAM or disk unit, or dedicated hardware.
  • an image output unit 117 outputs the input image stored in the input image storage unit 112 to a communication path (e.g., a WAN or LAN) to transmit it, to an image storage unit to store it, or to an image display device to display it.
  • the image output unit 117 can be constituted by a known image output unit which transmits an image to a communication path such as a WAN or LAN via the communication device 28 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the image output unit 117 may be constituted by a known image output unit which stores part of a digital motion image in the disk unit 24 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 .
  • the disk unit 24 may be a unit which can be used via a network such as a LAN.
  • the image output unit 117 may be constituted by a known image output unit which continuously displays still images on the same portion on a display to display a motion image. As is apparent, the image output unit 117 may be constituted by a combination of the above plurality of image output units, and may be used by properly selecting the control programs for these units.
  • the input image storage unit 112 and the newest change image storage unit 116 may be constituted by the RAM 23 or the disk unit 24 .
  • each of the storage units 112 and 116 may be constituted by a dedicated storage device.
  • the input image storage unit 112 and the newest change image storage unit 116 may be constituted by two image storage units 118 a and 118 b which are selectively used upon a switching operation, as shown in FIG. 19.
  • the roles of the image storage units 118 a and 118 b are switched in newest change image storage step 105 in FIG. 17.
  • the image storage unit 118 a is switched to serve as a storage unit for storing a newest change image
  • the image storage unit 118 b is switched to serve as a storage unit for storing an input image.
  • the input image stored in the image storage unit 118 a when the image change is detected is used as a newest change image afterward, and the processing of copying an input image from the input image storage unit 112 to the newest change image storage unit 116 in the case shown in FIG. 18 can be omitted.
  • image storage units 118 a and 118 b need not be used as separate memories, but the above function can be realized by one memory with a bank switching operation.
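The switching arrangement of FIG. 19 can be sketched as a two-bank buffer whose roles are swapped instead of copying an image from one storage unit to the other. The class and method names below are ours, not the patent's:

```python
class DoubleBuffer:
    """Two storage banks used alternately: the bank holding the input
    image that triggered a change becomes the newest-change bank by a
    role swap, with no copy (the bank-switching operation in the text)."""

    def __init__(self):
        self.banks = [None, None]
        self.input_bank = 0            # index of the bank holding the input image

    def store_input(self, image):
        self.banks[self.input_bank] = image

    def newest_change(self):
        return self.banks[1 - self.input_bank]

    def commit_change(self):
        # called when an image change is discriminated (step 105)
        self.input_bank = 1 - self.input_bank
```

As noted above, the two banks need not be separate memories; one memory with bank switching gives the same behavior.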
  • the image input unit 111 acquires a still image from an image obtained by the video camera 26 or a digital motion image stored in the disk unit 24 .
  • the image input unit 111 photographs a still image using a digital still camera (not shown), and stores the still image in the input image storage unit 112 .
  • In change component extraction step 102, the input image stored in the input image storage unit 112 in the above manner is compared with the newest change image stored in the newest change image storage unit 116 to detect and extract current image change components.
  • the processing in change component extraction step 102 is performed by calculating the pixel value difference (absolute value) between each pair of corresponding pixels (pixels at identical positions in the respective images) using the input image and the newest change image, and by extracting a pixel value difference exceeding a given threshold value as an image change component.
  • the following description will be made on the assumption that each pixel value difference is calculated by processing the respective pixels of an image in the raster scan order, as indicated by the arrow in FIG. 20. As is apparent, however, pixels may be processed concurrently.
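A minimal sketch of this extraction step follows; the function name and the representation of images as lists of grayscale rows are our own, and differences not exceeding threshold value 1 are simply zeroed out.

```python
def extract_change_components(curr, newest, t1):
    """Absolute per-pixel differences between corresponding pixels;
    values not larger than t1 are dropped (set to zero)."""
    out = []
    for crow, nrow in zip(curr, newest):
        out.append([abs(c - n) if abs(c - n) > t1 else 0
                    for c, n in zip(crow, nrow)])
    return out
```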
  • In erroneous extraction correction step 103, a component, of the change components extracted by the change component extraction unit 113 in change component extraction step 102, which is considered to have been erroneously extracted because of, e.g., flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like, is detected and corrected, thereby generating corrected image change components.
  • the sum total of corrected image change components is obtained.
  • If condition (1) is satisfied, it is determined that the image change component has been erroneously extracted, and its value is not added to the total change amount (in this embodiment, the sum total of pixel value differences during processing (corrected image change components) will be referred to as the total change amount; the total change amount is calculated by processing the respective pixels in the raster scan order, as indicated by the arrow in FIG. 20, in the same manner as the above calculation of pixel value differences).
  • If condition (2) is satisfied, the pixel value difference (absolute value) between the corresponding pixels is changed to the mean value of the pixel value differences (absolute values) at the positions of the surrounding eight pixels, and the mean value is added to the total change amount. If neither condition (1) nor condition (2) is satisfied, the pixel value difference (absolute value) between the corresponding pixels itself is added to the total change amount.
  • corrected image change components are defined such that the pixel value difference (absolute value) between pixels corresponding to condition (1), of two conditions (1) and (2), is set to be 0, the pixel value difference (absolute value) between pixels corresponding to condition (2) is set to be the mean value of pixel value differences (absolute values) at the positions of eight pixels around a subject pixel, and a pixel value difference between pixels corresponding to neither condition (1) nor condition (2) is a pixel value difference (absolute value) extracted in change component extraction step 102 .
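Conditions (1) and (2) amount to zeroing isolated above-threshold pixels and filling isolated below-threshold pixels with the mean of their eight neighbours. The sketch below is our own formulation over a precomputed difference map; out-of-image neighbours are treated as below threshold, matching the border handling described later for FIG. 21.

```python
def correct_components(diff, t1):
    """Apply conditions (1) and (2) to a map of absolute pixel value
    differences: an isolated above-threshold pixel is treated as noise
    (set to 0); a below-threshold pixel whose eight neighbours are all
    above threshold is replaced by the mean of those neighbours."""
    h, w = len(diff), len(diff[0])

    def above(y, x):
        return 0 <= y < h and 0 <= x < w and diff[y][x] > t1

    out = [row[:] for row in diff]
    for y in range(h):
        for x in range(w):
            nbrs = [(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                    if (dy, dx) != (0, 0)]
            n_above = sum(above(ny, nx) for ny, nx in nbrs)
            if above(y, x) and n_above == 0:           # condition (1): black isolated point
                out[y][x] = 0
            elif not above(y, x) and n_above == 8:     # condition (2): white isolated point
                out[y][x] = sum(diff[ny][nx] for ny, nx in nbrs) / 8
    return out
```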
  • In image change discrimination step 104, an image change is discriminated on the basis of the sum total (total change amount) of the image change components corrected by the erroneous extraction correction unit 114 in erroneous extraction correction step 103. More specifically, the above total change amount is compared with a predetermined value (threshold value 2). If the total change amount is larger than threshold value 2, it is discriminated that “THERE IS CHANGE”. If the total change amount is not larger than threshold value 2 after all the pixels in the frame are processed, it is discriminated that “THERE IS NO CHANGE” in the frame.
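The discrimination of step 104, including the early termination used in FIG. 21 (step S 512), reduces to a running sum compared against threshold value 2. A sketch with our own naming:

```python
def there_is_change(corrected, t2):
    """'THERE IS CHANGE' as soon as the running sum of corrected image
    change components exceeds threshold value 2; otherwise 'THERE IS
    NO CHANGE' after the whole frame has been processed."""
    total = 0
    for row in corrected:            # raster scan order, as in FIG. 20
        for d in row:
            total += d
            if total > t2:           # early exit, as in step S 512
                return True
    return False
```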
  • the flow chart of FIG. 21 shows an example of the contents of processing executed by the CPU 21 in change component extraction step 102 , erroneous extraction correction step 103 , and image change discrimination step 104 in FIG. 17 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3.
  • the pixel value difference (absolute value) between each pair of corresponding pixels of an input image and a newest change image is calculated.
  • If the total value (total change amount) exceeds another given value (threshold value 2), it is discriminated that the input image has changed as compared with the newest change image, i.e., “THERE IS CHANGE”. If the total change amount of the entire image is equal to or smaller than threshold value 2, it is discriminated that “THERE IS NO CHANGE”.
  • In step S 501, initialization processing required for erroneous extraction correction processing using a nine-pixel area (3 pixels (vertical) × 3 pixels (horizontal)) is performed.
  • pixel value differences corresponding to eight pixels around a subject pixel position for determination of erroneous extraction must be obtained in advance.
  • FIG. 20 shows this state.
  • a position A is a subject position for determination of erroneous extraction in FIG. 20.
  • a nine-pixel area (3 pixels (vertical) × 3 pixels (horizontal)) indicated by the hatched portion includes pixel positions (to be referred to as a noise discrimination subject area hereinafter) as reference positions for discrimination of erroneous extraction concerning the subject position A.
  • processing is performed in the raster scan order. For this reason, in discriminating erroneous extraction concerning the subject position A, a pixel value difference (absolute value) at a pixel position B required for subsequent discrimination is calculated in advance.
  • The initialization processing in step S 501 will be described in detail with reference to the flow chart of FIG. 22.
  • In step S 1001, the total change amount is set to zero.
  • In step S 1002, the difference in the number of pixels ((x+2), where x is the number of pixels in the main scanning direction) between the pixel value difference calculation position (position B in FIG. 20) and the subject position of noise discrimination (position A in FIG. 20) is set in a first temporary buffer (not shown).
  • In step S 1003, the memory address of the output destination to which a pixel value difference calculation result is to be output and the memory address of the output destination to which a discrimination result indicating whether the calculated pixel value difference is larger than predetermined threshold value 1 is to be output are respectively set in second and third temporary buffers (not shown).
  • the output destinations of the pixel value difference and the comparison result indicating whether the pixel value difference is larger than threshold value 1 are independently ensured in free areas in the RAM 23 .
  • In step S 1004, the pixel position where a pixel value difference is to be calculated is set to the head pixel position (the head position of a frame) in the raster scan order in a fourth temporary buffer (not shown).
  • In step S 1005, a pixel value difference at the pixel position set in the fourth temporary buffer is calculated. The calculated pixel value difference is output to the pixel value difference output destination (a free area in the RAM 23 ) held in the second temporary buffer.
  • the pixel value difference is, for example, the absolute value of the difference between subject pixels. If the input image is a color image, the pixel value difference is, for example, the sum of the absolute values of the differences between R, G, and B values of subject pixels.
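For instance, the pixel value difference for grayscale and for R, G, B color pixels might be computed as follows (a sketch; dispatching on a tuple to represent a color pixel is our own convention):

```python
def pixel_diff(p, q):
    """Grayscale pixels: absolute difference. Color pixels, given as
    (R, G, B) tuples: sum of the per-channel absolute differences."""
    if isinstance(p, tuple):
        return sum(abs(a - b) for a, b in zip(p, q))
    return abs(p - q)
```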
  • In step S 1006, it is checked whether the pixel value difference calculated in step S 1005 is larger than predetermined threshold value 1. If YES in step S 1006, the flow advances to step S 1007. If NO in step S 1006, the flow advances to step S 1008.
  • In step S 1007, as a signal indicating that the pixel value difference at the subject pixel position is larger than predetermined threshold value 1, “1” (also called a black pixel) is output to the bit map data holding area (the output destination of the discrimination result indicating whether the pixel value difference is larger than threshold value 1) corresponding to the subject pixel position set in the third temporary buffer.
  • the flow then advances to step S 1009 .
  • In step S 1008, as a signal indicating that the pixel value difference at the subject pixel position is equal to or smaller than predetermined threshold value 1, “0” (also called a white pixel) is output to the bit map data holding area corresponding to the subject pixel position.
  • the flow then advances to step S 1009 .
  • the above bit map data holding area is a bit map image data area, having the same size as that of the image for which change detection is being performed, in which the binary image data (black or white pixel data) are held.
  • In step S 1009, the pixel position set in the fourth temporary buffer is updated to shift the pixel position, at which a pixel value difference is to be calculated, one pixel ahead.
  • In step S 1010, the contents set in the second temporary buffer are updated to shift the output destination of the pixel value difference calculation result one pixel ahead.
  • Similarly, the contents set in the third temporary buffer are updated to shift, one pixel ahead, the output destination to which the discrimination result indicating whether the calculated pixel value difference is larger than predetermined threshold value 1 is to be output.
  • In step S 1011, the value (the difference in the number of pixels between the pixel value difference calculation position and the subject position of noise discrimination) set in the first temporary buffer is decreased by one.
  • In step S 1012, it is checked whether the value set in the first temporary buffer is zero. With this operation, it is checked whether all the pixels required for initialization have been processed.
  • If it is discriminated that the value in the first temporary buffer is zero, it is determined that all the pixels required for initialization have been processed. The initialization processing in step S 501 in FIG. 21 is then terminated, and the flow returns to the initial processing routine. If the value in the first temporary buffer is not zero, the flow returns to step S 1005 to continue the initialization processing.
  • In step S 502 in FIG. 21, a pixel value difference at the pixel position (subject position of noise discrimination) set in the fourth temporary buffer is calculated.
  • the calculated pixel value difference is output to the pixel value difference output destination held in the second temporary buffer.
  • In step S 503, it is checked whether the pixel value difference calculated in step S 502 is larger than predetermined threshold value 1. If YES in step S 503, the flow advances to step S 504. If NO in step S 503, the flow advances to step S 505.
  • In step S 504, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S 502 is larger than predetermined threshold value 1, data of “1” (black pixel) is output to the bit map data holding area corresponding to the subject pixel position set in the third temporary buffer. The flow then advances to step S 506.
  • In step S 505, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S 502 is equal to or smaller than predetermined threshold value 1, data of “0” (white pixel) is output to the bit map data holding area corresponding to the subject pixel position. The flow then advances to step S 506.
  • In step S 506, whether the pixel value difference (absolute value) at the subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (1) described above is discriminated. More specifically, whether the pixel value difference at the subject position A exceeds threshold value 1 while the pixel value difference at each of the eight pixel positions adjacent to the position A is equal to or smaller than threshold value 1 is discriminated in the following manner.
  • The binary image data stored in the bit map data holding area corresponding to the nine-pixel area (3 pixels (vertical) × 3 pixels (horizontal)) around the subject position A of noise discrimination (the noise discrimination subject area indicated by hatching in FIG. 20) are referred to in order to check whether the central pixel of the noise discrimination subject area is a black isolated point (only the central pixel is a black pixel, and all the surrounding eight pixels are white pixels). With this operation, the above discrimination processing is performed. If it is discriminated that the central pixel is a black isolated point, the flow advances to step S 513. Otherwise, the flow advances to step S 507.
  • In step S 507, whether the pixel value difference (absolute value) at the subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (2) described above is discriminated. More specifically, whether the pixel value difference at the subject position A is equal to or smaller than threshold value 1 while the pixel value difference at each of the eight pixel positions adjacent to the position A is larger than threshold value 1 is discriminated in the following manner.
  • The binary image data stored in the bit map data holding area corresponding to the nine-pixel area (3 pixels (vertical) × 3 pixels (horizontal)) centered on the subject position A of noise discrimination (the noise discrimination subject area indicated by hatching in FIG. 20) are referred to in order to check whether the central pixel of the noise discrimination subject area is a white isolated point (only the central pixel is a white pixel, and all the surrounding eight pixels are black pixels). With this operation, the above discrimination processing is performed. If it is discriminated that the central pixel is a white isolated point, the flow advances to step S 508. Otherwise, the flow advances to step S 509.
  • In step S 508, the mean value of the pixel value differences (absolute values) at the eight pixel positions around the subject position A in the nine-pixel area (3 pixels (vertical) × 3 pixels (horizontal)) centered on the subject position A of noise discrimination (the noise discrimination subject area indicated by hatching in FIG. 20) is calculated by referring to the area of the RAM 23 in which the pixel value difference calculation results are held. The obtained mean value is then set as the pixel value difference at the subject position A of noise discrimination.
  • In step S 509, whether the pixel value difference (absolute value) at the subject position A of noise discrimination is larger than threshold value 1 is discriminated by referring to the binary image data stored in the bit map data holding area corresponding to the subject position A and checking whether the pixel at the subject position A is a black pixel. If a black pixel is discriminated in step S 509, the flow advances to step S 510. Otherwise, the flow advances to step S 513.
  • In step S 510, the pixel value difference (the above mean value after the processing in step S 508; the value calculated in step S 502 after the processing in step S 509) is added to the total change amount to obtain a new total change amount.
  • In step S 511, it is checked whether the total change amount updated in step S 510 is larger than predetermined threshold value 2. If YES in step S 511, the flow advances to step S 512. If NO in step S 511, the flow advances to step S 513.
  • In step S 512, it is discriminated that an image change has occurred, and the processing in the flow chart of FIG. 21 is terminated.
  • In step S 513, it is checked whether all pixels have been processed. If YES in step S 513, the flow advances to step S 514 to discriminate that no image change has occurred, and the processing in the flow chart of FIG. 21 is terminated. If NO in step S 513, the flow advances to step S 515.
  • In step S 515, processing is shifted to the next pixel. That is, both the pixel value difference calculation position and the subject position of noise discrimination are shifted to the next pixels to set a new subject pixel for noise discrimination. The flow then returns to step S 502.
  • a nine-pixel area (3 pixels (vertical) × 3 pixels (horizontal)) centered on a subject position of noise discrimination may include a portion where no corresponding binary image data or pixel value difference exists, as in a case wherein the subject position is at an end portion (upper, lower, left, or right end) of the image. In this case, such a portion is handled as white pixel data or “0”.
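The black/white isolated-point tests of steps S 506 and S 507, with out-of-image positions counted as white pixels as just described, can be sketched as follows (our own naming; the bitmap is a list of rows of 0/1 values):

```python
def isolated_point(bitmap, y, x):
    """Classify the noise-discrimination subject at (y, x): 'black' when
    only the centre of its 3x3 area is 1, 'white' when only the centre
    is 0; positions outside the image count as white (0)."""
    h, w = len(bitmap), len(bitmap[0])

    def bit(j, i):
        return bitmap[j][i] if 0 <= j < h and 0 <= i < w else 0

    nbrs = [bit(y + dy, x + dx) for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)]
    if bit(y, x) == 1 and sum(nbrs) == 0:
        return "black"      # condition (1): black isolated point
    if bit(y, x) == 0 and sum(nbrs) == 8:
        return "white"      # condition (2): white isolated point
    return None
```

Note that a pixel on the image border can never be a white isolated point under this convention, since at least one of its eight neighbours is an out-of-image white pixel.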
  • the input image input in image input step 101 and stored in the input image storage unit 112 is stored as a newest change image in the newest change image storage unit 116 in newest change image storage step 105 .
  • the input image stored in the input image storage unit 112 is output to a communication path (a WAN or LAN) via the communication device 28 or the like to be transmitted, output to the disk unit 24 to be stored, or output to an image display device (not shown) to be displayed in image output step 106 .
  • fine change components dispersing randomly are extracted and removed, as erroneously extracted image change components, from change components extracted by comparison between a subject image whose motion is to be detected and a newest change image.
  • On the basis of the change components corrected in this manner, it can be determined whether the above image has undergone a change. Therefore, an image change can be detected without being affected by, e.g., change components produced randomly in the image because of flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like. That is, an image change can be accurately detected.
  • the seventh embodiment is another embodiment of the processing in change component extraction step 102 , erroneous extraction correction step 103 , and image change discrimination step 104 in the sixth embodiment.
  • FIG. 23 is a flow chart showing an example of the contents of processing executed by the CPU 21 in change component extraction step 102 , erroneous extraction correction step 103 , and image change discrimination step 104 in the seventh embodiment in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3.
  • the pixel value difference (absolute value) between each pair of corresponding pixels is calculated using an input image and a newest change image. If each pixel value difference is larger than a given value (threshold value 1), it is determined that the corresponding subject pixel has undergone a change (a pixel having undergone a change will be referred to as a change pixel hereinafter). If the number of change pixels in the entire input image is larger than a given value (threshold value 3), it is determined that the input image has changed as compared with the newest change image. That is, it is determined that an image change has occurred.
  • the above change pixel discrimination processing is performed by processing the respective pixels in an image in the raster scan order, as shown in FIG. 20, and the sum total of change pixels during processing will be referred to as the number of change pixels. As is apparent, pixels may be processed concurrently.
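The seventh embodiment's pixel-count criterion (threshold value 1 per pixel, threshold value 3 per frame) might look like this (our own naming; like the earlier flow charts, it terminates early once the count exceeds threshold value 3):

```python
def changed_by_pixel_count(curr, newest, t1, t3):
    """A pixel is a change pixel when its absolute difference exceeds
    t1; the image has changed when the number of change pixels in the
    whole frame exceeds t3."""
    count = 0
    for crow, nrow in zip(curr, newest):     # raster scan order
        for c, n in zip(crow, nrow):
            if abs(c - n) > t1:
                count += 1
                if count > t3:               # early exit
                    return True
    return False
```

Compared with the sixth embodiment, only a count is accumulated rather than a sum of differences, so the per-pixel difference values need not be stored.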
  • In step S 601, initialization processing required for erroneous extraction correction processing using a nine-pixel area (3 pixels (vertical) × 3 pixels (horizontal)) is performed, as in the processing in step S 501 in FIG. 21.
  • In step S 601, processing is performed in the raster scan order, as shown in FIG. 20. For this reason, in discriminating erroneous extraction concerning the subject position A, the pixel value difference (absolute value) at the pixel position B, which is required for the subsequent discrimination processing, is calculated, and it is discriminated in advance whether each calculated pixel value difference is larger than predetermined threshold value 1.
  • the initialization processing in step S 601 can be realized by executing the processing in the flow chart of FIG. 22, as in the initialization processing in step S 501 in FIG. 21.
  • the second temporary buffer used in the sixth embodiment and the memory area for holding a pixel value difference itself are not required. For this reason, in the processing in steps S 1003 , S 1005 , and S 1010 in FIG. 22, processing associated with the second temporary buffer in the sixth embodiment and the memory area for holding a pixel value difference itself is not required.
  • In step S602 in FIG. 23, a pixel value difference at a pixel value difference calculation position (subject position of noise discrimination) is calculated by the same method as in the sixth embodiment.
  • In step S603, it is checked whether the pixel value difference calculated in step S602 is larger than predetermined threshold value 1. If YES in step S603, the flow advances to step S604. If NO in step S603, the flow advances to step S605.
  • In step S604, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S602 is larger than predetermined threshold value 1, data of “1” (black pixel) is output to a bit map data holding area corresponding to the subject pixel position set in a third temporary buffer (not shown). The flow then advances to step S606.
  • In step S605, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S602 is equal to or smaller than predetermined threshold value 1, data of “0” (white pixel) is output to the bit map data holding area corresponding to the subject pixel position. The flow then advances to step S606.
  • In step S606, as in the sixth embodiment, whether the pixel value difference at the subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (1) described above is discriminated by checking whether the subject position A of noise discrimination is a black isolated point in the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) (the noise discrimination subject area indicated by hatching in FIG. 20) centered on the subject position A. If the subject position A is a black isolated point, the flow advances to step S612. Otherwise, the flow advances to step S607.
  • In step S607, as in the sixth embodiment, whether the pixel value difference at the subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (2) described above is discriminated by checking whether the subject position A of noise discrimination is a white isolated point in the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) (the noise discrimination subject area indicated by hatching in FIG. 20) centered on the subject position A. If the subject position A is a white isolated point, the flow advances to step S609. Otherwise, the flow advances to step S608.
  • In step S608, whether the pixel value difference (absolute value) at the subject position A of noise discrimination is larger than threshold value 1 is discriminated by referring to the binary image data stored in the bit map data holding area corresponding to the subject position A and checking whether the pixel at the subject position A is a black pixel. If the subject position A is a black pixel, the flow advances to step S609. Otherwise, the flow advances to step S612.
  • In step S609, the number of change pixels is increased by one to set a new number of change pixels.
  • In step S610, it is checked whether the number of change pixels updated in step S609 is larger than predetermined threshold value 3. If YES in step S610, the flow advances to step S611. If NO in step S610, the flow advances to step S612.
  • In step S611, it is discriminated that an image change has occurred, and the processing in the flow chart of FIG. 23 is terminated.
  • In step S612, it is checked whether all pixels have been processed. If YES in step S612, the flow advances to step S613 to discriminate that no image change has occurred. The processing in the flow chart of FIG. 23 is then terminated. If NO in step S612, the flow advances to step S614.
  • In step S614, processing is shifted to the next pixel. That is, both the pixel value difference calculation position and the subject position of noise discrimination are shifted to the next pixels to set a new subject pixel for noise discrimination. The flow then returns to step S602.
  • Note that a nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) centered on a subject position of noise discrimination may include a portion where no corresponding binary image data exists, as in a case wherein the subject position is at an end portion (upper, lower, left, or right end) of the image. In this case, this portion is handled as white pixel data.
  • With the above processing, an image change can be detected without being affected by, e.g., change components produced randomly in the image because of flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like. That is, an image change can be accurately detected.
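  • The isolated-point discrimination of steps S602 to S614, applied to a precomputed binary change map, can be sketched as follows. This is a hedged pure-Python illustration in the two-pass form described below; the function and variable names are assumptions, and pixels outside the image are treated as white, as described above.

```python
def count_corrected_change_pixels(binary_map):
    """Apply isolated-point correction to a binary change map ("1" = pixel
    value difference exceeds threshold value 1) and return the corrected
    number of change pixels.  Out-of-image pixels count as white (0)."""
    h, w = len(binary_map), len(binary_map[0])

    def neighbors(y, x):
        # the eight pixels surrounding (y, x) in the 3x3 noise discrimination area
        return [binary_map[y + dy][x + dx] if 0 <= y + dy < h and 0 <= x + dx < w else 0
                for dy in (-1, 0, 1) for dx in (-1, 0, 1) if (dy, dx) != (0, 0)]

    count = 0
    for y in range(h):
        for x in range(w):
            nb = neighbors(y, x)
            if binary_map[y][x] == 1 and sum(nb) == 0:
                continue        # black isolated point: discarded as noise (condition (1))
            if binary_map[y][x] == 0 and sum(nb) == 8:
                count += 1      # white isolated point: counted as a change pixel (condition (2))
            elif binary_map[y][x] == 1:
                count += 1      # ordinary change pixel
    return count
```

The resulting count would then be compared against threshold value 3, as in step S610.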
  • As the subject pixel positions, two positions, i.e., “the subject position of noise discrimination” and “the pixel value difference calculation position”, like the positions A and B in FIG. 20, are set, and an appropriate difference ((the number of pixels of an image in the main scanning direction)+two pixels) is set between the two positions.
  • An image change is detected by performing raster scanning once for every still image.
  • However, the present invention is not limited to this.
  • For example, raster scanning may be performed twice for every still image.
  • In the first scanning operation, pixel value differences in one entire image are calculated, and binary image data representing a discrimination result indicating whether each calculated pixel value difference exceeds predetermined threshold value 1 is formed.
  • In the second scanning operation for the same still image, noise discrimination and discrimination of the image as a change image may be performed.
  • In addition, the difference to be set between the two subject pixel positions is not limited to ((the number of pixels of an image in the main scanning direction)+two pixels).
  • For example, a difference of ((the number of pixels of an image in the main scanning direction)×2+three pixels) may be set.
  • In this case, erroneous extraction discrimination can be performed by referring to a 25-pixel area (5 pixels (vertical)×5 pixels (horizontal)) centered on the subject position of noise discrimination. For example, it is discriminated that a change component is erroneously extracted if the central pixel in the 25-pixel area is a white pixel but all the surrounding 24 pixels are black pixels, or if 22 or more pixels of the surrounding 24 pixels have values different from the value of the central pixel.
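  • The 25-pixel rule just described can be expressed compactly. The sketch below assumes the bit map convention used earlier (“1” for black, “0” for white); the function name is an assumption.

```python
def erroneously_extracted_5x5(area):
    """Return True if the central pixel of a 5x5 binary area is judged an
    erroneously extracted change component: a white (0) center whose 24
    surrounding pixels are all black (1), or a center from which 22 or
    more of the 24 surrounding pixels differ in value."""
    center = area[2][2]
    surrounding = [area[y][x] for y in range(5) for x in range(5) if (y, x) != (2, 2)]
    differing = sum(1 for p in surrounding if p != center)
    if center == 0 and all(p == 1 for p in surrounding):
        return True                  # white center surrounded by 24 black pixels
    return differing >= 22           # 22 or more surrounding pixels differ
```

Note that the first condition (24 differing neighbors) is a special case of the second; it is kept separate here only to mirror the wording above.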
  • Differential image forming step 107 is inserted between image input step 101 and change component extraction step 102 in the procedure shown in FIG. 17 in the sixth to eighth embodiments.
  • In differential image forming step 107, differential processing using, e.g., a Sobel operator, is performed for the input image input in image input step 101 (an image having undergone differential processing will be referred to as a differential image hereinafter).
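  • The differential processing with a Sobel operator can be sketched as follows. This is a pure-Python illustration using the standard 3×3 Sobel kernels; approximating the gradient magnitude as |gx|+|gy| and setting border pixels to 0 are assumptions, not details from the embodiment.

```python
# Standard 3x3 Sobel kernels for horizontal and vertical derivatives
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_differential_image(img):
    """Return a differential image holding the Sobel gradient magnitude
    (approximated as |gx| + |gy|) at each interior pixel; border pixels,
    which lack a full 3x3 neighborhood, are set to 0 here."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * img[y - 1 + j][x - 1 + i]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out
```

Because each kernel's coefficients sum to zero, a uniform additive change in illumination leaves the differential image unchanged, which is why comparing differential images suppresses illumination-induced changes.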
  • Other processing steps 101 to 106 are performed in the same manner as in the sixth to eighth embodiments.
  • In change component extraction step 102, the same processing as that in the sixth to eighth embodiments is performed not for an input image but for a differential image.
  • In newest change image storage step 105, a differential image is stored instead of an input image.
  • With this arrangement, the apparatus can be designed not to detect an image change due to a change in illumination.
  • A similar effect can be expected in the sixth to ninth embodiments as well, depending on the method of setting threshold values. In this embodiment, however, there is no need to consider a change in illumination in setting a threshold value.
  • FIG. 25 is a block diagram showing an example of a motion image processing apparatus for performing the processing in FIG. 24. Note that the arrangement shown in FIG. 25 can be realized by the hardware shown in FIG. 3, as in the sixth to ninth embodiments.
  • A differential image forming unit 119 in FIG. 25 can be constituted by the CPU 21, which operates in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3, and the RAM 23 used as a work memory or the disk unit 24.
  • The differential image forming unit 119 performs differential processing for an input image stored in an input image storage unit 112, and stores the resultant differential image in a differential image storage unit 120.
  • The differential image storage unit 120 can be constituted by the RAM 23 or the disk unit 24 in FIG. 3, similar to the input image storage unit 112 and a newest change image storage unit 116 in FIG. 25.
  • The respective storage units 112, 116, and 120 may be constituted by dedicated storage devices.
  • The remaining means in FIG. 25 are the same as those in the sixth to ninth embodiments.
  • A change component extraction unit 113 performs the same processing as that in the sixth to ninth embodiments with respect to a differential image stored in the differential image storage unit 120 instead of an input image stored in the input image storage unit 112.
  • The newest change image storage unit 116 does not store an input image stored in the input image storage unit 112 as a newest change image but stores a differential image stored in the differential image storage unit 120.
  • The arrangement shown in FIG. 25 may be replaced with the arrangement shown in FIG. 26, as in the sixth to ninth embodiments. More specifically, the operation mode of the differential image storage unit 120 and the newest change image storage unit 116 in FIG. 25 can be changed such that two image storage units 118a and 118b are selectively used upon a switching operation, as shown in FIG. 26. With this arrangement, the processing of copying a differential image from the differential image storage unit 120 to the newest change image storage unit 116, which is performed in the case shown in FIG. 25, can be omitted.
  • The first to ninth embodiments described above can be effectively applied to video conference systems and monitoring systems.

Abstract

There is provided an image processing apparatus (method) including an input means (step) for inputting a continuous image signal, a detection means (step) for detecting a frame change in an image by comparing the image signal input by the input means (step) with a reference image signal, and a storage means (step) for updating/storing the image signal input by the input means (step) as the reference image signal in units of frames in accordance with an output from the detection means (step). There is provided an image processing apparatus (method) including an input means (step) for inputting a continuous image signal, a change component extraction means (step) for extracting change components between images by comparing the image signal input by the input means (step) with a reference image signal, an erroneous extraction correction means (step) for detecting and removing an erroneously extracted change component from the change components extracted by the change component extraction means (step), and an image change discrimination means (step) for discriminating an image change in the image signal on the basis of the change component corrected by the erroneous extraction correction means (step).

Description

    BACKGROUND OF THE INVENTION
  • 1. Field of the Invention [0001]
  • The present invention relates to a motion image processing apparatus and method and, more particularly, to an image processing apparatus and method which can reduce transmission, storage, and display data amounts by detecting an image change and can be suitably used for an image transmission apparatus, an image storage device, and a motion image display device. That is, the present invention can be used for a monitoring apparatus for transmitting and storing images in a monitoring area, a video conference apparatus, a general-purpose computer or video server which handles motion images, and the like. [0002]
  • 2. Related Background Art [0003]
  • In the field of monitoring, e.g., road monitoring and night monitoring, apparatuses which reduce the cost for monitoring by transmitting and monitoring images obtained by photographing a monitoring area with a TV camera or storing the images in a storage device have been used. In this case, the transmission and storage data amounts are reduced by detecting an image change and outputting image data only when an image change occurs. [0004]
  • In recent years, image data are exchanged and stored between remote places via a communication network such as a WAN (Wide Area Network) or a LAN (Local Area Network) in many instances, like a video conference, without the intention of monitoring. In addition, with an improvement in the performance of a general-purpose computer, the computer can transmit and store motion images as well as displaying motion images. [0005]
  • Since the amount of image data is very large, a dedicated device is required to perform detection processing of an image change at a speed as high as that of image processing. If there is no such dedicated device, a temporarily stored image must be processed over a long period of time. [0006]
  • Recently, however, image data is transmitted, stored, or displayed by using a general-purpose computer in many instances. For example, video conference systems using general-purpose computers have become popular, as described above. Demands have therefore arisen for a method of performing detection processing of an image change at a speed as high as that of image processing without using any dedicated device. [0007]
  • Various methods associated with the above image change detection have been proposed. A relatively simple method is to detect a change area by obtaining a difference between two images. With this simple method, even a general-purpose computer can execute detection processing at a sufficient processing speed. [0008]
  • As methods of detecting a change area on the basis of a difference between two images, typical methods are a method of detecting a change area on the basis of a difference between a background image photographed in advance and a currently photographed image, and a method of detecting a change area on the basis of a difference between two frames adjacent to each other along the time base. [0009]
  • Based on the latter method of detecting a change area on the basis of a difference between frames, an apparatus has been proposed which records images in a monitoring area on a VTR, wherein an image change is detected and an image is recorded on the VTR only when the image change occurs. [0010]
  • Many limitations are, however, imposed on the use of the method of detecting a change area on the basis of a difference between a background image and a current image. For example, a background image having no moving object must be prepared. [0011]
  • According to the method of detecting a change area on the basis of a difference between frames adjacent to each other along the time base, when an image change is moderate, the difference is small. In this case, therefore, it is difficult to detect a change area. [0012]
  • Furthermore, in the method of detecting a change area on the basis of a difference between images, erroneous detection of a change area tends to occur because of flickering of the light source or mixing of noise in a photoelectric scanning unit or an electronic circuit unit, and the like. [0013]
  • SUMMARY OF THE INVENTION
  • The present invention has been made in consideration of the above situation, and has as its object to easily and reliably detect an image change at a sufficiently high processing speed without using any dedicated device. [0014]
  • In order to achieve the above object, according to an aspect of the present invention, there is provided an image processing apparatus (method) including input means (step) for inputting a continuous image signal, detection means (step) for detecting a frame change in an image by comparing the image signal input by the input means (step) with a reference image signal, and storage means (step) for updating/storing the image signal input by the input means (step) as the reference image signal in units of frames in accordance with an output from the detection means (step). [0015]
  • It is another object of the present invention to accurately detect an image change without being affected by change areas (erroneously detected small areas dispersing randomly, in particular) erroneously detected because of flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like. [0016]
  • In order to achieve the above object, according to another aspect of the present invention, there is provided an image processing apparatus (method) including input means (step) for inputting a continuous image signal, change component extraction means (step) for extracting change components between images by comparing the image signal input by the input means (step) with a reference image signal, erroneous extraction correction means (step) for detecting and removing an erroneously extracted change component from the change components extracted by the change component extraction means (step), and image change discrimination means (step) for discriminating an image change in the image signal on the basis of the change component corrected by the erroneous extraction correction means (step). [0017]
  • Other objects, features and advantages of the invention will become apparent from the following detailed description taken in conjunction with the accompanying drawings. [0018]
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a flow chart showing the contents of an image processing method according to the first to fourth embodiments of the present invention; [0019]
  • FIG. 2 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 1; [0020]
  • FIG. 3 is a block diagram showing an example of the actual hardware arrangement for realizing the functions in FIG. 2; [0021]
  • FIG. 4 is a block diagram showing another example of the arrangement of the motion image processing apparatus for performing the processing in FIG. 1; [0022]
  • FIG. 5 is a view for explaining a pixel processing order in detecting an image change in the first and second embodiments; [0023]
  • FIG. 6 is a flow chart showing the processing in the image change detection step in the first embodiment; [0024]
  • FIG. 7 is a flow chart showing the processing in the image change detection step in the second embodiment; [0025]
  • FIG. 8 is a flow chart showing the processing in the image change detection step in the third embodiment; [0026]
  • FIGS. 9A and 9B are views for explaining an example of how an image is divided into blocks, a pixel processing order, and a block processing order in the third and fourth embodiments; [0027]
  • FIG. 10 is a flow chart showing another example of the processing in the image change detection step in the third embodiment; [0028]
  • FIG. 11 is a flow chart showing the processing in the image change detection step in the fourth embodiment; [0029]
  • FIG. 12 is a flow chart showing another example of the processing in the image change detection step in the fourth embodiment; [0030]
  • FIG. 13 is a flow chart showing the contents of an image processing method according to the fifth embodiment of the present invention; [0031]
  • FIG. 14 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 13; [0032]
  • FIG. 15 is a block diagram showing another example of the arrangement of the motion image processing apparatus for performing the processing in FIG. 13; [0033]
  • FIG. 16 is a view showing an outline of a motion image processing method to which the present invention is applied; [0034]
  • FIG. 17 is a flow chart showing the contents of an image processing method according to the sixth to eighth embodiments of the present invention; [0035]
  • FIG. 18 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 17; [0036]
  • FIG. 19 is a block diagram showing another example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 17; [0037]
  • FIG. 20 is a view for explaining a pixel processing order in detecting an image change in the sixth to ninth embodiments; [0038]
  • FIG. 21 is a flow chart showing the detailed processing in the change component extraction step, the erroneous extraction correction step, and the image change discrimination step in the sixth embodiment; [0039]
  • FIG. 22 is a flow chart showing the detailed initialization processing in step S501 in FIG. 21; [0040]
  • FIG. 23 is a flow chart showing the detailed processing in the change component extraction step, the erroneous extraction correction step, and the image change discrimination step in the seventh embodiment; [0041]
  • FIG. 24 is a flow chart showing the contents of an image processing method according to the eighth embodiment; [0042]
  • FIG. 25 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 24; and [0043]
  • FIG. 26 is a block diagram showing another example of the arrangement of the motion image processing apparatus for performing the processing in FIG. 24.[0044]
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
  • An embodiment of the present invention will be described below with reference to the accompanying drawings. [0045]
  • FIG. 1 is a flow chart showing the contents of a motion image processing method according to the first embodiment. [0046]
  • Referring to FIG. 1, in image input step 1, a still image is extracted from an image obtained by performing a photographing operation with a video camera or the like (not shown), a digital motion image stored in a disk unit (not shown), or the like, and the extracted image is input (this input still image will be referred to as an input image hereinafter). [0047]
  • In change detection step 2, the input image input in image input step 1 and an image (to be referred to as a newest change image hereinafter) obtained when the newest of several image changes caused in the past occurred are compared with each other to detect a current image change. When an image change is detected, the flow advances to newest change image storage step 3. If no image change is detected, the flow returns to image input step 1. [0048]
  • In newest change image storage step 3, the input image as an object to be compared when the image change is detected in change detection step 2 is stored as a newest change image. In image output step 4, the input image obtained when the image change is detected in change detection step 2 is output to a communication path (not shown) (e.g., a WAN or LAN) to transmit it. Alternatively, the input image is output to an image storage device (not shown) to be stored, or is output to an image display device (not shown) to be displayed. [0049]
  • When the processing in image output step 4 is completed, the flow returns to image input step 1 again to repeatedly execute the above processing unless the end of the processing is designated. [0050]
  • FIG. 2 is a block diagram showing an example of the arrangement of an image processing apparatus for performing the processing in FIG. 1. FIG. 3 shows an example of the arrangement of hardware for realizing the functions in FIG. 2. [0051]
  • Referring to FIG. 3, a CPU (Central Processing Unit) 21 controls the overall operation of the motion image processing apparatus of this embodiment. A ROM (Read-Only Memory) 22 stores various processing programs. A RAM (Random Access Memory) 23 stores various kinds of information during processing. [0052]
  • A disk unit 24 stores information such as digital motion image information. A disk input/output device (disk I/O) 25 inputs/outputs digital motion image information stored in the disk unit 24. A video camera 26 acquires image information by photographing an object to be photographed. A video capture device 27 captures a still image from an image obtained by the video camera 26. [0053]
  • A communication device 28 transmits a still image input from the disk unit 24 via the disk I/O 25, or transmits a still image, input from the video camera 26 via the video capture device 27, via a communication path such as a WAN or LAN. The CPU 21, the ROM 22, the RAM 23, the disk I/O 25, the video capture device 27, and the communication device 28 are connected to a bus 29. [0054]
  • Referring to FIG. 2, an image input unit 11 acquires a still image from an image obtained by the video camera, a digital motion image stored in the disk unit, or the like. An input image storage unit 14 stores the still image. [0055]
  • Referring to FIG. 3, for example, the image input unit 11 may be constituted by a known image input unit which causes the video capture device 27 to acquire a still image (input image) from an image obtained by performing a photographing operation with the video camera 26, and stores the still image in the RAM 23 under the control of the CPU 21, which operates in accordance with programs stored in the ROM 22 or the RAM 23. [0056]
  • Alternatively, the image input unit 11 may be constituted by a known image input unit which reads, via the disk I/O 25, a still image (input image) from a digital motion image stored in the disk unit 24, and stores the read image in the RAM 23 under the control of the CPU 21, which operates in accordance with programs stored in the ROM 22 or the RAM 23. [0057]
  • Although not shown in FIG. 3, the image input unit 11 may be constituted by a known image input unit which causes a digital still camera to photograph a still image, and stores the image in the RAM 23 under the control of the CPU 21, which operates in accordance with programs stored in the ROM 22 or the RAM 23. As is apparent, the image input unit 11 may be constituted by a combination of the above plurality of image input units. [0058]
  • A change detection unit 12 compares an input image stored in the input image storage unit 14 with a newest change image stored in a newest change image storage unit 15 to detect a current image change. When an image change is detected, the input image stored in the input image storage unit 14 is stored as a newest change image in the newest change image storage unit 15. [0059]
  • For example, as shown in FIG. 3, the change detection unit 12 can be constituted by the CPU 21, which operates in accordance with programs stored in the ROM 22 or the RAM 23, and the RAM 23 used as a work memory or the disk unit 24. As is apparent, the change detection unit 12 may be constituted by a dedicated CPU, a dedicated RAM, and a dedicated disk unit, or dedicated hardware. [0060]
  • An image output unit 13 outputs and transmits, to a communication path (a WAN or LAN), an input image which is stored in the input image storage unit 14 when the change detection unit 12 detects an image change, outputs and stores the image in the image storage device, or outputs the image to the image display device to display it. [0061]
  • For example, referring to FIG. 3, the image output unit 13 can be constituted by a known image output unit which transmits an image to a communication path such as a WAN or LAN via the communication device 28 under the control of the CPU 21, which operates in accordance with programs stored in the ROM 22 or the RAM 23. [0062]
  • Alternatively, the image output unit 13 may be constituted by a known image output unit which stores part of a digital motion image in the disk unit 24 under the control of the CPU 21, which operates in accordance with programs stored in the ROM 22 or the RAM 23. In this case, the disk unit 24 may be a unit which can be used via a network such as a LAN. [0063]
  • Although not shown in FIG. 3, the image output unit 13 may be constituted by a known image output unit which continuously displays still images on the same portion of a display to display a motion image. As is apparent, the image output unit 13 may be constituted by a combination of the above plurality of image output units. [0064]
  • For example, referring to FIG. 3, the input image storage unit 14 and the newest change image storage unit 15 may be constituted by the RAM 23 or the disk unit 24. As is apparent, each unit may be constituted by a dedicated storage device. [0065]
  • Note that the input image storage unit 14 and the newest change image storage unit 15 may be constituted by two image storage units 16a and 16b which are selectively used upon a switching operation, as shown in FIG. 4. In this arrangement, every time an image change is detected by the change detection unit 12, the roles of the image storage units 16a and 16b are switched in newest change image storage step 3 in FIG. 1. [0066]
  • Assume that an image change is detected at a certain time point in the process of sequentially storing an input image in the image storage unit 16a. In this case, the image storage unit 16a is switched to serve as a storage unit for storing a newest change image, whereas the image storage unit 16b is switched to serve as a storage unit for storing an input image. [0067]
  • With this operation, the input image stored in the image storage unit 16a when the image change is detected is used as a newest change image afterward, and the processing of copying an input image from the input image storage unit 14 to the newest change image storage unit 15 in the case shown in FIG. 2 can be omitted. [0068]
  • Note that the image storage units 16a and 16b need not be used as separate memories, but the above function can be realized by one memory with a bank switching operation. [0069]
  • The processing contents in each step in FIG. 1 will be described in detail with reference to the arrangements of FIGS. 2 and 3. [0070]
  • In image input step 1, the image input unit 11 acquires a still image from an image obtained by the video camera 26 or a digital motion image stored in the disk unit 24. Alternatively, the image input unit 11 photographs a still image using a digital still camera (not shown), and stores the still image in the input image storage unit 14. [0071]
  • In change detection step 2, the input image stored in the input image storage unit 14 is compared with the newest change image stored in the newest change image storage unit 15 to detect a current image change. [0072]
  • FIG. 5 shows an example of the contents of processing executed by the CPU 21 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3 in change detection step 2. In this processing, the pixel value difference (absolute value) between each pair of corresponding pixels is calculated using an input image and a newest change image. If the total pixel value difference of the entire image is equal to or larger than a given value, it is determined that the input image has changed from the newest change image. That is, it is determined that an image change has occurred. [0073]
  • In this case, pixel value differences are calculated by processing the respective pixels of an image in the raster scan order, as shown in FIG. 5. The total pixel value difference during processing will be referred to as a total change amount. As is apparent, pixels may be processed concurrently. [0074]
  • Referring to FIG. 6, in step S101, the total change amount is initialized to 0. In step S102, the pixel value difference between the currently processed pixels (to be referred to as subject pixels hereinafter) is calculated. [0075]
  • Assume that an input image is a binary image. In this case, the value of a pixel value difference is “1” if the subject pixels have different values, and is “0” if the subject pixels have the same value. If an input image is a halftone image, the value of a pixel value difference is the absolute value of the difference between the subject pixel values. If an input image is a color image, the value of a pixel value difference is obtained by calculating the absolute values of the differences between the R, G, and B values of the subject pixels, and calculating the sum of these absolute values. [0076]
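As an illustration (not part of the disclosed apparatus), the per-pixel difference rules above can be sketched in Python; the function name and the tuple representation of a color pixel are our own choices:

```python
def pixel_difference(p, q, mode="halftone"):
    """Pixel value difference between corresponding pixels p and q.

    binary:   1 if the values differ, 0 if they are equal
    halftone: absolute difference of the grey values
    color:    sum of absolute differences of the R, G, B values
    """
    if mode == "binary":
        return 0 if p == q else 1
    if mode == "halftone":
        return abs(p - q)
    if mode == "color":  # p and q are (R, G, B) tuples
        return sum(abs(a - b) for a, b in zip(p, q))
    raise ValueError("unknown mode: %s" % mode)

# small demonstrations
d_bin = pixel_difference(1, 0, "binary")           # differing binary pixels -> 1
d_grey = pixel_difference(200, 180, "halftone")    # |200 - 180| = 20
d_rgb = pixel_difference((10, 20, 30), (12, 18, 30), "color")  # 2 + 2 + 0 = 4
```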
  • In step S103, the pixel value difference calculated in step S102 is added to the total change amount. In step S104, it is checked whether the total change amount calculated in step S103 is larger than a predetermined value (threshold value). If YES in step S104, it is determined that an image change has occurred, and the processing shown in FIG. 6 is terminated. If NO in step S104, the flow advances to step S105. [0077]
  • In step S105, it is checked whether all pixels have been processed. If YES in step S105, it is determined that no image change has occurred, and the processing in FIG. 6 is terminated. If NO in step S105, the flow advances to step S106. In step S106, processing is shifted to the next pixel (the next pixel is set as a subject pixel), and the flow returns to step S102. [0078]
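The loop of steps S101 to S106 can likewise be sketched in Python (an illustrative simplification in which each image is a flat list of grey-level pixel values in raster order; all names are ours):

```python
def detect_image_change(input_image, newest_change_image, threshold):
    """Scan corresponding pixels in raster order, accumulating the total
    change amount, and stop early as soon as it exceeds the threshold."""
    total_change = 0                                    # step S101
    for p, q in zip(input_image, newest_change_image):  # raster-scan order
        total_change += abs(p - q)                      # steps S102-S103
        if total_change > threshold:                    # step S104
            return True   # image change detected
    return False          # all pixels processed, no change (step S105)

changed = detect_image_change([10, 10, 10], [10, 60, 10], threshold=40)    # True
unchanged = detect_image_change([10, 10, 10], [10, 20, 10], threshold=40)  # False
```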
  • In newest change image storage step 3, when an image change is detected in change detection step 2, the input image stored in the input image storage unit 14 in image input step 1 is stored as a newest change image in the newest change image storage unit 15. [0079]
  • In image output step 4, when an image change is detected in change detection step 2, the input image stored in the input image storage unit 14 is output and transmitted to a communication path (a WAN or LAN) via the communication device 28 or the like, output to the disk unit 24 or the like to be stored, or output to an image display device (not shown) to be displayed. [0080]
  • As described above, according to the first embodiment, an image which undergoes no change, e.g., a background image, need not be prepared as the above comparative image, and even a moderate image change can be reliably detected. [0081]
  • An image change can therefore be easily and reliably detected at a sufficiently high speed without any dedicated device. In addition, only when an image change is detected, the corresponding image data is output. With this operation, the image data amount can be reduced in outputting the image data. For example, only an image corresponding to a detected change is output to a communication path, a storage unit, or a display device to reduce the transmission, storage, or display data amount. [0082]
  • FIG. 16 shows an outline of the motion image processing method of the present invention. [0083]
  • The second embodiment of the present invention will be described next. [0084]
  • The second embodiment is another embodiment of the processing in change detection step 2 in the first embodiment. [0085]
  • FIG. 7 shows an example of the contents of processing executed by the CPU 21 in change detection step 2 in this embodiment in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3. [0086]
  • In this processing, the pixel value difference (absolute value) between each pair of corresponding pixels is calculated using an input image and a newest change image. If the value of each pixel value difference is equal to or larger than a given value (first threshold value), it is determined that the subject pixel has changed (a pixel having undergone a change will be referred to as a change pixel hereinafter). If the number of change pixels of the entire input image is equal to or larger than a given value (second threshold value), it is determined that the input image has changed as compared with the newest change image. That is, it is determined that an image change has occurred. [0087]
  • Similar to the first embodiment, as shown in FIG. 5, the above determination of a change pixel is performed by processing the respective pixels of an image in the raster scan order. Note that the total number of change pixels during processing will be referred to as the number of change pixels. As is apparent, pixels may be processed concurrently. [0088]
  • Referring to FIG. 7, in step S201, the number of change pixels is initialized to 0. In step S202, the pixel value difference between the subject pixels which are currently processed is calculated by the same method as in the first embodiment. [0089]
  • In step S203, it is checked whether the pixel value difference calculated in step S202 is larger than a predetermined value (first threshold value). If YES in step S203, it is determined that the subject pixel has undergone a change, and the flow advances to step S204 to increase the number of change pixels by one. If NO in step S203, it is determined that the subject pixel has undergone no change, and the flow jumps to step S206. [0090]
  • In step S205 after step S204, it is checked whether the number of change pixels calculated in step S204 is larger than a predetermined value (second threshold value). If YES in step S205, it is determined that an image change has occurred, and the processing in FIG. 7 is terminated. If NO in step S205, the flow advances to step S206. [0091]
  • In step S206, it is checked whether all pixels have been processed. If YES in step S206, it is determined that no image change has occurred, and the processing in FIG. 7 is terminated. If NO in step S206, the flow advances to step S207. In step S207, processing is shifted to the next pixel (the next pixel is set as a subject pixel), and the flow returns to step S202. [0092]
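For illustration, steps S201 to S207 admit the following Python sketch (images again modeled as flat pixel lists; the identifiers are ours, and the strict comparisons follow the flow chart description above):

```python
def detect_change_by_pixel_count(input_image, newest_change_image,
                                 first_threshold, second_threshold):
    """A pixel is a change pixel when its difference exceeds the first
    threshold; the image has changed when the number of change pixels
    exceeds the second threshold (steps S201-S207)."""
    n_change_pixels = 0                                 # step S201
    for p, q in zip(input_image, newest_change_image):
        if abs(p - q) > first_threshold:                # steps S202-S203
            n_change_pixels += 1                        # step S204
            if n_change_pixels > second_threshold:      # step S205
                return True
    return False                                        # step S206

demo = detect_change_by_pixel_count([0, 0, 0, 0], [9, 9, 9, 0],
                                    first_threshold=5, second_threshold=2)
# three change pixels > 2, so demo is True
```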
  • Similar to the first embodiment, in the second embodiment, an image change can be easily and reliably detected at a sufficiently high speed even without any dedicated device. In addition, only when an image change is detected, the corresponding image data is output to a communication path, a storage unit, or a display device. With this operation, the transmission, storage, or display data amount can be reduced. [0093]
  • The third embodiment of the present invention will be described next. [0094]
  • The third embodiment is still another embodiment of the processing in change detection step 2 in the first embodiment described above. [0095]
  • FIG. 8 shows an example of the contents of processing executed by the CPU 21 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3 in change detection step 2 in this embodiment. [0096]
  • In this processing, as shown in FIGS. 9A and 9B, an input image and a newest change image are divided into a plurality of blocks, and an image change is detected in units of blocks. Similar to the first embodiment, the pixel value difference between each pair of corresponding pixels is calculated using the input image and the newest change image. If the total pixel value difference in each block is larger than a given value (first threshold value), it is determined that an image change has occurred in the corresponding block. [0097]
  • If the number of blocks having undergone changes (to be referred to as change blocks hereinafter) of the entire input image is larger than a given value (second threshold value), it is determined that the input image has changed as compared with the newest change image. That is, it is determined that an image change has occurred. [0098]
  • Assume that pixel value differences in each block are calculated by processing the respective pixels in the raster scan order, as shown in FIG. 9A. Assume also that the respective blocks are processed in the raster scan order, as shown in FIG. 9B. Note that the total pixel value difference in a block which is being processed will be referred to as a total change amount; and the total number of change blocks, the number of change blocks. As is apparent, pixels or blocks may be processed concurrently. [0099]
  • Referring to FIG. 8, in step S301, the number of change blocks is initialized to 0. In step S302, the total change amount is initialized to 0. In step S303, the pixel value difference between subject pixels is calculated by the same method as that in the first embodiment. In step S304, the pixel value difference calculated in step S303 is added to the total change amount. [0100]
  • In step S305, it is checked whether the total change amount calculated in step S304 is larger than a predetermined value (first threshold value). If YES in step S305, it is determined that a change has occurred in the subject block, and the flow advances to step S307 to add one to the number of change blocks. If NO in step S305, the flow advances to step S306. [0101]
  • In step S306, it is checked whether all the pixels in a subject block have been processed. If YES in step S306, it is determined that no change has occurred in the subject block, the flow advances to step S311 to shift processing to the next block (the next block is set as a subject block), and the flow returns to step S302. If NO in step S306, the flow advances to step S310 to shift processing to the next pixel (the next pixel is set as a subject pixel), and the flow returns to step S303. [0102]
  • In step S308 following step S307, it is checked whether the number of change blocks calculated in step S307 is larger than a predetermined value (second threshold value). If YES in step S308, it is determined that an image change has occurred, and the processing in FIG. 8 is terminated. If NO in step S308, the flow advances to step S309. [0103]
  • In step S309, it is checked whether all blocks have been processed. If YES in step S309, it is determined that no image change has occurred, and the processing in FIG. 8 is terminated. If NO in step S309, the flow advances to step S311 to shift processing to the next block, and the flow returns to step S302. [0104]
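Steps S301 to S311 can be sketched as follows (an illustrative Python model in which each image is given as a list of blocks, each block a flat list of pixel values in raster order; all identifiers are ours):

```python
def detect_change_by_blocks(input_blocks, newest_blocks,
                            first_threshold, second_threshold):
    """A block is a change block when its total change amount exceeds
    the first threshold; the image has changed when the number of
    change blocks exceeds the second threshold (steps S301-S311)."""
    n_change_blocks = 0                                   # step S301
    for block_a, block_b in zip(input_blocks, newest_blocks):
        total_change = 0                                  # step S302
        for p, q in zip(block_a, block_b):
            total_change += abs(p - q)                    # steps S303-S304
            if total_change > first_threshold:            # step S305
                n_change_blocks += 1                      # step S307
                break
        else:
            continue  # step S306: block finished without a change
        if n_change_blocks > second_threshold:            # step S308
            return True
    return False                                          # step S309

demo = detect_change_by_blocks(
    [[0, 0], [0, 0], [0, 0]],
    [[9, 9], [9, 9], [0, 0]],
    first_threshold=10, second_threshold=1)
# the first two blocks are change blocks, 2 > 1, so demo is True
```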
  • Note that the purpose of this processing is to detect the presence/absence of an image change. For this reason, when the number of change blocks exceeds the second threshold value, the processing is terminated. However, as shown in the flow chart of FIG. 10, processing may be continued up to the last block. [0105]
  • With this operation, although the processing amount increases, all change blocks in an entire image can be detected. By transmitting, storing, or displaying only the change blocks, the transmission, storage, or display data amount can be reduced. [0106]
  • The fourth embodiment of the present invention will be described next. [0107]
  • The fourth embodiment is still another embodiment of the processing in change detection step 2 in the first embodiment described above. [0108]
  • FIG. 11 shows an example of the contents of processing executed by the CPU 21 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3 in change detection step 2 in this embodiment. [0109]
  • In this processing, as shown in FIGS. 9A and 9B, an input image and a newest change image are divided into a plurality of blocks, and an image change is detected in units of blocks. Similar to the second embodiment, the pixel value difference between each pair of corresponding pixels is calculated using the input image and the newest change image. If the value of each pixel value difference is larger than a given value (first threshold value), it is determined that the subject pixels have undergone a change. [0110]
  • If the number of change pixels in each block is larger than a given value (second threshold value), it is determined that the corresponding block has undergone a change. If the number of change blocks in the entire input image is larger than a given value (third threshold value), it is determined that the input image has changed as compared with the newest change image. That is, it is determined that an image change has occurred. [0111]
  • Similar to the third embodiment, pixel value differences in each block are calculated by processing the respective pixels in the raster scan order, as shown in FIG. 9A, and the respective blocks are also processed in the raster scan order, as shown in FIG. 9B. Note that the total number of change pixels in a block which is being processed will be referred to as the number of change pixels; and the total number of change blocks, the number of change blocks. As is apparent, pixels or blocks may be processed concurrently. [0112]
  • Referring to FIG. 11, in step S401, the number of change blocks is initialized to 0. In step S402, the number of change pixels is initialized to 0. In step S403, the pixel value difference between subject pixels is calculated by the same method as that in the first embodiment. [0113]
  • In step S404, it is checked whether the pixel value difference calculated in step S403 is larger than a predetermined value (first threshold value). If YES in step S404, it is determined that the subject pixels have undergone a change, and the flow advances to step S405 to add one to the number of change pixels. If NO in step S404, it is determined that the subject pixels have undergone no change, and the flow jumps to step S407. [0114]
  • In step S406 following step S405, it is checked whether the number of change pixels calculated in step S405 is larger than a predetermined value (second threshold value). If YES in step S406, it is determined that the subject block has undergone a change, and the flow jumps to step S408 to add one to the number of change blocks. If NO in step S406, the flow advances to step S407. [0115]
  • In step S407, it is checked whether all the pixels in the subject block have been processed. If YES in step S407, it is determined that the subject block has undergone no change, the flow advances to step S412 to shift processing to the next block (the next block is set to be a subject block), and the flow returns to step S402. If NO in step S407, the flow advances to step S411 to shift processing to the next pixel (the next pixel is set to be a subject pixel), and the flow returns to step S403. [0116]
  • In step S409 following step S408, it is checked whether the number of change blocks calculated in step S408 is larger than a predetermined value (third threshold value). If YES in step S409, it is determined that an image change has occurred, and the processing in FIG. 11 is terminated. If NO in step S409, the flow advances to step S410. [0117]
  • In step S410, it is checked whether all blocks have been processed. If YES in step S410, it is determined that no image change has occurred, and the processing in FIG. 11 is terminated. If NO in step S410, the flow advances to step S412 to shift processing to the next block, and the flow returns to step S402. [0118]
  • Note that the purpose of this processing is to detect the presence/absence of an image change, as in the third embodiment. For this reason, when the number of change blocks exceeds the third threshold value, the processing is terminated. However, as shown in the flow chart of FIG. 12, the processing can be continued up to the last block. By transmitting, storing, or displaying only detected change blocks, the transmission, storage, or display data amount can be reduced as in the third embodiment. [0119]
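For illustration, the three-threshold scheme of steps S401 to S412 can be sketched in Python (images modeled as lists of blocks, each block a flat pixel list; all identifiers are ours):

```python
def detect_change_pixels_blocks(input_blocks, newest_blocks, t1, t2, t3):
    """A pixel is a change pixel when its difference exceeds t1; a block
    is a change block when its change-pixel count exceeds t2; the image
    has changed when the change-block count exceeds t3 (steps S401-S412)."""
    n_change_blocks = 0                                   # step S401
    for block_a, block_b in zip(input_blocks, newest_blocks):
        n_change_pixels = 0                               # step S402
        for p, q in zip(block_a, block_b):
            if abs(p - q) > t1:                           # steps S403-S404
                n_change_pixels += 1                      # step S405
                if n_change_pixels > t2:                  # step S406
                    n_change_blocks += 1                  # step S408
                    break
        else:
            continue  # step S407: block finished without a change
        if n_change_blocks > t3:                          # step S409
            return True
    return False                                          # step S410

demo = detect_change_pixels_blocks(
    [[0, 0, 0], [0, 0, 0]],
    [[9, 9, 0], [9, 9, 0]],
    t1=5, t2=1, t3=1)
# both blocks have two change pixels each, 2 change blocks > 1, so demo is True
```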
  • The fifth embodiment of the present invention will be described next. [0120]
  • In the fifth embodiment, differential image forming step 5 is inserted between image input step 1 and change detection step 2 in the procedure in FIG. 1 in the first to fourth embodiments. [0121]
  • In differential image forming step 5, differential processing using, e.g., a Sobel operator, is performed for an input image input in image input step 1 (an image having undergone differential processing will be referred to as a differential image hereinafter). The other processing steps 1 to 4 are performed in the same manner as in the first to fourth embodiments. However, in change detection step 2, the same processing as that in the first to fourth embodiments is performed not for an input image but for a differential image. In addition, in newest change image storage step 3, a differential image is stored instead of an input image. [0122]
  • With this operation, although the processing is complicated, the apparatus can be designed not to detect an image change due to a change in illumination. In some applications of the present invention, it is preferable that an image change due to a change in illumination not be detected. A similar effect can be expected in the first to fourth embodiments as well, depending on the method of setting threshold values. In this embodiment, however, there is no need to consider a change in illumination in setting a threshold value. [0123]
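As an illustrative sketch of differential image forming step 5, the following Python code applies the 3×3 Sobel operator to a small grey-level image represented as a nested list (the border handling and all names are our own simplifications); it also demonstrates why a uniform illumination change leaves the differential image unchanged:

```python
# Sobel kernels for the horizontal and vertical derivatives
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_magnitude(image):
    """Return the differential image: |gx| + |gy| at each interior pixel
    (border pixels are set to 0 in this simplified sketch)."""
    h, w = len(image), len(image[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            gx = sum(SOBEL_X[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            gy = sum(SOBEL_Y[j][i] * image[y + j - 1][x + i - 1]
                     for j in range(3) for i in range(3))
            out[y][x] = abs(gx) + abs(gy)
    return out

# A uniform brightness change cancels out, because both kernels sum to 0:
step_edge = [[0, 0, 10, 10]] * 4              # image with a vertical edge
brighter = [[v + 5 for v in row] for row in step_edge]
edge_mag = sobel_magnitude(step_edge)
bright_mag = sobel_magnitude(brighter)
```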
  • FIG. 14 is a block diagram showing an example of a motion image processing apparatus for performing the processing in FIG. 13. Note that the arrangement shown in FIG. 14 can be realized by the hardware shown in FIG. 3, as in the first to fourth embodiments. [0124]
  • More specifically, a differential image forming unit 17 in FIG. 14 can be constituted by the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3, and the RAM 23 used as a work memory or the disk unit 24. The differential image forming unit 17 performs differential processing for an input image stored in an input image storage unit 14, and stores the resultant differential image in a differential image storage unit 18. For example, the differential image storage unit 18 can be constituted by the RAM 23 or the disk unit 24 in FIG. 3. [0125]
  • The remaining means in FIG. 14 are the same as those in the first to fourth embodiments. However, a change detection unit 12 performs the same processing as that in the first to fourth embodiments with respect to a differential image stored in the differential image storage unit 18 instead of an input image stored in the input image storage unit 14. A newest change image storage unit 15 does not store an input image stored in the input image storage unit 14 as a newest change image but stores a differential image stored in the differential image storage unit 18. [0126]
  • The arrangement shown in FIG. 14 may be replaced with the arrangement shown in FIG. 15, as in the first to fourth embodiments. More specifically, the operation mode of the differential image storage unit 18 and the newest change image storage unit 15 in FIG. 14 can be changed such that two image storage units 16a and 16b are selectively used upon a switching operation, as shown in FIG. 15. With this arrangement, the processing of copying a differential image from the differential image storage unit 18 to the newest change image storage unit 15, which is performed in the case shown in FIG. 14, can be omitted. [0127]
  • FIG. 17 is a flow chart showing the contents of a motion image processing method according to the sixth embodiment of the present invention. [0128]
  • Referring to FIG. 17, in image input step 101, a still image is extracted from an image obtained by performing a photographing operation with a video camera or the like (not shown), a digital motion image stored in a disk unit or the like (not shown), or the like, and the extracted image is input (this input still image will be referred to as an input image hereinafter). [0129]
  • In change component extraction step 102, the input image input in image input step 101 and an image obtained upon detection of the newest change of several image changes in the past are compared with each other to detect and extract current image change components. [0130]
  • Image change components are extracted by using a newest change image for the following reason. Consider the above two conventional methods of detecting a change area on the basis of a difference between two images. In the method of detecting a change area on the basis of a difference between a background image photographed in advance and a current photographed image, for example, a background image having no moving object must be prepared. [0131]
  • In the method of detecting a change area on the basis of a difference between two frames adjacent to each other along the time axis, if an image change is moderate, the difference becomes small. It is therefore difficult to detect a change area. [0132]
  • In contrast to this, in the method of extracting change components by comparing an input image with a newest change image as in this embodiment, a changeless image such as a background image need not be prepared as a reference image to be compared with the input image, and even a small change in a moderate image change can be reliably detected and extracted. [0133]
  • In erroneous extraction correction step 103, a component, of the change components extracted in change component extraction step 102, which is considered as a component which has been erroneously extracted because of, e.g., flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like is detected and corrected. [0134]
  • In image change discrimination step 104, it is discriminated, on the basis of the change component corrected in erroneous extraction correction step 103, whether an image change has occurred. If YES in step 104, the flow advances to newest change image storage step 105. If NO in step 104, the flow returns to image input step 101. [0135]
  • In newest change image storage step 105, the input image from which the change components have been extracted in change component extraction step 102 is stored as a newest change image. In image output step 106, the input image for which “CHANGE” is discriminated in image change discrimination step 104 is output to a communication path (e.g., a WAN or LAN) (not shown) to be transmitted, output to an image storage unit (not shown) to be stored, or output to an image display device to be displayed. [0136]
  • Upon completion of the processing in image output step 106, the flow returns to image input step 101 to repeatedly execute the above processing, unless a processing end command is issued. [0137]
  • In the above processing, an image change can be detected with simple processing, i.e., differential processing. An image change can therefore be detected at a sufficiently high speed without any dedicated device for processing an image of a large data amount. In addition, since an image change is discriminated upon removal of an erroneously extracted component of an image change, an image change can be accurately detected. [0138]
  • By outputting an image only when an image change is detected, the corresponding image can be output after its data amount is reduced. For example, by outputting only an image for which an image change is detected to a communication path, a storage unit, or a display device, the transmission, storage, or display data amount can be reduced. [0139]
  • FIG. 18 is a block diagram showing an example of the arrangement of a motion image processing apparatus for performing the processing in FIG. 17. [0140]
  • Note that a hardware arrangement for realizing the functions shown in FIG. 18 is the same as that shown in FIG. 3. [0141]
  • Referring to FIG. 18, an image input unit 111 acquires a still image from an image obtained by a video camera, a digital motion image stored in a disk unit, or the like, and stores it in an input image storage unit 112. [0142]
  • Referring to FIG. 3, for example, the image input unit 111 may be constituted by a known image input unit which causes the video capture device 27 to acquire a still image (input image) from an image obtained by performing a photographing operation with the video camera 26, and stores the still image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23. [0143]
  • Alternatively, the image input unit 111 may be constituted by a known image input unit which reads, via the disk I/O 25, a still image (input image) from a digital motion image stored in the disk unit 24, and stores the read image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23. [0144]
  • Although not shown in FIG. 3, the image input unit 111 may be constituted by a known image input unit which causes a digital still camera to photograph a still image, and stores the image in the RAM 23 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23. As is apparent, the image input unit 111 may be constituted by a combination of the above plurality of image input units. [0145]
  • A change component extraction unit 113 compares an input image stored in the input image storage unit 112 with a newest change image stored in a newest change image storage unit 116 to extract current image change components. An erroneous extraction correction unit 114 detects and corrects a component, of the change components extracted by the change component extraction unit 113, which is considered as a component which has been erroneously extracted because of, e.g., flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like. [0146]
  • An image change discrimination unit 115 receives the image change components which have been corrected by the erroneous extraction correction unit 114, and discriminates whether an image change has occurred. If the unit 115 discriminates that an image change has occurred, a newest change image storage unit 116 reads out the corresponding input image stored in the input image storage unit 112, and stores it as a newest change image. [0147]
  • Each of the change component extraction unit 113, the erroneous extraction correction unit 114, and the image change discrimination unit 115 can be constituted by the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3, and the RAM 23 used as a work memory or the disk unit 24. As is apparent, each unit may be constituted by a dedicated CPU and a dedicated RAM or disk unit, or dedicated hardware. [0148]
  • If “CHANGE” is detected by the image change discrimination unit 115, an image output unit 117 outputs the input image stored in the input image storage unit 112 to a communication path (e.g., a WAN or LAN) to transmit it, outputs it to an image storage unit to store it, or outputs it to an image display device to display it. [0149]
  • For example, referring to FIG. 3, the image output unit 117 can be constituted by a known image output unit which transmits an image to a communication path such as a WAN or LAN via the communication device 28 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23. [0150]
  • Alternatively, the image output unit 117 may be constituted by a known image output unit which stores part of a digital motion image in the disk unit 24 under the control of the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23. In this case, the disk unit 24 may be a unit which can be used via a network such as a LAN. [0151]
  • Although not shown in FIG. 3, the image output unit 117 may be constituted by a known image output unit which continuously displays still images on the same portion on a display to display a motion image. As is apparent, the image output unit 117 may be constituted by a combination of the above plurality of image output units, and may be used by properly selecting the control programs for these units. [0152]
  • For example, referring to FIG. 3, the input image storage unit 112 and the newest change image storage unit 116 may be constituted by the RAM 23 or the disk unit 24. As is apparent, each of the storage units 112 and 116 may be constituted by a dedicated storage device. [0153]
  • Note that the input image storage unit 112 and the newest change image storage unit 116 may be constituted by two image storage units 118a and 118b which are selectively used upon a switching operation, as shown in FIG. 19. In this arrangement, every time an image change component is detected by the change component extraction unit 113, the roles of the image storage units 118a and 118b are switched in newest change image storage step 105 in FIG. 17. [0154]
  • Assume that an image change is detected at a certain time point in the process of sequentially storing an input image in the image storage unit 118a. In this case, the image storage unit 118a is switched to serve as a storage unit for storing a newest change image, whereas the image storage unit 118b is switched to serve as a storage unit for storing an input image. [0155]
  • With this operation, the input image stored in the image storage unit 118a when the image change is detected is used as the newest change image afterward, and the processing of copying an input image from the input image storage unit 112 to the newest change image storage unit 116 in the case shown in FIG. 18 can be omitted. [0156]
  • Note that the image storage units 118a and 118b need not be used as separate memories, but the above function can be realized by one memory with a bank switching operation. [0157]
  • The processing contents in each step in FIG. 17 will be described in detail with reference to the arrangements of FIGS. 18 and 3. [0158]
  • In image input step 101, the image input unit 111 acquires a still image from an image obtained by the video camera 26 or a digital motion image stored in the disk unit 24. Alternatively, the image input unit 111 photographs a still image using a digital still camera (not shown), and stores the still image in the input image storage unit 112. [0159]
  • In change component extraction step 102, the input image stored in the input image storage unit 112 in the above manner is compared with the newest change image stored in the newest change image storage unit 116 to detect and extract current image change components. [0160]
  • The processing in change component extraction step 102 is performed by calculating the pixel value difference (absolute value) between each pair of corresponding pixels (pixels at identical positions in the respective images) using the input image and the newest change image, and by extracting a pixel value difference exceeding a given threshold value as an image change component. The following description will be made on the assumption that each pixel value difference is calculated by processing the respective pixels of an image in the raster scan order, as indicated by the arrow in FIG. 20. As is apparent, however, pixels may be processed concurrently. [0161]
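For illustration, the extraction rule just described can be sketched in Python (images modeled as flat pixel lists in raster order; all names are ours):

```python
def extract_change_components(input_image, newest_change_image, threshold):
    """Per-pixel absolute differences; a difference above the threshold
    is kept as a change component, and other positions are set to 0."""
    components = []
    for p, q in zip(input_image, newest_change_image):
        d = abs(p - q)
        components.append(d if d > threshold else 0)
    return components

demo = extract_change_components([10, 10, 10], [10, 40, 15], threshold=10)
# differences are 0, 30, 5 -> only the middle one survives: [0, 30, 0]
```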
  • In erroneous extraction correction step 103, a component, of the change components extracted by the change component extraction unit 113 in change component extraction step 102, which is considered as a component which has been erroneously extracted because of, e.g., flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like is detected and corrected, thereby generating corrected image change components. In addition, the sum total of the corrected image change components is obtained. [0162]
  • In this erroneous extraction correction processing, first of all, it is checked whether a change component extracted in change component extraction step 102 satisfies the following two conditions: [0163]
  • [0164] (1) a condition in which the pixel value difference (absolute value) (corresponding to a subject pixel) between a given pair of corresponding pixels (pixels at identical positions in the respective images) of an input image and a newest change image is larger than a given value (threshold value 1), and all the pixel value differences (absolute values) (corresponding to the eight pixels adjacent to the subject pixel (i.e., the currently processed pixel)) between pairs of corresponding pixels of the input image and the newest change image are equal to or smaller than the given value (threshold value 1); and
  • [0165] (2) a condition in which the pixel value difference (absolute value) (corresponding to a subject pixel) between a given pair of corresponding pixels of the input image and the newest change image is equal to or smaller than the given value (threshold value 1), and all the pixel value differences (absolute values) (corresponding to the eight pixels adjacent to the subject pixel) between pairs of corresponding pixels of the input image and the newest change image are larger than the given value (threshold value 1).
  • [0166] If condition (1) is satisfied, it is determined that an image change component has been erroneously extracted, and this value is not added to the total change amount. (In this embodiment, the running sum of pixel value differences (corrected image change components) will be referred to as the total change amount. The total change amount is calculated by processing the respective pixels in the raster scan order, as indicated by the arrow in FIG. 20, in the same manner as the above calculation of pixel value differences.)
  • [0167] If condition (2) is satisfied, the pixel value difference (absolute value) between the corresponding pixels is changed to the mean value of the pixel value differences (absolute values) at the positions of the surrounding eight pixels, and this mean value is added to the total change amount. If neither condition (1) nor condition (2) is satisfied, the pixel value difference (absolute value) between the corresponding pixels itself is added to the total change amount.
  • [0168] That is, corrected image change components are defined such that the pixel value difference (absolute value) between pixels corresponding to condition (1), of the two conditions (1) and (2), is set to be 0, the pixel value difference (absolute value) between pixels corresponding to condition (2) is set to be the mean value of the pixel value differences (absolute values) at the positions of the eight pixels around the subject pixel, and a pixel value difference between pixels corresponding to neither condition (1) nor condition (2) is the pixel value difference (absolute value) extracted in change component extraction step 102.
  • [0169] In image change discrimination step 104, an image change is discriminated on the basis of the sum total (total change amount) of the image change components corrected by the erroneous extraction correction unit 114 in erroneous extraction correction step 103. More specifically, the above total change amount is compared with a predetermined value (threshold value 2). If the total change amount is larger than threshold value 2, it is discriminated that “THERE IS CHANGE”. If the total change amount is not larger than threshold value 2 after all the pixels in the frame are processed, it is discriminated that “THERE IS NO CHANGE” in the frame.
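The extraction, correction, and discrimination described above can be summarized in a short sketch. This is an illustrative, array-based rendering rather than the raster-scan hardware flow of FIG. 21; the function name, the use of NumPy, and the zero/white padding of borders follow the text's border rule but are otherwise assumptions.

```python
import numpy as np

def detect_image_change(input_img, newest_change_img, thresh1, thresh2):
    """Sketch of change component extraction, erroneous extraction
    correction (conditions (1) and (2)), and image change discrimination.
    Returns True for "THERE IS CHANGE", False for "THERE IS NO CHANGE"."""
    diff = np.abs(input_img.astype(int) - newest_change_img.astype(int))
    binary = diff > thresh1            # "black" pixels: extracted change components

    # Positions outside the image are handled as white pixels / zero differences.
    b = np.pad(binary, 1, constant_values=False)
    d = np.pad(diff, 1, constant_values=0)

    total = 0
    h, w = diff.shape
    for y in range(h):
        for x in range(w):
            center = b[y + 1, x + 1]
            n_black = b[y:y + 3, x:x + 3].sum() - center
            if center and n_black == 0:
                continue               # condition (1): erroneous extraction, add nothing
            if (not center) and n_black == 8:
                nd = d[y:y + 3, x:x + 3]
                total += (nd.sum() - d[y + 1, x + 1]) / 8.0  # condition (2): mean of 8 neighbours
            elif center:
                total += diff[y, x]    # ordinary corrected change component
            if total > thresh2:
                return True            # "THERE IS CHANGE"
    return False                       # "THERE IS NO CHANGE"
```

A genuine 3×3 changed region therefore contributes fully to the total change amount, while a single isolated noisy pixel contributes nothing.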
  • [0170] The flow chart of FIG. 21 shows an example of the contents of processing executed by the CPU 21 in change component extraction step 102, erroneous extraction correction step 103, and image change discrimination step 104 in FIG. 17 in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3.
  • [0171] In the processing shown in FIG. 21, the pixel value difference (absolute value) between each pair of corresponding pixels of an input image and a newest change image is calculated. Of the calculated pixel value differences, only pixel value differences each having a value exceeding a given value (threshold value 1) are added throughout the image. If the total value (total change amount) exceeds another given value (threshold value 2), it is discriminated that the input image has changed as compared with the newest change image, i.e., “THERE IS CHANGE”. If the total change amount of the entire image is equal to or smaller than threshold value 2, it is discriminated that “THERE IS NO CHANGE”.
  • [0172] Assume that the pixel value difference (absolute value) between a given pair of corresponding pixels exceeds threshold value 1, but all the pixel value differences corresponding to the eight pixels adjacent to the subject pixel having the above pixel value difference are equal to or smaller than threshold value 1. In this case, it is determined that the pixel value difference at the subject pixel position is an erroneously extracted component, i.e., a change component has been erroneously extracted. Therefore, this value is not added to the total change amount.
  • [0173] Assume that the pixel value difference (absolute value) between a given pair of corresponding pixels is equal to or smaller than a given value (threshold value 1), but all the pixel value differences corresponding to the eight pixels adjacent to the subject pixel having the above pixel value difference are larger than threshold value 1. In this case, it is determined that extraction of an image change component has been omitted, and the pixel value difference at the subject pixel position is changed to the mean value of the respective pixel value differences (absolute values) at the surrounding eight pixel positions. This mean value is then added to the total change amount.
  • [0174] The processing in this embodiment will be described with reference to the flow chart of FIG. 21.
  • [0175] Referring to FIG. 21, in step S501, initialization processing required for erroneous extraction correction processing using a nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) is performed. As described above, in order to determine erroneous extraction of an image change component, the pixel value differences corresponding to the eight pixels around a subject pixel position for determination of erroneous extraction must be obtained in advance. FIG. 20 shows this state.
  • [0176] Assume that a position A is a subject position for determination of erroneous extraction in FIG. 20. In this case, a nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) indicated by the hatched portion includes the pixel positions (to be referred to as a noise discrimination subject area hereinafter) serving as reference positions for discrimination of erroneous extraction concerning the subject position A. Referring to FIG. 20, processing is performed in the raster scan order. For this reason, in discriminating erroneous extraction concerning the subject position A, a pixel value difference (absolute value) at a pixel position B required for subsequent discrimination is calculated in advance.
  • [0177] The initialization processing in step S501 will be described in detail with reference to the flow chart of FIG. 22.
  • [0178] Referring to FIG. 22, in step S1001, the total change amount is set to zero. In step S1002, the difference in the number of pixels ((x+2) where x is the number of pixels in the main scanning direction) between the pixel value difference calculation position (position B in FIG. 20) and the subject position of noise discrimination (position A in FIG. 20) is set in a first temporary buffer (not shown).
  • [0179] In step S1003, the memory address of the output destination to which a pixel value difference calculation result is to be output and the memory address of the output destination to which a discrimination result indicating whether the calculated pixel value difference is larger than predetermined threshold value 1 is to be output are respectively set in second and third temporary buffers (not shown). In this case, the output destinations of the pixel value difference and of the comparison result indicating whether the pixel value difference is larger than threshold value 1 are independently ensured in free areas in the RAM 23.
  • [0180] In step S1004, the pixel position where a pixel value difference is to be calculated is set to the head pixel position (the head position of a frame) in the raster scan order in a fourth temporary buffer (not shown). In step S1005, a pixel value difference at the pixel position set in the fourth temporary buffer is calculated. This calculated pixel value difference is output to the pixel value difference output destination (a free area in the RAM 23) held in the second temporary buffer.
  • [0181] In this case, if the input image is a halftone image, the pixel value difference is, for example, the absolute value of the difference between subject pixels. If the input image is a color image, the pixel value difference is, for example, the sum of the absolute values of the differences between the R, G, and B values of subject pixels.
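The two example definitions of the pixel value difference (halftone and color) can be written as one small helper. This is a sketch; the function name is illustrative, and it assumes a grayscale pixel is a scalar and a color pixel is an (R, G, B) triple.

```python
import numpy as np

def pixel_value_difference(p, q):
    """Pixel value difference as described in the text: the absolute
    difference for a halftone (grayscale) pixel, or the sum of the
    absolute R, G, and B differences for a color pixel."""
    p = np.asarray(p, dtype=int)
    q = np.asarray(q, dtype=int)
    return int(np.abs(p - q).sum())
```

For a scalar pair the sum has a single term, so the same expression covers both cases.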
  • [0182] In step S1006, it is checked whether the pixel value difference calculated in step S1005 is larger than predetermined threshold value 1. If YES in step S1006, the flow advances to step S1007. If NO in step S1006, the flow advances to step S1008.
  • [0183] In step S1007, as a signal indicating that the pixel value difference at the subject pixel position is larger than predetermined threshold value 1, “1” (also called a black pixel) is output to the bit map data holding area (the output destination of the discrimination result indicating whether the pixel value difference is larger than threshold value 1) corresponding to the subject pixel position set in the third temporary buffer. The flow then advances to step S1009.
  • [0184] In step S1008, as a signal indicating that the pixel value difference at the subject pixel position is equal to or smaller than predetermined threshold value 1, “0” (also called a white pixel) is output to the bit map data holding area corresponding to the subject pixel position. The flow then advances to step S1009.
  • [0185] In the RAM 23, the above bit map data holding area is a bit map image data area having the same size as that of the image for which change detection is being performed. For each pixel position of the image for which change detection is being performed, a corresponding item of binary image data (black or white pixel data) is held.
  • [0186] In step S1009, the pixel position set in the fourth temporary buffer is updated to shift the pixel position, at which a pixel value difference is to be calculated, ahead by one pixel. In step S1010, the contents set in the second temporary buffer are updated to shift the output destination of the pixel value difference calculation result ahead by one pixel. In addition, the contents set in the third temporary buffer are updated to shift ahead by one pixel the output destination to which the discrimination result indicating whether the calculated pixel value difference is larger or smaller than predetermined threshold value 1 is to be output.
  • [0187] In step S1011, the value (the difference in the number of pixels between the pixel value difference calculation position and the subject position of noise discrimination) set in the first temporary buffer is decreased by one. In step S1012, it is checked whether the value set in the first temporary buffer is zero. With this operation, it is checked whether all the pixels required for initialization have been processed.
  • [0188] If it is discriminated that the value in the first temporary buffer is zero, it is determined that all the pixels required for initialization have been processed. The initialization processing in step S501 in FIG. 21 is then terminated, and the flow returns to the main processing routine. If the value in the first temporary buffer is not zero, the flow returns to step S1005 to continue the initialization processing.
  • [0189] In step S502 in FIG. 21, a pixel value difference at the pixel position (subject position of noise discrimination) set in the fourth temporary buffer is calculated. The calculated pixel value difference is output to the pixel value difference output destination held in the second temporary buffer. In step S503, it is checked whether the pixel value difference calculated in step S502 is larger than predetermined threshold value 1. If YES in step S503, the flow advances to step S504. If NO in step S503, the flow advances to step S505.
  • [0190] In step S504, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S502 is larger than predetermined threshold value 1, data of “1” (black pixel) is output to the bit map data holding area corresponding to the subject pixel position set in the third temporary buffer. The flow then advances to step S506.
  • [0191] In step S505, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S502 is equal to or smaller than predetermined threshold value 1, data of “0” (white pixel) is output to the bit map data holding area corresponding to the subject pixel position. The flow then advances to step S506.
  • [0192] In step S506, whether the pixel value difference (absolute value) at a subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (1) described above is discriminated. More specifically, whether the pixel value difference at the subject position A exceeds threshold value 1 while the pixel value difference at each of the eight pixel positions adjacent to the position A is equal to or smaller than threshold value 1 is discriminated in the following manner.
  • [0193] The binary image data stored in the bit map data holding area corresponding to the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) (the noise discrimination subject area indicated by hatching in FIG. 20) centered on the subject position A of noise discrimination are referred to in order to check whether the central pixel of the noise discrimination subject area is a black isolated point (only the central pixel is a black pixel, while all the surrounding eight pixels are white pixels). With this operation, the above discrimination processing is performed. If it is discriminated that the central pixel is a black isolated point, the flow advances to step S513. Otherwise, the flow advances to step S507.
  • [0194] In step S507, whether the pixel value difference (absolute value) at the subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (2) described above is discriminated. More specifically, whether the pixel value difference at the subject position A is equal to or smaller than threshold value 1 while the pixel value difference at each of the eight pixel positions adjacent to the position A is larger than threshold value 1 is discriminated in the following manner.
  • [0195] The binary image data stored in the bit map data holding area corresponding to the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) (the noise discrimination subject area indicated by hatching in FIG. 20) centered on the subject position A of noise discrimination are referred to in order to check whether the central pixel of the noise discrimination subject area is a white isolated point (only the central pixel is a white pixel, while all the surrounding eight pixels are black pixels). With this operation, the above discrimination processing is performed. If it is discriminated that the central pixel is a white isolated point, the flow advances to step S508. Otherwise, the flow advances to step S509.
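The black- and white-isolated-point tests on the binary bit map can be sketched as two small predicates. Function names are illustrative; positions outside the bit map are treated as white ("0"), as the text specifies for image borders.

```python
def _get(bitmap, r, c):
    """Read the binary bit map, treating out-of-range positions as white (0)."""
    h, w = len(bitmap), len(bitmap[0])
    return bitmap[r][c] if 0 <= r < h and 0 <= c < w else 0

def is_black_isolated(bitmap, y, x):
    """Black isolated point: centre is 1 and all eight neighbours are 0."""
    if _get(bitmap, y, x) != 1:
        return False
    return all(_get(bitmap, y + dy, x + dx) == 0
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))

def is_white_isolated(bitmap, y, x):
    """White isolated point: centre is 0 and all eight neighbours are 1."""
    if _get(bitmap, y, x) != 0:
        return False
    return all(_get(bitmap, y + dy, x + dx) == 1
               for dy in (-1, 0, 1) for dx in (-1, 0, 1)
               if (dy, dx) != (0, 0))
```

Note that with border positions counted as white, a pixel on the image edge can never be a white isolated point, which is consistent with the border handling described later.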
  • [0196] In step S508, the mean value of the pixel value differences (absolute values) at the eight pixel positions around the subject position A in the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) (the noise discrimination subject area indicated by hatching in FIG. 20) centered on the subject position A of noise discrimination is calculated by referring to the area of the RAM 23 in which the pixel value difference calculation results are held. The obtained mean value is then set as the pixel value difference at the subject position A of noise discrimination.
  • [0197] In step S509, whether the pixel value difference (absolute value) at the subject position A of noise discrimination is larger than threshold value 1 is discriminated by referring to the binary image data stored in the bit map data holding area corresponding to the subject position A of noise discrimination and checking whether the pixel at the subject position A of noise discrimination is a black pixel. If a black pixel is discriminated in step S509, the flow advances to step S510. Otherwise, the flow advances to step S513.
  • [0198] In step S510, the pixel value difference (the above mean value when step S510 is reached from step S508, or the value calculated in step S502 when it is reached from step S509) is added to the total change amount to obtain a new total change amount. In step S511, it is checked whether the total change amount updated in step S510 is larger than predetermined threshold value 2. If YES in step S511, the flow advances to step S512. If NO in step S511, the flow advances to step S513.
  • [0199] In step S512, it is discriminated that an image change has occurred, and the processing in the flow chart of FIG. 21 is terminated. In step S513, it is checked whether all pixels have been processed. If YES in step S513, the flow advances to step S514 to discriminate that no image change has occurred, and the processing in the flow chart of FIG. 21 is terminated. If NO in step S513, the flow advances to step S515.
  • [0200] In step S515, processing is shifted to the next pixel. That is, both the pixel value difference calculation position and the subject position of noise discrimination are shifted to the next pixels to set a new subject pixel for noise discrimination. The flow then returns to step S502.
  • [0201] In the above processing, the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) centered on a subject position of noise discrimination may include a portion where no corresponding binary image data or no pixel value difference exists, as in a case wherein the subject position is at an end portion (upper, lower, left, or right end) of the image. In this case, such a portion is handled as white pixel data or a pixel value difference of “0”.
  • [0202] If it is discriminated in image change discrimination step 104 that an image change has occurred, the input image input in image input step 101 and stored in the input image storage unit 112 is stored as the newest change image in the newest change image storage unit 116 in newest change image storage step 105.
  • [0203] In addition, if it is discriminated in image change discrimination step 104 that an image change has occurred, the input image stored in the input image storage unit 112 is, in image output step 106, output to a communication path (a WAN or LAN) via the communication device 28 or the like to be transmitted, output to the disk unit 24 to be stored, or output to an image display device (not shown) to be displayed.
  • [0204] As described above, according to the sixth embodiment, fine change components dispersed randomly over the image are extracted and removed, as erroneously extracted image change components, from the change components obtained by comparing a subject image whose motion is to be detected with the newest change image. On the basis of the change components corrected in this manner, it can be determined whether the above image has undergone a change. Therefore, an image change can be detected without being affected by, e.g., change components produced randomly in the image because of flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like. That is, an image change can be accurately detected.
  • [0205] The seventh embodiment of the present invention will be described next.
  • [0206] The seventh embodiment is another embodiment of the processing in change component extraction step 102, erroneous extraction correction step 103, and image change discrimination step 104 in the sixth embodiment.
  • [0207] FIG. 23 is a flow chart showing an example of the contents of processing executed by the CPU 21 in change component extraction step 102, erroneous extraction correction step 103, and image change discrimination step 104 in the seventh embodiment in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3.
  • [0208] In the processing shown in FIG. 23, the pixel value difference (absolute value) between each pair of corresponding pixels is calculated using an input image and a newest change image. If a pixel value difference is larger than a given value (threshold value 1), it is determined that the corresponding subject pixel has undergone a change (a pixel having undergone a change will be referred to as a change pixel hereinafter). If the number of change pixels in the entire input image is larger than a given value (threshold value 3), it is determined that the input image has changed as compared with the newest change image. That is, it is determined that an image change has occurred.
  • [0209] Assume that the pixel value difference (absolute value) between a given pair of corresponding pixels is larger than threshold value 1, but all the pixel value differences corresponding to the eight pixels adjacent to the subject pixel having the above pixel value difference are equal to or smaller than threshold value 1. In this case, it is determined that the pixel value difference at the subject pixel position is an erroneously extracted component, i.e., an erroneously extracted change component. Therefore, this subject pixel is not counted as a change pixel.
  • [0210] Assume that the pixel value difference (absolute value) between a given pair of corresponding pixels is equal to or smaller than threshold value 1, but all the pixel value differences corresponding to the eight pixels adjacent to the subject pixel having the above pixel value difference are larger than threshold value 1. In this case, it is determined that extraction of an image change component has been omitted, and this subject pixel is counted as a change pixel.
  • [0211] As in the sixth embodiment, the above change pixel discrimination processing is performed by processing the respective pixels of an image in the raster scan order, as shown in FIG. 20, and the running count of change pixels during processing will be referred to as the number of change pixels. As is apparent, pixels may be processed concurrently.
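The counting variant above can be sketched in the same style as the sixth embodiment, replacing the total change amount with a change-pixel count compared against threshold value 3. This is an illustrative array-based sketch, not the raster-scan flow of FIG. 23; the function name and NumPy usage are assumptions.

```python
import numpy as np

def count_based_change(input_img, newest_change_img, thresh1, thresh3):
    """Seventh-embodiment variant: count change pixels instead of summing
    differences. Isolated black points are not counted (erroneous
    extraction); isolated white points are counted (omitted extraction)."""
    diff = np.abs(input_img.astype(int) - newest_change_img.astype(int))
    # Borders are handled as white pixels, per the text.
    binary = np.pad(diff > thresh1, 1, constant_values=False)
    n_change = 0
    h, w = diff.shape
    for y in range(h):
        for x in range(w):
            center = binary[y + 1, x + 1]
            n_black = binary[y:y + 3, x:x + 3].sum() - center
            if center and n_black == 0:
                continue               # condition (1): do not count this pixel
            if (not center) and n_black == 8:
                n_change += 1          # condition (2): count as a change pixel
            elif center:
                n_change += 1          # ordinary change pixel
            if n_change > thresh3:
                return True            # image change has occurred
    return False
```

Compared with the sixth embodiment, only the accumulator changes; the isolated-point discrimination is identical.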
  • [0212] The processing in this embodiment will be described below with reference to the flow chart of FIG. 23.
  • [0213] Referring to FIG. 23, in step S601, initialization processing required for erroneous extraction correction processing using a nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) is performed as in the processing in step S501 in FIG. 21.
  • [0214] In step S601, processing is performed in the raster scan order, as shown in FIG. 20. For this reason, in discriminating erroneous extraction concerning the subject position A, the pixel value difference (absolute value) at the pixel position B which is required for the subsequent discrimination processing is calculated, and whether each calculated pixel value difference is larger than predetermined threshold value 1 is discriminated in advance.
  • [0215] If it is discriminated that a given pixel value difference is larger than predetermined threshold value 1, a black pixel is assigned. Otherwise, a white pixel is assigned. With this operation, binary image data consisting of pixels corresponding to the respective pixel positions of the image for which change detection is being performed are generated in advance.
  • [0216] The initialization processing in step S601 can be realized by executing the processing in the flow chart of FIG. 22, as in the initialization processing in step S501 in FIG. 21.
  • [0217] In the seventh embodiment, however, the second temporary buffer used in the sixth embodiment and the memory area for holding the pixel value differences themselves are not required. Accordingly, in steps S1003, S1005, and S1010 in FIG. 22, the processing associated with the second temporary buffer and with that memory area is omitted.
  • [0218] In the seventh embodiment, “SET TOTAL CHANGE AMOUNT TO ZERO” in step S1001 in FIG. 22 is replaced with “SET NUMBER OF CHANGE PIXELS TO ZERO”.
  • [0219] In step S602 in FIG. 23, a pixel value difference at the pixel value difference calculation position (subject position of noise discrimination) is calculated by the same method as in the sixth embodiment. In step S603, it is checked whether the pixel value difference calculated in step S602 is larger than predetermined threshold value 1. If YES in step S603, the flow advances to step S604. If NO in step S603, the flow advances to step S605.
  • [0220] In step S604, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S602 is larger than predetermined threshold value 1, data of “1” (black pixel) is output to the bit map data holding area corresponding to the subject pixel position set in a third temporary buffer (not shown). The flow then advances to step S606.
  • [0221] In step S605, as a signal indicating that the pixel value difference at the pixel value difference calculation position in step S602 is equal to or smaller than predetermined threshold value 1, data of “0” (white pixel) is output to the bit map data holding area corresponding to the subject pixel position. The flow then advances to step S606.
  • [0222] In step S606, as in the sixth embodiment, whether the pixel value difference at the subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (1) described above is discriminated by checking whether the subject position A of noise discrimination is a black isolated point in the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) (the noise discrimination subject area indicated by hatching in FIG. 20) centered on the subject position A. If the subject position A is a black isolated point, the flow advances to step S612. Otherwise, the flow advances to step S607.
  • [0223] In step S607, as in the sixth embodiment, whether the pixel value difference at the subject position of noise discrimination (e.g., the position A in FIG. 20) satisfies condition (2) described above is discriminated by checking whether the subject position A of noise discrimination is a white isolated point in the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) (the noise discrimination subject area indicated by hatching in FIG. 20) centered on the subject position A. If the subject position A is a white isolated point, the flow advances to step S609. Otherwise, the flow advances to step S608.
  • [0224] In step S608, whether the pixel value difference (absolute value) at the subject position A of noise discrimination is larger than threshold value 1 is discriminated by referring to the binary image data stored in the bit map data holding area corresponding to the subject position A of noise discrimination and checking whether the pixel at the subject position A of noise discrimination is a black pixel. If the subject position A is a black pixel, the flow advances to step S609. Otherwise, the flow advances to step S612.
  • [0225] In step S609, the number of change pixels is increased by one to set a new number of change pixels. In step S610, it is checked whether the number of change pixels updated in step S609 is larger than predetermined threshold value 3. If YES in step S610, the flow advances to step S611. If NO in step S610, the flow advances to step S612.
  • [0226] In step S611, it is discriminated that an image change has occurred, and the processing in the flow chart of FIG. 23 is terminated. In step S612, it is checked whether all pixels have been processed. If YES in step S612, the flow advances to step S613 to discriminate that no image change has occurred. The processing in the flow chart of FIG. 23 is then terminated. If NO in step S612, the flow advances to step S614.
  • [0227] In step S614, processing is shifted to the next pixel. That is, both the pixel value difference calculation position and the subject position of noise discrimination are shifted to the next pixels to set a new subject pixel for noise discrimination. The flow then returns to step S602.
  • [0228] In the above processing, the nine-pixel area (3 pixels (vertical)×3 pixels (horizontal)) centered on a subject position of noise discrimination may include a portion where no corresponding binary image data exists, as in a case wherein the subject position is at an end portion (upper, lower, left, or right end) of the image. In this case, such a portion is handled as white pixel data.
  • [0229] In the seventh embodiment, as in the sixth embodiment, an image change can be detected without being affected by, e.g., change components produced randomly in the image because of flickering of a light source or mixing of noise in a photoelectric scanning unit, an electronic circuit unit, or the like. That is, an image change can be accurately detected.
  • [0230] The eighth embodiment of the present invention will be described next.
  • [0231] In the sixth and seventh embodiments described above, two subject pixel positions, i.e., “the subject position of noise discrimination” and “the pixel value difference calculation position”, like the positions A and B in FIG. 20, are set, and an appropriate difference ((the number of pixels of the image in the main scanning direction)+two pixels) is set between the two positions. An image change is detected by performing raster scanning once for every still image. However, the present invention is not limited to this.
  • [0232] For example, raster scanning may be performed twice for every still image. In the first scanning operation, the pixel value differences in one entire image are calculated, and binary image data representing a discrimination result indicating whether each calculated pixel value difference exceeds predetermined threshold value 1 are formed. In the second scanning operation for the same still image, noise discrimination and discrimination of the image as a change image may be performed.
  • [0233] In addition, the difference to be set between the two subject pixel positions (the subject position of noise discrimination and the pixel value difference calculation position) is not limited to ((the number of pixels of an image in the main scanning direction)+two pixels). For example, a difference of ((the number of pixels of an image in the main scanning direction)×2+three pixels) may be set.
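The two offsets quoted above (width+2 for a 3×3 area, 2×width+3 for a 5×5 area) follow one pattern in raster-scan order: the calculation position must lead the discrimination position by one full neighbourhood radius vertically plus radius+1 pixels horizontally. The generalisation below to an arbitrary radius is an assumption inferred from the two cases given in the text, and the function name is illustrative.

```python
def scan_offset(width, radius):
    """Offset, in raster-scan pixels, between the pixel value difference
    calculation position (B) and the subject position of noise
    discrimination (A), so that the full (2*radius+1)^2 neighbourhood of
    A is already available: width + 2 for a 3x3 area (radius 1),
    2*width + 3 for a 5x5 area (radius 2)."""
    return radius * width + radius + 1
```

With `width = x` pixels in the main scanning direction, `scan_offset(x, 1)` reproduces the (x+2) figure used for the first temporary buffer.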
  • [0234] In this case, erroneous extraction discrimination can be performed by referring to a 25-pixel area (5 pixels (vertical)×5 pixels (horizontal)) centered on the subject position of noise discrimination. For example, it is discriminated that a change component has been erroneously extracted if the central pixel in the 25-pixel area is a white pixel but all the surrounding 24 pixels are black pixels, or if 22 or more of the surrounding 24 pixels have values different from the value of the central pixel.
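The 5×5 rule above can be sketched as a single predicate: since the "white centre with 24 black neighbours" case is a special case of 24 disagreements, the "22 or more disagreeing neighbours" test subsumes it. The function name is illustrative, and out-of-range positions are treated as white, mirroring the 3×3 border rule.

```python
def is_misextracted_5x5(bitmap, y, x):
    """5x5 variant of the noise discrimination described in the text:
    the centre pixel is judged erroneously extracted (or omitted) when
    22 or more of its 24 neighbours disagree with it. Out-of-range
    positions count as white (0)."""
    h, w = len(bitmap), len(bitmap[0])
    get = lambda r, c: bitmap[r][c] if 0 <= r < h and 0 <= c < w else 0
    centre = get(y, x)
    disagree = sum(1 for dy in range(-2, 3) for dx in range(-2, 3)
                   if (dy, dx) != (0, 0) and get(y + dy, x + dx) != centre)
    return disagree >= 22
```

Relaxing the threshold from "all 24" to "22 or more" makes the discrimination tolerant of a couple of stray pixels in the larger neighbourhood.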
  • [0235] The ninth embodiment of the present invention will be described next.
  • [0236] In the ninth embodiment, as shown in FIG. 24, differential image forming step 107 is inserted between image input step 101 and change component extraction step 102 in the procedure shown in FIG. 17 in the sixth to eighth embodiments.
  • [0237] In differential image forming step 107, differential processing using, e.g., a Sobel operator, is performed for the input image input in image input step 101 (an image having undergone differential processing will be referred to as a differential image hereinafter). The other processing steps 101 to 106 are performed in the same manner as in the sixth to eighth embodiments. However, in change component extraction step 102, the same processing as that in the sixth to eighth embodiments is performed not for an input image but for a differential image. In addition, in newest change image storage step 105, a differential image is stored instead of an input image.
  • [0238] With this operation, although the processing is more complicated, the apparatus can be designed not to detect an image change due to a change in illumination. In some applications of the present invention, it is preferable that an image change due to a change in illumination not be detected. A similar effect can be expected in the sixth to eighth embodiments as well, depending on the method of setting threshold values. In this embodiment, however, there is no need to consider a change in illumination in setting a threshold value.
  • FIG. 25 is a block diagram showing an example of a motion image processing apparatus for performing the processing in FIG. 24. Note that the arrangement shown in FIG. 25 can be realized by the hardware shown in FIG. 3, as in the sixth to ninth embodiments. [0239]
  • More specifically, a differential image forming unit 119 in FIG. 25 can be constituted by the CPU 21 which operates in accordance with programs stored in the ROM 22 or the RAM 23 in FIG. 3, and the RAM 23 used as work memory or the disk unit 24. The differential image forming unit 119 performs differential processing for an input image stored in an input image storage unit 112, and stores the resultant differential image in a differential image storage unit 120. [0240]
  • In addition, for example, the differential image storage unit 120 can be constituted by the RAM 23 or the disk unit 24 in FIG. 3, similar to the input image storage unit 112 and a newest change image storage unit 116 in FIG. 25. As is apparent, the respective storage units 112, 116, and 120 may be constituted by dedicated storage devices. [0241]
  • The remaining means in FIG. 25 are the same as those in the sixth to ninth embodiments. However, a change component extraction unit 113 performs the same processing as that in the sixth to ninth embodiments with respect to a differential image stored in the differential image storage unit 120 instead of an input image stored in the input image storage unit 112. The newest change image storage unit 116 does not store an input image stored in the input image storage unit 112 as a newest change image but stores a differential image stored in the differential image storage unit 120. [0242]
  • The arrangement shown in FIG. 25 may be replaced with the arrangement shown in FIG. 26, as in the sixth to ninth embodiments. More specifically, the operation mode of the differential image storage unit 120 and the newest change image storage unit 116 in FIG. 25 can be changed such that two image storage units 118a and 118b are selectively used upon a switching operation, as shown in FIG. 26. With this arrangement, the processing of copying a differential image from the differential image storage unit 120 to the newest change image storage unit 116, which is performed in the case shown in FIG. 25, can be omitted. [0243]
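The role-switching arrangement of FIG. 26 can be sketched as a simple double buffer; the class and method names below are hypothetical and only illustrate the copy-free switching idea:

```python
class DoubleBuffer:
    """Two image stores whose roles switch instead of copying (FIG. 26).

    One store holds the current (differential) input image, the other the
    newest change image used as the reference.  On a detected change the
    indices swap, so no image copy between units 120 and 116 is needed.
    """

    def __init__(self):
        self._stores = [None, None]   # the two image storage units
        self._input = 0               # index of the store holding the input

    def store_input(self, image):
        self._stores[self._input] = image

    @property
    def reference(self):
        return self._stores[1 - self._input]

    def on_change_detected(self):
        # O(1) role switch: the just-stored input becomes the reference.
        self._input = 1 - self._input
```

Swapping an index is constant-time regardless of frame size, whereas the copy performed in the FIG. 25 arrangement grows with the number of pixels.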
  • The first to ninth embodiments can be effectively applied to video conference systems and monitoring systems. [0244]
  • The present invention can be practiced in other various forms without departing from its spirit and scope. [0245]
  • In other words, the foregoing description of the embodiments has been given for illustrative purposes only and is not to be construed as imposing any limitation in any respect. [0246]
  • The scope of the invention is, therefore, to be determined solely by the following claims and is not limited by the text of the specification; alterations made within a scope equivalent to the scope of the claims fall within the true spirit and scope of the invention. [0247]

Claims (24)

What is claimed is:
1. An image processing apparatus comprising:
a) input means for inputting a continuous image signal;
b) detection means for detecting a frame change in an image by comparing the image signal input by said input means with a reference image signal; and
c) storage means for updating/storing the image signal input by said input means as the reference image signal in units of frames in accordance with an output from said detection means.
2. An apparatus according to claim 1, wherein the reference image signal is a newest change image signal obtained when a newest change of frame changes in the past is detected.
3. An apparatus according to claim 1, further comprising output means for outputting the image signal outside in units of frames in accordance with an output from said detection means.
4. An apparatus according to claim 3, wherein said output means outputs the image signal to an external unit via a communication path.
5. An apparatus according to claim 1, wherein said detection means calculates a pixel value difference between each pair of corresponding pixels using the image signal and the reference image signal, and determines, if a sum total of pixel value differences in an entire frame is larger than a predetermined threshold value, that a frame change has occurred.
6. An apparatus according to claim 1, wherein said detection means calculates a pixel value difference between each pair of corresponding pixels using the image signal and the reference image signal, determines, if a corresponding pixel value difference is larger than a first threshold value, that a pixel change has occurred, and determines, if the number of pixels having undergone changes in an entire frame is larger than a second threshold value, that a frame change has occurred.
7. An apparatus according to claim 6, wherein said detection means determines that there is no change in a subject pixel, even if a pixel value difference corresponding to the subject pixel is larger than the first threshold value, in accordance with a detection result concerning a plurality of pixels adjacent to the subject pixel.
8. An apparatus according to claim 1, wherein said detection means divides the image signal and the reference image signal into a plurality of blocks, calculates a sum total of pixel value differences between corresponding pixels using the image signal and the reference image signal in units of blocks, determines, if the sum total is larger than a first threshold value, that the corresponding block has undergone a change, and determines, if the number of blocks having undergone changes in the entire frame is larger than a second threshold value, that a frame change has occurred.
9. An apparatus according to claim 1, wherein said detection means divides the image signal and the reference image signal into a plurality of blocks, calculates a pixel value difference between each pair of pixels corresponding to the image signal and the reference image signal, determines, if each pixel value difference is larger than a first threshold value, that a corresponding pixel has undergone a change, determines, if the number of pixels having undergone changes in a block is larger than a second threshold value, that the corresponding block has undergone a change, and determines, if the number of blocks having undergone changes in an entire frame is larger than a third threshold value, that a frame change has occurred.
10. An apparatus according to claim 9, wherein said detection means determines that there is no change in a subject pixel, even if a pixel value difference corresponding to the subject pixel is larger than the first threshold value, in accordance with a detection result concerning a plurality of pixels adjacent to the subject pixel.
11. An apparatus according to claim 1, wherein said detection means forms a differential image signal by performing differential processing for the image signal, and detects a frame change on the basis of the differential image signal.
12. An apparatus according to claim 1, wherein said image processing apparatus is applied to a video conference system.
13. An apparatus according to claim 1, wherein said image processing apparatus is applied to a monitoring system.
14. An image processing apparatus comprising:
a) input means for inputting a continuous image signal;
b) change component extraction means for extracting change components between images by comparing the image signal input by said input means with a reference image signal;
c) erroneous extraction correction means for detecting and removing an erroneously extracted change component from the change components extracted by said change component extraction means; and
d) image change discrimination means for discriminating an image change in the image signal on the basis of the change component corrected by said erroneous extraction correction means.
15. An apparatus according to claim 14, wherein the reference image signal is a newest change image signal obtained when a newest change of image changes in the past is detected.
16. An apparatus according to claim 14, further comprising storage means for updating/storing the image signal as the reference image signal in accordance with an output from said image change discrimination means.
17. An apparatus according to claim 14, further comprising output means for outputting the image signal outside in accordance with an output from said image change discrimination means.
18. An apparatus according to claim 14, wherein said change component extraction means forms a differential image signal by performing differential processing for the image signal, and detects the change components on the basis of the differential image signal.
19. An apparatus according to claim 14, wherein said image processing apparatus is applied to a video conference system.
20. An apparatus according to claim 14, wherein said image processing apparatus is applied to a monitoring system.
21. An image processing apparatus comprising:
a) input means for inputting a continuous image signal;
b) input image storage means for storing the image signal input by said input means;
c) detection means for detecting an image change by comparing the image signal input by said input means with a reference image signal; and
d) reference image storage means for updating/storing the image signal input by said input means as the reference image signal in accordance with an output from said detection means,
wherein said input image storage means and said reference image storage means are constituted by two image storage means, whose roles are switched in accordance with an output from said detection means.
22. An apparatus according to claim 21, further comprising output means for outputting the image signal in said input image storage means outside in accordance with an output from said detection means.
23. An image processing method comprising:
a) the input step of inputting a continuous image signal;
b) the detection step of detecting a frame change in an image by comparing the image signal input in the input step with a reference image signal; and
c) the storage step of updating/storing the image signal input in the input step as the reference image signal in units of frames in accordance with an output in the detection step.
24. An image processing method comprising:
a) the input step of inputting a continuous image signal;
b) the change component extraction step of extracting change components between images by comparing the image signal input in the input step with a reference image signal;
c) the erroneous extraction correction step of detecting and removing an erroneously extracted change component from the change components extracted in the change component extraction step; and
d) the image change discrimination step of discriminating an image change in the image signal on the basis of the change component corrected in the erroneous extraction correction step.
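Purely as an illustrative sketch (the claims, not this code, define the invention), the two-threshold detection recited in claim 6 — a per-pixel difference threshold followed by a changed-pixel-count threshold over the entire frame — might look like:

```python
import numpy as np

def frame_changed(frame: np.ndarray, reference: np.ndarray,
                  pixel_thresh: float, count_thresh: int) -> bool:
    # First threshold: per-pixel value difference against the reference.
    diff = np.abs(frame.astype(float) - reference.astype(float))
    changed_pixels = int(np.count_nonzero(diff > pixel_thresh))
    # Second threshold: number of changed pixels in the entire frame.
    return changed_pixels > count_thresh
```

The function and parameter names are assumptions; claim 5's variant would instead compare the sum total of all pixel value differences against a single threshold.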
US10/656,128 1995-05-26 2003-09-08 Image processing apparatus and method Abandoned US20040046896A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/656,128 US20040046896A1 (en) 1995-05-26 2003-09-08 Image processing apparatus and method

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
JP7-128378 1995-05-26
JP7128378A JPH08320935A (en) 1995-05-26 1995-05-26 Method and device for processing moving image
JP8024337A JPH09219853A (en) 1996-02-09 1996-02-09 Moving image processing method and its device
JP8-024337 1996-02-09
US08/651,348 US6661838B2 (en) 1995-05-26 1996-05-22 Image processing apparatus for detecting changes of an image signal and image processing method therefor
US10/656,128 US20040046896A1 (en) 1995-05-26 2003-09-08 Image processing apparatus and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US08/651,348 Continuation US6661838B2 (en) 1995-05-26 1996-05-22 Image processing apparatus for detecting changes of an image signal and image processing method therefor

Publications (1)

Publication Number Publication Date
US20040046896A1 true US20040046896A1 (en) 2004-03-11

Family

ID=26361836

Family Applications (2)

Application Number Title Priority Date Filing Date
US08/651,348 Expired - Fee Related US6661838B2 (en) 1995-05-26 1996-05-22 Image processing apparatus for detecting changes of an image signal and image processing method therefor
US10/656,128 Abandoned US20040046896A1 (en) 1995-05-26 2003-09-08 Image processing apparatus and method

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US08/651,348 Expired - Fee Related US6661838B2 (en) 1995-05-26 1996-05-22 Image processing apparatus for detecting changes of an image signal and image processing method therefor

Country Status (1)

Country Link
US (2) US6661838B2 (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060119597A1 (en) * 2004-12-03 2006-06-08 Takahiro Oshino Image forming apparatus and method
US20090160995A1 (en) * 2007-12-20 2009-06-25 Masaki Kohama Display device, photographing apparatus, and display method
US20130176310A1 (en) * 2010-07-09 2013-07-11 Sylvain Lefebvre Image synthesis device
US20150125036A1 (en) * 2009-02-13 2015-05-07 Yahoo! Inc. Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting
US9569831B2 (en) 2014-09-16 2017-02-14 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable storage medium for extracting information embedded in a printed material
US9602691B2 (en) 2014-09-12 2017-03-21 Canon Kabushiki Kaisha Image processing apparatus that determines code corresponding to a digital watermark embedded in an image
US9769380B2 (en) 2014-09-12 2017-09-19 Canon Kabushiki Kaisha Image processing apparatus, method, and storage medium for obtaining watermark information
US11265441B2 (en) 2020-01-06 2022-03-01 Canon Kabushiki Kaisha Information processing apparatus to process captured image data, information processing method, and storage medium

Families Citing this family (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6661838B2 (en) * 1995-05-26 2003-12-09 Canon Kabushiki Kaisha Image processing apparatus for detecting changes of an image signal and image processing method therefor
JP2000308021A (en) * 1999-04-20 2000-11-02 Niigata Seimitsu Kk Image processing circuit
US7083298B2 (en) * 2001-10-03 2006-08-01 Led Pipe Solid state light source
US7321623B2 (en) 2002-10-01 2008-01-22 Avocent Corporation Video compression system
US20060126718A1 (en) * 2002-10-01 2006-06-15 Avocent Corporation Video compression encoder
US7006701B2 (en) * 2002-10-09 2006-02-28 Koninklijke Philips Electronics N.V. Sequential digital image compression
US7606391B2 (en) 2003-07-25 2009-10-20 Sony Corporation Video content scene change determination
US9560371B2 (en) * 2003-07-30 2017-01-31 Avocent Corporation Video compression system
US7457461B2 (en) 2004-06-25 2008-11-25 Avocent Corporation Video compression noise immunity
US7667776B2 (en) * 2006-02-06 2010-02-23 Vixs Systems, Inc. Video display device, video encoder, noise level estimation module and methods for use therewith
JP4876605B2 (en) * 2006-02-07 2012-02-15 富士ゼロックス株式会社 Electronic conference system, electronic conference support program, and conference control terminal device
US7555570B2 (en) 2006-02-17 2009-06-30 Avocent Huntsville Corporation Device and method for configuring a target device
US8718147B2 (en) 2006-02-17 2014-05-06 Avocent Huntsville Corporation Video compression algorithm
MY149291A (en) * 2006-04-28 2013-08-30 Avocent Corp Dvc delta commands
US20150022626A1 (en) 2012-02-10 2015-01-22 Ibrahim Nahla Data, Multimedia & Video Transmission Updating System
JP5997541B2 (en) * 2012-08-10 2016-09-28 キヤノン株式会社 Image signal processing apparatus and control method thereof
US10678259B1 (en) * 2012-09-13 2020-06-09 Waymo Llc Use of a reference image to detect a road obstacle
KR102007369B1 (en) * 2012-11-27 2019-08-05 엘지디스플레이 주식회사 Timing controller, driving method thereof, and display device using the same
KR102299378B1 (en) 2016-08-24 2021-09-07 구글 엘엘씨 Map interface update system based on change detection
JP6740454B2 (en) 2016-08-24 2020-08-12 グーグル エルエルシー Change Detection Based Image Acquisition Tasking System
EP3477594B1 (en) * 2017-10-27 2020-12-30 Vestel Elektronik Sanayi ve Ticaret A.S. Detecting motion of a moving object and transmitting an image of the moving object
EP3985957B1 (en) 2020-10-14 2022-11-30 Axis AB Method and system for motion segmentation

Citations (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4777526A (en) * 1986-06-23 1988-10-11 Sony Corporation Security monitor system
US4855825A (en) * 1984-06-08 1989-08-08 Valtion Teknillinen Tutkimuskeskus Method and apparatus for detecting the most powerfully changed picture areas in a live video signal
US5091780A (en) * 1990-05-09 1992-02-25 Carnegie-Mellon University A trainable security system emthod for the same
US5389965A (en) * 1993-04-01 1995-02-14 At&T Corp. Video telephone station having variable image clarity
US5410356A (en) * 1991-04-19 1995-04-25 Matsushita Electric Industrial Co., Ltd. Scanning-line interpolation apparatus
US5455561A (en) * 1994-08-02 1995-10-03 Brown; Russell R. Automatic security monitor reporter
US5493345A (en) * 1993-03-08 1996-02-20 Nec Corporation Method for detecting a scene change and image editing apparatus
US5502482A (en) * 1992-08-12 1996-03-26 British Broadcasting Corporation Derivation of studio camera position and motion from the camera image
US5528313A (en) * 1991-12-26 1996-06-18 Sony Corporation Motion detection circuit which suppresses detection spread
US5548346A (en) * 1993-11-05 1996-08-20 Hitachi, Ltd. Apparatus for integrally controlling audio and video signals in real time and multi-site communication control method
US5635982A (en) * 1994-06-27 1997-06-03 Zhang; Hong J. System for automatic video segmentation and key frame extraction for video sequences having both sharp and gradual transitions
US5654772A (en) * 1993-04-10 1997-08-05 Robert Bosch Gmbh Method for change detection in moving images
US5664029A (en) * 1992-05-13 1997-09-02 Apple Computer, Inc. Method of disregarding changes in data in a location of a data structure based upon changes in data in nearby locations
US5701160A (en) * 1994-07-22 1997-12-23 Hitachi, Ltd. Image encoding and decoding apparatus
US5731832A (en) * 1996-11-05 1998-03-24 Prescient Systems Apparatus and method for detecting motion in a video signal
US5732146A (en) * 1994-04-18 1998-03-24 Matsushita Electric Industrial Co., Ltd. Scene change detecting method for video and movie
US5748775A (en) * 1994-03-09 1998-05-05 Nippon Telegraph And Telephone Corporation Method and apparatus for moving object extraction based on background subtraction
US5751346A (en) * 1995-02-10 1998-05-12 Dozier Financial Corporation Image retention and information security system
US5793428A (en) * 1993-06-16 1998-08-11 Intel Corporation Self-encoded deltas for digital video data transmission
US5805733A (en) * 1994-12-12 1998-09-08 Apple Computer, Inc. Method and system for detecting scenes and summarizing video sequences
US5838365A (en) * 1994-09-20 1998-11-17 Fujitsu Limited Tracking apparatus for tracking image in local region
US5909252A (en) * 1992-01-29 1999-06-01 Mitsubishi Denki Kabushiki Kaisha High efficiency encoder and video information recording/reproducing apparatus
US6304606B1 (en) * 1992-09-16 2001-10-16 Fujitsu Limited Image data coding and restoring method and apparatus for coding and restoring the same
US6606636B1 (en) * 1993-07-29 2003-08-12 Canon Kabushiki Kaisha Method and apparatus for retrieving dynamic images and method of and apparatus for managing images
US6661838B2 (en) * 1995-05-26 2003-12-09 Canon Kabushiki Kaisha Image processing apparatus for detecting changes of an image signal and image processing method therefor

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPS62147888A (en) * 1985-12-23 1987-07-01 Matsushita Electric Works Ltd Picture monitoring system
JPS635675A (en) * 1986-06-25 1988-01-11 Matsushita Electric Works Ltd Image monitoring system
DE4332753C2 (en) * 1993-09-25 1997-01-30 Bosch Gmbh Robert Process for the detection of moving objects

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060119597A1 (en) * 2004-12-03 2006-06-08 Takahiro Oshino Image forming apparatus and method
US20090160995A1 (en) * 2007-12-20 2009-06-25 Masaki Kohama Display device, photographing apparatus, and display method
US8149317B2 (en) * 2007-12-20 2012-04-03 Fujifilm Corporation Display device, photographing apparatus, and display method
US20150125036A1 (en) * 2009-02-13 2015-05-07 Yahoo! Inc. Extraction of Video Fingerprints and Identification of Multimedia Using Video Fingerprinting
US20130176310A1 (en) * 2010-07-09 2013-07-11 Sylvain Lefebvre Image synthesis device
US9251619B2 (en) * 2010-07-09 2016-02-02 Inria Institut National De Recherche En Informatique Et En Automatique Image synthesis device
US9602691B2 (en) 2014-09-12 2017-03-21 Canon Kabushiki Kaisha Image processing apparatus that determines code corresponding to a digital watermark embedded in an image
US9769380B2 (en) 2014-09-12 2017-09-19 Canon Kabushiki Kaisha Image processing apparatus, method, and storage medium for obtaining watermark information
US9569831B2 (en) 2014-09-16 2017-02-14 Canon Kabushiki Kaisha Image processing apparatus, image processing method, and non-transitory computer-readable storage medium for extracting information embedded in a printed material
US11265441B2 (en) 2020-01-06 2022-03-01 Canon Kabushiki Kaisha Information processing apparatus to process captured image data, information processing method, and storage medium

Also Published As

Publication number Publication date
US6661838B2 (en) 2003-12-09
US20020044607A1 (en) 2002-04-18

Similar Documents

Publication Publication Date Title
US6661838B2 (en) Image processing apparatus for detecting changes of an image signal and image processing method therefor
US8045825B2 (en) Image processing apparatus and method for composition of real space images and virtual space images
EP0686943B1 (en) Differential motion detection method using background image
JP2595158B2 (en) Automatic picture / character separation apparatus for image information and its method
US5087969A (en) Unmanned vehicle control system with guide line detection
US20060221181A1 (en) Video ghost detection by outline
US6476806B1 (en) Method and apparatus for performing occlusion testing while exploiting frame to frame temporal coherence
US6411339B1 (en) Method of spatio-temporally integrating/managing a plurality of videos and system for embodying the same, and recording medium for recording a program for the method
US7724921B2 (en) Vehicle-onboard driving lane recognizing apparatus
US6173082B1 (en) Image processing apparatus and method for performing image processes according to image change and storing medium storing therein image processing programs
US6999621B2 (en) Text discrimination method and related apparatus
JP3134845B2 (en) Apparatus and method for extracting object in moving image
JP2001043383A (en) Image monitoring system
US6750986B1 (en) Color image processing method with thin-line detection and enhancement
JP2008269181A (en) Object detector
US11272172B2 (en) Image processing apparatus, failure detection method performed by image processing apparatus, and non-transitory recording medium
JPH057363A (en) Picture monitoring device
CN111127362A (en) Video dedusting method, system and device based on image enhancement and storage medium
JPH09198505A (en) Line position detector
JPH09219853A (en) Moving image processing method and its device
JP4282512B2 (en) Image processing apparatus, binarization threshold management method in image processing apparatus, and image processing program
US6778296B1 (en) Color imaging processing method with boundary detection and enhancement
KR940003654B1 (en) Method of processing digital data
JPS6250971A (en) Pattern discriminating device
JPH0442714B2 (en)

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION