CA1196087A - Method for eliminating motion induced flicker in a video image - Google Patents

Method for eliminating motion induced flicker in a video image

Info

Publication number
CA1196087A
CA1196087A · CA000426363A · CA426363A
Authority
CA
Canada
Prior art keywords
pel
value
image
difference
encoded
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired
Application number
CA000426363A
Other languages
French (fr)
Inventor
Joan L. Mitchell
William B. Pennebaker
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
International Business Machines Corp
Original Assignee
International Business Machines Corp
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by International Business Machines Corp filed Critical International Business Machines Corp
Application granted granted Critical
Publication of CA1196087A publication Critical patent/CA1196087A/en
Expired legal-status Critical Current

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51Motion estimation or motion compensation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/254Analysis of motion involving subtraction of images

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Signal Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

METHOD FOR ELIMINATING MOTION INDUCED
FLICKER IN A VIDEO IMAGE

ABSTRACT

In an image processing system, a method is shown for efficiently transmitting data representing picture information in fields subsequent to a first field and for suppressing motion induced flicker in the processed image. Using previously processed first field data, which is used to predict values for picture elements in subsequent fields, a gradient value is calculated which indicates relative picture activity. A difference value for the picture elements in the subsequent fields is calculated and is limited in magnitude by the gradient value for each picture element.
The limited difference value is encoded as a function of the previous difference value and transmitted to a remote location for decoding, picture reconstruction and display.

Description

METHOD FOR ELIMINATING MOTION INDUCED
FLICKER IN A VIDEO IMAGE

BACKGROUND OF THE INVENTION

The present invention relates to image data processing systems and more particularly to image data processing systems wherein the image data is processed and compressed to eliminate motion induced flicker in a displayed image and to improve efficiency of storage and transmission of image data between remote stations.

PRIOR ART

There have been many systems developed for compressing digital data representing picture elements in a video image. Among these are the techniques set forth in Canadian patent applications No. 390,460, filed November 19, 1981, entitled "Dynamic Stack Data Compression and Decompression System", by Joan L. Mitchell, and "Gray Scale Image Data Compression with Code Words a Function of Image History", Canadian Application No. 399,059, filed March 23, 1982, by D. Anastassiou and J. L. Mitchell.

The prior art of which the inventors of the present invention are aware with respect to data compression techniques is set forth in the above-identified Canadian applications.

With respect to elimination of motion induced flicker in video images, the inventors are unaware of any prior art which relates to the technique disclosed and claimed herein.

However, an article "Entropy Measurement for Nonadaptive and Adaptive, Frame-to-Frame, Linear Predictive Coding of Video-Telephone Signals" by B. G. Haskell, published in the Bell System Technical Journal, Vol. 54, No. 6, pages 1155 through 1174 (1975), mentions a second field pel predictor as an average of the first field pels above and below the pel to be predicted.

The article mentions only a prediction of a second field pel but does not relate a complete system for eliminating motion induced flicker through calculation of characteristics of a first and a second field in a multifield per frame video image.
A good discussion and description of a data encoding method that achieves excellent data compression is set forth in the copending patent application of Anastassiou and Mitchell referred to above.

The gray scale image data compression method set forth therein provides an excellent means for encoding and reconstructing a first field in a multifield per frame image display.

SUMMARY OF THE INVENTION

It is an object of the present invention to eliminate motion induced flicker in a multifield per frame image display.

It is another object of the present invention to compress data required for transmission of a second and subsequent fields of a multifield per frame image display.

Yet another object of the present invention is to predict a gray scale value for each pel in a second and subsequent fields of a multifield per frame image display.

Still another object of the present invention is to eliminate motion induced flicker in a multifield per frame image display by calculating gradient characteristics between a first field and subsequent fields in a multifield per frame display.

It is an advantage of the present invention that motion induced flicker is suppressed and data transmission requirements are reduced through the image processing method according to the present invention.

Other objects, features and advantages of the present invention will become apparent with reference to the following detailed description and drawing of a preferred embodiment of the invention.


BRIEF DESCRIPTION OF THE DRAWING

FIG. 1 is a block diagram of image display system apparatus for performing the method of the present invention.

FIG. 2 is a flow diagram of an encoding method according to the present invention.

FIG. 3 is a flow diagram of a decoding method according to the method of the present invention.

FIG. 4 is a schematic diagram showing the relationship of a number of lines in a first and second field of a multifield per frame image display according to the present invention.

FIG. 5 is a schematic diagram showing the relationships between various pel locations in a multifield per frame image display according to the present invention.

FIG. 6, which includes FIGS. 6.1, 6.2 and 6.3, is a flow chart showing a data transmission encoding process for efficient data transmission of image information according to the present invention.
FIG. 7 is a look up table chart for determining encoding state in accordance with the method of the present invention.

FIG. 8, which includes FIGS. 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7 and 8.8, is a flow chart setting forth a transmitted data decoding process in accordance with the present invention.


FIG. 9 is a transmitted data decode look up table chart for use with the decoding process set forth in FIG. 8 according to the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
OF THE INVENTION

In both freeze frame and real-time video transmission, data compression can be used to reduce the communication and storage costs. A method is described herein for compression of the second and subsequent fields of a video image. The described method results in significant data compression improvement for text images as well as for graphics images. Compared with prior known methods for compressing intraframe video images, the method described herein improves compression by approximately a factor of five.
An important objective of the described method is the suppression of motion induced flicker in freeze frame images.

If the video equipment employs the NTSC (National Television Systems Committee) standard (other standards are similar in their general properties), the video image is captured as two distinct interlaced fields, 1/60th of a second apart.

In the prior art image display systems only one field is often used, both to limit the quantity of data and to avoid flicker introduced when the object moves during image capture. If the object moves, the two sequentially captured images are not properly superimposed in the displayed image. The interlace no longer provides an effective refresh rate of 1/60th of a second; the parts of the image which do not overlap are refreshed at a rate of 1/30th of a second, which is below the critical flicker fusion frequency.

A method is described herein which suppresses motion induced flicker and improves data compression and which is ideally suited to the high speed non-recursive processing provided in currently available image processing units.

FIG. 1 shows apparatus for capturing, processing and transmitting a video image which can support the method of the present invention.

Camera 12 is focussed on an object to be displayed which may be placed on or in front of a display board 14. The image information is transmitted from camera 12 to image processor 16 where it is digitized and processed in accordance with prior art data compression methods with respect to a first field of each frame and by the method of the present invention with respect to second and subsequent fields of each frame. Image processor 16 operates under the control of system controller 18 which controls all data processing and transmission functions of the system shown in FIG. 1. The processed image from image processor 16 is transmitted to local display 20 and to system controller 18 where it is re-transmitted over a communication circuit 22 to a remote location system controller 24. At the remote location, system controller 24, image processor 26, camera 28, and display 30 are identical apparatus to the respective counterparts camera 12, image processor 16, system controller 18 and display 20.

The transmitted image data on transmission circuit 22 is sent to system controller 24 where it is decoded. The decoded data is then reconstructed by image processor 26 into a video image to be displayed on display 30, all in accordance with the method of the present invention.

The apparatus discussed with reference to FIG. 1 is commercially available and compatible without modification. For example, camera 12 may be any standard vidicon camera with an appropriate lens system, such as a Cohu Model 4400; image processor 16 and display 20 may be implemented by a Grinnell Model GMR-270 image processing display system; and system controller 18 may be implemented by an IBM Series/1 computer system with a standard keyboard for data and program entry, a standard operator display and a disk storage device for storage of program and data.

The encoding method of the present invention will be generally described with reference to FIG. 2.

The method is a form of differential pulse code modulation coding and includes the following steps:

First - the first field of the video image is encoded and reconstructed in accordance with known methods such as that shown in the copending Canadian application No. 399,059 by Anastassiou and Mitchell. Both the encoder (local station) and the decoder (remote station) store a copy of the reconstructed first field in an image memory in image processors 16 and 26 for use in predicting pel values and reconstructing the second field. This reconstructed first field can be repeated twice in a display refresh buffer to produce an acceptable intermediate image.
(Alternately, the predicted pel values from Step 2 for the second field can be used for an even better quality intermediate image.)

Second - the value for each pel in the second and subsequent field of the video image is predicted from the reconstructed first field information.
One implementation of the predictor for the gray scale value of second field pels is to average the value of the pels on first field lines immediately above and below the current pel position in the second field. For example, see FIGS. 4 and 5.
FIG. 4 shows a representation of a number of interlaced lines wherein lines A, C, E, and G represent lines in a first field of an interlaced multifield per frame image display and lines B, D and F represent lines of a second field. Lines A, C, E and G are all identified with the first field identifier f1 and lines B, D and F are identified with the second field identifier f2. FIG. 5 shows a sampling of a number of pel positions of first field lines A and C and second field line B. Pel positions on line A are labelled from n-6 through n+6 inclusive. Pel positions on line B are labelled p-6 through p+6 inclusive and pel positions on line C are labelled q-6 through q+6 inclusive. If pel position p in second field line B is to be predicted according to the averaging method, the value of pel n immediately above pel p is added to the value of pel q, the pel immediately below pel p, and the result is divided by 2 to achieve an average predicted value for pel p: p = (n + q) / 2.

An average value predictor such as that discussed in the Haskell publication referred to above is well known in the art. The predictions for the entire second field can be calculated and stored in a small fraction of a second in an image processing system such as described with respect to FIG. 1. The predicted second field pel values are stored in an image memory in image processor 16.
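As an illustration of this prediction step (not part of the patent text), a minimal Python sketch is given below; the use of NumPy and the array layout, one row per first field line, are assumptions of the sketch.

    import numpy as np

    def predict_second_field(first_field):
        # first_field: 2-D uint8 array, one row per reconstructed first field
        # line (lines A, C, E, G ... of FIG. 4).  Each predicted second field
        # line is the average of the first field lines above and below it.
        above = first_field[:-1].astype(np.int32)
        below = first_field[1:].astype(np.int32)
        return ((above + below) // 2).astype(np.uint8)   # p = (n + q) / 2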

Third - the reconstructed first field data can also be used to predict regions of strong activity, such as by calculating the difference between pels above and below a current second field pel. Referring to the sample as shown in FIG. 5, a gradient value, GRAD, relative to pel p would be equal to the absolute magnitude of n-q divided by 2.

GRAD = |n - q| / 2

This is one method for producing a measure of gradient in the image. It should be noted that the gradient may commonly be referred to as a vertical gradient due to the standardized top to bottom scanning, where lines in a first field are above and below lines in a second field. However, it should be understood that if a different scanning standard were used, the invention set forth herein would equally apply.

Fourth - the magnitude of the vertical gradient calculated in the third step above is then quantized into one of four binary states by table lookup employing a table having a transfer characteristic such as is shown in Table I below.

TABLE I

QUANTIZATION LEVELS FOR VERTICAL GRADIENT MAGNITUDE

    GRADIENT MAGNITUDE (GRAD)          QUANTIZATION STATE
    (decimal)                          (binary)
    Minimum      Maximum
    0            ...                   00
    ...          ...                   01
    ...          ...                   10
    ...          255                   11

Only positive values are considered since GRAD is calculated from an absolute value and has no sign.

It should be noted that the first three quantization states represent relatively small changes in gradient magnitude while the fourth quantization state represents all gradient levels larger than the first three.

Table I shows that the four quantization states are encoded by two binary bits. These two binary bits representing quantization state are combined with three additional bits which are used for error detection and correction in subsequent steps of the method according to the present invention.
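A sketch of the third and fourth steps is given below (again, not part of the patent text); the numeric thresholds are placeholders, since the boundaries of Table I are only partly legible here, and the helper name is invented.

    import numpy as np

    # Placeholder thresholds; the actual Table I boundaries are assumed here.
    GRADIENT_THRESHOLDS = [4, 16, 64]   # upper edges of states 00, 01 and 10

    def quantized_gradient(first_field):
        # GRAD = |n - q| / 2 for each second field pel position, quantized
        # into one of four states 0..3 (binary 00..11) by threshold lookup.
        above = first_field[:-1].astype(np.int32)
        below = first_field[1:].astype(np.int32)
        grad = np.abs(above - below) // 2
        return np.digitize(grad, GRADIENT_THRESHOLDS)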

Fifth - a difference image is then calculated for the second field of each frame wherein the value of each pel in the predicted image is subtracted from the value of each corresponding pel in the original image previously calculated (in the second step, above) and stored. This step produces a difference image which may be quantized fairly coarsely by means of a lookup table, for example Table II below.

TABLE II

QUANTIZATION LEVELS FOR DIFFERENCE IMAGE

    DIFFERENCE RANGE (decimal)         QUANTIZATION STATE
    min          max                   (binary)
    ...          -89                   1101
    ...          -73                   1011
    -12          0                     0000 (remapped from 0001)
    ...          255                   1110

If the difference image value is negative, the least significant bit in the quantization state is set to 1.
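The fifth step might be sketched as follows; the magnitude boundaries are placeholders (Table II is only partly legible), and the packing of the state code, with the magnitude bin in the upper bits and the sign in the least significant bit, is an assumed representation consistent with the sign rule stated above.

    import numpy as np

    DIFF_BOUNDS = [12, 32, 64, 128]   # placeholder magnitude bin edges

    def quantize_difference(original_second_field, predicted_second_field):
        # Difference image: original second field minus the predicted image.
        diff = original_second_field.astype(np.int32) - predicted_second_field.astype(np.int32)
        magnitude_state = np.digitize(np.abs(diff), DIFF_BOUNDS)
        sign_bit = (diff < 0).astype(magnitude_state.dtype)   # LSB = 1 for negative
        # Note: Table II additionally remaps the smallest negative code to 0000;
        # that remapping is omitted in this sketch.
        return (magnitude_state << 1) | sign_bit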

Sixth - the maximum quantized gradient, M, in an area of the image for a pel and a predetermined number of nearest neighbors is computed and used in place of the quantized gradient value calculated in the fourth step above.

For example, referring to FIG. 5, the gradients for pel pairs n-6/q-6 through n+6/q+6 have been calculated in the third step and quantized in the fourth step above. The maximum quantized gradient over a predetermined number of pel positions is calculated in the current step. It has been found that it is sufficient to consider the current pel position as well as the pels immediately preceding and immediately succeeding the current pel position and to determine the maximum quantized gradient, M, of the group of three pel positions. Thus, the quantized gradient, G, representing the quantization of gradient value for the pel position p is compared to the quantized gradient, G-, representing the quantization of gradient value for pel position p-1 to determine which has a larger quantized gradient magnitude. That larger quantized gradient magnitude is then compared with the quantized gradient magnitude, G+, of pel position p+1.

The larger quantized gradient magnitude, or MAXGRAD, M, of this second comparison is then stored in place of the actual quantized gradient, G, for the current pel position p. The calculation of MAXGRAD in a neighborhood surrounding the current pel position is not necessary for proper operation of the method but does produce enhanced quality images.
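A sketch of the sixth step, assuming the quantized gradients for one line are held in a one-dimensional NumPy array; the edge handling (an edge pel reuses its own gradient for the missing neighbor) is an assumption of the sketch.

    import numpy as np

    def maxgrad(gradient_line):
        # M for each pel is the maximum quantized gradient of the pel itself
        # and of the pels immediately preceding and succeeding it.
        g = np.asarray(gradient_line)
        left = np.concatenate(([g[0]], g[:-1]))    # G- (previous pel)
        right = np.concatenate((g[1:], [g[-1]]))   # G+ (next pel)
        return np.maximum(np.maximum(left, g), right)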

Seventh - if the maximum gradient, referred to as MAXGRAD or M, is less than a predetermined value, which value has been determined empirically on the basis of image quality, then the quantized difference image, D, calculated in the fifth step above is limited in magnitude to M for that current pel position.

For example, where M is zero, the difference image is set to zero.

The limiting of the difference image forces the reconstructed image to remain within the bounds dictated by the first field whenever the gradient is small. This step effectively suppresses motion induced flicker or scintillation. It also guarantees that large portions of the difference image are zero and the positions of the nonzero regions are known to both the encoder and decoder from the first field information, thus improving the compression by a factor of about 5.
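The limiting of the seventh step might look like the sketch below, which treats the quantized difference as a signed level (an assumption about the representation); the threshold is a placeholder for the empirically determined value mentioned above.

    import numpy as np

    LIMIT_THRESHOLD = 2   # placeholder for the empirically chosen value

    def limit_difference(diff_level, maxgrad_m):
        # Where the neighborhood gradient M is small, clamp the difference
        # magnitude to M; with M = 0 the difference is forced to zero, which
        # is what suppresses flicker in low-activity regions.
        d = np.asarray(diff_level)
        m = np.asarray(maxgrad_m)
        return np.where(m < LIMIT_THRESHOLD, np.clip(d, -m, m), d)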
Eighth - the difference image as limited by the seventh step above is then encoded for transmission to a remote station and for reconstruction for local display of the processed image.

Referring now to FIG. 6, the encoding of the difference image for transmission and reconstruction will be described.

FIG. 6 is a flow chart of an encoding method to be used with the present invention and includes sheets 6.1, 6.2, and 6.3. A start signal is generated by system controller 18 in response to completion of the limiting step (seventh) above.
A test is made to determine if the end of the image has been reached. If the end of the image has been reached, then a stop signal is presented since the full image has been encoded. The encoding is accomplished on a line by line basis. Considering the general case where more lines remain to be encoded, the maximum quantized gradients, M, and the quantized differences, D, for the next line are then fetched from storage for encoding. If the line of quantized differences is all zeros, a blank line is recognized and a short end of line (EOL) code is generated and the process loops back to point A to determine if there are further lines to be encoded.

The general end of line (EOL) code is 1111. For each additional blank line immediately following an end of line code, an additional 1 is sent in the EOL code string. Thus, for a single blank line following an EOL code, the EOL string would be 11111 followed by 0. For two blank lines consecutively following an EOL code, the EOL string would be 111111 followed by 0. The trailing 0 in the EOL code string signals the end of the EOL code.

Thus, if it is determined that the quantized differences, D, in the current line are not all 0, the 0 is coded, signalling the end of the EOL code string, and encoding of the current line continues.
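For illustration only, the EOL convention just described can be written as a small helper; the bit strings are represented here as Python character strings, which is an assumption of the sketch.

    def eol_code(blank_lines_following=0):
        # Basic EOL is 1111; one extra 1 bit per immediately following blank
        # line; a single trailing 0 bit terminates the EOL string.
        return "1111" + "1" * blank_lines_following + "0"

    # eol_code(0) -> "11110"    ordinary end of line
    # eol_code(1) -> "111110"   end of line followed by one blank line
    # eol_code(2) -> "1111110"  end of line followed by two blank lines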

Do, which is the quantized difference of the previous pel position, is set to 0 for the leftmost pel position at the beginning of the current line; the pel counter j is set to 0; and the maximum gradient Mjmax+1 is set to hexadecimal AA, which refers to a pel position immediately following the right edge of the image.

The pel counter j is incremented to j+1 to encode the first pel of the current line on a first pass.

The MAXGRAD Mj is tested for 0. If Mj equals 0, Do is tested for a value greater than 1. If Do is less than or equal to 1, a loop is made to the increment counter j step and the next pel position is considered. If Do is greater than 1, Do is set equal to 0 and a loop is made to increment counter j.

If Mj is not equal to 0, Mj is tested for all 1's.

There are three permissible values for Mj. These values are 0, representing no quantized gradient for the current pel position in the image information area of the display; all 1's, representing all legal nonzero Mj's in the image information area; and hexadecimal AA, which is a special code assigned to indicate Mjmax+1, the MAXGRAD for the phantom pel position immediately following the right edge of the image.

Thus, from the two tests applied immediately above to the value of Mj, if Mj is not equal to 0 and not equal to all 1's, the pel counter j is tested to determine whether j is greater than jmax (the highest pel count position in a line). If j is less than jmax, an error is indicated since Mj for that condition must be either 0 or all 1's.
Mj is then error corrected as follows:

Mj is implemented as an 8 bit byte.

If there are 4 or more "1"s in the byte, Mj is set to hexadecimal FF.

If there are 3 or fewer "1"s in the byte, Mj is set to 00.
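This correction (also recited in claim 15) amounts to a majority vote over the bits of the byte; an illustrative sketch, not part of the patent text:

    def correct_maxgrad_byte(m):
        # Force an illegal MAXGRAD byte to one of its two legal in-image
        # values: hexadecimal FF if four or more bits are 1, otherwise 00.
        ones = bin(m & 0xFF).count("1")
        return 0xFF if ones >= 4 else 0x00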

Referring again to FIG. 6.2, after the error correction has been completed, a loop is made to determine whether M is 0 or all 1's.

Should M equal all 1's, a test is made of quantized difference Do for a 0 value. If Do equals 0, a test is made of Dj (the quantized difference of the current pel position). If Dj equals 0, Do is set to 1 and the process is then returned to increment counter j, the current pel position counter.

If either Do is not equal to 0 or Dj is not equal to 0, a state code is generated which is a function of Do and Dj. FIG. 7 is a state chart showing the coding states S for the matrix of Do and Dj expressed as hexadecimal characters. A serial code is generated for each of the states shown in FIG. 7 as a function of the Do value.

Table III shown below sets forth an embodiment of a serial bit code string for each state in the state diagram shown in FIG. 7 and the end of line indicator, for Do values of 0, 1 and greater than 1.
TABLE III

SERIAL BIT CODE STRINGS FOR EACH STATE S AND THE END OF LINE INDICATOR, FOR Do = 0, Do = 1 AND Do GREATER THAN 1
The encoding table (Table III) above shows that for states 3 through F inclusive, the final bits are identical in each state regardless of the value of Do. Further, for states 2 through F inclusive, there is always a prefix of "10" for Do = 1. As indicated previously, the EOL code requires four consecutive 1 bits to indicate end of line. Additional 1 bits may be added to the EOL string if the next sequential lines have all D's equal to zero. The EOL bit string is terminated by a single 0 bit. It should be further noted that the EOL code is the only code that allows four 1 bits in sequence.

After the encoding has been completed for Dj as a function of Do, Do is set equal to Dj and encoding of the next pel is initiated by incrementing pel counter j.

A loop is then taken to increment the pel counter j to j+1 and the next pel position is encoded as described above. When j exceeds jmax, a test is made of Do. If Do is not equal to 1, an end of line EOL bit string 1111 is coded. If Do equals 1, a binary prefix 10 is encoded immediately preceding the EOL string 1111, as shown in Table III above.

After the EOL code has been generated, the process then repeats at point A to determine if more lines must be encoded. When the last line has been encoded, a stop condition occurs and the processing terminates.
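The per-line encoding logic of FIG. 6 can be summarized by the following sketch; it is a paraphrase rather than the patent's code, the state_code argument stands in for the bit strings of Table III (which are not reproduced here), and all helper names are invented.

    END_OF_IMAGE_MARK = 0xAA   # MAXGRAD value of the phantom pel past the right edge

    def encode_line(m_line, d_line, state_code):
        # m_line: MAXGRAD values M for the line; d_line: limited quantized
        # differences D; state_code(do, dj): bit string for the state of FIG. 7.
        bits = []
        do = 0                                    # difference of the previous coded pel
        m_values = list(m_line) + [END_OF_IMAGE_MARK]
        d_values = list(d_line) + [0]
        for mj, dj in zip(m_values, d_values):
            if mj == 0:                           # inactive pel: nothing is coded
                if do > 1:
                    do = 0
                continue
            if mj == END_OF_IMAGE_MARK:           # phantom pel: emit end of line
                if do == 1:
                    bits.append("10")             # prefix required when the previous D was 1
                bits.append("1111")
                break
            if do == 0 and dj == 0:               # zero difference in an active region
                do = 1
                continue
            bits.append(state_code(do, dj))       # code the state as a function of Do and Dj
            do = dj
        return "".join(bits)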

The serial bit stream generated by the encoding method set forth above may be transmitted as generated to a remote location for decoding and reconstruction of the image to be displayed, or the bit stream can be stored in a serial storage device for local use and/or later transmission.

A ninth and last step of the method according to the present invention is the reconstruction of the image to be displayed from the predicted value for each pel added to the difference value for each pel as limited by the MAXGRAD, M, in the seventh step above.

Referring to FIG. 3, the decoding method according to the present invention is seen. The first field data is decoded and reconstructed as a precursor to the interpolation, decoding and reconstruction of the second field data. The method for decoding and reconstructing the first field data is known in the art and may be embodied by any number of data decoding and image reconstructing methods such as is discussed in the copending patent application entitled "Gray Scale Image Data Compression with Code Words a Function of Image History" referred to above.

As was discussed above with respect to the encoding method as embodied in FIG. 2, the steps of predicting a pel value for a current pel in a second field, calculating and quantizing a gradient value and determining a maximum neighborhood quantized gradient value are identical to the same steps employed in the encoding method according to the present invention.

As the serial data bit stream is received at the decoding (remote) station, the decoding process is performed. An example of the decoding process according to the present invention is shown in the flow chart of FIG. 8.

After the quantized and limited difference values have been completely decoded and stored, second and subsequent field pel values Y can then be reconstructed in the same manner as was discussed above with respect to the ninth step in the encoding method. Pel value Y is calculated from the addition of the predicted value and the recovered, limited difference value which was transmitted from the sending station to the remote station.

The process for decoding the limited difference values transmitted to the receiving station will be described with reference to FIG. 8, a flow chart which shows a decoding method in accordance with the present invention. FIG. 8 is a sequential flow chart which includes FIGS. 8.1, 8.2, 8.3, 8.4, 8.5, 8.6, 8.7 and 8.8.

Referring now to FIG. 8 and more specifically to FIG. 8.1, the decoding of an efficiently transmitted image will be further described.

Upon initiation of the decoding process, a test is made to determine if there are further lines to be decoded. If not, a stop signal is generated indicating that the entire image frame has been processed and decoded and is ready for display.

In the more general case, where there are one or more additional lines to be decoded, a line of M data is read and the D line is set to all zeros.

The next serial data bit is tested for a 1. If a 1 is present, the short EOL code is indicated and the process loops to point A to test for further lines to be processed. If the bit is 0, Do is set to 0, pel counter j is set to 0 and Mjmax+1 is set to hexadecimal AA. These initializing steps are the same as were performed in the encoding process described with reference to FIG. 6.

Next, pel counter j is incremented to j+1 and the decoding process continues as shown in FIG. 8.2 in a manner identical to the encoding process described with reference to FIG. 6.2, with the only difference being that when j is greater than jmax, indicating a pel position beyond the end of a line, the encoding process generates the end of line code, whereas in the decoding process the bit stream must be tested to find the end of line.

Find end of line block 500 refers to the subroutine shown in FIG. 8.7. Four bits in sequence are tested. If there is a four bit sequence of 1's, a loop is made to the start of the decoding routine at point A. If fewer than three 1 bits in sequence are detected, an error condition identified as ERROR A is indicated. If three 1 bits in succession are detected followed by a 0, a condition labelled ERROR B is indicated. Conditions ERROR A and ERROR B are dealt with in ERROR subroutine 600, shown in greater detail in FIG. 8.8.

If condition ERROR A has been detected, the next bits are tested until a 1 bit is detected. Subsequent bits in the bit stream are tested to determine if a sequence of four 1 bits has been detected, indicating an end of line. If fewer than four 1 bits in sequence are detected, the subroutine loops and continues to test sequences of bits until four 1 bits have been found.

A special case occurs if three 1 bits in sequence are detected with the fourth bit 0. In this case, a loop is made to the ERROR B subroutine, which in this event starts with a code stream of 1110. The next sequential bit is tested. If the bit is not a 1 bit, a loop occurs retesting subsequent bits until a 1 bit is detected. The next sequential bit is tested and bypassed regardless of whether the bit is a 1 or 0. In either event, the error subroutine then loops to the starting point of the ERROR A subroutine to attempt to find a sequence of four consecutive 1's indicating an end of line. When the end of line is finally detected, the current line of D data is all set to 0 and the process returns to loop point A, the start of the decoding process.
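The resynchronization behaviour just described reduces to scanning the bit stream for a run of four consecutive 1 bits; a simplified sketch follows (the bit stream is modelled as an iterable of '0'/'1' characters, and the ERROR B special case is not separated out).

    def find_end_of_line(bits):
        # Scan until four consecutive 1 bits (the EOL code) are seen.
        run = 0
        for b in bits:
            run = run + 1 if b == "1" else 0
            if run == 4:
                return True    # EOL found; the caller zeros the D line and resumes at point A
        return False           # bit stream exhausted without finding an EOL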

If in the testing for Mj a hexadecimal code FF is detected, difference Dj for pel position j must be decoded. The main process from FIG. 8.2 exits at point C to the decode routine shown in FIGS. 8.3, 8.4 and 8.5.

If the difference value of the previous pel position Do equals 0, exit is made to decode 0 subroutine 200 (FIG. 8.4). The next sequential bit is then tested. If the bit is 0, Dj is set to 0, Do is set to -1 and the process returns to point B to examine the next pel position.

If the bit tested is a 1, the next bit is tested to determine a code sequence. If the first two bits of the code sequence are 10, the next bit is tested. If the next bit is not 0, Dj is set to 0, Do is set to 1 and a return is made to point B. If the bit tested indicates a three bit sequence of 100, the state S is set to 2. The value of Dj as a function of S and Do is then obtained from the decode table shown in FIG. 9 and Do is set equal to Dj. The decode subroutine then loops back to point B.

If the first two bits of the sequence are found to be 11, and the third bit of the sequence is 0, S is set equal to 3, Dj is determined from the decode table shown in FIG. 9 and Do is set equal to Dj. The decode routine again returns to point B for the next pel position.

If the first three bits decoded for the current pel are 111, a branch is taken to subroutine 400 to determine the value of S. S is the state code as a function of Do and Dj. The FIND-S subroutine 400 is shown in greater detail in FIG. 8.6.

Given that the first three bits in the decode sequence are 111, the next bit is tested. If the fourth bit in sequence is 1, an end of line code is indicated at a point other than end of line and a condition ERROR C is generated.

ERROR C (see FIG. 8.8) causes the current D line to be set to 0's and the program to be returned to point A to decode the next line of data, if any.

If the fourth bit in the sequence is 0, S is set to 4 as an initial value and, if the fifth bit in the sequence is 0, S is incremented to S+2 and a loop is made to test the next bit in the bit stream. The loop is continued until a 1 bit is found in the bit stream, at which point the next sequential bit is tested. If the next sequential bit after the 1 bit is also a 1 bit, S is set to S+1 and access is made to the lookup table for determination of Dj as described above. If the bit following the 1 bit is 0, indicating a 10 ending sequence, no change is made to the value of S and again access is made to the decode table shown in FIG. 9 to find the value of Dj.

The decode process continues by return to point B, incrementing of the pel counter j and the testing of MAXGRAD Mj. Referring again to FIG. 8.3 and also to FIG. 8.5, an alternative decoding path will be further described.

If it is determined that Do is not equal to 0, Do is tested for a value of -1. If Do equals -1, Dj is set equal to 0, Do is set equal to 0 and a loop is made to point B.

If Do is not equal to 0 and not equal to -1, decode 2 subroutine 300 is taken. If the decode 2 subroutine 300 is taken, the third column in transmission and encoding Table III indicates the bit stream sequence for the various states S for Do greater than 1. Referring to Table III and FIG. 8.5, the decode subroutine will be described. The first bit is tested. If the first bit is 0, S equals 0 and Dj is obtained from the access of the decode table shown in FIG. 9 as a function of S and Do. Do is also set equal to Dj and a loop is made to point B. If the first bit is not 0, the next bit is tested. If the next bit is 0, a 10 sequence indicates a state S = 2. Again the decode lookup table is accessed for the value of Dj and Do is set to Dj.

If the second bit in the sequence is a 1, indicating a sequence of 11, the next bit is tested. If the next bit is 0, indicating a sequence of 110, state S = 3 and a table look up is made to determine the value of Dj as above.

If the third bit in the sequence is 1, the FIND-S subroutine described above with reference to FIG. 8.6 is taken and the prefix 111 has been detected. In any event, processing continues decoding the values for Dj until the end of line signal is detected, at which point a return is made to point A to determine whether there are any further lines to be decoded in the image.

It should be noted that the decode look up table of FIG. 9 is nothing more than the inverse of the encode look up table of FIG. 7: in FIG. 7 the state S is determined as a function of Dj and Do, whereas in the decode look up table of FIG. 9 Dj is determined as a function of S and Do.

All information for the reconstruction of the efficiently transmitted image, which has motion induced flicker suppressed, is now available. As indicated above in the description of the encoding process, the reconstructed pel value Y for each pel position on each even line in the image is calculated by adding the interpolated value calculated from the odd field data to the recaptured difference value for the pel.
The recaptured difference value for each pel is obtained by a table look up access wherein for each hexadecimal (or 4 bit binary) quantized difference D there exists a difference value between -256 and +255.
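A sketch of the final reconstruction follows; the U(D) table below is only illustrative, since just a few entries of Table IV are legible in this copy, and clipping to the 8 bit range is an assumption of the sketch.

    import numpy as np

    # Illustrative recaptured-difference table (index = quantized difference D).
    U = np.zeros(16, dtype=np.int32)
    U[0], U[3], U[4], U[5] = 0, -18, 32, -32   # remaining entries are not legible here

    def reconstruct_second_field(predicted, d_states):
        # Y = interpolated prediction + recaptured difference U(D).
        y = predicted.astype(np.int32) + U[np.asarray(d_states)]
        return np.clip(y, 0, 255).astype(np.uint8)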

The following Table IV sets forth representative recaptured difference values, referred to as U(D) in FIGS. 2 and 3, for various quantized difference states 0 through F respectively.

TABLE IV

RECAPTURED DIFFERENCE LOOKUP TABLE

    D        DIFFERENCE U(D)
    0        0
    3        -18
    4        32
    5        -32
    ...      ...

EXAMPLE

The following is an example of the method according to the present invention setting forth values for each calculation in the process.

As indicated above, the method according to the present invention requires as a starting point an image which has been reconstructed from the first field encoded data stream. Odd lines (lines 1, 3, 5, etc.) are the direct reconstruction of the image lines; even lines (lines 2, 4, 6, etc.) are the average interpolated value of the odd lines above and below. The original image from which the first field (odd lines) was encoded is still available for calculations of the error between the interpolation and the original second field data in the encoder.

Unable to recognize this page.

Unable to recognize this page.

Unable to recognize this page.


The decoding process would develop the same information for M and for the interpolated value of second field pels. Except for lines where the EOL code forces a new line, the decoding process then decodes one pel for each nonzero gradient pel. If, due to an error in calculating MAXGRAD, either too many or too few MAXGRADs are produced, the EOL code will occur in the wrong place in the code stream. The decoder then finds the EOL, zeros out the D values for that line and goes on to the next line, as described above with reference to FIG. 8. The method described can achieve a data compression efficiency which results in a 256 level gray scale pel value being encoded by an average of less than 0.5 bits per pel over the entire image frame.

It should be further noted that a preferred embodiment of the method according to the present invention has been completely coded, is operational and has been fully tested.

Also, from the above description of the method according to the present invention, generating line by line program code would be within the competence of persons skilled in the art.

Although the invention has been described with reference to a preferred embodiment thereof, it should be understood by those skilled in the art that various changes in scope and detail may be made without departing from the spirit of the invention.

Claims (15)

The embodiments of the invention in which an exclusive property or privilege is claimed are defined as follows:
1. In a multifield display system, wherein previous field data has been encoded and reconstructed, and wherein a value for each pel in a subsequent field has been predicted from information already encoded, a method of suppressing motion induced flicker and of compressing data in fields subsequent to said previous field in a displayed image, comprising the steps of:

a) calculating a gradient value for each pel in said subsequent field from said encoded values of corresponding pels;

b) calculating a difference value for each pel in said subsequent field from an original value for said each pel and said predicted value for each pel;

c) limiting said difference value to said gradient value if said gradient value is less than a predetermined value for each pel;

d) encoding said difference value if said gradient value for said pel is not equal to zero; and

e) reconstructing an image of said subsequent field from said predicted value for each pel and said encoded difference value for each pel.
2. A method according to claim 1 further comprising the steps of:

quantizing said difference value for each pel in said subsequent field;

encoding said quantized difference value if said gradient value for said pel is not equal to zero; and reconstructing an image of said subsequent field from said predicted value for each pel and said encoded quantized difference value for each pel.
3. A method according to claim 1 further comprising the steps of:

quantizing said gradient value for each pel in said subsequent field; and limiting said difference value if said quantized gradient value is less than a predetermined value for each said pel.
4. A method according to claim 3 further comprising the step of calculating for each said pel and predetermined adjacent pels a maximum quantized gradient value.
5. A method according to claim 4 wherein said difference value is limited if said maximum quantized gradient value is less than a predetermined value for each said pel.
6. A method according to claim 5 further comprising the step of quantizing said difference value for each pel in said subsequent field.
7. A method according to claim 6 wherein said quantized difference value is encoded if said gradient value for said pel is not equal to zero.
8. A method according to claim 1 further comprising the step of transmitting said encoded first field data and said encoded difference value for each pel in said subsequent field to a remote location for reconstruction of said image at said remote location.
9. A method according to claim 8 wherein said difference value for each pel is encoded as a function of a difference value for a previous pel in said subsequent field.
10. A method according to claim 9 further comprising the step of decoding said encoded difference value for each pel in said subsequent field as a function of a difference value for a previous pel in said subsequent field.
11. A method according to claim 1 further comprising the step of transmitting a predetermined code indicative of a blank line if all difference values and limited difference values for pels in said subsequent field are zero.
12. A method according to claim 1 further comprising the step of transmitting a unique code containing difference values for an adjacent pair of pels having a pre-determined relationship in said subsequent field to enhance data transmission efficiency.
13. A method according to claim 1 further comprising the step of storing said efficiently encoded image data for later access and use.
14. A method according to claim 1 further comprising the step of correcting errors in said steps of gradient value calculation, difference value calculation, difference limiting, and difference encoding by comparison of bit patterns in the data to be corrected.
15. A method according to claim 1 further comprising the step of correcting errors generated in the calculation of pel gradient values by counting 1 bits in a data byte representing a gradient value and correcting said byte representing said gradient value to a predetermined value depending on the number of 1 bits counted for each said byte.
CA000426363A 1982-06-01 1983-04-07 Method for eliminating motion induced flicker in a video image Expired CA1196087A (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US06/383,406 US4488174A (en) 1982-06-01 1982-06-01 Method for eliminating motion induced flicker in a video image
US383,406 1982-06-01

Publications (1)

Publication Number Publication Date
CA1196087A true CA1196087A (en) 1985-10-29

Family

ID=23512996

Family Applications (1)

Application Number Title Priority Date Filing Date
CA000426363A Expired CA1196087A (en) 1982-06-01 1983-04-07 Method for eliminating motion induced flicker in a video image

Country Status (5)

Country Link
US (1) US4488174A (en)
EP (1) EP0095560B1 (en)
JP (1) JPS58215186A (en)
CA (1) CA1196087A (en)
DE (1) DE3381696D1 (en)

Families Citing this family (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4543607A (en) * 1982-10-28 1985-09-24 Quantel Limited Video processors
DE3333404A1 (en) * 1983-09-15 1985-04-04 Siemens AG, 1000 Berlin und 8000 München METHOD AND CIRCUIT ARRANGEMENT FOR IMPROVING THE IMAGE QUALITY BY ACTIVITY CONTROLLED DPCM CODING
US4654876A (en) * 1984-12-19 1987-03-31 Itek Corporation Digital image motion correction method
US4679086A (en) * 1986-02-24 1987-07-07 The United States Of America As Represented By The Secretary Of The Air Force Motion sensitive frame integration
US4725885A (en) * 1986-12-22 1988-02-16 International Business Machines Corporation Adaptive graylevel image compression system
US5276778A (en) * 1987-01-08 1994-01-04 Ezel, Inc. Image processing system
JPS63250277A (en) * 1987-03-20 1988-10-18 インターナシヨナル・ビジネス・マシーンズ・コーポレーシヨン Status input generator
US4870695A (en) * 1987-03-20 1989-09-26 International Business Machines Corporation Compression and de-compression of column-interlaced, row-interlaced graylevel digital images
US4785356A (en) * 1987-04-24 1988-11-15 International Business Machines Corporation Apparatus and method of attenuating distortion introduced by a predictive coding image compressor
US5553170A (en) * 1987-07-09 1996-09-03 Ezel, Inc. High speed image processing system having a preparation portion and a converting portion generating a processed image based on the preparation portion
US5283866A (en) * 1987-07-09 1994-02-01 Ezel, Inc. Image processing system
EP0393906B1 (en) * 1989-04-21 1994-01-19 Sony Corporation Video signal interpolation
JP2860702B2 (en) * 1990-10-16 1999-02-24 シャープ株式会社 Motion vector detection device
US5191416A (en) * 1991-01-04 1993-03-02 The Post Group Inc. Video signal processing system
US5327254A (en) * 1992-02-19 1994-07-05 Daher Mohammad A Method and apparatus for compressing and decompressing image data
KR0151410B1 (en) * 1992-07-03 1998-10-15 강진구 Motion vector detecting method of image signal
US5416857A (en) * 1992-10-21 1995-05-16 International Business Machines Corporation Apparatus and method for compressing data while retaining image integrity
JP3084175B2 (en) * 1993-06-18 2000-09-04 シャープ株式会社 Image compression device
US5382983A (en) * 1993-07-29 1995-01-17 Kwoh; Daniel S. Apparatus and method for total parental control of television use
US5544251A (en) * 1994-01-14 1996-08-06 Intel Corporation Process and apparatus for pseudo-SIMD processing of image data
JP3045284B2 (en) * 1997-10-16 2000-05-29 日本電気株式会社 Moving image display method and device
US6542181B1 (en) * 1999-06-04 2003-04-01 Aerial Videocamera Systems, Inc. High performance aerial videocamera system
US8458105B2 (en) * 2009-02-12 2013-06-04 Decisive Analytics Corporation Method and apparatus for analyzing and interrelating data
US20100235314A1 (en) * 2009-02-12 2010-09-16 Decisive Analytics Corporation Method and apparatus for analyzing and interrelating video data

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4245248A (en) * 1979-04-04 1981-01-13 Bell Telephone Laboratories, Incorporated Motion estimation and encoding of video signals in the transform domain
GB2050752B (en) * 1979-06-07 1984-05-31 Japan Broadcasting Corp Motion compensated interframe coding system
US4232338A (en) * 1979-06-08 1980-11-04 Bell Telephone Laboratories, Incorporated Method and apparatus for video signal encoding with motion compensation
FR2461405A1 (en) * 1979-07-09 1981-01-30 Temime Jean Pierre SYSTEM FOR CODING AND DECODING A DIGITAL VISIOPHONE SIGNAL
US4371895A (en) * 1980-01-18 1983-02-01 Nippon Electric Co., Ltd. Coded video signal transmitting and receiving system
US4355306A (en) * 1981-01-30 1982-10-19 International Business Machines Corporation Dynamic stack data compression and decompression system
US4369463A (en) * 1981-06-04 1983-01-18 International Business Machines Corporation Gray scale image data compression with code words a function of image history

Also Published As

Publication number Publication date
DE3381696D1 (en) 1990-08-02
JPS58215186A (en) 1983-12-14
EP0095560B1 (en) 1990-06-27
EP0095560A3 (en) 1987-02-04
JPH0418509B2 (en) 1992-03-27
US4488174A (en) 1984-12-11
EP0095560A2 (en) 1983-12-07

Similar Documents

Publication Publication Date Title
CA1196087A (en) Method for eliminating motion induced flicker in a video image
US4816914A (en) Method and apparatus for efficiently encoding and decoding image sequences
US4704628A (en) Combined intraframe and interframe transform coding system
EP0123456B1 (en) A combined intraframe and interframe transform coding method
JP3716931B2 (en) Adaptive decoding device for continuous images
US5442400A (en) Error concealment apparatus for MPEG-like video data
US4725885A (en) Adaptive graylevel image compression system
US5708511A (en) Method for adaptively compressing residual digital image data in a DPCM compression system
US6304606B1 (en) Image data coding and restoring method and apparatus for coding and restoring the same
EP0249086B1 (en) Method and apparatus for encoding/transmitting image
CA1245339A (en) Method and system for bit-rate compression of digital data
US6489996B1 (en) Moving-picture decoding method and apparatus calculating motion vectors to reduce distortion caused by error propagation
JPS63501257A (en) Hybrid coding method using conversion for image signal transmission
US5892549A (en) Method and apparatus for compressing a digital signal using vector quantization
NO179890B (en) Adaptive motion compensation for digital television
JPH09219863A (en) Method and device for concealing channel error in video signal
US6298090B1 (en) System for detecting redundant images in a video sequence by comparing two predetermined threshold values
KR100242635B1 (en) A system for variable-length-coding and variable-length-decoding digital data
EP0699001A2 (en) Image data signal compression/transmission method and image data signal compression/transmission system
CA2166623C (en) Coding image data
JPS62284535A (en) Method and apparatus for encoding data by employing block list conversion
KR100205550B1 (en) Coder and decoder for digital transmission and/or recording of component-coded color tv signals
JPH0686247A (en) Receiver/reproducer for digital picture signal
JPH04336896A (en) Moving picture storage device
Vivian et al. DPCM studies using edge prediction and adaptive quantisation laws for the transmission of still pictures over the ISDN

Legal Events

Date Code Title Description
MKEC Expiry (correction)
MKEX Expiry