WO2006108299A1 - Image contrast enhancement - Google Patents

Image contrast enhancement


Publication number
WO2006108299A1
WO2006108299A1 (PCT/CA2006/000591)
Authority
WO
WIPO (PCT)
Prior art keywords
image
value
color
values
color values
Prior art date
Application number
PCT/CA2006/000591
Other languages
French (fr)
Inventor
David Sheldon Hooper
Original Assignee
Acd Systems, Ltd.
Priority date
Filing date
Publication date
Application filed by Acd Systems, Ltd. filed Critical Acd Systems, Ltd.
Publication of WO2006108299A1 publication Critical patent/WO2006108299A1/en

Classifications

    • G06T5/94
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/20 Image enhancement or restoration by the use of local operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10024 Color image

Definitions

  • the present invention relates to the field of digital image processing, and more particularly to enhancement of digital images.
  • Digital image enhancement concerns applying operations to a digital image in order to improve a picture, or restore an original that has been degraded.
  • Digital image enhancement dates back to the origins of digital image processing. Unlike film-based images, digital images can be edited interactively at will, and detail that is indiscernible in an original image can be brought out. Often images that appear too dark because of poor illumination or too bright because of over illumination can be restored to accurately reflect the original scene.
  • Enhancement techniques involve digital image processing and can include inter alia filtering, morphology, composition of layers and random variables.
  • Conventional image processing software often includes a wide variety of familiar editing tools used for enhancing an image, such as edge sharpening, smoothing, blurring, histogram equalization, gamma correction, dithering, color palette selection, paint brush effects and texture rendering.
  • enhancement techniques include one or more parameters, which a user can dynamically adjust to fine-tune the enhanced image.
  • Contrast enhancement is a particular aspect of image enhancement that concerns color variations that are not clearly discernible within an image, because of dark shadows or bright highlights. Because of the eye's relative insensitivity to variations in dark colors and variations in bright colors, important details within an image can be missed.
  • Conventional contrast enhancement can involve transfer functions for expanding dynamic range in some parts of the color spectrum and compressing dynamic range in other parts, and gamma correction for adjusting brightness and contrast.
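Gamma correction, mentioned above as a conventional technique, can be sketched as a simple global transfer function. This is a generic illustration, not the patent's method:

```cpp
#include <cmath>
#include <cassert>

// Conventional gamma correction as a global transfer function: each color
// value v in [0, 255] is mapped to 255 * (v/255)^(1/gamma). A gamma greater
// than 1 brightens mid-tones, expanding dynamic range in the shadows at the
// cost of compressing it in the highlights.
double gamma_correct(double v, double gamma) {
    return 255.0 * std::pow(v / 255.0, 1.0 / gamma);
}
```

Because the same curve is applied at every pixel, such global methods cannot brighten shadows and preserve highlight detail at the same time — the motivation for the local, filter-driven approach described below.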
  • the present invention concerns a novel method and system for image enhancement that uses one or more filtered images to generate offsets and multipliers for adjusting pixel color values.
  • the image enhancement can be controlled through adjustable user parameters that perform leveling and contrast adjustment for both shadow portions and highlight portions of an original image. These parameters are intuitive and easy to adjust.
  • the present invention can be implemented extremely efficiently, so as to achieve real-time performance whereby image enhancement is performed within a fraction of a second immediately upon adjustment of a user parameter, for multi-megapixel images captured by today's digital cameras. As such, a user is provided with continuous real-time feedback, which enables extremely accurate fine-tuning of the enhancement.
  • a user is able to instantly compare an enhanced image with a source image, and compare one enhanced image with another enhanced image.
  • the present invention generates two response curves, one for shadows and one for highlights, each a function of color value.
  • the values of the response curves are used as multipliers for pixel color values.
  • the response curve for highlights is an exponential curve, and increases monotonically from a value of one, corresponding to a color value of 0, to a value greater than or equal to one, corresponding to a maximum color value.
  • the multiplier for highlights visually amplifies bright color variation.
  • the response curve for shadows is also an exponential curve, and decreases monotonically from a value greater than or equal to one, corresponding to a color value of 0, to a value of one, corresponding to a maximum color value. As such, the multiplier for shadows visually amplifies dark color variation.
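The monotone exponential response curves described above can be illustrated with hypothetical formulas. The exact curve equations are given later as Equations (5b)-(5d) and are not reproduced here, so the curve shapes below, and the way the level and contrast parameters enter, are assumptions chosen only to satisfy the stated endpoint and monotonicity properties:

```cpp
#include <cmath>
#include <cassert>

const double S = 255.0;  // maximum color value for 8-bit channels

// Assumed highlight response: increases monotonically from 1 (at v = 0)
// to 1 + k_hl (at v = S); k_hc controls the curvature.
double highlight_response(double v, double k_hl, double k_hc) {
    double t = (std::exp(k_hc * v / S) - 1.0) / (std::exp(k_hc) - 1.0);
    return 1.0 + k_hl * t;
}

// Assumed shadow response: decreases monotonically from 1 + k_sl (at v = 0)
// to 1 (at v = S); k_sc controls the curvature.
double shadow_response(double v, double k_sl, double k_sc) {
    double t = (std::exp(k_sc * (1.0 - v / S)) - 1.0) / (std::exp(k_sc) - 1.0);
    return 1.0 + k_sl * t;
}
```

Used as multipliers, curves of this shape leave the brightest colors alone while amplifying dark variation (shadow curve), and vice versa (highlight curve).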
  • the highlight response curve is derived from a first filtered image and the shadow response curve is derived from a second filtered image.
  • the first filtered image is a filter of an image of minimum source color values and the second filtered image is a filter of an image of maximum source color values.
  • the first and second filtered images are filters of an image of source luminance values.
  • the present invention uses an offset curve, which is also a function of color value, to scale dynamic color range to the range of color values greater than or equal to the offset value.
  • a method for contrast enhancement for digital images including filtering an original image having original color values, to generate a first filtered image, corresponding to bright color values, and a second filtered image corresponding to dark color values, deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the first filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, and multiplying the shifted color values by the local highlight multipliers and by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
  • a system for enhancing contrast of digital images including a filter processor for filtering an original image having original color values, to generate a first filtered image, corresponding to bright color values, and a second filtered image corresponding to dark color values, and an image enhancer coupled to said filter processor for (i) deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, (ii) deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, (iii) deriving local offset values by applying an offset curve to the first filtered image, (iv) subtracting the local offset values from the original color values to generate shifted color values, and (v) multiplying the shifted color values by the local highlight multipliers and by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
  • a computer-readable storage medium storing program code for causing a computer to perform the steps of filtering an original image having original color values, to generate a first filtered image, corresponding to bright color values, and a second filtered image corresponding to dark color values, deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the first filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, and multiplying the shifted color values by the local highlight multipliers and by the local shadow multipliers.
  • a method for contrast enhancement for digital images including filtering an original image having original color values, to generate a filtered image corresponding to bright color values, deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, and multiplying the shifted color values by the local highlight multipliers.
  • a method for contrast enhancement for digital images including filtering an original image having original color values, to generate a filtered image corresponding to dark color values, deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, and processing the original image, comprising multiplying the original color values by the local shadow multipliers, thereby generating a contrast- enhanced image from the original image.
  • a system for enhancing contrast of digital images including a filter processor for filtering an original image having original color values, to generate a filtered image corresponding to bright color values, and an image enhancer coupled to said filter processor for (i) deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, (ii) deriving local offset values by applying an offset curve to the filtered image, (iii) subtracting the local offset values from the original color values to generate shifted color values, and (iv) multiplying the shifted color values by the local highlight multipliers, thereby generating a contrast-enhanced image from the original image.
  • a system for enhancing contrast of digital images including a filter processor for filtering an original image having original color values, to generate a filtered image corresponding to dark color values, and an image enhancer coupled to said filter processor for (i) deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, and (ii) multiplying the original color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
  • a computer- readable storage medium storing program code for causing a computer to perform the steps of filtering an original image having original color values, to generate a filtered image corresponding to bright color values, deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, and multiplying the shifted color values by the local highlight multipliers.
  • a computer-readable storage medium storing program code for causing a computer to perform the steps of filtering an original image having original color values, to generate a filtered image corresponding to dark color values, deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, and processing the original image, comprising multiplying the original color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
  • FIG. IA is an illustration of filter windows and sub-filter windows used in deriving a median filtered image, in accordance with a preferred embodiment of the present invention.
  • FIG. IB is an illustration of filter windows and sub-filter windows used in deriving a weighted average filtered image, in accordance with a preferred embodiment of the present invention.
  • FIG. 2 is a simplified block diagram of the essential components of a system for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention.
  • FIG. 3 is a simplified flowchart of the essential steps for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention.
  • FIGS. 4A - 4F illustrate an image before and after enhancement, an image of overexposed pixel locations for the enhanced image, and various filtered images used in the enhancement process, in accordance with a preferred embodiment of the present invention.
  • FIG. 5 is a flow diagram of the principal software methods in the source code listed in Appendices A and B, in accordance with a preferred embodiment of the present invention.

LIST OF APPENDICES
  • Appendix A is a detailed listing of computer source code written in the C++ programming language for implementing a median or a weighted average filter in accordance with a preferred embodiment of the present invention.
  • Appendix B is a detailed listing of computer source code written in the C++ programming language for implementing color enhancement, in accordance with a preferred embodiment of the present invention.
  • the present invention is preferably embodied on a general- purpose consumer-grade computer, including a processor for executing an image-processing software application, a memory for storing program code and for storing one or more digital images, one or more input devices such as a keyboard and a mouse for enabling a user to interact with the software application, a display device, and a bus for intercommunication between these components.
  • the computer also includes an interface for receiving digital images from a scanner and from a digital camera, and a network connection for receiving and transmitting digital images over the Internet.
  • the present invention can be embodied in mobile computing devices, in Internet appliances such as electronic picture frames, within vehicles such as airplanes, and within equipment such as medical scanners.
  • the present invention can be implemented in software or in general-purpose or special- purpose hardware, or in a software-hardware combination.
  • the present invention operates by processing an original source image and deriving a contrast-enhanced image therefrom.
  • the present invention applies to grayscale and color images, with arbitrary color depth.
  • the present invention includes two phases; namely, a first phase ("Phase One") for deriving one or more filtered images, and a second phase (“Phase Two”) for using the one or more filtered images to derive local highlight and shadow multipliers, and local offset values, for each pixel location of the original source image.
  • the enhancement process preferably subtracts the local offset values from color values of the original source image, and multiplies the resulting differences by the local highlight and shadow multipliers.
  • the highlight and shadow multipliers are preferably derived from highlight and shadow response curves that visually amplify color variation in bright and dark portions of the source image, respectively.
  • the offset values are preferably derived from curves that stretch the contrast of pixels away from the maximum color value.
  • the highlight response curve, when used as a multiplier, serves to visually amplify color variations for bright colors; and the shadow response curve, when used as a multiplier, serves to visually amplify color variations for dark colors.
  • the highlight and shadow response curves are controllable by adjusting values of parameters. These parameters determine the shapes of the highlight and shadow response curves. A user can fine-tune values of these parameters interactively, using a graphical user interface, in order to obtain a satisfactory contrast- enhanced image.
  • the highlight and shadow multipliers are not obtained by applying the highlight and shadow response curves directly to the source image, since the benefit of the contrast enhancement, namely bringing out detail in the highlight and shadow areas, would be offset by detail variation in the multipliers at each pixel location.
  • the highlight and shadow multipliers should be relatively insensitive to local detail variation in the source image.
  • the highlight and shadow multipliers are preferably derived by applying the highlight and shadow response curves to corresponding filtered versions of the source image.
  • the filtered versions serve to dampen local detail variation, and thus provide relatively smooth local color values to which the response curves can be applied, and thus a smooth base upon which to derive local highlight and shadow multipliers for the source image.
  • the present invention can be embodied using a variety of types of filters, such as filters based on medians and filters based on weighted averages, which have been found to be very suitable.
  • filter parameters such as window sizes and choice of weights can be adjusted by a user. Enhancement Algorithm
  • Phase One is described hereinbelow for two preferred types of filters; a modified median filter, and a modified weighted average filter.
  • the modified filters used in the present invention differ from the prior art in that sub-window averages of pixel color values are used in place of single pixel color values. I.e., entire sub-windows of pixel values are treated as if they are lumped together and located at single pixel locations. Such modification is better suited for large windows, say, with dimensions on the order of 100 pixels, and yields better representations of local contrast than prior art filters.
  • Phase One is also described hereinbelow for grayscale images, which have a single color channel, and color images, which have multiple color channels.
  • Two approaches are described for color image enhancement, a first approach that uses a single filtered image, based on filtering luminance source color values; and a second approach that uses two filtered images, based on filtering minimum and maximum source color values.
  • a modified median filter is applied to a source image, I_source, to derive a filtered image, I_filter.
  • the median filter used in the preferred embodiment is a hybrid multi-stage median filter, which is a modified version of the finite-impulse response (FIR) median hybrid filter described in Nieminen, A., Heinonen, P. and Neuvo, Y., "A new class of detail-preserving filters for image processing," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 9, Jan. 1987, and in Arce, G., "Detail-preserving ranked-order based filters for image processing," IEEE Trans. Acoustics, Speech and Signal Processing, Vol. 37, No. 1, Jan. 1989.
  • the present invention uses squares instead of directed lines to construct the sub-filter windows.
  • An advantage of using squares with median hybrid filters instead of directed lines is that the number of directed lines required to preserve details is large, when using large filter window sizes.
  • the prior art uses 8 sub-filters with a filter of radius 2, and more sub-filters, which use directed lines at additional angles, are generally recommended with filters of larger radii.
  • a filter radius as large as 500 is not uncommon, and thus the prior art would require an enormous number of sub-filters.
  • FIG. 1A is an illustration of filter windows and sub-windows used in deriving a modified median filtered image, in accordance with a preferred embodiment of the present invention. Shown in FIG. 1A is a filter window of dimension (2N + 1) x (2N + 1) pixels, centered at a pixel (i,j), together with eight sub-windows, each of dimension (2M + 1) x (2M + 1) pixels.
  • the eight sub-windows are designated "East,” “West,” “North,” “South,” “Northwest,” “Southeast,” “Northeast” and “Southwest,” according to their relative positions with respect to pixel (i,j).
  • the center of the West window for example, is (i - (N-M), j), and its pixel coordinates range from (i - N, j - M) at the lower left corner to (i - N + 2M, j + M) at the upper right corner.
  • the median of three numbers is the number in the middle when the three numbers are ordered.
  • Equation (2g) gives the preferred median filter.
  • a way to represent this median filter is to imagine each sub-filter window lumped into a single pixel positioned adjacent to pixel (i,j) according to the relative position of the sub-window, with a pixel value equal to the average value of the source image over the sub-window. This determines a 3x3 pixel window of lumped values, centered at pixel (i,j).
  • equations (2a) - (2g) directly determine the preferred median filter I_filter.
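One plausible reading of the multi-stage combination in Equations (2a)-(2g), with the eight sub-window averages lumped into single values around the center pixel value c, is sketched below. The equations themselves are not reproduced in this text, so the pairing of opposite sub-windows into median-of-three stages is an assumption modeled on the FIR median hybrid filters cited above:

```cpp
#include <algorithm>

// Median of three values without sorting.
double med3(double a, double b, double c) {
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}

// Assumed three-stage hybrid median over the eight lumped sub-window
// averages (e, w, n, s, ne, sw, nw, se) and the center pixel value c.
double hybrid_median(double e, double w, double n, double s,
                     double ne, double sw, double nw, double se, double c) {
    double m1 = med3(e, w, c);    // East-West pair      (cf. (2a))
    double m2 = med3(n, s, c);    // North-South pair    (cf. (2b))
    double m3 = med3(ne, sw, c);  // one diagonal pair   (cf. (2c))
    double m4 = med3(nw, se, c);  // other diagonal pair (cf. (2d))
    double m5 = med3(m1, m2, c);  // cf. (2e)
    double m6 = med3(m3, m4, c);  // cf. (2f)
    return med3(m5, m6, c);       // cf. (2g): filter output
}
```

Note how an isolated impulse at the center is rejected while values supported by opposing neighbors survive, which is the detail-preserving behavior the cited references describe.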
  • a first preferred method uses the luminance component of the pixel color values, L_source, as a source image, and derives a median filtered image, L_filter, therefrom.
  • the median filtered image from MIN_source, denoted MIN_filter, and the median filtered image from MAX_source, denoted MAX_filter, are both used in the second preferred method.
  • the modified weighted average filter operates on an original source image, I_source, to produce a filtered image, I_filter. The modified weighted filter is computed using weighted averages of sample values, where the weights corresponding to the sample values are constructed as described hereinbelow.
  • sub-windows, each of width and height 2M + 1, are positioned so as to be centered at pixel locations (i+kd, j+ld), for some fixed integer parameter d.
  • the indices k and l above independently range from -n to +n.
  • the average color value over such a sub-window, uniformly averaged over the (2M + 1)^2 pixels within the sub-window, is denoted by a_kl and is considered as if it is lumped into a single pixel value.
  • Each of the values a_kl denotes the average of the pixel color values of I_source over a sub-window of size (2M + 1) x (2M + 1), centered at (i+kd, j+ld).
  • the sub-window corresponding to a_(-2,2) has its lower left corner at pixel location (i-2d-M, j+2d-M) and its upper right corner at pixel location (i-2d+M, j+2d+M).
  • FIG. 1B is an illustration of filter windows and sub-windows used in deriving a modified weighted average filtered image, in accordance with a preferred embodiment of the present invention. Shown in FIG. 1B is a (2N + 1) x (2N + 1) filter window centered at pixel location (i,j). Also shown are pixel locations with spacings of d between them, vertically and horizontally, centered at pixel location (i,j). The pixel with a solid black fill color, for example, is located at (i+2d, j-d).
  • the sub-window illustrated in FIG. 1B, of dimensions (2M + 1) x (2M + 1), is centered at the solid black pixel.
  • Such sub-windows are positioned at each of the twenty-five pixel locations (i+kd, j+ld) as k and l each range independently from -2 to +2, and the average of the (2M + 1)^2 color values within each such sub-window is the quantity denoted by a_kl hereinabove.
  • the average over the sub-window illustrated in FIG. 1B, for example, is the quantity a_(2,-1).
  • It may be appreciated by those skilled in the art that the sub-window averages a_kl may be simple uniform averages, as described above, or non-uniform averages, such as Gaussian blur averages.
  • the terms I_filter, I_source and a_kl in Equation (4) are evaluated at specific pixel locations (i,j); i.e., I_filter(i,j), I_source(i,j) and a_kl(i,j).
  • Different weights correspond to different types of filters. It is expected that good choices place more weight on sample values that are closer to the value of the center pixel. For inverse gradient filters, for example, the weights are chosen so that less weight is placed on averages a_kl that differ more from the value of I_source(i,j). Four candidate methods for assigning the weights are as follows:
  • the weights are preferably re-normalized after being determined as above, by dividing them by their sum, thus ensuring that the normalized weights add up to one.
  • the inverse gradient feature requires that the weights w_kl be decreasing or, more precisely, non-increasing, in their dependence on Delta. With method 4 above, the weights are also decreasing in their dependence on the distance, r, of the sub-window a_kl from the center pixel (i,j).
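The weighted-average combination of Equation (4) can be sketched as follows, using a guessed inverse-gradient weighting in the spirit of method 4. The exact form of the weights, the constants k1 and k2, and the dependence on the distance r are assumptions, since the four methods are not reproduced in this text:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Weighted-average filter output at one pixel: a normalized weighted sum of
// the lumped sub-window averages a_kl. The assumed weight is
// w = 1 / ((1 + Delta/k1)^k2 * (1 + r)), where Delta = |a_kl - center| is
// the inverse-gradient term and r is the sub-window's distance from the
// center pixel.
double weighted_filter(const std::vector<double>& averages,   // a_kl values
                       const std::vector<double>& distances,  // r per a_kl
                       double center, double k1, double k2) {
    double num = 0.0, den = 0.0;
    for (std::size_t i = 0; i < averages.size(); ++i) {
        double delta = std::fabs(averages[i] - center);
        double w = 1.0 / (std::pow(1.0 + delta / k1, k2) * (1.0 + distances[i]));
        num += w * averages[i];  // weighted sum of sub-window averages
        den += w;                // accumulate for normalization
    }
    return num / den;  // dividing by den makes the weights sum to one
}
```

Dividing by the accumulated weight is equivalent to the re-normalization step described above, so the weights need not be normalized in advance.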
  • Respective references for the weightings in methods 1 - 4 are:
  • reference 2 teaches a different expression for Delta, and does not use the exponent k2 nor the addition of 1 to Delta/k1.
  • Weighted average filters, in general, are primarily sensitive to the magnitude of detail in addition to its size.
  • Median filters, on the other hand, are in general primarily sensitive only to size.
  • With a weighted average filter, small but very bright, or very dark, features of an image are left intact in the filtered image, which serves to protect such areas from being over-exposed or under-exposed in Phase Two.
  • Whereas the parameter N often has to be reduced when using a median filter in order to prevent over-exposure or under-exposure in Phase Two, this is generally not required when using a weighted average filter.
  • A large value of N tends to be beneficial with weighted average filters, for preserving the relative balance of objects in an image.
  • For example, a small value of N tends to eliminate shadows from a face, giving the face an unnatural flat look, whereas a large value of N preserves the shadows.
  • Phase Two of a preferred embodiment of the present invention derives a desired enhanced image, denoted I_enhanced, from the source image I_source and from the filtered images computed in Phase One.
  • Various parameters used in the derivation of I_enhanced are user-adjustable, as described hereinbelow. As such, the user can refine the enhanced image by iteratively adjusting these user parameters based on the appearance of the enhanced image.
  • Phase Two proceeds as follows:
I_enhanced = (I_source - g_offset) * g_min * g_max    (5a)
  • S is the maximum value for a color component (e.g., 255 for 8-bit color channels), and k_hl, k_hc, k_sl and k_sc are user-adjustable parameters.
  • k_hc is the "highlight contrast" and ranges between 1 and 25
  • k_hl is the "highlight level" and ranges between 0 and a maximum value determined from k_hc, as described hereinbelow
  • k_sc is the "shadows contrast" and ranges between 1 and 25
  • k_sl is the "shadows level" and ranges between 0 and a maximum value determined from k_sc, as described hereinbelow.
  • I_enhanced is clipped to take values between 0 and S; i.e., I_enhanced is set to zero if the value calculated in Equation (5a) is negative, and I_enhanced is set to S if the value calculated in Equation (5a) exceeds S.
  • Equations (5a) - (5d) are applied point-wise at each pixel location (i,j) within the enhanced image.
  • I_enhanced in Equation (5a) corresponds to I_enhanced(i,j), and I_source corresponds to I_source(i,j).
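The point-wise grayscale step can be sketched as follows, assuming Equation (5a) takes the form I_enhanced = (I_source - g_offset) * g_min * g_max, together with the clipping rule described in the text. The g terms are assumed to be precomputed at each pixel from the filtered images via the offset and response curves:

```cpp
#include <algorithm>
#include <cassert>

// Point-wise grayscale enhancement: subtract the local offset, multiply the
// shifted value by the shadow and highlight multipliers, and clip to [0, S].
double enhance_gray(double i_source, double g_offset,
                    double g_min, double g_max, double S) {
    double v = (i_source - g_offset) * g_min * g_max;  // cf. Equation (5a)
    return std::min(std::max(v, 0.0), S);              // clip to [0, S]
}
```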
  • For color images with red, green and blue color components, Phase Two proceeds as follows:
  • C_source is a color component from the source image
  • L_source is the luminance component from the source image
  • Equation (6a) represents three equations, for C corresponding to (i) the red color component, (ii) the green color component and (iii) the blue color component.
  • when C_source = L_source, Equation (6a) reduces to I_enhanced = (C_source - g_offset) * g_min * g_max, consistent with Equation (5a) hereinabove.
  • C_enhanced = (L_source - g_offset) * (g_min * g_max + C_source/L_source - 1).
  • Equations (6a) - (6e) correspond to the embodiment of Phase One for color images described hereinabove in Section 1.1.3, in which minimum and maximum images are filtered.
  • in the first approach, which uses a single luminance-filtered image, both MIN_filter and MAX_filter are replaced by L_filter.
  • the enhanced color components C_enhanced are preferably clipped so as to range from 0 to S.
  • Equations (5a) - (5d) are not simply applied independently to each individual color component, since doing so would yield an over-saturated image as a result of the multiplications by g_min and g_max, each of which is greater than one.
  • Equations (6a) - (6e) include multiplications by g_min and g_max, which serve to increase saturation; and addition of the terms (1 - k_cB) * D and k_cB * E, which serve to reduce saturation.
  • the various sub-window averages are computed once, re-used eight times for the modified median filter, and re-used (2n+1)^2 times for the weighted average filter.
  • the East average relative to pixel location (i,j) is identical to the West average relative to pixel (i+2*(N-M), j), to the North average relative to pixel (i+(N-M), j-(N-M)), and to the South average relative to pixel (i+(N-M), j+(N-M)), etc., as can be seen from FIG. 1A.
  • the various sub-window averages may be stored in a sliding window and re-used.
  • Equation (1) reduces to two summations, each over 2M + 1 pixel locations; namely, one sum in the vertical direction and the other sum in the horizontal direction.
  • a one-dimensional sliding window average is preferably computed by a moving sum. As the window slides one pixel to the right, the new pixel within the window is added into the sum, and the pixel that "slid out of" the window is subtracted.
  • As a result, the computation of each of the medians in Equations (2a) - (2d) is achieved using only two additions, two subtractions and one division per pixel.
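The moving-sum technique can be sketched in one dimension as follows; extending it to the two-dimensional sub-window averages amounts to running it once horizontally and once vertically:

```cpp
#include <cassert>
#include <vector>

// One-dimensional moving sum over a window of width 2M+1, updated
// incrementally: when the window slides one position, the entering sample is
// added and the departing sample is subtracted, so each output costs O(1)
// regardless of the window size.
std::vector<double> moving_sum(const std::vector<double>& x, int M) {
    int n = static_cast<int>(x.size());
    int width = 2 * M + 1;
    std::vector<double> out;
    if (n < width) return out;
    double s = 0.0;
    for (int i = 0; i < width; ++i) s += x[i];  // sum of the initial window
    out.push_back(s);
    for (int i = width; i < n; ++i) {
        s += x[i] - x[i - width];  // add entering, subtract departing sample
        out.push_back(s);
    }
    return out;
}
```

Dividing each output by 2M+1 yields the sliding window average.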
  • Appendix A is a detailed listing of computer source code written in the C++ programming language for implementing a median or a weighted average filter in accordance with a preferred embodiment of the present invention. It is noted that the listing in Appendix A includes an integer-arithmetic implementation, for faster performance.
  • In the present invention, computation of MIN_filter and MAX_filter is also performed using a sliding window. It may thus be appreciated that, using the present invention, it is not necessary to store intermediate results in an intermediate buffer of the same dimensions as the source image.
  • the present invention preferably down-samples the images to eight bits per color prior to application of the filter. Since, as per Equation (6a) above, the original image is processed at its full dynamic range in Phase Two, the down-sampling of Phase One results in no loss of detail.
  • Appendix B is a detailed listing of computer source code written in the C++ programming language for implementing color enhancement, in accordance with a preferred embodiment of the present invention.
  • the terms g_min and g_max are preferably pre-computed for all values of MIN and MAX, and the results are stored in look-up tables. This eliminates the need for re-computing the exponential terms in Equations (6d) and (6e) as the enhanced image pixel color values are being determined.
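The look-up-table precomputation described above can be sketched as follows. Equations (6d) and (6e) are not reproduced in this excerpt, so the power-curve forms below are only an assumed illustration, chosen to be consistent with the response-curve behavior described later: g_min rises from 1 at color value 0 to 1 + k_hl at the maximum value S, g_max falls from 1 + k_sl to 1, and both are linear when the contrast exponents are zero.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Tabulate the highlight and shadow multipliers for every color value 0..S,
// so the exponential terms need not be re-computed per pixel.
// NOTE: the exact curve shapes are assumptions, not the patent's equations.
struct ResponseLUTs {
    std::vector<double> gMin; // highlight multipliers, indexed by color value
    std::vector<double> gMax; // shadow multipliers, indexed by color value
};

ResponseLUTs buildResponseLUTs(int S, double k_hl, double k_hc,
                               double k_sl, double k_sc)
{
    ResponseLUTs luts;
    luts.gMin.resize(S + 1);
    luts.gMax.resize(S + 1);
    for (int n = 0; n <= S; ++n) {
        double t = static_cast<double>(n) / S;
        // linear when k_hc == 0, sub-linear when k_hc > 0
        luts.gMin[n] = 1.0 + k_hl * std::pow(t, 1.0 + k_hc);
        // linear when k_sc == 0, sub-linear when k_sc > 0
        luts.gMax[n] = 1.0 + k_sl * std::pow(1.0 - t, 1.0 + k_sc);
    }
    return luts;
}
```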
  • the parameters k_hl, k_hc, k_sl, k_sc and k_CB do not impact the filter calculations.
  • FIG. 2 is a simplified block diagram of the essential components of a system for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention.
  • a source image is processed by a filter processor 210, to derive one or more filtered images.
  • Full details of operation of filter processor 210, in accordance with a preferred embodiment of the present invention, are provided in the computer source code listing in Appendix A.
  • if the source image is a grayscale image, I_source, then filter processor 210 preferably derives a filtered image, I_filter.
  • An image enhancer 220 derives an enhanced image, based on the source image and the one or more filtered images derived by filter processor 210.
  • Image enhancer 220 includes a module 240 for applying an offset curve and highlight and shadow response curves to the filtered images, to obtain offset values and highlight and shadow multipliers, respectively.
  • module 240 computes the terms g_min, g_max and g_offset from Equations (5b) - (5d).
  • Image enhancer 220 also includes an offset subtracter, 250, for subtracting offset values from source pixel color values, to obtain shifted color values; a highlight multiplier for multiplying the shifted color values by the highlight multipliers; and a shadow multiplier for further multiplying by the shadow multipliers.
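The three stages listed above (offset subtraction, highlight multiplication, shadow multiplication) can be sketched per pixel as follows. Equation (5a) is not reproduced in this excerpt, so the exact composition, and the final clipping to the range [0, S], are assumptions; the clipping behavior is suggested by the "Exposure Warning" discussion later in the text.

```cpp
#include <cassert>

// Sketch of the grayscale per-pixel enhancement: subtract the local offset,
// multiply by the local highlight multiplier, then by the local shadow
// multiplier, and clip the result to the valid color range [0, S].
int enhancePixel(int source, int offset, double gMin, double gMax, int S)
{
    double shifted = static_cast<double>(source - offset); // offset subtracter
    double v = shifted * gMin;                             // highlight multiplier
    v *= gMax;                                             // shadow multiplier
    if (v < 0.0) v = 0.0;                                  // clip out-of-range
    if (v > S)   v = static_cast<double>(S);               // values to 0 or S
    return static_cast<int>(v + 0.5);
}
```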
  • For purposes of clarification, the operation of image enhancer 220 has been described as it applies to grayscale images, using Equations (5a) - (5d). Similar, but more complex operations are performed when image enhancer 220 is applied to color images, in carrying out Equations (6a) - (6e).
  • Image enhancer 220 uses parameters k_hl, k_hc, k_sl, k_sc and k_CB, as described hereinabove, which are preferably adjustable by a user. Full details of operation of image enhancer 220, in accordance with a preferred embodiment of the present invention, are provided in the computer source code listing in Appendix B. [0089] After the enhanced image is computed, a user interface manager 230 displays the enhanced image. A user viewing the image can interactively modify values of parameters, and in turn a corresponding modified enhanced image is derived by filter processor 210 and image enhancer 220.
  • FIG. 3 is a simplified flowchart of the essential steps for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention.
  • a determination is made as to whether or not a user has adjusted one or more parameter values. If not, then processing waits at step 320 and periodically repeats the determination of step 310.
  • if step 310 determines that one or more parameter values have been adjusted, then at step 330 a further determination is made as to whether or not any of the filter parameters have been adjusted. If so, then at step 340 a user interface sets values of the adjusted filter-based parameters, and at step 350 one or more filtered images are computed. Full details of performance of step 350, in accordance with a preferred embodiment of the present invention, are provided in the computer source code listing in Appendix A.
  • if step 330 determines that none of the filter parameters have been adjusted, then processing jumps to step 360, where a user interface sets values of the adjusted enhancement-based parameters.
  • at step 370 an enhanced image is computed.
  • Step 370 preferably includes steps 371 - 376, which derive offset values and highlight and shadow multipliers, based on I_filter, as per Equations (5b) - (5d); and apply them to the source image I_source, as per Equation (5a).
  • the steps shown in FIG. 3 correspond to grayscale image processing. Similar but more complex operations are performed when using Equations (6a) - (6e) for color images.
  • at step 380 the enhanced image is displayed within a user interface, and a user can interactively adjust parameter values, based on the appearance of the enhanced image. Processing returns to step 310, where the system checks if the user has adjusted parameter values.
  • FIGS. 4A - 4F illustrate an image before and after enhancement, an image of overexposed pixel locations for the enhanced image, and various filtered images used in the enhancement process, in accordance with a preferred embodiment of the present invention.
  • Shown in FIG. 4A is a user interface display, which is used to view results of image enhancement, and interactively modify enhancement parameters.
  • An original digital image 405, I_source, is displayed. The original image suffers from poor illumination.
  • Controls 410, 415 and 420 are used to adjust shadow-related parameters.
  • Control 410 is a slider bar for setting a value for the parameter k_sl, through an intermediate value referred to as "Amount."
  • the value of the "Amount" parameter ranges from 0 to 100, and the value of k_sl ranges correspondingly from 0 to its maximum value, as described hereinbelow.
  • the correspondence between k_sl and the "Amount" parameter may be linear or non-linear, increasing or decreasing.
  • Control 420 is a slider bar for setting a value for the parameter k_sc, referred to as "Range."
  • the value of the "Range" parameter ranges from 0 to 100, and the value of k_sc ranges correspondingly from 1 to 25.
  • the correspondence between k_sc and the "Range" parameter may be linear or non-linear, increasing or decreasing.
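Assuming the simplest (linear) correspondence consistent with the ranges stated above, the "Range" slider can be mapped to k_sc as follows; the text notes the mapping may also be non-linear, so this is only the linear case.

```cpp
#include <cassert>

// Linear mapping of the "Range" slider (0..100) onto k_sc (1..25),
// as one possible realization of the correspondence described above.
double rangeSliderToKsc(int slider)
{
    return 1.0 + (slider / 100.0) * (25.0 - 1.0);
}
```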
  • Control 430 is a slider bar, ranging from 0 to 100, for setting the window size, N.
  • Controls 425, 430 and 435 are similar to controls 410, 415 and 420, respectively, but are used to adjust highlight-related parameters instead of shadow-related parameters.
  • Control 440 is used to set the color-boost parameter k_CB.
  • Control 445 "Color Priority,” is used to select either the embodiment described in Section 1.1.2 above, or the embodiment described in Section 1.1.3 above, for performing image enhancement for a color image; i.e., to select between using a filtered luminance image and using filtered minimum and maximum images, as described hereinabove.
  • Control 450 is used to select the type of filter, from among the list:
  • a user can adjust controls 410 - 450 and view the corresponding enhanced image interactively.
  • Shown in FIG. 4B is an enhanced image 455, I_enhanced, having much better illumination and showing details that were not visible in the original image 405, I_source.
  • the response curve is useful for visualizing the effect of the local contrast enhancement on the shadows (low values of n) and on the highlights (high values of n).
  • the highlight factor, g_min(n), increases from 1 to 1 + k_hl as n ranges from 0 to S
  • the shadow factor, g_max(n), decreases from 1 + k_sl to 1, respectively, and the shapes of these curves change from linear, when the exponents k_hc and k_sc are zero, to sub-linear when these exponents are positive.
  • the response curve 460 exhibits a shape like that of a cubic function of n.
  • a control 465, "Exposure Warning," is used to display pixel locations for which pixel color values as determined by Equations (6a) - (6d) were out of range, and had to be clipped to 0 or to S. Shown in FIG. 4C is a visualization of the out of range pixel locations.
  • FIGS. 4D and 4E show the filtered minimum and maximum images, MIN_filter and MAX_filter, as described in Section 1.1.3 hereinabove, used in deriving the enhanced image shown in FIG. 4B.
  • FIG. 4F shows the luminance image L_filter, which is used when following the embodiment described in Section 1.1.2 hereinabove. It is noted that control 445 is checked in FIGS. 4D and 4E, indicating use of the Section 1.1.3 embodiment, and that control 445 is un-checked in FIG. 4F, indicating use of the Section 1.1.2 embodiment.
  • the present invention suppresses details well in a source image, when such details are smaller than the radius of the filter window.
  • the advantage of suppression of details derives from the use of filtered images to drive enhancement of the source image.
  • the presence of detail in the filtered images suppresses the corresponding detail in the enhancement process, since dark areas are made relatively lighter and light areas are made relatively darker, where "lightness” and "darkness” of an area are determined by the filtered images.
  • the present invention stretches contrast of small details, without stretching contrast of large details.
  • the filter may exhibit insensitivity to small detail in transition zones between large bright areas and large dark areas, and such detail passes through the filter without much attenuation.
  • the transition zone is an area with width equal to the radius of the filter window and oriented along boundary contours between the dark and light areas.
  • an optimal value for M is N/2.
  • the parameter N may be user-specified.
  • a default value of N is set to min(width, height)/2, where width and height are the pixel dimensions of the source image, and a user can adjust the value of N to between 0% and 100% of this default value.
  • the filter window can be as large as the entire image.
  • this choice for the maximum value of N makes the relative setting of this parameter practically resolution-independent.
  • the optimal relative setting for N is practically the same, say, for a one mega-pixel image as for a six mega-pixel image.
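A sketch of the resolution-independent window-size computation described above, assuming the default N = min(width, height)/2 scaled by the user's percentage setting (0 to 100):

```cpp
#include <cassert>
#include <algorithm>

// Compute the filter-window parameter N from the image dimensions and the
// user's relative (percentage) setting, so the same relative setting behaves
// consistently for, say, a one mega-pixel and a six mega-pixel image.
int windowParameterN(int width, int height, int percent)
{
    int defaultN = std::min(width, height) / 2; // maximum (100%) value of N
    return defaultN * percent / 100;
}
```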
  • the enhancement algorithm of the present invention is essentially invariant to scale. Specifically, enhancing, say, a six mega-pixel image and then sub-sampling to a one mega-pixel image produces an image that is nearly identical to the image produced by first sub-sampling and then enhancing.
  • Such invariance to scale is an important advantage of the present invention, since enhancement can be performed on a sub-sampled image used for previewing while a user is adjusting enhancement parameters. When the user then commits the parameters, for example, by clicking on an "Apply" button, the full-resolution image can be enhanced, and the resulting enhanced image will appear as the user expects.
  • Equation (5c) that such value corresponds to the minimum of
  • the user-specified parameter within the graphical user interface is a percentage between 0 and 100, which is multiplied by the largest values of k_hl and k_sl as determined in steps 1 - 3 above.
  • the listing in Appendix B provides sample source code for determination of parameters k_hl and k_sl in accordance with steps 1 - 4 above.
  • FIG. 5 is a flow diagram of the principal software methods in the source code listed in Appendices A and B, in accordance with a preferred embodiment of the present invention.
  • the listing in Appendices A and B includes line numbers within methods, for ease of reference.
  • the main method Run(), listed in Appendix B, calls ApplyFilter() at lines 22 - 26, which performs local contrast enhancement in accordance with a preferred embodiment of the present invention, and returns a bitmap pDIBDst.
  • the method ApplyFilter(), listed in Appendix B, carries out Equations (6a) - (6e) hereinabove.
  • ApplyFilter() calls CreateMinMaxDIBs() to generate the filtered images MIN_filter and MAX_filter, which are arrays accessed by member pointers m_pDIBMin and m_pDIBMax, respectively.
  • the highlight and shadow multipliers g_min and g_max are tabulated and stored in look-up tables dFacLUTH[] and dFacLUTS[], respectively.
  • the color boost parameter, k_CB, is stored in the variable dCBColor.
  • Lines 141 - 171 correspond to Equations (6a) - (6c) for 24-bit color images.
  • Other code sections in ApplyFilter() correspond to 8-bit, 16-bit and 64-bit color images.
  • the method CreateMinMaxDIBs() calls IP_LocalMinMax() at lines 65 - 67, which is the main method used to generate the filtered images MIN_filter and MAX_filter.
  • CreateMinMaxDIBs() also generates the source luminance image, L_source, by calling GetPixelVal() at line 27 and at line 47.
  • L_source is an array accessed by a member pointer m_pDIBY.
  • the method IP_LocalMinMax(), listed in Appendix A, generates the filtered images MIN_filter and MAX_filter.
  • the parameter pColorPriority determines whether to filter the luminance source image, as described in Sec. 1.1.2 hereinabove, or else to filter the maximum and minimum source images, as described in Sec. 1.1.3 hereinabove.
  • at lines 50 and 51 GetMinMaxImages() is called, and at lines 59 and 60 GetLuminanceImage() is called.
  • the parameter iMethod determines whether to use a median filter, as described in Sec.
  • the method IP_HybridMedianFilter() is called, and at lines 92 and 93 the method IP_HybridWeightedAverageFilter() is called, for computing MIN_filter. Similarly, at lines 125 and 126 the method IP_HybridMedianFilter() is called, and at lines 128 and 129 the method IP_HybridWeightedAverageFilter() is called, for computing MAX_filter.
  • the method IP_HybridMedianFilter() listed in Appendix A, carries out Equations (2a) - (2g) hereinabove.
  • the method ComputeHorizontalAverages() is called, to compute various sub-window averages, as described hereinbelow, and at lines 84 - 91 the appropriate averages are stored in arrays pAveWindowWest, pAveWindowEast, etc.
  • the Equations (3a) - (3c) are carried out at lines 101 - 121 using the method opt_med3() to compute the median of three numbers.
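The median-of-three helper referenced above can be sketched as follows; this is a reconstruction based on the three-exchange sorting network visible in the Appendix A fragments, not the appendix code itself.

```cpp
#include <cassert>
#include <algorithm>

// Exchange so that a <= b, mirroring the PIX_SORT macro in the listing.
static void pixSort(int& a, int& b) { if (a > b) std::swap(a, b); }

// Median of three values via three exchanges: after the first two exchanges
// the maximum sits in p2, and the final exchange leaves the median in p1.
int optMed3(int p0, int p1, int p2)
{
    pixSort(p0, p1); // now p0 <= p1
    pixSort(p1, p2); // now p2 is the maximum of the three
    pixSort(p0, p1); // now p1 is the median
    return p1;
}
```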
  • the method IP_HybridWeightedAverageFilter() carries out Equation (4) hereinabove.
  • the parameter iMethod is used to determine the weights that are used, in accordance with methods 1 - 4 described hereinabove in Sec. 1.2, as can be seen in lines 53 - 79.
  • the weights are tabulated in an array pnWeights[].
  • the weights are modified at lines 99 and 100, to incorporate the multiplication by exp(- r 2 /k 2 ).
  • the method ComputeHorizontalAverages() is called, to compute various sub-window averages, as described hereinbelow.
  • the weighted average in Equation (4) is computed at lines 159 - 166 and lines 183 - 190.
  • the method GetMinMaxImages(), listed in Appendix A, computes the source minimum and maximum images, MIN_source and MAX_source, using the methods GetPixelMin() and GetPixelMax(), respectively.
  • the method GetLuminanceImage(), listed in Appendix B, computes the luminance source image, L_source, using the method GetPixelVal().
  • ComputeHorizontalAverages(), listed in Appendix A, computes one-dimensional horizontal (2M + 1) x 1 sub-window averages. These horizontal averages are then averaged vertically at lines 61 - 76 of method IP_HybridMedianFilter() and lines 119 - 134 of method IP_HybridWeightedAverageFilter(), to derive the two-dimensional (2M + 1) x (2M + 1) sub-window averages.
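The two-pass averaging described above can be sketched as follows: horizontal (2M + 1) x 1 sums are formed first and then accumulated vertically. This is a simplified single-pixel version, without the sliding-window reuse of the actual implementation, and the names are illustrative.

```cpp
#include <cassert>
#include <vector>

// Two-dimensional (2M+1) x (2M+1) box average around center (ci, cj),
// computed as horizontal row sums that are then accumulated vertically.
// Sums are kept in integers and the single division is deferred to the end.
int boxAverage(const std::vector<std::vector<int> >& img, int ci, int cj, int M)
{
    int win = 2 * M + 1;
    long long total = 0;
    for (int j = cj - M; j <= cj + M; ++j) {
        long long rowSum = 0;                 // horizontal (2M+1) x 1 sum
        for (int i = ci - M; i <= ci + M; ++i)
            rowSum += img[j][i];
        total += rowSum;                      // vertical accumulation
    }
    return static_cast<int>(total / (win * win));
}
```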
  • PIX_SORT(p[0], p[1]); PIX_SORT(p[1], p[2]); PIX_SORT(p[0], p[1]); return (p[1]);
  • *pAveDst++ = lutAveDiv[*pAveSum++];
  • *pAveDst++ = *pAveSum++ / nSumRows;
  • BYTE* pSrcRow = (BYTE*)pBitsSrc + yOutRow * nRowBytesSrc;
  • pHorizAveWindow[i - 1] = pHorizAveWindow[i];
  • const double e = dSMCM_e;
  • const double t2 = nSMCM_t * nSMCM_t;
  • IP_CallbackFN pCBFunc, void* pCBParam
  • pBMIDst->bmiColors[c].rgbRed = c;
  • pBMIDst->bmiColors[c].rgbGreen = c;
  • pBMIDst->bmiColors[c].rgbBlue = c;
  • iOffLUTHLum[i] = iOffLUTH[i] - iOffLUTHClr[i];
  • nVal = (((*pSrc++ << 8) - iOffLUTH[nMin])
  • nOffset = iOffLUTH[nMin];
  • CLocalContrastEnhancement::CreateResponseCurves(double* dShadows, int iShadowsLevel, int iShadowsContrast, int iShadowsThreshold0, int iShadowsThreshold1, double* dHilites, int iHilitesLevel, int iHilitesContrast, int iHilitesThreshold0, int iHilitesThreshold1)

Abstract

A method for contrast enhancement for digital images filters an original image having original color values to generate a first filtered image corresponding to bright color values, and a second filtered image corresponding to dark color values. Local highlight multipliers are derived by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one. Also, local shadow multipliers are derived by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one. Local offset values are determined by applying an offset curve to the first filtered image, allowing the processing of the original image to generate a contrast-enhanced image from the original image.

Description

IMAGE CONTRAST ENHANCEMENT
COMPUTER PROGRAM LISTING
[0001] A portion of the disclosure of this patent document contains material which is subject to copyright protection. The copyright owner has no objection to facsimile reproduction by anyone of the patent document or the patent disclosure, as it appears in the Patent & Trademark Office patent file or records, but otherwise reserves all copyright rights whatsoever.
FIELD OF THE INVENTION
[0002] The present invention relates to the field of digital image processing, and more particularly to enhancement of digital images.
BACKGROUND OF THE INVENTION
[0003] Digital image enhancement concerns applying operations to a digital image in order to improve a picture, or restore an original that has been degraded. Digital image enhancement dates back to the origins of digital image processing. Unlike film-based images, digital images can be edited interactively at will, and detail that is indiscernible in an original image can be brought out. Often images that appear too dark because of poor illumination or too bright because of over illumination can be restored to accurately reflect the original scene.
[0004] Enhancement techniques involve digital image processing and can include inter alia filtering, morphology, composition of layers and random variables. Conventional image processing software often includes a wide variety of familiar editing tools used for enhancing an image, such as edge sharpening, smoothing, blurring, histogram equalization, gamma correction, dithering, color palette selection, paint brush effects and texture rendering. Generally, enhancement techniques include one or more parameters, which a user can dynamically adjust to fine-tune the enhanced image.
[0005] Several factors distinguish one enhancement technique over another, including inter alia (i) visual quality of the enhanced image, (ii) robustness, (iii) speed and (iv) ease of use. Requirements (ii) and (iv) generally conflict, since the ability to enhance a variety of different types of imagery generally requires a large number of complex adjustable parameters, where ease of use requires a small number of simple adjustable parameters, each of which is intuitive in its effect on the enhanced image. Regarding requirement (iii), for interactive image editing applications, wherein a user repeatedly adjusts enhancement parameters and watches as the enhanced image changes, if the enhancement operation takes more than a fraction of a second to complete, the performance becomes sluggish, and the capability of accurately fine-tuning the enhancement as desired deteriorates. This speed requirement, together with the proliferation of mega-pixel color images, makes design of an enhancement algorithm quite challenging. [0006] Contrast enhancement is a particular aspect of image enhancement that concerns color variations that are not clearly discernible within an image, because of dark shadows or bright highlights. Because of the eye's relative insensitivity to variations in dark colors and variations in bright colors, important details within an image can be missed. Conventional contrast enhancement can involve transfer functions for expanding dynamic range in some parts of the color spectrum and compressing dynamic range in other parts, and gamma correction for adjusting brightness and contrast.
[0007] US Patent Nos. 6,633,684 and 6,677,959 to James describe contrast enhancement by fitting an original source image, such as an x-ray, between upper and lower low frequency images, and expanding the dynamic color range locally to an interval of color values between the lower and upper frequency image color values. Such expansion serves to visually amplify subtle variations in color value between the lower and upper frequency image color values.
SUMMARY OF THE DESCRIPTION
[0008] The present invention concerns a novel method and system for image enhancement that uses one or more filtered images to generate offsets and multipliers for adjusting pixel color values. The image enhancement can be controlled through adjustable user parameters that perform leveling and contrast adjustment for both shadow portions and highlight portions of an original image. These parameters are intuitive and easy to adjust. [0009] The present invention can be implemented extremely efficiently, so as to achieve real-time performance whereby image enhancement is performed within a fraction of a second immediately upon adjustment of a user parameter, for multi-megapixel images captured by today's digital cameras. As such, a user is provided with continuous real-time feedback, which enables extremely accurate fine-tuning of the enhancement. Using the present invention, a user is able to instantly compare an enhanced image with a source image, and compare one enhanced image with another enhanced image.
[0010] Experimental results have shown that the present invention is robust, and yields excellent quality results, even for images where most of the detail is obscured in shadows or bright highlights.
[0011] Specifically, the present invention generates two response curves, one for shadows and one for highlights, each a function of color value. The values of the response curves are used as multipliers for pixel color values. The response curve for highlights is an exponential curve, and increases monotonically from a value of one, corresponding to a color value of 0, to a value greater than or equal to one, corresponding to a maximum color value. As such, the multiplier for highlights visually amplifies bright color variation. The response curve for shadows is also an exponential curve, and decreases monotonically from a value greater than or equal to one, corresponding to a color value of 0, to a value of one, corresponding to a maximum color value. As such, the multiplier for shadows visually amplifies dark color variation. [0012] In accordance with a preferred embodiment of the present invention, the highlight response curve is derived from a first filtered image and the shadow response curve is derived from a second filtered image. In one embodiment of the present invention, the first filtered image is a filter of an image of minimum source color values and the second filtered image is a filter of an image of maximum source color values. In another embodiment of the present invention, the first and second filtered images are filters of an image of source luminance values.
[0013] In addition, the present invention uses an offset curve, which is also a function of color value, to scale dynamic color range to the range of color values greater than or equal to the offset value.
[0014] Although US Patent Nos. 6,633,684 and 6,677,959 to James describe use of upper and lower low frequency images, in distinction to James the present invention generates response curves for use as multipliers, whereas James uses the upper and lower low frequency images for local scaling of dynamic range. [0015] There is thus provided in accordance with a preferred embodiment of the present invention a method for contrast enhancement for digital images, including filtering an original image having original color values, to generate a first filtered image, corresponding to bright color values, and a second filtered image corresponding to dark color values, deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the first filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, multiplying the shifted color values by the local highlight multipliers, and further multiplying the shifted color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
[0016] There is further provided in accordance with a preferred embodiment of the present invention a system for enhancing contrast of digital images, including a filter processor for filtering an original image having original color values, to generate a first filtered image, corresponding to bright color values, and a second filtered image corresponding to dark color values, and an image enhancer coupled to said filter processor for (i) deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, (ii) deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, (iii) deriving local offset values by applying an offset curve to the first filtered image, (iv) subtracting the local offset values from the original color values to generate shifted color values, (v) multiplying the shifted color values by the local highlight multipliers, and (vi) further multiplying the shifted color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
[0017] There is yet further provided in accordance with a preferred embodiment of the present invention a computer- readable storage medium storing program code for causing a computer to perform the steps of filtering an original image having original color values, to generate a first filtered image, corresponding to bright color values, and a second filtered image corresponding to dark color values, deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the first filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, multiplying the shifted color values by the local highlight multipliers, and further multiplying the shifted color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
[0018] There is additionally provided in accordance with a preferred embodiment of the present invention a method for contrast enhancement for digital images, including filtering an original image having original color values, to generate a filtered image corresponding to bright color values, deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, and multiplying the shifted color values by the local highlight multipliers. [0019] There is moreover provided in accordance with a preferred embodiment of the present invention a method for contrast enhancement for digital images, including filtering an original image having original color values, to generate a filtered image corresponding to dark color values, deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, and processing the original image, comprising multiplying the original color values by the local shadow multipliers, thereby generating a contrast- enhanced image from the original image.
[0020] There is further provided in accordance with a preferred embodiment of the present invention a system for enhancing contrast of digital images, including a filter processor for filtering an original image having original color values, to generate a filtered image corresponding to bright color values, and an image enhancer coupled to said filter processor for (i) deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, (ii) deriving local offset values by applying an offset curve to the filtered image, (iii) subtracting the local offset values from the original color values to generate shifted color values, and (iv) multiplying the shifted color values by the local highlight multipliers, thereby generating a contrast-enhanced image from the original image. [0021] There is yet further provided in accordance with a preferred embodiment of the present invention a system for enhancing contrast of digital images, including a filter processor for filtering an original image having original color values, to generate a filtered image corresponding to dark color values, and an image enhancer coupled to said filter processor for (i) deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, and (ii) multiplying the original color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image. 
[0022] There is additionally provided in accordance with a preferred embodiment of the present invention a computer-readable storage medium storing program code for causing a computer to perform the steps of filtering an original image having original color values, to generate a filtered image corresponding to bright color values, deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, deriving local offset values by applying an offset curve to the filtered image, and processing the original image, including subtracting the local offset values from the original color values to generate shifted color values, and multiplying the shifted color values by the local highlight multipliers.
[0023] There is moreover provided in accordance with a preferred embodiment of the present invention a computer-readable storage medium storing program code for causing a computer to perform the steps of filtering an original image having original color values, to generate a filtered image corresponding to dark color values, deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, and processing the original image, comprising multiplying the original color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] The present invention will be more fully understood and appreciated from the following detailed description, taken in conjunction with the drawings in which:
[0025] FIG. 1A is an illustration of filter windows and sub-filter windows used in deriving a median filtered image, in accordance with a preferred embodiment of the present invention;
[0026] FIG. 1B is an illustration of filter windows and sub-filter windows used in deriving a weighted average filtered image, in accordance with a preferred embodiment of the present invention;
[0027] FIG. 2 is a simplified block diagram of the essential components of a system for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention;
[0028] FIG. 3 is a simplified flowchart of the essential steps for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention;
[0029] FIGS. 4A - 4F illustrate an image before and after enhancement, an image of overexposed pixel locations for the enhanced image, and various filtered images used in the enhancement process, in accordance with a preferred embodiment of the present invention; and
[0030] FIG. 5 is a flow diagram of the principal software methods in the source code listed in Appendices A and B, in accordance with a preferred embodiment of the present invention.

LIST OF APPENDICES
[0031] Appendix A is a detailed listing of computer source code written in the C++ programming language for implementing a median or a weighted average filter in accordance with a preferred embodiment of the present invention; and
[0032] Appendix B is a detailed listing of computer source code written in the C++ programming language for implementing color enhancement, in accordance with a preferred embodiment of the present invention.
DETAILED DESCRIPTION
[0033] The present invention is preferably embodied on a general-purpose consumer-grade computer, including a processor for executing an image-processing software application, a memory for storing program code and for storing one or more digital images, one or more input devices such as a keyboard and a mouse for enabling a user to interact with the software application, a display device, and a bus for intercommunication between these components. Preferably, the computer also includes an interface for receiving digital images from a scanner and from a digital camera, and a network connection for receiving and transmitting digital images over the Internet. The present invention can be embodied in mobile computing devices, in Internet appliances such as electronic picture frames, within vehicles such as airplanes, and within equipment such as medical scanners. The present invention can be implemented in software or in general-purpose or special-purpose hardware, or in a software-hardware combination.

[0034] The present invention operates by processing an original source image and deriving a contrast-enhanced image therefrom. The present invention applies to grayscale and color images, with arbitrary color depth. In a preferred embodiment, the present invention includes two phases; namely, a first phase ("Phase One") for deriving one or more filtered images, and a second phase ("Phase Two") for using the one or more filtered images to derive local highlight and shadow multipliers, and local offset values, for each pixel location of the original source image. The enhancement process preferably subtracts the local offset values from color values of the original source image, and multiplies the resulting differences by the local highlight and shadow multipliers.

[0035] The highlight and shadow multipliers are preferably derived from highlight and shadow response curves that visually amplify color variation in bright and dark portions of the source image, respectively.
The offset values are preferably derived from curves that stretch the contrast of pixels away from the maximum color value.
[0036] Preferably, the highlight response curve is an exponential function of color value, of the general form f(x) = 1 + k*x^n, increasing monotonically from a value of one, corresponding to a color value of zero, which is the darkest color value, to a value greater than or equal to one, corresponding to a maximum color value, which is the brightest color value. The variable x is a linearly scaled color value, which ranges from x = 0 to x = 1 as the color value ranges from zero to its maximum value. Similarly, the shadow response curve is preferably an exponential function of color value, of the general form f(x) = 1 + k*(1-x)^n, decreasing monotonically from a value greater than or equal to one, corresponding to a color value of zero, to a value of one, corresponding to the maximum color value. As such, the highlight response curve, when used as a multiplier, serves to visually amplify color variations for bright colors; and the shadow response curve, when used as a multiplier, serves to visually amplify color variations for dark colors. Thus careful construction of the highlight and shadow response curves results in high-quality contrast enhancement.
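The two response curves described in paragraph [0036] can be sketched as follows. This is an illustrative C++ sketch only; the function names and the particular parameter values in the comments are chosen for illustration and are not part of the invention.

```cpp
#include <cmath>

// Highlight response curve of the general form f(x) = 1 + k*x^n, where x is a
// color value linearly scaled to [0,1]; f(0) = 1 and f(1) = 1 + k.
double highlightResponse(double x, double k, double n) {
    return 1.0 + k * std::pow(x, n);
}

// Shadow response curve of the general form f(x) = 1 + k*(1-x)^n;
// f(0) = 1 + k and f(1) = 1.
double shadowResponse(double x, double k, double n) {
    return 1.0 + k * std::pow(1.0 - x, n);
}
```

Both curves equal one at the end of the range they leave untouched, so that each multiplier acts only on its own tonal region.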
[0037] In accordance with a preferred embodiment of the present invention, the highlight and shadow response curves are controllable by adjusting values of parameters. These parameters determine the shapes of the highlight and shadow response curves. A user can fine-tune values of these parameters interactively, using a graphical user interface, in order to obtain a satisfactory contrast-enhanced image.
[0038] Further in accordance with a preferred embodiment of the present invention, the highlight and shadow multipliers are not obtained by applying the highlight and shadow response curves directly to the source image, since the benefit of the contrast enhancement, namely bringing out detail in the highlight and shadow areas, would be offset by detailed variation in the multipliers at each pixel location. In order for contrast enhancement to be effective, the highlight and shadow multipliers should be relatively insensitive to local detail variation in the source image.
[0039] Instead, the highlight and shadow multipliers are preferably derived by applying the highlight and shadow response curves to corresponding filtered versions of the source image. The filtered versions serve to dampen local detail variation, and thus provide relatively smooth local color values to which the response curves can be applied, and thus a smooth base upon which to derive local highlight and shadow multipliers for the source image.

[0040] The present invention can be embodied using a variety of types of filters, such as filters based on medians and filters based on weighted averages, which have been found to be very suitable. Preferably, filter parameters such as window sizes and choice of weights can be adjusted by a user.

Enhancement Algorithm
[0041] For purposes of organization, a preferred enhancement algorithm is described in the following description according to Phase One and Phase Two, according to grayscale image enhancement and color image enhancement, and according to the type of filters used. Specifically, Phase One is described hereinbelow for two preferred types of filters: a modified median filter, and a modified weighted average filter.
[0042] The modified filters used in the present invention differ from the prior art in that sub-window averages of pixel color values are used in place of single pixel color values. I.e., entire sub-windows of pixel values are treated as if they are lumped together and located at single pixel locations. Such modification is better suited for large windows, say, with dimensions on the order of 100 pixels, and yields better representations of local contrast than prior art filters.
1. Phase One
[0043] Phase One is also described hereinbelow for grayscale images, which have a single color channel, and color images, which have multiple color channels. Two approaches are described for color image enhancement: a first approach that uses a single filtered image, based on filtering luminance source color values; and a second approach that uses two filtered images, based on filtering minimum and maximum source color values.
1.1 Modified Median Filter

[0044] In this embodiment, a modified median filter is applied to a source image, Isource, to derive a filtered image Ifilter. The median filter used in the preferred embodiment is a hybrid multi-stage median filter, which is a modified version of the finite-impulse response (FIR) median hybrid filter described in Nieminen, A., Heinonen, P. and Neuvo, Y., "A new class of detail-preserving filters for image processing," IEEE Trans. Pattern Analysis and Machine Intelligence, Vol. 9, Jan. 1987, and in Arce, G., "Detail-preserving ranked-order based filters for image processing," IEEE Trans. Acoustics, Speech and Signal Processing, Vol. 37, No. 1, Jan. 1989. In distinction to the prior art, the present invention uses squares instead of directed lines to construct the sub-filter windows.

[0045] An advantage of using squares with median hybrid filters instead of directed lines is that the number of directed lines required to preserve details is large when using large filter window sizes. The prior art uses 8 sub-filters with a filter of radius 2, and more sub-filters, which use directed lines at additional angles, are generally recommended with filters of larger radii. For the present invention, a filter radius as large as 500 is not uncommon, and thus the prior art would require an enormous number of sub-filters.

[0046] Another advantage of using squares with median hybrid filters instead of directed lines is that the average value computed at each square may be re-used eight times over the course of deriving the median filtered image of the present invention.

[0047] Reference is now made to FIG. 1A, which is an illustration of filter windows and sub-windows used in deriving a modified median filtered image, in accordance with a preferred embodiment of the present invention. Shown in FIG.
1A is a filter window of dimension (2N+1) x (2N+1) pixels, centered at a pixel (i,j), together with eight sub-windows, each of dimension (2M+1) x (2M+1) pixels. The eight sub-windows are designated "East," "West," "North," "South," "Northwest," "Southeast," "Northeast" and "Southwest," according to their relative positions with respect to pixel (i,j). Thus the center of the West window, for example, is (i - (N-M), j), and its pixel coordinates range from (i - N, j - M) at the lower left corner to (i - N + 2M, j + M) at the upper right corner. Here N and M are arbitrary numbers with 0 < M < N, and the pixel numbering advances from left to right (first coordinate), bottom to top (second coordinate). In the illustration shown in FIG. 1A, N = 10 and M = 3.

[0048] In accordance with a preferred embodiment of the present invention, given a source image, Isource, for each pixel location (i,j), the averages of Isource over each of the sub-windows are computed. The average of Isource over the West sub-window is denoted I-West(i,j); namely,

I-West(i,j) = [1/(2M+1)^2] * Sum(k=-M..M) Sum(l=-M..M) Isource(i-N+M+k, j+l) ,    (1)

the (i,j) denoting that the West sub-window is positioned westward relative to pixel location (i,j). The other averages are similarly denoted.
[0049] When the various averages have been computed, the following seven medians are determined:

med-1(i,j) = median(I-East(i,j), I-West(i,j), Isource(i,j)); (2a)
med-2(i,j) = median(I-North(i,j), I-South(i,j), Isource(i,j)); (2b)
med-3(i,j) = median(I-Northwest(i,j), I-Southeast(i,j), Isource(i,j)); (2c)
med-4(i,j) = median(I-Northeast(i,j), I-Southwest(i,j), Isource(i,j)); (2d)
med-12(i,j) = median(med-1(i,j), med-2(i,j), Isource(i,j)); (2e)
med-34(i,j) = median(med-3(i,j), med-4(i,j), Isource(i,j)); and (2f)
Ifilter(i,j) = median(med-12(i,j), med-34(i,j), Isource(i,j)). (2g)
The median of three numbers is the number in the middle when the three numbers are ordered.
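The median-of-three operation and the cascade of Equations (2a) - (2g) can be sketched as follows. This is an illustrative C++ sketch only; the function names are chosen for illustration, and the eight sub-window averages are assumed to have already been computed for the pixel in question.

```cpp
#include <algorithm>

// Median of three values: the number in the middle when the three are ordered.
double median3(double a, double b, double c) {
    return std::max(std::min(a, b), std::min(std::max(a, b), c));
}

// Cascade of Equations (2a)-(2g): e, w, n, s, nw, se, ne, sw are the eight
// sub-window averages relative to pixel (i,j), and x = Isource(i,j).
double hybridMedian(double e, double w, double n, double s,
                    double nw, double se, double ne, double sw, double x) {
    double med1  = median3(e, w, x);         // (2a)
    double med2  = median3(n, s, x);         // (2b)
    double med3_ = median3(nw, se, x);       // (2c)
    double med4  = median3(ne, sw, x);       // (2d)
    double med12 = median3(med1, med2, x);   // (2e)
    double med34 = median3(med3_, med4, x);  // (2f)
    return median3(med12, med34, x);         // (2g)
}
```

Each stage mixes one pair of opposite sub-windows with the center value, so detail oriented along any of the four axes survives into the final median.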
[0050] The result of the seventh median calculation in Equation
(2g) is the preferred median filter. A way to represent this median filter is to imagine each sub-filter window lumped into a single pixel positioned adjacent to pixel (i,j) according to the relative position of the sub-window, with a pixel value equal to the average value of the source image over the sub-window. This determines a 3x3 pixel window,

a b c
d x e
f g h

centered at pixel location (i,j), where x = Isource(i,j), a = I-Northwest(i,j), and similarly for b through h. Then the value of Ifilter(i,j) is determined by

u = median(median(d,e,x), median(b,g,x), x); (3a)
v = median(median(a,h,x), median(c,f,x), x); and (3b)
Ifilter(i,j) = median(u, v, x). (3c)
1.1.1 Modified Median Filter - Grayscale Source Image
[0051] When the original source image is a grayscale image, with a single color component, then Equations (2a) - (2g) directly determine the preferred median filter Ifilter.
1.1.2 Modified Median Filter with Filtered Luminance Image - Color Source Image

[0052] For an original source image that is a color image with red, green and blue pixel color components, (R(i,j), G(i,j), B(i,j)), two preferred methods for determining median filtered images are described herein. A first preferred method uses the luminance component of the pixel color values, Lsource, as a source image, and derives a median filtered image, Lfilter, therefrom. The luminance component of a pixel color with color values (R, G, B) is preferably determined by a conventional formula such as L = 0.299*R + 0.587*G + 0.114*B.
1.1.3 Modified Median Filter with Filtered Minimum and Maximum Images - Color Source Image
[0053] A second preferred method uses two median filtered images, based on the maximum and minimum of the RGB color values. Specifically, denote MINsource(i,j) = minimum(R(i,j), G(i,j), B(i,j)) and MAXsource(i,j) = maximum(R(i,j), G(i,j), B(i,j)). The median filtered image from MINsource, denoted MINfilter, and the median filtered image from MAXsource, denoted MAXfilter, are both used in the second preferred method.
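The per-pixel source quantities used by the two approaches, the luminance of Section 1.1.2 and the channel minimum and maximum of Section 1.1.3, can be sketched as follows (illustrative C++; the function names are chosen for illustration).

```cpp
#include <algorithm>
#include <cmath>

// Conventional luminance formula of paragraph [0052].
double luminance(double r, double g, double b) {
    return 0.299 * r + 0.587 * g + 0.114 * b;
}

// Per-pixel minimum and maximum of the RGB components, the inputs to the
// MINsource and MAXsource images of paragraph [0053].
double minChannel(double r, double g, double b) { return std::min({r, g, b}); }
double maxChannel(double r, double g, double b) { return std::max({r, g, b}); }
```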
1.2 Modified Weighted Average Filter
[0054] It may be appreciated by those skilled in the art that filters other than median filters may be used in Phase One of the present invention. It has been found by experimentation with various types of filters that certain weighted average filters perform well with contrast enhancement, in particular when the filter coefficients generally have an inverse gradient dependency, as described hereinbelow.

[0055] As with the modified median filter, the modified weighted average filter operates on an original source image, Isource, to produce a filtered image, Ifilter. The modified weighted average filter is computed using weighted averages of sample values, where the weights corresponding to sample values are constructed as described hereinbelow.
[0056] In accordance with a preferred embodiment of the present invention, within a sliding square filter window of width and height 2N+1 centered at pixel location (i,j), square sub-windows, each of width and height 2M+1, are positioned so as to be centered at pixel locations (i+kd, j+ld), for some fixed integer parameter d. It is assumed that M is a divisor of N, i.e., N = Mb for some integer b; and that the parameter d, which controls the spacing between the sub-windows, is a divisor of N-M, i.e., N-M = nd for some integer n. The indices k and l above independently range from -n to +n. The average color value over such a sub-window, uniformly averaged over the (2M+1)^2 pixels within the sub-window, is denoted by akl and is considered as if it is lumped into a single pixel value.

[0057] The filtered value Ifilter(i,j) is computed as a weighted average of the color value at (i,j), together with the (2n+1)^2 sub-window averages akl. For example, with n = 2 there are twenty-five sub-window averages, say a(-2,-2), a(-2,-1), ..., arranged as follows:

a(-2,2)  a(-1,2)  a(0,2)  a(1,2)  a(2,2)
a(-2,1)  a(-1,1)  a(0,1)  a(1,1)  a(2,1)
a(-2,0)  a(-1,0)  a(0,0)  a(1,0)  a(2,0)
a(-2,-1) a(-1,-1) a(0,-1) a(1,-1) a(2,-1)
a(-2,-2) a(-1,-2) a(0,-2) a(1,-2) a(2,-2)

Each of the values akl denotes the average of the pixel color values of Isource over a sub-window of size (2M+1) x (2M+1), centered at (i+kd, j+ld). For example, the sub-window corresponding to a(-2,2) has its lower left corner at pixel location (i-2d-M, j+2d-M) and its upper right corner at pixel location (i-2d+M, j+2d+M).

[0058] Reference is now made to FIG. 1B, which is an illustration of filter windows and sub-windows used in deriving a modified weighted average filtered image, in accordance with a preferred embodiment of the present invention. Shown in FIG. 1B is a (2N+1) x (2N+1) filter window centered at pixel location (i,j). Also shown are pixel locations with spacings of d between them, vertically and horizontally, centered at pixel location (i,j). The pixel with a solid black fill color, for example, is located at (i+2d, j-d). The sub-window illustrated in FIG. 1B, of dimensions (2M+1) x (2M+1), is centered at the solid black pixel. Such sub-windows are positioned at each of the twenty-five pixel locations (i+kd, j+ld) as k and l each range independently from -2 to +2, and the average of the (2M+1)^2 color values within each such sub-window is the quantity denoted by akl hereinabove. The average over the sub-window illustrated in FIG. 1B, for example, is the quantity a(2,-1).

[0059] It may be appreciated by those skilled in the art that the sub-window averages akl may be simple uniform averages, as described above, or non-uniform averages, such as Gaussian blur averages.
[0060] The filtered value for pixel location (i,j) is given by the weighted average

Ifilter = w*Isource + Sum(k=-n..n) Sum(l=-n..n) wkl*akl ,    (4)

where the weights w and wkl are normalized so as to sum to one. In applying Equation (4), the terms Ifilter, Isource and akl are evaluated at specific pixel locations (i,j); i.e., Ifilter(i,j), Isource(i,j) and akl(i,j).
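Equation (4) can be sketched as follows. This is an illustrative C++ sketch only: the sub-window averages and their weights are assumed to be supplied as flat lists, and the explicit division by the weight sum performs the normalization described above.

```cpp
#include <cstddef>
#include <vector>

// Weighted average of Equation (4): the center source value with weight w,
// plus the sub-window averages akl with weights wkl, normalized so the
// effective weights sum to one.
double weightedFilter(double source, double w,
                      const std::vector<double>& a,     // sub-window averages
                      const std::vector<double>& wkl) { // matching weights
    double num = w * source, den = w;
    for (std::size_t i = 0; i < a.size(); ++i) {
        num += wkl[i] * a[i];
        den += wkl[i];
    }
    return num / den;   // division normalizes the weights to sum to one
}
```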
[0061] It may be appreciated by those skilled in the art that the choice of b and n can be used to provide a trade-off between computational complexity and accuracy in the filtered image. Increasing b and n yields a filtered image with fewer artifacts due to the small sub-window sizes, but increases the computational time. In the limiting case where M = 0 and d = 1, the modified weighted average in Equation (4) reduces to a conventional weighted average filter.
[0062] Various choices for the weights correspond to different types of filters. It is expected that good choices place more weight on sample values that have a value closer to the value of the center pixel. For inverse gradient filters, for example, the weights are chosen so that less weight is placed on averages akl that differ more from the value of Isource(i,j). Four candidate methods for assigning the weights are as follows:
1. Assign w = 1, and wkl = 1 or 0 according to whether Delta is less than Sigma or not, respectively, where Delta = |Isource - akl| and Sigma is a constant.
2. Assign w = 1, and wkl = (1 + Delta/k1)^(-k2), where k1 and k2 are constants.
3. Assign w = 1, and wkl = exp(-Delta^2/Sigma^2).
4. Assign w = 1, and wkl = exp(-r^2/k^2 - Delta^2/Sigma^2), where r is the distance between the center pixel (i,j) and the center of the sub-window corresponding to akl, and k is a constant. For computational purposes, r is calculated as the Euclidean pixel distance normalized by the pixel distance, d, between adjacent samples; i.e., r = sqrt[(kd)^2 + (ld)^2] / d = sqrt(k^2 + l^2). Such normalization by d ensures that the weight coefficients are independent of N.
In each method, 1 - 4, the weights are preferably re-normalized after being determined as above, by dividing them by their sum, thus ensuring that the normalized weights add up to one. The inverse gradient feature requires that the weights wkl be decreasing or, more precisely, non-increasing, in their dependence on Delta. With method 4 above, the weights are also decreasing in their dependence on the distance, r, of the sub-window akl from the center pixel (i,j).
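Weighting method 3, including the final re-normalization, can be sketched as follows. This is an illustrative C++ sketch only; the function name is chosen for illustration, and the sub-window averages are assumed to be supplied as a flat list.

```cpp
#include <cmath>
#include <vector>

// Method 3 weights: wkl = exp(-Delta^2/Sigma^2) with Delta = |Isource - akl|.
// The center weight w is fixed at 1, and all weights are then re-normalized
// so that they sum to one. Element 0 of the result is the center weight.
std::vector<double> method3Weights(double source, const std::vector<double>& a,
                                   double sigma) {
    std::vector<double> w;
    w.push_back(1.0);                              // center weight w = 1
    double sum = 1.0;
    for (double akl : a) {
        double delta = std::fabs(source - akl);    // inverse gradient term
        double wkl = std::exp(-(delta * delta) / (sigma * sigma));
        w.push_back(wkl);
        sum += wkl;
    }
    for (double& x : w) x /= sum;                  // re-normalize to sum to one
    return w;
}
```

A sub-window whose average matches the center value receives the same weight as the center pixel; one that differs greatly receives nearly zero weight, which is the inverse gradient behavior described above.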
[0063] Respective references for the weightings in methods 1 - 4 are:
1. Lee, J.-S., "Digital image smoothing and the Sigma Filter," Computer Vision, Graphics and Image Processing, Vol. 24, 1983, pages 255 - 269.
2. Wang, D. C. C., Vagnucci, A. H. and Li, C. C., "Gradient inverse weighted smoothing scheme and the evaluation of its performance," Computer Graphics and Image Processing, Vol. 15, 1981, pages 167 - 181.
3. Saint-Marc, P., Chen, J. S. and Medioni, G., "Adaptive smoothing: a general tool for early vision," Proceedings of the Conference on Computer Vision and Pattern Recognition, 1989, pages 618 - 624.
4. Smith, S. and Brady, J., "SUSAN - a new approach to low level image processing," International Journal of Computer Vision, Vol. 23, No. 1, 1997, pages 45 - 78.
In distinction from the present invention, reference 2 teaches a different expression for Delta, and does not use the exponent k2 nor the addition of 1 to Delta/k1. Reference 3 teaches a discrete gradient, as computed within a 3x3 window, instead of Delta = |Isource - akl| as above. Additionally, reference 3 teaches a term 2k^2 in the denominator of the exponential term, instead of Sigma^2, as above. Reference 4 above teaches a value of w = 0 for the weight at the center pixel (i,j). However, it has been found that, for the present invention, a value of w = 1 yields better performance for contrast enhancement.
[0064] A particular characteristic of weighted average filters, as opposed to median filters, is that weighted average filters, in general, are sensitive to the magnitude of detail in addition to its size, whereas median filters, in general, are primarily sensitive only to size. Thus, using a weighted average filter, small but very bright, or very dark, features of an image are left intact in the filtered image - which serves to protect such areas from being over-exposed or under-exposed in Phase Two. As such, whereas the parameter N often has to be reduced using a median filter in order to prevent over-exposure or under-exposure in Phase Two, this is generally not required when using a weighted average filter. In fact, using a large value of N tends to be beneficial with weighted average filters, for preserving the relative balance of objects in an image. For example, a small value of N tends to eliminate shadows from a face, giving the face an unnatural flat look, whereas a large value of N preserves the shadows.
2. Phase Two
[0065] Phase Two of a preferred embodiment of the present invention derives a desired enhanced image, denoted Ienhanced, from the source image Isource and from the filtered images computed in Phase One. Various parameters used in the derivation of Ienhanced are user-adjustable, as described hereinbelow. As such, the user can refine the enhanced image by iteratively adjusting these user parameters based on the appearance of the enhanced image.
2.1 Grayscale Images
[0066] For grayscale images, Phase Two proceeds as follows:
Ienhanced = (Isource - goffset) * gmin * gmax ,    (5a)

where

gmin = 1 + khl * (Ifilter/S)^khc ,    (5b)

gmax = 1 + ksl * (1 - Ifilter/S)^ksc ,    (5c)

goffset = S * (1 - 1/gmin) ,    (5d)
S is the maximum value for a color component (e.g., 255 for 8-bit color channels), and khl, khc, ksl and ksc are user-adjustable parameters. Specifically, khc is the "highlight contrast" and ranges between 1 and 25; khl is the "highlight level" and ranges between 0 and a maximum value determined from khc, as described hereinbelow; ksc is the "shadows contrast" and ranges between 1 and 25; and ksl is the "shadows level" and ranges between 0 and a maximum value determined from ksc, as described hereinbelow. Preferably, Ienhanced is clipped to take values between 0 and S; i.e., Ienhanced is set to zero if the value calculated in Equation (5a) is negative, and Ienhanced is set to S if the value calculated in Equation (5a) exceeds S.
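Equations (5a) - (5d), including the clipping described above, can be sketched for a single pixel as follows (illustrative C++; the function name is chosen for illustration).

```cpp
#include <algorithm>
#include <cmath>

// Grayscale Phase Two for one pixel: source and filtered are the Isource and
// Ifilter values at (i,j); S is the maximum color value (255 for 8-bit
// channels); khl/khc and ksl/ksc are the highlight and shadow level/contrast
// parameters.
double enhancePixel(double source, double filtered, double S,
                    double khl, double khc, double ksl, double ksc) {
    double gmin = 1.0 + khl * std::pow(filtered / S, khc);        // (5b)
    double gmax = 1.0 + ksl * std::pow(1.0 - filtered / S, ksc);  // (5c)
    double goffset = S * (1.0 - 1.0 / gmin);                      // (5d)
    double out = (source - goffset) * gmin * gmax;                // (5a)
    return std::clamp(out, 0.0, S);   // clip to the valid color range
}
```

Note that when source = filtered = S the result is exactly S, reflecting the fixed point property of goffset discussed in paragraph [0067].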
[0067] The choice of goffset is such that the color value I = S is a fixed point of the function (I - goffset) * gmin. Since gmin is generally greater than one, this function serves to stretch color values, I, away from S.
[0068] In accordance with a preferred embodiment of the present invention, Equations (5a) - (5d) are applied point-wise at each pixel location (i,j) within the enhanced image. I.e., Ienhanced in Equation (5a) corresponds to Ienhanced(i,j), Isource corresponds to Isource(i,j), and Ifilter corresponds to Ifilter(i,j).
2.2 Color Images
[0069] For color images with red, green and blue color components, Phase Two proceeds as follows:
Cenhanced = (1-kCB) * (E-D) + [(1-kCB)*D + kCB*E] * gmin*gmax ,    (6a)

where the terms D and E are given by

D = Lsource - goffset ,    (6b)

E = Csource - {[kCB*Lsource + (1-kCB)*Csource]/Lsource} * goffset ,    (6c)

gmin = 1 + khl * (MINfilter/S)^khc ,    (6d)

gmax = 1 + ksl * (1 - MAXfilter/S)^ksc ,    (6e)
and where
Csource is a color component from the source image;
Lsource is a luminance component from the source image;

Cenhanced is the enhanced color component; kCB is a user-adjustable parameter - the "color boost" parameter, ranging between 0 and 1; and the parameters goffset, S, khl, khc, ksl and ksc are as defined hereinabove with respect to Equations (5a) - (5d). I.e., Equation (6a) represents three equations, for C corresponding to (i) the red color component, (ii) the green color component and (iii) the blue color component.
[0070] It is noted that for the special case where Csource = Lsource, or for the special case where kCB = 1, then Equation (6a) reduces to

Cenhanced = (Csource - goffset) * gmin * gmax ,

consistent with Equation (5a) hereinabove. As such, the color boost parameter kCB can be considered as a weighting, or mixing factor, between Csource and Lsource, with no weight on Lsource when kCB = 1. It is further noted that for the special case where kCB = 0, then Equation (6a) reduces to

Cenhanced = (Lsource - goffset) * (gmin*gmax + Csource/Lsource - 1) .

For example, if gmin = 1 and gmax = 2, then goffset = 0 and the above expression reduces to Cenhanced = Csource + Lsource.
[0071] Equations (6a) - (6e) correspond to the embodiment of Phase One for color images described hereinabove in Section 1.1.3, in which minimum and maximum images are filtered. In the embodiment of Phase One for color images described hereinabove in Section 1.1.2, in which the luminance image is filtered, both MINfilter and MAXfilter are replaced by Lfilter in Equations (6d) and (6e).
[0072] As described hereinabove with respect to Equations (5a) - (5d), the enhanced color components Cenhanced are preferably clipped so as to range from 0 to S.
[0073] It is noted that for color images, Equations (5a) - (5d) are not simply applied independently to each individual color component, since doing so would yield an over-saturated image as a result of the multiplications by gmin and gmax, each of which is greater than one. Instead, Equations (6a) - (6e) include multiplications by gmin and gmax, which serve to increase saturation; and addition of the terms (1-kCB)*D and kCB*E, which serve to reduce saturation.
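Equations (6a) - (6e), including clipping, can be sketched for a single pixel and a single color channel as follows. This is an illustrative C++ sketch only; the function and parameter names are chosen for illustration, and a nonzero luminance is assumed.

```cpp
#include <algorithm>
#include <cmath>

// Color Phase Two for one channel of one pixel: c and lum are the Csource and
// Lsource values; minF and maxF are the MINfilter and MAXfilter values at this
// pixel; kcb is the color boost parameter; the remaining parameters are as in
// the grayscale case. Assumes lum > 0.
double enhanceColor(double c, double lum, double minF, double maxF, double S,
                    double khl, double khc, double ksl, double ksc, double kcb) {
    double gmin = 1.0 + khl * std::pow(minF / S, khc);              // (6d)
    double gmax = 1.0 + ksl * std::pow(1.0 - maxF / S, ksc);        // (6e)
    double goffset = S * (1.0 - 1.0 / gmin);
    double D = lum - goffset;                                       // (6b)
    double E = c - ((kcb * lum + (1.0 - kcb) * c) / lum) * goffset; // (6c)
    double out = (1.0 - kcb) * (E - D)
               + ((1.0 - kcb) * D + kcb * E) * gmin * gmax;         // (6a)
    return std::clamp(out, 0.0, S);   // clip to the valid color range
}
```

With kcb = 1 and c = lum this reduces to the grayscale formula of Equation (5a), matching the special case noted in paragraph [0070].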
Implementation Details
[0074] In a preferred embodiment of Phase One of the present invention, the various sub-window averages are computed once, and re-used eight times for the modified median filter, and re-used (2n+1)^2 times for the weighted average filter. For example, the East average relative to pixel location (i,j) is identical to the West average relative to pixel (i+2*(N-M), j), to the North average relative to pixel (i+(N-M), j-(N-M)), and to the South average relative to pixel (i+(N-M), j+(N-M)), etc., as can be seen from FIG. 1A. Thus it may be appreciated by those skilled in the art that the various sub-window averages may be stored in a sliding window and re-used.
[0075] It may further be appreciated that since a square neighborhood average is a separable convolution filter, the two-dimensional summation in Equation (1) reduces to two summations, each over 2M+1 pixel locations; namely, one sum in the vertical direction and the other sum in the horizontal direction. Moreover, a one-dimensional sliding window average is preferably computed by a moving sum. As the window slides one pixel to the right, the new pixel within the window is added into the sum, and the pixel that "slid out of" the window is subtracted.

[0076] As a result, the computation of each of the sub-window averages used in Equations (2a) - (2d) is achieved using only two additions, two subtractions and one division per pixel. The division can be replaced with a look-up table of size S*(2M+1)^2, where S is a maximum color value. For 8-bit color channels, S = 255.

[0077] Reference is now made to Appendix A, which is a detailed listing of computer source code written in the C++ programming language for implementing a median or a weighted average filter in accordance with a preferred embodiment of the present invention. It is noted that the listing in Appendix A includes an integer-arithmetic implementation, for faster performance.
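The one-dimensional moving sum described in paragraph [0075] can be sketched as follows. This is an illustrative C++ sketch only; it returns the window sums, from which averages are obtained by a single division (or table look-up) per position.

```cpp
#include <vector>

// Moving sums over a sliding window: each sum is obtained from the previous
// one by adding the sample entering the window and subtracting the sample
// that slid out, so the cost is constant per position regardless of width.
std::vector<double> movingSums(const std::vector<double>& v, int width) {
    std::vector<double> sums;
    double s = 0.0;
    for (int i = 0; i < (int)v.size(); ++i) {
        s += v[i];                           // sample entering the window
        if (i >= width) s -= v[i - width];   // sample sliding out
        if (i >= width - 1) sums.push_back(s);
    }
    return sums;
}
```

Applying such a pass horizontally and then vertically computes every (2M+1) x (2M+1) sub-window sum, exploiting the separability noted above.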
[0078] In a preferred embodiment of Phase Two of the present invention, computation of MINfilter and MAXfilter is also performed using a sliding window. It may thus be appreciated that, using the present invention, it is not necessary to store intermediate results in an intermediate buffer of the same dimensions as the source image.

[0079] For images having color channels with more than eight bits per color, the present invention preferably down-samples the images to eight bits per color prior to application of the filter. Since, as per Equation (6a) above, the original image is processed at its full dynamic range in Phase Two, the down-sampling of Phase One results in no loss of detail.
[0080] Reference is now made to Appendix B, which is a detailed listing of computer source code written in the C++ programming language for implementing color enhancement, in accordance with a preferred embodiment of the present invention. For computational efficiency, the terms gmin and gmax are preferably pre-computed for all values of MIN and MAX, and the results are stored in look-up tables. This eliminates the need for re-computing the exponential terms in Equations (6d) and (6e) as the enhanced image pixel color values are being determined.

[0081] It is further noted that the parameters khl, khc, ksl, ksc and kCB do not impact the filter calculations. As such, the currently saved filtered images can be re-used when a user adjusts these parameters. This eliminates the need for re-applying the filter.

[0082] Reference is now made to FIG. 2, which is a simplified block diagram of the essential components of a system for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention. As shown in FIG. 2, a source image is processed by a filter processor 210, to derive one or more filtered images. Full details of the operation of filter processor 210, in accordance with a preferred embodiment of the present invention, are provided in the computer source code listing in Appendix A.

[0083] As described hereinabove, if the source image is a grayscale image, Isource, then filter processor 210 preferably derives a filtered image, Ifilter. If the source image is a color image, then filter processor 210 preferably derives either (i) a filtered luminance image, Lfilter, in accordance with a first preferred embodiment of the present invention, or (ii) two filtered images, MINfilter and MAXfilter, in accordance with a second preferred embodiment. The selection of the first or second preferred embodiment is set by a user-adjustable parameter, METHOD.
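The look-up tables described in paragraph [0080] can be sketched as follows. This is an illustrative C++ sketch only; the structure name is chosen for illustration.

```cpp
#include <cmath>
#include <vector>

// Pre-computed response tables: gmin and gmax are evaluated once for every
// possible filtered value 0..S, so the exponential terms of Equations (6d)
// and (6e) are computed S+1 times instead of once per pixel.
struct ResponseLUT {
    std::vector<double> gmin, gmax;
    ResponseLUT(double S, double khl, double khc, double ksl, double ksc) {
        for (int v = 0; v <= (int)S; ++v) {
            gmin.push_back(1.0 + khl * std::pow(v / S, khc));        // (6d)
            gmax.push_back(1.0 + ksl * std::pow(1.0 - v / S, ksc));  // (6e)
        }
    }
};
```

During enhancement, the per-pixel multipliers are then obtained by indexing gmin with the MINfilter value and gmax with the MAXfilter value, with no per-pixel exponentiation.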
[0084] Filter processor 210 uses window size parameters, M and N, as described hereinabove, which are preferably adjustable by a user.
[0085] An image enhancer 220 derives an enhanced image, based on the source image and the one or more filtered images derived by filter processor 210. Image enhancer 220 includes a module 240 for applying an offset curve and highlight and shadow response curves to the filtered images, to obtain offset values and highlight and shadow multipliers, respectively. Specifically, module 240 computes the terms gmin, gmax and goffset from Equations (5b) - (5d). The highlight response curve is of the form f(x) = 1 + k(x/S)^n, where x is a color value ranging between 0 and S, and the highlight multiplier for pixel location (i,j) is obtained by applying the highlight response curve to Ifilter(i,j), as in Equation (5b). Similarly, the shadow response curve is of the form f(x) = 1 + k(1 - x/S)^n, and the shadow multiplier for pixel location (i,j) is obtained by applying the shadow response curve to Ifilter(i,j), as in Equation (5c). According to Equation (5d), the offset curve is given by S*[1 - 1/f(x)], where f(x) is the highlight response curve, and the offset value for pixel location (i,j) is obtained by applying the offset curve to Ifilter(i,j). As the highlight response curve increases from 1 to 1 + k, the offset curve increases from 0 to [k / (k+1)] * S. [0086] Image enhancer 220 also includes an offset subtractor 250, for subtracting offset values from source pixel color values, to obtain shifted color values; a highlight multiplier 260, for multiplying the shifted color values by the highlight multipliers; and a shadow multiplier 270, for further multiplying by the shadow multipliers. Together, offset subtractor 250, highlight multiplier 260 and shadow multiplier 270 carry out the operations in Equation (5a). [0087] For purposes of clarification, the operation of image enhancer 220 has been described as it applies to grayscale images, using Equations (5a) - (5d).
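The grayscale pipeline of Equations (5a) - (5d) can be sketched directly from the curve shapes just described. This is a minimal illustration, not the appendix code; the struct and function names, and the bundling of the amount/range parameters as khl, khc, ksl, ksc, are assumptions made here for readability.

```cpp
#include <cassert>
#include <cmath>

// Illustrative parameters: highlight amount/range (khl, khc) and shadow
// amount/range (ksl, ksc).  S is the full-scale value (255 for 8 bits).
struct EnhanceParams { double khl, khc, ksl, ksc; };

// Highlight response curve f(x) = 1 + k*(x/S)^n, per Equation (5b).
double HighlightMultiplier(double x, double S, const EnhanceParams& p)
{ return 1.0 + p.khl * std::pow(x / S, p.khc); }

// Shadow response curve f(x) = 1 + k*(1 - x/S)^n, per Equation (5c).
double ShadowMultiplier(double x, double S, const EnhanceParams& p)
{ return 1.0 + p.ksl * std::pow(1.0 - x / S, p.ksc); }

// Offset curve S*[1 - 1/f(x)], f being the highlight curve, per (5d).
double Offset(double x, double S, const EnhanceParams& p)
{ return S * (1.0 - 1.0 / HighlightMultiplier(x, S, p)); }

// Equation (5a): subtract the offset, apply both multipliers, clip.
// src is the source pixel value, filt the filtered value at that pixel.
double Enhance(double src, double filt, double S, const EnhanceParams& p)
{
    double v = (src - Offset(filt, S, p))
             * HighlightMultiplier(filt, S, p)
             * ShadowMultiplier(filt, S, p);
    return v < 0 ? 0 : (v > S ? S : v);  // clip to [0, S]
}
```

With both amounts set to zero every multiplier is 1 and the offset is 0, so the pixel passes through unchanged, consistent with the observation below that zero amount makes the range parameters irrelevant.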
Similar, but more complex operations are performed when image enhancer 220 is applied to color images, in carrying out Equations (6a) - (6e).
[0088] Image enhancer 220 uses parameters khl, khc, ksl, ksc and kCB, as described hereinabove, which are preferably adjustable by a user. Full details of operation of image enhancer 220, in accordance with a preferred embodiment of the present invention, are provided in the computer source code listing in Appendix B. [0089] After the enhanced image is computed, a user interface manager 230 displays the enhanced image. A user viewing the image can interactively modify values of parameters, and in turn a corresponding modified enhanced image is derived by filter processor 210 and image enhancer 220.
[0090] Reference is now made to FIG. 3, which is a simplified flowchart of the essential steps for interactively enhancing digital images, in accordance with a preferred embodiment of the present invention. At step 310 a determination is made as to whether or not a user has adjusted one or more parameter values. If not, then processing waits at step 320 and periodically repeats the determination of step 310. There are two types of parameters; namely, filter-based parameters M, N and METHOD, and enhancement-based parameters khl, khc, ksl, ksc and kCB, as described hereinabove.
[0091] If step 310 determines that one or more parameter values have been adjusted, then at step 330 a further determination is made as to whether or not any of the filter parameters have been adjusted. If so, then at step 340 a user interface sets values of the adjusted filter-based parameters, and at step 350 one or more filtered images are computed. Full details of performance of step 350, in accordance with a preferred embodiment of the present invention, are provided in the computer source code listing in Appendix A.
[0092] If step 330 determines that none of the filter parameters have been adjusted, then processing jumps to step 360, where a user interface sets values of the adjusted enhancement-based parameters. At step 370 an enhanced image is computed. [0093] Step 370 preferably includes steps 371 - 376, which derive offset values and highlight and shadow multipliers, based on Ifilter, as per Equations (5b) - (5d); and apply them to the source image Isource, as per Equation (5a). As mentioned hereinabove with respect to image enhancer 220 (FIG. 2), the steps shown in FIG. 3 correspond to grayscale image processing. Similar but more complex operations are performed when using Equations (6a) - (6e) for color images. Full details of performance of step 370, in accordance with a preferred embodiment of the present invention, are provided in the computer source code listing in Appendix B. [0094] At step 380 the enhanced image is displayed within a user interface, and a user can interactively adjust parameter values, based on the appearance of the enhanced image. Processing returns to step 310, where the system checks if the user has adjusted parameter values.
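The dispatch in steps 330 - 370 can be sketched as a simple predicate: filter-based parameters force the filtered images to be recomputed, while enhancement-based parameters reuse the cached filtered images and only recompute the enhanced image. The function name and string-keyed form are illustrative only.

```cpp
#include <cassert>
#include <string>

// Returns true if adjusting this parameter requires re-running the filter
// (step 350); false if the cached filtered images can be reused and only
// the enhancement (step 370) must be recomputed.
bool NeedsRefilter(const std::string& param)
{
    // Filter-based parameters per the text: M, N and METHOD.
    return param == "M" || param == "N" || param == "METHOD";
}
```
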
Discussion of Experimental Results
[0095] Reference is now made to FIGS. 4A - 4F, which illustrate an image before and after enhancement, an image of overexposed pixel locations for the enhanced image, and various filtered images used in the enhancement process, in accordance with a preferred embodiment of the present invention. Shown in FIG. 4A is a user interface display, which is used to view results of image enhancement and interactively modify enhancement parameters. An original digital image 405, Isource, is displayed. The original image suffers from poor illumination.
[0096] In a right panel of FIG. 4A are displayed various controls that are used to adjust parameter values. Controls 410, 415 and 420 are used to adjust shadow-related parameters. Control 410 is a slider bar for setting a value for the parameter ksl, through an intermediate value referred to as "Amount." The value of the "Amount" parameter ranges from 0 to 100, and the value of ksl ranges correspondingly from 0 to its maximum value, as described hereinbelow. The correspondence between ksl and the "Amount" parameter may be linear or non-linear, increasing or decreasing. Control 415 is a slider bar for setting a value for the parameter ksc, referred to as "Range." The value of the "Range" parameter ranges from 0 to 100, and the value of ksc ranges correspondingly from 1 to 25. The correspondence between ksc and the "Range" parameter may be linear or non-linear, increasing or decreasing. Control 420 is a slider bar, ranging from 0 to 100, for setting the window size, N, for the filter computation, as described hereinabove.
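The text allows the slider-to-parameter correspondence to be linear or non-linear; the simplest linear case for the "Range" slider can be sketched as below. The function name is illustrative, and the linear form is only one of the permitted choices.

```cpp
#include <cassert>

// Map the "Range" slider (0..100) linearly onto the exponent ksc (1..25),
// as one permitted (linear, increasing) correspondence.
double RangeToKsc(int range /* 0..100 */)
{
    return 1.0 + (25.0 - 1.0) * range / 100.0;
}
```
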
[0097] Controls 425, 430 and 435 are similar to controls 410, 415 and 420, respectively, but are used to adjust highlight-related parameters instead of shadow-related parameters.
[0098] Control 440 is used to set the color-boost parameter kCB.
Control 445, "Color Priority," is used to select either the embodiment described in Section 1.1.2 above, or the embodiment described in Section 1.1.3 above, for performing image enhancement for a color image; i.e., to select between using a filtered luminance image and using filtered minimum and maximum images, as described hereinabove. Control 450 is used to select the type of filter, from among the list:
Median - in accordance with Section 1.1 above;
Sigma - in accordance with method 1 for assigning weights in Section 1.2 above;
Gradient Inverse - in accordance with method 2 for assigning weights in Section 1.2 above;
SMCM - in accordance with method 3 for assigning weights in Section 1.2 above; and
SUSAN - in accordance with method 4 for assigning weights in Section 1.2 above.
[0099] As shown in FIG. 4A, the default values of the parameter settings are: Amount = 50, Range = 60 and Detail = 100 for the shadow-related parameters; Amount = 0, Range = 70 and Detail = 100 for the highlight-related parameters; Color Boost = 70 and Color Priority is set, indicating the embodiment that uses filtered minimum and maximum images. The default choice of filter is "SMCM." It can be seen from Equations (6d) and (6e) that when the amount parameters are set to zero, so that ksl = khl = 0, the values of the range parameters have no effect on the enhancement.
[00100] Preferably, a user can adjust controls 410 - 450 and view the corresponding enhanced image interactively. Shown in FIG. 4B is an enhanced image 455, Ienhanced, having much better illumination and showing details that were not visible in the original image 405, Isource. As seen in the figure, the user-adjusted values of the parameter settings are: Amount = 100, Range = 27 and Detail = 100 for the shadow-related parameters; Amount = 100, Range = 40 and Detail = 100 for the highlight-related parameters; Color Boost = 70 and Color Priority is set, indicating the embodiment that uses filtered minimum and maximum images. The choice of filter is "Gradient Inverse."
[00101] Also shown in FIG. 4B is a response curve 460, displaying the product f(n) = [n - goffset(n)]*gmin(n)*gmax(n), as a function of color values, n, between 0 and S. The response curve is useful for visualizing the effect of the local contrast enhancement on the shadows (low values of n) and on the highlights (high values of n). The highlight factor, gmin(n), increases from 1 to 1 + khl as n ranges from 0 to S; the shadows factor, gmax(n), decreases from 1 + ksl to 1 over the same range; and the shapes of these curves change from linear, when the exponents khc and ksc are zero, to sub-linear when these exponents are positive. As such, the response curve 460 exhibits a shape like that of a cubic function of n. As can be seen from FIG. 4B, the response curve crosses the 45° line, where f(n) = n, three times: at n = 0, at n = S, and at an intermediate value of n.
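The displayed response curve can be sketched from the curve definitions given earlier. This is a sketch assuming gmin(n) = 1 + khl*(n/S)^khc, gmax(n) = 1 + ksl*(1 - n/S)^ksc, and goffset(n) = S*[1 - 1/gmin(n)], as described in paragraph [0085]; the fixed points at n = 0 and n = S follow algebraically.

```cpp
#include <cassert>
#include <cmath>

// Response curve f(n) = [n - goffset(n)]*gmin(n)*gmax(n) of FIG. 4B.
double ResponseCurve(double n, double S,
                     double khl, double khc, double ksl, double ksc)
{
    double gmin = 1.0 + khl * std::pow(n / S, khc);        // highlight factor
    double gmax = 1.0 + ksl * std::pow(1.0 - n / S, ksc);  // shadows factor
    double goffset = S * (1.0 - 1.0 / gmin);               // offset curve
    return (n - goffset) * gmin * gmax;
}
```

At n = 0 the offset and highlight terms vanish, so f(0) = 0; at n = S the offset S*khl/(1+khl) cancels the highlight gain exactly, so f(S) = S; shadow values in between are lifted above the 45° line.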
[00102] A control 465, "Exposure Warning," is used to display pixel locations for which pixel color values as determined by Equations (6a) - (6d) were out of range, and had to be clipped to 0 or to S. Shown in FIG. 4C is a visualization of the out of range pixel locations.
[00103] FIGS. 4D and 4E show the filtered minimum and maximum images, MINfilter and MAXfilter, as described in Section 1.1.3 hereinabove, used in deriving the enhanced image shown in FIG. 4B. FIG. 4F shows the luminance image Lfilter, which is used when following the embodiment described in Section 1.1.2 hereinabove. It is noted that control 445 is checked in FIGS. 4D and 4E, indicating use of the Section 1.1.3 embodiment, and that control 445 is un-checked in FIG. 4F, indicating use of the Section 1.1.2 embodiment.
[00104] Studies of results have shown that the present invention suppresses details well in a source image, when such details are smaller than the radius of the filter window. The advantage of suppression of details derives from the use of filtered images to drive enhancement of the source image. The presence of detail in the filtered images suppresses the corresponding detail in the enhancement process, since dark areas are made relatively lighter and light areas are made relatively darker, where "lightness" and "darkness" of an area are determined by the filtered images. In enhancing the image, the present invention stretches contrast of small details, without stretching contrast of large details. This controlled stretching occurs when the filtered images (i) suppress smaller, high-contrast details, by amplitude reduction or blurring; and (ii) leave larger, high-contrast details intact, neither reducing them in amplitude nor blurring them. Blurring of large details in filtered images causes visible "halo" effects in the output image, which is a significant drawback in prior art systems, such as Adobe Photoshop®.
[00105] When larger details are present in the source image, the filter may exhibit insensitivity to small detail in transition zones between large bright areas and large dark areas, and such detail passes through the filter without much attenuation. Specifically, the transition zone is an area with width equal to the radius of the filter window and oriented along boundary contours between the dark and light areas.
[00106] Through empirical testing it was found that iterative application of the median filter with successively larger values of a filter radius, r0, r1, r2, ..., serves to resolve the above insensitivity. It was also found that, for large values of window radius, starting with an initial radius r0 less than 10 gives negligible improvement. Thus, in practice, the sequence of radii used is N, N/2, N/4, ..., terminating when a radius, N/2^(k+1), less than 10 is reached. Iterative application of the median filter then proceeds from the smallest radius, N/2^k, to the largest radius, N. For example, when the median filter radius is 128, the successive values used for r0, r1, r2, ... are 16, 32, 64 and 128. [00107] Similarly, for the weighted average filters, the value of N is increased by a factor of five at each iteration, starting with a minimum value of N = 2.
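The radius schedule just described can be sketched as follows; the function name is illustrative, and the guard for N below 10 (where the halving sequence would be empty) is an assumption added here.

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Build the median-filter radius schedule: halve N until the next radius
// would drop below 10, then apply the filter from the smallest radius up
// to N.  For N = 128 this yields 16, 32, 64, 128.
std::vector<int> MedianRadiusSchedule(int N)
{
    std::vector<int> radii;
    for (int r = N; r >= 10; r /= 2)
        radii.push_back(r);
    std::reverse(radii.begin(), radii.end());  // smallest radius first
    if (radii.empty())
        radii.push_back(N);  // illustrative guard for very small N
    return radii;
}
```
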
[00108] Through empirical testing it was also found that for a given value of N, an optimal value for M is N/2. The parameter N may be user-specified. A default value of N is set to min(width, height)/2, where width and height are the pixel dimensions of the source image, and a user can adjust the value of N to between 0% and 100% of this default value. As such, the filter window can be as large as the entire image. It is noted that this choice for the maximum value of N makes the relative setting of this parameter practically resolution-independent. Thus, the optimal relative setting for N is practically the same, say, for a one mega-pixel image as for a six mega-pixel image.
[00109] It has further been found through experimentation that the enhancement algorithm of the present invention is essentially invariant to scale. Specifically, enhancing, say, a six mega-pixel image and then sub-sampling to a one mega-pixel image produces an image that is nearly identical to the image produced by first sub-sampling and then enhancing. Such invariance to scale is an important advantage of the present invention, since enhancement can be performed on a sub-sampled image used for previewing while a user is adjusting enhancement parameters. When the user then commits the parameters, for example, by clicking on an "Apply" button, the full-resolution image can be enhanced, and the resulting enhanced image will appear as the user expects. [00110] Through empirical testing, it has been found that the setting b = 2 and n = 2 in the sub-window configurations for the weighted average filter represents a good trade-off between computational complexity and accuracy when enhancing typical digital photo resolutions on a typical personal computer. [00111] Regarding choices for the parameters khl, khc, ksl and ksc, results have shown that a good approach is to let the maximum value of khl depend on the value of khc, and similarly to let the maximum value of ksl depend on the value of ksc. In accordance with a preferred embodiment of the present invention, such maximum values are determined as follows:
1. Determine the largest highlights multiplier for a given value of khc; i.e., the largest value which, when multiplied by any color value, I, gives a product I*gmin(I) no smaller than 0. It can be seen from Equation (5b) that such value corresponds to the minimum of [equation image not reproduced] over the range of values for I between 1 and 255. Similarly, determine the largest shadow multiplier for a given value of ksc; i.e., the largest value which, when multiplied by any color value, I, gives a product I*gmax(I) no larger than 255. It can be seen from Equation (5c) that such value corresponds to the minimum of [equation image not reproduced] over the range of values for I between 1 and 255.
2. For shadows, use min(256, (I+30)*2.5) as the maximum value, instead of 255. This adjustment serves to limit the maximum multiplier when the value of ksc is low. For example, multiplying a shadow area by a factor of 10 raises the average luminance of such area to 200, which is still within the dynamic range, but yields poor results due to posterization and noise.
3. For the shadows case, levels above 200 are not checked. This adjustment serves to allow a certain amount of clipping when ksc is high. Otherwise, the maximum value of ksl may be set so low as to eliminate the effect of contrast enhancement. For example, if ksc is set to 100, then the response curve is flat, and all areas of the image are multiplied by the same multiplier. The subject adjustment enables a user to modify an image to bring out some amount of clipping. Preferably, an analogous adjustment is made for highlights for levels below 50.
4. The user-specified parameter within the graphical user interface is a percentage between 0 and 100, which is multiplied by the largest values of khl and ksl as determined in steps 1 - 3 above.
The listing in Appendix B provides sample source code for determination of parameters khl and ksl in accordance with steps 1 - 4 above.
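Steps 1 - 3 can be sketched for the shadows case as follows. The exact bound expressions appear only as equation images in the original, so this is a reconstruction under the stated assumptions: gmax(I) = 1 + ksl*(1 - I/S)^ksc, the constraint I*gmax(I) <= limit(I) with limit(I) = min(256, (I+30)*2.5) per step 2, and levels above 200 excluded per step 3. The function name is illustrative.

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Largest shadow amount ksl for a given ksc: solving
// I*(1 + ksl*(1 - I/S)^ksc) <= limit(I) for ksl gives the per-level bound
// (limit(I)/I - 1) / (1 - I/S)^ksc; the maximum ksl is the minimum of
// that bound over I = 1..200.
double MaxShadowAmount(double ksc, double S = 255.0)
{
    double kslMax = 1e30;
    for (int I = 1; I <= 200; ++I)
    {
        double limit = std::min(256.0, (I + 30) * 2.5);   // step 2
        double bound = (limit / I - 1.0) / std::pow(1.0 - I / S, ksc);
        kslMax = std::min(kslMax, bound);
    }
    return kslMax;
}
```

The user's "Amount" percentage (step 4) would then scale this maximum down to the ksl actually applied.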
[00112] Regarding the parameters k1 and k2 used in determining the weights for the Gradient Inverse filter, as described with respect to Method 2 in Section 1.2 above, it has been found that a choice of k1 = 120 and k2 = 6 produces good results. Increasing k2 causes the filter to better preserve extreme local values. In general, a higher exponential rolloff rate tends to better preserve local extremes while simultaneously removing low-amplitude texture.
Discussion of Appendices
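The Gradient Inverse weight can be sketched from the IP_LMM_METHOD_GIWO branch of IP_HybridWeightedAverageFilter() in Appendix A, where the weight falls off as pow(1 + |d|/divisor, -exponent). Mapping k1 to the divisor and k2 to the exponent is an assumption made here; the listing uses the names nGIWO_DivFac and dGIWO_exp.

```cpp
#include <cassert>
#include <cmath>
#include <cstdlib>

// Gradient Inverse weight of a neighbor whose value differs from the
// center pixel by d: w = (1 + |d|/k1)^(-k2).  k1 = 120, k2 = 6 are the
// values reported to work well.
double GradientInverseWeight(int d, double k1 = 120.0, double k2 = 6.0)
{
    return std::pow(1.0 + std::abs(d) / k1, -k2);
}
```

A larger k2 steepens the rolloff, so neighbors that differ strongly from the center contribute almost nothing, which is how local extremes are preserved.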
[00113] Reference is now made to FIG. 5, which is a flow diagram of the principal software methods in the source code listed in Appendices A and B, in accordance with a preferred embodiment of the present invention. The listing in Appendices A and B includes line numbers within methods, for ease of reference. [00114] The main method Run(), listed in Appendix B, calls ApplyFilter() at lines 22 - 26, which performs local contrast enhancement in accordance with a preferred embodiment of the present invention, and returns a bitmap pDIBDst. [00115] The method ApplyFilter(), listed in Appendix B, carries out Equations (6a) - (6e) hereinabove. At lines 24 and 25, ApplyFilter() calls CreateMinMaxDIBs() to generate the filtered images MINfilter and MAXfilter, which are arrays accessed by member pointers m_pDIBMin and m_pDIBMax, respectively. The highlight and shadow multipliers gmin and gmax are tabulated and stored in look-up tables dFacLUTH[] and dfacLUTS[], respectively. The color boost parameter, kCB, is stored in the variable dCBColor. Lines 141 - 171 correspond to Equations (6a) - (6c) for 24-bit color images. Other code sections in ApplyFilter() correspond to 8-bit, 16-bit and 64-bit color images.
[00116] The method CreateMinMaxDIBs(), listed in Appendix B, calls IP_LocalMinMax() at lines 65 - 67, which is the main method used to generate the filtered images MINfilter and MAXfilter, in accordance with a preferred embodiment of the present invention. CreateMinMaxDIBs() also generates the source luminance image, Lsource, by calling GetPixelVal() at line 27 and at line 47. Lsource is an array accessed by a member pointer m_pDIBY. [00117] The method IP_LocalMinMax(), listed in Appendix A, generates the filtered images MINfilter and MAXfilter, and stores the results in arrays accessed by pointers pBitsMin and pBitsMax, respectively. The parameter pColorPriority determines whether to filter the luminance source image, as described in Sec. 1.1.2 hereinabove, or else to filter the maximum and minimum source images, as described in Sec. 1.1.3 hereinabove. Correspondingly, at lines 50 and 51 GetMinMaxImages() is called, and at lines 59 and 60 GetLuminanceImage() is called. The parameter iMethod determines whether to use a median filter, as described in Sec. 1.1 hereinabove, or a weighted average filter, as described in Sec. 1.2 hereinabove. Correspondingly, at lines 89 and 90 the method IP_HybridMedianFilter() is called, and at lines 92 and 93 the method IP_HybridWeightedAverageFilter() is called, for computing MINfilter. Similarly, at lines 125 and 126 the method IP_HybridMedianFilter() is called, and at lines 128 and 129 the method IP_HybridWeightedAverageFilter() is called, for computing MAXfilter.
[00118] The method IP_HybridMedianFilter(), listed in Appendix A, carries out Equations (2a) - (2g) hereinabove. At lines 52 and 53, the method ComputeHorizontalAverages() is called, to compute various sub-window averages, as described hereinbelow, and at lines 84 - 91 the appropriate averages are stored in arrays pAveWindowWest, pAveWindowEast, etc. The Equations (3a) - (3c) are carried out at lines 101 - 121, using the method opt_med3() to compute the median of three numbers. [00119] The method IP_HybridWeightedAverageFilter(), listed in Appendix A, carries out Equation (4) hereinabove. The parameter iMethod is used to determine the weights that are used, in accordance with methods 1 - 4 described hereinabove in Sec. 1.2, as can be seen in lines 53 - 79. The weights are tabulated in an array pnWeights[]. For method 4, the weights are modified at lines 99 and 100, to incorporate the multiplication by exp(-r^2/k^2). At lines 110 and 111 the method ComputeHorizontalAverages() is called, to compute various sub-window averages, as described hereinbelow. The weighted average in Equation (4) is computed at lines 159 - 166 and lines 183 - 190.
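The method 4 (SUSAN) weight can be sketched from the pnDistWeights table built in IP_HybridWeightedAverageFilter(): each neighbor is weighted jointly by its spatial distance from the center and its brightness difference. The parameter names s and t below stand in for the listing's dSUSAN_s and nSUSAN_t; treating them as free scales is an assumption made here.

```cpp
#include <cassert>
#include <cmath>

// SUSAN-style weight: r is the spatial distance of the neighbor from the
// center pixel, d its brightness difference, s the spatial scale and t
// the brightness scale; w = exp(-r^2/(2*s^2) - d^2/t^2), matching the
// exponent built into the pnDistWeights table in the listing.
double SusanWeight(double r, double d, double s, double t)
{
    return std::exp(-(r * r) / (2.0 * s * s) - (d * d) / (t * t));
}
```
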
[00120] The method GetMinMaxImages(), listed in Appendix A, computes the source minimum and maximum images, MINsource and MAXsource, using the methods GetPixelMin() and GetPixelMax(), respectively. Similarly, the method GetLuminanceImage(), listed in Appendix B, computes the luminance source image, Lsource, using the method GetPixelVal().
[00121] The method ComputeHorizontalAverages(), listed in Appendix A, computes one-dimensional horizontal (2M + 1) x 1 sub-window averages. These horizontal averages are then averaged vertically at lines 61 - 76 of method IP_HybridMedianFilter() and lines 119 - 134 of method IP_HybridWeightedAverageFilter(), to derive the two-dimensional (2M + 1) x (2M + 1) sub-window averages.
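The separable averaging just described can be sketched as below: average each row over a horizontal window, then average those results vertically. This is a naive illustration of the decomposition only; the appendix code instead maintains running sums for efficiency. Border pixels here simply average over the samples available, as the listing does while the window is not yet full.

```cpp
#include <cassert>
#include <vector>

// (2M+1) x (2M+1) box average via two one-dimensional passes.
std::vector<std::vector<double>> BoxAverage(
    const std::vector<std::vector<double>>& img, int M)
{
    int h = img.size(), w = img[0].size();
    // Pass 1: horizontal (2M+1) x 1 averages.
    std::vector<std::vector<double>> horiz(h, std::vector<double>(w));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sum = 0; int cnt = 0;
            for (int k = -M; k <= M; ++k)
                if (x + k >= 0 && x + k < w) { sum += img[y][x + k]; ++cnt; }
            horiz[y][x] = sum / cnt;
        }
    // Pass 2: average the horizontal results vertically.
    std::vector<std::vector<double>> out(h, std::vector<double>(w));
    for (int y = 0; y < h; ++y)
        for (int x = 0; x < w; ++x) {
            double sum = 0; int cnt = 0;
            for (int k = -M; k <= M; ++k)
                if (y + k >= 0 && y + k < h) { sum += horiz[y + k][x]; ++cnt; }
            out[y][x] = sum / cnt;
        }
    return out;
}
```
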
[00122] The method CreateResponseCurves(), listed in Appendix B, computes the response curves corresponding to gmin and gmax. Lines 6 - 22 correspond to determination of maximum values for khl and ksl, based upon values of khc and ksc, respectively, as described hereinabove. [00123] In the foregoing specification, the invention has been described with reference to specific exemplary embodiments thereof. It will, however, be evident that various modifications and changes may be made to the specific exemplary embodiments without departing from the broader spirit and scope of the invention as set forth in the appended claims. Accordingly, the specification and drawings are to be regarded in an illustrative rather than a restrictive sense.
APPENDIX A

#include "StdAfx.h"
#include "IP_LocalMinMax.h"

inline BYTE GetPixelMin (const BYTE* pPixel)
{
    BYTE bMin = pPixel[0];
    if (pPixel[1] < bMin) bMin = pPixel[1];
    if (pPixel[2] < bMin) bMin = pPixel[2];
    return bMin;
}

inline BYTE GetPixelMin (const WORD* pPixel)
{
    WORD wMin = pPixel[0];
    if (pPixel[1] < wMin) wMin = pPixel[1];
    if (pPixel[2] < wMin) wMin = pPixel[2];
    return wMin >> 8;
}

inline BYTE GetPixelMax (const BYTE* pPixel)
{
    BYTE bMax = pPixel[0];
    if (pPixel[1] > bMax) bMax = pPixel[1];
    if (pPixel[2] > bMax) bMax = pPixel[2];
    return bMax;
}

inline BYTE GetPixelMax (const WORD* pPixel)
{
    WORD wMax = pPixel[0];
    if (pPixel[1] > wMax) wMax = pPixel[1];
    if (pPixel[2] > wMax) wMax = pPixel[2];
    return wMax >> 8;
}

inline BYTE GetPixelVal (const BYTE* pPixel)
{
    return (pPixel[0]*114 + pPixel[1]*587 + pPixel[2]*299 + 500) / 1000;
}

inline BYTE GetPixelVal (const WORD* pPixel)
{
    return (pPixel[0]*299 + pPixel[1]*587 + pPixel[2]*114) / (1000 * 256);
}

inline int BytesPerRow (int w, int bpp)
{
    return ((w * bpp + 31) >> 3) & ~3;
}

#define PIX_SORT(a, b) { if ((a) > (b)) PIX_SWAP((a), (b)); }
#define PIX_SWAP(a, b) { BYTE temp = (a); (a) = (b); (b) = temp; }

inline BYTE opt_med3 (BYTE* p)
{
    PIX_SORT(p[0], p[1]); PIX_SORT(p[1], p[2]); PIX_SORT(p[0], p[1]);
    return p[1];
}
static void ComputeHorizontalAverages (const BYTE* pSrc, BYTE* pDst,
    int nCnt, int nRadius, const BYTE* lutAveDiv, BOOL f16BPS)
{
    int nSum = 0;
    int nSumCnt = 0;
    int iDiam = 2*nRadius + 1;
    int iOut = -iDiam;
    int iIn = 0;
    const BYTE* pSrcIn = pSrc + iIn;
    const BYTE* pSrcOut = pSrc + iOut;
    pDst -= nRadius;
    if (f16BPS)
    {
        pSrcIn++;
        pSrcOut += iOut + 1;
    }
    for (int i=-nRadius ; i<nCnt ; i++)
    {
        if (iOut >= 0)
        {
            nSum -= *pSrcOut;
            nSumCnt--;
        }
        if (iIn < nCnt)
        {
            nSum += *pSrcIn;
            nSumCnt++;
        }
        if (i>=0)
        {
            if (nSumCnt == iDiam)
                *pDst = lutAveDiv[nSum];
            else
                *pDst = nSum / nSumCnt;
        }
        iIn++; iOut++;
        pSrcOut++; pSrcIn++;
        if (f16BPS)
        {
            pSrcIn++; pSrcOut++;
        }
        pDst++;
    }
}

BOOL IP_HybridMedianFilter (
    const BITMAPINFO* pBMISrc,
    const void* pBitsSrc,
    void* pBitsDst,
    int iWinRadius,
    IP_CallbackFN pCBFunc,
    void* pCBParam)
{
    SIZE si = { pBMISrc->bmiHeader.biWidth, pBMISrc->bmiHeader.biHeight };
    int nRowBytesSrc = BytesPerRow (pBMISrc->bmiHeader.biWidth,
        pBMISrc->bmiHeader.biBitCount);
    int nRowBytesDst = BytesPerRow (si.cx, 8);
    const int iWinHeight = iWinRadius * 2 + 1;
    const int iAveRadius = iWinRadius/2;
    const int iAveDiam = 2 * iAveRadius + 1;
    const int iAveWinHeight = iWinHeight - 2*iAveRadius;
    const int iRadiusDelta = iWinRadius - iAveRadius;
    BYTE* lutAveDiv = new BYTE [iAveDiam*256];
    const int iAveDiam2 = (iAveDiam+1) / 2;
    const int iMaxSum = iAveDiam * 256;
    for (int i=0 ; i<iAveDiam*256 ; i++)
    {
        lutAveDiv[i] = min(255, (i + iAveDiam2) / iAveDiam);
    }
    int* lutWestOffsets = new int [si.cx];
    int* lutEastOffsets = new int [si.cx];
    for (int x=0 ; x<si.cx ; x++)
    {
        lutWestOffsets[x] = (x > iRadiusDelta) ? x : iRadiusDelta;
        lutEastOffsets[x] = (x < (si.cx-1-iRadiusDelta)) ? x :
            si.cx - 1 - iRadiusDelta;
    }
    BYTE** pHorizAveWindow = new BYTE* [iAveDiam];
    for (int i=0 ; i<iAveDiam ; i++)
    {
        pHorizAveWindow[i] = new BYTE [si.cx];
    }
    int* pCurRowSums = new int [si.cx];
    ZeroMemory (pCurRowSums, sizeof(int) * si.cx);
    BYTE** pAveWindow = new BYTE* [iAveWinHeight];
    for (int i=0 ; i<iAveWinHeight ; i++)
    {
        pAveWindow[i] = new BYTE [si.cx];
    }
    int nSumRows = 0;
    BOOL fRes = TRUE;
    for (int y=0 ; y<si.cy+iWinRadius ; y++)
    {
        if (y < si.cy)
        {
            BYTE* pSrc = (BYTE*)pBitsSrc + y * nRowBytesSrc;
            ComputeHorizontalAverages (pSrc, pHorizAveWindow[iAveDiam-1],
                si.cx, iAveRadius, lutAveDiv,
                (pBMISrc->bmiHeader.biBitCount == 16));
            BYTE* pHorizAvePtr = pHorizAveWindow[iAveDiam-1];
            for (x=0 ; x<si.cx ; x++)
            {
                pCurRowSums[x] += *pHorizAvePtr++;
            }
            nSumRows++;
        }
        if (nSumRows)
        {
            BYTE* pAveDst = pAveWindow[iAveWinHeight-1];
            int* pAveSum = pCurRowSums;
            for (int x=0 ; x<si.cx ; x++)
            {
                if (nSumRows == iAveDiam)
                {
                    *pAveDst++ = lutAveDiv[*pAveSum++];
                }
                else
                {
                    *pAveDst++ = *pAveSum++ / nSumRows;
                }
            }
        }
        int yOutRow = y - iWinRadius;
        if (yOutRow >= 0)
        {
            BYTE* pSrcRow = (BYTE*)pBitsSrc + yOutRow * nRowBytesSrc;
            BYTE* pDstRow = (BYTE*)pBitsDst + yOutRow * nRowBytesDst;
            int yTop = max(0, iRadiusDelta-yOutRow);
            int yBottom = min(iAveWinHeight-1, si.cy-yOutRow);
            BYTE* pAveWindowNorth = &pAveWindow[yTop][0];
            BYTE* pAveWindowSouth = &pAveWindow[yBottom][0];
            BYTE* pAveWindowWest = &pAveWindow[iRadiusDelta][-iRadiusDelta];
            BYTE* pAveWindowEast = &pAveWindow[iRadiusDelta][iRadiusDelta];
            BYTE* pAveWindowNorthWest = &pAveWindowNorth[-iRadiusDelta];
            BYTE* pAveWindowNorthEast = &pAveWindowNorth[iRadiusDelta];
            BYTE* pAveWindowSouthWest = &pAveWindowSouth[-iRadiusDelta];
            BYTE* pAveWindowSouthEast = &pAveWindowSouth[iRadiusDelta];
            BYTE m1[3];
            BYTE m2[3];
            BYTE m3[3];
            for (int x=0 ; x<si.cx ; x++)
            {
                int wx = lutWestOffsets[x];
                int ex = lutEastOffsets[x];
                BYTE ctr = (pBMISrc->bmiHeader.biBitCount == 16) ?
                    pSrcRow[(x<<1)+1] : pSrcRow[x];
                m1[0] = pAveWindowNorth[x];
                m1[1] = pAveWindowSouth[x];
                m1[2] = ctr;
                m2[0] = opt_med3(m1);
                m1[0] = pAveWindowWest[wx];
                m1[1] = pAveWindowEast[ex];
                m1[2] = ctr;
                m2[1] = opt_med3(m1);
                m2[2] = ctr;
                m3[0] = opt_med3(m2);
                m1[0] = pAveWindowNorthWest[wx];
                m1[1] = pAveWindowSouthEast[ex];
                m1[2] = ctr;
                m2[0] = opt_med3(m1);
                m1[0] = ctr;
                m1[1] = pAveWindowNorthEast[ex];
                m1[2] = pAveWindowSouthWest[wx];
                m2[1] = opt_med3(m1);
                m2[2] = ctr;
                m3[1] = opt_med3(m2);
                m3[2] = ctr;
                pDstRow[x] = opt_med3(m3);
            }
127     }
128 if (y >= iAveDiam-1)
129 {
130 int* pAveSum = pCurRowSums;
131 BYTE* pAve = pHorizAveWindow [0 ];
132 for (int x=0 ; x<si.cx ; x++)
133 {
134 *pAveSum++ -= *pAve++;
135 }
136 nSumRows - - ;
137 }
138 BYTE* pHorizAveWindowTmp = pHorizAveWindow [0] ;
139 for (int i=l ; i<iAveDiam ; i++)
140 {
141 pHorizAveWindow [i - 1] = pHorizAveWindow [i] ;
142 }
143 pHorizAveWindow [iAveDiam- 1] = pHorizAveWindowTmp ;
144 BYTE* pAveWindowTmp = pAveWindow [0] ;
145 for ( int i=l ; i<iAveWinHeight ; i++)
146 {
147 pAveWindow [i - 1] = pAveWindow [i] ;
148 }
149 pAveWindow [iAveWinHeight-1] = pAveWindowTmp;
150 if (pCBFunc)
151 {
152 if (! pCBFunc (pCBParam, y, si.cy))
153 {
154 fRes = FALSE;
155 break;
156 }
157 }
158 }
159 for (int i=0 ; i<iAveDiam ; i++)
160 {
161 delete [] pHorizAveWindow [i] ;
162 }
163 for (int i=0 ; i<iAveWinHeight ; i++)
164 {
165 delete [] pAveWindow [i] ;
166 }
167 delete [] lutAveDiv;
168 delete [] lutWestOffsets ;
169 delete [] lutEastOffsets ;
170 delete [] pHorizAveWindow;
171 delete [] pCurRowSums ;
172 delete [] pAveWindow;
173 return fRes;
174 }
1 BOOL IP_HybridWeightedAverageFilter (
2 const BITMAPINFO* pBMISrc,
3 const void* pBitsSrc,
4 void* pBitsDst,
5 int iWinRadius ,
6 int iMethod,
7 IP_CallbackFN pCBFunc,
8 void* pCBParam)
{
    SIZE si = { pBMISrc->bmiHeader.biWidth, pBMISrc->bmiHeader.biHeight };
    int nRowBytesSrc = BytesPerRow(pBMISrc->bmiHeader.biWidth, pBMISrc->bmiHeader.biBitCount);
    int nRowBytesDst = BytesPerRow(si.cx, 8);
    const int iWinHeight = iWinRadius * 2 + 1;
    const int iAveRadius = iWinRadius / raFac;
    const int iAveDiam = 2 * iAveRadius + 1;
    const int iAveWinHeight = iWinHeight - 2 * iAveRadius;
    const int iRadiusDelta = iWinRadius - iAveRadius;
    const int iDeltaStep = max(1, (2 * iRadiusDelta + 1) / dFac);
    const int iNumSteps = 2 * iRadiusDelta / iDeltaStep + 1;
    const int iDeltaOffset = iRadiusDelta % iDeltaStep;
    BYTE* lutAveDiv = new BYTE[iAveDiam * 256];
    const int iAveDiam2 = (iAveDiam + 1) / 2;
    const int iMaxSum = iAveDiam * 256;
    for (int i = 0; i < iAveDiam * 256; i++)
    {
        lutAveDiv[i] = min(255, (i + iAveDiam2) / iAveDiam);
    }
    BYTE** pHorizAveWindow = new BYTE*[iAveDiam];
    for (int i = 0; i < iAveDiam; i++)
    {
        pHorizAveWindow[i] = new BYTE[si.cx];
    }
    int* pCurRowSums = new int[si.cx];
    ZeroMemory(pCurRowSums, sizeof(int) * si.cx);
    BYTE** pAveWindow = new BYTE*[iAveWinHeight];
    for (int i = 0; i < iAveWinHeight; i++)
    {
        pAveWindow[i] = new BYTE[si.cx];
    }
    int* pClampX = new int[si.cx + 2 * iRadiusDelta + 1];
    int* pLUTClampX = pClampX + iRadiusDelta;
    for (int x = -iRadiusDelta; x <= si.cx + iRadiusDelta; x++)
        pLUTClampX[x] = min(si.cx - 1, max(0, x));
    int* pLUTClampY = new int[iAveWinHeight];
    int nSumRows = 0;
    BOOL fRes = TRUE;
    UINT anWeights[512];
    UINT* pnWeights = anWeights + 256;
    const int nWeightShift = 12;
    const UINT k = 1 << nWeightShift;
    const double e = 2.71828182845904523536;
    for (int i = -256; i < 256; i++)
    {
        switch (iMethod)
        {
        case IP_LMM_METHOD_SIGMA:
        {
            pnWeights[i] = (abs(i) <= nSigma_Sigma);
            break;
        }
        case IP_LMM_METHOD_GIWO:
        {
            if (i == 0)
                pnWeights[i] = (1 << nWeightShift);
            else
                pnWeights[i] = UINT((1 << nWeightShift) *
                    pow(1.0 + double(abs(i)) / nGIWO_DivFac, -dGIWO_exp));
            break;
        }
        case IP_LMM_METHOD_SMCM:
        {
            const double e = dSMCM_e;
            const double t2 = nSMCM_t * nSMCM_t;
            pnWeights[i] = UINT(k * pow(e, -double(i * i) / t2));
            break;
        }
        }
    }
    UINT** pnDistWeights = NULL;
    if (iMethod == IP_LMM_METHOD_SUSAN)
    {
        const double e = dSUSAN_e;
        const double s2 = dSUSAN_s * dSUSAN_s;
        const double t2 = nSUSAN_t * nSUSAN_t;
        pnDistWeights = new UINT*[iNumSteps * iNumSteps];
        for (int dy = 0; dy < iNumSteps; dy++)
        {
            for (int dx = 0; dx < iNumSteps; dx++)
            {
                int idx = dy * iNumSteps + dx;
                pnDistWeights[idx] = new UINT[512];
                int x = iDeltaOffset - iRadiusDelta + (dx * iDeltaStep);
                int y = iDeltaOffset - iRadiusDelta + (dy * iDeltaStep);
                double dDist2 = double(x * x + y * y) / double(iDeltaStep * iDeltaStep);
                for (int i = -256; i < 256; i++)
                {
                    pnDistWeights[idx][i + 256] =
                        UINT(k * pow(e, -dDist2 / (2 * s2) - double(i * i) / t2));
                }
            }
        }
    }
    for (int y = 0; y < si.cy + iWinRadius; y++)
    {
        if (y < si.cy)
        {
            BYTE* pSrc = (BYTE*)pBitsSrc + y * nRowBytesSrc;
            ComputeHorizontalAverages(pSrc, pHorizAveWindow[iAveDiam - 1], si.cx,
                iAveRadius, lutAveDiv, (pBMISrc->bmiHeader.biBitCount == 16));
            BYTE* pHorizAvePtr = pHorizAveWindow[iAveDiam - 1];
            for (int x = 0; x < si.cx; x++)
            {
                pCurRowSums[x] += *pHorizAvePtr++;
            }
            nSumRows++;
        }
        if (nSumRows)
        {
            BYTE* pAveDst = pAveWindow[iAveWinHeight - 1];
            int* pAveSum = pCurRowSums;
            for (int x = 0; x < si.cx; x++)
            {
                if (nSumRows == iAveDiam)
                {
                    *pAveDst++ = lutAveDiv[*pAveSum++];
                }
                else
                {
                    *pAveDst++ = *pAveSum++ / nSumRows;
                }
            }
        }
        int yOutRow = y - iWinRadius;
        if (yOutRow >= 0)
        {
            BYTE* pSrcRow = (BYTE*)pBitsSrc + yOutRow * nRowBytesSrc;
            BYTE* pDstRow = (BYTE*)pBitsDst + yOutRow * nRowBytesDst;
            int y0 = max(0, iRadiusDelta - yOutRow);
            int y1 = iAveWinHeight - 1 + min(0, si.cy - 1 - yOutRow - iRadiusDelta);
            for (int iy = 0; iy < iAveWinHeight; iy++)
                pLUTClampY[iy] = max(y0, min(y1, iy));
            if (iMethod == IP_LMM_METHOD_SUSAN)
            {
                for (int x = 0; x < si.cx; x++)
                {
                    int x0 = x - iRadiusDelta + iDeltaOffset;
                    int x1 = x + iRadiusDelta;
                    BYTE ctr = (pBMISrc->bmiHeader.biBitCount == 16) ?
                        pSrcRow[(x << 1) + 1] : pSrcRow[x];
                    UINT uWeightSum = (1 << nWeightShift);
                    UINT uSum = ctr * uWeightSum;
                    int wIdx = 0;
                    for (int iy = iDeltaOffset; iy < iAveWinHeight; iy += iDeltaStep)
                    {
                        BYTE* pAve = pAveWindow[pLUTClampY[iy]];
                        for (int ix = x0; ix <= x1; ix += iDeltaStep)
                        {
                            UINT uValWin = pAve[pLUTClampX[ix]];
                            UINT uWeight = pnDistWeights[wIdx++][uValWin - ctr + 256];
                            uWeightSum += uWeight;
                            uSum += uValWin * uWeight;
                        }
                    }
                    pDstRow[x] = (uSum + (uWeightSum >> 1)) / uWeightSum;
                }
            }
            else
            {
                for (int x = 0; x < si.cx; x++)
                {
                    int x0 = x - iRadiusDelta;
                    int x1 = x + iRadiusDelta;
                    BYTE ctr = (pBMISrc->bmiHeader.biBitCount == 16) ?
                        pSrcRow[(x << 1) + 1] : pSrcRow[x];
                    UINT uWeightSum = pnWeights[0];
                    UINT uSum = ctr * uWeightSum;
                    for (int iy = 0; iy < iAveWinHeight; iy += iDeltaStep)
                    {
                        BYTE* pAve = pAveWindow[pLUTClampY[iy]];
                        for (int ix = x0; ix <= x1; ix += iDeltaStep)
                        {
                            UINT uValWin = pAve[pLUTClampX[ix]];
                            UINT uWeight = pnWeights[uValWin - ctr];
                            uWeightSum += uWeight;
                            uSum += uValWin * uWeight;
                        }
                    }
                    pDstRow[x] = (uSum + (uWeightSum >> 1)) / uWeightSum;
                }
            }
        }
        if (y >= iAveDiam - 1)
        {
            int* pAveSum = pCurRowSums;
            BYTE* pAve = pHorizAveWindow[0];
            for (int x = 0; x < si.cx; x++)
            {
                *pAveSum++ -= *pAve++;
            }
            nSumRows--;
        }
        BYTE* pHorizAveWindowTmp = pHorizAveWindow[0];
        for (int i = 1; i < iAveDiam; i++)
        {
            pHorizAveWindow[i - 1] = pHorizAveWindow[i];
        }
        pHorizAveWindow[iAveDiam - 1] = pHorizAveWindowTmp;
        BYTE* pAveWindowTmp = pAveWindow[0];
        for (int i = 1; i < iAveWinHeight; i++)
        {
            pAveWindow[i - 1] = pAveWindow[i];
        }
        pAveWindow[iAveWinHeight - 1] = pAveWindowTmp;
        if (pCBFunc)
        {
            if (!pCBFunc(pCBParam, y, si.cy))
            {
                fRes = FALSE;
                break;
            }
        }
    }
    for (int i = 0; i < iAveDiam; i++)
    {
        delete[] pHorizAveWindow[i];
    }
    for (int i = 0; i < iAveWinHeight; i++)
    {
        delete[] pAveWindow[i];
    }
    if (pnDistWeights)
    {
        for (int i = 0; i < iNumSteps * iNumSteps; i++)
        {
            delete[] pnDistWeights[i];
        }
        delete[] pnDistWeights;
    }
    delete[] lutAveDiv;
    delete[] pHorizAveWindow;
    delete[] pCurRowSums;
    delete[] pAveWindow;
    delete[] pClampX;
    delete[] pLUTClampY;
    return fRes;
}
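The filter above is dense with fixed-point bookkeeping and window recycling; the underlying operation of claims 8 and 10 (a weighted average of sub-averages whose weights fall off with the gradient, i.e. with the distance in value from the center pixel) can be sketched in floating point as follows. The function name and the two constants standing in for nGIWO_DivFac and dGIWO_exp are illustrative, not taken from the appendix:

```cpp
#include <cmath>
#include <cstddef>

// One output sample of a gradient-inverse weighted average: window samples
// close in value to the center pixel get weight near 1, while outliers
// (e.g. across an edge) are suppressed, so edges are preserved.
double giwoAverage(const double* win, size_t n, double center,
                   double divFac = 8.0, double expo = 2.0)
{
    double sum = 0.0, wsum = 0.0;
    for (size_t i = 0; i < n; ++i) {
        // Weight decays as a power of the value difference from the center.
        double w = std::pow(1.0 + std::fabs(win[i] - center) / divFac, -expo);
        sum += win[i] * w;
        wsum += w;
    }
    return sum / wsum;  // wsum > 0 because every weight is strictly positive
}
```

A flat window returns its own value; a window straddling an edge stays close to the center value rather than the plain mean, which is the edge-preserving property the filter relies on.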
struct ProgressStruct
{
    int nPass;
    int nPasses;
    IP_CallbackFN pCBFunc;
    void* pCBParam;
};

BOOL __stdcall MyProgressFunc(void* pParam, int nProgressNum, int nProgressDen)
{
    ProgressStruct* ps = (ProgressStruct*)pParam;
    int nProgress = ps->nPass * 100 + MulDiv(nProgressNum, 100, nProgressDen);
    if (ps->pCBFunc)
        return ps->pCBFunc(ps->pCBParam, nProgress, 100 * ps->nPasses);
    else
        return TRUE;
}
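MyProgressFunc folds per-pass progress into one numerator/denominator pair over all passes, so a multi-pass pipeline reports a single monotonic progress value. A truncating variant of the same mapping, with an illustrative function name (the original uses Win32 MulDiv, which rounds):

```cpp
// Combine the current pass index and the within-pass progress into one
// overall percentage, mirroring MyProgressFunc's (pass*100 + pct) scheme
// reported against a denominator of passes*100.
inline int overallProgressPct(int pass, int passes, int num, int den)
{
    int pct = num * 100 / den;            // within-pass progress, 0..100
    return (pass * 100 + pct) / passes;   // overall percentage, 0..100
}
```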
template <class T> int GetMinMaxImages(const BITMAPINFO* pBMISrc,
    const T* pBitsSrc, BYTE* pBitsMin, BYTE* pBitsMax, IP_CallbackFN
    pCBFunc, void* pCBParam)
{
    UINT cbRowBytesSrc = BytesPerRow(pBMISrc->bmiHeader.biWidth,
        pBMISrc->bmiHeader.biBitCount);
    UINT cbRowBytesDst = BytesPerRow(pBMISrc->bmiHeader.biWidth, 8);
    int iSamplesPerPixel = (pBMISrc->bmiHeader.biBitCount == 24) ? 3 : 4;
    for (int y = 0; y < pBMISrc->bmiHeader.biHeight; y++)
    {
        const T* pSrc = (const T*)((const BYTE*)pBitsSrc + y * cbRowBytesSrc);
        BYTE* pDst1 = pBitsMin + y * cbRowBytesDst;
        BYTE* pDst2 = pBitsMax + y * cbRowBytesDst;
        for (int x = 0; x < pBMISrc->bmiHeader.biWidth; x++)
        {
            *pDst1++ = GetPixelMin(pSrc);
            *pDst2++ = GetPixelMax(pSrc);
            pSrc += iSamplesPerPixel;
        }
        if (!pCBFunc(pCBParam, y, pBMISrc->bmiHeader.biHeight))
            return FALSE;
    }
    return TRUE;
}
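The template above walks the source rows and writes one 8-bit minimum and one 8-bit maximum sample per pixel; these two planes are the inputs to the highlight and shadow filtering passes. A minimal standalone sketch of the same idea on a packed RGB buffer (the function name is illustrative):

```cpp
#include <algorithm>
#include <cstddef>
#include <cstdint>
#include <vector>

// For each RGB pixel, record the smallest and the largest channel value.
// In the patent's color-priority path, the min plane drives the highlight
// processing and the max plane drives the shadow processing.
void minMaxImages(const uint8_t* rgb, size_t pixelCount,
                  std::vector<uint8_t>& minImg, std::vector<uint8_t>& maxImg)
{
    minImg.resize(pixelCount);
    maxImg.resize(pixelCount);
    for (size_t i = 0; i < pixelCount; ++i) {
        const uint8_t* p = rgb + 3 * i;
        minImg[i] = std::min({p[0], p[1], p[2]});
        maxImg[i] = std::max({p[0], p[1], p[2]});
    }
}
```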
int GetMinMaxImages(const BITMAPINFO* pBMISrc, const void* pBitsSrc,
    void** ppBitsMin, void** ppBitsMax, IP_CallbackFN pCBFunc, void* pCBParam)
{
    UINT cbRowBytesDst = BytesPerRow(pBMISrc->bmiHeader.biWidth, 8);
    BYTE* pBitsMin = new BYTE[cbRowBytesDst * pBMISrc->bmiHeader.biHeight];
    BYTE* pBitsMax = new BYTE[cbRowBytesDst * pBMISrc->bmiHeader.biHeight];
    BOOL fRes = FALSE;
    if (pBMISrc->bmiHeader.biBitCount == 64)
        fRes = GetMinMaxImages(pBMISrc, (const WORD*)pBitsSrc, pBitsMin, pBitsMax, pCBFunc, pCBParam);
    else
        fRes = GetMinMaxImages(pBMISrc, (const BYTE*)pBitsSrc, pBitsMin, pBitsMax, pCBFunc, pCBParam);
    if (fRes)
    {
        *ppBitsMin = pBitsMin;
        *ppBitsMax = pBitsMax;
    }
    else
    {
        *ppBitsMin = NULL;
        *ppBitsMax = NULL;
        delete[] pBitsMin;
        delete[] pBitsMax;
    }
    return fRes;
}

template <class T> int GetLuminanceImage(const BITMAPINFO* pBMISrc,
    const T* pBitsSrc, BYTE* pBitsY, IP_CallbackFN pCBFunc, void* pCBParam)
{
    UINT cbRowBytesSrc = BytesPerRow(pBMISrc->bmiHeader.biWidth,
        pBMISrc->bmiHeader.biBitCount);
    UINT cbRowBytesDst = BytesPerRow(pBMISrc->bmiHeader.biWidth, 8);
    int iSamplesPerPixel = (pBMISrc->bmiHeader.biBitCount == 24) ? 3 : 4;
    for (int y = 0; y < pBMISrc->bmiHeader.biHeight; y++)
    {
        const T* pSrc = (const T*)((const BYTE*)pBitsSrc + y * cbRowBytesSrc);
        BYTE* pDst = pBitsY + y * cbRowBytesDst;
        for (int x = 0; x < pBMISrc->bmiHeader.biWidth; x++)
        {
            *pDst++ = GetPixelVal(pSrc);
            pSrc += iSamplesPerPixel;
        }
        if (!pCBFunc(pCBParam, y, pBMISrc->bmiHeader.biHeight))
            return FALSE;
    }
    return TRUE;
}

int GetLuminanceImage(const BITMAPINFO* pBMISrc, const void* pBitsSrc,
    void** ppBitsY, IP_CallbackFN pCBFunc, void* pCBParam)
{
    UINT cbRowBytesDst = BytesPerRow(pBMISrc->bmiHeader.biWidth, 8);
    BYTE* pBitsY = new BYTE[cbRowBytesDst * pBMISrc->bmiHeader.biHeight];
    BOOL fRes = TRUE;
    if (pBMISrc->bmiHeader.biBitCount == 64)
        fRes = GetLuminanceImage(pBMISrc, (const WORD*)pBitsSrc, pBitsY, pCBFunc, pCBParam);
    else if (pBMISrc->bmiHeader.biBitCount == 24)
        fRes = GetLuminanceImage(pBMISrc, (const BYTE*)pBitsSrc, pBitsY, pCBFunc, pCBParam);
    else if (pBMISrc->bmiHeader.biBitCount == 16)
    {
        UINT cbRowBytesSrc = BytesPerRow(pBMISrc->bmiHeader.biWidth,
            pBMISrc->bmiHeader.biBitCount);
        for (int y = 0; y < pBMISrc->bmiHeader.biHeight && fRes; y++)
        {
            const BYTE* pSrc = ((const BYTE*)pBitsSrc + y * cbRowBytesSrc) + 1;
            BYTE* pDst = pBitsY + y * cbRowBytesDst;
            for (int x = 0; x < pBMISrc->bmiHeader.biWidth; x++)
            {
                *pDst++ = *pSrc;
                pSrc += 2;
            }
            fRes = pCBFunc(pCBParam, y, pBMISrc->bmiHeader.biHeight);
        }
    }
    if (fRes)
    {
        *ppBitsY = pBitsY;
    }
    else
    {
        *ppBitsY = NULL;
        delete[] pBitsY;
    }
    return fRes;
}

BOOL IP_LocalMinMax(
    const BITMAPINFO* pBMISrc, const void* pBitsSrc,
    void* pBitsMin, void* pBitsMax,
    int iMinRadius, int iMaxRadius,
    BOOL bColorPriority, int iMethod,
    IP_CallbackFN pCBFunc, void* pCBParam)
{
    SIZE si = { pBMISrc->bmiHeader.biWidth, pBMISrc->bmiHeader.biHeight };
    int nRowBytes = BytesPerRow(pBMISrc->bmiHeader.biWidth, 8);
    int iRadiiMin[32];
    int iRadiiMax[32];
    int iRadiiMinCnt = 0;
    int iRadiiMaxCnt = 0;
    int iRadius = iMinRadius;
    do
    {
        iRadiiMin[iRadiiMinCnt++] = iRadius;
        iRadius = (iRadius + 1) / iDivFac;
    } while (iRadius >= nMinRadius);
    if (iMinRadius != iMaxRadius ||
        pBMISrc->bmiHeader.biBitCount >= 24 && bColorPriority)
    {
        iRadius = iMaxRadius;
        do
        {
            iRadiiMax[iRadiiMaxCnt++] = iRadius;
            iRadius = (iRadius + 1) / iDivFac;
        } while (iRadius >= nMinRadius);
    }
    ProgressStruct ps;
    ps.nPass = 0;
    ps.nPasses = iRadiiMinCnt + iRadiiMaxCnt;
    ps.pCBFunc = pCBFunc;
    ps.pCBParam = pCBParam;
    void* pBitsTmp1 = NULL;
    void* pBitsTmp2 = NULL;
    const void* pSrc = NULL;
    void* pDst = NULL;
    BOOL fRes = TRUE;
    if (pBMISrc->bmiHeader.biBitCount >= 24)
    {
        if (bColorPriority)
        {
            ps.nPasses++;
            fRes = GetMinMaxImages(pBMISrc, pBitsSrc, &pBitsTmp1, &pBitsTmp2,
                MyProgressFunc, &ps);
            ps.nPass++;
            pSrc = pBitsTmp1;
            pDst = pBitsMin;
        }
        else
        {
            ps.nPasses++;
            fRes = GetLuminanceImage(pBMISrc, pBitsSrc, &pBitsTmp1,
                MyProgressFunc, &ps);
            ps.nPass++;
            pSrc = pBitsTmp1;
            pDst = pBitsMin;
        }
    }
    else
    {
        if (pBMISrc->bmiHeader.biBitCount == 16)
        {
            ps.nPasses++;
            fRes = GetLuminanceImage(pBMISrc, pBitsSrc, &pBitsTmp1,
                MyProgressFunc, &ps);
            ps.nPass++;
            pSrc = pBitsTmp1;
            pDst = pBitsMin;
        }
        else
        {
            pBitsTmp1 = new BYTE[si.cy * BytesPerRow(si.cx, 8)];
            pSrc = pBitsSrc;
            pDst = pBitsMin;
        }
    }
    BITMAPINFO bmi = *pBMISrc;
    bmi.bmiHeader.biBitCount = 8;
    while (iRadiiMinCnt-- && fRes)
    {
        if (iMethod == IP_LMM_METHOD_MEDIAN)
            fRes = IP_HybridMedianFilter(&bmi, pSrc, pDst,
                iRadiiMin[iRadiiMinCnt], MyProgressFunc, &ps);
        else
            fRes = IP_HybridWeightedAverageFilter(&bmi, pSrc, pDst,
                iRadiiMin[iRadiiMinCnt], iMethod, MyProgressFunc, &ps);
        ps.nPass++;
        if (pDst == pBitsTmp1 || pDst == pBitsMax)
        {
            pSrc = pDst;
            pDst = (BYTE*)pBitsMin;
        }
        else
        {
            pSrc = (BYTE*)pBitsMin;
            pDst = pBitsMax;
        }
    }
    if (fRes)
    {
        if (pDst == pBitsMin)
            CopyMemory(pBitsMin, pBitsMax, si.cy * nRowBytes);
        if (iRadiiMaxCnt)
        {
            if (pBMISrc->bmiHeader.biBitCount >= 24)
            {
                pSrc = bColorPriority ? pBitsTmp2 : pBitsTmp1;
                pDst = pBitsMax;
            }
            else
            {
                pSrc = (pBMISrc->bmiHeader.biBitCount == 16) ? pBitsTmp1 : pBitsSrc;
                pDst = pBitsMax;
            }
            while (iRadiiMaxCnt-- && fRes)
            {
                if (iMethod == IP_LMM_METHOD_MEDIAN)
                    fRes = IP_HybridMedianFilter(&bmi, pSrc, pDst,
                        iRadiiMax[iRadiiMaxCnt], MyProgressFunc, &ps);
                else
                    fRes = IP_HybridWeightedAverageFilter(&bmi, pSrc,
                        pDst, iRadiiMax[iRadiiMaxCnt], iMethod, MyProgressFunc, &ps);
                ps.nPass++;
                if (pDst == pBitsTmp1)
                {
                    pSrc = pBitsTmp1;
                    pDst = (BYTE*)pBitsMax;
                }
                else
                {
                    pSrc = (BYTE*)pBitsMax;
                    pDst = pBitsTmp1;
                }
            }
            if (pDst == pBitsMax && fRes)
                CopyMemory(pBitsMax, pBitsTmp1, si.cy * nRowBytes);
        }
        else
        {
            CopyMemory(pBitsMax, pBitsMin, si.cy * nRowBytes);
        }
    }
    delete[] pBitsTmp1;
    delete[] pBitsTmp2;
    return fRes;
}
APPENDIX B

#include "stdafx.h"
#include "LocalContrastEnhancement.h"
#include "IP_LocalMinMax.h"

inline BYTE GetPixelVal(const BYTE* pPixel)
{
    return (pPixel[0] * 114 + pPixel[1] * 587 + pPixel[2] * 299 + 500) / 1000;
}

inline WORD GetPixelVal(const WORD* pPixel)
{
    return (pPixel[0] * 114 + pPixel[1] * 587 + pPixel[2] * 299 + 500) / 1000;
}
CLocalContrastEnhancement::CLocalContrastEnhancement()
{
    m_pDIBMin = NULL;
    m_pDIBMax = NULL;
    m_pDIBY = NULL;
    m_bCacheEnabled = TRUE;
    m_iCachedRadiusShadows = -9999;
    m_iCachedRadiusHilites = -9999;
    m_bCachedColorPriority = TRUE;
    m_iCachedMethod = -1;
}

CLocalContrastEnhancement::~CLocalContrastEnhancement()
{
    delete m_pDIBMin;
    delete m_pDIBMax;
    delete m_pDIBY;
}

void CLocalContrastEnhancement::EnableCaching(BOOL fEnable)
{
    if (fEnable)
    {
        m_bCacheEnabled = TRUE;
    }
    else
    {
        delete m_pDIBMin;
        delete m_pDIBMax;
        delete m_pDIBY;
        m_pDIBMin = NULL;
        m_pDIBMax = NULL;
        m_pDIBY = NULL;
        m_bCacheEnabled = FALSE;
    }
}

DIB* CLocalContrastEnhancement::Run(
    const DIB* pDIBSrc, const double dFacLUTS[], const double dFacLUTH[],
    int iRadiusShadows, int iRadiusHilites, int iColorBoost,
    BOOL bColorPriority, int iMethod, BOOL bExposureWarning,
    IP_CallbackFN pCBFunc, void* pCBParam)
{
    m_pDIBSrc = pDIBSrc;
    DIB* pDIBDst = new DIB(pDIBSrc->Width(), pDIBSrc->Height(),
        pDIBSrc->BitsPP(), pDIBSrc->ColormapLen(), pDIBSrc->Colormap(),
        NULL, (pDIBSrc->BitsPP() == 16));
    if (!pDIBDst)
    {
        return NULL;
    }
    if (ApplyFilter(pDIBDst->bmi, pDIBDst->bits, pDIBSrc->bmi, pDIBSrc->bits,
        dFacLUTS, dFacLUTH, iRadiusShadows, iRadiusHilites, iColorBoost,
        bColorPriority, iMethod, bExposureWarning, pCBFunc, pCBParam))
    {
        m_pDIBSrc = NULL;
        return pDIBDst;
    }
    else
    {
        delete pDIBDst;
        return NULL;
    }
}

BOOL CLocalContrastEnhancement::ApplyFilter(
    BITMAPINFO* pBMIDst, void* pBitsDst,
    const BITMAPINFO* pBMISrc, const void* pBitsSrc,
    const double dFacLUTS[], const double dFacLUTH[],
    int iRadiusShadows, int iRadiusHilites, int iColorBoost,
    BOOL bColorPriority, int iMethod, BOOL bExposureWarning,
    IP_CallbackFN pCBFunc, void* pCBParam)
{
    if (!pBMIDst || !pBitsDst || !pBMISrc || !pBitsSrc)
        return FALSE;
    switch (pBMISrc->bmiHeader.biBitCount)
    {
    case 8: case 16: case 24: case 64:
        break;
    default:
        return FALSE;
    }
    if (!CreateMinMaxDIBs(iRadiusShadows, iRadiusHilites, bColorPriority,
        iMethod, pCBFunc, pCBParam))
    {
        return FALSE;
    }
    const BITMAPINFOHEADER& bmihSrc = pBMISrc->bmiHeader;
    const BITMAPINFOHEADER& bmihDst = pBMIDst->bmiHeader;
    SIZE si = { pBMISrc->bmiHeader.biWidth, pBMISrc->bmiHeader.biHeight };
    UINT cbRowBytesSrc = BytesPerRow(pBMISrc);
    UINT cbRowBytesDst = BytesPerRow(pBMIDst);
    switch (bmihSrc.biBitCount)
    {
    case 8: case 16: case 24: case 64:
        break;
    default:
        return FALSE;
    }
    if (bmihDst.biBitCount != bmihSrc.biBitCount)
        return FALSE;
    if (bmihSrc.biBitCount == 8)
    {
        for (int c = 0; c < 256; c++)
        {
            pBMIDst->bmiColors[c].rgbRed = c;
            pBMIDst->bmiColors[c].rgbGreen = c;
            pBMIDst->bmiColors[c].rgbBlue = c;
            pBMIDst->bmiColors[c].rgbReserved = 0;
        }
    }
    double dScale = double(1 << 12);
    double dCBColor = double(iColorBoost) / 100.0;
    double dCBLuma = 1.0 - dCBColor;
    int iCBColor = int(dCBColor * dScale + 0.5);
    int iCBLuma = int(dCBLuma * dScale + 0.5);
    double dOffLUTH[256];
    double dOffLUTH16[256];
    double dOffLUTHLum[256];
    double dOffLUTHClr[256];
    for (int i = 0; i < 256; i++)
    {
        dOffLUTH[i] = 255.5 - (255.0 / dFacLUTH[i]);
        dOffLUTH16[i] = 65535.5 - (65535.0 / dFacLUTH[i]);
        dOffLUTHClr[i] = max(0, (255.5 - (255.0 / dFacLUTH[i]))) * dCBColor;
        dOffLUTHLum[i] = dOffLUTH[i] - dOffLUTHClr[i];
    }
    int iFacLUTH[256];
    int iFacLUTS[256];
    int iOffLUTH[256];
    int iOffLUTHLum[256];
    int iOffLUTHClr[256];
    int nFac2Max = 0;
    for (int i = 0; i < 256; i++)
    {
        iFacLUTS[i] = int(dFacLUTS[i] * dScale + 0.5);
        iFacLUTH[i] = int(dFacLUTH[i] * dScale + 0.5);
        iOffLUTH[i] = int(65535.0 - (65535.0 / dFacLUTH[i]));
        iOffLUTHClr[i] = max(0, int((65535.0 - (65535.0 / dFacLUTH[i]))
            * dCBColor + 0.5));
        iOffLUTHLum[i] = iOffLUTH[i] - iOffLUTHClr[i];
    }
    BOOL fRes = TRUE;
    for (int y = 0; y < si.cy; y++)
    {
        const BYTE* pSrc = (const BYTE*)pBitsSrc + y * cbRowBytesSrc;
        BYTE* pDst = (BYTE*)pBitsDst + y * cbRowBytesDst;
        const BYTE* pMin = (const BYTE*)m_pDIBMin->bits + y * m_pDIBMin->BytesPerRow();
        const BYTE* pMax = (const BYTE*)m_pDIBMax->bits + y * m_pDIBMax->BytesPerRow();
        if (bmihSrc.biBitCount == 8)
        {
            for (int x = 0; x < si.cx; x++)
            {
                int nMin = *pMin++;
                int nMax = *pMax++;
                int nVal = (((*pSrc++ << 8) - iOffLUTH[nMin]) *
                    ((iFacLUTS[nMax] * iFacLUTH[nMin]) >> 12)) >> 20;
                if (nVal < 0) nVal = 0; else if (nVal > 255) nVal = 255;
                *pDst++ = nVal;
            }
        }
        else if (bmihSrc.biBitCount == 16)
        {
            const WORD* pSrc16 = (const WORD*)pSrc;
            WORD* pDst16 = (WORD*)pDst;
            for (int x = 0; x < si.cx; x++)
            {
                int nMin = *pMin++;
                int nMax = *pMax++;
                int nVal = *pSrc16++ - iOffLUTH[nMin];
                int nOut = int(nVal * dFacLUTH[nMin] * dFacLUTS[nMax] + 0.5);
                if (nOut < 0) nOut = 0;
                else if (nOut > 65535) nOut = 65535;
                *pDst16++ = nOut;
            }
        }
        else if (bmihSrc.biBitCount == 24)
        {
            const BYTE* pY = (const BYTE*)m_pDIBY->bits + y * m_pDIBY->BytesPerRow();
            for (int x = 0; x < si.cx; x++)
            {
                int Y = *pY++ << 8;
                int B = *pSrc++ << 8;
                int G = *pSrc++ << 8;
                int R = *pSrc++ << 8;
                int nMin = *pMin++;
                int nMax = *pMax++;
                UINT uFac = (iFacLUTS[nMax] * iFacLUTH[nMin]) >> 12;
                int nOffset = iOffLUTH[nMin];
                if (nOffset)
                {
                    int iColorOffset = iOffLUTHClr[nMin];
                    int iLumaOffset = iOffLUTHLum[nMin];
                    int iOffY = max(0, Y - iLumaOffset);
                    if (Y > 0)
                    {
                        B = MulDiv(B, iOffY, Y) - iColorOffset; if (B < 0) B = 0;
                        G = MulDiv(G, iOffY, Y) - iColorOffset; if (G < 0) G = 0;
                        R = MulDiv(R, iOffY, Y) - iColorOffset; if (R < 0) R = 0;
                        Y = iOffY - iColorOffset; if (Y < 0) Y = 0;
                    }
                    else
                    {
                        B = G = R = 0;
                    }
                }
                int iOutY = (Y * uFac) >> 12;
                int iOutB = (B * uFac) >> 12;
                int iOutG = (G * uFac) >> 12;
                int iOutR = (R * uFac) >> 12;
                int iDiffY = iOutY - Y;
                if (iDiffY)
                {
                    int iDiffYContrib = (iDiffY * iCBLuma) >> 12;
                    if (iDiffYContrib)
                    {
                        iOutB = (B + iDiffYContrib + (((iOutB - B) * iCBColor) >> 12));
                        iOutG = (G + iDiffYContrib + (((iOutG - G) * iCBColor) >> 12));
                        iOutR = (R + iDiffYContrib + (((iOutR - R) * iCBColor) >> 12));
                    }
                }
                if (iOutB < 0) iOutB = 0; else if (iOutB > 65535) iOutB = 65535;
                if (iOutG < 0) iOutG = 0; else if (iOutG > 65535) iOutG = 65535;
                if (iOutR < 0) iOutR = 0; else if (iOutR > 65535) iOutR = 65535;
                *pDst++ = iOutB >> 8;
                *pDst++ = iOutG >> 8;
                *pDst++ = iOutR >> 8;
                if (bExposureWarning)
                {
                    RGBTRIPLE* pRGBDst = (RGBTRIPLE*)(pDst - 3);
                    RGBTRIPLE* pRGBSrc = (RGBTRIPLE*)(pSrc - 3);
                    if ((pRGBDst->rgbtRed == 255 && pRGBSrc->rgbtRed != 255) ||
                        (pRGBDst->rgbtGreen == 255 && pRGBSrc->rgbtGreen != 255) ||
                        (pRGBDst->rgbtBlue == 255 && pRGBSrc->rgbtBlue != 255))
                    {
                        pRGBDst->rgbtRed = 255;
                        pRGBDst->rgbtGreen = 0;
                        pRGBDst->rgbtBlue = 0;
                    }
                    else if ((pRGBDst->rgbtRed == 0 && pRGBSrc->rgbtRed != 0) ||
                        (pRGBDst->rgbtGreen == 0 && pRGBSrc->rgbtGreen != 0) ||
                        (pRGBDst->rgbtBlue == 0 && pRGBSrc->rgbtBlue != 0))
                    {
                        pRGBDst->rgbtRed = 0;
                        pRGBDst->rgbtGreen = 255;
                        pRGBDst->rgbtBlue = 0;
                    }
                }
            }
        }
        else if (bmihSrc.biBitCount == 64)
        {
            const WORD* pY = (const WORD*)((const BYTE*)m_pDIBY->bits +
                y * m_pDIBY->BytesPerRow());
            const WORD* pSrc16 = (const WORD*)pSrc;
            WORD* pDst16 = (WORD*)pDst;
            for (int x = 0; x < si.cx; x++)
            {
                int nMin = *pMin++;
                int nMax = *pMax++;
                double Y = *pY++;
                double R = *pSrc16++;
                double G = *pSrc16++;
                double B = *pSrc16++;
                pSrc16++;
                double sFac = dFacLUTS[nMax];
                double hFac = dFacLUTH[nMin];
                double fac = sFac * hFac;
                int nOffset = iOffLUTH[nMin];
                if (nOffset)
                {
                    double dColorOffset = dOffLUTHClr[nMin] * 256.0;
                    double dLumaOffset = dOffLUTHLum[nMin] * 256.0;
                    double dOffY = max(0.0, Y - dLumaOffset);
                    B = max(0.0, (Y > 0.0 ? B * dOffY / Y : 0.0) - dColorOffset);
                    G = max(0.0, (Y > 0.0 ? G * dOffY / Y : 0.0) - dColorOffset);
                    R = max(0.0, (Y > 0.0 ? R * dOffY / Y : 0.0) - dColorOffset);
                    Y = max(0.0, dOffY - dColorOffset);
                }
                double dOutB = B * fac;
                double dOutG = G * fac;
                double dOutR = R * fac;
                double dOutY = Y * fac;
                double dDiffY = dOutY - Y;
                double dDiffB = dOutB - B;
                double dDiffG = dOutG - G;
                double dDiffR = dOutR - R;
                double dDiffYContrib = dDiffY * dCBLuma;
                dOutB = B + dDiffYContrib + dDiffB * dCBColor;
                dOutG = G + dDiffYContrib + dDiffG * dCBColor;
                dOutR = R + dDiffYContrib + dDiffR * dCBColor;
                int iB = int(dOutB + 0.5); if (iB < 0) iB = 0; else if (iB > 65535) iB = 65535;
                int iG = int(dOutG + 0.5); if (iG < 0) iG = 0; else if (iG > 65535) iG = 65535;
                int iR = int(dOutR + 0.5); if (iR < 0) iR = 0; else if (iR > 65535) iR = 65535;
                *pDst16++ = iR;
                *pDst16++ = iG;
                *pDst16++ = iB;
                *pDst16++ = 0;
                if (bExposureWarning)
                {
                    if ((pDst16[-4] == 65535 && pSrc16[-4] != 65535) ||
                        (pDst16[-3] == 65535 && pSrc16[-3] != 65535) ||
                        (pDst16[-2] == 65535 && pSrc16[-2] != 65535))
                    {
                        pDst16[-4] = 65535;
                        pDst16[-3] = 0;
                        pDst16[-2] = 0;
                    }
                    else if ((pDst16[-4] == 0 && pSrc16[-4] != 0) ||
                        (pDst16[-3] == 0 && pSrc16[-3] != 0) ||
                        (pDst16[-2] == 0 && pSrc16[-2] != 0))
                    {
                        pDst16[-4] = 0;
                        pDst16[-3] = 65535;
                        pDst16[-2] = 0;
                    }
                }
            }
        }
        if (pCBFunc)
        {
            int nProg1 = y * 100 / bmihSrc.biHeight;
            int nProg2 = (y + 1) * 100 / bmihSrc.biHeight;
            if (nProg2 > nProg1)
            {
                if (!pCBFunc(pCBParam, nProg2, 100))
                {
                    fRes = FALSE;
                    break;
                }
            }
        }
    }
    return fRes;
}
BOOL CLocalContrastEnhancement::CreateMinMaxDIBs(int iRadiusShadows,
    int iRadiusHilites, BOOL bColorPriority, int iMethod, IP_CallbackFN
    pCBFunc, void* pCBParam)
{
    if (m_bCacheEnabled && m_pDIBMin && m_pDIBMax &&
        iRadiusShadows == m_iCachedRadiusShadows &&
        iRadiusHilites == m_iCachedRadiusHilites &&
        bColorPriority == m_bCachedColorPriority &&
        iMethod == m_iCachedMethod)
        return TRUE;
    SIZE si = { m_pDIBSrc->Width(), m_pDIBSrc->Height() };
    delete m_pDIBMin; m_pDIBMin = NULL;
    delete m_pDIBMax; m_pDIBMax = NULL;
    delete m_pDIBY; m_pDIBY = NULL;
    int fRes = TRUE;
    if (m_pDIBSrc->BitsPP() == 24)
    {
        m_pDIBY = new DIB(si.cx, si.cy, 8, 256, NULL);
        for (int y = 0; y < si.cy; y++)
        {
            const BYTE* pSrc = (const BYTE*)m_pDIBSrc->bits + y * m_pDIBSrc->BytesPerRow();
            BYTE* pDst = (BYTE*)m_pDIBY->bits + y * m_pDIBY->BytesPerRow();
            for (int x = 0; x < si.cx; x++)
            {
                *pDst++ = GetPixelVal(pSrc);
                pSrc += 3;
            }
            if (pCBFunc && !pCBFunc(pCBParam, y, si.cy))
            {
                fRes = FALSE;
                break;
            }
        }
    }
    else if (m_pDIBSrc->BitsPP() == 64)
    {
        m_pDIBY = new DIB(si.cx, si.cy, 16, 0, NULL, NULL, TRUE);
        for (int y = 0; y < si.cy; y++)
        {
            const WORD* pSrc = (const WORD*)((const BYTE*)m_pDIBSrc->bits +
                y * m_pDIBSrc->BytesPerRow());
            WORD* pDst = (WORD*)((BYTE*)m_pDIBY->bits + y * m_pDIBY->BytesPerRow());
            for (int x = 0; x < si.cx; x++)
            {
                *pDst++ = GetPixelVal(pSrc);
                pSrc += 4;
            }
            if (pCBFunc && !pCBFunc(pCBParam, y, si.cy))
            {
                fRes = FALSE;
                break;
            }
        }
    }
    if (fRes)
    {
        m_pDIBMin = new DIB(si.cx, si.cy, 8, 256, NULL);
        m_pDIBMax = new DIB(si.cx, si.cy, 8, 256, NULL);
        for (int i = 0; i < 256; i++)
        {
            if (m_pDIBMin) memset(&m_pDIBMin->bmi->bmiColors[i], i, 3);
            if (m_pDIBMax) memset(&m_pDIBMax->bmi->bmiColors[i], i, 3);
        }
        fRes = IP_LocalMinMax(m_pDIBSrc->bmi, m_pDIBSrc->bits,
            m_pDIBMin->bits, m_pDIBMax->bits,
            iRadiusHilites, iRadiusShadows,
            bColorPriority, iMethod, pCBFunc, pCBParam);
    }
    m_iCachedRadiusShadows = iRadiusShadows;
    m_iCachedRadiusHilites = iRadiusHilites;
    m_bCachedColorPriority = bColorPriority;
    m_iCachedMethod = iMethod;
    if (!fRes)
    {
        delete m_pDIBMin; m_pDIBMin = NULL;
        delete m_pDIBMax; m_pDIBMax = NULL;
        delete m_pDIBY; m_pDIBY = NULL;
    }
    return fRes;
}
void CLocalContrastEnhancement::CreateResponseCurves(
    double* dShadows, int iShadowsLevel, int iShadowsContrast,
    int iShadowsThreshold0, int iShadowsThreshold1,
    double* dHilites, int iHilitesLevel, int iHilitesContrast,
    int iHilitesThreshold0, int iHilitesThreshold1)
{
    double dGammaS = 25.0 - 4.6 * log((double)iShadowsContrast * 2 + 1.0);
    double dGammaH = 25.0 - 4.6 * log((double)iHilitesContrast * 2 + 1.0);
    double dMultMaxS = 10000.0;
    double dMultMaxH = 10000.0;
    for (int i = 1; i < 256; i++)
    {
        double dAmtS = double(iShadowsThreshold1 - i) /
            double(iShadowsThreshold1 - iShadowsThreshold0);
        double dAmtH = double(i - iShadowsThreshold0) /
            double(iShadowsThreshold1 - iShadowsThreshold0);
        double dFacMaxS = (min(256, (i + 30) * 2.5) / double(i) - 1.0) /
            pow(dAmtS, dGammaS);
        double dFacMaxH = (i / double(256 - i)) / pow(dAmtH, dGammaH);
        if (i < 200 && dFacMaxS < dMultMaxS) dMultMaxS = dFacMaxS;
        if (i > 50 && dFacMaxH < dMultMaxH) dMultMaxH = dFacMaxH;
    }
    double dShadowsMult = iShadowsLevel * dMultMaxS / 100.0;
    double dHilitesMult = iHilitesLevel * dMultMaxH / 100.0;
    for (int i = 0; i < 256; i++)
    {
        if (i <= iShadowsThreshold0)
            dShadows[i] = 1.0 + dShadowsMult;
        else if (i >= iShadowsThreshold1)
            dShadows[i] = 1.0;
        else
        {
            double dAmt = double(iShadowsThreshold1 - i) /
                double(iShadowsThreshold1 - iShadowsThreshold0);
            dShadows[i] = 1.0 + (pow(dAmt, dGammaS) * dShadowsMult);
        }
        if (i <= iHilitesThreshold0)
            dHilites[i] = 1.0;
        else if (i >= iHilitesThreshold1)
            dHilites[i] = 1.0 + dHilitesMult;
        else
        {
            double dAmt = double(i - iHilitesThreshold0) /
                double(iHilitesThreshold1 - iHilitesThreshold0);
            dHilites[i] = 1.0 + (pow(dAmt, dGammaH) * dHilitesMult);
        }
    }
}
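Stripped of the fixed-point machinery, the 8-bit path of ApplyFilter reduces to a simple per-pixel transform: subtract a local offset derived from the filtered min (highlight) image, then scale by the product of the highlight and shadow multipliers, and clamp. A minimal floating-point sketch of that transform; the function name is illustrative, and the two curve factors stand in for dFacLUTH sampled at the local minimum and dFacLUTS sampled at the local maximum:

```cpp
#include <algorithm>
#include <cstdint>

// Per-pixel local contrast enhancement, following the structure of the
// appendix's 8-bit path: out = (in - offset) * facH * facS, where
// offset = 255 - 255/facH keeps full white mapped to full white.
inline uint8_t enhancePixel(uint8_t src, double facHmin, double facSmax)
{
    double offset = 255.0 - 255.0 / facHmin;              // local highlight offset
    double v = (double(src) - offset) * facHmin * facSmax;
    return uint8_t(std::min(255.0, std::max(0.0, v + 0.5)));  // round and clamp
}
```

With both factors at 1.0 the pixel is unchanged; a highlight factor above 1.0 compresses bright regions toward white without clipping it, and a shadow factor above 1.0 lifts dark regions.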

Claims

What is claimed is:
1. A method for contrast enhancement for digital images, comprising : filtering an original image having original color values, to generate a first filtered image corresponding to bright color values, and a second filtered image corresponding to dark color values; deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value; deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value; deriving local offset values by applying an offset curve to the first filtered image; and processing the original image, comprising : subtracting the local offset values from the original color values to generate shifted color values; multiplying the shifted color values by the local highlight multipliers; and further multiplying the shifted color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
2. The method of claim 1 wherein the first and second filtered images are the same image.
3. The method of claim 2 wherein the first and second filtered images are obtained from luminance color values of the original image.
4. The method of claim 1 wherein the first and second filtered images are different images.
5. The method of claim 4 wherein the first filtered image is obtained from minimum red-green-blue color values of the original image, and wherein the second filtered image is obtained from maximum red-green-blue color values of the original image.
6. The method of claim 1 wherein the first and second filtered images are computed using medians.
7. The method of claim 6 wherein the medians are medians of sub-averages.
8. The method of claim 1 wherein the first and second filtered images are computed using weighted averages.
9. The method of claim 8 wherein the weighted averages are weighted averages of sub-averages.
10. The method of claim 8 wherein the weighted averages use weight coefficients that have an inverse gradient dependency on input values.
11. The method of claim 8 wherein the weighted averages use weight coefficients that have an inverse distance dependency on input positions.
12. The method of claim 1 wherein said filtering is based on values of at least one parameter, and further comprising setting at least one such parameter value by a user.
13. The method of claim 12 further comprising rendering the contrast-enhanced image on a display device, and wherein said setting at least one such parameter value is performed in response to said rendering.
14. The method of claim 13 wherein said processing, said rendering and said setting at least one such parameter value are performed repeatedly until a satisfactory contrast-enhanced image is obtained.
15. The method of claim 1 wherein said deriving local highlight multipliers and said deriving local shadow multipliers are based on values of at least one parameter, and further comprising setting at least one such parameter value by a user.
16. The method of claim 15 further comprising rendering the contrast-enhanced image on a display device, and wherein said setting at least one such parameter value is performed in response to said rendering.
17. The method of claim 16 wherein said processing, said rendering and said setting at least one such parameter value are performed repeatedly until a satisfactory contrast-enhanced image is obtained.
18. A system for enhancing contrast of digital images, comprising : a filter processor for filtering an original image having original color values, to generate a first filtered image corresponding to bright color values, and a second filtered image corresponding to dark color values; and an image enhancer coupled to said filter processor for (i) deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, (ii) deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, (iii) deriving local offset values by applying an offset curve to the first filtered image, (iv) subtracting the local offset values from the original color values to generate shifted color values, (v) multiplying the shifted color values by the local highlight multipliers, and (vi) further multiplying the shifted color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
19. The system of claim 18 wherein the first and second filtered images are the same image.
20. The system of claim 19 wherein the first and second filtered images are obtained from luminance color values of the original image.
21. The system of claim 18 wherein the first and second filtered images are different images.
22. The system of claim 21 wherein the first filtered image is obtained from minimum red-green-blue color values of the original image, and wherein the second filtered image is obtained from maximum red-green-blue color values of the original image.
23. The system of claim 18 wherein said filter processor uses medians.
24. The system of claim 23 wherein the medians are medians of sub-averages.
25. The system of claim 18 wherein said filter processor uses weighted averages.
26. The system of claim 25 wherein the weighted averages are weighted averages of sub-averages.

27. The system of claim 25 wherein the weighted averages use weight coefficients that have an inverse gradient dependency on input values.

28. The system of claim 25 wherein the weighted averages use weight coefficients that have an inverse distance dependency on input positions.
29. The system of claim 18 wherein said filter processor uses values of at least one parameter, and further comprising a user interface for setting at least one such parameter value by a user.
30. The system of claim 29 wherein said user interface renders the contrast-enhanced image on a display device, and wherein the user sets at least one such parameter value in response to viewing the contrast-enhanced image on the display device.
31. The system of claim 18 wherein said image enhancer uses values of at least one parameter, and further comprising a user interface for setting at least one such parameter value by a user.
32. The system of claim 31 wherein said user interface renders the contrast-enhanced image on a display device, and wherein the user sets at least one such parameter value in response to viewing the contrast-enhanced image on the display device.
33. A machine readable storage medium storing program code for causing a data processing system to perform a method comprising: filtering an original image having original color values, to generate a first filtered image corresponding to bright color values, and a second filtered image corresponding to dark color values; deriving local highlight multipliers by applying a highlight response curve to the first filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value; deriving local shadow multipliers by applying a shadow response curve to the second filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value; deriving local offset values by applying an offset curve to the first filtered image; and processing the original image, comprising: subtracting the local offset values from the original color values to generate shifted color values; multiplying the shifted color values by the local highlight multipliers; and further multiplying the shifted color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
34. A method for contrast enhancement for digital images, comprising: filtering an original image having original color values, to generate a filtered image corresponding to bright color values; deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value; deriving local offset values by applying an offset curve to the filtered image; and processing the original image, comprising: subtracting the local offset values from the original color values to generate shifted color values; and multiplying the shifted color values by the local highlight multipliers.
35. A method for contrast enhancement for digital images, comprising: filtering an original image having original color values, to generate a filtered image corresponding to dark color values; deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value; and processing the original image, comprising multiplying the original color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
36. A system for enhancing contrast of digital images, comprising: a filter processor for filtering an original image having original color values, to generate a filtered image corresponding to bright color values; and an image enhancer coupled to said filter processor for (i) deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value, (ii) deriving local offset values by applying an offset curve to the filtered image, (iii) subtracting the local offset values from the original color values to generate shifted color values, and (iv) multiplying the shifted color values by the local highlight multipliers, thereby generating a contrast-enhanced image from the original image.
37. A system for enhancing contrast of digital images, comprising: a filter processor for filtering an original image having original color values, to generate a filtered image corresponding to dark color values; and an image enhancer coupled to said filter processor for (i) deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value, and (ii) multiplying the original color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
38. A machine readable storage medium storing program code for causing a data processing system to perform a method comprising: filtering an original image having original color values, to generate a filtered image corresponding to bright color values; deriving local highlight multipliers by applying a highlight response curve to the filtered image, the highlight response curve being a function of color value that increases from a response value of one, corresponding to a color value of zero, to a response value greater than one, corresponding to a maximum color value; deriving local offset values by applying an offset curve to the filtered image; and processing the original image, comprising: subtracting the local offset values from the original color values to generate shifted color values; and multiplying the shifted color values by the local highlight multipliers, thereby generating a contrast-enhanced image from the original image.
39. A machine readable storage medium storing program code for causing a data processing system to perform a method comprising : filtering an original image having original color values, to generate a filtered image corresponding to dark color values; deriving local shadow multipliers by applying a shadow response curve to the filtered image, the shadow response curve being a function of color value that decreases from a response value greater than one, corresponding to a color value of zero, to a response value of one, corresponding to a maximum color value; and processing the original image, comprising multiplying the original color values by the local shadow multipliers, thereby generating a contrast-enhanced image from the original image.
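The claims above describe one pipeline repeatedly: low-pass filter the image into "bright" and "dark" envelope images, turn those into spatially varying highlight and shadow multipliers via response curves, derive local offsets from the bright image, then subtract the offsets and apply both multipliers. The sketch below is an illustrative reading of that pipeline, not the patentee's implementation: a separable box blur stands in for the filter processor (claims 23-28 actually allow medians of sub-averages or weighted averages with inverse-gradient and inverse-distance weights), and the gains, radius, and linear curve shapes are assumptions chosen only to satisfy the claimed endpoint conditions (highlight curve rising from 1 to above 1, shadow curve falling from above 1 to 1).

```python
import numpy as np

def _moving_avg(p, k, axis):
    # Moving average over a window of k samples along `axis`,
    # computed from a cumulative sum with a prepended zero slab.
    c = np.cumsum(p, axis=axis)
    zero = np.zeros_like(np.take(c, [0], axis=axis))
    c = np.concatenate([zero, c], axis=axis)
    hi = np.take(c, range(k, c.shape[axis]), axis=axis)
    lo = np.take(c, range(0, c.shape[axis] - k), axis=axis)
    return (hi - lo) / k

def box_blur(a, radius):
    """Separable box blur with edge padding: a simple stand-in for the
    claimed filter processor (claims 23-28 permit medians or weighted
    averages instead)."""
    k = 2 * radius + 1
    p = np.pad(a, radius, mode="edge")
    return _moving_avg(_moving_avg(p, k, 0), k, 1)

def enhance_contrast(img, highlight_gain=0.6, shadow_gain=0.6,
                     offset_gain=0.1, radius=8):
    """Illustrative sketch of the claim 18 / claim 33 pipeline.
    `img` is a float RGB array in [0, 1]. All gains and curve shapes
    here are assumptions, not values taken from the patent."""
    # Claim 22: bright envelope from per-pixel min RGB, dark from max RGB.
    bright = box_blur(img.min(axis=2), radius)
    dark = box_blur(img.max(axis=2), radius)
    # Highlight response curve: 1 at color value 0, above 1 at the maximum.
    highlight = 1.0 + highlight_gain * bright
    # Shadow response curve: above 1 at color value 0, 1 at the maximum.
    shadow = 1.0 + shadow_gain * (1.0 - dark)
    # Offset curve applied to the bright-filtered image (shape assumed linear).
    offset = offset_gain * bright
    # Subtract local offsets, then apply both local multipliers (steps iv-vi).
    out = (img - offset[..., None]) * highlight[..., None] * shadow[..., None]
    return np.clip(out, 0.0, 1.0)
```

Because the envelope images are low-pass, the multipliers vary slowly across the image: a flat mid-gray input gets a uniform adjustment, while a real photograph receives different local gains in its shadow and highlight regions, which is the locality the claims aim for.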
PCT/CA2006/000591 2005-04-13 2006-04-13 Image contrast enhancement WO2006108299A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US11/106,339 2005-04-13
US11/106,339 US8014034B2 (en) 2005-04-13 2005-04-13 Image contrast enhancement

Publications (1)

Publication Number Publication Date
WO2006108299A1 true WO2006108299A1 (en) 2006-10-19

Family

ID=37086587

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CA2006/000591 WO2006108299A1 (en) 2005-04-13 2006-04-13 Image contrast enhancement

Country Status (2)

Country Link
US (3) US8014034B2 (en)
WO (1) WO2006108299A1 (en)


Families Citing this family (92)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FI20045201A (en) * 2004-05-31 2005-12-01 Nokia Corp A method and system for viewing and enhancing images
US7746411B1 (en) * 2005-12-07 2010-06-29 Marvell International Ltd. Color management unit
US20070291316A1 (en) * 2006-06-14 2007-12-20 Kabushiki Kaisha Toshiba And Toshiba Tec Kabushiki Kaisha Automatic image enhancement using computed predictors
US7701618B2 (en) * 2006-06-14 2010-04-20 Kabushiki Kaisha Toshiba Automatic image enhancement using computed predictors
US8111941B2 (en) * 2006-11-22 2012-02-07 Nik Software, Inc. Method for dynamic range editing
US8233738B2 (en) 2007-07-30 2012-07-31 Dolby Laboratories Licensing Corporation Enhancing dynamic ranges of images
US8073285B2 (en) * 2007-08-27 2011-12-06 Ancestry.Com Operations Inc. User interface methods and systems for image brightness and contrast
US8488901B2 (en) 2007-09-28 2013-07-16 Sony Corporation Content based adjustment of an image
US8208750B2 (en) * 2007-10-26 2012-06-26 Hewlett-Packard Development Company, L.P. Method and system for dual-envelope image enhancement
WO2009079644A2 (en) * 2007-12-18 2009-06-25 Brijot Imaging Systems, Inc. Software methodology for autonomous concealed object detection and threat assessment
US8238687B1 2008-01-09 2012-08-07 Hewlett-Packard Development Company, L.P. Local contrast enhancement of images
US8300928B2 (en) * 2008-01-25 2012-10-30 Intermec Ip Corp. System and method for locating a target region in an image
US20090220169A1 (en) * 2008-02-28 2009-09-03 Microsoft Corporation Image enhancement
US9110791B2 (en) 2008-03-03 2015-08-18 Microsoft Technology Licensing, Llc Optimistic object relocation
US8245005B2 (en) * 2008-03-03 2012-08-14 Microsoft Corporation Probabilistic object relocation
US8666189B2 (en) * 2008-08-05 2014-03-04 Aptina Imaging Corporation Methods and apparatus for flat region image filtering
US8385634B1 (en) * 2008-08-25 2013-02-26 Adobe Systems Incorporated Selecting and applying a color range in an image mask
US8340406B1 (en) * 2008-08-25 2012-12-25 Adobe Systems Incorporated Location-weighted color masking
US8370759B2 (en) 2008-09-29 2013-02-05 Ancestry.com Operations Inc Visualizing, creating and editing blending modes methods and systems
US8576145B2 (en) * 2008-11-14 2013-11-05 Global Oled Technology Llc Tonescale compression for electroluminescent display
US9524700B2 (en) 2009-05-14 2016-12-20 Pure Depth Limited Method and system for displaying images of various formats on a single display
CN101945224B (en) * 2009-07-01 2015-03-11 弗卢克公司 Thermography methods
US8928682B2 (en) * 2009-07-07 2015-01-06 Pure Depth Limited Method and system of processing images for improved display
US20120144304A1 (en) * 2009-08-12 2012-06-07 Ju Guo System and method for reducing artifacts in images
JP5596948B2 (en) * 2009-09-18 2014-09-24 キヤノン株式会社 Image processing apparatus, image processing method, and program
US8456711B2 (en) * 2009-10-30 2013-06-04 Xerox Corporation SUSAN-based corner sharpening
US8582914B2 (en) * 2010-03-22 2013-11-12 Nikon Corporation Tone mapping with adaptive slope for image sharpening
KR101113483B1 (en) * 2010-04-09 2012-03-06 동아대학교 산학협력단 Apparatus for enhancing visibility of color image
US8736722B2 (en) * 2010-07-15 2014-05-27 Apple Inc. Enhanced image capture sharpening
JP5676968B2 (en) * 2010-08-12 2015-02-25 キヤノン株式会社 Image processing apparatus and image processing method
TWI433053B (en) * 2010-11-02 2014-04-01 Orise Technology Co Ltd Method and system for image sharpness enhancement based on local feature of the image
TWI460679B (en) * 2010-12-23 2014-11-11 Nat Univ Chung Hsing Section length enhancement unit and method
JP5927829B2 (en) * 2011-02-15 2016-06-01 株式会社リコー Printing data creation apparatus, printing data creation method, program, and recording medium
US9024951B2 (en) * 2011-02-16 2015-05-05 Apple Inc. Devices and methods for obtaining high-local-contrast image data
WO2012127904A1 (en) * 2011-03-24 2012-09-27 三菱電機株式会社 Image processing device and method
US8824821B2 (en) 2011-03-28 2014-09-02 Sony Corporation Method and apparatus for performing user inspired visual effects rendering on an image
US9497447B2 (en) * 2011-06-15 2016-11-15 Scalable Display Technologies, Inc. System and method for color and intensity calibrating of a display system for practical usage
JP5435307B2 (en) * 2011-06-16 2014-03-05 アイシン精機株式会社 In-vehicle camera device
RU2477007C1 (en) * 2011-09-05 2013-02-27 Открытое акционерное общество "Государственный Рязанский приборный завод" System for correcting dark, light and middle tones on digital images
DE112013002409T5 (en) 2012-05-09 2015-02-26 Apple Inc. Apparatus, method and graphical user interface for displaying additional information in response to a user contact
DE112013002412T5 (en) * 2012-05-09 2015-02-19 Apple Inc. Apparatus, method and graphical user interface for providing feedback for changing activation states of a user interface object
DE112013002387T5 (en) 2012-05-09 2015-02-12 Apple Inc. Apparatus, method and graphical user interface for providing tactile feedback for operations in a user interface
CN103390262B (en) * 2012-05-11 2016-06-29 华为技术有限公司 The acquisition methods of weight coefficient of digital filter and device
US9397844B2 (en) * 2012-09-11 2016-07-19 Apple Inc. Automated graphical user-interface layout
US9445011B2 (en) * 2012-10-22 2016-09-13 GM Global Technology Operations LLC Dynamic rearview mirror adaptive dimming overlay through scene brightness estimation
US9690980B2 (en) * 2012-11-09 2017-06-27 Google Inc. Automatic curation of digital images
US8995719B2 (en) * 2012-12-10 2015-03-31 Intel Corporation Techniques for improved image disparity estimation
CN103942755B (en) * 2013-01-23 2017-11-17 深圳市腾讯计算机系统有限公司 Brightness of image adjusting method and device
US11113821B2 (en) 2017-12-20 2021-09-07 Duelight Llc System, method, and computer program for adjusting image contrast using parameterized cumulative distribution functions
TWI473039B (en) * 2013-03-05 2015-02-11 Univ Tamkang Method and image processing device for image dynamic range compression with local contrast enhancement
TWI503792B (en) * 2013-05-21 2015-10-11 Nat Taichung University Science & Technology Alignment device and method thereof
US9894328B2 (en) * 2013-07-18 2018-02-13 BOT Home Automation, Inc. Wireless entrance communication device
GB2520611B (en) * 2013-08-02 2016-12-21 Anthropics Tech Ltd Image manipulation
US20150086127A1 (en) * 2013-09-20 2015-03-26 Samsung Electronics Co., Ltd Method and image capturing device for generating artificially defocused blurred image
US20150109323A1 (en) * 2013-10-18 2015-04-23 Apple Inc. Interactive black and white image editing
EP2897112B1 (en) * 2014-01-17 2019-03-06 Wincor Nixdorf International GmbH Method and apparatus for the prevention of false alarms in monitoring systems
JP6593675B2 (en) * 2014-02-28 2019-10-23 パナソニックIpマネジメント株式会社 Image processing apparatus and image processing method
JP6414386B2 (en) * 2014-03-20 2018-10-31 株式会社島津製作所 Image processing apparatus and image processing program
US9633462B2 (en) * 2014-05-09 2017-04-25 Google Inc. Providing pre-edits for photos
CN104318524A (en) * 2014-10-15 2015-01-28 烟台艾睿光电科技有限公司 Method, device and system for image enhancement based on YCbCr color space
US9471966B2 (en) * 2014-11-21 2016-10-18 Adobe Systems Incorporated Area-dependent image enhancement
US9632664B2 (en) 2015-03-08 2017-04-25 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US9639184B2 (en) 2015-03-19 2017-05-02 Apple Inc. Touch input cursor manipulation
AU2015201623A1 (en) * 2015-03-30 2016-10-20 Canon Kabushiki Kaisha Choosing optimal images with preference distributions
US11170480B2 (en) * 2015-04-29 2021-11-09 University of Pittsburgh—of the Commonwealth System of Higher Education Image enhancement using virtual averaging
US9860451B2 (en) 2015-06-07 2018-01-02 Apple Inc. Devices and methods for capturing and interacting with enhanced digital images
WO2016207875A1 (en) 2015-06-22 2016-12-29 Photomyne Ltd. System and method for detecting objects in an image
US10277813B1 (en) 2015-06-25 2019-04-30 Amazon Technologies, Inc. Remote immersive user experience from panoramic video
US10084959B1 (en) * 2015-06-25 2018-09-25 Amazon Technologies, Inc. Color adjustment of stitched panoramic video
US9880735B2 (en) 2015-08-10 2018-01-30 Apple Inc. Devices, methods, and graphical user interfaces for manipulating user interface objects with visual and/or haptic feedback
US10636121B2 (en) 2016-01-12 2020-04-28 Shanghaitech University Calibration method and apparatus for panoramic stereo video system
WO2017127445A1 (en) * 2016-01-21 2017-07-27 Astral Images Corporation Processing image content for enabling high dynamic range (hdr) output thereof and computer-readable program product having such hdr content
US20170323385A1 (en) 2016-05-09 2017-11-09 Axioma, Inc. Methods and apparatus employing hierarchical conditional variance to minimize downside risk of a multi-asset class portfolio and improved graphical user interface
WO2017218255A1 (en) 2016-06-14 2017-12-21 BOT Home Automation, Inc. Configurable motion detection and alerts for audio/video recording and communication devices
US10350010B2 (en) * 2016-11-14 2019-07-16 Intai Technology Corp. Method and system for verifying panoramic images of implants
US10187637B2 (en) 2017-03-07 2019-01-22 Filmic Inc. Inductive micro-contrast evaluation method
CN107967669B (en) * 2017-11-24 2022-08-09 腾讯科技(深圳)有限公司 Picture processing method and device, computer equipment and storage medium
EP3528201A1 (en) * 2018-02-20 2019-08-21 InterDigital VC Holdings, Inc. Method and device for controlling saturation in a hdr image
CN109246354B (en) * 2018-09-07 2020-04-24 Oppo广东移动通信有限公司 Image processing method and device, electronic equipment and computer readable storage medium
US11144758B2 (en) 2018-11-15 2021-10-12 Geox Gis Innovations Ltd. System and method for object detection and classification in aerial imagery
KR102575126B1 (en) * 2018-12-26 2023-09-05 주식회사 엘엑스세미콘 Image precessing device and method thereof
CN110009563B (en) * 2019-03-27 2021-02-19 联想(北京)有限公司 Image processing method and device, electronic device and storage medium
TWI727306B (en) * 2019-04-16 2021-05-11 瑞昱半導體股份有限公司 Contrast adjustment system and contrast adjustment method
KR20190103097A (en) * 2019-08-16 2019-09-04 엘지전자 주식회사 Beauty counseling information providing device and beauty counseling information providing method
KR102392716B1 (en) * 2019-10-23 2022-04-29 구글 엘엘씨 Customize content animation based on viewpoint position
WO2021226601A1 (en) * 2020-05-08 2021-11-11 Lets Enhance Inc Image enhancement
CN112488954B (en) * 2020-12-07 2023-09-22 江苏理工学院 Adaptive image enhancement method and device based on image gray level
CN112862709A (en) * 2021-01-27 2021-05-28 昂纳工业技术(深圳)有限公司 Image feature enhancement method and device and readable storage medium
KR20220148423A (en) * 2021-04-29 2022-11-07 삼성전자주식회사 Denoising method and denosing device of reducing noise of image
CN113506231B (en) * 2021-08-03 2023-06-27 泰康保险集团股份有限公司 Processing method and device for pixels in image, medium and electronic equipment
CN113643406B (en) * 2021-08-12 2022-03-25 北京的卢深视科技有限公司 Image generation method, electronic device, and computer-readable storage medium
US20240022676A1 (en) * 2022-07-15 2024-01-18 Zhejiang University Of Technology Method for acquiring color density characteristic curve in printing

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4590582A (en) * 1982-10-07 1986-05-20 Tokyo Shibaura Denki Kabushiki Kaisha Image data processing apparatus for performing spatial filtering of image data
US20030215133A1 (en) * 2002-05-20 2003-11-20 Eastman Kodak Company Color transformation for processing digital images

Family Cites Families (54)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4384336A (en) 1980-08-29 1983-05-17 Polaroid Corporation Method and apparatus for lightness imaging
US5450502A (en) 1993-10-07 1995-09-12 Xerox Corporation Image-dependent luminance enhancement
US5774599A (en) 1995-03-14 1998-06-30 Eastman Kodak Company Method for precompensation of digital images for enhanced presentation on digital displays with limited capabilities
US5991456A (en) 1996-05-29 1999-11-23 Science And Technology Corporation Method of improving a digital image
US5818975A (en) * 1996-10-28 1998-10-06 Eastman Kodak Company Method and apparatus for area selective exposure adjustment
JPH10191079A (en) * 1996-12-26 1998-07-21 Canon Inc Image reader and image read method
US6069979A (en) 1997-02-25 2000-05-30 Eastman Kodak Company Method for compressing the dynamic range of digital projection radiographic images
US6317521B1 (en) 1998-07-06 2001-11-13 Eastman Kodak Company Method for preserving image detail when adjusting the contrast of a digital image
US6212304B1 (en) 1998-07-06 2001-04-03 Intel Corp. Method and apparatus for imaging processing
US6825884B1 (en) * 1998-12-03 2004-11-30 Olympus Corporation Imaging processing apparatus for generating a wide dynamic range image
US6677959B1 (en) 1999-04-13 2004-01-13 Athentech Technologies Inc. Virtual true color light amplification
JP4076302B2 (en) 1999-08-31 2008-04-16 シャープ株式会社 Image brightness correction method
US6731790B1 (en) 1999-10-19 2004-05-04 Agfa-Gevaert Method of enhancing color images
US6760484B1 (en) 2000-01-26 2004-07-06 Hewlett-Packard Development Company, L.P. Method for improved contrast mapping of digital images
US6822762B2 (en) 2000-03-31 2004-11-23 Hewlett-Packard Development Company, L.P. Local color correction
US6813041B1 (en) 2000-03-31 2004-11-02 Hewlett-Packard Development Company, L.P. Method and apparatus for performing local color correction
US7342609B2 (en) 2000-05-09 2008-03-11 Eastman Kodak Company Exposure adjustment in an imaging apparatus
US6633684B1 (en) 2000-07-07 2003-10-14 Athentech Technologies Corp. Distortion-free image contrast enhancement
US6741753B1 2000-09-05 2004-05-25 Hewlett-Packard Development Company, L.P. Method and system of local color correction using background luminance masking
US7016080B2 (en) 2000-09-21 2006-03-21 Eastman Kodak Company Method and system for improving scanned image detail
US6804409B2 (en) 2001-03-10 2004-10-12 Hewlett-Packard Development Company, L.P. Method for contrast mapping of digital images using a variable mask
US20020154323A1 (en) 2001-03-10 2002-10-24 Sobol Robert E. Method for variable contrast mapping of digital images
US6807299B2 (en) 2001-03-10 2004-10-19 Hewlett-Packard Development Company, L.P. Method for contrast mapping of digital images that converges on a solution
US6941028B2 (en) 2001-04-30 2005-09-06 Hewlett-Packard Development Company, L.P. System and method for image enhancement, dynamic range compensation and illumination correction
GB0110748D0 (en) 2001-05-02 2001-06-27 Apical Ltd Image enhancement methods and apparatus therefor
JP3750797B2 (en) 2001-06-20 2006-03-01 ソニー株式会社 Image processing method and apparatus
JP4649781B2 (en) 2001-06-20 2011-03-16 ソニー株式会社 Image processing method and apparatus
US6834125B2 (en) 2001-06-25 2004-12-21 Science And Technology Corp. Method of improving a digital image as a function of its dynamic range
US6842543B2 (en) 2001-06-25 2005-01-11 Science And Technology Corporation Method of improving a digital image having white zones
US7215365B2 (en) 2001-06-25 2007-05-08 Sony Corporation System and method for effectively calculating destination pixels in an image data processing procedure
US6826310B2 (en) * 2001-07-06 2004-11-30 Jasc Software, Inc. Automatic contrast enhancement
US7116836B2 (en) 2002-01-23 2006-10-03 Sony Corporation Method and apparatus for enhancing an image using a wavelet-based retinex algorithm
US6937775B2 (en) 2002-05-15 2005-08-30 Eastman Kodak Company Method of enhancing the tone scale of a digital image to extend the linear response range without amplifying noise
US7113649B2 (en) * 2002-06-24 2006-09-26 Eastman Kodak Company Enhancing the tonal characteristics of digital images
US7058234B2 (en) * 2002-10-25 2006-06-06 Eastman Kodak Company Enhancing the tonal, spatial, and color characteristics of digital images using expansive and compressive tone scale functions
US7298917B2 (en) 2002-11-11 2007-11-20 Minolta Co., Ltd. Image processing program product and device for executing Retinex processing
US7149358B2 (en) * 2002-11-27 2006-12-12 General Electric Company Method and system for improving contrast using multi-resolution contrast based dynamic range management
US7489814B2 (en) 2003-02-21 2009-02-10 Ramot At Tel Aviv University Ltd. Method of and device for modulating a dynamic range of still and video images
US7102793B2 (en) * 2003-04-30 2006-09-05 Hewlett-Packard Development Company, L.P. Image filtering method
US7672528B2 (en) 2003-06-26 2010-03-02 Eastman Kodak Company Method of processing an image to form an image pyramid
US7409083B2 (en) 2003-07-18 2008-08-05 Canon Kabushiki Kaisha Image processing method and apparatus
US7469072B2 (en) 2003-07-18 2008-12-23 Canon Kabushiki Kaisha Image processing apparatus and method
JP4639037B2 (en) 2003-07-18 2011-02-23 キヤノン株式会社 Image processing method and apparatus
US7352911B2 (en) 2003-07-31 2008-04-01 Hewlett-Packard Development Company, L.P. Method for bilateral filtering of digital images
JP3880553B2 (en) 2003-07-31 2007-02-14 キヤノン株式会社 Image processing method and apparatus
US7289666B2 (en) 2003-09-09 2007-10-30 Hewlett-Packard Development Company, L.P. Image processing utilizing local color correction and cumulative histograms
US7760943B2 (en) 2003-10-02 2010-07-20 Hewlett-Packard Development Company, L.P. Method to speed-up Retinex-type algorithms
US20050073702A1 (en) 2003-10-02 2005-04-07 Doron Shaked Robust recursive envelope operators for fast retinex-type processing
US7466868B2 (en) 2003-10-03 2008-12-16 Adobe Systems Incorporated Determining parameters for adjusting images
US7412105B2 (en) 2003-10-03 2008-08-12 Adobe Systems Incorporated Tone selective adjustment of images
TWI273507B (en) * 2005-03-15 2007-02-11 Sunplus Technology Co Ltd Method and apparatus for image processing
JP4780374B2 (en) * 2005-04-21 2011-09-28 Nkワークス株式会社 Image processing method and program for suppressing granular noise, and granular suppression processing module for implementing the method
US7305127B2 (en) * 2005-11-09 2007-12-04 Aepx Animation, Inc. Detection and manipulation of shadows in an image or series of images
JP5495025B2 (en) * 2009-12-22 2014-05-21 ソニー株式会社 Image processing apparatus and method, and program


Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2947082A1 (en) * 2009-06-22 2010-12-24 St Ericsson France Sas METHOD AND DEVICE FOR PROCESSING A DIGITAL IMAGE TO LIGHTEN THIS IMAGE.
WO2010149575A3 (en) * 2009-06-22 2012-01-19 St-Ericsson (France) Sas Digital image processing method and device for lightening said image
US8824795B2 (en) 2009-06-22 2014-09-02 St-Ericsson (France) Sas Digital image processing method and device for lightening said image
CN102473293A (en) * 2009-08-07 2012-05-23 株式会社理光 Image processing apparatus, image processing method, and computer program
EP2462558A1 (en) * 2009-08-07 2012-06-13 Ricoh Company, Limited Image processing apparatus, image processing method, and computer program
EP2462558A4 (en) * 2009-08-07 2013-02-06 Ricoh Co Ltd Image processing apparatus, image processing method, and computer program
US8750638B2 (en) 2009-08-07 2014-06-10 Ricoh Company, Ltd. Image processing apparatus, image processing method, and computer program
CN102473293B (en) * 2009-08-07 2015-01-07 株式会社理光 Image processing apparatus and image processing method
CN108596120A (en) * 2018-04-28 2018-09-28 北京京东尚科信息技术有限公司 A kind of object detection method and device based on deep learning

Also Published As

Publication number Publication date
US8014034B2 (en) 2011-09-06
US8228560B2 (en) 2012-07-24
US8928947B2 (en) 2015-01-06
US20060232823A1 (en) 2006-10-19
US20130022287A1 (en) 2013-01-24
US20070036456A1 (en) 2007-02-15

Similar Documents

Publication Publication Date Title
US8014034B2 (en) Image contrast enhancement
RU2298226C1 (en) Method for improving digital images
EP1107181B1 (en) Adjusting the contrast of a digital image with an adaptive recursive filter
US7020332B2 (en) Method and apparatus for enhancing a digital image by applying an inverse histogram-based pixel mapping function to pixels of the digital image
EP2076013B1 (en) Method of high dynamic range compression
US4945502A (en) Digital image sharpening method using SVD block transform
Lee et al. A space-variant luminance map based color image enhancement
EP1111907A2 (en) A method for enhancing a digital image with noise-dependant control of texture
Hassan et al. The Retinex based improved underwater image enhancement
JP5392560B2 (en) Image processing apparatus and image processing method
JP2001275015A (en) Circuit and method for image processing
JP2008263475A (en) Image processing device, method, and program
WO2003061266A2 (en) System and method for compressing the dynamic range of an image
US8488899B2 (en) Image processing apparatus, method and recording medium
WO2017190786A1 (en) Optimized content-adaptive inverse tone mapping for low to high dynamic range conversion
Al-Samaraie A new enhancement approach for enhancing image of digital cameras by changing the contrast
Asari et al. Nonlinear enhancement of extremely high contrast images for visibility improvement
Watanabe et al. Improvement of color quality with modified linear multiscale retinex
US7437020B2 (en) Digital image processing device and method
WO2006110127A2 (en) Visibility improvement in color video stream
Strickland et al. Luminance, hue, and saturation processing of digital color images
Liu et al. An adaptive tone mapping algorithm based on gaussian filter
JPH10187964A (en) Method and device for image filtering, and image outline emphasis processor
Bhavani et al. Lime: Low-light image enhancement via illumination map estimation
Tseng et al. Image enhancement based on gamma map processing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase (Ref country code: DE)
WWW Wipo information: withdrawn in national office (Country of ref document: DE)
NENP Non-entry into the national phase (Ref country code: RU)
WWW Wipo information: withdrawn in national office (Country of ref document: RU)
122 Ep: pct application non-entry in european phase (Ref document number: 06741385; Country of ref document: EP; Kind code of ref document: A1)