US7760264B2 - Method of obtaining an image - Google Patents

Method of obtaining an image

Info

Publication number
US7760264B2
Authority
US
United States
Prior art keywords
image
intensity
pixels
offset
values
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US10/038,569
Other versions
US20020186305A1 (en)
Inventor
Philip Atkin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Synoptics Ltd
Original Assignee
Synoptics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Synoptics Ltd filed Critical Synoptics Ltd
Assigned to SYNOPTICS LIMITED (assignment of assignors interest). Assignors: ATKIN, PHILIP
Publication of US20020186305A1
Application granted
Publication of US7760264B2
Status: Expired - Fee Related

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00: Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/40: Picture signal circuits
    • H04N1/407: Control or modification of tonal gradation or of extreme levels, e.g. background level
    • H04N1/4072: Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/40: Image enhancement or restoration by the use of histogram techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00: Image enhancement or restoration
    • G06T5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70: Circuitry for compensating brightness variation in the scene
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00: Indexing scheme for image analysis or image enhancement
    • G06T2207/10: Image acquisition modality
    • G06T2207/10056: Microscopic image

Abstract

A method of creating an image 4 obtained from say a camera 1 to obtain a substantially linear representation of the brightness of the image includes, for each of a set of pixels (x, y) in a two dimensional array, calculating, in a computer 3, an estimate of the true image intensity (îxy) as a weighted average of n samples of the apparent image intensity (vn,xy). This is calculated as:
$$\hat{i}_{xy} = \frac{\sum_n w_{n,xy}\,\dfrac{v_{n,xy}-C}{K T_n}}{\sum_n w_{n,xy}} = \frac{1}{K}\,\frac{\sum_n w_{n,xy}\,\dfrac{v_{n,xy}-C}{T_n}}{\sum_n w_{n,xy}}$$
where vn,xy is the apparent intensity measured, Tn is the exposure time, K is the gain of the system, C is an offset and wn,xy is a weighting factor which is defined to maximize the signal to noise ratio and discard insignificant, that is saturated or near zero, values. Thereafter each of the values îxy is saved together with other data representing the image 4, before the image is output to a display 5 or to a printing device.

Description

BACKGROUND
The present invention relates to a method of obtaining an image and, more particularly, a method of obtaining a substantially linear representation of the brightness of an image having a wide dynamic range.
Digital electronic cameras and other similar imaging devices and systems often use CCD (charge coupled device) sensors, and work by converting incident photons into charge and accumulating the charge in each pixel for the duration of the exposure. Like all other imaging devices, including conventional wet photography and other digital systems, such imaging devices impose limits on the dynamic range of the signal they can capture. In particular, image details in regions which are either too dark or too bright cannot be captured. In such digital devices, the charge in each cell onto which part of an image is imposed is read out and converted into a number representing its intensity or brightness via an analogue to digital converter. Therefore, if the image is too bright, or the exposure time is too long, then saturation occurs and the brightness of the image is clipped to the maximum representable intensity, so no detailed information is available in these areas. This imposes an upper limit on the detected brightness. Conversely, if the image is too dark, any signal is indistinguishable from the quantisation noise in the analogue to digital converter and therefore detail is lost in these dark areas.
An important characteristic of this clipping process is that it is destructive and destroys information which cannot subsequently be recovered from the recorded image. Therefore any attempt to capture the information that would otherwise be missing must be made at the time of capture of the image. The resulting image requires a wider dynamic range than that of the basic sensor and analogue to digital converter. It is possible, then, to correct overall over- or under-exposure by adjusting the time over which the sensor is accumulating charge so that the full dynamic range of the digital imaging system is utilised. However, whilst this uses the available dynamic range to best effect, it cannot assist where there are both very bright and very dark regions in the same field of view, for example in an electrophoresis gel, as both lengthening and reducing the exposure time cannot be carried out simultaneously. In these circumstances it is possible to capture either the bright areas or the dark areas accurately, but not both simultaneously.
Now, it is evident that, under such circumstances, a series of images captured using different exposure times will contain all of the available information. Indeed a technique well known in the art involves simply averaging together a series of images recorded with steadily increasing exposure times. This can be shown to result in an image which is approximately proportional to the logarithm of the intensity. Such a result may be pleasing to the eye in that very bright regions do not suppress the detail of very dark regions, but such images lack the crucial linear quality often required for quantitative analysis.
It is possible to obtain camera systems having an inherent high dynamic range and whose output can be digitised to a 16 bit resolution. Such systems can achieve the dynamic range necessary to image both very dark and very light areas without saturation, but they are both very expensive and generally slow to operate with regard to focussing and adjusting the field of view. The advantages of the present invention are that an inexpensive sensor and digitizer may be used, and the readout can be rapid, facilitating convenient, fluid adjustment of focus.
The ideal digital output v′xy from an analogue to digital converter of a true image of intensity ixy is given by
$$v'_{xy} = K T i_{xy} + C$$
where T is the exposure time, K is the overall gain of the system, and C is an offset. However, due to the saturation at the high and low ends of the range, the actual output vxy is constrained by:
$$v_{xy} = \begin{cases} K T i_{xy} + C & v_{\min} < v'_{xy} < v_{\max} \\ v_{\max} & v'_{xy} \ge v_{\max} \\ v_{\min} & v'_{xy} \le v_{\min} \end{cases}$$
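By way of illustration only, this clipped readout model can be simulated with a short Python sketch; the function name and the gain, offset and limit values used here are illustrative assumptions, not figures from the disclosure.
import numpy as np

def simulate_readout(i_xy, T, K=1.0, C=10.0, v_min=5.0, v_max=255.0):
    # Ideal linear response v'_xy = K*T*i_xy + C, then clipping at both
    # ends of the converter range; detail outside (v_min, v_max) is lost.
    v_ideal = K * T * i_xy + C
    return np.clip(v_ideal, v_min, v_max)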
In order to reduce the effects of noise, an average over a series of images with different exposure times Tn can be taken, giving a superior estimate îxy of the true image intensity ixy:
$$\hat{i}_{xy} = \frac{1}{n}\sum_n \frac{v_{n,xy}-C}{K T_n}$$
Averaging of this kind is well known in the art, but it does not resolve the problem of saturated areas and dark areas. The present invention therefore discards the values for these pixels from the average, so that only pixels having significant values, that is those well away from the dark and light limits, are considered when averaging the images together.
The present invention is aimed at overcoming the shortcomings of the prior art methods.
SUMMARY OF THE INVENTION
According to the present invention there is provided a method of creating an image which includes the steps of:
obtaining a substantially linear representation of the brightness of an image, the method comprising, for each of a set of pixels (x, y) in a two dimensional array, calculating an estimate of the true image intensity (îxy) as a weighted average of n samples of the apparent image intensity (vn,xy) as
$$\hat{i}_{xy} = \frac{\sum_n w_{n,xy}\,\dfrac{v_{n,xy}-C}{K T_n}}{\sum_n w_{n,xy}} = \frac{1}{K}\,\frac{\sum_n w_{n,xy}\,\dfrac{v_{n,xy}-C}{T_n}}{\sum_n w_{n,xy}}$$
where vn,xy is the apparent intensity measured, Tn is the exposure time, K is the gain of the system, C is an offset and wn,xy is a weighting factor which is defined to maximise the signal to noise ratio and discard insignificant, that is saturated or near zero, values;
thereafter saving each of the values îxy together with other data representing the image; and
outputting the image to a display or to a printing device or to a subsequent analysis.
For example, in the simplest case:
$$w_{n,xy} = \begin{cases} 1 & v_{\min} < v_{n,xy} < v_{\max} \\ 0 & v_{n,xy} \ge v_{\max} \\ 0 & v_{n,xy} \le v_{\min} \end{cases}$$
A further example is that of Gaussian noise which is minimised, and hence the signal to noise ratio is maximised, when
$$\frac{w_{n,xy}}{K T_n} = 1$$
and therefore
$$w_{n,xy} = \begin{cases} K T_n & v_{\min} < v_{n,xy} < v_{\max} \\ 0 & v_{n,xy} \ge v_{\max} \\ 0 & v_{n,xy} \le v_{\min} \end{cases}$$
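A minimal Python sketch of this weighted average is given below. It illustrates the formula rather than the patented implementation; the threshold values, the gaussian_weights flag and the handling of pixels with no valid sample are assumptions.
import numpy as np

def merge_exposures(images, exposure_times, K=1.0, C=10.0,
                    v_min=5.0, v_max=250.0, gaussian_weights=True):
    # images: list of 2-D arrays v_n (same shape), one per exposure
    # exposure_times: list of T_n values
    # Weights are 1 (or K*T_n for the Gaussian-noise case) inside
    # (v_min, v_max) and 0 for saturated or near-zero pixels.
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for v, T in zip(images, exposure_times):
        in_range = (v > v_min) & (v < v_max)
        w = np.where(in_range, K * T if gaussian_weights else 1.0, 0.0)
        num += w * (v - C) / (K * T)      # each sample mapped back to scene units
        den += w
    return np.where(den > 0, num / den, 0.0)  # pixels with no valid sample stay 0
With gaussian_weights set, longer exposures contribute more heavily, matching the weighting wn,xy = KTn above.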
The invention provides for an image to be formed over a wider dynamic range than that of the basic sensor and A to D converter, and enables the image to be a linear representation of the brightness of the original scene whilst taking into account all the available data and with an optimal signal to noise ratio. It also allows an image series to be captured automatically to cover a wide range of exposure times, as the number of saturated and zero output pixels is a natural output from each step. The sequence can therefore be chosen to cover the entire dynamic range of the specimen automatically, rather than arriving, by iteration, at the optimal single exposure time for the whole specimen, which is the technique currently employed in the art.
The input image from the camera may be of any precision. Typically it would be 8, 12, or 16 bits/pixel. In order that any of these forms can be processed, the algorithm can be coded so that each is converted to a high precision or floating point format before further processing.
The choice of the exposure times Tn is very important. For practical cameras, the exposure time is subject to limits and not all exposure times are possible within those limits. As part of the implementation of the present invention, it is required to determine those pixels which are saturated. The operation of a system incorporating the invention can be broken down into three phases: adjustment, capture and analysis. By counting those pixels which are saturated and those for which there is a zero output for a particular frame, the implementation can determine whether the frame is generally over or under exposed. During the adjustment phase, the best single exposure time for the specimen is selected. This selection is performed either manually, by viewing the unmodified digital image directly on screen, or automatically from the frequency histogram of the intensity levels of the image.
As the exposure time is varied, the number of pixels at the limits may be monitored. If the exposure time is reduced to a point where no pixels are saturated then there is no more information to be obtained by any further reduction in exposure time. Similarly, if the exposure time is increased to the point where most pixels are saturated there is little point in any further increase. In the presence of noise, it may be worth going a little beyond each of these limits so as to increase the number of valid samples for the averaging.
This approach can be subject to two problems: sometimes the ratio between the successive exposure times is not precisely known and there is often an offset present in the camera electronics or in the analogue to digital converter which results in a further offset to each image.
In order to resolve these difficulties a regression calculation can be performed between successive pairs of images in order to determine the linear function relating them. Given this relationship, each image can be transformed to match the scale and offset of the other. For unsaturated pixels there is a linear relationship between the images recorded at different exposure times, where the gradient is the ratio of the two exposure times and the offset is a constant for all pixels.
For example if one considers:
$$v'_{m,xy} = K T_m i_{xy} + C$$
and
$$v'_{n,xy} = K T_n i_{xy} + C$$
these can be rearranged as:
$$v'_{m,xy} = \frac{T_m}{T_n} v'_{n,xy} + C - C\,\frac{T_m}{T_n}$$
which is of the linear form:
$$v'_{m,xy} = a\,v'_{n,xy} + b$$
So, for unsaturated pixels, there is a linear relationship between the images recorded with different exposure times, where the gradient a is the ratio of the two exposure times and the offset b is a constant for all pixels. The situation is slightly complicated by the presence of noise in both the vm and vn images and the best fit linear relationship in this case is given by a perpendicular regression, which minimises the sum of the squares of the perpendicular distance between a point formed by the coordinates (vn,xy,vm,xy) and the fitted linear relationship.
An example of an implementation of a standard perpendicular regression technique that accounts for the possibility of saturated pixels is the following pseudocode:
s = sL = sn = sm = snn = smm = snm = 0
for all y values of image
{
    for all x values of image
    {
        (vn is the intensity value of image n at x,y and vm is the intensity value of image m at x,y)
        if vn > vmin AND vn < vmax AND vm > vmin AND vm < vmax
        {
            s = s + 1                 (count of pixels in range in both images)
            sn = sn + vn
            snn = snn + (vn * vn)
            sm = sm + vm
            smm = smm + (vm * vm)
            snm = snm + (vn * vm)
        }
        else
        {
            sL = sL + 1               (pixel saturated or near zero in at least one image)
        }
    }
}
sdndndmdm = snn - (sn * sn / s) - smm + (sm * sm / s)
sdndm = snm - (sn * sm / s)
aa = sdndndmdm / sdndm
a = (-aa + SquareRoot(aa * aa + 4)) / 2
b = (sm - a * sn) / s
In this way the linear relationship between one image and another may be determined. This is best done between one image and the next in the series because, in this way, the number of spatial locations (x, y) whose intensity is not saturated in either image is maximised, resulting in the best statistics and the most accurate estimate of a and b.
As a by-product, sL is the number of pixels that are saturated in at least one of the images.
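The same regression can be written compactly with NumPy. The sketch below mirrors the pseudocode above; the function name and the decision to also return the excluded-pixel count are assumptions.
import numpy as np

def perpendicular_regression(vn, vm, v_min, v_max):
    # Fit vm ~= a*vn + b by perpendicular (orthogonal) regression, using
    # only pixels that are in range in both images.
    valid = (vn > v_min) & (vn < v_max) & (vm > v_min) & (vm < v_max)
    x = vn[valid].astype(np.float64)
    y = vm[valid].astype(np.float64)
    s = x.size
    # central second moments (sdndn, sdmdm, sdndm in the pseudocode)
    sdn_dn = np.sum(x * x) - np.sum(x) ** 2 / s
    sdm_dm = np.sum(y * y) - np.sum(y) ** 2 / s
    sdn_dm = np.sum(x * y) - np.sum(x) * np.sum(y) / s
    aa = (sdn_dn - sdm_dm) / sdn_dm
    a = (-aa + np.sqrt(aa * aa + 4.0)) / 2.0   # positive root of the slope quadratic
    b = (np.sum(y) - a * np.sum(x)) / s        # offset from the means
    return a, b, int(vn.size - s)              # last value corresponds to sL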
Having determined a and b, each image can be transformed to match the scale and offset of the first in the series. The expression for the averaging kernel is then:
$$\hat{i}_{xy} = \frac{\sum_n w_{n,xy}\left(\dfrac{v_{n,xy} - \sum_n b_n}{\prod_n a_n}\right)}{\sum_n w_{n,xy}}$$
where an and bn are the gradient a and offset b measured between image n and image n-1 (a1=1; b1=0), where:
$$w_{n,xy} = \begin{cases} \prod_n a_n & v_{\min} < v_{n,xy} < v_{\max} \\ 0 & v_{n,xy} \ge v_{\max} \\ 0 & v_{n,xy} \le v_{\min} \end{cases}$$
vmin and vmax may be conveniently expressed as fractions of the total data range of the camera and therefore these parameters do not need to be altered for different cameras.
When processing an image with either the shortest or the longest exposure time of the series, the processing may be slightly different. For the shortest exposure even pixels above vmax can be included in the calculations because no other, longer exposure, is going to yield a better representation of the brightness of the pixel. Similarly, for the longest exposure, even pixels that are below vmin can be included. If this were not the case then no data on these pixels would be obtained at any time during the process. Such pixels should not, however, be used to calculate the relative gain and offset between the image and its predecessor.
Furthermore, it is possible to use the system to process colour images. The relative gain is the same for each colour band because it represents the ratio of the exposure times and thus is calculated for all colours together. However, the offset is colour dependent and must be calculated by processing each colour band separately.
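A sketch of this split for colour data (a single gain pooled over the bands, one offset per band) might look as follows. The pooling strategy, the per-band offset estimate from the in-range means, and the reliance on the perpendicular_regression sketch above are assumptions, not details from the disclosure.
import numpy as np

def colour_gain_and_offsets(vn, vm, v_min, v_max):
    # vn, vm: arrays of shape (H, W, 3) for two consecutive exposures.
    # gain: pool every band's pixels, since a reflects only the exposure ratio
    a, _, _ = perpendicular_regression(vn.reshape(-1), vm.reshape(-1), v_min, v_max)
    # offset: one value per band, since the offset C is colour dependent
    offsets = []
    for band in range(vn.shape[-1]):
        x, y = vn[..., band].ravel(), vm[..., band].ravel()
        valid = (x > v_min) & (x < v_max) & (y > v_min) & (y < v_max)
        offsets.append(float(np.mean(y[valid]) - a * np.mean(x[valid])))
    return a, offsets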
BRIEF DESCRIPTION OF THE DRAWINGS
One example of a system constructed in accordance with the present invention will now be described with reference to the accompanying drawing, in which:
FIG. 1 is a general view of the apparatus.
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
The system as implemented in the example includes a camera 1 (in the particular example, a JVC KY F50E camera). The camera 1 output signals are digitized by analogue to digital converters which form part of a framestore board 2 (a Synoptics Prysm framestore) fitted within a personal computer 3 (a Gateway 2000 PC with a 500 MHz Pentium II processor and 256 MB RAM running Windows 98) and then the digital image 4 is placed in the computer's memory and may be displayed on a monitor 5.
Thereafter the brightness values îxy for each pixel are calculated in accordance with the expressions above.
Control of the system was achieved using software written in Visual Basic and C++ using Synoptics Image Objects to handle the display and processing functions.
In operation, firstly, during the adjustment phase the best single exposure time for the specimen object is selected. Although the selection may be performed by the user, it is preferable that it is determined automatically from the occurrence frequency histogram of the intensity levels of the image. Thus, if more than a predefined number of pixels exceed a predefined intensity, generally vmax, then the image is considered too bright to form the starting point and the process is repeated with a shorter exposure time. Conversely, if more than a predetermined number of the pixels have an intensity less than vmin then the image is considered too dark and the exposure time is increased. If the process is repeated more than a predefined number of times without an appropriate image being acquired, then the algorithm can be aborted with an error report.
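A possible shape for this adjustment loop in Python is sketched below; acquire() is a hypothetical camera-capture helper, and the pixel-fraction threshold, the factor-of-two step and the retry limit are illustrative assumptions.
def choose_starting_exposure(acquire, t_start, v_min, v_max,
                             max_fraction=0.01, max_tries=16):
    # Adjustment phase: find an exposure giving few saturated or near-zero pixels.
    t = t_start
    for _ in range(max_tries):
        img = acquire(t)                    # assumed to return a 2-D intensity array
        frac_high = (img >= v_max).mean()   # fraction of saturated pixels
        frac_low = (img <= v_min).mean()    # fraction lost in the dark limit
        if frac_high > max_fraction:
            t /= 2.0                        # too bright: shorten the exposure
        elif frac_low > max_fraction:
            t *= 2.0                        # too dark: lengthen the exposure
        else:
            return t
    raise RuntimeError("no suitable starting exposure found")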
Thereafter, during the capture phase a number of steps are undertaken according to the present invention. First, areas of computer memory are allocated for the values needed during the process, corresponding to:
$$\sum_n w_{n,xy}\left(\dfrac{v_{n,xy} - \sum_n b_n}{\prod_n a_n}\right) \quad\text{and}\quad \sum_n w_{n,xy}$$
The data in these memory areas is stored in high precision format in order to avoid data degradation due to rounding or truncation.
The exposure time selected during the adjustment phase is then used as a starting point. An image is acquired using this exposure time. For this image, a is taken as 1 and b as zero, w is 1 for those pixels having an in range value and zero otherwise, and the summations identified above are initialised with the values indicated. The exposure time is then increased by approximately a factor of 2, and another image acquired.
For each new image the following processes are carried out:
    • the image is compared to the previous one acquired to determine the linear relationship, that is the gradient a and offset b, considering only those spatial pixel locations in which the intensities in both images are well away from the limits vmin and vmax;
    • the product of all the a values, and the sum of all the b values, are computed;
    • for each pixel location in which the new image's intensity value is within range, the values in the two summations given above are updated (w is equal to the product of all the a values for those pixels having an in range value, and zero otherwise);
    • if the number of spatial locations for which the new image's intensity value exceeds vmax is greater than a predetermined percentage of all spatial locations in the image, then there is little point in increasing the exposure time further.
This process can then be repeated, starting again from the same initial exposure time but decreasing the exposure time at each iteration, until more than a given percentage of the spatial locations have an intensity less than vmin. Finally, the ratio of the two accumulation values is calculated to form a result image. This image may be subject to a linear rescaling or offset to make it suitable for conversion to a fixed point format image for further processing.
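The capture-phase bookkeeping described above might be sketched as follows, using the perpendicular_regression sketch given earlier and the patent's running product of a values and sum of b values. acquire() is again a hypothetical capture helper, and the stopping fraction and the use of a single increasing pass (the decreasing pass is omitted) are simplifying assumptions.
import numpy as np

def capture_phase(acquire, t_start, v_min, v_max, stop_fraction=0.9):
    prev = acquire(t_start)                               # first image: a=1, b=0
    in_range = (prev > v_min) & (prev < v_max)
    num = np.where(in_range, prev, 0.0).astype(np.float64)
    den = in_range.astype(np.float64)
    a_prod, b_sum, t = 1.0, 0.0, t_start
    while True:
        t *= 2.0                                          # roughly double the exposure
        img = acquire(t)
        a, b, _ = perpendicular_regression(prev, img, v_min, v_max)
        a_prod *= a                                       # cumulative gain back to the first image
        b_sum += b                                        # cumulative offset back to the first image
        in_range = (img > v_min) & (img < v_max)
        w = np.where(in_range, a_prod, 0.0)
        num += w * (img - b_sum) / a_prod
        den += w
        prev = img
        if (img >= v_max).mean() > stop_fraction:
            break                                         # most pixels saturated: stop lengthening
    return np.where(den > 0, num / den, 0.0)              # ratio of the two accumulations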
Although the linear image is very useful for processing and analysis, it can be difficult to view on a conventional display with a much lower dynamic range. Therefore, after the extended dynamic range image has been created, it can be useful to process the data further in order to view it conveniently on a display medium with a more limited dynamic range. Two methods have been employed, the first of which is histogram equalization. This technique can be applied to an entire image or to a smaller neighbourhood within the image. The technique considers the frequency histogram of the intensity levels in the result image and alters the brightness levels in the image so as to achieve a desired distribution of intensity levels. The effect is that a range of intensity levels that contains much information, and therefore many pixels within that intensity range, is spread out to enhance the contrast in that range, whereas a thinly populated range of intensity levels is compressed. This technique can be used to compress an image with high dynamic range into another suitable for direct display, with little loss of information. A variation that is useful when dealing with colour image sources is to arrange for the hue of each pixel to remain constant as its brightness is adjusted. This prevents unnatural colour casts from occurring.
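A global version of this equalisation can be sketched in a few lines of Python; the bin count and the 8-bit output depth are illustrative assumptions, and the per-neighbourhood and constant-hue variants described above are not shown.
import numpy as np

def equalise_for_display(result, n_bins=65536, out_levels=256):
    # Map the high dynamic range result image to display values so that
    # heavily populated intensity ranges receive more output levels.
    hist, edges = np.histogram(result, bins=n_bins)
    cdf = np.cumsum(hist).astype(np.float64)
    cdf /= cdf[-1]                                        # normalise CDF to [0, 1]
    # map each pixel through the CDF of its intensity bin
    bin_idx = np.clip(np.digitize(result, edges[1:-1]), 0, n_bins - 1)
    return (cdf[bin_idx] * (out_levels - 1)).astype(np.uint8)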
An alternative technique for displaying the information on a conventional display with a reduced dynamic range is local contrast enhancement. This technique boosts contrast in areas with little detail, and compresses contrast in areas with much detail. It can also darken generally light areas and vice versa in order to normalise the brightness and contrast of each neighbourhood of pixels. The resultant image has a uniform level of detail throughout the image. When processing colour images it is necessary, as with the histogram equalisation technique, to maintain a constant hue.
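One plausible, simplified reading of such local contrast enhancement is a local mean and standard deviation normalisation, sketched below. The window size, the target statistics and the use of scipy.ndimage.uniform_filter are assumptions rather than details from the disclosure.
import numpy as np
from scipy.ndimage import uniform_filter

def local_contrast_enhance(result, size=31, target_mean=128.0, target_std=40.0):
    # Normalise each neighbourhood to a common mean and standard deviation,
    # so flat areas are boosted and busy areas compressed.
    img = result.astype(np.float64)
    local_mean = uniform_filter(img, size)
    local_sq = uniform_filter(img ** 2, size)
    local_std = np.sqrt(np.maximum(local_sq - local_mean ** 2, 1e-12))
    out = (img - local_mean) / local_std * target_std + target_mean
    return np.clip(out, 0, 255).astype(np.uint8)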

Claims (4)

1. A method of creating an image which includes the steps of:
obtaining a representation of the brightness of an image, said representation being linear over the whole range of brightness, by calculating, for each of a set of pixels (x, y) in a two dimensional array, an estimate of the true image intensity (ixy) as a weighted average of n samples of the apparent image intensity (vn,xy) as
$$\hat{i}_{xy} = \frac{\sum_n w_{n,xy}\left(\dfrac{v_{n,xy} - \sum_n b_n}{\prod_n a_n}\right)}{\sum_n w_{n,xy}}$$
where an and bn are the gradient a and offset b measured between image n and image n−1 (a1=1; b1=0) when
$$w_{n,xy} = \begin{cases} \prod_n a_n & v_{\min} < v_{n,xy} < v_{\max} \\ 0 & v_{n,xy} \ge v_{\max} \\ 0 & v_{n,xy} \le v_{\min} \end{cases}$$
where vn,xy is the apparent intensity measured, n is greater than or equal to 2, and vmin and vmax are defined to maximize the signal to noise ratio and discard insignificant, that is saturated or near zero, values;
thereafter saving each of the values ixy together with other data representing the image; and
outputting the image to a display or to a printing device.
2. A method according to claim 1, wherein the gradients a and the offsets b are obtained by the use of a regression technique whereby each image is transformed to match the scale and offset of the first in the series.
3. A method according to claim 1 or claim 2, wherein the image is a coloured image and the offset is colour dependent.
4. A method according to claim 2, wherein the regression is a perpendicular regression.
US10/038,569 2001-01-03 2002-01-02 Method of obtaining an image Expired - Fee Related US7760264B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
EP01300025.2 2001-01-03
EP01300025A EP1225756A1 (en) 2001-01-03 2001-01-03 Method of obtaining an image
EP01300025 2001-01-03

Publications (2)

Publication Number Publication Date
US20020186305A1 US20020186305A1 (en) 2002-12-12
US7760264B2 (en) 2010-07-20

Family

ID=8181624

Family Applications (1)

Application Number Title Priority Date Filing Date
US10/038,569 Expired - Fee Related US7760264B2 (en) 2001-01-03 2002-01-02 Method of obtaining an image

Country Status (2)

Country Link
US (1) US7760264B2 (en)
EP (1) EP1225756A1 (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7167574B2 (en) * 2002-03-14 2007-01-23 Seiko Epson Corporation Method and apparatus for content-based image copy detection
DE10307744A1 (en) * 2003-02-24 2004-09-02 Carl Zeiss Jena Gmbh Electromagnetic radiation intensity determination method in which measurements of radiation originating from particular locations are normalized based on the length of an intensity recording exposure
US7532804B2 (en) * 2003-06-23 2009-05-12 Seiko Epson Corporation Method and apparatus for video copy detection
US7486827B2 (en) * 2005-01-21 2009-02-03 Seiko Epson Corporation Efficient and robust algorithm for video sequence matching
JP2008028957A (en) * 2006-07-25 2008-02-07 Denso Corp Image processing apparatus for vehicle
GB2443663A (en) 2006-07-31 2008-05-14 Hewlett Packard Development Co Electronic image capture with reduced noise
SE530789C2 (en) * 2007-01-17 2008-09-09 Hemocue Ab Apparatus and method for position determination of objects contained in a sample
WO2008088249A1 (en) * 2007-01-17 2008-07-24 Hemocue Ab Apparatus for determining positions of objects contained in a sample
US20090238435A1 (en) * 2008-03-21 2009-09-24 Applied Imaging Corp. Multi-Exposure Imaging for Automated Fluorescent Microscope Slide Scanning
US8737755B2 (en) 2009-12-22 2014-05-27 Apple Inc. Method for creating high dynamic range image
CN113869291B (en) * 2021-12-02 2022-03-04 杭州魔点科技有限公司 Method, system, device and medium for adjusting human face exposure based on ambient brightness

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4590582A (en) * 1982-10-07 1986-05-20 Tokyo Shibaura Denki Kabushiki Kaisha Image data processing apparatus for performing spatial filtering of image data
EP0387817A2 (en) 1989-03-16 1990-09-19 Konica Corporation Electronic still camera
US5033096A (en) * 1987-04-22 1991-07-16 John Lysaght (Australia) Limited Non-contact determination of the position of a rectilinear feature of an article
US5309243A (en) 1992-06-10 1994-05-03 Eastman Kodak Company Method and apparatus for extending the dynamic range of an electronic imaging system
US5801773A (en) * 1993-10-29 1998-09-01 Canon Kabushiki Kaisha Image data processing apparatus for processing combined image signals in order to extend dynamic range
US5828793A (en) * 1996-05-06 1998-10-27 Massachusetts Institute Of Technology Method and apparatus for producing digital images having extended dynamic ranges
WO2000078038A1 (en) 1999-06-16 2000-12-21 Microsoft Corporation A system and process for improving the uniformity of the exposure and tone of a digital image
US7209166B2 (en) * 2000-10-26 2007-04-24 Micron Technology, Inc. Wide dynamic range operation for CMOS sensor with freeze-frame shutter

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6038038A (en) 1994-08-24 2000-03-14 Xerox Corporation Method for determining offset and gain correction for a light sensitive sensor

Also Published As

Publication number Publication date
US20020186305A1 (en) 2002-12-12
EP1225756A1 (en) 2002-07-24

Legal Events

Date Code Title Description
AS Assignment

Owner name: SYNOPTICS LIMITED, UNITED KINGDOM

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:ATKIN, PHILIP;REEL/FRAME:012446/0787

Effective date: 20011221

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20140720