USH1914H - Method and system for mitigation of image distortion due to optical turbulence - Google Patents

Method and system for mitigation of image distortion due to optical turbulence

Info

Publication number
USH1914H
USH1914H (application US08/942,186)
Authority
US
United States
Prior art keywords
hyperstereo
image data
optical turbulence
stereo
time
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US08/942,186
Inventor
Wendell Watkins
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
US Department of Army
Original Assignee
US Department of Army
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by US Department of Army filed Critical US Department of Army
Priority to US08/942,186 priority Critical patent/USH1914H/en
Application granted granted Critical
Publication of USH1914H publication Critical patent/USH1914H/en
Abandoned legal-status Critical Current

Classifications

    • G06T5/73
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • G06T2207/10021Stereoscopic video; Stereoscopic image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20076Probabilistic image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30232Surveillance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0081Depth or disparity estimation from stereoscopic image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0092Image segmentation from stereoscopic image signals


Abstract

A passive method and system for mitigating image distortion due to optical turbulence in a surveillance system is disclosed. An array of cameras forming a stereo camera array is employed to view a distant object or objects and background to obtain multi-hyperstereo image data, and the latter data are processed in accordance with statistical comparison and integration techniques to mitigate the effects of optical turbulence in near real-time. Further features of the invention involve comparison of multi-hyperstereo images with uncorrelated optical turbulence distortions, the reconstruction of the distant object and background using segmenting and time-integration techniques, and the execution of time-averaging and correlation algorithms.

Description

"This application is a continuation, of application Ser. No. 08/687,069, filed Jul. 8, 1996 now abandoned."
GOVERNMENTAL INTEREST
The invention described herein may be manufactured, used and licensed by or for the United States Government without payment to me of any royalty thereon.
BACKGROUND OF THE INVENTION
1. Cross-reference to Related Applications
The subject matter of this application is related to that disclosed in copending application Ser. No. 08/633,712, filed on Apr. 17, 1996, now U.S. Pat. No. 5,756,990.
2. Field of the Invention
The present invention generally relates to a multi-hyperstereo method and system for the mitigation of image distortion from optical turbulence.
3. Description of the Prior Art
In the areas of horizontal, passive surveillance and target acquisition/identification, optical turbulence distortions can severely affect visible imagery and can significantly affect thermal imagery, especially when levels of identification are sought using modern improved resolution systems. If the current trend toward higher resolution for longer range detection continues, the impact of optical turbulence will increase as well. The adaptive optics approach used in astronomy does not have a counterpart for horizontal paths (that is, horizontal surveillance and target acquisition/identification) because there are no stars or guide stars to be used to drive a corrective mirror. The use of frame subtraction techniques does not work because of the random distortions of scene features occurring in individual images, as well as the long time intervals for obtaining average image blur.
One approach that has been used for the mitigation of aerosol-induced image blur involves a long-term (tens of seconds) time-average measurement of the aerosol modulation transfer function (MTF) that can be applied in near real-time to subsequent images because of the uniform nature of the scattering blur. See D. Sadot, et al., "Restoration of Thermal Images Distorted by the Atmosphere, Based on Measured and Theoretical Atmospheric Modulation Transfer Function," OPT.ENG. 33(1), pp. 44-53 (January 1994). However, the random nature of the optical turbulence distortions does not lend itself to the application of long-term, time-averaged MTF corrections of individual frames.
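By way of illustration only, a long-term MTF measurement of this kind can be applied to a frame as a regularised inverse filter. The sketch below (Python/NumPy) is a minimal rendering of that prior-art idea, not part of the present invention; the function name, the assumption that the MTF is sampled on the FFT frequency grid, and the clamp value are all illustrative.

```python
import numpy as np

def mtf_correct(image, mtf, floor=1e-3):
    """Regularised inverse filter using a measured long-term MTF.

    Divides the image spectrum by the MTF, clamped from below so that
    spatial frequencies the blur has destroyed are not amplified into
    noise. `mtf` is assumed real, non-negative, and sampled on the same
    frequency grid as np.fft.fft2(image).
    """
    spec = np.fft.fft2(image)
    restored = spec / np.maximum(mtf, floor)
    return np.real(np.fft.ifft2(restored))
```

Such an inversion works for aerosol blur precisely because the scattering blur is uniform and slowly varying; as noted above, the random frame-to-frame turbulence distortions offer no comparably stable MTF to invert.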
Experimental results showed that the use of hyperstereo imaging produced an appreciable reduction of the optical turbulence distortions on objects viewed at 1-km range using 10X visible stereo cameras with a 10-m platform separation. If a linear (or possibly area) array of cameras were used to view distant terrain, statistical comparison and integration of the multi-hyperstereo imagery could be used to mitigate the effects of optical turbulence in near real time. For example, if 1-m spacing between the individual cameras were used, the imagery would have comparable optical distortion statistics along the different camera lines of sight, but those distortions would be uncorrelated.
The level of effort required to fully field the hardware and software necessary for such a technique is substantial due to the complexity of replicating, with appropriate algorithms, the manner in which human processing of live stereo video reduces the effects of optical turbulence distortions. The means of statistically averaging and correlating multiple line-of-sight imagery is even more complicated. However, the problem does not appear to be totally intractable because stereo vision is already being used in robotics for depth perception, although the ranges that are currently being used are only tens of meters, rather than hundreds of meters. See Takeo Kanade, "Development of a Video-Rate Stereo Machine," Proceedings of 94 ARPA Image Understanding Workshop, Nov. 14-16, 1994, Monterey, Calif.
Accordingly, it is clear that both surveillance and target acquisition/identification would benefit significantly from a reduction of the effects of optical turbulence distortion, in terms of increased range and better target identification. This is especially true for future aided target recognition systems. A passive technique would also make the user of such surveillance and target acquisition/identification systems less detectable, as compared to the "active system" approaches.
SUMMARY OF THE INVENTION
The present invention generally relates to a passive method and system for multi-hyperstereo mitigation of image distortion due to optical turbulence in a surveillance setting.
More particularly, the invention relates to a method of mitigating image distortion due to optical turbulence in a surveillance system, the method comprising the provision of an array of cameras forming a stereo camera array, the employment of the stereo camera array to view a distant object or objects and background to obtain multi-hyperstereo image data, and the processing of the multi-hyperstereo image data in accordance with statistical comparison and integration techniques to mitigate the effects of optical turbulence in near real-time.
The invention also relates to a system for the mitigation of image distortion due to optical turbulence in a surveillance system, the inventive system comprising an array of cameras forming a stereo camera array, means for controlling the stereo camera array to view a distant object or objects and background to obtain multi-hyperstereo image data, and a processor for processing the multi-hyperstereo image data in accordance with statistical comparison and integration techniques to mitigate the effects of optical turbulence in near real-time.
Preferred embodiments of the invention process the hyperstereo image data with uncorrelated optical turbulence distortions, and also carry out reconstruction of the distant object or objects and the background. Moreover, the reconstruction techniques employed preferably comprise segmenting and time-integrating object edges and textures with correlations from subsequent multi-hyperstereo image data to reconstruct a stereo image of the distant object or objects and background, with substantial mitigation of distortions due to optical turbulence. Time-averaging and correlation algorithms are, preferably, employed.
Therefore, it is a primary object of the present invention to provide a method and system for mitigating image distortion due to optical turbulence in a surveillance system.
It is an additional object of the present invention to provide a method and system which employs stereo cameras in an array to view a distant object or objects and background to obtain multi-hyperstereo image data.
It is an additional object of the present invention to provide a method and system wherein multi-hyperstereo image data is processed in accordance with statistical comparison and integration techniques to mitigate the effects of optical turbulence in near real-time.
It is an additional object of the present invention to provide a method and system wherein multi-hyperstereo image data are compared with uncorrelated optical turbulence distortions.
It is an additional object of the present invention to provide a method and system of mitigating image distortion due to optical turbulence, wherein the distant object or objects and background are reconstructed.
It is an additional object of the present invention to provide a method and system of mitigating image distortion due to optical turbulence, wherein reconstruction of the distant object or objects and background is carried out by segmenting and time-integrating object edges and textures with correlations from subsequent multi-hyperstereo image data.
It is an additional object of the present invention to provide a method and system for mitigating image distortion due to optical turbulence involving the execution of time-averaging and correlation algorithms.
The above and other objects, and the nature of the invention, will be more clearly understood by reference to the following detailed description, the appended claims, and the accompanying drawings.
BRIEF DESCRIPTION OF DRAWINGS
FIG. 1 is a diagrammatic representation of a conventional single line-of-sight arrangement for surveillance and/or identification of distant objects.
FIG. 2 is a diagrammatic representation of a multi-hyperstereo imaging arrangement in accordance with the present invention.
FIG. 3 is a diagrammatic representation of a typical wide baseline, multi-hyperstereo imaging arrangement in accordance with the present invention.
FIG. 4 is a flowchart of the operations performed by the image fusion and analysis processor of the multi-hyperstereo imaging system of the present invention.
DETAILED DESCRIPTION
The invention will now be described in more detail with reference to the figures of the drawings.
FIG. 1 is a diagrammatic representation of a conventional single line-of-sight arrangement for surveillance and/or identification of distant objects.
As seen therein, a viewer 10 is positioned so as to have a field of view, defined by left limit 12 and right limit 14. Within the field of view 12, 14 of the viewer 10, various objects are located. Specifically, using a military situation or battlefield situation as an example, within the field of view 12, 14 of the viewer 10, there may be a tank 16, rock 18, bush 20, and flowers 22, all found on an overall plot of ground 24.
Without distortion due to optical turbulence, the viewer 10 has an undistorted view of the aforementioned objects. The table 26 shown in FIG. 1 represents a set of picture elements or pixels, arranged in a 4×4 array, without any distortion due to optical turbulence. Thus, the ground 24 is clearly seen in the first pixel in the upper left-hand corner of table 26, bush 20 is clearly seen in the second, third and fourth pixels of the first column of table 26, ground 24 is clearly seen in the first and second pixels in the second column of table 26, rock 18 is clearly seen in the third and fourth pixels in the second column of table 26, and so forth for flowers 22, tank 16 and ground 24 in the remaining two columns of table 26.
When optical turbulence 28 is introduced at a point between the viewer 10 and the aforementioned objects 16, 18, 20, 22 and 24, distortion of the image results. The distorted image is represented by table 30 in FIG. 1, in which many (but usually not all) of the pixels are distorted. Thus, the first pixel in the first column of table 30 provides a view of both the bush 20 and the ground 24 due to distortion, the third and fourth pixels in the first column of table 30 provide a distorted view of the bush 20 and rock 18, and so forth for the remaining three columns of table 30.
FIG. 2 is a diagrammatic representation of a multi-hyperstereo imaging arrangement in accordance with the present invention. Since portions of FIG. 2 are common to FIG. 1, common elements have been identified by identical reference numerals in FIGS. 1 and 2.
As seen in FIG. 2, imagers 32 and 34 are separated by a certain distance (preferably, up to fifty meters) and are equipped with high-powered telescopes for viewing at typical ranges (for example, battlefield ranges of 0.5-4 kilometers). The field of view of imager 32 is defined by left and right limit lines 36 and 38, respectively, while the field of view of imager 34 is defined by left and right limit lines 40 and 42, respectively. The objects being viewed are identical to those being viewed in the monocular arrangement of FIG. 1, and are identified by identical reference numerals 16, 18, 20, 22 and 24.
Without any distortion due to optical turbulence, imagers 32 and 34 have undistorted views of the latter objects, as indicated by table 44 (for imager 32) and table 46 (for imager 34). However, when optical turbulence 28 is introduced at a position between imagers 32 and 34, on the one hand, and the objects 16, 18, 20, 22 and 24, on the other hand, the pixel array for imager 32, as represented by table 48, is distorted, and the pixel array for imager 34, as represented by table 50, is also distorted.
In accordance with the present invention, correlation of edges and textures of pairs of stereo images can be utilized to reduce or eliminate distortion, thereby isolating an object or objects from its or their background. That is to say, by employing a multi-hyperstereo imaging arrangement in accordance with the present invention, a substantial reduction in the distorting effects of optical turbulence 28 over that observed through a single, monocular channel or field of view (as seen in FIG. 1) can be achieved.
FIG. 3 is a diagrammatic representation of a typical wide baseline, multi-hyperstereo imaging arrangement in accordance with the present invention. As seen therein, the multi-hyperstereo imaging system 50 is positioned in opposition to a collection of targets 70 arranged in a cluttered background, and optical turbulence 60 is introduced between the system 50 and the targets 70.
The multi-hyperstereo imaging system 50 comprises an array 52 of remotely controlled, passive image sensors, an image fusion and analysis processor 54, a display unit 56, and a stereo vision headset 58 worn by a user.
Preferably, the image sensor array 52 comprises a plurality (preferably, up to eight) of image sensors consisting of conventional camera platforms and controllers. Image fusion and analysis processor 54 is any conventional computer or microprocessor which is appropriately programmed in accordance with the flow chart of operations disclosed herein and discussed below. Similarly, display unit 56 is any conventional display unit for receiving and displaying any information provided by processor 54. Finally, stereo vision headset 58 is a conventional stereo vision headset. For example, stereo vision headset 58 can be implemented by a Binocular/Stereoscopic Development Kit, Model DK210, manufactured by Virtual Vision, Inc. of Redmond, Wash.
In operation, objects within the field of vision (FOV) of the image sensor array 52 are detected, resulting in the generation of image data by the image sensor array 52. The image data are provided to the image fusion and analysis processor 54 which, in accordance with the flowchart of operations discussed below, processes the image data, and provides resultant display data to display unit 56 and stereo vision headset 58 worn by a user of the system.
FIG. 4 is a flowchart of the operations performed by the image fusion and analysis processor of FIG. 3. As seen therein, distorted imagery data are derived by the plurality of sensors in the image sensor array 52 (FIG. 3), derivation of such distorted imagery data being indicated in blocks 61, 62, . . . , 6N of FIG. 4.
The distorted imagery data are then subjected to a running short-time averaging process involving, in the preferred embodiment, 10-30 frames (see block 70). That is to say, the imagery data from each of the plurality of sensors in image sensor array 52 are averaged to obtain a blurred image of the scene. This can be done on a "running update" basis for, typically, 10-30 frame averages.
For example, thirty frames can be averaged, and then, when the next frame is received, the first frame of the original thirty frames is dropped, and the most recently received frame is added to fill out the thirty frames, which are then again averaged, and so forth. Alternatively, a given number of frames (e.g., thirty frames) are averaged, and then the process is paused so as to wait for the next set of frames, and those frames are then averaged. Once the averaging process is performed, the images can be compared, in the next step of the process, to obtain range mapping of prominent features and textures in the scene, as will now be discussed.
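The patent does not specify an implementation, but the running-update variant can be sketched in a few lines of Python/NumPy; the sliding window below drops the oldest frame as each new frame arrives, exactly as described above for the thirty-frame case (the function and parameter names are illustrative, not the patent's):

```python
import numpy as np
from collections import deque

def running_average(frames, window=30):
    """Running mean of the most recent `window` frames (block 70).

    Once the buffer is full, each incoming frame displaces the oldest,
    so the blurred scene estimate is refreshed on a "running update"
    basis. `frames` is any iterable of same-sized 2-D numpy arrays.
    """
    buf = deque(maxlen=window)
    for frame in frames:
        buf.append(frame.astype(np.float64))
        yield sum(buf) / len(buf)
```

Each yielded array is the blurred scene estimate handed to the coarse fusion step discussed next.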
The results of the short-time averaging process are then subjected to a coarse fusion process determined by the solution of the correspondence between all imagery stereo pairs (see block 71). More specifically, several possible algorithms can be used to determine the solutions of the correspondence between all of the image pairs in the multi-hyperstereo imagery process. This amounts to a determination of the offset of individual pixels or pixel areas between image pairs.
One possible method for performing a coarse fusion determination is by comparison of the sum of the squares of the differences between the intensity distribution of a small area in one image with its best fit in a second image. This process can be performed for individual pixels or for small averaged areas of pixels. The combination of the solutions of all stereo pairs in the array will provide a coarse fusion (that is, a set of blurred images) for all prominent features and textures in the scene. In other words, as a result of the coarse fusion process, certain prominent features can be distinguished from other prominent features (e.g., a tree can be distinguished from a tank or a rock).
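A minimal sketch of that sum-of-squared-differences search, under assumed patch and search-window sizes (the patent fixes neither), might look as follows; applied to the short-time-averaged images, the returned offset is the coarse fusion solution for one pixel area of one stereo pair:

```python
import numpy as np

def ssd_offset(left, right, y, x, patch=8, search=16):
    """Find where the patch of `left` at (y, x) best fits in `right` by
    minimising the sum of squared intensity differences (SSD).

    Returns the (dy, dx) offset of the best fit; patch and search sizes
    are illustrative assumptions.
    """
    ref = left[y:y + patch, x:x + patch].astype(np.float64)
    best_ssd, best_off = np.inf, (0, 0)
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            yy, xx = y + dy, x + dx
            if (yy < 0 or xx < 0 or
                    yy + patch > right.shape[0] or xx + patch > right.shape[1]):
                continue  # candidate window falls outside the frame
            cand = right[yy:yy + patch, xx:xx + patch].astype(np.float64)
            ssd = float(np.sum((ref - cand) ** 2))
            if ssd < best_ssd:
                best_ssd, best_off = ssd, (dy, dx)
    return best_off
```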
Then, individual pixels/areas of the FOV are compared for correspondence solution of each stereo pair (see block 72). That is to say, the correspondence can be applied to incoming or previously collected sets of array images collected at the same time. The comparison of individual pixels or pixel areas is performed on all pairs of stereo images for the array using the coarse fusion technique, as previously described, and as a result the search areas of consideration for comparison of individual pixels or pixel areas can be narrowed. The unaveraged images will have sharper feature and texture content, although they will be more distorted than the short-time averaged images, and will perhaps require a different matching algorithm relative to the algorithm used in the coarse fusion process (the "sum of squared differences" algorithm previously mentioned). A determination of non-distortion is made by comparing pixel areas between stereo image pairs based on the coarse fusion solution and overlap with previously retained undistorted images for the cases where occlusions of pixels or pixel areas occur. This process could, for example, employ a neural net approach to reduce the number of incorrectly retained pixels or pixel areas. The neural net approach previously referred to is a conventional algorithm which can be used to fill in blanks in intermittent data. More specifically, the neural net approach is a conventional statistical technique for determining correctness or adding weight to the correct choice of undistorted pixel areas.
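The agreement test itself is not spelled out beyond the discussion above, so the sketch below uses a normalised-SSD tolerance as a stand-in for whatever matching algorithm (or neural-net weighting) is actually employed; the array layout follows the coarse-fusion sketch above, and all names and thresholds are illustrative:

```python
import numpy as np

def undistorted_mask(inst_left, inst_right, offsets, patch=8, tol=0.05):
    """Flag pixel areas whose instantaneous (unaveraged) stereo content
    still agrees at the coarse-fusion offset.

    Because the turbulence along the two lines of sight is uncorrelated,
    agreement suggests neither view is badly distorted there.
    `offsets[i, j]` is the coarse (dy, dx) solution for the patch whose
    top-left corner is at (i*patch, j*patch).
    """
    rows, cols = offsets.shape[:2]
    mask = np.zeros((rows, cols), dtype=bool)
    for i in range(rows):
        for j in range(cols):
            y, x = i * patch, j * patch
            dy, dx = offsets[i, j]
            if y + dy < 0 or x + dx < 0:
                continue  # offset points outside the frame; treat as distorted
            a = inst_left[y:y + patch, x:x + patch].astype(np.float64)
            b = inst_right[y + dy:y + dy + patch,
                           x + dx:x + dx + patch].astype(np.float64)
            if a.shape != b.shape:
                continue  # offset runs off the frame; treat as distorted
            mask[i, j] = np.mean((a - b) ** 2) <= tol * (np.mean(a ** 2) + 1e-9)
    return mask
```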
Undistorted portions of the incoming imagery data are retained and merged (block 73), and undistorted images are produced for each image and the best stereo pair displayed based on the comparison of the correspondence of all stereo pairs (see block 74). That is to say, once undistorted images have been obtained for each of the images of the array, those images can be used to produce a stereo pair for display using the best angular separation for showing depth at a specified range and depth of field. In addition, the images in the array can be used to fill in portions of the images where occlusions occur when comparing individual lines of sight due to close-in objects, like branches, that block portions of scene features within a range band in an uncorrelated fashion.
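Retention and merging then reduce to copying the flagged pixel areas into a running reconstruction, as in this minimal sketch (same assumed array layout as in the sketches above):

```python
import numpy as np

def merge_undistorted(recon, frame, mask, patch=8):
    """Copy pixel areas flagged undistorted into a running reconstruction
    (blocks 73-74). Areas retained from earlier frames stay in place, so
    the reconstructed image fills in over time as the undistorted
    portions move about between frames.
    """
    for i in range(mask.shape[0]):
        for j in range(mask.shape[1]):
            if mask[i, j]:
                y, x = i * patch, j * patch
                recon[y:y + patch, x:x + patch] = frame[y:y + patch, x:x + patch]
    return recon
```

Running this per camera yields one reconstructed image per line of sight, from which the pair with the best angular separation can be chosen for stereo display.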
Referring now to FIG. 2, it will be recalled that tables 44 and 46 represent data detected in undistorted FOV's, each comprising a 4×4 pixel area region, for left and right imagers, respectively. Similarly, tables 48 and 50 represent image data derived for distorted FOV's, again comprising a 4×4 pixel area region, for left and right imagers, respectively.
Referring to tables 44 and 46, if the imagery FOV's have no distortion, then the fusion process performed by image fusion and analysis processor 54 of FIG. 3 provides exact solutions to the correspondence for the imagery resolution and depth perception. Thus, in tables 44 and 46 of FIG. 2, a 4×4 pixel area region is given for left and right FOV's, wherein some of the pixel areas contain the same terrain feature and some do not. The aforementioned short-time image averaging process (referred to above with respect to block 70 of FIG. 4) can be used to obtain fusion of the overall scene content. This permits determination of a match between the content of the individual pixel areas and what that content should be.
For example, the upper left pixel area in each of tables 44 and 46 (left and right vision) contains the same indication--ground (G). However, the corresponding distorted pixel areas in tables 48 and 50 contain a mixture of ground (G) and flowers (F) in table 48 and ground (G) and bush (B) in table 50. In the latter case, there is no fusion match, and the distorted pixel areas are discarded in accordance with the present invention.
The next pixel area to the right in the top row of each of tables 48 and 50 has a mismatch for the left image sensor (G/R in table 48, second column, first row) and a match for the right image sensor (in table 50, second column, first row). Nevertheless, the data for both pixel areas are discarded in accordance with the present invention.
Further referring to tables 48 and 50, the rightmost pixel area in the second row (fourth column, second row) in both tables 48 and 50 indicates a tank (T). It should be noted that a tank (T) is also indicated in the corresponding pixels (fourth column, second row) of tables 44 and 46 representing undistorted image data. Thus, in this case, both pixel areas match what they are supposed to be based on the time-averaged fusion correspondence, and they are retained.
Next, the second pixel area from the left in the third row (second column, third row) of each table represents a pixel area that gives depth perception to the tank because, at the edge of the tank, there is different terrain seen from the two FOV's along the imager lines of sight. In this case, the rock (R) (table 48, second column, third row) matches what it is supposed to be (as indicated by the second column, third row of table 44), and the bush (B) appearing in table 50 (at second column, third row) matches what it is supposed to be (as indicated in table 46, second column, third row). Thus, both pixel areas, although different from each other, are retained.
Referring to the last row of each table, the leftmost pixel area (first column, fourth row) indicates a bush in table 48 and in table 50, and this matches the corresponding undistorted indications in tables 44 and 46, respectively. Similarly, the rightmost pixel area (fourth column, fourth row) of each of tables 48 and 50 indicates a tree (T), and this matches the undistorted indications in tables 44 and 46, respectively. Thus, both sets of image data are retained.
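The bookkeeping in this walk-through reduces to a simple rule: retain a pixel area only when each distorted view matches its own time-averaged reference (the two views need not match each other, as the rock/bush depth-perception case shows). A toy version with letter-coded labels, using only cells spelled out above, is:

```python
def fusion_filter(ref_left, ref_right, obs_left, obs_right):
    """Retention rule of FIG. 2: a pixel area survives only when BOTH
    distorted observations match their own time-averaged references; a
    mismatch in either view discards the pair."""
    return [[ref_left[i][j] == obs_left[i][j] and
             ref_right[i][j] == obs_right[i][j]
             for j in range(len(ref_left[0]))]
            for i in range(len(ref_left))]

# Two of the areas walked through above: the upper-left ground area,
# distorted differently in each view (discarded), and the tank area,
# undistorted in both views (retained).
print(fusion_filter([["G", "T"]], [["G", "T"]],
                    [["G/F", "T"]], [["G/B", "T"]]))
# -> [[False, True]]
```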
The latter process isolates the undistorted portions of images in the distorted imagery, and begins an undistorted imagery construction process in time, as the undistorted portions of the imagery move about in the individual hyperstereo image pairs.
While preferred forms and arrangements have been shown in illustrating the invention, it is to be understood that various changes and modifications may be made without departing from the spirit and scope of this disclosure.

Claims (20)

What is claimed is:
1. A passive method of mitigating image distortion due to moderate to strong optical turbulence in a surveillance system, comprising:
(a) providing an array of cameras each having a narrow field of view forming a stereo camera array, said cameras separated by at least one meter and having incorporated high power magnifying optics therein;
(b) employing the stereo camera array to view a distant object and background to obtain multiple uncoupled, optical turbulence distorted, multi-hyperstereo image data; and
(c) processing in real time said multiple uncoupled, optical turbulence distorted, multi-hyperstereo image data in accordance with statistical comparison and integration techniques to mitigate the effects of optical turbulence in near real-time.
2. The method of claim 1, wherein step (c) comprises comparing multi-hyperstereo images with uncorrelated optical turbulence distortions.
3. The method of claim 1, further comprising step (d) of reconstructing the distant object and background.
4. The method of claim 3, wherein step (d) comprises segmenting and time-integrating object edges and textures with correlations from subsequent multi-hyperstereo image data to reconstruct a stereo image of the distant object and background with substantial mitigation of distortions due to optical turbulence.
5. The method of claim 4, wherein step (d) comprises executing time-averaging and correlation algorithms.
6. The method of claim 1, wherein step (c) comprises short-time averaging a plurality of frames of imagery data.
7. The method of claim 1, wherein step (c) comprises carrying out a coarse fusion process determined by a solution of correspondence between all imagery stereo pairs.
8. The method of claim 1, wherein step (c) comprises comparing one of individual pixels and individual pixel areas for correspondence solution of a given stereo pair.
9. The method of claim 1, wherein step (c) comprises retaining and merging undistorted portions of image data derived during step (b).
10. The method of claim 1, wherein step (c) comprises producing undistorted images for each of a plurality of images and best stereo pair display based on a comparison of correspondence of all stereo pairs in the multi-hyperstereo image data.
11. A passive system for mitigating image distortion due to moderate to strong optical turbulence in a surveillance system, comprising:
an array of cameras each having a narrow field of view forming a stereo camera array, said cameras separated by at least one meter and having incorporated high power magnifying optics therein;
control means for controlling the stereo camera array to view a distant object and background to obtain multiple uncoupled, optical turbulence distorted, multi-hyperstereo image data; and
real time processing means for processing said multiple uncoupled, optical turbulence distorted, multi-hyperstereo image data in accordance with statistical comparison and integration techniques to mitigate the effects of optical turbulence in near real-time.
12. The system of claim 11, wherein said processing means compares multi-hyperstereo images with uncorrelated optical turbulence distortions.
13. The system of claim 11, further comprising reconstruction means for reconstructing the distant object and background.
14. The system of claim 13, wherein said reconstruction means segments and time-integrates object edges and textures with correlations from subsequent multi-hyperstereo image data to reconstruct a stereo image of the distant object and background with substantial mitigation of distortions due to optical turbulence.
15. The system of claim 14, wherein said reconstruction means executes time-averaging and correlation algorithms.
16. The system of claim 11, wherein said processing means short-time averages a plurality of frames of image data.
17. The system of claim 11, wherein said processing means executes a coarse fusion process determined by a solution of correspondence between all imagery stereo pairs.
18. The system of claim 11, wherein said processing means compares one of individual pixels and individual pixel areas for correspondence solution of a given stereo pair.
19. The system of claim 11, wherein said processing means retains and merges undistorted portions of image data derived by said control means.
20. The system of claim 11, wherein said processing means produces undistorted images for each of a plurality of images and best stereo pair display based on a comparison of correspondence of all stereo pairs in the multi-hyperstereo image data.
US08/942,186 1996-07-08 1997-10-01 Method and system for mitigation of image distortion due to optical turbulence Abandoned USH1914H (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US08/942,186 USH1914H (en) 1996-07-08 1997-10-01 Method and system for mitigation of image distortion due to optical turbulence

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US68706996A 1996-07-08 1996-07-08
US08/942,186 USH1914H (en) 1996-07-08 1997-10-01 Method and system for mitigation of image distortion due to optical turbulence

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US68706996A Continuation 1996-07-08 1996-07-08

Publications (1)

Publication Number Publication Date
USH1914H true USH1914H (en) 2000-11-07

Family

ID=24758921

Family Applications (1)

Application Number Title Priority Date Filing Date
US08/942,186 Abandoned USH1914H (en) 1996-07-08 1997-10-01 Method and system for mitigation of image distortion due to optical turbulence

Country Status (1)

Country Link
US (1) USH1914H (en)

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7738008B1 (en) 2005-11-07 2010-06-15 Infrared Systems International, Inc. Infrared security system and method
US20130070103A1 (en) * 2011-09-19 2013-03-21 Michael Mojaver Super resolution binary imaging and tracking system
US20130300856A1 (en) * 2011-01-28 2013-11-14 Electricite De France Processing of image data comprising effects of turbulence in a liquid medium
US20160171657A1 (en) * 2013-07-31 2016-06-16 Mbda Uk Limited Image processing
US10043242B2 (en) 2013-07-31 2018-08-07 Mbda Uk Limited Method and apparatus for synthesis of higher resolution images
US10109034B2 (en) 2013-07-31 2018-10-23 Mbda Uk Limited Method and apparatus for tracking an object
US10404910B2 (en) 2011-09-19 2019-09-03 Epilog Imaging Systems Super resolution imaging and tracking system
US10924668B2 (en) 2011-09-19 2021-02-16 Epilog Imaging Systems Method and apparatus for obtaining enhanced resolution images
CN112907704A (en) * 2021-02-04 2021-06-04 浙江大华技术股份有限公司 Image fusion method, computer equipment and device
CN112907704B (en) * 2021-02-04 2024-04-12 浙江大华技术股份有限公司 Image fusion method, computer equipment and device

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5093563A (en) * 1987-02-05 1992-03-03 Hughes Aircraft Company Electronically phased detector arrays for optical imaging
US5469250A (en) * 1993-05-17 1995-11-21 Rockwell International Corporation Passive optical wind profilometer

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5093563A (en) * 1987-02-05 1992-03-03 Hughes Aircraft Company Electronically phased detector arrays for optical imaging
US5469250A (en) * 1993-05-17 1995-11-21 Rockwell International Corporation Passive optical wind profilometer

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
Bhavnagri, "Models for Recognition and Correspondence Matching of Objects," pp. 563-543, 1996. *
H. van der Elst et al., "Modelling and Restoring Images Distorted by Atmospheric Turbulence," COMSIG, pp. 162-167, Feb. 1994. *
Kanade, "Development of a Video-Rate Stereo Machine," Nov. 14, 1994. *
Sadot et al., "Restoration of Thermal Images Distorted by the Atmosphere, Based on Measured and Theoretical Atmospheric Modulation Transfer Function," Jan. 1994. *
Salerno, Neural Nets WIRN VIETRI-95, World Scientific, pp. 215-225, May 18, 1995. *

Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7738008B1 (en) 2005-11-07 2010-06-15 Infrared Systems International, Inc. Infrared security system and method
US9749592B2 (en) * 2011-01-28 2017-08-29 Electricite De France Processing of image data comprising effects of turbulence in a liquid medium
US20130300856A1 (en) * 2011-01-28 2013-11-14 Electricite De France Processing of image data comprising effects of turbulence in a liquid medium
US10348963B2 (en) 2011-09-19 2019-07-09 Epilog Imaging System, Inc. Super resolution binary imaging and tracking system
US9137433B2 (en) * 2011-09-19 2015-09-15 Michael Mojaver Super resolution binary imaging and tracking system
US20130070103A1 (en) * 2011-09-19 2013-03-21 Michael Mojaver Super resolution binary imaging and tracking system
US10404910B2 (en) 2011-09-19 2019-09-03 Epilog Imaging Systems Super resolution imaging and tracking system
US10924668B2 (en) 2011-09-19 2021-02-16 Epilog Imaging Systems Method and apparatus for obtaining enhanced resolution images
US11689811B2 (en) 2011-09-19 2023-06-27 Epilog Imaging Systems, Inc. Method and apparatus for obtaining enhanced resolution images
US20160171657A1 (en) * 2013-07-31 2016-06-16 Mbda Uk Limited Image processing
US9792669B2 (en) * 2013-07-31 2017-10-17 Mbda Uk Limited Method and apparatus for synthesis of higher resolution images
US10043242B2 (en) 2013-07-31 2018-08-07 Mbda Uk Limited Method and apparatus for synthesis of higher resolution images
US10109034B2 (en) 2013-07-31 2018-10-23 Mbda Uk Limited Method and apparatus for tracking an object
CN112907704A (en) * 2021-02-04 2021-06-04 浙江大华技术股份有限公司 Image fusion method, computer equipment and device
CN112907704B (en) * 2021-02-04 2024-04-12 浙江大华技术股份有限公司 Image fusion method, computer equipment and device

Similar Documents

Publication Publication Date Title
US5877803A (en) 3-D image detector
EP2779624B1 (en) Apparatus and method for multispectral imaging with three-dimensional overlaying
US7425984B2 (en) Compound camera and methods for implementing auto-focus, depth-of-field and high-resolution functions
US20100302355A1 (en) Stereoscopic image display apparatus and changeover method
US20100080453A1 (en) System for recovery of degraded images
US11398053B2 (en) Multispectral camera external parameter self-calibration algorithm based on edge features
CN105163033A (en) Image capturing apparatus, image processing apparatus, and image processing method
US10726531B2 (en) Resolution enhancement of color images
EP3756161B1 (en) Method and system for calibrating a plenoptic camera system
CN109919889B (en) Visibility detection algorithm based on binocular parallax
WO2015105585A1 (en) Artificial vision system
USH1914H (en) Method and system for mitigation of image distortion due to optical turbulence
CN106023108A (en) Image defogging algorithm based on boundary constraint and context regularization
CN115375581A (en) Dynamic visual event stream noise reduction effect evaluation method based on event time-space synchronization
CN112561996A (en) Target detection method in autonomous underwater robot recovery docking
CN110099268B (en) Blind area perspective display method with natural color matching and natural display area fusion
CN113935917A (en) Optical remote sensing image thin cloud removing method based on cloud picture operation and multi-scale generation countermeasure network
US10614559B2 (en) Method for decamouflaging an object
CN112700502A (en) Binocular camera system and binocular camera space calibration method
AU2020408599A1 (en) Light field reconstruction method and system using depth sampling
US7268804B2 (en) Compound camera and method for synthesizing a virtual image from multiple input images
CN113225484B (en) Method and device for rapidly acquiring high-definition picture shielding non-target foreground
KR100927234B1 (en) Method, apparatus for creating depth information and computer readable record-medium on which program for executing method thereof
CN113344997B (en) Method and system for rapidly acquiring high-definition foreground image only containing target object
Li et al. A Hybrid Image Enhancement Framework for Underwater 3D Reconstruction

Legal Events

Date Code Title Description
STCF Information on status: patent grant

Free format text: PATENTED CASE