US20110205368A1 - Digitally enhanced night vision device - Google Patents
- Publication number
- US20110205368A1 (U.S. application Ser. No. 13/098,439)
- Authority
- US
- United States
- Prior art keywords
- image
- sensors
- user
- sensor
- display
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Abandoned
Classifications
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/017—Head mounted
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T11/00—2D [Two Dimensional] image generation
- G06T11/001—Texturing; Colouring; Generation of texture or colour
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/50—Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/11—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0112—Head-up displays characterised by optical features comprising device for generating colour display
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/0138—Head-up displays characterised by optical features comprising image capture systems, e.g. camera
-
- G—PHYSICS
- G02—OPTICS
- G02B—OPTICAL ELEMENTS, SYSTEMS OR APPARATUS
- G02B27/00—Optical systems or apparatus not provided for by any of the groups G02B1/00 - G02B26/00, G02B30/00
- G02B27/01—Head-up displays
- G02B27/0101—Head-up displays characterised by optical features
- G02B2027/014—Head-up displays characterised by optical features comprising information/image processing systems
Definitions
- a system for fusing images comprises sensors 52 and 56 for generating sets of image data representative of a scene 74 to be observed.
- the first sensor 52 generates a set of image data representative of the scene 74 as observed in the field of view 50 of the first sensor 52 .
- the first set of image data from the first sensor 52 is electrically communicated through electrical path or output information signal stream 58 to a processing module P.
- the second sensor 56 generates a second set of image data representative of the scene 74 as observed in the field of view 54 of the second sensor 56 .
- the second set of image data from the second sensor 56 is electrically communicated through the second, parallel electrical path 60 to the processing module P.
- the information processor module P receives and samples the multiple sets of image data originating from the first sensor 52 and the second sensor 56 to generate sample data for computing a fused image array.
- Each of the two or more sets of image data generated by the two or more sensors 52 and 56 comprises, or is equivalent to, an array or matrix of datapoints, with each datapoint in the matrix being representative of at least one information factor that is mapped to a pre-determined area of the scene as received by the corresponding sensor.
- each datapoint in the matrix may be considered to be a pixel represented as a 2 bit, 8 bit, or other sized grouping or chunk of information that signifies a characteristic or factor sensed within the pre-determined area of the scene.
- where the sensor is an IR camera, the sampled factor could be heat or temperature, such that the higher the temperature observed by the sensor within that pre-determined area of the scene, the higher the value assigned to the corresponding datapoint in the sampled data matrix.
- information may include a function representative of a difference in contrast between the specific pre-determined area of the scene and adjacent areas of the scene.
- Each set of image data should have the same pixel arrangement.
- the first set of image data from the first sensor 52 is communicated through signal path 58 to a first processing unit 62 .
- the first processing unit 62 samples the first set of image data from the first sensor 52 to produce a corresponding mapped sample data array for each set of image data received, as when repeated image data sets are taken, such as in a video stream.
- the second set of image data from the second sensor 56 is communicated through signal path 60 to a second processing unit 64 .
- the second processing unit 64 samples the second set of image data from the second sensor 56 to produce a corresponding mapped sample data array for each set of image data received, as when repeated image data sets are taken, such as in a video stream.
- Each sample data array comprises a mapping of datapoints from one of the image data sets, each datapoint mapped to a corresponding datapoint in the related sample data array using a pre-determined function; the mapping function is computed by assigning a point along a color vector in an X-Y, or multi-dimensional, colorspace to a selected value of the information factor.
- Such multi-dimensional colorspace may be similar to that known as the RGB (Red-Green-Blue) colorspace (for media that transmit light), or as that advanced by the International Commission on Illumination (“CIE”) in 1931 and also known as the CIE 1931 color space.
- x and y are projective coordinates, and the colors of the chromaticity diagram occupy a region of the real projective plane.
- the colorspace may be represented by the origin (point 0,0) being black and the x axis intensities of red and the y axis intensities of green, for example.
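As an illustrative sketch only — the patent does not prescribe code, and the function name, 8-bit intensity range, and particular RGB axes here are assumptions — the mapping of a pixel's sensed intensity to a point along a pre-selected color vector might look like:

```python
def colorize(intensity, vector, max_value=255):
    """Map a scalar sensor intensity to a point along a color vector
    in an RGB colorspace; the origin (0, 0, 0) is black."""
    t = max(0.0, min(1.0, intensity / max_value))     # normalize to [0, 1]
    return tuple(round(255 * t * c) for c in vector)  # walk along the vector

# e.g. intensifier output mapped onto the green axis, IR onto the red axis:
green_pixel = colorize(128, (0.0, 1.0, 0.0))   # (0, 128, 0)
red_pixel = colorize(255, (1.0, 0.0, 0.0))     # (255, 0, 0)
```

Assigning each sensor its own axis through colorspace keeps the channels visually separable in the fused display.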
- the user may have a control to select or choose one or more vectors or rays through colorspace to be mapped to a specific sensor's output.
- a fusion module 70 is electrically connected to the first processor 62 by output path 66 and is also electrically connected to the second processor 64 by output path 68 .
- the fusion processing module 70 combines the two or more sets of sampled colorized data sets or matrices and computes a fused colorized image array to be displayed from the sets of mapped sample data arrays.
- a fused colorized image array may be computed from the sample data arrays by combining corresponding datapoints from each of the sample data arrays using a pre-selected arithmetic vector function.
- Examples of such final fusion combination schemes may be: (1) simple, linear mathematical addition of the values from the first sensor and the second sensor; (2) a weighted addition; (3) a non-linear combination, such as a quadratic equation or a Taylor function; or (4) the sum of scalar multiples of each value from the first sensor and the second sensor.
- the important consideration is that each spectral channel's identity be maintained for the final image discrimination and that no information be lost from the choice of color vectors or method of combining the corresponding datapoints from each of the two sensors.
- the fused colorized image array can be computed from the sample data arrays using a class of one or more image fusion algorithms that performs an addition or subtraction process or similar desired functions.
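A minimal sketch of combination scheme (1), simple linear addition of corresponding datapoints with optional weights: because each sensor occupies its own color axis, per-channel addition preserves each spectral channel's identity. The function name and 8-bit clamping are assumptions, not the patent's specification:

```python
def fuse(a, b, w_a=1.0, w_b=1.0):
    """Fuse two colorized arrays (lists of (R, G, B) tuples) by weighted
    per-channel addition, clamped to the 8-bit display range."""
    assert len(a) == len(b), "arrays must share the same pixel arrangement"
    return [tuple(min(255, round(w_a * ca + w_b * cb)) for ca, cb in zip(pa, pb))
            for pa, pb in zip(a, b)]

# One pixel per channel: green from the intensifier, red from the IR camera.
fused = fuse([(0, 200, 0)], [(180, 0, 0)])   # [(180, 200, 0)]
```

Swapping the inner expression for a non-linear function would give schemes (2) through (4) in the same framework.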
- the fused output stream 72 is communicated to an electronic display 24 that receives the fused image array and displays a fused colorized image generated from the fused image array.
- the first and second sensors may generate an analog or digital output signal as desired.
- the input signals conveyed over the signal paths 58 and 60 to the processor P can be analog or digital.
- FIG. 6 is a flowchart demonstrating one method of image fusion in accordance with the present invention. The following steps may be performed automatically using an information processor P.
- the method begins with step 102 , where two or more image sensors generate two or more sets of image data.
- each set of image data comprises an array of datapoints, with each datapoint being representative of at least one information factor mapped to a pre-determined area of the scene as received by the sensor.
- in step 104, the sets of image data are sampled to produce corresponding mapped sample data arrays for each set of image data, for use in computing a fused colorized image array to be displayed from the sample data.
- Each sample data array comprises a mapping of datapoints from an image data set mapped to a corresponding datapoint in the related sample data array using a pre-determined function as described above.
- the mapping function is computed by assigning a point along a color vector in a multi dimensional colorspace to a selected value of the information factor.
- a fused colorized image array, or matrix, is computed from the sample data arrays.
- the values of the colorized image fusion array may be assigned as solutions to linear or non-linear combinations or functions of the sample data, yielding the pixel values that are subsequently conveyed to the display. These values may, for example, give the relative weight of the data from each sensor, such that the data from the sensor producing the better image is given more weight; or they may provide a control for producing, for example, a color image more appropriate for threat assessment by a soldier.
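One hedged way to realize the relative weighting described above — purely a sketch, with intensity variance standing in for whichever quality measure an actual implementation uses — is to weight each sensor in proportion to the contrast of its sample data:

```python
def quality_weights(a, b):
    """Give proportionally more weight to the channel whose sample data
    shows greater intensity variance (a crude proxy for contrast)."""
    def variance(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / len(xs)
    va, vb = variance(a), variance(b)
    total = (va + vb) or 1.0   # guard against two perfectly flat images
    return va / total, vb / total

# The high-contrast channel dominates the fused image:
w_a, w_b = quality_weights([10, 200, 10, 200], [100, 105, 100, 105])
```

The resulting pair sums to one, so it can feed directly into a weighted-addition fusion step.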
- in step 108, a fused colorized image generated from the fused colorized image array is displayed for the user U.
- the processor unit P, the electronic interface module E or the image fusion device D can be networked or interoperably connected with other compatible devices or networks to exchange image transmission or other information through local area type networks or larger tactical networks using known techniques.
Abstract
A user portable viewing device (D) includes a plurality of non-coaxially aligned sensors (26 and 32) for sensing a scene (74). A processing element (P) combines electronic images into a single electronic image. A display (24) displaying the combined electronic image is adaptable for mounting in an optical axis (A1) including an eye (20) of the user (U) and an input end (28) of the first sensor 26 for direct view. In a second embodiment, a system for fusing images comprises sensors (52 and 56) for generating sets of image data. An information processor (P) receives and samples the sets of image data to generate sample data for computing a fused image array. A display (24) receives the fused image array and displays a fused colorized image generated from the fused image array.
Description
- This application is a divisional of U.S. patent application Ser. No. 11/308,461, filed Mar. 28, 2006, entitled DIGITALLY ENHANCED NIGHT VISION DEVICE; and, application Ser. No. 11/308,461 further claims the benefit of U.S. Provisional Application Serial No. 60/594,337, filed Mar. 30, 2005, entitled DIGITALLY ENHANCED NIGHT VISION DEVICE.
- 1. Technical Field
- The invention relates to the field of imaging systems and more particularly to a method and system for fusing image data from multiple sources.
- 2. Background Art
- Multiple sensor imaging systems generate an image of an object by fusing data that is collected using multiple sensors. Gathering image data using multiple sensors, however, has posed challenges. In some systems, the sensors detect light received from separate apertures. Data generated from light from separate apertures, however, describe different points of view of an object that need to be reconciled in order to fuse the data into a single image. Additionally, using separate apertures for different sensors may increase the bulk of an imaging system.
- In other systems, light from an aperture is split into components before entering the sensors. Reflective and refractive elements are typically used to direct the light to different sensors. For example, the system described in U.S. Pat. No. 5,729,376 to Hall et al. includes multiple reflective and refractive elements such as a lens that reflects light towards one sensor and refracts light towards another sensor. Each individual sensor, however, detects only a component of light, for example, only specific wavelengths of light, and thus cannot generate image data from the full spectrum. Additionally, multiple reflective and refractive elements may add to the bulk and weight of an imaging system. Consequently, gathering image data from multiple sensors has posed challenges for the design of imaging systems.
- Yet other systems electronically combine the images generated from separate sensors. Image fusion involves combining two or more images produced by two or more image sensors into one single image. Producing one image that mitigates the weak aspects of the individual images while retaining the strong ones is a complicated task, often requiring a computer or processor with substantial computing power.
- Generally, the prior attempts to fuse or combine images from a plurality of sensors into a single image generated a monochrome image where the only variable in the viewed combined image was a difference in light intensity or the intensity of a single color. Such monochrome images result in the loss of information to the viewer that is otherwise obtainable from the individual sensor images.
- Further, it is known that the human eye has three different types of color sensitive cones. The response of the eye is best described in terms of three “tristimulus values.” However, it has been found that any color can be expressed in terms of the two color coordinates x and y.
- The colors which can be matched by combining a given set of three primary colors (such as the blue, green, and red of a color television screen) are represented on a chromaticity diagram by a triangle joining the coordinates for the three colors.
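The reduction from three tristimulus values to the two chromaticity coordinates x and y is the standard CIE projection, which factors luminance out; a brief sketch (variable names assumed):

```python
def chromaticity(X, Y, Z):
    """Project CIE tristimulus values (X, Y, Z) onto the chromaticity
    diagram: x = X/(X+Y+Z), y = Y/(X+Y+Z)."""
    s = X + Y + Z
    return X / s, Y / s

# The equal-energy white point lands at (1/3, 1/3):
x, y = chromaticity(1.0, 1.0, 1.0)
```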
- Also, a typical night vision system that uses only a single sensor, such as an image intensifier tube, is a direct view system, meaning that the axis of the user's eye is in optical alignment with the primary viewing axis of the image intensifier tube while the user looks at a phosphor screen. Alternative systems using liquid crystal displays (LCDs) or other electronic displays for viewing an image from a sensor are not axially aligned with the sensor generating the electronic image. For example, U.S. Pat. No. 4,786,966, issued 22 Nov. 1988, to Charles M. Hanson et al., teaches a head mounted display using a camera or sensor that is remote from the display, such that the electronic display for viewing the image from a sensor is not axially aligned with the sensor generating the electronic image and the user's optical axis.
- Additionally, multiple sensor units with coaxially aligned sensors are known and taught in U.S. Pat. No. 6,593,561, issued 15 Jul. 2003, to Antonio V. Bacarella et al., as an example. However, the alignment of the display and at least one of the sensors along a single optical axis is not disclosed.
- U.S. Pat. No. 6,560,029, issued 5 May 2003, to Blair R. Dobbie et al., teaches a side mounted, man portable night vision goggle with an image from a thermal camera electronically fused with the images from a video camera on a display.
- While the above cited references introduce and disclose a number of noteworthy advances and technological improvements within the art, none completely fulfills the specific objectives achieved by this invention.
- In accordance with the present invention, a user portable viewing device includes a plurality of non-coaxially aligned sensors for sensing a scene to be viewed and operably attached to a processing element combining electronic images originating from at least two sensors into a single electronic image for viewing by a user. An electronic display is operably connected to the processing element for displaying the combined electronic image. One of the sensors has an input end to sense the scene to be viewed and is adaptable for mounting in an optical axis extending from an eye of the user. The electronic display is adapted to be positioned in the optical axis of the user between the eye of the user and the input end of the sensor for direct view of the display of the combined image by the user.
- In a second embodiment of the present invention, a system for fusing images from a plurality of sensors into a colored image is disclosed. The system comprises two or more sensors for generating two or more sets of image data. An information processor receives and samples the sets of image data to compute and generate a fused non-monochromatic or colored image from the sample data. A display receives the fused image array and displays the fused image generated from the fused image array.
- A four-step method for fusing images is disclosed. Step one calls for receiving sets of image data generated by the plurality of sensors. Step two provides for sampling each set of image data to produce two or more sets of sampled data. In step three, the method provides for generating a colorized array for each set of sampled data, which array is calculated by computing or assigning a specific color value represented by a point along a pre-selected vector in an X-Y colorspace and which color value is determined based on and corresponding to each pixel to be analyzed from the sample data based on the intensity or other factor associated with the analyzed pixel. The last step calls for displaying a fused image generated mathematically combining corresponding points from each of the colorized arrays.
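The four steps above might be sketched end to end as follows; this is an illustrative reading of the method, not the patent's implementation — the flat 8-bit frame format and the fixed green/red vector assignment are assumptions:

```python
def fuse_frames(frame_a, frame_b):
    # Step 1: receive image data from the two sensors (flat lists of
    # 8-bit intensities sharing the same pixel arrangement).
    assert len(frame_a) == len(frame_b)
    # Steps 2-3: sample each set and colorize it along its own
    # pre-selected color vector (green for sensor A, red for sensor B).
    green = [(0, v, 0) for v in frame_a]
    red = [(v, 0, 0) for v in frame_b]
    # Step 4: mathematically combine corresponding points from each
    # colorized array into the fused image conveyed to the display.
    return [tuple(min(255, g + r) for g, r in zip(pg, pr))
            for pg, pr in zip(green, red)]

fused = fuse_frames([0, 128, 255], [255, 128, 0])
# [(255, 0, 0), (128, 128, 0), (0, 255, 0)]
```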
- These and other objects, advantages and features of this invention will be apparent from the following description taken with reference to the accompanying drawings, wherein is shown the preferred embodiments of the invention.
- A more particular description of the invention briefly summarized above is available from the exemplary embodiments illustrated in the drawings and discussed in further detail below. Through this reference, it can be seen how the above cited features, as well as others that will become apparent, are obtained and can be understood in detail. The drawings nevertheless illustrate only typical, preferred embodiments of the invention and are not to be considered limiting of its scope as the invention may admit to other equally effective embodiments.
-
FIG. 1 is a right side elevational view of the present invention affixed to a standard ballistic type helmet. -
FIG. 2 is a right side elevational view of the digitally enhanced night vision (“D-ENVG”) device with the housing cut-away showing the interior components. -
FIG. 3 is a cross sectional view of the present D-ENVG device. -
FIG. 4 is a frontal view of the present D-ENVG device. -
FIG. 5 is a block diagram showing the information flow through the present invention to the viewing display. -
FIG. 6 is a flowchart demonstrating one method of fusing colorized images in accordance with the present invention. - So that the manner in which the above recited features, advantages, and objects of the present invention are attained can be understood in detail, more particular description of the invention, briefly summarized above, may be had by reference to the embodiment thereof that is illustrated in the appended drawings. In all the drawings, identical numbers represent the same elements.
- This application relates to pending application Ser. No. 10/250,196, filed Jun. 11, 2003 and published as US 2003/0231804 A1, the specification of which is incorporated by reference as if fully set forth herein.
- In a first embodiment of the invention, a user portable viewing device D includes a plurality of non-coaxially aligned
sensors scene 74 to be viewed and operably attached to a processing element P combining electronic images originating from at least two sensors into a single electronic image for viewing by a user U. Anelectronic display 24 is operably connected to the processing element P for displaying the combined electronic image. One of thesensors 26 has aninput end 28 to sense thescene 74 to be viewed and is adaptable for mounting in an optical axis A1 extending from aneye 20 of the user U. - The
electronic display 24 is adapted to be positioned in the optical axis A1 of the user U between theeye 20 of the user and theinput end 28 of thefirst sensor 26 for direct view of thedisplay 24 of the combined image by the user U. - Referring particularly to
FIG. 1 , an multi-sensor viewing device D is shown attached to a ballistic type helmet H worn by a user U by a helmet mount M. A separate electronics interface module E is operably connected to the multi-sensor viewing device D. Optionally, the processing element P may be mounted with the separate electronics interface module E or mounted closed to theelectronic display 24 in the viewing device D. - The multi-sensor device D of the present invention is more fully shown in
FIGS. 2 through 4. A first sensor 52 may be, for example, an image intensification tube 26. Such known image intensifier tubes generate a viewable image either directly from a phosphor screen or indirectly through a digital display 24 that is operably connected to a CMOS camera or assembly, which is in turn operably connected to a processing unit P. The observer U, using one eye 20, views the display 24 through an optical eyepiece 22. The input end 28 of the first sensor or image intensifier tube 26 is positioned during viewing within the optical axis A1 extending from the user's eye 20 through the display 24. This alignment reduces or minimizes the disorientation that can result when the user U is guided by an off-axis display image, a particular concern for a soldier relying mainly on the visual cues from the display 24. - The
second sensor 56 generally senses the scene 74 to be observed at a different wavelength of the light spectrum, such as infrared (IR), for example. Thus, the second sensor 56 may be a known IR camera-type device 32 having an input end 34 for receiving the image to be viewed or sensed. - Thus, there is a direct line of view (axis A1) from the user's
eye 20 to the fused image displayed on the electronic image display 24, with an input axis A3 of the first sensor 26 providing a source of an image to be fused prior to display. The viewing axis A2 of the second sensor 32 is generally substantially parallel to the optical axis A1, but axis A1 and axis A2 are not coaxially aligned, which may require a parallax adjustment. - The multi-sensor device D may optionally include accessory assemblies such as an
IR illuminator 36 to generate a sufficient source of energy at an appropriate wavelength to illuminate the target scene 74. - The first and second sensors, such as the
image intensifier tube 26 and IR camera 32, are normally contained within a protective housing 38 that may also contain the display 24 and other components necessary for the operation of the device D. - In a second embodiment of the present invention, and with particular reference to
FIGS. 5 and 6, a system for fusing images comprises sensors 52, 56 for sensing a scene 74 to be observed. - The
first sensor 52 generates a set of image data representative of the scene 74 as observed in the field of view 50 of the first sensor 52. The first set of image data from the first sensor 52 is electrically communicated through an electrical path or output information signal stream 58 to a processing module P. Similarly, the second sensor 56 generates a second set of image data representative of the scene 74 as observed in the field of view 54 of the second sensor 56. The second set of image data from the second sensor 56 is electrically communicated through the second, parallel electrical path 60 to the processing module P. - The information processor module P receives and samples the multiple sets of image data originating from the
first sensor 52 and the second sensor 56 to generate sample data for computing a fused image array. - Each of the two or more sets of image data generated by the two or
more sensors 52, 56 comprises an array of datapoints, with each datapoint being representative of at least one information factor mapped to a pre-determined area of the scene 74 as received by the sensor. - Generally, the first set of image data from the
first sensor 52 is communicated through signal path 58 to a first processing unit 62. The first processing unit 62 samples the first set of image data from the first sensor 52 to produce a corresponding mapped sample data array for each set of image data, if there are repeated image data sets, such as in a video stream. Likewise, the second set of image data from the second sensor 56 is communicated through signal path 60 to a second processing unit 64. The second processing unit 64 samples the second set of image data from the second sensor 56 to produce a corresponding mapped sample data array for each set of image data. - Each sample data array comprises a mapping of datapoints from one of the image data sets to a corresponding datapoint in the related sample data array using a pre-determined function; the mapping function is computed by assigning a point along a color vector in an X-Y or multi-dimensional colorspace to a selected value of the information factor. Such a multi-dimensional colorspace may be similar to the RGB (Red-Green-Blue) colorspace (for media that transmit light), or to that advanced by the International Commission on Illumination ("CIE") in 1931, also known as the CIE 1931 color space. In essence, the CIE colorspace is a two-dimensional representation wherein the x axis and the y axis are related to the three tristimulus values (X, Y, and Z) according to x=X/(X+Y+Z) and y=Y/(X+Y+Z). (Mathematically, x and y are projective coordinates, and the colors of the chromaticity diagram occupy a region of the real projection plane.) Alternatively, the colorspace may be represented with the origin (point 0,0) being black, the x axis giving intensities of red, and the y axis giving intensities of green, for example.
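The mapping described above — assigning each datapoint a point along a color vector, scaled by the value of the information factor — can be sketched as follows. This is an illustrative reading of the patent text, not code from it; the assignment of the intensifier channel to a green vector and the IR channel to a red vector is an assumption for illustration.

```python
def map_to_color_vector(factor, vector):
    """Assign a point along a color vector in RGB colorspace: the
    information factor (clamped to [0, 1]) scales the chosen vector."""
    f = min(max(float(factor), 0.0), 1.0)
    return tuple(f * c for c in vector)

def cie_chromaticity(X, Y, Z):
    """CIE 1931 projective coordinates: x = X/(X+Y+Z), y = Y/(X+Y+Z)."""
    s = X + Y + Z
    return (X / s, Y / s)

# Assumed channel assignments: intensifier -> green axis, IR -> red axis.
i2_point = map_to_color_vector(0.8, (0.0, 1.0, 0.0))   # (0.0, 0.8, 0.0)
ir_point = map_to_color_vector(0.25, (1.0, 0.0, 0.0))  # (0.25, 0.0, 0.0)
```

Because each sensor's datapoints land on a distinct ray through colorspace, the resulting colorized arrays keep the two spectral channels visually separable before fusion.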
- Optionally, the user may have a control to select one or more vectors or rays through colorspace to be mapped to a specific sensor's output. Such an alternative would produce a substantial number of color variations in the fused image, some of which may be better suited to the user's task.
- A
fusion module 70 is electrically connected to the first processor 62 by output path 66 and is also electrically connected to the second processor 64 by output path 68. The fusion processing module 70 combines the two or more sets of sampled colorized data, or matrices, and computes a fused colorized image array to be displayed from the sets of mapped sample data arrays. - A fused colorized image array may be computed from the sample data arrays by combining corresponding datapoints from each of the sample data arrays using a pre-selected arithmetic vector function. Examples of such final fusion combination schemes are: (1) simple, linear mathematical addition of the values from the first sensor and the second sensor; (2) a weighted addition; (3) a non-linear combination, such as a quadratic equation or a Taylor function; or (4) the sum of scalar multiples of each value from the first sensor and the second sensor. The important consideration is that each spectral channel's identity be maintained for the final image discrimination and that no information be lost through the choice of color vectors or the method of combining the corresponding datapoints from the two sensors.
- Alternatively, the fused colorized image array can be computed from the sample data arrays using a class of one or more image fusion algorithms that perform an addition or subtraction process or similar desired functions.
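Scheme (2) above, weighted addition, can be illustrated for a pair of corresponding colorized datapoints. The weights and the clamp to a displayable [0, 1] range are illustrative assumptions, not requirements stated in the patent:

```python
def fuse_weighted(a_rgb, b_rgb, w_a=0.5, w_b=0.5):
    """Weighted addition of corresponding colorized datapoints,
    clamped to the displayable range [0, 1]."""
    return tuple(min(w_a * a + w_b * b, 1.0) for a, b in zip(a_rgb, b_rgb))

# Because each sensor was mapped to its own color vector, channel
# identity survives fusion: green here came only from one sensor's
# datapoint, red only from the other's.
fused = fuse_weighted((0.0, 0.8, 0.0), (0.25, 0.0, 0.0), w_a=1.0, w_b=1.0)
# fused == (0.25, 0.8, 0.0)
```

With disjoint color vectors, no information is lost in the sum — each input value can be read back off its own channel of the fused datapoint, which is the preservation property the text emphasizes.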
- Ultimately, the fused
output stream 72 is communicated to an electronic display 24 that receives the fused image array and displays a fused colorized image generated from it. - The first and second sensors, such as the
image intensifier tube 26 and IR camera 32, may generate an analog or digital output signal as desired. Similarly, the input signals conveyed over the signal paths 58 and 60 may be analog or digital. -
FIG. 6 is a flowchart demonstrating one method of image fusion in accordance with the present invention. The following steps may be performed automatically using an information processor P. The method begins with step 102, where two or more image sensors generate two or more sets of image data. As above, suppose that there are two image sensors, each with the same pixel arrangement, and that each set of image data comprises an array of datapoints, with each datapoint being representative of at least one information factor mapped to a pre-determined area of the scene as received by the sensor. - The method then proceeds to step 104, where the sets of image data are sampled to produce corresponding mapped sample data arrays for each set of image data, for use in computing a fused colorized image array to be displayed from the sample data. Each sample data array comprises a mapping of datapoints from an image data set to a corresponding datapoint in the related sample data array using a pre-determined function as described above. The mapping function is computed by assigning a point along a color vector in a multi-dimensional colorspace to a selected value of the information factor.
- Referring again to
FIG. 6, in step 106, a colorized image fusion array or matrix is calculated from the sample data sets. The values of the colorized image fusion array may be assigned as solutions to linear or non-linear combinations or functions of the sample data, yielding the pixel values that are subsequently conveyed to the display. These values, for example, may give the relative weight of the data from each sensor, such that the data from the sensor that produces the better image is given more weight. Or, these values may be used to control the production of, for example, a color image more appropriate for threat assessment by a soldier. - Finally, in step 108, a fused colorized image generated from the fused colorized image array is displayed for the user U.
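Putting steps 102 through 108 together, a minimal end-to-end sketch for two equal-sized grayscale frames might look like this. The color-vector and weight choices are illustrative assumptions — the patent leaves the particular mapping and combination functions to the implementer:

```python
def fuse_frames(i2_frame, ir_frame, w_i2=1.0, w_ir=1.0):
    """Sketch of steps 102-108 of FIG. 6 for two equal-sized grayscale
    frames (nested lists of values in [0, 1]): map each datapoint onto
    an assumed color vector (step 104), then combine corresponding
    datapoints by weighted per-channel addition (step 106)."""
    fused = []
    for row_a, row_b in zip(i2_frame, ir_frame):
        out_row = []
        for a, b in zip(row_a, row_b):
            green = (0.0, a, 0.0)  # assumed: intensifier -> green axis
            red = (b, 0.0, 0.0)    # assumed: IR -> red axis
            out_row.append(tuple(min(w_i2 * g + w_ir * r, 1.0)
                                 for g, r in zip(green, red)))
        fused.append(out_row)
    return fused  # step 108: this array would be handed to the display

frame = fuse_frames([[0.5, 1.0]], [[0.25, 0.0]])
# frame[0][0] == (0.25, 0.5, 0.0); frame[0][1] == (0.0, 1.0, 0.0)
```

Raising `w_ir` relative to `w_i2` corresponds to the text's suggestion of giving more weight to whichever sensor is producing the better image under current conditions.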
- Also as an optional feature, the processor unit P, the electronic interface module E or the image fusion device D can be networked or interoperably connected with other compatible devices or networks to exchange image transmission or other information through local area type networks or larger tactical networks using known techniques.
- The foregoing disclosure and description of the invention are illustrative and explanatory thereof, and various changes in the size, shape and materials, as well as in the details of the illustrated construction may be made without departing from the spirit of the invention.
Claims (3)
1. A user portable viewing device including a plurality of non-coaxial sensors for sensing a scene to be viewed and operably attached to a processing element combining electronic images originating from at least two sensors into a single electronic image for viewing by a user, the invention comprising:
an electronic display operably connected to the processing element for displaying the combined electronic image;
one of the sensors having an input end to sense the scene to be viewed and being adaptable for mounting in an optical axis extending from an eye of the user; and
the electronic display being adapted to be positioned in the optical axis of the user and between the eye of the user and the input end of the sensor for direct view of the display of the combined image by the user.
2. The device of claim 1 wherein the plurality of sensors include an image intensifier tube and an infrared camera.
3. The device of claim 1 wherein the display to be viewed and the plurality of sensors are mounted within a housing.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/098,439 US20110205368A1 (en) | 2005-03-30 | 2011-04-30 | Digitally enhanced night vision device |
Applications Claiming Priority (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US59433705P | 2005-03-30 | 2005-03-30 | |
US11/308,461 US7969462B2 (en) | 2005-03-30 | 2006-03-28 | Digitally enhanced night vision device |
US13/098,439 US20110205368A1 (en) | 2005-03-30 | 2011-04-30 | Digitally enhanced night vision device |
Related Parent Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/308,461 Division US7969462B2 (en) | 2005-03-30 | 2006-03-28 | Digitally enhanced night vision device |
Publications (1)
Publication Number | Publication Date |
---|---|
US20110205368A1 true US20110205368A1 (en) | 2011-08-25 |
Family
ID=37069898
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/308,461 Active 2030-04-28 US7969462B2 (en) | 2005-03-30 | 2006-03-28 | Digitally enhanced night vision device |
US13/098,439 Abandoned US20110205368A1 (en) | 2005-03-30 | 2011-04-30 | Digitally enhanced night vision device |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US11/308,461 Active 2030-04-28 US7969462B2 (en) | 2005-03-30 | 2006-03-28 | Digitally enhanced night vision device |
Country Status (5)
Country | Link |
---|---|
US (2) | US7969462B2 (en) |
EP (1) | EP1864509A4 (en) |
JP (1) | JP4971301B2 (en) |
CA (1) | CA2599821C (en) |
WO (1) | WO2006110325A2 (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9436894B2 (en) | 2013-08-20 | 2016-09-06 | Hanwha Techwin Co., Ltd. | Image alignment apparatus and image alignment method of using the same |
WO2021005309A1 (en) * | 2019-07-10 | 2021-01-14 | Photonis France | Digital nocturnal vision apparatus with adjustable image acquisition speed |
Families Citing this family (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US7427758B2 (en) | 2003-05-28 | 2008-09-23 | Opto-Knowledge Systems, Inc. | Cryogenically cooled adjustable apertures for infra-red cameras |
US20060126085A1 (en) * | 2004-12-10 | 2006-06-15 | Owen Robert A | Non-linear colorization for imaging systems |
US7211778B1 (en) * | 2005-10-07 | 2007-05-01 | Itt Manufacturing Enterprises, Inc. | Night vision goggle with separate camera and user output paths |
US8164813B1 (en) | 2007-06-16 | 2012-04-24 | Opto-Knowledge Systems, Inc. | Non-circular continuous variable aperture or shutter for infrared cameras |
US7962313B2 (en) * | 2007-12-14 | 2011-06-14 | Palo Alto Research Center Incorporated | Method and apparatus for using mobile code for distributed data fusion in networked sensing systems |
FR2954090B1 (en) * | 2009-12-22 | 2012-08-31 | Commissariat Energie Atomique | DIGITAL EYE PROTECTION GLASSES WITH SIGNAL PROCESSING |
US8836793B1 (en) | 2010-08-13 | 2014-09-16 | Opto-Knowledge Systems, Inc. | True color night vision (TCNV) fusion |
EP2679000B1 (en) * | 2011-02-25 | 2016-11-02 | Photonis Netherlands B.V. | Acquiring and displaying images in real-time |
CA2934528C (en) * | 2013-12-17 | 2022-06-28 | Marsupial Holdings Inc. | Integrated microoptic imager, processor, and display |
CN112153247B (en) * | 2019-06-27 | 2022-02-01 | 杭州海康威视数字技术股份有限公司 | Video camera |
US20210314500A1 (en) * | 2020-04-03 | 2021-10-07 | Rockwell Collins, Inc. | Color night vision system and method |
US11375166B2 (en) * | 2020-09-21 | 2022-06-28 | Microsoft Technology Licensing, Llc | Selective colorization of thermal imaging |
Citations (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4786966A (en) * | 1986-07-10 | 1988-11-22 | Varo, Inc. | Head mounted video display and remote camera system |
US5555324A (en) * | 1994-11-01 | 1996-09-10 | Massachusetts Institute Of Technology | Method and apparatus for generating a synthetic image by the fusion of signals representative of different views of the same scene |
US5561751A (en) * | 1994-12-13 | 1996-10-01 | Microsoft Corporation | System and method for displaying a color image using vector error diffusion |
US5729376A (en) * | 1996-07-01 | 1998-03-17 | The United States Of America As Represented By The Secretary Of The Army | Catadioptric multi-functional optical assembly |
US6211911B1 (en) * | 1994-10-14 | 2001-04-03 | Olympus Optical Co., Ltd. | Image processing apparatus |
US20020195561A1 (en) * | 2001-06-22 | 2002-12-26 | Bacarella Antonio V. | Method and system for gathering image data using multiple sensors |
US6560029B1 (en) * | 2001-12-21 | 2003-05-06 | Itt Manufacturing Enterprises, Inc. | Video enhanced night vision goggle |
US20030231804A1 (en) * | 2002-06-12 | 2003-12-18 | Litton Systems, Inc. | System for multi-sensor image fusion |
US20040196566A1 (en) * | 2002-06-06 | 2004-10-07 | Litton Systems, Inc. | Integrated display image intensifier assembly |
US20050190990A1 (en) * | 2004-01-27 | 2005-09-01 | Burt Peter J. | Method and apparatus for combining a plurality of images |
US20070076917A1 (en) * | 2003-03-21 | 2007-04-05 | Lockheed Martin Corporation | Target detection improvements using temporal integrations and spatial fusion |
US20090256908A1 (en) * | 2008-04-10 | 2009-10-15 | Yong-Sheng Chen | Integrated image surveillance system and image synthesis method thereof |
US7620265B1 (en) * | 2004-04-12 | 2009-11-17 | Equinox Corporation | Color invariant image fusion of visible and thermal infrared video |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2001082593A1 (en) * | 2000-04-24 | 2001-11-01 | The Government Of The United States Of America, As Represented By The Secretary Of The Navy | Apparatus and method for color image fusion |
JP3930359B2 (en) | 2002-03-29 | 2007-06-13 | オリンパス株式会社 | Sentinel lymph node detection apparatus and detection method |
JP3718660B2 (en) | 2002-04-11 | 2005-11-24 | フジノン株式会社 | Night vision camera |
JP2004003878A (en) | 2002-04-30 | 2004-01-08 | Kobe Steel Ltd | Device and method of measuring activity of plant |
US7092013B2 (en) | 2002-06-12 | 2006-08-15 | Litton Systems, Inc. | InGaAs image intensifier camera |
-
2006
- 2006-03-28 US US11/308,461 patent/US7969462B2/en active Active
- 2006-03-29 WO PCT/US2006/011404 patent/WO2006110325A2/en active Application Filing
- 2006-03-29 EP EP06758218A patent/EP1864509A4/en not_active Withdrawn
- 2006-03-29 JP JP2008504293A patent/JP4971301B2/en active Active
- 2006-03-29 CA CA2599821A patent/CA2599821C/en not_active Expired - Fee Related
-
2011
- 2011-04-30 US US13/098,439 patent/US20110205368A1/en not_active Abandoned
Patent Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US4786966A (en) * | 1986-07-10 | 1988-11-22 | Varo, Inc. | Head mounted video display and remote camera system |
US6211911B1 (en) * | 1994-10-14 | 2001-04-03 | Olympus Optical Co., Ltd. | Image processing apparatus |
US5555324A (en) * | 1994-11-01 | 1996-09-10 | Massachusetts Institute Of Technology | Method and apparatus for generating a synthetic image by the fusion of signals representative of different views of the same scene |
US5561751A (en) * | 1994-12-13 | 1996-10-01 | Microsoft Corporation | System and method for displaying a color image using vector error diffusion |
US5729376A (en) * | 1996-07-01 | 1998-03-17 | The United States Of America As Represented By The Secretary Of The Army | Catadioptric multi-functional optical assembly |
US6593561B2 (en) * | 2001-06-22 | 2003-07-15 | Litton Systems, Inc. | Method and system for gathering image data using multiple sensors |
US20020195561A1 (en) * | 2001-06-22 | 2002-12-26 | Bacarella Antonio V. | Method and system for gathering image data using multiple sensors |
US6560029B1 (en) * | 2001-12-21 | 2003-05-06 | Itt Manufacturing Enterprises, Inc. | Video enhanced night vision goggle |
US20040196566A1 (en) * | 2002-06-06 | 2004-10-07 | Litton Systems, Inc. | Integrated display image intensifier assembly |
US20030231804A1 (en) * | 2002-06-12 | 2003-12-18 | Litton Systems, Inc. | System for multi-sensor image fusion |
US20070076917A1 (en) * | 2003-03-21 | 2007-04-05 | Lockheed Martin Corporation | Target detection improvements using temporal integrations and spatial fusion |
US20050190990A1 (en) * | 2004-01-27 | 2005-09-01 | Burt Peter J. | Method and apparatus for combining a plurality of images |
US7620265B1 (en) * | 2004-04-12 | 2009-11-17 | Equinox Corporation | Color invariant image fusion of visible and thermal infrared video |
US20090256908A1 (en) * | 2008-04-10 | 2009-10-15 | Yong-Sheng Chen | Integrated image surveillance system and image synthesis method thereof |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9436894B2 (en) | 2013-08-20 | 2016-09-06 | Hanwha Techwin Co., Ltd. | Image alignment apparatus and image alignment method of using the same |
WO2021005309A1 (en) * | 2019-07-10 | 2021-01-14 | Photonis France | Digital nocturnal vision apparatus with adjustable image acquisition speed |
FR3098605A1 (en) * | 2019-07-10 | 2021-01-15 | Photonis France | DIGITAL NIGHT VISION DEVICE WITH ADJUSTABLE IMAGE ACQUISITION SPEED. |
Also Published As
Publication number | Publication date |
---|---|
EP1864509A2 (en) | 2007-12-12 |
US20060221180A1 (en) | 2006-10-05 |
CA2599821A1 (en) | 2006-10-19 |
JP2008537665A (en) | 2008-09-18 |
US7969462B2 (en) | 2011-06-28 |
JP4971301B2 (en) | 2012-07-11 |
EP1864509A4 (en) | 2012-11-07 |
CA2599821C (en) | 2012-09-18 |
WO2006110325A3 (en) | 2007-12-13 |
WO2006110325A2 (en) | 2006-10-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7969462B2 (en) | Digitally enhanced night vision device | |
US7307793B2 (en) | Fusion night vision system | |
US20150213754A1 (en) | High efficiency beam combiner coating | |
US5555324A (en) | Method and apparatus for generating a synthetic image by the fusion of signals representative of different views of the same scene | |
US9373277B2 (en) | Extending dynamic range of a display | |
JP6283020B2 (en) | Viewer with display overlay | |
US6762884B2 (en) | Enhanced night vision goggle assembly | |
US7411193B2 (en) | Portable radiometry and imaging apparatus | |
US6798578B1 (en) | Integrated display image intensifier assembly | |
US20070228259A1 (en) | System and method for fusing an image | |
US7746551B2 (en) | Vision system with eye dominance forced to fusion channel | |
US8824828B1 (en) | Thermal V-curve for fusion image declutter | |
US11120534B2 (en) | Color night vision goggle | |
US20050029456A1 (en) | Sensor array with a number of types of optical sensors | |
US20180109739A1 (en) | Apparatus, system and method of modifying an image sensor to achieve hyperspectral imaging in low light | |
WO2000037970A2 (en) | Extreme temperature radiometry and imaging apparatus | |
US20030231245A1 (en) | Ingaas image intensifier camera | |
EP2104340A1 (en) | Combined thermal and visible imaging | |
JP3217276B2 (en) | Simulated view image generator | |
GB2538256A (en) | Optical data insertion device | |
CN115524102A (en) | Multi-picture multi-scene night vision mirror tester | |
Wolff et al. | Versatile low-power multispectral video fusion hardware |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STCB | Information on status: application discontinuation |
Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION |