WO1992001264A1 - Method and structure for calibrating a computer generated image - Google Patents

Method and structure for calibrating a computer generated image

Info

Publication number
WO1992001264A1
WO1992001264A1 (PCT/US1991/004882)
Authority
WO
WIPO (PCT)
Prior art keywords
image data
calibration
picture
data
image
Prior art date
Application number
PCT/US1991/004882
Other languages
French (fr)
Inventor
Robert L. Cook
Original Assignee
Light Source Computer Images, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Light Source Computer Images, Inc. filed Critical Light Source Computer Images, Inc.
Publication of WO1992001264A1 publication Critical patent/WO1992001264A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00 Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46 Colour picture communication systems
    • H04N1/56 Processing of colour picture signals
    • H04N1/60 Colour correction or control
    • H04N1/603 Colour correction or control controlled by characteristics of the picture signal generator or the picture reproducer
    • H04N1/6052 Matching two or more picture signal generators or two or more picture reproducers
    • H04N1/6055 Matching two or more picture signal generators or two or more picture reproducers using test pattern analysis
    • H04N1/40 Picture signal circuits
    • H04N1/407 Control or modification of tonal gradation or of extreme levels, e.g. background level
    • H04N1/4076 Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on references outside the picture
    • H04N1/4078 Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on references outside the picture using gradational references, e.g. grey-scale test pattern analysis

Abstract

A computer imaging system allows for calibration (1A) of scanners (107) and printers (105). The calibration (1A) is performed whereby a calibration image (103) in the system is stored and manipulated as desired, and printed out as a resultant calibration picture (106). The resultant calibration picture (106) is then input to the system again to create a resultant calibration image (108). A comparison is made between the original calibration image (103) and the resultant calibration image (108). This comparison yields calibration data (109) indicating the distortion introduced by the particular hardware and software combination used in the calibration process (1A). The calibration data (109) is then used in a correction stage (1B) wherein a picture is input to the system, anti-distorted utilizing the calibration data (109), and an output picture (116) is provided, with the anti-distorted data causing the output picture (116) to appear substantially identical to the input picture.

Description

METHOD AND STRUCTURE FOR CALIBRATING A COMPUTER GENERATED IMAGE
INTRODUCTION
Background
This invention pertains to a system for automatically adjusting computer gray scale or color images in order to compensate for deviations resulting from the use of input devices such as scanners and/or output devices such as printers. In this manner, the print or video display of the image closely matches the original image, be it a computer rendered image or an image which was initially received via a scanner. The system accounts for the particular printing process used and for any programs that subsequently manipulate the image or prepare it for printing by doing color separation and half tone generation.
DESCRIPTION OF THE PRIOR ART
Computer imaging systems are faced with the problem that the printed image, for example that sent to a printer or a video screen, may not accurately represent the image as defined by the computer. This problem is exacerbated when the computer stored image is originated via a method which itself introduces its own distortions, for example when an original image is provided to the computer via a scanner. Certain prior art systems have been developed which attempt to mitigate these problems.
A first such prior art method is employed by certain systems typically used in professional pre-print shops. With this method, the user specifies a small number of "aim points" (usually 3 or 4) on the image and controls the colors at those points. It has significant disadvantages: 1. Because it allows control over only a small number of aim points, other colors are often considerably skewed. This leaves the user having to choose which parts of the image are the most important and taking what he gets with the rest. For example, in an advertisement, the user often has to choose between getting the color of the product right and having the flesh tones of the people be wrong, or getting the flesh tones right and having the product be the wrong color.
2. The process of specifying aim points is an esoteric art and requires a highly trained operator.
A second method is used by desk top computer scanning programs. With this method, the user has control over a few parameters, such as "brightness" and "contrast", that control a mapping of the colors seen by the scanner. Unfortunately, there are a number of problems with this method:
1. While these controls are somewhat easier to use than aim points, they are still intimidating to the average user, and it requires a significant amount of training to learn to use these controls effectively.
2. The only feedback to the user is what appears on the computer screen, which is usually very different from what is printed. The user can accurately determine the result of manipulations only by making a print. This makes the process very inaccurate and slow.
3. The entire process depends on the judgement of the user in determining which settings of the controls provide the best print.
4. As with the first prior art method, described above, no one setting of the controls will give an optimal result over the entire image.
A third prior art method, exemplified by U.S. Patent No. 4,500,919, involves measuring the characteristics of the scanner and the printer in terms of device-independent CIE color coordinates. This approach is more accurate than the previous two methods. Its calibration technique involves printing a calibration sheet and then measuring color coordinates on that sheet with a device called a colorimeter; these measurements are tedious, time-consuming and require professional training. The need for these device- independent measurements is inherent in the method. A fourth prior art method, described in U.S.
Patent No. 4,802,007, also involves printing a calibration sheet, but it does not require the measurements of that sheet to be device-independent. Instead of an expensive colorimeter, it uses a scanner to measure colors on the sheet. The resulting calibration is for the combination of scanner and printer, rather than for the printer separate from the scanner. This has the advantage that the calibration process can be made automatic because the color measurement information is directly available to the computer, but it imposes some significant limitations. Some of these limitations are a result of not having device-independent measurements and calibrating only for a scanner-printer combination; others arise because the method was designed to be part of a dedicated hardware system for a particular application and has characteristics that prevent it from being usable outside that system. These limitations are: The calibration image data is fed directly to the printer, so that it cannot be processed by the computer. Therefore, this method will only work as part of a closed system in a way that fundamentally precludes its use with other software programs, for example page layout, desktop presentation, image editing and retouching programs, and the like.
2. The calibration data is tied to a particular printer and video system. Although not taught or suggested, if the system were to be expanded to calibrate a plurality of S scanners and a plurality of P printers, the calibration operation would have to be performed for each of the SxP combinations of scanners and printers, and calibration data related to each such calibration would have to be stored and readily available. Not only is this quite cumbersome, but in many situations certain of such combinations of input and output devices cannot be readily calibrated. For example, if the combination consists of a scanner which is a slide reader and a printer which is a flatbed printer, calibration cannot be performed because there is no direct way of having the slide reader input the flatbed print which has been output by the system.
3. The method does not provide that the data stored in the computer pertaining to particular areas within an image indicate intensities which are linearly proportional to the intensities contained in the original image which has been scanned, i.e., such that a first area in the original image which is twice as bright as a second area in the original image is stored as computer data having an intensity value which is twice the intensity value stored for the corresponding second region. The use of linearly proportional data is necessary to perform image editing operations without introducing distortions. A typical example of such distortions is the familiar jagged edges and "halo" effect of electronically "pasting" a foreground image over a background image, which may occur if the data associated with the foreground and background images are not both linearly proportional.
4. The method allows no way to accurately display or print computer rendered images, i.e., images that do not originate as scanned images but rather are originally generated by the computer.
SUMMARY OF THE INVENTION
The present invention has significant advantages over the prior art. In accordance with this invention, one is able to calibrate the printer and scanner in a device-independent manner, but without requiring device-independent color measurements of the printed calibration sheet.
In accordance with the teachings of this invention, a novel computer imaging system and method are taught which allow for calibration of scanners and printers.
In one embodiment of this invention, the calibration is performed whereby a calibration image in the system is stored and manipulated as desired, and then printed out as a resultant calibration picture. The resultant calibration picture is then input to the system again to create a resultant calibration image. A comparison is made between the original calibration image and the resultant calibration image. This comparison yields calibration data indicating the distortion introduced by the particular hardware and software combination used in the calibration process. The calibration data is then used in a correction stage wherein a picture is input to the system, anti-distorted utilizing the calibration data, and an output picture is provided with the anti-distortion causing the output picture to appear substantially identical to the input picture.
In an alternative embodiment, the aforementioned calibration data is used together with a second set of calibration data which is generated by comparing scanned values of an input calibration picture with known accurate data values corresponding to said calibration picture. The second set of calibration data is representative of the distortions caused by the input device only. The first and second sets of calibration data are then combined to yield a third set of calibration data which represents distortions introduced by the output device only. The second and third sets of calibration data are then used during normal operation of the system in order to provide computer images which are adjusted for the distortions which would otherwise be introduced by the input device, and which provide output pictures which are adjusted for the distortions which would otherwise be introduced by the output device.
In accordance with this second embodiment of this invention, a plurality of said calibration data are stored, one for each input device, and one for each output device. Combinations of input and output devices are then selected as desired, and the entire system is calibrated using previously stored calibration data pertaining to the selected input and output devices. In this manner, it is not necessary to perform a calibration and store calibration data for every possible combination of input device and output device.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is a diagram depicting a first embodiment of a computer imaging system including calibration and correction which is performed in accordance with the teachings of this invention;
Figure 2a depicts a second embodiment of a calibration operation performed in accordance with the teachings of this invention; Figure 2b depicts a second embodiment of a correction operation performed in accordance with the teachings of this invention which utilizes the calibration generated by the embodiment of Figure 2a; and Figure 3 depicts how the embodiments of Figures 2a and 2b are utilized in conjunction with a plurality of input devices, output programs, and output devices.
DETAILED DESCRIPTION
For the purposes of this specification, the following terms are used with the following definitions:
Picture: physical representation of a real-world scene, such as a print, slide, or a video signal.
Image: pixel data stored in computer memory.
Scanner: any image input device or process that converts a picture into an image, such as flat bed, drum, and slide scanners, and video and still video cameras.
Printer: any image output device or process that converts an image into a picture, such as laserwriters, phototypesetters, offset printers, chromalins, video displays, lithography and the like.
Color: color or grey scale (of any number of bits and/or channels).
Computer Rendered Image: an image which has its origin as computer generated data rather than an image which is provided to the computer from a picture via a scanner.
Color Space: any coordinate system for numerical color data in an image based on physical colors in a picture.
Linearly Proportional Data: data pertaining to an image which has the characteristic that numerical intensity values of the image have ratios equal to the corresponding ratios of physical intensity values for an associated input picture or output picture. This includes data where the ratios are known but not equal, where equal ratios can be derived (e.g. logarithmic data).
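To make the "Linearly Proportional Data" definition concrete, the short Python sketch below (an illustration added here, not part of the patent; the function name and logarithm base are assumptions) converts logarithmic readings into values whose ratios match the physical intensity ratios of the picture.

```python
import math

def log_to_linear(log_values, base=10.0):
    """Map logarithmic readings to values that are linearly proportional
    to physical intensity (equal ratios can be derived, per the definition)."""
    return [base ** v for v in log_values]

if __name__ == "__main__":
    # Two regions whose physical intensities differ by a factor of 2.0.
    readings = [math.log10(0.25), math.log10(0.5)]
    linear = log_to_linear(readings)
    print(linear[1] / linear[0])  # ~2.0: the physical intensity ratio is preserved
```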
First Embodiment
One embodiment of this invention is described with reference to Fig. 1, and includes two stages. The first stage 1A is a calibration stage which is preferably performed relatively rarely, depending on system drift, for example. Stage 1B is an image correction stage which is performed automatically on each scanned image.
Referring still to Fig. 1, in calibration stage 1A, the user begins with calibration image 103 stored as a pixel pattern in computer memory. Calibration image 103 is provided, for example, on a disk or is generated by a program, as is well known. Calibration image 103 is processed in 104, for example, by program A, which in fact may either be called by or be a portion of one or more other programs. Running program A ultimately results in the image being printed as calibration picture 106 by printer 105. If desired, program A may include color separations and half tone dot generation. The resulting calibration picture 106 is scanned by scanner 107, with the resultant scanner data being stored in computer memory as calibration image 108. Calibration image 108 is then compared with original calibration image 103. The result of this comparison is used to compute data which is stored as calibration data 109.
In correction stage 1B, picture 110 is scanned by scanner 107 in order to obtain image 112 stored in computer memory. Correction process 113 is performed utilizing calibration data from Step 109 in order to create anti-distorted image 114 from image 112. Anti-distorted image 114 is then processed by program A shown in Step 104, which is the same or substantially similar to program A shown in Step 104 of calibration stage 1A. Since program A in Step 104 will introduce the same distortions as it does in calibration stage 1A, when anti-distorted image 114 is later printed by printer 105, the resultant picture 116 appears substantially the same as picture 110. Thus, Step 113 anti-distorts image 112 to provide anti-distorted image 114 which is then distorted by program A in Step 104.
The anti-distortion provided by Step 113 compensates for the distortion introduced by program A, scanner 107, and printer 105.
Correction Step 113 is based on the following principle. In calibration stage 1A, assume that a region of calibration image 103 has a color C1, as shown. Further, assume that after being processed by program A in Step 104 of calibration stage 1A, and printed by printer 105 to provide calibration picture 106, this region appears as the color C2 when calibration picture 106 is scanned by scanner 107 to provide calibration image 108. In correction stage 1B, assume a region in picture 110 becomes color C2 when scanned by scanner 107. Correction process 113 anti-distorts the computer image by substituting color C1 for color C2. We know from calibration stage 1A that when the resulting image 114 is run through the same printing process (i.e. program A and the same printer) as calibration image 103 was in calibration stage 1A, color C1 is printed as a color that is seen by scanner 107 as color C2. Thus, to scanner 107, output picture 116 has the same colors as original input picture 110.
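The color-substitution principle behind correction Step 113 can be sketched as follows. This is a minimal, illustrative Python fragment rather than the patent's implementation: it assumes a per-pixel correspondence between calibration images 103 and 108, and it uses an exact-match lookup where a practical system would interpolate from a representative subset of colors (see the Alternative Embodiments section below).

```python
def build_calibration_data(calibration_image_103, calibration_image_108):
    """Calibration stage 1A: pair each scanned color C2 with the original
    color C1 it came from, producing calibration data 109."""
    calibration_109 = {}
    for c1, c2 in zip(calibration_image_103, calibration_image_108):
        calibration_109[c2] = c1  # the scanner sees C2 where C1 was printed
    return calibration_109

def anti_distort(image_112, calibration_109):
    """Correction stage 1B (Step 113): substitute C1 for every scanned C2, so
    printing through the same program A and printer reproduces the picture."""
    return [calibration_109.get(c2, c2) for c2 in image_112]

if __name__ == "__main__":
    original = [(10, 20, 30), (200, 180, 160)]    # calibration image 103
    rescanned = [(14, 25, 28), (190, 170, 150)]   # calibration image 108
    data_109 = build_calibration_data(original, rescanned)
    scanned_picture = [(14, 25, 28)]              # image 112
    print(anti_distort(scanned_picture, data_109))  # -> [(10, 20, 30)]
```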
Second Embodiment
A second embodiment of this invention also consists of two stages. The calibration stage is illustrated in Fig. 2a, and the correction stage is illustrated in Fig. 2b.
Referring to Fig. 2a, calibration stage 2A includes calibration stage 2A-1 (which has been previously described as calibration stage 1A with reference to Fig. 1), in order to obtain calibration data 109. In addition, calibration stage 2A of this embodiment includes stage 2A-2. Here, the user scans a known picture 205 using scanner 107 (i.e. the same scanner used in calibration stage 1A). The result is stored as calibration image 207. Calibration image 207 is compared with the known correct values for picture 205, which have been previously determined and entered as data which is stored as calibration image 208. The results of this comparison are stored as calibration data 209, the data necessary to calibrate scanner 107 in order to scan a picture and produce linearly proportional data. Calibration data 209 and calibration data 109 are used to generate calibration data 210, the data necessary to calibrate printer 105 in combination with "Program A" of Step 104 so that the printed colors will have intensity values that are linearly proportional to the image data. Of interest, this separates the calibration of printer 105, which is based on calibration data 210, from the calibration of scanner 107, which is based on calibration data 209.
In correction stage 2B of this embodiment, shown in Fig. 2b, picture 302 is scanned by scanner 107 to create image 304. Image 304 is corrected by a process 305 based on calibration data 209 to obtain image 306, which has been corrected for distortions introduced by scanner 107. Image 306 has pixel values that are linearly proportional to the intensity values in the original picture 302 and thus are better suited for processing by "Programs B" (Step 307), such as image editing, painting, or matting programs. Image 306 is then, if desired, manipulated (Step 307) by "Programs B" to produce an image 308 which is further corrected by Step 309 based on calibration data 210, which in turn is based on calibration data 109 and 209 as previously described with reference to Figure 2a. Correction Step 309 thus produces image 310 which has been corrected to account for distortions that will be introduced by program 104 and printer 105. Image 310 is then processed by "Program A" in Step 104 and printed on printer 105 to produce picture 312. Picture 312 looks substantially like picture 302, except for manipulations performed by Step 307.
In this embodiment, the correction process of Step 305 is based on the following principle. In calibration stage 2A (Fig. 2a), assume that when calibration picture 205 is scanned by scanner 107, a region of the resulting calibration image 207 has a color C2. Furthermore, assume that the known correct value for this region is the color C3, as stored in calibration image 208. In correction stage 2B (Fig. 2b), assume a region in picture 302 is stored as color C2 when scanned by scanner 107. Correction Step 305 substitutes color C3 for C2 to create image 306. The color stored in image 306 is now the correct color and can be manipulated as such by Programs B of Step 307.
The correction process of Step 309 is based on the following principle. In calibration stage 2A (Fig. 2a), assume that a region of calibration image 103 has a color C1. Further assume that after being processed by "Program A" (Step 104) and printed on printer 105, this region has a color C2 in image 108 when calibration picture 106 is scanned by scanner 107. In correction stage 2B (Fig. 2b), assume a region in picture 302 is stored as color C2 in image 304 when picture 302 is scanned by scanner 107. Assume further that this color C2 is changed to color C3 by the correction process of Step 305 to create image 306. Further assume, for the purposes of this explanation, that Programs B do not manipulate the image data. Programs B are intended to alter the image, so the intent here is to provide Programs B with linearly proportional data so that the manipulations do not have errors introduced from working on distorted data.
The correction process of Step 309 is equivalent to reversing the correction process of Step 305 and then applying the correction process of Step 113 from Figure 1. The region of image 308 of color C3 is known to represent a region that was scanned as color C2 (before correction by Step 305), and color C1 is known to print so that it looks like color C2 to scanner 107. So the correction process of Step 309 substitutes color C1 for C3 to create image 310. Notice that this substitution of color C1 for color C3 does not depend on the color C2 seen by scanner 107, but only on the scanner-independent color C3. This eliminates from Step 309 any dependence on the characteristics of scanner 107. If some other scanner were used that provided color C4, rather than color C2, the calibration data associated with the other scanner would cause color C3 to be substituted for color C4 in Step 305, as desired. In this event, Step 309 will still substitute color C1 for color C3. When the resulting image 310 is run through the same printing process as was calibration image 103 in calibration process 2A-1, color C1 is printed in picture 312 as color C2 as seen by scanner 107. Thus, to scanner 107, picture 312 has the same colors as original picture 302, except for changes made by Programs B of Step 307, as desired. A similar result occurs when Programs B actually manipulate the image data, with the changes being accurately reflected due to the use of linearly proportional data. This method also works when image 308 is a computer rendered image, rather than originating from picture 302 via scanner 107, assuming that computer rendered image 308 has linearly proportional data.
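The relationship between calibration data 109, 209 and 210 and the two correction steps can be summarized with the following Python sketch. The dictionary representation and function names are assumptions for illustration; they are not taken from the patent.

```python
def derive_printer_calibration(data_109, data_209):
    """Combine 109 (scanned C2 -> printed C1) with 209 (scanned C2 -> true C3)
    to obtain 210 (true C3 -> the C1 to send to program A / printer 105)."""
    data_210 = {}
    for c2, c1 in data_109.items():
        c3 = data_209[c2]
        data_210[c3] = c1
    return data_210

def correct_pipeline(image_304, data_209, data_210, programs_b=lambda img: img):
    """Correction stage 2B: scanner correction, optional editing, printer correction."""
    corrected_306 = [data_209.get(c2, c2) for c2 in image_304]  # Step 305
    edited_308 = programs_b(corrected_306)                      # Step 307
    return [data_210.get(c3, c3) for c3 in edited_308]          # Step 309

if __name__ == "__main__":
    data_109 = {"C2": "C1"}   # from calibration stage 2A-1
    data_209 = {"C2": "C3"}   # from calibration stage 2A-2
    data_210 = derive_printer_calibration(data_109, data_209)
    print(correct_pipeline(["C2"], data_209, data_210))  # -> ['C1']
```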
As shown in the embodiment of Figure 3, the calibration stage is performed on a number of different scanners S1 through Sn, a number of different printing programs A1 through Ak, and printers P1 through Pm. In this embodiment, the system is capable of taking input data from any of scanners S1 through Sn and providing output data to any of printers P1 through Pm, utilizing, if desired, any one of output programs A1 through Ak. Regardless of the combination selected, the image printed by the selected printer is calibrated to appear substantially identical with the input image received from the selected scanner, except for those manipulations performed by "Programs B", if utilized. This is true even if the scanners and printers have not been calibrated in every combination.
For example, in one embodiment a slide scanner is used for input and calibrated using a printing process that makes slides, but a reproduction of a scanned slide is also capable of being printed on paper using a different printing process that was calibrated using a flatbed scanner.
In this embodiment, linear data values are generated, i.e., data values which are linearly proportional to the actual values of the picture. This allows separate calibrations to be performed for scanners, output programs A, and printers, which can then be selected as desired without further calibration, in order to cause the image stored in the computer to look like the print, and to make accurate prints of computer-generated data. The teachings of this embodiment allow for manipulation of the image by manipulation programs B, such as image editing, painting, or matting programs. These programs require pixel values in the image that are linearly proportional to the intensity values in the original image; otherwise they may introduce artifacts into the image.
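One way to picture the Figure 3 arrangement is a registry holding one calibration per scanner and one per program/printer combination, from which any pairing can be assembled without a new calibration run. The sketch below is an assumption-laden illustration (the device names and data layout are invented), not the patent's implementation.

```python
SCANNER_CAL = {                      # scanner name -> scanned C2 -> true C3
    "slide_scanner": {"C2a": "C3"},
    "flatbed_scanner": {"C2b": "C3"},
}
PRINTER_CAL = {                      # (program, printer) -> true C3 -> C1 to send
    ("A1", "film_recorder"): {"C3": "C1_film"},
    ("A1", "offset_press"): {"C3": "C1_press"},
}

def process(image, scanner, program, printer, edit=lambda img: img):
    """Scan-correct, optionally edit, then printer-correct for any combination."""
    s_cal = SCANNER_CAL[scanner]
    p_cal = PRINTER_CAL[(program, printer)]
    linear = [s_cal.get(c, c) for c in image]   # Step 305
    edited = edit(linear)                       # Programs B, Step 307
    return [p_cal.get(c, c) for c in edited]    # Step 309

if __name__ == "__main__":
    # A slide scanned on the slide scanner can be printed on the offset press,
    # even though that exact pairing was never calibrated together.
    print(process(["C2a"], "slide_scanner", "A1", "offset_press"))  # ['C1_press']
```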
So far, the effects of using different programs A have not been considered separately from using different printers P. For a given scanner, the user may wish to use at times n different programs A and require m different printers P. As described thus far, both embodiments would require the user to perform n*m different calibrations in this case. A modification to this technique would instead require the user to perform calibrations only once for each printer P and once for each program A, a total of n+m-1 calibrations. For example, the user could calibrate each of the m printers P1 through Pm using program A1, and then calibrate each of the n programs A1 through An using printer P1. From these n+m-1 calibrations, the rest of the n*m calibrations could be derived for use in the correction process, using the following principle. Assume that we have calibrated the following combinations:
1. Program A1 with printer P1
2. Program A1 with printer P2
3. Program A2 with printer P1
We will now show, with reference to Table 1, how to derive from this information the calibration information for the combination:
4. Program A2 with printer P2
Table 1
C1 → program A1 → (C1') → printer P1 → scanner → C2
C1 → program A1 → (C1') → printer P2 → scanner → C3
C4 → program A2 → (C1') → printer P1 → scanner → C2
?? → program A2 → (C1') → printer P2 → scanner → C3
Assume that color C1, when processed by program A1 and printed on printer P1, appears to the scanner as color C2. Then, as described above, if the color C2 is subsequently scanned, the color C1 is substituted for it so that the print produced by program A1 and printer P1 looks to the scanner like the original image. Now consider the color produced as an intermediate step between program A1 and printer P1. Let us call this color C1'. We have no way of measuring the value of this color because it is an intermediate step in the output process.
Now assume that for program A1 and printer P2, color C1 prints as color C3. Note that the substitution of P2 for P1 does not change the value of C1', since C1' depends only on processing by program A1. Further assume that for program A2 and printer P1, a different color, which we shall call C4, prints as color C2. Because the relationship between the input and the output of the printer is continuous and well-behaved, it is reasonable to assume that the input to printer P1 (and thus the output of program A2) was again C1'. Since this intermediate color C1' will print as color C3 on printer P2, and since we already know that program A2 will convert color C4 to color C1', we know that for the combination of program A2 and printer P2, the color C4 will print as C3. Thus we have derived that the last line in Table 1 would be
C4 → program A2 → (C1') → printer P2 → scanner → C3.
We know this without having to perform the calibration on this combination of program and printer explicitly. Given any three of the following calibrations:
(a table listing the four combinations: program A1 with printer P1, program A1 with printer P2, program A2 with printer P1, and program A2 with printer P2)
we can derive the fourth without having to perform the calibration explicitly.
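The derivation above can be expressed as a simple composition of the three measured calibrations. The following Python sketch is illustrative only and relies on the stated assumption that the intermediate color C1' depends only on the program, not on the printer; the placeholder color values are not from the patent.

```python
def derive_missing_calibration(m_a1_p1, m_a1_p2, m_a2_p1):
    """Each map sends an image color to the color the scanner sees in the print.
    Returns the derived map for program A2 with printer P2."""
    # Invert A1/P1 so we can recover which C1 produced a given scanned C2.
    scanned_to_c1 = {scanned: c1 for c1, scanned in m_a1_p1.items()}
    m_a2_p2 = {}
    for c4, scanned_c2 in m_a2_p1.items():
        c1 = scanned_to_c1[scanned_c2]   # same intermediate C1' as A1/P1
        m_a2_p2[c4] = m_a1_p2[c1]        # that C1' prints as C3 on printer P2
    return m_a2_p2

if __name__ == "__main__":
    m_a1_p1 = {"C1": "C2"}
    m_a1_p2 = {"C1": "C3"}
    m_a2_p1 = {"C4": "C2"}
    print(derive_missing_calibration(m_a1_p1, m_a1_p2, m_a2_p1))  # {'C4': 'C3'}
```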
Alternative Embodiments
If desired, the color substitutions performed by the correction processes of Steps 113, 305 and 309 can be based on any suitable technique, such as lookup tables, approximations using fitted correction curves, or other means which are well known in the prior art. Accordingly, it is not necessary to include all possible colors C1 in the calibration image; a representative subset of the colors can be used, in which case the calibration information for the rest of the colors can be derived from the calibration information of the representative subset using suitable interpolation techniques, as are well known.
Any colors C1 which cannot be produced by outputting any color C2 are said to be outside the range or gamut of the printing device; when one of these colors is scanned in Step 107, it can be replaced as part of process 113 by a similar color that is inside the gamut, using techniques as are well known (e.g., decreasing the saturation or purity of the color). Alternatively, all of the colors C1 can be adjusted in a consistent manner so that all of the adjusted colors can be produced on the printing device.
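As a hedged illustration of two of these techniques, the sketch below interpolates calibration data measured for a representative subset of grey values and applies a crude desaturation step for colors assumed to be outside the printer's gamut. The gamut handling and saturation factor are placeholders, not values from the patent.

```python
from bisect import bisect_left

def interpolate_calibration(sparse_lut, scanned_value):
    """sparse_lut: sorted list of (scanned C2, original C1) pairs measured for a
    representative subset of greys; linearly interpolate the remaining values."""
    xs = [c2 for c2, _ in sparse_lut]
    ys = [c1 for _, c1 in sparse_lut]
    i = bisect_left(xs, scanned_value)
    if i == 0:
        return ys[0]
    if i == len(xs):
        return ys[-1]
    t = (scanned_value - xs[i - 1]) / (xs[i] - xs[i - 1])
    return ys[i - 1] + t * (ys[i] - ys[i - 1])

def clamp_to_gamut(rgb, max_saturation=0.8):
    """Toy gamut mapping: pull the color toward grey (reduce saturation) so that
    it is assumed reproducible by the printing device."""
    grey = sum(rgb) / 3.0
    return tuple(grey + max_saturation * (c - grey) for c in rgb)

if __name__ == "__main__":
    lut = [(0, 0), (64, 80), (128, 150), (255, 255)]
    print(interpolate_calibration(lut, 96))  # value derived by interpolation (115.0)
    print(clamp_to_gamut((250, 10, 10)))     # saturated red pulled toward grey
```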
If desired, default calibration data can be provided such that the user is capable of obtaining good, though not necessarily optimal, results with their particular hardware (e.g. scanner and printer) without the need for the user to perform a calibration stage. For example, the vendor of a system constructed in accordance with the teachings of this invention can provide a plurality of sets of calibration data, each set corresponding to an associated model of scanner or printer. A user is then able to select the default calibration for his scanner and printer models. While this calibration data will provide highly accurate results, even greater accuracy can be provided should the user form his own calibration sets for the particular scanner(s) and printer(s) used.
In accordance with the teachings of this invention, a wide variety of image input, generation, manipulation, and output functions can be performed in a calibrated manner which adjusts for the idiosyncrasies of the input and output devices, as well as the methods used for manipulating the image data. For example, the teachings of this invention can be used in conjunction with a color separation process, i.e., a process that translates RGB color data into the CMYK color data needed by many printers. Alternatively, the teachings of this invention can be used as a new color separation process, replacing the traditional color separation process. In the first embodiment of this invention, this follows simply by using CMYK color space values for color C1 and RGB color space values for color C2, as shown in Table 2. In the second embodiment of this invention, this follows simply by using CMYK color space values for colors C1 and C2, while RGB color space values are used for color C3, as shown in Table 3 (see the sketch following Table 3 below).
Table 2 shows various alternative embodiments, which are not to be construed as limiting, showing the relationship between colors C1 and C2 of embodiment 1 described above.
TABLE 2
C1          C2
Grey1       Grey2
R1G1B1      R2G2B2
CMYK        RGB
Similarly, Table 3 shows various alternative color relationships between colors C1, C2 and C3 in embodiment 2, without intending this list to be limiting.
TABLE 3
C1          C2          C3
Grey1       Grey2       Grey3
R1G1B1      R2G2B2      R3G3B3
C1M1Y1K1    C2M2Y2K2    R3G3B3
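As an illustration of using the calibration itself as a color separation (Table 2, first embodiment, with C1 in CMYK and C2 in RGB), the following sketch builds a lookup from scanned RGB values to the CMYK values known to print as those colors. The patch values are invented for the example, and a practical system would interpolate rather than require exact matches.

```python
def build_separation_lut(cmyk_patches, scanned_rgb):
    """Pair each printed CMYK patch (C1) with the RGB color the scanner saw (C2)."""
    return {rgb: cmyk for cmyk, rgb in zip(cmyk_patches, scanned_rgb)}

def separate(image_rgb, lut):
    """Replace each RGB pixel with the CMYK value known to print as that color.
    A real implementation would interpolate between nearby LUT entries."""
    return [lut[rgb] for rgb in image_rgb]

if __name__ == "__main__":
    patches_cmyk = [(0.0, 1.0, 1.0, 0.0)]   # a red-producing CMYK patch
    scanned_rgb = [(200, 30, 25)]            # how the scanner saw that patch
    lut = build_separation_lut(patches_cmyk, scanned_rgb)
    print(separate([(200, 30, 25)], lut))    # -> [(0.0, 1.0, 1.0, 0.0)]
```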
Naturally, various alternative color combinations, also known as "color spaces", can be used with regard to one or more of the scanners and printers.
All publications and patent applications are herein incorporated by reference to the same extent as if each individual publication or patent application was specifically and individually indicated to be incorporated by reference. The invention now being fully described, it will be apparent to one of ordinary skill in the art that many changes and modifications can be made thereto without departing from the spirit or scope of the appended claims. For example, the teachings of this invention are equally applicable to any image input devices (scanners) and/or image output devices (printers), such as those described above.

Claims

WHAT IS CLAIMED IS:
1. A method for providing images from an imaging system comprising a computer, one or more printers, and one or more scanners, said method comprising the steps of:
performing a calibration operation comprising the steps of: generating a set of known calibration image data; producing a first set of derived image data from said set of known calibration image data; printing on one of said printers a calibration picture from such first set of derived image data; scanning said calibration picture via one of said scanners, and generating a set of scanned image data therefrom; comparing said known image data and said scanned image data; and creating a set of calibration data as a result of said comparison; and
performing a correction operation comprising the steps of: scanning an input picture via one of said scanners and generating a set of input image data; using said calibration data to anti-distort said input image data and create a set of anti-distorted image data; producing a second set of derived image data from said anti-distorted image data using said program; and printing on one of said printers said input picture as an output picture utilizing said second set of derived image data.
2. The method as in Claim 1, wherein said step of producing a first set of derived image data comprises the step of providing altered image data using a program having characteristics which need not be known to said method.
3. The method as in Claim 1, wherein the appearance of said output picture is substantially similar to the appearance of said input picture.
4. The method as in Claim 1, wherein said anti-distortion compensates for the distortions caused during said steps of scanning, altering, and printing.
5. The method as in Claim 1, wherein said method serves as a color separation process or means for converting from one color space to another.
6. The method as in Claim 5, wherein said color separation process converts from RGB color space to CMYK color space.
7. A method for providing images comprising the steps of:
performing a calibration operation comprising the steps of:
    generating a first set of known image data;
    producing a first set of derived image data from said first set of known image data;
    printing a first calibration picture from such first set of derived image data;
    scanning said first calibration picture and generating a first set of scanned image data therefrom;
    performing a first comparison of said first set of known image data and said first set of scanned image data, and creating a first set of calibration data as a result of said comparison;
    scanning a second known calibration picture and generating a second set of scanned image data therefrom;
    performing a second comparison of said second set of scanned image data with a second set of known image data, said second set of known image data being previously stored as known accurate data corresponding to said second known calibration picture;
    generating a second set of calibration data from said second comparison, said second set of calibration data being associated with distortions introduced by said scanning process; and
    utilizing said first and second sets of calibration data to generate a third set of calibration data, said third set of calibration data being associated with distortions introduced by said printing process; and
performing a correction operation comprising the steps of:
    scanning an input picture and generating a set of input image data;
    using said second set of calibration data to anti-distort said set of input image data and creating a set of linearly proportional image data which has been corrected for distortions introduced by said step of scanning said input picture;
    producing a set of edited image data from said set of linearly proportional image data using a second program;
    using said third set of calibration data to anti-distort said set of edited image data, thereby creating a set of anti-distorted image data which has been compensated for distortions which will be introduced by the to-be-performed steps of altering and printing;
    producing a second set of derived image data from said set of anti-distorted image data using said first program; and
    printing an output picture utilizing said second set of derived image data.
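The following sketch, again not part of the claims, illustrates how the third set of calibration data of Claim 7 can be factored out of the first two under a deliberately simple model in which each distortion is a per-channel gain; the actual calibration data are not limited to such a model, and all measurement values below are invented.

```python
# Not part of the claims: a deliberately simple model in which each distortion is
# a per-channel gain, so the printer-only calibration (third set) can be factored
# out of the combined print+scan measurement (first set) and the scanner-only
# measurement (second set).

def measure_gains(known, measured):
    """First and second calibration sets: the gain each channel experiences."""
    return [m / k for k, m in zip(known, measured)]


def printer_gains(combined, scanner):
    """Third calibration set: the combined gain with the scanner's share removed."""
    return [c / s for c, s in zip(combined, scanner)]


def anti_distort(values, gains):
    """Divide by the measured gain so that the device's distortion cancels out."""
    return [v / g for v, g in zip(values, gains)]


# Calibration operation (invented RGB measurements of a 50% grey patch).
known_patch          = [0.50, 0.50, 0.50]
printed_then_scanned = [0.40, 0.45, 0.55]   # first comparison: print, then re-scan
scanned_reference    = [0.48, 0.50, 0.52]   # second comparison: scan of a known picture

combined = measure_gains(known_patch, printed_then_scanned)   # print and scan together
scanner  = measure_gains(known_patch, scanned_reference)      # scan only
printer  = printer_gains(combined, scanner)                   # print only

# Correction operation: undo the scanner before editing, then pre-compensate
# for the printer before the edited image is printed.
scanned_input = [0.30, 0.60, 0.20]
linear_data   = anti_distort(scanned_input, scanner)   # linearly proportional image data
edited_data   = linear_data                            # editing step omitted in this sketch
to_printer    = anti_distort(edited_data, printer)
print(to_printer)
```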
8. The method as in Claim 7, wherein said step of producing a first set of derived image data comprises the step of providing altered image data using a program having characteristics which need not be known to said method.
9. A method as in Claim 7, wherein said second program performs operations which edit said image data but does not introduce unwanted distortions.
10. The method as in Claim 7, wherein the appearance of said output picture is substantially similar to the appearance of said input picture.
11. The method as in Claim 7, wherein said anti-distortion compensates for the distortions caused during said steps of scanning, altering, and printing.
12. The method of Claim 7, wherein said method serves as a color separation process or means for converting from one color space to another.
13. The method as in Claim 12, wherein said color separation process converts from RGB color space to CMYK color space.
PCT/US1991/004882 1990-07-12 1991-07-10 Method and structure for calibrating a computer generated image WO1992001264A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US55146190A 1990-07-12 1990-07-12
US551,461 1990-07-12

Publications (1)

Publication Number Publication Date
WO1992001264A1 true WO1992001264A1 (en) 1992-01-23

Family

ID=24201367

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1991/004882 WO1992001264A1 (en) 1990-07-12 1991-07-10 Method and structure for calibrating a computer generated image

Country Status (2)

Country Link
AU (1) AU8327991A (en)
WO (1) WO1992001264A1 (en)

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992015957A2 (en) * 1991-03-04 1992-09-17 Eastman Kodak Company Production of second-generation camera-original control tool photographies via photography of digitally-generated transparency of an original scene
EP0518525A2 (en) * 1991-06-12 1992-12-16 Hewlett-Packard Company Automated image calibration
EP0565283A1 (en) * 1992-03-29 1993-10-13 Scitex Corporation Ltd. Apparatus and method for tone and color reproduction control
US5416613A (en) * 1993-10-29 1995-05-16 Xerox Corporation Color printer calibration test pattern
EP0674429A2 (en) * 1994-03-25 1995-09-27 Eastman Kodak Company Field calibration method and apparatus for color image reproduction system
EP0735504A2 (en) * 1995-03-30 1996-10-02 Kabushiki Kaisha TEC Colour printer
EP0744863A1 (en) * 1995-05-25 1996-11-27 Mita Industrial Co. Ltd. Color correcting device and color correcting method
EP0785672A1 (en) * 1996-01-11 1997-07-23 Eastman Kodak Company System for creating a device specific colour profile
WO2000044164A1 (en) * 1999-01-22 2000-07-27 Electronics For Imaging, Inc. Automatic scanner calibration
US6215562B1 (en) 1998-12-16 2001-04-10 Electronics For Imaging, Inc. Visual calibration
WO2002015561A1 (en) * 2000-08-11 2002-02-21 Carl Zeiss Jena Gmbh Method for electronic image rectification in laser scanner devices
EP1349374A1 (en) * 2002-03-29 2003-10-01 Brother Kogyo Kabushiki Kaisha Image forming apparatus and image capturing apparatus
WO2004030342A1 (en) * 2002-09-26 2004-04-08 Kimberly-Clark Worldwide, Inc. Method of adjusting gray scale response to more closely correlate scanner based image analysis systems
US7895011B2 (en) 2008-12-17 2011-02-22 Bausch & Lomb Incorporated Method and apparatus for performing remote calibration verification

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4757389A (en) * 1987-06-15 1988-07-12 Xerox Corporation Calibration improvement with dither for a raster input scanner
US5018008A (en) * 1988-08-11 1991-05-21 Dainippon Screen Mfg. Co. Ltd. Method of and apparatus for setting color separation

Cited By (25)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1992015957A3 (en) * 1991-03-04 1992-11-12 Eastman Kodak Co Production of second-generation camera-original control tool photographies via photography of digitally-generated transparency of an original scene
WO1992015957A2 (en) * 1991-03-04 1992-09-17 Eastman Kodak Company Production of second-generation camera-original control tool photographies via photography of digitally-generated transparency of an original scene
EP0518525A2 (en) * 1991-06-12 1992-12-16 Hewlett-Packard Company Automated image calibration
EP0518525A3 (en) * 1991-06-12 1993-11-10 Hewlett Packard Co Automated image calibration
EP0565283A1 (en) * 1992-03-29 1993-10-13 Scitex Corporation Ltd. Apparatus and method for tone and color reproduction control
US5416613A (en) * 1993-10-29 1995-05-16 Xerox Corporation Color printer calibration test pattern
EP0674429A2 (en) * 1994-03-25 1995-09-27 Eastman Kodak Company Field calibration method and apparatus for color image reproduction system
EP0674429A3 (en) * 1994-03-25 1996-05-22 Eastman Kodak Co Field calibration method and apparatus for color image reproduction system.
EP0735504A3 (en) * 1995-03-30 1998-04-08 Kabushiki Kaisha TEC Colour printer
EP0735504A2 (en) * 1995-03-30 1996-10-02 Kabushiki Kaisha TEC Colour printer
US5831658A (en) * 1995-03-30 1998-11-03 Kabushiki Kaisha Tec Printer device and method for printing deviation test patterns to measure deviations of printing positions
US5790261A (en) * 1995-05-25 1998-08-04 Mita Industrial Co., Ltd. Color correction device to correct errors in input and output portions of an image developing device and method thereof
EP0744863A1 (en) * 1995-05-25 1996-11-27 Mita Industrial Co. Ltd. Color correcting device and color correcting method
EP0785672A1 (en) * 1996-01-11 1997-07-23 Eastman Kodak Company System for creating a device specific colour profile
US6215562B1 (en) 1998-12-16 2001-04-10 Electronics For Imaging, Inc. Visual calibration
WO2000044164A1 (en) * 1999-01-22 2000-07-27 Electronics For Imaging, Inc. Automatic scanner calibration
US6327047B1 (en) 1999-01-22 2001-12-04 Electronics For Imaging, Inc. Automatic scanner calibration
US7812999B2 (en) 1999-01-22 2010-10-12 Electronics For Imaging, Inc. Methods and apparatus for automatic scanner calibration
US7212312B2 (en) 1999-01-22 2007-05-01 Electronics For Imaging, Inc. Automatic scanner calibration
US7102799B2 (en) 2000-08-11 2006-09-05 Carl Zeiss Jena Gmbh Method for electronic image rectification in laser scanner devices
WO2002015561A1 (en) * 2000-08-11 2002-02-21 Carl Zeiss Jena Gmbh Method for electronic image rectification in laser scanner devices
EP1349374A1 (en) * 2002-03-29 2003-10-01 Brother Kogyo Kabushiki Kaisha Image forming apparatus and image capturing apparatus
US6934055B2 (en) 2002-09-26 2005-08-23 Kimberly-Clark Worldwide, Inc. Method of adjusting gray scale response to more closely correlate scanner based image analysis systems
WO2004030342A1 (en) * 2002-09-26 2004-04-08 Kimberly-Clark Worldwide, Inc. Method of adjusting gray scale response to more closely correlate scanner based image analysis systems
US7895011B2 (en) 2008-12-17 2011-02-22 Bausch & Lomb Incorporated Method and apparatus for performing remote calibration verification

Also Published As

Publication number Publication date
AU8327991A (en) 1992-02-04

Similar Documents

Publication Publication Date Title
US5271096A (en) Method and structure for calibrating a computer generated image
JP3721206B2 (en) Image reproduction device
US5257097A (en) Method and apparatus for selective interception of a graphics rendering operation for effecting image data modification
EP1821518B1 (en) Personalized color reproduction
US7450281B2 (en) Image processing apparatus and information processing apparatus, and method thereof
US5212546A (en) Color correction system employing reference pictures
US5420979A (en) Method and apparatus for using composite transforms to form intermediary image data metrics which achieve device/media compatibility for subsequent imaging applications
JP3866304B2 (en) Reproduction method of natural scenery image
US20010035989A1 (en) Method, apparatus and recording medium for color correction
JPH05199410A (en) Color processing method
JPH05504455A (en) Calibration system for calibrating color and tone for printers using electronically generated input image data
WO1992001264A1 (en) Method and structure for calibrating a computer generated image
JP2001245168A (en) Color correction device, recording medium, storage device and color correction method
JP2001189875A (en) Configuration of profile to compensate nonlinearity of image acquisition
US5764796A (en) Image processing apparatus for and a method of preparing data representing a colour image
JP2001111858A (en) Color correction definition preparing method, color correction definition generator, and storage medium for color correction definition generation program
US7747073B2 (en) Method and apparatus for adjusting color profiles to meet a desired aim
JP2001285638A (en) Image processing method
US7369273B2 (en) Grayscale mistracking correction for color-positive transparency film elements
JPH08289143A (en) Color image picture quality adjusting device
JP2749102B2 (en) Printing simulation device
JP2001036762A (en) Device and method for preparing color correction definition
JP2001069365A (en) Method and means for generating high-quality digital reflection print from transparent picture
JPH0923447A (en) Picture processor and picture processing method
JPH06291996A (en) Color correction method between pictures reproduced by plural picture output devices

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AU CA JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IT LU NL SE

NENP Non-entry into the national phase

Ref country code: CA