WO2006031537A2 - Microplate analysis system and method


Info

Publication number
WO2006031537A2
Authority
WO
WIPO (PCT)
Prior art keywords
illumination
light
sample
detector
image
Application number
PCT/US2005/031772
Other languages
French (fr)
Other versions
WO2006031537A3 (en)
Inventor
David M. Heffelfinger
Robert M. Watson, Jr.
Charles S. Smith, III
Siavash Ghazvini
Christopher F. Bragg
John M. Collier, II
Gibson T. Lam
Original Assignee
Alpha Innotech Corporation
Application filed by Alpha Innotech Corporation
Publication of WO2006031537A2
Publication of WO2006031537A3

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 35/00 Automatic analysis not limited to methods or materials provided for in any single one of groups G01N 1/00 - G01N 33/00; Handling materials therefor
    • G01N 35/02 Automatic analysis not limited to methods or materials provided for in any single one of groups G01N 1/00 - G01N 33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations
    • G01N 35/028 Automatic analysis not limited to methods or materials provided for in any single one of groups G01N 1/00 - G01N 33/00; Handling materials therefor using a plurality of sample containers moved by a conveyor system past one or more treatment or analysis stations having reaction cells in the form of microtitration plates
    • G01J MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 3/00 Spectrometry; Spectrophotometry; Monochromators; Measuring colours
    • G01J 3/02 Details
    • G01J 3/10 Arrangements of light sources specially adapted for spectrometry or colorimetry
    • G01N 21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N 21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N 21/251 Colorimeters; Construction thereof
    • G01N 21/253 Colorimeters; Construction thereof for batch operation, i.e. multisample apparatus
    • G01N 21/62 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light
    • G01N 21/63 Systems in which the material investigated is excited whereby it emits light or causes a change in wavelength of the incident light optically excited
    • G01N 21/64 Fluorescence; Phosphorescence
    • G01N 21/6428 Measuring fluorescence of fluorescent products of reactions or of fluorochrome labelled reactive substances, e.g. measuring quenching effects, using measuring "optrodes"
    • G01N 21/645 Specially adapted constructive features of fluorimeters
    • G01N 21/6452 Individual samples arranged in a regular 2D-array, e.g. multiwell plates
    • G01N 21/6456 Spatial resolved fluorescence measurements; Imaging
    • G01J 1/00 Photometry, e.g. photographic exposure meter
    • G01J 1/02 Details
    • G01J 1/04 Optical or mechanical part supplementary adjustable parts
    • G01J 1/0488 Optical or mechanical part supplementary adjustable parts with spectral filtering
    • G01J 3/12 Generating the spectrum; Monochromators
    • G01J 2003/1213 Filters in general, e.g. dichroic, band
    • G01J 2003/1217 Indexed discrete filters or choppers
    • G01J 3/0205 Optical elements not provided otherwise, e.g. optical manifolds, diffusers, windows

Definitions

  • the present invention relates to methods and devices for optical analysis and more specifically relates to analyzers using area array detectors.
  • Optical analyzers have been adapted to assay a number of different targets, such as arrays.
  • a number of different types of analyzers have been developed. These include laser-based array scanners, microscope-based imaging detectors, flow cytometry systems, and imaging cytometers.
  • One such device is the Alpha Innotech AlphaArrayTM imager.
  • Fig. 1 shows a schematic of this device. Illumination light 180 from a broad spectrum light source 150 is directed through optical filter 160. The light is directed onto substrate 700. Printed on substrate 700 are spots 710. Spots 710 include a plurality of detectable moieties 720, 730. The wavelength selected by filter 160 provides the illumination light that excites fluorescence from at least one of the fluorescent moieties 720, 730.
  • the emitted fluorescent light 200 is collected by lens 120 and directed through emission filter 130 onto area array detector 140.
  • the broad spectrum light source may be an arc lamp, light emitting diode, or any source of illumination light of a selected wavelength. This would include arc lamps that provide substantially more illumination in some wavelengths than others.
  • The illumination filter 160 may include a selectable filter, such as a filter wheel in which any of a number of filters may be rotated into the pathway of the illumination light.
  • The illumination light directed through illumination filter 160 preferably passes through the filter at a location of parallel light rays. The illumination light may then be directed onto substrate 700 using a mirror, optical fiber, or other means.
  • Substrate 700 may be a glass or plastic slide, microplate well bottom, or any other solid or semisolid surface.
  • Spots 710 may be spots on a two-dimensional array of spots, such as a bioarray or DNA array, or could be non-ordered or ordered cells or beads on the substrate surface.
  • Although fluorescent moieties 720, 730 are shown as discrete spots on spot 710, it is also possible that these spots overlap.
  • Emission filter 130 may comprise a filter wheel or other changeable filter means that would allow a selectable filter to be rotated into the pathway of the collected light. In this way subsequent, potentially overlapping fluorescent dyes may be analyzed.
  • Area array detector 140 may be a two-dimensional array of photodiodes, a CCD detector, or any other detection means that allows simultaneous measurement of a plurality of distinct targets.
  • An implementation of this device is shown in Figs. 2 and 3.
  • The device comprises an image capture module 1, which is shown in further detail in Fig. 3.
  • an arc lamp with eight-position excitation filter wheel 2 allows illumination light of a selectable wavelength to be generated by a single source.
  • the illumination light is directed into a bifurcated optical fiber cable allowing off-axis illumination from two or more sides onto the sample.
  • Although the present configuration illustrates top illumination, illumination from the bottom is also possible.
  • the present configuration is an illumination configuration in which the illumination and collection of emitted light occurs from the same side of the substrate.
  • Transillumination, with illumination from one side of a substrate and collection of light from the other, is also possible.
  • a sample is held on a sample stage that is movable along the x-y axis.
  • The sample stage has one-micron movement capability for advancing and positioning samples with sufficient precision.
  • An autofocus motor and encoder 5 allows accurate focusing onto a sample.
  • Mounted on the sample stage is a twelve-position slide holder 6 allowing multiple substrates to be held and subsequently imaged.
  • Inset 1 of Fig. 2 shows the image capture module, composed of a cooled 1.3 megapixel CCD detector 11.
  • An upper imaging lens 12 focuses the image onto detector 11.
  • the lower objective lens 13 collects light emitted from a substrate held on the stage and directs the light through a selected filter on a positioned emission filter wheel 14.
  • this emission filter 14 is in an infinite focus region of the collected light. The filter will function most efficiently if parallel light rays are passed through this filter.
  • An emission wheel motor and encoder 16 controls the movement of an emission filter 14.
  • CCD detection allows the use of the rapidly advancing CCD detectors currently used in a number of photographic and movie recording devices. These provide efficient detection at relatively low cost over a large area.
  • The system illustrated in Figs. 2 and 3 is optimized for optically detecting biological samples on glass slides. The system was developed with 15 micron resolution, which is sufficient to detect some types of individual cells (generally 30-50 microns in diameter). However, this system is not able to resolve intracellular features. This working system uses an epi-fluorescence configuration.
  • The 1.3 megapixel cooled CCD detector allows progressive scan interline transfer, 50% quantum efficiency at 400 nm, 40% quantum efficiency at 550 nm, and 20% quantum efficiency at 700 nm. This detector has dark current less than 0.003 electrons/second/pixel.
  • The use of a cooled CCD for the detector allows a tradeoff between long integration periods yielding high sensitivity on one hand, and short integration times for kinetic studies yielding lower sensitivity on the other.
  • The individual views of each CCD exposure may be combined using a cross-correlation algorithm into a single mosaic image, as sketched below.
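The tiling step above can be illustrated with a short sketch. The following Python fragment is an illustration only, not the algorithm used in the instrument: it estimates the offset between two overlapping tiles by FFT-based phase correlation, after which tiles would be pasted into a mosaic at the estimated offsets. The tile sizes and contents are hypothetical.

```python
# Minimal sketch of combining overlapping CCD views: estimate the relative offset of
# two tiles by phase correlation (a common cross-correlation variant).
import numpy as np

def estimate_shift(tile_a, tile_b):
    """Return the (row, col) offset of tile_b's content relative to tile_a."""
    fa = np.fft.fft2(tile_a)
    fb = np.fft.fft2(tile_b)
    cross_power = fa * np.conj(fb)
    cross_power /= np.abs(cross_power) + 1e-12       # keep phase information only
    corr = np.fft.ifft2(cross_power).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Offsets larger than half the tile wrap around; map them back to negative values.
    return tuple(p if p < s // 2 else p - s for p, s in zip(peak, corr.shape))

# Synthetic example: two 100 x 100 pixel tiles cut from one scene, offset by (3, 5).
rng = np.random.default_rng(0)
scene = rng.random((140, 140))
tile_a = scene[20:120, 20:120]
tile_b = scene[23:123, 25:125]
print(estimate_shift(tile_a, tile_b))   # expected to recover the (3, 5) pixel offset
```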
  • a single microscope slide can be read in 15 seconds.
  • The current 0.13 numerical aperture lens system is not based on a microscope objective, allowing for an aperture of 25 millimeters. This lens is well matched at approximately 0.5 magnification to the large format 1.3 megapixel CCD. This in turn provides the large working distance that can accommodate the full height of a microplate.
  • The light source is a white light source with relatively flat spectral illumination from the UV to the near infrared region, allowing a large number of optical visualization techniques to be applied.
  • Fig. 1 is a schematic view of a prior art optical analyzer.
  • Fig. 2 is a side view of an implementation of the optical analyzer conforming to the schematic of Fig. 1.
  • Fig. 3 is a side cross sectional view of a section of the analyzer of Fig. 2.
  • Fig. 4 is a plan view of an optical analysis system.
  • Fig. 5a is a side perspective view of an illumination optics.
  • Fig. 5b is an optical diagram of an illumination system.
  • Fig. 6 is an angled side view of the stage and collection and detection optics of the system.
  • Fig. 7 is a perspective view of the instrument housing.
  • Fig. 8 is an exploded view of the stage drawer and stage mounts.
  • Fig. 9 is an exploded view of the stage drawer.
  • Fig. 10 is a side view of the system showing the frame, stage and mounts, and light collection and detection optics.
  • Fig. 11 is a partially exploded view of a slide holder.
  • Fig. 12a is a perspective view of an alternative illumination system.
  • Fig. 12b is an optical diagram of one broad spectrum light source and one narrow wavelength additive light source with light rays from the narrow wavelength additive light source shown.
  • Fig. 12c is an exploded view of the LED component shown in Fig. 12a.
  • Fig. 13 is a side perspective view of a kinematic mount.
  • Fig. 14 is a cartoon showing the orientation of a sample plane with respect to the detector surface.
  • Figs. 15a, b are plan views of stage configurations.
  • Fig. 15c is the inverted view of a sample drawer and stage with x-y-z axis arms shown.
  • Illumination source 202 is preferably a broad spectrum light source.
  • a light source with narrow spectral wavelength bands may be used.
  • the use of such a light source allows higher power levels at specific wavelengths that correspond to useful targets.
  • a subsequent filter can allow for selection of a specific illumination wavelength.
  • a broad spectrum light source allows use of a single light source for a large number of different dyes.
  • a narrow band light source generally has several strong spectral bands, has low cost, and has very high efficiency at these wavelengths.
  • One preferred light source is an arc lamp.
  • Two possible drawbacks with the use of arc lamps are the heat and UV generated and the non-uniformity of the light produced. Any light source is going to have some variation in light intensity, typically with a Gaussian distribution having a higher intensity at the center and a lower intensity at the edges.
  • a xenon arc lamp having an arc width of 1.8 mm FWHM (full width, half maximum) is used. This produces a broad spectrum light source with relatively flat spectral profile across the visible spectrum.
  • small arc sources are preferred to large arc sources.
  • By small arc source is meant an arc lamp having an arc that is both small in volume and bright in intensity.
  • the volume of an arc lamp is measured by referring to the diameter of a sphere that encompasses a certain percent of the total energy of the arc. Common energy thresholds are 50% and 95% of total energy.
  • The intensity of an arc lamp is measured in units of Watts (radiant intensity) or lumens (visible intensity).
  • Another possible arc lamp is a mercury arc lamp.
  • This provides an illumination source producing a bright illumination beam across a number of useful wavelengths. This provides a very intense light source, allowing enhanced illumination strength and greater sensitivity.
  • One such source is a mercury arc lamp with a 1 mm FWHM arc diameter.
  • a number of arc lamps (such as xenon) have output that is flat across a number of wavelengths.
  • mercury arc lamps have an output that is characterized by narrow spectral lines. The use of this narrow spectral illumination source produces less autofluorescence from some targeted substrates, such as glass.
  • the system is adaptable to a number of targets, including fluorescent dyes and quantum dots.
  • An alternate light source is a laser. While the cost of a laser is high and the electrical efficiency is often low (semiconductor lasers may have good electrical efficiency), the spectral lines of a laser are very well matched to certain fluorescent dyes and to quantum dots.
  • One disadvantage of lasers is their non-uniform irradiance. Generally, lasers have an irradiance pattern that is Gaussian, although many lasers (especially semiconductor lasers) may have non-Gaussian non-uniformities.
  • Another alternate light source is an LED. The cost of an LED is very low, its electrical efficiency is high, and its spectral bands are very well matched to certain fluorescent dyes and to quantum dots.
  • One disadvantage of LEDs is their non-uniform irradiance. Generally, LEDs have an irradiance pattern that is Gaussian, although many LEDs may have non-Gaussian non-uniformities.
  • the above light sources can be classified as either broad spectrum light sources or narrow spectrum light sources.
  • Broad spectrum light sources approach a continuum across the visible light spectrum.
  • Arc lamps represent one example of broad spectrum light sources.
  • For narrow spectrum light sources such as lasers, multiple lasers will have to be included to obtain the required excitation wavelengths if lasers are used for the illumination source.
  • a single broad spectrum light source may be used for a variety of targeted dyes. In the present systems, use of a broad spectrum illumination source is preferred.
  • Another alternative is to combine two or more light sources in order to take advantage of their respective advantages.
  • One embodiment of such a combination light source is two light sources normal to each other with a dichroic beam splitter used to transmit certain spectral bands of one light source and reflect other spectral bands of the second light source.
  • Each module consists of an LED, a mount for all the optics of the module, a lens (or a mirror), and a filter.
  • The LED booster can be used with any broad spectrum arc lamp or a narrow band arc lamp to enhance the optical excitation power available for fluorescent dyes. This is particularly useful if the arc lamp is of the type that has very narrow spectral lines. However, the LED booster may also be used with arc lamps to increase the spectral intensity of any desired region. A combination of boosters of different wavelengths may also be useful. LEDs may be turned on one at a time or all at once. LEDs may also be pulsed quickly, which can be very useful in time-resolved measurements.
  • the LED module is designed to illuminate at only one wavelength band that may either 1) fill in a gap in the wavelengths emitted by another illumination source or 2) provide greater intensity of illumination at one specific wavelength.
  • the wavelength produced by the LED may correspond to a certain fluorescent dye.
  • An LED module constructed in this way provided 80% of the power produced by a xenon arc lamp-filter combination in the wavelength band for Cy5 dye excitation.
  • One possible arrangement of the illumination sources is illustrated in Figs. 12b and 12c.
  • The broad spectrum light source 300 is mounted in the center, with the LED illumination sources 302a-302d arranged in a ring surrounding the broad spectrum light source and angled such that all of the light sources target a single optical element, such as the heat elimination optic or the light integrator.
  • LED illuminators 302a-d are illustrated for illustrative purposes only; in practice, anywhere from one to more than ten LED illuminators may be used. If the LED illuminators are used to fill in for wavelengths which are less strongly produced by the central illumination source, the plurality of LEDs allows a design in which each LED fills in for one specific wavelength. Alternatively, if additional excitation light for one specific dye is required, the LEDs may all emit at that specific dye's illumination wavelength. This allows greater sensitivity of the system. In Fig. 12b, the rays of illumination light are shown. The light from the supplemental illumination source is focused by a condensing lens onto a target optical element, such as the heat and UV filter.
  • LED sources are commonly not uniform in their illumination patterns.
  • The present system includes an integration optic that results in uniform illumination light.
  • Control of the activation of the additional LED illuminators would be by a central controller. Regulation of this illumination source allows an initial determination of the presence of a selected dye, followed by use of the additional illumination sources to provide additional illumination strength at the specified wavelength.
  • The described LED illuminators are low cost, relatively small (adding little to system weight or footprint), may be simply mounted, and use relatively little power.
  • The direct power measurement of the red LED puts it at about 80% of the Cy5 exciter xenon-driven illumination.
  • The image showed this same increase in signal intensity but a nearly 2x increase in signal to noise for the LED illumination, because the background was halved with the LED illumination.
  • Alternate embodiments will use more than one LED to achieve significant improvement in overall power and signal to noise ratio. For example, it would be expected that the use of ten LEDs would produce an increase of power in the Cy5 excitation spectrum of 800%.
  • The signal to noise improvement expected from such a design would be approximately the square root of 8. Due to the lower background of the LED, the signal to noise ratio could improve even more. Similar improvements of greater or lesser degree can be expected by using other LEDs with different wavelength properties.
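A worked estimate consistent with the figures quoted above, under the simplifying assumption that detection is shot-noise limited (background and read noise neglected):

```latex
% Shot-noise-limited case: SNR = S/\sqrt{S} = \sqrt{S}, so multiplying the excitation
% power (and hence the signal S) by a factor k improves the SNR by \sqrt{k}.
\[
  \mathrm{SNR} \propto \sqrt{S},
  \qquad
  \frac{\mathrm{SNR}_{10\ \mathrm{LEDs}}}{\mathrm{SNR}_{\mathrm{lamp}}}
  \approx \sqrt{\frac{10 \times 0.8\,P_{\mathrm{lamp}}}{P_{\mathrm{lamp}}}}
  = \sqrt{8} \approx 2.8 .
\]
```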
  • One LED that is adaptable is the Luxeon Emitter (Lumileds Lighting, L.L.C., San Jose, California).
  • the illumination beam from the arc lamp is directed through the infrared (IR) and ultraviolet (UV) removal optic.
  • As shown in Fig. 5b, this is composed of an IR and UV reflector 204 and an IR and UV absorbing element 206 mounted in a parallel configuration such that the illumination light passes first through the reflector 204 and second through the absorbing filter, which absorbs most of the remaining UV and IR light while allowing other wavelengths to pass through. It was found that the use of simply one IR reflector was not sufficient for the required blocking of UV and IR. However, the use of a first filter that reflects back more than 80% of the UV and IR, combined with a second filter that absorbs 99.9% or more of the IR, allows removal of substantially all of the UV and IR from the system.
  • the described configuration allows removal of more than 99.9% of UV and IR light without long-term degradation of any of the components of the optical system.
  • The absorbing filter cannot be used alone for two reasons: 1) solarization of the glass and 2) heat stress leading to breakage of the filter. Solarization of absorbing filters is caused by excessive UV, which degrades the performance of the filter. Heat stress is caused by uneven heating of the absorbing filter, leading to large mechanical stress build-ups in the filter that may cause breakage. In the present system a filter with a dielectric coating rejects 80% to 99% of UV and IR while transmitting on average 80% of visible light.
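A worked check of the two-stage rejection described above, taking the quoted lower-bound figures at face value (an assumption; actual coatings vary):

```latex
% The first stage reflects at least 80% of the IR and the second absorbs at least
% 99.9% of what remains, so the residual IR transmission is bounded by
\[
  T_{\mathrm{IR}} \le (1 - 0.80)\,(1 - 0.999) = 0.20 \times 0.001 = 2\times10^{-4} = 0.02\% ,
\]
% i.e. more than 99.9% of the IR is removed, while the absorbing filter only has to
% dissipate roughly 20% of the original IR load, easing solarization and heat stress.
```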
  • The useful range of this combination of filters is the transmission of wavelengths from 300 to 800 nm (herein, below 300 nm is considered UV and above 800 nm is considered IR). However, in some applications it may be desirable to reduce this range to minimize autofluorescence of optical components and hence background levels. So in other embodiments a range of 375 nm for the UV cutoff and 725 nm for the IR cutoff is preferred.
  • the reflective and absorptive coatings and materials may be selected to achieve these cutoffs.
  • the dielectric filter is 38 mm x 38 mm.
  • the dielectric filter is mounted on one side of a metal mount 205 that acts as a coarse aperture for light and as a heat sink.
  • A high temperature RTV adhesive is used to mount the filter to the metal mount.
  • An absorbing filter made of Schott KG5 glass further blocks UV and IR wavelengths.
  • the absorbing filter is two mm thick and 50 x 50 mm square.
  • the absorbing filter is mounted to the same metal mount as the dielectric filter with the same adhesive.
  • one drawback to the use of a high intensity illumination system is the production of heat and UV.
  • the present solution allows control of IR and UV and use of high intensity, small arc lamps. However, this configuration is useful for a variety of illumination sources that produce UV and IR.
  • the reflective layer may be a coating on the layer of absorptive material in some embodiments.
  • The light integration bar is an elongate, rectangular quartz bar.
  • The bar is internally reflective and allows propagation of the illumination light through the bar. As the light passes through the bar, the internal reflections form virtual images at the output plane of the bar. The summation of these virtual images acts to homogenize the light.
  • The length, cross-sectional area, and shape of the bar are all important for its function.
  • the non-radially symmetric shape of the bar allows enhanced light homogenization of the illumination light.
  • The length is sufficient that the various light rays entering the first end of the rod are reflected enough times that the rays coming out of the second end of the integration bar are substantially uniform.
  • The illumination light coming from the second end of the integration bar is substantially uniform, with greater than 80% intensity uniformity of light across the second end of the integration bar. This is shown in Fig. 17a, where a prior illumination intensity profile 510 may be compared to the new intensity profile.
  • This graph is a projected illustration of results one might expect from old type non-uniform illumination and a new, integrated illumination.
  • an integration bar with cross sectional area of 6 x 6 mm and length 50.8 mm was used.
  • A xenon arc lamp with an arc size of 1.8 mm FWHM and an elliptical reflector of focal length 27.6 mm and f/# of 1.3 illuminates the input plane of the bar.
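A rough, paraxial estimate (our assumption, not a figure from the text) of how many times the marginal rays fold inside a bar of these dimensions:

```latex
% For a light pipe of length L and width d, a ray at internal half-angle \theta'
% makes roughly N \approx L \tan\theta' / d reflections per transverse axis.
% An f/1.3 input cone gives \tan\theta = 1/(2 \times 1.3), i.e. \theta \approx 21^\circ
% in air, refracting to \theta' \approx 14^\circ inside fused silica (n \approx 1.46):
\[
  N \approx \frac{L \tan\theta'}{d} = \frac{50.8 \times 0.25}{6} \approx 2 ,
\]
% so the marginal rays fold over the input face about twice along each transverse
% direction, which is what smears out the image of the arc.
```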
  • the cross sectional area of the bar input plane should be large enough to encompass substantially all of the energy focused by the lamp reflector.
  • the material of the integrating bar is fused silica.
  • the integration bar serves a secondary function.
  • the end of the integration bar has a rectangular cross section and the illumination light retains this profile.
  • the field of view as imaged by the CCD is rectangular.
  • Fig. 17b shows a circular illumination system.
  • Circle 504 indicates the illumination area.
  • Rectangle 502 shows the detector area.
  • the area not within rectangle 502 but within circle 504 is wasted light.
  • As the illumination area is moved to analyze new areas on the slide, some of the detected area will already have been exposed to illumination light.
  • By illuminating with a rectangular illumination area adjacent areas may be illuminated without exposing edge regions to additional illumination light. It is preferred that all target spots be analyzed under as similar conditions as possible.
  • The creation of a rectangular illumination area prevents areas of the sample from being illuminated but not detected, and thus subject to photobleaching prior to a subsequent analysis.
  • Light entering the first end of the integration bar would substantially include the image of the arc lamp arc, and would have a brightness distribution reflective of this image. Light emerging from the second end of the integration bar no longer has the illumination profile of the image of the arc.
  • Optical principles governing the total internal reflection are the same as those relating to optical fibers. This allows the length and geometry of the bar to be selected such that all of the selected wavelengths of interest could be homogenized by the integration bar and the emerging light can be sufficiently uniform that the illumination light does not substantially contribute to non-uniformity in optical signal from targets illuminated by the illumination light.
  • The integrating bar need not have a rectangular cross section. However, as the order of the polygon formed by the bar increases, and begins to approximate a circle, the homogenizing property of the bar degrades for any fixed length. As the accompanying charts show, greater than 90% of the standard deviation in illumination variation may be removed by selecting an integration bar of various polygonal orders and lengths. A homogenizing bar of elliptical cross section may also be used.
  • The fourth-order polygon allows simplified tiling. While some other shapes (such as a hexagon) also could be tiled, the use of a rectangular integration rod allows off-axis illumination while still maintaining a rectangular illumination pattern. As the sample is moved, adjacent areas can be illuminated without complex methods to ensure that no areas are omitted from the optical analysis.
  • An additional advantage of the light homogenization is the better distribution of the light energy across the surface of optical elements.
  • Without homogenization, light is locally concentrated, and the optical element upon which such light impinges needs to be able to withstand the energy from the more intense areas of light.
  • filters could melt or degrade at areas of intense light.
  • Adding a light integration rod to make the illumination light more uniform before it impinges on the illumination filter greatly increases the amount of light that can be used by the system without degrading filter performance. Additional discussion of light homogenization is found in SPIE, Vol. 4768 (2002).
  • The expected effect on light intensity from parallel and angled rays is illustrated in Fig. 18. Although this lens is shown with a fixed mount in Fig. 5a, it is envisioned that the lens could be mounted on a movable mount, or a motorized movable mount, such that the distance between the end of the integration bar 226 and the lens 208 could be controlled. In this way the illumination area could also be controlled. This feature is particularly desirable in systems having variable optical resolution.
  • Illumination filter wheel 210 holds a number of filters 214a-d.
  • the use of a broad spectrum light source combined with selectable filters allows selection of a specific illumination wavelength. This in turn allows use of any of a number of dyes.
  • Motor 212 is used to rotate filter wheel 210 to position the filters of the filter wheel in the path of the illumination light.
  • Each of filters 214a-d is removable from the wheel by a simple mount that attaches to the wheel but may be detached for exchange of filters.
  • Motor 212 is controlled by a system electronic control such that a user may simply select an illumination wavelength or specify a dye on the target, and the illumination wavelength would be automatically selected by rotation of the proper filter into the illumination pathway.
  • Light passing through the illumination filter is reflected by steering mirror 218 onto illumination focus lens 222.
  • The steering mirror is mounted on mount 220, allowing the position of the mount to be adjusted. Adjustment of the steering mirror causes the image formed by the illumination focus lens to move. The steering mirror is thus used for fine alignment of the image produced by the illumination focus lens to the desired field of view.
  • the illumination focus lens has a diameter of 38.10 mm, a focal length of 51.6 mm and is made of BK7 optical glass.
  • the focal length of the illumination focus lens is chosen to produce an image of the output face of the integrating bar at the desired magnification on the field of view.
  • With off-axis illumination, a square object will be imaged as a rectangle, with the aspect ratio of the rectangle determined by the angle that the illumination beam forms with the optical axis of the CCD-lens system. Off-axis illumination also produces some non-uniformity of light intensity in the field of view.
  • The degree of this off-axis non-uniformity is also dependent on the angle of incidence of the illumination beam.
  • the non-uniformity is greater at higher angle of incidence. For this reason, grazing incidence illumination is not desirable.
  • the illumination beam angle of incidence is 45 degrees.
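The geometric stretching mentioned above can be written down directly; this is a first-order estimate of ours that ignores defocus across the tilted field:

```latex
% A square illumination image of side s, projected onto the sample at an angle of
% incidence \theta (measured from the substrate normal), is stretched by 1/\cos\theta
% along the tilt direction:
\[
  \text{aspect ratio} \approx \frac{1}{\cos\theta},
  \qquad
  \theta = 45^\circ \;\Rightarrow\; \frac{1}{\cos 45^\circ} = \sqrt{2} \approx 1.41 .
\]
% The illumination optics (or the proportions of the integration-bar face) can be
% chosen to pre-compensate so the footprint on the sample matches the rectangular
% field of view.
```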
  • the above combination of components provides an illumination system that is highly efficient and provides light of a geometry that may be selectively designed to illuminate only the specified area of the target field of view.
  • The combination of elements makes possible the use of intense arc lamp light sources having small arc widths and producing intense illumination light.
  • For each optical element included in an illumination system there is some cost in light loss, which varies depending on the system component. In the present design, the number of optical elements is sufficiently few that loss is minimized.
  • The design is quite compact. For example, in prior systems in which an optical fiber is used for transmission of light, two additional lenses are required to focus the illumination light into the fiber and to focus the light emitting from the fiber. Such a configuration also requires considerable space.
  • the sample drawer 240 is configured to hold a microplate or a microplate dimensioned device.
  • a holder for up to four slides may be adapted to the microplate format.
  • such a holder is simply a microplate frame with the four sides of the frame having a lip and a groove, pins or other means for positioning the slides into place.
  • a slide holding frame can be a separate device that is able to both 1) securely hold the slide in place (assuming a fixed position in both translational and rotational axes with respect to the mechanical sample holder of the instrument) and 2) allow the slide to be manipulated by automation robotics that have been configured to process devices having specific dimensions.
  • One such slide holder is described in U.S. Patent No. 6,118,582, hereby expressly incorporated by reference herein. This includes a slide holder for holding one or more slides, having a generally rectangular frame and at least one slot for receiving a slide. Flexible retaining latches and retaining grooves are provided at each of the slots for facilitating the securing of the slides.
  • a frame 503 has a plurality of slide holding regions 507 defined by pegs 509. Pegs define a region confining the back and sides of the slide to a specific horizontal and vertical position on the holder, with the slide resting on a lip on frame 503.
  • a winged biasing clip secures the slide into place.
  • Side bars 504 are attached by arms 515 to central bar 517, forming the winged biasing clip.
  • a bolt 501 is secured through washer 502, through bar 517 and attached to frame 503.
  • a single winged structure provides a simple biasing force for two slides using a structure (winged biasing clip) that is about as tall as a standard microplate.
  • the pins may be selected to be as high as the winged biasing clip, to more easily allow for stacking of the devices, as in the magazine of a robotic processing device.
  • This holder allows four slides (e.g., nucleic acid array slides) to be held by a holder and analyzed together.
  • the device may be simply manufactured of metal or plastic or any other suitable material. The force of the winged biasing clip is sufficient that even during automated movement of the holder either by a robotic loader or by the stages of the instrument, the slides maintain a fixed position.
  • An alternative holder/adapter could be used to allow scanning of a variety of objects, including gels, blots, or other samples. Such a sample would rest on a targeted substrate, such as a glass plate, positioned on the edges of the sample drawer.
  • the illustrated holder may be tailored to hold slides made to various international or national standards, other glass or plastic substrates, or other non-slide shaped substrates, such as custom protein chips.
  • the sample drawer includes a plurality of pins 242 to secure the sample substrate into a fixed location.
  • a biasing bar 250 exerts a force on the device held on the stage to press the device against the pins and hold the sample in place during scanning.
  • biasing bar 250 forces the sample substrate against pins 242 and prevents the sample substrate from moving during scanning.
  • the sample is at a fixed position, "corner crowded" into the sample drawer in a unique position as to both rotational and translational axes.
  • The sample drawer 240 is mounted to the side of the z-axis arm. The sample drawer may then simply be unbolted and replaced with a new sample holding device if such a change is required.
  • The sample mount, which is described below, allows positioning of the drawer in a fixed rotational position.
  • a z-axis mounting bracket 460 is attached to drawer top piece 452 which is joined to drawer bottom piece 450.
  • a screw 462 slides through a hole in top piece 452 and is affixed into bottom piece 450.
  • A spring 464 is mounted annularly about screw 462 such that the spring presses against the head of screw 462, producing a biasing force of piece 450 against piece 452.
  • The sample drawer 240 is mounted on a z axis 244 to allow selective positioning along the z axis.
  • the z axis is in turn mounted on an x axis 246, which is mounted on the y axis 248.
  • the x axis, y axis and z axis are mechanically linked to motors 254, 252 and 243 respectively, allowing the drawer to be moved in three translational dimensions.
  • These motors are precision stepper motors allowing 1 micron increment movement of the stage in each of the x- y-z directions.
  • One prospective use of the present system is for analysis using large format CCD arrays.
  • The detection surface is not flat with respect to the housing in which the detector is mounted. This is one of the reasons such large format detectors have not been adapted for use in analysis of targets that are a few dozen microns in diameter: the positioning of the stage, movement of the stage, and focusing on small targets are challenging when the detector surface is angled with respect to the stage.
  • The stage must be aligned with the detector not only along its x, y, and z axes (z axis to focus, x-y coordinates to select the area on a sample substrate), but also rotationally.
  • the sample drawer would be mounted on arms such that the sample substrate analyzed remains close to the focus plane throughout all travel on the mounting arms.
  • The pixels in the detector should see stage travel as purely vertical or purely horizontal with respect to the CCD.
  • Any optical mount's position can be defined uniquely in terms of six independent coordinates; three translations and three rotations with respect to some arbitrary fixed coordinate system.
  • A mount is said to be kinematic when the number of degrees of freedom (axes of free motion) and the number of physical constraints applied to the mount total six. This is equivalent to saying that any physical constraints applied are independent (non-redundant).
  • a kinematic optical mount therefore has six independent constraints.
  • the realized solution is to mount the sample drawer to the stage using adjustable kinematic mounts.
  • The advantages of a kinematic mount are increased stability, distortion-free optical mounting, and, in the case of a kinematic base, removable and repeatable repositioning.
  • Features of the mounts are illustrated in Figs. 8, 10, and 13.
  • The y arm is mounted on stage 280, which is fixed within the housing of the system.
  • The plate also includes kinematic mounts for adjustment of the yaw/pitch/roll positioning of the stage. Adjustments for the stage and drawer are made at kinematic mounts. These mounts include round tip set screws with the tips of the screws positioned into V-shaped grooves extending on a surface. With reference to Fig. 10, the system is mounted within a housing 292.
  • the housing is secured to an internal frame consisting of beams 350, 354, 352, 354, 358, 360.
  • the stage 280 is mounted on beams 350, 354 and 352.
  • One style of kinematic mount is a set of three V-grooves and three oval tip screws. A close-up of one V-groove and oval tip screw is shown in Fig. 13.
  • each of the three contact points includes two screws, a screw with a soft flat tip 388 and a round tipped screw 382, which extend through brackets 384 and 386 respectively.
  • the flat tip screw 388 is tightened against the flat surface of stage 280, while the round tip screw is fit into V-shaped groove 390. By adjusting these screws, the roll, pitch and height of the stage can be adjusted to match the plane of the detector surface.
  • A screw extending through arc-shaped grooves 402, 404 and into arm 248 allows adjustment of yaw rotation following the pitch and roll alignment.
  • The rotational alignment usually needs to be done only once, at the setup of the instrument. At that time, the stage would be put in a fixed position with respect to the detector surface. At the same time, the movement of the arms would be calibrated to ensure that the drawer moves the sample in relation to the detector surface such that the movement is not skewed and is simply an x-y translation of the sample substrate relative to the detector surface.
  • Misalignment between the sample and the area array detector can result in areas of the sample substrate plane being out of focus. Variation in focus makes compilation of images into a single image problematic.
  • the detector surface commonly is not level with respect to its housing and hence not orthogonal to the optical axis when viewed at the depth of field required for cell detection or microarray analysis.
  • the stage must be adjusted so that the angle of the detector surface matches as closely as possible the angle of the targeted substrate to the optical axis. This requires fine control of yaw, pitch and roll calibration of the sample stage.
  • the present kinematic mounts allow for such calibration.
  • Fig. 15a shows a configuration in which the sample holder could be positioned at the end of a long y positioning arm. However, this greatly increases the distance the stage must travel to analyze the sample and project the sample from the housing.
  • A structure on the sample stage, such as an extendable drawer, could be used for access to the sample from outside of the instrument.
  • The exterior housing of this embodiment of the instrument is shown in Fig. 7.
  • the stage and arms are configured to allow the sample holding stage to be extended through door 290 in housing 292.
  • the housing is optically sealed, allowing assay of devices held on the sample stage without interference from outside light.
  • this access allows compatibility with standardized robotics that are able to manipulate microplate substrates.
  • the sample drawer may be extended from the instrument housing such that the sample substrate has at least half an inch clearance around it. This allows a robotic gripper to reach down and grab the sample holding device. In addition, there are some robots that reach forward to grab the sample holding device.
  • the sample holding drawer may also allow the sample substrate to be grasped from the front or side. The use of robotics enhances the throughput of the imaging system.
  • The sample stage, sample drawer, and housing are adaptable to a range of sample holding devices. These would include slides made to various national/international standards, SBS standardized footprint multiwell plates, non-standardized plates, and customized sample holders. One such holder was described above. Other holders could include a microplate-dimensioned gel or membrane, or other similar device.
  • The present system includes z-axis stage movement that allows the sample to be moved up and down. This is similar to microscope focus. This focusing means allows for constant magnification in imaging a sample.
  • a microplate typically has a frame, with a substrate positioned within that frame at a specific depth.
  • D_tot is the thickness of the microplate, consisting of the well depth (D_well), the thickness of the well bottom, and the gap (D_gap) between the bottom of the substrate making up the well and the bottom of the microplate.
  • D_foc is the distance required to focus to the targeted plane, the microplate well bottom.
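Written out as an equation (the subscript names are ours; D_bottom denotes the thickness of the well bottom):

```latex
\[
  D_{\mathrm{tot}} = D_{\mathrm{well}} + D_{\mathrm{bottom}} + D_{\mathrm{gap}} ,
\]
% so D_foc, the working distance to the targeted plane (the well bottom), differs from
% one plate type to another, which is why the large z-axis focus range described below
% is useful.
```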
  • In an alternative approach, focusing of the sample would be done by movement of the detector lens relative to the sample.
  • In that approach the sample stage/sample drawer would be in a fixed location during focusing. This results in variable magnification.
  • In the present system the samples are moved for focusing, with the samples in focus at one plane with respect to the detector surface.
  • A cartoon of this configuration is shown in Fig. 16a. This is the z-stage focusing allowed by the z-arm movement in Fig. 6.
  • The samples, which are always the same distance from the detector, will all be detected at the same magnification (for the same sample types).
  • Another result of this change is the ability to focus through a greater depth.
  • The present design allows a greater than 10 mm z-axis focus range. This is rather important if the user wants to focus on a number of different targets, including microplates, that have a variety of distances to the target plane on which the sample is deposited.
  • One implementation allows focusing in the z-axis for 12.7 mm.
  • Targets held on a substrate on the sample stage are illuminated by the illumination system, exciting fluorescence from the targets.
  • this emitted light is collected by objective lens 270, which collimates the light and directs the collected emission light through a filter 264 on filter wheel 260.
  • the filter wheel is controlled by motor 262.
  • The light directed through filter 264 then impinges on the detector focusing lens.
  • The filter removes both light of the excitation wavelength and light emitted from the sample that is not of the selected wavelength (for example, autofluorescence from the sample, light from a different dye, etc.).
  • The objective lens is a 50 mm f/1.8 lens, such as a Nikon AF D 520 lens.
  • The detector lens is an 85 mm f/1.8 lens, such as a Nikon D AF (620).
  • This lens is mounted to the detector using a photographic lens mount.
  • the detector lens is mounted 1.615 inches from the detector with the back focal length of the lens facing the detector.
  • the objective lens is mounted approximately one focal length away from the sample target with the back focal length of the lens facing the sample.
  • The objective lens, detector lens, and detector are all fixed in position. Focusing is by stage movement. These elements would need to be aligned initially during instrument setup, and subsequently all focusing would be by movement of the sample stage.
  • higher resolution may simply be achieved by selection and positioning of the lens.
  • multiple lenses mounted on a turret would allow for a wide range of sample magnifications. This may range from the 15 ⁇ m/pixel resolution disclosed in U.S. Patent No. 6,271,042, to the 4 ⁇ m resolution of the above embodiment, to a less than 1 ⁇ m resolution for contemplated systems.
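The resolution figures above follow from the usual relation between detector pixel pitch and optical magnification; this is our reading of the numbers in the text rather than an explicit formula from it:

```latex
% Object-space pixel resolution r equals the detector pixel pitch p divided by the
% sample-to-detector magnification m. For the collimated lens pair of this embodiment
% (50 mm objective, 85 mm detector lens), m = 85/50 = 1.7, and
\[
  r = \frac{p}{m} = \frac{6.7\ \mu\mathrm{m}}{1.7} \approx 4\ \mu\mathrm{m} ,
\]
% matching the ~4 micron resolution quoted for this embodiment; higher resolution
% follows from choosing lenses that give a larger m.
```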
  • The above-described systems for alignment of the stage and uniform illumination make such resolution imaging feasible.
  • The two sets of filters, the illumination filter and the emission filter, allow a single light source to be used with this system for a variety of targeted dyes. To detect a specific optical label, the filter corresponding to the excitation wavelength is selected for the illumination filter and the filter for the emission wavelength is selected for the emission filter.
  • the lenses which collimate the illumination light prior to reaching the illumination filter and the collected light prior to reaching the detection filter aid in the efficiency of the filters.
  • Such filters function most efficiently in the areas of parallel light rays as previously described with reference to Fig. 18.
  • Efficiency is measured in two ways. First would be transmittance of the selected wavelength. Second is the blocking of nonselected wavelengths. Parallel rays enhance both these determinations of efficiency.
  • Emitted light focused by the detection focusing lens is detected by the area array detector 274.
  • This detector is a cooled 4 megapixel CCD, with 50% quantum efficiency at 400 nm, 40% quantum efficiency at 550 nm, dark current less than 0.1 electron/sec/pixel, and a 30,000 electron well capacity.
  • Such a detector has the following advantages.
  • The CCD or area array scanners defined in section 2, used with the rectangular area illuminator disclosed above, can detect an entire area of a sample substrate at one time. This allows for both repeated analysis with short integration times for kinetic studies, and longer integration times to enhance sensitivity. Both such integration times can be combined using a cross-correlation algorithm into a single mosaic image. It is generally agreed by those who are experts in microarray analysis that somewhere between 10 and 20 pixels are required to oversample a microarray spot. Thus, with a spot diameter of 100 microns, a pixel size of no more than 10 microns would be required and 5 microns would be preferred. The need for oversampling is to assess the quality of the spotting and binding by using morphological factors within the spot.
  • By pixel resolution is meant the linear dimension of a pixel on the detector projected to object space.
  • the detector pixel resolution is approximately 6.7 microns. When projected to object space it is approximately 4 microns. If the detector type is expanded to 4 MPixels, and the pixel resolution is held to 4 microns then the number of views required to cover a plate or slide is reduced to approximately 25 views and 125 views respectively.
  • The illumination area is the area of the detector projected into object space. For example, if the detector has a pixel size of 6.7 microns, and a pixel resolution of 4 microns in object space, then a detector with 1000 x 1000 pixels will produce an illumination area of 4 mm x 4 mm. In practice, it is desirable for the illumination area to be slightly larger than that strictly required to map to the detector, for purposes of ensuring that a region of the sample is never under-illuminated and for manufacturing efficiencies. Thus, in the above example an actual illumination area of 4.5 mm x 4.5 mm would be preferred. The tiling arithmetic is sketched below.
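A minimal sketch of this field-of-view and tiling arithmetic. The 1000 x 1000 pixel detector, 4 micron object-space pixel, and slightly oversized illuminated field come from the example above; the slide and plate dimensions in the usage lines are hypothetical illustrative values.

```python
# Minimal sketch of the field-of-view and tiling arithmetic described above.
import math

def field_of_view_mm(pixels, pixel_resolution_um, illumination_margin=1.125):
    """Detected field in mm, plus a slightly oversized illuminated field
    (e.g. 4.5 mm illuminated for a 4.0 mm detected field, as suggested above)."""
    detected = pixels * pixel_resolution_um / 1000.0
    return detected, detected * illumination_margin

def views_to_cover(sample_w_mm, sample_h_mm, fov_mm):
    """Number of detector fields needed to tile a rectangular sample area."""
    return math.ceil(sample_w_mm / fov_mm) * math.ceil(sample_h_mm / fov_mm)

detected, illuminated = field_of_view_mm(pixels=1000, pixel_resolution_um=4)
print(detected, illuminated)               # 4.0 mm detected, 4.5 mm illuminated
print(views_to_cover(75, 25, detected))    # tiles for a 75 x 25 mm slide (hypothetical)
print(views_to_cover(110, 72, detected))   # tiles for a microplate well area (hypothetical)
```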
  • An excitation filter for the Cy5 dye that has a bandpass of 70 nm with a center wavelength at 540 nm, together with the illuminator herein described, would pass a radiant power of at least several hundred milliwatts and preferably between 500 mW and 1 W.
  • Actual illumination intensity will vary by the type of filter used, and the type of arc lamp used (broad or narrow spectrum) .
  • Such an illumination area would contain from 400 to 1,600 spots of 100 micron diameter.
  • a single controller can control all moving elements (including all stepper motors and filter wheel motors) and obtain a detector signal.
  • a controller may be dedicated to each motorized component.
  • each motorized component has a slave controller which is controlled by a single master controller.
  • Limit switches are used to indicate the end of travel or start of travel for a motorized component.
  • Microswitches may be used as limit switches.
  • Microswitches can provide accuracies of perhaps tenths of a millimeter in the starting or ending position of a motorized component.
  • Microswitches may also be used to indicate special positions along the way of a motorized component. For example, a microswitch may indicate the location of a filter in a filter wheel. Use of microswitches enhances the overall accuracy and safety of the motion control system.
  • microswitches (such as those sold by CappUSA) are used to verify one or both ends of travel of a stage.
  • For the excitation filter wheel, one switch is used as a home switch (the “zero” filter position, which is actually “lights off”) and another switch is used to index filter positions 1-8.
  • For the emission filter wheel, only one microswitch is used for the home position and an encoder is used to track indexing to the higher positions. In an alternate embodiment the emission filter wheel could use two microswitches and no encoder. The homing and indexing logic is sketched below.
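A minimal sketch of the homing and indexing logic just described. The motor, switch, and encoder interface is a hypothetical stand-in, not the instrument's actual controller API, and the step counts are illustrative.

```python
# Sketch of filter-wheel homing/indexing: a home microswitch defines position 0
# ("lights off"); an encoder count from home indexes the remaining filter positions.
class FilterWheel:
    def __init__(self, motor, steps_per_position, n_positions=8):
        self.motor = motor                      # hypothetical driver object
        self.steps_per_position = steps_per_position
        self.n_positions = n_positions
        self.position = None                    # unknown until homed

    def home(self):
        """Rotate until the home microswitch closes, then zero the encoder."""
        while not self.motor.home_switch():
            self.motor.step(1)
        self.motor.zero_encoder()
        self.position = 0

    def goto(self, position):
        """Index to a filter position (0 = home/'lights off', 1..n = filters)."""
        if self.position is None:
            self.home()
        if not 0 <= position <= self.n_positions:
            raise ValueError("no such filter position")
        self.motor.step(position * self.steps_per_position - self.motor.encoder_count())
        self.position = position

class SimulatedMotor:
    """Stand-in so the sketch runs without hardware (purely hypothetical)."""
    def __init__(self):
        self.count = 0
    def step(self, n):
        self.count += n
    def home_switch(self):
        return self.count % 4000 == 0          # pretend the switch closes once per turn
    def zero_encoder(self):
        self.count = 0
    def encoder_count(self):
        return self.count

wheel = FilterWheel(SimulatedMotor(), steps_per_position=500)
wheel.goto(3)
print(wheel.motor.encoder_count())             # 1500 steps from home = filter position 3
```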
  • The controller would be able to effect the following processes (a control-loop sketch follows this list): i. Position the sample drawer to receive a sample holding device from outside the system housing. ii. Receive the sample holding device on the sample drawer, retract the sample drawer, and close the drawer into the system. iii. Obtain plate or sample holding device information from the user interface, a plate identification scanner (e.g. bar code reader, RF ID reader, etc.), or from a prescan. iv. Identify from the device information the associated dyes, excitation and emission wavelengths, target size, target area, resolution required, etc. v. Select excitation and emission filters. vi. If variable magnification, set magnification. vii. Position the sample drawer for the initial scan. viii. Illuminate an area of the sample and detect the resulting emitted light. ix. Change the illumination and detection filters and detect a second dye. x. Change magnification and analyze at a higher resolution. xi. Move the sample drawer such that an additional area of the sample is in view. xii. Repeat steps v-xi for each area to be analyzed. xiii. Process images, tile images, and analyze data.
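The control flow enumerated above, expressed as a minimal sketch. Every object and method name here (stage, wheels, detector, plate_info and their attributes) is a hypothetical stand-in for the instrument's controllers, not an actual API.

```python
# Sketch of the scan loop corresponding to steps iii-xiii above.
def run_scan(stage, excitation_wheel, emission_wheel, detector, plate_info):
    images = []
    for dye in plate_info.dyes:                       # steps iii-iv: dyes and wavelengths
        excitation_wheel.goto(dye.excitation_filter)  # step v: select filters
        emission_wheel.goto(dye.emission_filter)
        for x, y in plate_info.scan_positions:        # steps vii, xi-xii: step over fields
            stage.move_to(x=x, y=y, z=plate_info.focus_z)
            image = detector.expose(dye.integration_time)   # step viii: illuminate/detect
            images.append((dye.name, x, y, image))
    return images                                     # step xiii: tile and analyze elsewhere
```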
  • an illumination source 20 produces an illumination beam 24.
  • the illumination source may be an arc lamp, an LED or other illumination source.
  • The use of a broad spectrum illumination source allows bright illumination of the sample and selection from a range of illumination wavelengths.
  • a single source of illumination may be used, or multiple combined illumination light sources may be used.
  • the illumination beam 24 passes through a heat elimination element 22 and onto condensing lens 26.
  • Heat elimination element 22 is a hot mirror or other optical element that reflects, absorbs, dissipates or otherwise prevents UV and/or IR wavelengths from being transmitted to the other elements in the system.
  • Condensing lens 26 condenses the illumination light into an integrating bar 28.
  • Light traveling through integration bar 28 is homogenized such that light emitted from the end of this bar is uniform over both the entire area of illumination and over time. For these purposes, uniform will mean removing most of the variation which would ordinarily be observed from arc lamp illumination.
  • Light emitted from the end of bar 28 is focused by relay lens 30 and directed onto a spatial light modulator 34 by steering mirror 32.
  • the spatial light modulator allows selection of individual light pixels to be reflected onto the sample substrate.
  • a spatial light modulator is any optical device able to selectively modulate light.
  • the spatial light modulator receives illumination light and selectively reflects this light onto the sample surface.
  • the selective light modulator is pixelated and produces an array of illumination light pixels. This allows the light at each pixel either to be relayed onto the sample or to be blocked, in whole or in part.
  • Such a pixelated array may be a multiple aperture array, a reflective array, a spatial absorbing array, or other such device.
  • These devices include digital micromirror devices, ferroelectric liquid crystal devices, electrostatic microshutters, micro-opto-electromechanical systems (preferably with high contrast abilities) and other similar devices. A number of relevant devices are described in U.S. Patent Nos.
  • One advantage of the spatial light modulator is the ability to selectively control the transmission of illumination light to individual pixels. This can be done by calibration of the spatial light modulator and detection of the illumination light, either directly or indirectly using a target surface responsive to the illumination light, by an area array detector. This is further described in the above patents.
  • the combination of the light homogenization optics and the spatial light modulator allows illumination of a selected area of the sample surface, with the pixels each illuminated with enhanced uniformity. Selection of a desired illumination pattern (for example: 1024 by 768 pixels) allows assay of a selected surface while not exposing non-selected areas to potential photobleaching from illumination light.
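A minimal sketch, in Python, of the kind of pixelated on/off illumination pattern described above is given here. The 1024 by 768 dimensions, the rectangular-region format, and the function name are illustrative assumptions, not part of the disclosed apparatus.

```python
import numpy as np

def slm_mask(rows=768, cols=1024, regions=()):
    """Build a binary on/off pattern for a pixelated spatial light modulator.

    regions: iterable of (row_start, row_stop, col_start, col_stop) tuples,
    given in modulator pixel coordinates, that should receive illumination.
    All other pixels stay dark, so non-selected areas are not photobleached.
    """
    mask = np.zeros((rows, cols), dtype=bool)
    for r0, r1, c0, c1 in regions:
        mask[r0:r1, c0:c1] = True
    return mask

# Example: illuminate only two sub-areas of the sample field.
pattern = slm_mask(regions=[(100, 300, 100, 400), (500, 700, 600, 900)])
print(pattern.sum(), "of", pattern.size, "pixels illuminated")
```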
  • the light directed from the spatial light modulator 34 passes through lens 36.
  • This lens collimates the illumination light.
  • This illumination light is directed through excitation filter 38.
  • This filter may be mounted on a filter wheel holding a plurality of filters, allowing selection of the illumination wavelength by filter selection.
  • the filter is preferably positioned in an area of parallel light rays, which enhances the efficiency of the filter.
  • the combination of a filter wheel and a broad spectrum illumination source (such as an arc lamp) allows relatively high intensity of illumination and flexibility in selection of illumination wavelength.
  • the light passing through filter 38 passes onto dichroic mirror 40.
  • Mirror 40 reflects light of the illumination wavelength onto the objective lens 42, which images the illumination light pattern onto a sample surface 44 on sample holder 46.
  • the illumination light excites fluorescence from targets or otherwise optically activates targets (e.g. array spots, cells, beads, etc.) on the sample surface.
  • This emitted light is collected by the objective lens, which functions as a high numerical aperture light collector.
  • This light is directed onto the dichroic mirror 40, which transmits light of the emitted light wavelength.
  • the illumination light and the collected light wavelengths are sufficiently different that they may be optically separated.
  • the dichroic mirror, like the filters, is preferably placed at a location where impinging light has parallel rays, allowing these elements to function at greatest efficiency.
  • the collected light 60 then passes through filter 52 and is focused by lens 54 onto an area array detector 56.
  • the detector used may be any detector that is able to image a two dimensional area.
  • Such detectors could include a charge coupled device (CCD), photodiode array, charge injection device (CID), complementary metal oxide semiconductor (CMOS) or any other detection device.
  • a number of the elements may all be controlled by a central electronic control 50 (e.g. a microprocessor). These elements would include the illumination source 20, the filter wheels and other rotating mounts, the spatial light modulator 34, the sample stage motor 48 and the detector 56. In addition any add-on robotics could also be controlled by the same device.
  • One of the embodiments shows top down scanning of a substrate. This may be the preferred imaging configuration for scanning slides or other substrates in which the sample is positioned on the top of the substrate. Because the illumination light does not have to pass through the substrate on which the sample is deposited to reach the sample, background from the substrate is minimized. Bottom scanning may also be used and may be preferred for multiwell plates. For such plates, the sides of the wells are an obstacle to illumination and collection of light in a top imaging configuration. The sides of the well limit the numerical aperture from which light can be collected from the top. In addition, the working distances of the lenses may be too short to accommodate the depth of the well.
  • Bottom reading allows shorter working distance from the sample to the objective lens for multiwell plates, increasing the numerical aperture of light collected from the well bottom, enhancing sensitivity.
  • this configuration allows isolation of the sample into a sample holding chamber that can be robotically accessed to allow for automation and increased throughput.
  • the wells may be covered at their openings during analysis, reducing the risk of sample contamination and allowing the cells to be maintained at a more uniform temperature and environment.
  • Top reading systems have some fundamental limitations. For example, top reading systems must read microplates that do not contain any liquid. This is because the liquid in the microplate acts as a lens and refracts the light entering the well. Minute differences in the height of columns of liquid in the wells also lead to differences in optical path length from well to well. Finally, the light that provides excitation of the target sample must pass through the liquid, causing autofluorescence, and the fluorescence emitted by the target sample must also pass through the liquid, causing additional autofluorescence. Also, some of the fluorescence emitted by the target sample in the top reading system may be absorbed by the liquid in the well, causing errors in quantitation.
  • Bottom reading systems have the advantage of being able to combine several different targets (array elements, cells, and beads) within a liquid containing well. This is a very convenient and cost-effective format for high throughput research. This also makes for a highly versatile reading system.
  • Top reading systems may be employed, however, when the target sample is on a dry substrate. In this case, there is no optical surface between the sample and the collection optics which may reduce working distance and autofluorescence.
  • the present system is illustrated in an epi-illumination configuration, in which both illumination and light collection of emitted light occur from the same side of the sample substrate.
  • transillumination systems, in which one side of a transparent substrate is illuminated and emitted light is collected from the other side, are also contemplated.
  • In Example 2 the illustrated illumination has the same optical axis as the collected light. In this on-axis illumination, positioning the pixels of the light from the spatial light modulator is simplified. However, off-axis illumination shown in Example 1 is also contemplated for some applications, and may reduce reflection and scattering of illumination light into the collection optics (which otherwise must be removed by beam splitters or filters).
  • the disclosed elements make possible a number of different functions. These include a number of components, systems, and methods, some of which are listed below.
  • the spatial light modulator and the integration bar combine to provide illumination that may be adapted to the geometry of the target.
  • the selected illumination area is illuminated with a much greater uniformity than is allowed using just broad spectrum illumination light that is not conditioned. This results in much more consistent data. If the illumination light varies, the light emitted in response to the excitation light will also vary. It is expected that an illumination source would vary, both over time and over an illuminated area.
  • the present system has a very large viewing area and is adaptable to analysis of multiple substrates in a single viewing. These arrays can be slide based arrays, multiwell plate arrays, or any other analytical substrate.
  • the illumination can be configured so that only the substrate areas of interest are illuminated.
  • Illumination system. The present illumination system is highly efficient for a number of different reasons. First, the uniformity of the light better distributes light over optical elements like filters. Thus more illumination power may be used without degrading optical elements. Second, elimination of almost all of the UV and IR also allows much more illumination light to be used without degrading optical elements. The use of supplemental LED light modules allows the addition of light at specific wavelength(s). This allows, for example, one of the dyes to be illuminated at significantly greater illumination strengths.
  • the light produced is both much more uniform than the illumination produced by the system in the background section and is shaped to match the array and detector geometries, both of which are commonly square or rectangular.
  • the light integration bar is a polygon of a specific order. This order may be selected to better match the uniformity requirements of the overall system, and/or to better match the tiling pattern. For example, there may be advantages to creating hexagonal shaped tiles rather than square tiles. It is known that hexagonal tiles are an effective pattern for tiling. In addition, the vignetting pattern caused by all lenses is circularly symmetric. This results in a bright central region on the detector with darker corners.
  • a hexagonal tiling pattern may better match the vignetting pattern that is projected by the lens system on the detector. In such a system, it may be preferred to use an integration bar with a hexagonal cross section.
  • the IR and UV removal optic is described as used in an array reader optical analyzer, but is seen as advantageous for a range of applications in which elimination of IR and UV light is desired. This optical element may be modified to remove almost all of the unwanted wavelengths, while still transmitting most of the desired wavelengths. For example, many absorbing filters are known to autofluoresce even when illuminated at visible wavelengths. By reducing the bandwidth of light that falls on the absorbing filter with a reflecting interference film, autofluorescence is thereby reduced.
  • the illumination optics, including the IR/UV removal component, the integration bar, the collimating lens, the filter or filter wheel, the steering mirror and the illumination focusing lens, may all be mounted on a single board mount in fixed positions that would require adjustment only at the set up of the instrument.
  • many different types of arc lamps with unique spectral qualities could be distributed in more than one illuminator system to illuminate the sample target at different angles of incidence, or at the same angle of incidence but distributed at different angles of rotation in the target plane.
  • the stage is mounted on a kinematic mount allowing the plane of the stage to be aligned parallel with the plane of the detector surface. Again, this is done once at instrument set up.
  • the sample drawer also has a kinematic mount to ensure that the sample drawer is properly constrained and assumes the sample plane. This in turn allows use of a large format array at high resolutions where skew of the sample substrate and the detector surface would bring areas of the sample out of focus.
  • a slide adapter which allows up to four array holding slides to be held, processed by a standardized automated system, and analyzed.
  • the device holds slides in a fixed position of the holder, such that the slides do not move during analysis.
  • the present processing and control software allows parallel function of various analytical and processing procedures. This includes the analysis and processing of previous samples as the system acquires new samples.
  • the well centric data structures allow specific well identification and tracking, automated or user defined well parameters, and processing of the well derived data to simplify image display and analysis.
  • the autofocus method allows simplified identification of the optimum focus position.
  • this control system and software are able to perform a number of required functions, including focus, image analysis and data storage.
  • the system would include a tiling method including the following steps. 1. Acquire an image of a sample substrate or substrates.
  • a related idea is to analyze microarray data by stitching together the views of the multiwell plate. Once the full plate image has been created, the image is segmented into data regions from the composite, mosaic image. The data is then organized by well, with each well corresponding potentially to one array or one test condition.
  • the present system was designed to provide higher resolution imaging of the targets as well as massively parallel target analysis.
  • a number of elements such as automated filter and dichroic mirror selection, further aid in providing a high throughput system capable of rapid multiplexing.
  • This combined with sample substrate introduction automated by using robotics, greatly extends the throughput of the system.
  • a prescan further increases the throughput of the sample scanning.
  • Such a feature may be especially useful in non-ordered samples or if the target areas on a substrate are not known.
  • the spatial light modulator allows illumination of a select area of the sample substrate.
  • the detector need only record data from the area of interest.
  • This method requires an area array detector (e.g. a CCD) and a sample stage that is able to move at a known speed that can be coordinated with the detector's integration time.
  • an area array detector e.g. a CCD
  • a sample stage that is able to move at a known speed that can be coordinated with the detector's integration time.
  • the pixels from each integration interval are processed to form a single, low resolution image. This can be done without ever stopping the motion of the sample stage.
  • One effect of this is the sample target is blurred in the direction of motion.
  • the degree of blur may be identified and software could correct for it; there is no blur in the direction orthogonal to the motion.
  • the pre-scan would then be done at a low resolution defined by this image blur. However, the pre-scan would be done at a higher speed using this method. Once the targets of interest are identified, these areas could be viewed when the stage is not in motion, to provide a complete high-resolution view of the target.
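A minimal sketch of the moving-stage prescan described above, assuming frames are captured at fixed integration intervals while the stage moves at a known speed. Each frame is coarsely binned and stacked into a single low-resolution strip, with the usable resolution along the motion axis set by the blur (stage speed multiplied by integration time). The frame handling, binning factor, and stage parameters are assumptions for illustration.

```python
import numpy as np

def prescan_strip(frames, stage_speed_um_s, integration_s, pixel_um, bin_factor=8):
    """Combine frames taken during continuous stage motion into one low-res image.

    The blur along the motion axis is roughly stage_speed * integration_time,
    which limits the usable prescan resolution in that direction.
    """
    blur_um = stage_speed_um_s * integration_s        # extent of motion blur
    blur_px = max(1, int(round(blur_um / pixel_um)))  # blur in detector pixels
    rows = []
    for f in frames:
        h, w = f.shape
        # Bin each frame coarsely; resolution need not exceed the blur extent.
        binned = f[: h - h % bin_factor, : w - w % bin_factor]
        binned = binned.reshape(h // bin_factor, bin_factor,
                                w // bin_factor, bin_factor).mean(axis=(1, 3))
        rows.append(binned.mean(axis=0))  # collapse along the motion axis
    strip = np.vstack(rows)               # one row per integration interval
    return strip, blur_px

# Example with synthetic frames.
frames = [np.random.poisson(50, (512, 512)).astype(float) for _ in range(10)]
strip, blur_px = prescan_strip(frames, stage_speed_um_s=2000,
                               integration_s=0.05, pixel_um=4)
print(strip.shape, "blur ~", blur_px, "px")
```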
  • the local coordinates of a well must be determined. This step locates the well in relation to global coordinates of the sample drawer.
  • Well parameters may be obtained from user input or sample identification input (e.g. RFID, bar code ID, etc.), and may include the following:
  • the well shape (round or square)
  • the well type (control, calibration, or experiment)
  • the x and y blocks (an integer which identifies the number of blocks in the array in the well, in the x and y dimensions)
  • the x and y block spacing, spot diameter, and local background exclusion diameter (for autogridding, which sets the default diameter in which local background is excluded)
  • autogridding parameters: x and y spot spacing (the center to center dimension for the spots in the array in the x and y dimensions) are specified. A minimal data-structure sketch for these well parameters is given below.
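The sketch below is an illustrative well-centric parameter record covering the fields listed above, including the meta image identifiers described in the following items. The field names, types, and default values are assumptions, not the actual data structures of the system.

```python
from dataclasses import dataclass

@dataclass
class WellParameters:
    """Illustrative well-centric record for the parameters listed above."""
    well_id: str                            # e.g. "A01"
    well_shape: str = "round"               # "round" or "square"
    well_type: str = "experiment"           # "control", "calibration", or "experiment"
    x_blocks: int = 1                       # number of blocks in the array, x dimension
    y_blocks: int = 1                       # number of blocks in the array, y dimension
    block_spacing_um: float = 0.0           # center-to-center spacing between blocks
    spot_diameter_um: float = 100.0
    background_exclusion_um: float = 150.0  # default local-background exclusion diameter
    x_spot_spacing_um: float = 200.0        # autogridding: center-to-center spacing, x
    y_spot_spacing_um: float = 200.0        # autogridding: center-to-center spacing, y
    image_identifier: str = ""              # text identifier attached to the well image
    parameter_file_id: str = ""             # identifies focus/exposure/acquisition parameter file

well = WellParameters(well_id="B07", well_type="control", x_blocks=2, y_blocks=2)
print(well)
```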
  • meta image data may be attached to images from each well. This would include a text image identifier and a parameter file identifier.
  • the parameter file identifier is text which identifies the parameter file for this image with focus, exposure, acquisition time, and other imaging parameters. The work flow proceeds as follows.
  • the acquisition parameters are set, including all necessary user inputs for the data structures involved (such as well rotations, block locations, array element locations, spot size of the array elements, desired detection, statistics and methods, the imaging process options, autofocus options, auto exposure options, exposure time, fixed focus position, focus offset from a surface, etc.)
  • a pre-scan could yield information about spot size, grid pattern, grid size, key points on the slide or substrate for use for focus or exposure settings, and the best statistics and methods of analysis. Typically, this pre-scan would be performed at the beginning of the batch of microwell samples and the user defined parameters would then be used as nominal inputs to the rest of the microarray samples.
  • Images are stitched together in a well-centric manner (or block-centric manner in the case of slides), and stored in appropriate data structures.
  • the storage of images in a well-centric manner includes defining the well geometry and excluding imaging of area outside of the well.
  • a central processor tracks the progress of stitching and any other data processing steps, such as background correction, background subtraction, data intensity stretching, etc., and when at least one block or well is completed that block or well is then submitted by the processor to a sub-routine or module that performs the data analysis. This analysis runs concurrently with the microarray reader or scanner.
  • the analysis module lays down a grid over the array spots based on previous use of input values. There are at least two options that the user might pre-select that would affect the subsequent work flow: 1) Autogrid alignment based on known alignment algorithms such as geometric segmentation or histogram equalization. 2) User alignment, wherein a user interface is provided to allow moving of the grid elements in whole or in part so as to best align them with the array spots.
  • changes to the default user input parameters are retained (i.e., "learned") and applied to the next block or well that is submitted to the analysis module.
  • Learning need not be a linear compilation of positional changes.
  • learning may involve detection of complex array spotter errors that are pin or nozzle specific and then involve other mechanical properties of the array spotter (stage errors for example) .
  • Learning may also involve the incorporation of normalization schemes using control spots on the array, or calibration of spot morphologies using calibration spots on the array. Learning may also be used to adjust subsequent reading parameters in order to optimize the detection of the array "on the fly".
  • analysis software will attempt to do a best fit to the data in the well.
  • the user can interact with each well one at a time or "sync" a group of wells by rubber banding a group of wells. If wells are synced then the manual gridding tools operate on all the blocks and spots that are "in sync" (i.e., that have been defined as a group by the user) . Otherwise the user can operate on one well at a time.
  • a set of autogridding parameters is saved. If adjustments are made and a new analysis template is saved, then the autogridding adjustments become part of the new analysis template.
  • the analysis module may finish the remaining blocks or wells.
  • The completion of remaining blocks or wells may depend on some analysis quality parameter (such as overall brightness, or control spots, for example).
  • This real time analysis may be performed in "batch mode" so that the analysis is being completed on one slide or plate, as another is inserted into the reader by robotics and scanning commences.
  • the disadvantage of this approach is that areas of the first slide or plate cannot be re-imaged.
  • this limitation may be overcome by giving the reader or scanner the ability to read or scan a sufficient number of slides or plates such that as a new plate is being loaded, a previously read or scanned plate may be re-imaged.
  • Microarray detection and analysis follows a well-defined sequence of steps: 1) Pre-scan [optional], 2) Scan sample, 3) Grid scanned image, 4) Create report of gridded image.
  • This sequence has been sufficient for the use of microarrays in a pure research environment.
  • the paradigm of many genes and few samples shifts to a few genes and many samples.
  • This is the so-called array of arrays format where spots of DNA, RNA, antibodies, cells and other biological samples are spotted into microplates.
  • Each well of the microplate can now represent a separate sample—for example from a different patient.
  • the first step is often an optional pre-scan.
  • Pre-scans offer the opportunity for the user to get a quick look at their data, make decisions about system gain, exposure time and light level, and rubber band regions of interest where there is data. However, many of these decisions can be automated or made without a pre-scan.
  • exposure time is determined on a view by view basis. A view is acquired, the pixel intensities within the view are measured, and then the cycle repeats until pixel intensities are in the desired range.
  • the desired range is generally three or more background standard deviations above the mean background and less than saturation.
  • It is preferred that the pixel intensities be within the range of 25% to 75% of the full scale range of the system.
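A minimal sketch of the view-by-view exposure iteration described above: a view is acquired, pixel intensities are measured, and the exposure is adjusted until the signal is at least three background standard deviations above the mean background and within roughly 25% to 75% of full scale. The acquisition callback, seed time, and scaling rule are assumptions.

```python
import numpy as np

def auto_expose(acquire, full_scale, bg_mean, bg_std,
                seed_time=0.1, low=0.25, high=0.75, max_iter=10):
    """Iterate exposure time until the brightest pixels land in the desired range.

    acquire(t) is assumed to return a 2-D array imaged with exposure time t.
    The desired range is >= 3 background standard deviations above the mean
    background and within 25-75% of the detector's full scale.
    """
    t = seed_time
    for _ in range(max_iter):
        img = acquire(t)
        peak = float(img.max())
        if peak < bg_mean + 3 * bg_std or peak < low * full_scale:
            t *= 2.0                            # underexposed: boost exposure
        elif peak > high * full_scale:
            t *= (high * full_scale) / peak     # overexposed: scale back
        else:
            return t, img                       # within the desired range
    return t, img

# Example with a synthetic "camera" whose signal scales linearly with time.
rng = np.random.default_rng(0)
scene = rng.uniform(0, 200, (64, 64))
t, img = auto_expose(lambda t: np.clip(scene * t, 0, 4095),
                     full_scale=4095, bg_mean=10, bg_std=3)
print("exposure", round(t, 3), "peak", img.max())
```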
  • a similar iterative procedure is used for focusing. A view is acquired and then it is analyzed for best focus using classical image processing techniques. One such technique is a Sobel transformation. Another is to find the peak intensity of a region of interest of the view. Whatever focus metric is used, once the metric is optimized then the image is focused.
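As a hedged illustration of the Sobel-based focus metric mentioned above, the sketch below scores each view by its summed gradient magnitude and keeps the focus position with the highest score. The use of scipy.ndimage and the synthetic focus stack are assumptions.

```python
import numpy as np
from scipy import ndimage

def sobel_focus_metric(view):
    """Return a sharpness score: summed Sobel gradient magnitude of the view."""
    gx = ndimage.sobel(view.astype(float), axis=0)
    gy = ndimage.sobel(view.astype(float), axis=1)
    return float(np.hypot(gx, gy).sum())

def best_focus(acquire_at, positions):
    """Acquire a view at each candidate focus position and keep the sharpest."""
    scores = {z: sobel_focus_metric(acquire_at(z)) for z in positions}
    return max(scores, key=scores.get), scores

# Example: a synthetic stack where blur increases away from z = 0.
rng = np.random.default_rng(1)
sharp = rng.uniform(0, 1, (128, 128))
z_best, scores = best_focus(lambda z: ndimage.gaussian_filter(sharp, abs(z)),
                            positions=[-2, -1, 0, 1, 2])
print("best focus at z =", z_best)
```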
  • Several different techniques for autoexposure and autofocus are detailed in the appendix.
  • Once the view is acquired with proper exposure time, focus and light level, it may then be tiled.
  • Tiling is the process of combining individual views into a single mosaic image.
  • the amount of overlap varies with the accuracy of the stages in the system.
  • the overlap may range from one pixel up to 50% of the width of an image.
  • Balancing the cost of stage accuracy against the throughput of the system, the overlap region is 10% of the width of the image.
  • Cross correlation techniques are then used to further refine the best location for a line along which to combine the views. Typically, this is achieved by first using the mechanical movement of the stages as a first approximation of the location of the line where views are to be combined. Then the cross correlation is performed to more accurately determine the line of combination of two views. Additional steps may involve the averaging of signal levels of the pixels that are in the overlap region.
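A minimal sketch of refining the combination line between two adjacent views: the stage motion predicts the nominal overlap, and an FFT-based correlation of the overlap strips estimates the residual offset. The 10% overlap example, the correlation formulation, and the sign convention are assumptions.

```python
import numpy as np

def refine_offset(left, right, overlap_px):
    """Refine the combination line between two horizontally adjacent views.

    left, right: 2-D views; overlap_px: nominal overlap width from stage motion.
    Returns the (dy, dx) correction to apply to `right` relative to the
    stage-predicted position, found by correlating the overlap strips.
    """
    a = left[:, -overlap_px:].astype(float)
    b = right[:, :overlap_px].astype(float)
    a -= a.mean(); b -= b.mean()
    corr = np.fft.ifft2(np.fft.fft2(a) * np.conj(np.fft.fft2(b)))
    peak = np.unravel_index(np.argmax(np.abs(corr)), corr.shape)
    # Convert FFT peak location to a signed shift.
    dy = peak[0] if peak[0] <= a.shape[0] // 2 else peak[0] - a.shape[0]
    dx = peak[1] if peak[1] <= a.shape[1] // 2 else peak[1] - a.shape[1]
    return dy, dx

# Example: the right view is actually shifted 3 rows from the prediction.
rng = np.random.default_rng(2)
scene = rng.uniform(0, 1, (200, 220))
left = scene[:, :120]
right = np.roll(scene[:, 100:], 3, axis=0)   # 20-pixel nominal overlap, 3-row error
print(refine_offset(left, right, overlap_px=20))
```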
  • For example, in the case of a microplate image it is desirable to segment the image into wells.
  • the region between the wells does not contain any data. For purposes of clear presentation of data these regions may be masked with some color, or the data beneath them may be reduced in intensity to distinguish this non-data, non-well region.
  • the image may be corrected for background variations, and to equalize the intensity values from views with different exposure times.
  • This image correction process may be quite computationally intensive and take several minutes to tens of minutes to complete for large microplate arrays at high resolution.
  • the image correction process for a microplate that is read using autoexposure settings, four microns resolution, with spots of 100 microns may take as long to complete as it takes to acquire the data in the first place.
  • the kernel for background correction image processing may be quite large in terms of the number of elements in the kernel required to cover an area in pixels that is greater than one spot diameter.
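A minimal sketch of background correction with a kernel spanning more than one spot diameter, here using a median filter to estimate the slowly varying background before subtraction. The choice of a median filter and the specific kernel size are assumptions; they illustrate why the computation grows large at high resolution.

```python
import numpy as np
from scipy import ndimage

def correct_background(image, spot_diameter_px):
    """Estimate and remove slowly varying background.

    The filter kernel must span more than one spot diameter so that the spots
    themselves do not bias the background estimate, which is why the kernel
    (and the computation) grows large at high resolution.
    """
    kernel = int(2 * spot_diameter_px) | 1   # odd kernel larger than one spot
    background = ndimage.median_filter(image.astype(float), size=kernel)
    return image - background, background

# Example: 100 micron spots imaged at 4 micron resolution -> 25-pixel spots.
rng = np.random.default_rng(3)
img = rng.normal(100, 5, (256, 256)) + np.linspace(0, 50, 256)  # sloped background
corrected, bg = correct_background(img, spot_diameter_px=25)
col_means = corrected.mean(axis=0)
print("residual background slope:", round(col_means[-1] - col_means[0], 2))
```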
  • Gridding is a process whereby either automatically or manually regions of interest for data analysis are selected.
  • Methods for automated gridding include histogram equalization techniques and geometric techniques.
  • the user draws a matrix of regions of interest and locates each element of the matrix so that it fits around each of the spots in the array.
  • the gridding step is replaced with an object recognition step.
  • objects are determined by some combination of intensity thresholding, matching of regions of interest to geometric shapes, or other method.
  • Data that may be extracted may include the mean pixel intensity of a spot, the ratio of the mean pixel intensity in one color channel to that of another color channel, median pixel intensity, standard deviation, variance, and many other statistical values.
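A minimal sketch of extracting the per-spot statistics named above from a gridded region of interest. The circular region-of-interest convention and the two-channel ratio are illustrative assumptions.

```python
import numpy as np

def spot_statistics(channel1, channel2, center, diameter_px):
    """Extract basic statistics for one gridded spot.

    center: (row, col) of the spot; diameter_px: spot diameter in pixels.
    Returns mean, median, standard deviation and variance for channel 1 and
    the ratio of mean intensities between the two color channels.
    """
    rows, cols = np.ogrid[:channel1.shape[0], :channel1.shape[1]]
    r = diameter_px / 2.0
    inside = (rows - center[0]) ** 2 + (cols - center[1]) ** 2 <= r ** 2
    px1 = channel1[inside].astype(float)
    px2 = channel2[inside].astype(float)
    return {
        "mean": px1.mean(),
        "median": float(np.median(px1)),
        "std": px1.std(),
        "variance": px1.var(),
        "channel_ratio": px1.mean() / px2.mean() if px2.mean() else float("nan"),
    }

rng = np.random.default_rng(4)
c1 = rng.poisson(500, (64, 64)).astype(float)
c2 = rng.poisson(250, (64, 64)).astype(float)
print(spot_statistics(c1, c2, center=(32, 32), diameter_px=20))
```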
  • the report is then analyzed by a statistical analysis package that determines the validity of the data overall, and the correlation of the data values to actual genetic activity. This is the object of the whole process.
  • a new generation of microarray detection and analysis is based on imaging by two-dimensional detectors, followed by tiling into a single contiguous image.
  • One of the advantages of this type of detection is that it is well suited to both higher throughput and the new format.
  • Two dimensional readers complete detection in contiguous blocks of data called tiles. These tiles are then put together using cross correlation algorithms to enhance the accuracy of combination. In this way, blocks of data are ready for analysis more quickly than in scanners.
  • the tiling method is well suited to matching the format of microplates with their individual wells.
  • a Block is herein defined as a region of the sample where meaningful data may be extracted and analyzed.
  • a Block is a well. Views are acquired using all the methods described above. However, views are deliberately acquired in a sequence that leads to the formation of a single Block as soon as possible. Once a block is acquired, another program thread begins to tile the views of that Block together.
  • tiled Blocks are then stored in temporary memory for image segmentation.
  • By temporary memory is meant either a disk storage device or actual physical memory.
  • the various Blocks are then segmented into well areas with appropriate masks applied for non-data regions. These segmented wells are then stored temporarily.
  • Image correction is then applied to the segmented wells.
  • image correction is the longest of all steps taking as long as all other steps combined.
  • Each auto focus test image must satisfy some exposure preconditions to ensure there is enough content on the image to calculate an auto focus metric. These preconditions are:
  • Test for Overexposure: a test image is considered overexposed if the top x percent of its pixels are at the maximum intensity. X is defined by the MaxPixelSaturation setting in the AutoFocusPrefs section. The default setting is 0.1.
  • Test for Underexposure: a test image is considered underexposed if the maximum pixel intensity is less than (2^x * max range). X is defined by the MinAECompensation setting in the AutoFocusPrefs section. The default setting is -3.0.
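The two exposure preconditions above can be expressed directly, as in the sketch below. It assumes 12-bit data, that "max range" means the detector full scale, and uses the stated defaults of 0.1 for MaxPixelSaturation (in percent) and -3.0 for MinAECompensation.

```python
import numpy as np

MAX_PIXEL_SATURATION = 0.1   # percent of pixels allowed at full scale (AutoFocusPrefs)
MIN_AE_COMPENSATION = -3.0   # exponent: underexposed if max < 2**x * max_range

def is_overexposed(test_image, max_intensity, max_pixel_saturation=MAX_PIXEL_SATURATION):
    """Overexposed if more than `max_pixel_saturation` percent of pixels saturate."""
    saturated_percent = np.mean(test_image >= max_intensity) * 100.0
    return saturated_percent > max_pixel_saturation

def is_underexposed(test_image, max_range, min_ae_compensation=MIN_AE_COMPENSATION):
    """Underexposed if the maximum pixel intensity is below 2**x of full range."""
    return test_image.max() < (2.0 ** min_ae_compensation) * max_range

rng = np.random.default_rng(5)
img = rng.integers(0, 300, (128, 128))
print(is_overexposed(img, max_intensity=4095), is_underexposed(img, max_range=4095))
```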
  • the seed position is used. If subsequent test images do not meet criterion 1, the sequence is restarted at the current position. If subsequent test images do not meet criterion 2, the seed position is used.
  • the best focus position is achieved at the absolute maximum of the metric.
  • Offset (x4). Remove (x3, f(x3)) and start from step 2 on the remaining points.
  • MinDelta in y value and the point at the other end is greater than 4 times MinDelta away. The current MinDelta is 3%. If so, expose the next image on the other side of the anticipated peak at one Default Offset step away and start from step 2 on all the points. For example, if f(x2) - f(x1) > 4 * (f(x1) - f(x3)), then expose at x3 + Default Offset.
  • Peak to Right: if m1 < mn, then the peak is located to the right. Remove all points except x5, and start from step 1 again with x5 as the first point. 6) If at any time one end of the range of focus is reached (0 or 1), use that point in the metric calculation, i.e. try to expose an image at that focus position and calculate the metric. If an end is reached more than three times, return the limit reached.
  • ArrayEase: Most images acquired by ArrayEase are composed of multiple fields of view stitched together in a rectangular array to create a larger image. Although an image can be set up to auto focus, it does not mean that every field of view (View) will be auto focused.
  • the Auto Focus Option determines which Views will be auto focused. This is a batch setting that applies only to the images set up with auto focus. If multiple filter sets are to be acquired for the same sample, each View to be auto focused will be auto focused for each filter set. Calculations such as plane fitting are done independently on each filter set.
  • a percentage of Views will be auto focused. The percentage depends on the Adaptive or Percentage settings. These auto focus Views are spread evenly across the sample. All Views are acquired in the same order. Views that are not to be auto focused will use the previously calculated focus position. After all the Views are acquired, a plane is fitted using the auto focused positions, if auto focus was successful. Views that deviate from the plane by more than the FocusDeviationFactor (set in the Def file) will be reacquired.
  • Percentage (50%, 40%, 30%, 20%, 10%) - Defines the percentage of views to be auto focused.
  • a percentage of Views will be auto focused. The percentage depends on the Adaptive or Percentage settings (same settings as above) . These auto focus Views are spread evenly across the sample, and all Views are acquired in the same order. Views that are not to be auto focused will use the previously calculated focus position. After a minimum number of Views have been auto focused
  • a percentage of Views will be auto focused. The percentage and location depends on the Adaptive or Percentage settings. Auto focused Views are imaged first. A plane is fitted to those positions. The rest of the views (and any auto focused View if it is more than a FocusDeviationFactor away from the plane) are acquired.
  • Percentage (20%, 10%, 5%, 1%, 0.5%) - Defines the percentage of views to be auto focused. These Views are spread evenly across the sample.
  • a percentage of Views will be auto focused. The percentage depends on the Percentage setting (note that Adaptive is not an option) .
  • These auto focus Views are spread evenly across the sample. Auto focused Views are imaged first. A plane is fitted to those positions. The rest of the views (and any auto focused View if it is more than a FocusDeviationFactor away from the plane) are acquired. After a minimum number of Views have been auto focused (currently set to 6), a plane fit is attempted after each auto focused View. If all auto focused positions fall within the FocusDeviationFactor (set in the Def file) from the plane, auto focusing is stopped and the rest of the Views will use focus positions from the plane fitting. After all the Views are acquired, any View that deviates from the plane by more than the FocusDeviationFactor will be reacquired.
  • the calculated focus position is used for the rest of the sample.
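A sketch of the plane-fitting approach described above: a plane z = a*x + b*y + c is fitted by least squares to the auto-focused positions, and Views deviating by more than the FocusDeviationFactor are flagged for reacquisition. The least-squares formulation and the interpretation of the deviation threshold are assumptions.

```python
import numpy as np

def fit_focus_plane(xy, z):
    """Least-squares fit of z = a*x + b*y + c to auto-focused stage positions."""
    xy = np.asarray(xy, dtype=float)
    z = np.asarray(z, dtype=float)
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(z))])
    (a, b, c), *_ = np.linalg.lstsq(A, z, rcond=None)
    return a, b, c

def views_to_reacquire(xy, z, coeffs, focus_deviation_factor):
    """Indices of Views whose focus deviates from the plane by more than the factor."""
    xy = np.asarray(xy, dtype=float)
    a, b, c = coeffs
    predicted = a * xy[:, 0] + b * xy[:, 1] + c
    return np.nonzero(np.abs(np.asarray(z, float) - predicted) > focus_deviation_factor)[0]

# Example: a tilted sample with one outlier focus position.
xy = [(0, 0), (10, 0), (0, 10), (10, 10), (5, 5)]
z = [100.0, 102.0, 101.0, 103.0, 110.0]   # last position is off the plane
coeffs = fit_focus_plane(xy, z)
print("reacquire views:", views_to_reacquire(xy, z, coeffs, focus_deviation_factor=3.0))
```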
  • the solutions for sample mapping fall into two classes. 1) Static (pre-acquisition) and 2) Dynamic (during acquisition) .
  • In the static class, a set of focus positions is determined prior to acquisition and can be derived from either a preview image or some sub-sampled image of the acquisition ROI. Once this set of focus positions is determined, the position for all views can be determined by various means, and acquisition can begin. In the dynamic class, focus is determined during acquisition.
  • the proposed development path has three stages to full implementation of the most stable and robust formulation using the dynamic class method with an option to further develop the static class as a final formulation.
  • the first stage is a two option solution.
  • the second stage is to add consistency checks to the set of focus positions. Focus positions that are significant deviations from the expected are then refocused and reacquired. A similar type of acquisition is already supported for exposure times that are much different (ReExposeEnabled) in the autoexposure class. There are several schemes to check for consistency (e.g. deviation from a plane, or line, relative to depth of field).
  • the third stage is to add an adaptive feature that checks the set of determined focus positions for internal consistency to enable accurate curve fitting.
  • the sampling period method proceeds with AF until enough points are in hand to extrapolate from a surface fit and then switch to basically a manual focus mode using the surface fit to determine the focus positions.
  • Boost phase: characterized by low signal levels.
  • Calculation phase: signals above a predetermined threshold.
  • Test images are acquired and measurements of signal levels are used to determine the course of action.
  • Test images are typically filtered as set by NoiseFilterEnable flag in preferences.
  • the final images are never filtered based on the NoiseFilterEnable flag.
  • Boost Phase: In the boost phase the exposure time is increased by a calculated factor and a new test image is acquired.
  • the first test image is the seed image and SeedTime is the exposure time for the first test image.
  • NewTime = OldTime * (DataMax - Black) / (ImageMax - Black) * Threshold (EQN 1)
  • OldTime is the test image exposure time.
  • DataMax is the camera's maximum signal level (e.g. 4095 for 12 bit cameras).
  • ImageMax is the test image maximum intensity.
  • Threshold is a value set in the preferences ("AutoExposeLevel").
  • Black is the bias level that is read from the bias calibration file (average value).
  • a check on the ImageMax should be made-if
  • Noise is a noise level based on well capacity and photon noise.
  • SystemGain is the camera system gain setting (electrons per ADU) .
  • Validation is a user selectable configuration option. Once a FinalTime is determined the signal levels are to be validated based on this option. A test image is acquired and signal levels are verified to be below saturation and above a lower limit set by preferences (for example, 0.85*[DataMax-Black]).
  • SeedTime = TestTime/2.0; repeat Boost Phase.
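A minimal sketch of the boost-phase update of EQN 1 and the final validation step described above. The default bias, threshold, and 12-bit DataMax values shown here are assumptions used only for illustration.

```python
def boost_exposure(old_time, image_max, data_max=4095, black=100.0, threshold=0.9):
    """EQN 1: NewTime = OldTime * (DataMax - Black) / (ImageMax - Black) * Threshold.

    old_time:  exposure time of the current test image
    image_max: maximum intensity measured in the test image
    data_max:  camera maximum signal level (e.g. 4095 for 12-bit cameras)
    black:     bias level from the bias calibration file (average value)
    threshold: the "AutoExposeLevel" preference (assumed value here)
    """
    if image_max <= black:
        # Degenerate case: no signal above bias; fall back to doubling the time.
        return old_time * 2.0
    return old_time * (data_max - black) / (image_max - black) * threshold

def validate_final(image_max, data_max=4095, black=100.0, lower_fraction=0.85):
    """Final-image validation: below saturation and above e.g. 0.85*(DataMax-Black)."""
    below_saturation = image_max < data_max
    above_lower_limit = (image_max - black) >= lower_fraction * (data_max - black)
    return below_saturation and above_lower_limit

t = boost_exposure(old_time=0.05, image_max=600)
print(round(t, 4), validate_final(image_max=3800))
```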
  • the cells are kept at a condition at which the cells remain alive during the course of the assay.
  • the epi-fluorescent configuration allows the samples and the optics to be physically isolated from each other. This in turn provides the ability to control the environment of the cells and potentially change the condition of the cells mid-assay. This can be done in one of three ways.
  • a separate, off the shelf incubator and robotic sample feeder could be used. These are commercially available, standardized, and adaptable to the present reader.
  • the present reader is able to image an entire plate at once in a relatively short time period. This means that the samples in a microplate would spend minimal time outside the controlled environmental chamber. This add-on solution allows flexibility for users, who could separately configure the optical system and the environmental system.
  • a second solution would be to use a microplate contained in a cartridge that includes a microelement for temperature regulation and onboard fluidic circuits and gas environmental controls.
  • Standardization of microplates and advances in robotics have led to development of a variety of devices adapted for use with microplates. These include transfer devices (such as pipetters) that seal over wells of the microplate, preventing cross contamination through aerosol droplets of dispensed reagents. Such a device could be attached to reagent reservoirs or gas sources to feed the cells. In addition, a heating element used with each channel feeding each well would allow temperature control of the well. In any of the above three implementations, the bottom scanning configuration, ability to detect individual cells, and ability to image rapidly enable the live cell assay and provide enhanced functionality. This includes kinetic measurements. Presently, image processing algorithms are capable of providing size and shape characteristics of cells as well as dye concentration, relative fluorescence intensities and other imaging parameters. These in turn can be used to identify artifacts, including contaminants, debris, coincidence of cells and out of focus cells or other targets.
  • a pixel resolution in the object plane of slightly less than half of the diameter of the cell is sufficient to classify the object as a cell.
  • Adherent cells, which tend to have irregular shapes, tend to be elongated and hence longer in one dimension.
  • the resolution of the system described in one embodiment is able to classify both of these cell types as individual cells and discriminate cell cytoplasm from the nucleus.
  • the present system, components and method combine in a system having high versatility, with functions including analysis of reporter assays, adaptability to analysis of newly developed dyes, evaluation of the morphology of targets (beads, cells, etc.), kinetic assays, high content assays, etc.
  • Example 1 Cell arrays
  • New cell-based assays that are emerging have the potential to accelerate studies of protein functions and the effects of small molecules on cellular function. Such assays are difficult to implement on a large scale because of a lack of adequate detection systems. The lack of proper detection systems is due in part to the complex requirements of living cells. The viability and function of such cells is adversely affected by variations in temperature, air composition, and humidity among other variables. Additionally, cells grow relatively slowly and can be difficult to manipulate during an assay. Many cellular detection systems are sample destructive, prohibiting the ability to measure living cells over time (kinetic assay). New methods for detecting proteins within living cells in a high throughput system are needed by a large number of research applications. Cell arrays are one cell-based application where the existence of a better detection device can expand the opportunities for research.
  • a cell array consists of an array of plasmids (e.g., from a plasmid library) bonded to a glass slide and then transduced into cells. This creates an array where each spot location consists of a living cell that potentially overexpresses a gene from a defined plasmid.
  • a number of different assays including immunologic, histochemical, and functional assays may be adapted to cell array format.
  • Using fluorescent labels with a binding agent allows great flexibility, relatively low cost, and the advantage of a known reagent technology.
  • With the optimized vector and promoter combination very high levels of expression can be achieved for biochemical and functional detection.
  • Cell arrays can be constructed in industry standard 8x12 centimeter multiwell plates. The current need is to have an optical analysis system which can efficiently detect and analyze such plates. When the spot with a desired signal is detected, its position allows identification of the gene that produced the signal.
  • the libraries selected for this array may be composed of two fundamentally different types: 1) a library in which each spot represents a unique gene (e.g., in an extreme example an entire genome), or 2) a library in which each spot represents a different mutation of the same gene.
  • each cell array spot is composed of a monolayer of cells plus an underlying array of many spots taken from the library types above.
  • Such a format may be more useful for study of protein-protein interactions, cell-cell interactions such as occur in biological tissues, and for the study of complex kinetics where a compound mediates the expression of a protein.
  • the array underlying the cell array spot so defined may also be an array of compounds designed to be inhibitors or mediators for cellular receptors.
  • beads may also be incorporated as controls or calibration means indicating size, fluorescence intensity, or as ligands for cellular binding.
  • One such class of ligands would be monoclonal antibodies which are attached to the surface of the beads and are specific for certain cellular receptors or proteins.
  • Said beads as ligands might be used with cell arrays as agonists or antagonists of cellular receptors, for measurements of affinity or competition, or to produce up-regulation or down-regulation of cellular receptors.
  • the fundamental difference between beads so used and the array underlying the cell array spot are that beads are mobile and may be concentrated, whereas the array elements underlying a cell array spot are fixed and non-mobile with a fixed concentration in relation to the cells.
  • the cell array is one cellular application where the existence of a better detection device will facilitate better data.
  • Cell arrays are currently being used for the study of HIV Envelope and other genes with therapeutic and vaccine potential .
  • plasmids are arrayed, cells are transfected on the array, and the cells are fixed and stained to visualize gene expression.
  • the cell array pictured in Fig. B1 is composed of a monolayer of cells, but only cells that settle on a particular spot of DNA become transfected with the fluorescent reporter. Each of the spots on the array represents a cluster of 100-200 cells (Figs. B2 and B3).
  • arrays to date have been fixed, stained, coated with preservative, sealed within glass coverslips, and scanned using a laser-based detector where the slide must be dry, positioned within a fixed slide holder, and inserted into the machine through a narrow slot. It is not possible to keep cells alive during the process.
  • an area illuminator combined with an area array detector allows rapid analysis of fairly large areas (multiple wells) in a single view.
  • broad spectrum illuminators are used rather than lasers. The practical result of this difference is that detection of samples of variable height, unusual dimension, or with unusual environmental requirements can be readily accommodated.
  • a major benefit of being able to visualize genes with living cells is the ability to measure the kinetics of their expression over time, a major goal within many cell-based assays.
  • we were able to detect differences in expression using a promoter enhancer (sodium butyrate) suggesting that promoter and transcription factor research could benefit from this technology.
  • Each spot on our test array expressed one of two reporter proteins, but could just as easily have expressed a library of different proteins.
  • a different fluorescent marker or fusion-protein could also be used, as could different promoters, transcription factors, or cell types, depending on the scientific inquiry.
  • GFP-expressing cells were serially diluted with non-expressing cells to obtain from 100,000 to 1,500 green cells per microtiter plate well.
  • Example 2 High content screening
  • Additional interest in both academic and industry settings is shown for high content analytical systems. High content analysis can identify functional associations between cellular events by screening for correlations of morphology, cellular localization and event timing. Achieving these goals allows much more information to be derived from any specific individual analytical event. Imaging of an entire well bottom surface maximizes the usable area for cell arrays. The microplate format capability increases throughput by testing multiple conditions in parallel.
  • High content detection systems offer a fundamental advantage in both resolution and field of view, typically by imaging inter- and intracellular features across a large surface area.
  • such devices offer dramatically improved sensitivity by scoring individual cells as events. For example, a microplate well containing 100,000 cells of which 1,000 are fluorescent may average a signal of just 1%. However, if individual cells are scored, each cell would be measured as a discrete event, also increasing the fluorescence over the background and providing a statistically relevant sample group (e.g. 1,000 sample points, rather than just one).
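A minimal sketch contrasting the well-averaged readout with scoring individual cells as discrete events, as described above. The thresholding and connected-component labeling used here to define a cell event are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

def score_well(image, background, threshold_sigmas=3.0, min_cell_px=10):
    """Compare a well-averaged readout with per-cell event scoring.

    Fluorescent cells are segmented as connected regions brighter than the
    background mean plus `threshold_sigmas` standard deviations.
    """
    mean_signal = float(image.mean())                   # whole-well average
    mask = image > background.mean() + threshold_sigmas * background.std()
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=np.arange(1, n + 1))
    events = int(np.sum(sizes >= min_cell_px))          # discrete cell events
    return mean_signal, events

# Example: 1,000 bright "cells" scattered in a dim well barely move the
# well average, but each cell is still counted as a separate event.
rng = np.random.default_rng(6)
well = rng.normal(100, 5, (1000, 1000))
for _ in range(1000):
    r, c = rng.integers(20, 980, size=2)
    well[r - 3:r + 3, c - 3:c + 3] += 500
mean_signal, events = score_well(well, background=rng.normal(100, 5, (200, 200)))
print(round(mean_signal, 1), "mean vs", events, "cell events")
```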
  • Example 3 Low Level Gene Expression
  • Cancer is a major public health problem. Worldwide, more than 6 million people die from cancer each year and more than 10 million new cases are detected. In developed countries, cancer is the second leading cause of death.
  • Lung cancer is the leading cause of cancer mortality worldwide in both the developed and developing worlds. In the United States, we expect 169,400 new cases and a staggering 154,900 deaths in 2002. Lung cancer accounts for 28% of all cancer deaths. There are more patients who die from lung cancer than from breast, colon, and prostate cancers combined. The five-year survival rate for lung cancer detected early enough to still be localized is 48%, while the overall five-year survival rate of lung cancer is 15%.
  • Chest X-ray is capable of detecting tumors 1-2 cm in size and CT can detect peripheral tumors smaller than one cm.
  • Most lung cancers are detected in advanced stages (Stage II and greater), but the only patients who achieve long-term survival are the minority diagnosed with stage 0 or I disease.
  • Altered gene expression may be observed as inappropriate in tissue (space), time, or level.
  • the known hallmarks of cancer cell biology include loss of proliferation control as characterized by changes in cell cycle, cell cycle checkpoint function, apoptosis, angiogenesis, and other signaling pathways.
  • Assays that detect molecular alterations associated with cancer progression may be useful in a number of contexts including early diagnosis, classification of cancer subtype, predicting efficacy of therapy, staging of disease progression and prognosis.
  • RNA transcript abundance ranges over 10⁵-fold.
  • relatively few genes are expressed at high copy numbers, and the majority of genes are expressed at low copy numbers.
  • the regulatory genes likely to be of interest for cancer studies can be expected to generate low fluorescence. Failure to achieve equilibrium hybridization may further limit fluorescent signal. Nonetheless, gene expression microarray studies have made profound contributions to our understanding of cancer biology.
  • SAGE enables detection of quantitative differences of gene expression across multiple samples without prior knowledge of genes expressed.
  • SAGE describes the relative abundance of transcripts in a mRNA population by enumerating the number of copies of each mRNA represented by unique sequence tags.
  • SAGE has been validated in numerous studies including several that were focused on cancer. Numerous improvements to the procedure have resulted in robust up-to-date protocols (www.sagenet.org).
  • microSAGE has extended the utility of this technology to analyzing minute lesions. Combining microarrays and SAGE enables the measurement of specific known genes as well as the unbiased detection of genes not printed on the microarray.
  • SAGE libraries have been constructed to describe the early stages of the neoplastic process, including over 45 SAGE libraries of non-small cell lung cancer and normal bronchial and lung tissues. These SAGE libraries represent the largest disease specific SAGE data set currently constructed. Currently this is a resource of over six million tags with 400,000 unique tags.
  • a challenge for applying SAGE methodology to detect low expression genes is the need to sequence pools of sequence tags in which tags representing highly expressed genes occur much more frequently.
  • Array-Based Comparative Genomic Hybridization (CGH)
  • Array CGH or matrix CGH offers high resolution for genome-wide detection of chromosomal alterations. This technique detects gain (or loss) of chromosomal regions through competitive hybridization of probes generated from tumors/preneoplastic lesions and reference (normal) genomic DNA to a microarray of specific chromosome segments, e.g. BAC DNA.
  • the goal of these detection devices is quantitation of integrated fluorescent signals from microarray spots containing biomolecules of interest.
  • the general workflow for such devices includes acquiring an image of emitted fluorescence, gridding the spots, and integrating fluorescence by statistical analysis of pixel values.
  • the range of mRNA transcript abundance spans about 10⁵-fold from the most highly expressed gene to the physiologic equivalent of zero. This greatly exceeds the "experimental dynamic range," meaning the ratio between the highest and lowest signals in a single fluorescent dye channel measurable in a single image using current detectors.
  • Laser-PMT based microarray scanners operate using "constant exposure" in that laser power and PMT gain are not variable within a microarray image. In practice, one selects exposure settings to improve detection of low signal spots while sacrificing higher signal spots to saturation. Thus, multiple images must be taken to capture the full range of the biological information in the microarray.
  • Embodiments of the system herein described can detect single cells, arrays, and single beads.
  • The attached data on Luminex beads was taken with the illuminator and software described in the background of the art section. Due to the limitations of the previous design, the beads were not imaged in solution but rather were under a cover slip at the bottom of a microplate well. Improvements in mechanical systems, alignment techniques, software and illumination enable the use of beads in solution, which is of great advantage in the integration of said new instrument into automated experiments such as high throughput screening. In addition, the new instrument will improve signal to noise ratio and uniformity of illumination for reasons previously discussed.
  • Each array element is a probe for a specific target.
  • the array element may be the target and the probes are different samples hybridized to the targets.
  • the location of the array element is associated with some degree of biological specificity. Specificity may be for DNA, RNA, proteins, or other biological entities.
  • the fixed location of the array element allows for ease of analysis in associating a specific probe with a target.
  • the address of the array is a key that provides information about the behavior of genes, proteins and other biological entities.
  • Arrays are always ordered because it is the essence of the technique to use the location of the array element as a key to specificity.
  • Arrays may be placed on one surface or on both sides of a very thin surface.
  • the instrument disclosed is capable of reading arrays that are placed in a three dimensional matrix due to the large range of focus of the opto-mechanical system described. All that is required is that proximal layers of array elements be semi-transparent to distal layers of array elements.
Properties of Cells Useful For Biological Discovery
  • Cells represent the basic unit of biological activity. As such they have great predictive value for biological studies in that they contain most of the levels of functionality and complexity of any organism. Cells may be fixed as in cell arrays, or they may be mobile as in solution. A unique property of cells is that they are alive and will grow and divide under the proper environmental conditions. Growth may occur in solution or in cell arrays. As indicated in much of the experimental data provided herein, cells may be operated on by numerous biological entities such as proteins, antibodies, enzymes, viruses, bacteria, not to exclude others.
  • Beads may be ordered or disordered. They may have various probes attached to their surface or contained within that confer specificity such as antibodies, proteins, DNA or RNA not to exclude others.
  • Beads are made by a process that can confer very accurate sizing and doping with dyes. Hence beads are excellent for controls. Because beads are mobile in solution they can be concentrated; for example, they can indicate the concentration of receptors on a cell. Beads may be stained with a wide variety of dyes and since they are not living organisms the use of these dyes has no impact on their behavior. In some contexts, beads may mimic cells in their size or specificity but they are not living organisms.
Properties of Arrays, Beads, and Cells When Multiplexed Together
  • Cell arrays may also be configured over fixed arrays so that biological material contained in the fixed arrays is acting upon or taken into the cell .
  • each entity may be ordered or disordered (cells and beads) in a three dimensional matrix. This may allow for example, studies in the activity of cells that are not confined to cell array monolayers.
  • Beads might also be used to act upon or transfer certain chemical agents into a cell. This could be combined with a cell array format in which the cells are growing over an array and are being transfected with different biological agents. In this way, the dimensionality of the experiment is increased not only geometrically but also in terms of the biological phenomena that may be studied.

Abstract

A method and device for illuminating a multiwell plate, collecting light from a target area within wells of the multiwell plate, transmitting this light to an area array detector, detecting discrete targets within wells of the plate and stitching together images of the well plate. This device and method may include the detection of microarrays and non-ordered cells within wells. The system and method may include elements for the homogenization of light, the shaping of illumination light to illuminate a selected area of the array, off axis illumination and spatial modulation of the illumination light.

Description

MICROPLATE ANALYSIS SYSTEM AND METHOD
TECHNICAL FIELD
The present invention relates to methods and devices for optical analysis and more specifically relates to analyzers using area array detectors.
BACKGROUND OF THE INVENTION
The completion of the sequencing of the human genome and other genomes has led to the identification of an unprecedented number of molecular entities to analyze and characterize. Following this accomplishment have been rapid developments in proteomics, the characterization of the entire complement of proteins in a cell or organism. The tools developed for these efforts, including massively parallel analyzers and robotic devices, have been adapted to a number of applications in research, dramatically driving forward the pace of research. These developments have led to an improved understanding of how genes are expressed and proteins produced.
Optical analyzers have been adapted to assay a number of different targets, such as arrays. A number of different types of analyzers have been developed. These include laser-based array scanners, microscope-based imaging detectors, flow cytometry systems, and imaging cytometers. One such device is the Alpha Innotech AlphaArray™ imager. Fig. 1 shows a schematic of this device. Illumination light 180 from a broad spectrum light source 150 is directed through optical filter 160. The light is directed onto substrate 700. Printed on substrate 700 are spots 710. Spots 710 include a plurality of detectable moieties 720, 730. The wavelengths selected by filter 160 constitute the illumination light which excites fluorescence from at least one of the fluorescent moieties 720, 730. The emitted fluorescent light 200 is collected by lens 120 and directed through emission filter 130 onto area array detector 140. This system is further described in U.S. Pat. No. 6,271,042, hereby incorporated by reference herein. The broad spectrum light source may be an arc lamp, light emitting diode, or any source of illumination light of a selected wavelength. This would include arc lamps that provide substantially more illumination in some wavelengths than others. The illumination filter 160 may include a selectable filter such as a filter wheel in which any of a number of filters may be rotated into the pathway of the illumination light. The illumination light directed through illumination filter 160 preferably is directed through the filter at a location of parallel light rays. The illumination light may then be directed onto substrate 700 using a mirror, optical fiber or other means. Substrate 700 may be a glass or plastic slide, microplate well bottom, or any other solid or semisolid surface. Spots 710 may be spots on a two-dimensional array of spots such as a bioarray or DNA array, or could be non-ordered or ordered cells or beads on the substrate surface. Although fluorescent moieties 720, 730 are shown as discrete spots on spot 710, it is also possible that these spots are overlapping. Selection of alternative illumination wavelengths, by placing alternative illumination filters in front of the illumination light, allows sequential assay of the excitation wavelengths for each of fluorescent moieties 720, 730 respectively. Selection of alternative emission filters could also be required.
The emitted light is collected by objective lens 120 and directed through emission filter 130. As with illumination filter 160, emission filter 130 may comprise a filter wheel or other changeable filter means that would allow a selectable filter to be rotated into the pathway of the collected light. In this way subsequent, potentially overlapping fluorescent dyes may be analyzed. Area array detector 140 may be a two-dimensional array of photodiodes, a CCD detector, or any other detection means that allows simultaneous measurement of a plurality of distinct targets.
An implementation of this device is shown in Figs. 2 and 3. The device comprises an image capture module 1, which is shown in further detail in Fig. 3. In Fig. 2, an arc lamp with an eight-position excitation filter wheel 2 allows illumination light of a selectable wavelength to be generated by a single source. The illumination light is directed into a bifurcated optical fiber cable allowing off-axis illumination from two or more sides onto the sample. Although the present configuration illustrates top illumination, illumination from the bottom is also possible. In addition, the present configuration is one in which the illumination and collection of emitted light occur from the same side of the substrate. However, transillumination, with illumination from one side of a substrate and collection of light from the other, is also possible.
A sample is held on a sample stage that is movable along the x-y axes. The sample stage has one-micron movement resolution for advancing and positioning samples with sufficient precision. An autofocus motor and encoder 5 allow accurate focusing onto a sample. Mounted on the sample stage is a twelve-position slide holder 6 allowing multiple substrates to be held and subsequently imaged.
With reference to Fig. 3, the inset 1 of Fig. 2 shows the image capture module, composed of a cooled 1.3 megapixel CCD detector 11. An upper imaging lens 12 focuses the image onto detector 11. The lower objective lens 13 collects light emitted from a substrate held on the stage and directs the light through a selected filter positioned on emission filter wheel 14. As noted with respect to Fig. 1, preferably emission filter 14 is in an infinite focus region of the collected light. The filter will function most efficiently if parallel light rays are passed through it. An emission wheel motor and encoder 16 control the movement of emission filter wheel 14.
The use of charge coupled device (CCD) detection allows the use of the rapidly advancing CCD detectors currently used in a number of photographic and movie recording devices. These provide efficient detection at relatively low cost over a large area. The system illustrated in Figs. 2 and 3 is optimized for optically detecting biological samples on glass slides. The system was developed with 15 micron resolution, which is sufficient to detect some types of individual cells (generally 30-50 microns in diameter). However, this system is not able to resolve intracellular features. This working system allows an epi-fluorescent configuration.
As noted with respect to Figs. 1-3, these configurations are exemplary and other configurations such as bottom scanning or transillumination scanning may be adopted. The 1.3 megapixel cooled CCD detector allows progressive scan interline transfer, with 50% quantum efficiency at 400 nm, 40% quantum efficiency at 550 nm, and 20% quantum efficiency at 700 nm. This detector has dark current less than 0.003 electrons/second/pixel and 18,000 e- quantum wells. This yields high sensitivity over an integration interval. In addition, it allows updates of several frames per second for analysis of rapid kinetic reactions. The autofocus is effected by image content, making the system relatively easy to use.
The use of a cooled CCD for the detector allows a tradeoff between long integration periods yielding high sensitivity on one hand, and short integration times for kinetic studies yielding lower sensitivity on the other. The individual views of each CCD exposure may be combined using a cross correlation algorithm into a single mosaic image. When analyzing an array using a pixel resolution of 15 microns, a single microscope slide can be read in 15 seconds. The current 0.13 numerical aperture lens system is not based on a microscope objective, allowing for an aperture of 25 millimeters. This lens is well matched at approximately 0.5 magnification to the large format 1.3 megapixel CCD. This in turn provides the large working distance that can accommodate the full height of a microplate. The light source is a white light source with relatively flat spectral illumination from the UV to the near infrared region, allowing a large number of optical visualization techniques to be applied.
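The cross correlation step can be illustrated with a minimal Python sketch, assuming NumPy is available; the FFT-based registration and the synthetic, randomly generated frames below are illustrative assumptions and not the actual algorithm used in the described device:

import numpy as np

def correlation_offset(tile_a, tile_b):
    """Estimate the (row, col) shift of tile_b relative to tile_a by locating
    the peak of their circular cross correlation, computed via FFTs."""
    f_a = np.fft.fft2(tile_a - tile_a.mean())
    f_b = np.fft.fft2(tile_b - tile_b.mean())
    corr = np.fft.ifft2(f_a * np.conj(f_b)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the frame size back to negative offsets
    return tuple(int(p) if p <= s // 2 else int(p) - s for p, s in zip(peak, corr.shape))

# synthetic test: two 128 x 128 views of the same scene, offset by (5, -3) pixels
rng = np.random.default_rng(0)
scene = rng.random((256, 256))
view_1 = scene[20:148, 20:148]
view_2 = scene[25:153, 17:145]
print(correlation_offset(view_1, view_2))   # (5, -3): the offset used to place the tile in the mosaic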
It is the object of the present invention to provide a device, device components, and methods which improve upon these illustrated background systems.
BRIEF DESCRIPTION OF THE DRAWINGS
Fig. 1 is a schematic view of a prior art optical analyzer.
Fig. 2 is a side view of an implementation of the optical analyzer conforming to the schematic of Fig. 1.
Fig. 3 is a side cross sectional view of a section of the analyzer of Fig. 2.
Fig. 4 is a plan view of an optical analysis system.
Fig. 5a is a side perspective view of the illumination optics.
Fig. 5b is an optical diagram of an illumination system.
Fig. 6 is an angled side view of the stage and collection and detection optics of the system.
Fig. 7 is a perspective view of the instrument housing.
Fig. 8 is an exploded view of the stage drawer and stage mounts.
Fig. 9 is an exploded view of the stage drawer.
Fig. 10 is a side view of the system showing the frame, stage and mounts, and light collection and detection optics.
Fig. 11 is a partially exploded view of a slide holder.
Fig. 12a is a perspective view of an alternative illumination system.
Fig. 12b is an optical diagram of one broad spectrum light source and one narrow wavelength additive light source with light rays from the narrow wavelength additive light source shown.
Fig. 12c is an exploded view of the LED component shown in Fig. 12a.
Fig. 13 is a side perspective view of a kinematic mount.
Fig. 14 is a cartoon showing the orientation of a sample plane with respect to the detector surface.
Figs. 15a, b are plan views of stage configurations.
Fig. 15c is the inverted view of a sample drawer and stage with x-y-z axis arms shown.
BEST MODE FOR CARRYING OUT THE INVENTION
It should be understood that the present embodiments illustrate a number of different new features and subsystems. The illustrated systems and components described have a number of features which could be adopted for a number of uses. It should be understood that the present implementations of these devices for biological arrays, cell arrays, or any other detection format are given by way of example and that a number of the methods, systems, or components may be adapted to a number of different implementations.
I. Optical Configuration
Example 1: Array reader
The components, methods and systems of the present inventions are illustrated with the following system example. In this example the system has been adapted as an array reader, either of slides or of microplate arrays. The arrays may be printed on the slide surface or at the bottom of the microplate wells. The following example is depicted in Figs. 5a, 5b, 6, 7, 8, 9, and 10.
1. Illumination Source
Illumination source 202 is preferably a broad spectrum light source. The use of such a light source allows great system flexibility. Alternately, a light source with narrow spectral wavelength bands may be used. The use of such a light source allows higher power levels at specific wavelengths that correspond to useful targets. A subsequent filter can allow for selection of a specific illumination wavelength. Unlike laser systems, whose output wavelength is within a relatively narrow range, whose cost is high, and whose electrical efficiency is low, a broad spectrum light source allows use of a single light source for a large number of different dyes. Alternately, a narrow band light source generally has several strong spectral bands, has low cost, and has very high efficiency at these wavelengths.
One preferred light source is an arc lamp. Two possible drawbacks with the use of arc lamps are the heat and UV generated and the non-uniformity of the light produced. Any light source is going to have some variation in light intensity, typically with a Gaussian distribution having a higher intensity at the center and a lower intensity at the edges. In one embodiment, a xenon arc lamp having an arc width of 1.8 mm FWHM (full width, half maximum) is used. This produces a broad spectrum light source with a relatively flat spectral profile across the visible spectrum. In the present embodiments, small arc sources are preferred to large arc sources. By small arc source is meant an arc lamp having an arc that is both small in volume and bright in intensity. The volume of an arc lamp is measured by referring to the diameter of a sphere that encompasses a certain percent of the total energy of the arc. Common energy thresholds are 50% and 95% of total energy. The intensity of an arc lamp is measured in units of watts (radiant intensity) or lumens (visible intensity).
It is one object of the current invention to collimate the rays that pass through the excitation filter. It is also an object of the current invention to maintain an excitation filter diameter that is approximately 25 mm, and certainly no more than 38 mm. This is because the cost of excitation filters increases with diameter.
It is a property of optics, known as conservation of étendue, that light which is emitted from a source of a given area into a particular solid angle will retain the product of area and solid angle as it propagates through the optical system. Thus, a small volume arc may be collimated into a smaller area beam than a large volume arc.
In order to achieve collimation with the excitation filter diameters described above, it is necessary to have an arc of diameter less than a few mm (1-3 mm) encompassing 50% of total arc energy. In order to be useful as a light source for high intensity illumination, such an arc lamp needs to have an intensity of more than 25 watts and more than 2500 lumens.
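As a rough numerical illustration of this constraint, the following sketch applies the one-dimensional form of the optical invariant to the arc size, reflector f-number, and filter diameter quoted elsewhere in this description; the formula and the 6 mm comparison arc are simplifying assumptions, not a prescribed design calculation:

import math

def residual_divergence_deg(arc_diameter_mm, collection_f_number, beam_diameter_mm):
    """One-dimensional optical invariant: d_source * sin(theta_source) ~ d_beam * sin(theta_beam)."""
    sin_source = 1.0 / (2.0 * collection_f_number)   # collection half-angle set by the reflector f/#
    sin_beam = arc_diameter_mm * sin_source / beam_diameter_mm
    return math.degrees(math.asin(sin_beam))

# 1.8 mm FWHM arc collected at f/1.3 and expanded to fill a 25 mm filter aperture
print(residual_divergence_deg(1.8, 1.3, 25.0))   # ~1.6 degrees: nearly collimated
# a hypothetical 6 mm arc under the same conditions cannot be collimated as well
print(residual_divergence_deg(6.0, 1.3, 25.0))   # ~5.3 degrees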
Another possible arc lamp is a mercury arc lamp. This provides an illumination source producing a bright illumination beam across a number of useful wavelengths, allowing enhanced illumination strength and greater sensitivity. One such source is a mercury arc lamp with a 1 mm FWHM arc diameter. A number of arc lamps (such as xenon) have output that is flat across a number of wavelengths. In contrast, mercury arc lamps have an output that is characterized by narrow spectral lines. The use of this narrow spectral illumination source produces less autofluorescence from some targeted substrates, such as glass. In addition, the system is adaptable to a number of targets, including fluorescent dyes and quantum dots.
Several different small arc lamps and their spectral and physical properties of interest are indicated below:
(Table of small arc lamps and their spectral and physical properties of interest, reproduced as image imgf000012_0001 in the published application.)
An alternate light source is a laser. While the cost of a laser is high, and the electrical efficiency is often low (semiconductor lasers may have good electrical efficiency), the spectral lines of a laser are very well matched to certain fluorescent dyes and to quantum dots. One disadvantage of lasers is their non-uniform irradiance. Generally, lasers have an irradiance pattern that is Gaussian, although many lasers (especially semiconductor lasers) may have non-Gaussian non-uniformities. Another alternate light source is an LED. The cost of an LED is very low, its electrical efficiency is high, and its spectral bands are very well matched to certain fluorescent dyes and to quantum dots. One disadvantage of LEDs is their non-uniform irradiance. Generally, LEDs have an irradiance pattern that is Gaussian, although many LEDs may have non-Gaussian non-uniformities.
1.1. Broad Spectrum Light Sources
The above light sources can be classified as either broad spectrum light sources or narrow spectrum light sources. Broad spectrum light sources approach a continuum across the visible light spectrum. Arc lamps represent one example of broad spectrum light sources.
Narrow spectrum light sources, such as lasers, have intense, discrete wavelength emission. In most cases, if a variety of target dyes having a range of excitation wavelengths are to be used, multiple lasers must be included to obtain the required excitation wavelengths. In contrast, a single broad spectrum light source may be used for a variety of targeted dyes. In the present systems, use of a broad spectrum illumination source is preferred.
1.2 Wavelength Enhancement
In addition, another alternate light source is formed by putting two or more light sources together in order to take advantage of their unique advantages. One embodiment of such a combination light source is two light sources normal to each other with a dichroic beam splitter used to transmit certain spectral bands of one light source and reflect other spectral bands of the second light source.
Another preferred embodiment of a light source combines the arc lamp or laser illumination system described herein with LEDs. The LEDs may be combined as in the alternate embodiment described above or may be arranged in single or multiple modules to illuminate the target off-axis. In this case, as illustrated in Figs. 12a-c, each module consists of an LED, a mount for all the optics of the module, a lens (or a mirror), and a filter.
The LED booster can be used with any broad spectrum arc lamp or a narrow band arc lamp to enhance the optical excitation power available for fluorescent dyes. This is particularly useful if the arc lamp is of the type that has very narrow spectral lines. However, the LED booster may also be used with arc lamps to increase the spectral intensity of any desired region. A combination of boosters of different wavelengths may also be useful. LEDs may be turned on one at a time or all at once. LEDs may also be pulsed quickly, which can be very useful in time resolved measurements.
The LED module is designed to illuminate at only one wavelength band that may either 1) fill in a gap in the wavelengths emitted by another illumination source or 2) provide greater intensity of illumination at one specific wavelength. The wavelength produced by the LED may correspond to a certain fluorescent dye. In one example, an LED module was constructed which gave 80% of the power produced by a xenon arc lamp-filter combination in the wavelength band for Cy5 dye excitation. In Figs. 12b, c, one possible arrangement of the illumination sources is illustrated. The broad spectrum light source 300 is mounted in the center, with the LED illumination sources 302a-302d arranged in a ring surrounding the broad spectrum light source and angled such that all of the light sources target a single optical element, such as the heat elimination optic or the light integrator. In Fig. 12a, four LED illuminators 302a-d are illustrated. This is shown for illustrative purposes only. In practice, between one and more than 10 LED illuminators may be used. If the LED illuminators are used to fill in for wavelengths which are less strongly produced by the central illumination source, the plurality of LEDs allows a design in which each LED may fill in for one specific wavelength. Alternatively, if additional excitation light for one specific dye is required, the LEDs may all emit at that specific dye's illumination wavelength. This allows greater sensitivity of the system. In Fig. 12b, the rays of illumination light are shown. The light from the supplemental illumination source is focused by a condensing lens onto a target optical element, such as the heat and UV filter.
A number of elements of the present system make this configuration of additional use. LED sources are commonly not uniform in their illumination patterns. However, the present system, as disclosed below, includes an integration optic that results in uniform illumination light. Control of the activation of the additional LED illuminators would be by a central controller. Regulation of this illumination source allows an initial determination of the presence of a selected dye, followed by use of the additional illumination sources to provide additional illumination strength at the specified wavelength. The described LED illuminators are low cost, relatively small (adding little to system weight or footprint), may be simply mounted, and use relatively little power.
The direct power measurement of the red LED puts it at about 80% of the Cy5 exciter xenon-driven illumination. The image showed this same relative signal intensity, but a nearly 2x increase in the signal to noise ratio for the LED illumination. This was due to the background being halved with the LED illumination. Alternate embodiments will use more than one LED to achieve significant improvement in overall power and signal to noise ratio. For example, it would be expected that the use of ten LEDs would produce an increase of power in the Cy5 excitation spectrum of 800%. The signal to noise improvement expected from such a design would be approximately the square root of 8. Due to the lower background of the LED, the signal to noise ratio could improve even more. Similar improvements of greater or lesser degree can be expected by using other LEDs with different wavelength properties. One LED that is adaptable is the Luxeon Emitter (Lumileds Lighting, L.L.C., San Jose, California).
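The gains quoted above follow from simple arithmetic; the short sketch below assumes each added LED contributes the measured ~80% of the filtered arc-lamp power in the dye band and that the measurement is shot-noise limited, so signal to noise scales as the square root of the power gain:

import math

def led_booster_gain(n_leds, power_per_led_rel=0.8):
    """Relative excitation power gain and shot-noise-limited SNR gain from n booster LEDs,
    each assumed to deliver ~80% of the filtered arc lamp power in the dye's excitation band."""
    power_gain = n_leds * power_per_led_rel   # e.g. 10 LEDs -> 8.0, i.e. 800%
    snr_gain = math.sqrt(power_gain)          # ~sqrt(8) under the shot-noise assumption
    return power_gain, snr_gain

print(led_booster_gain(10))   # (8.0, ~2.83): ~800% power and an SNR improvement of about sqrt(8)
# the lower LED background noted above could push the realized SNR gain higher still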
2. Heat and UV Filter
The illumination beam from the arc lamp is directed through the infrared (IR) and ultraviolet (UV) removal optic. This, as shown in Fig. 5b, is composed of an IR and UV reflector 204 and an IR and UV absorbing element 206 mounted in a parallel configuration such that the illumination light passes first through the reflector 204 and second through the absorbing filter, which absorbs most of the remaining UV and IR light while allowing other wavelengths to pass through. It was found that the use of simply one IR reflector was not sufficient for the required blocking of UV and IR. However, the use of a first filter that reflects back more than 80% of the UV and IR, combined with a second filter that absorbs 99.9% or more of the IR, allows removal of substantially all of the UV and IR from the system. The described configuration allows removal of more than 99.9% of UV and IR light without long-term degradation of any of the components of the optical system. The absorbing filter cannot be used alone for two reasons: 1) solarization of the glass and 2) heat stress leading to breakage of the filter. Solarization of absorbing filters is caused by excessive UV, which degrades the performance of the filter. Heat stress is caused by uneven heating of the absorbing filter leading to large mechanical stress build-ups in the filter that may cause breakage. In the present system a filter with a dielectric coating rejects 80% to 99% of UV and IR while transmitting on average 80% of visible light. The useful range of this combination of filters is the transmission of wavelengths from 300 to 800 nm (herein, below 300 nm is considered UV and above 800 nm is considered IR). However, in some applications it may be desirable to reduce this range to minimize autofluorescence of optical components and hence background levels. So in other embodiments a range of 375 nm for the UV cutoff and 725 nm for the IR cutoff is preferred. The reflective and absorptive coatings and materials may be selected to achieve these cutoffs.
In one embodiment the dielectric filter is 38 mm x 38 mm. The dielectric filter is mounted on one side of a metal mount 205 that acts as a coarse aperture for light and as a heat sink. A high temperature RTV adhesive is used to mount the filter to the metal mount. An absorbing filter made of Schott KG5 glass further blocks UV and IR wavelengths. The absorbing filter is two mm thick and 50 x 50 mm square. The absorbing filter is mounted to the same metal mount as the dielectric filter with the same adhesive. As noted, one drawback to the use of a high intensity illumination system is the production of heat and UV. The present solution allows control of IR and UV and use of high intensity, small arc lamps. However, this configuration is useful for a variety of illumination sources that produce UV and IR.
By simply increasing the thickness of the absorbing filter or increasing the doping, the amount of UV and IR blocking can be extended to 8 orders of magnitude. In addition, the reflective layer may be a coating on the layer of absorptive material in some embodiments.
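The effect of cascading the reflective and absorptive filters can be illustrated with a short, hypothetical calculation; the rejection fractions are drawn from the ranges given above, and the scaling of blocking with absorber thickness is a Beer-Lambert assumption rather than a measured result:

def combined_blocking(reflector_rejection, absorber_blocking):
    """Fraction of out-of-band (UV/IR) light removed by the reflector and absorber in series;
    each filter transmits only the fraction of out-of-band light it fails to block."""
    transmitted = (1.0 - reflector_rejection) * (1.0 - absorber_blocking)
    return 1.0 - transmitted

# a dielectric reflector rejecting 80% of UV/IR followed by an absorber blocking 99.9%
print(combined_blocking(0.80, 0.999))        # 0.9998 -> well over 99.9% of UV/IR removed
# with a 99% reflector and a thicker or more heavily doped absorber (assume doubling the
# absorber thickness roughly doubles its optical density, e.g. OD 3 -> OD 6)
print(combined_blocking(0.99, 1.0 - 1e-6))   # ~1 - 1e-8, i.e. roughly 8 orders of magnitude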
3. Homogenization
Light passing from the heat removal filters next passes into the first end of integration bar 226. This is an elongate, rectangular quartz bar. The bar is internally reflective and allows propagation of the illumination light through the bar. As the light passes through the bar, the internal reflections form virtual images at the output plane of the bar. The summation of these virtual images acts to homogenize the light. The length, the cross sectional area, and the shape of the bar are all important for its function. The non-radially symmetric shape of the bar allows enhanced homogenization of the illumination light. The length is sufficient that the various light rays entering the first end of the rod are reflected to a sufficient extent that the rays coming out of the second end of the integration bar are substantially uniform. The illumination light coming from the second end of the integration bar is substantially uniform, with greater than 80% intensity uniformity of light from locations across the second end of the integration bar. This is shown in Fig. 17a, in which a prior illumination intensity over space 510 may be compared to a new intensity. This graph is a projected illustration of the results one might expect from old-type non-uniform illumination and the new, integrated illumination.
In one example, an integration bar with cross sectional area of 6 x 6 mm and length 50.8 mm was used. A xenon arc lamp with arc size of 1.8 mm FWHM and elliptical reflector of focal length 27.6 mm, f/# of 1.3 illuminates the input plane of the bar. The cross sectional area of the bar input plane should be large enough to encompass substantially all of the energy focused by the lamp reflector. The material of the integrating bar is fused silica.
The integration bar serves a secondary function. The end of the integration bar has a rectangular cross section and the illumination light retains this profile. In the present system, the field of view as imaged by the CCD is rectangular. By matching the geometry of the illumination light to the geometry of the field of view, less light is wasted and the targets can be illuminated in more specific areas (potentially reducing photobleaching from the illumination light outside the field of view). In addition, less light on non-targeted areas will be scattered or otherwise need to be removed from the system.
Fig. 17b shows a circular illumination system. Circle 504 indicates the illumination area. Rectangle 502 shows the detector area. The area not within rectangle 502 but within circle 504 is wasted light. As the illumination area is moved to analyze new areas on the slide, some of the detected area will already have been exposed to illumination light. By illuminating with a rectangular illumination area, adjacent areas may be illuminated without exposing edge regions to additional illumination light. It is preferred that all target spots be analyzed under conditions as similar as possible. The creation of a rectangular illumination area prevents areas of the sample from being illuminated but not detected, and thus subject to photobleaching prior to a subsequent analysis.
Light entering the first end of the integration bar would substantially include the image of the arc lamp arc, and would have a brightness distribution reflective of this image. Light emerging from the second end of the integration bar no longer has the illumination profile of the image of the arc.
Optical principles governing the total internal reflection are the same as those relating to optical fibers. This allows the length and geometry of the bar to be selected such that all of the selected wavelengths of interest can be homogenized by the integration bar and the emerging light can be sufficiently uniform that the illumination light does not substantially contribute to non-uniformity in the optical signal from targets illuminated by the illumination light. The integrating bar need not have a rectangular cross section. However, as the order of the polygon formed by the bar increases, and begins to approximate a circle, the homogenizing property of the bar degrades for any fixed length. As the accompanying charts show, greater than 90% of the standard deviation in illumination variation may be removed by selecting an integration bar of various polygonal orders and lengths. A homogenizing bar of elliptical cross section may also be used.
An additional consideration is tiling of images. In selecting a rectangular cross section, the fourth order polygon allows simplified tiling. While some other shapes (such as a hexagon) also could be tiled, the use of a rectangular integration rod allows off-axis illumination while still maintaining a rectangular illumination pattern. As the sample is moved, adjacent areas can be illuminated without complex methods to ensure that no gaps are left in the optical analysis.
An additional advantage of the light homogenization is the better distribution of the light energy across the surface of optical elements. In prior systems, light is locally concentrated, and the optical element upon which such light impinges needs to be able to withstand the energy from the more intense areas of light. For example, filters could melt or degrade at areas of intense light. Adding a light integration rod to make the illumination light more uniform prior to impinging on the illumination filter greatly increases the amount of light that can be used by the system without degrading filter performance. Additional discussion of light homogenization is found in SPIE, Vol. 4768 (2002).
4. Collimating Lens
Light emitted from the end of integration bar 226 is directed through collimating lens 208. This lens collimates the illumination light to produce parallel light rays. The subsequent filters are most efficient when filtering parallel light rays. The expected effect on light intensity from parallel and angled rays is illustrated in Fig. 18. Although this lens is shown with a fixed mount in Fig. 5a, it is envisioned that the lens could be mounted on a movable mount, or a motorized movable mount, such that the distance between the end of the integration bar 226 and the lens 208 could be controlled. In this way the illumination area could also be controlled. This feature is particularly desirable in systems having variable optical resolution.
5. Illumination Filter Wheel
Parallel light rays from lens 208 next pass through an illumination filter 214a on filter wheel 210. Illumination filter wheel 210 holds a number of filters 214a-d. The use of a broad spectrum light source combined with selectable filters allows selection of a specific illumination wavelength. This in turn allows use of any of a number of dyes. Motor 212 is used to rotate filter wheel 210 to position the filters of the filter wheel in the path of the illumination light. Each of filters 214a-d is removable from the wheel by a simple mount that attaches to the wheel but may be detached for exchange of filters. Motor 212 is controlled by a system electronic control such that a user may simply select an illumination wavelength or specify a dye on the target, and the illumination wavelength will be automatically selected by rotation of the proper filter into the illumination pathway.
6. Steering Mirror
Light passing through the illumination filter is reflected by steering mirror 218 onto illumination focus lens 222. The steering mirror is mounted on mount 220, allowing the position of the mount to be adjusted. Adjustment of the steering mirror causes the image formed by the illumination focus lens to move. The steering mirror is thus used for fine alignment of the image produced by the illumination focus lens to the desired field of view.
7. Illumination Focus Lens
Light directed by the steering mirror is directed through the focusing lens 222, which focuses the illumination light onto the target on the sample stage. In one example, the illumination focus lens has a diameter of 38.10 mm, a focal length of 51.6 mm, and is made of BK7 optical glass. The focal length of the illumination focus lens is chosen to produce an image of the output face of the integrating bar at the desired magnification on the field of view. With off-axis illumination, a square object will be imaged as a rectangle, with the aspect ratio of the rectangle determined by the angle that the illumination beam forms with the optical axis of the CCD-lens system. Illumination off-axis also produces some non-uniformity of light intensity in the field of view. The degree of this off-axis non-uniformity is also dependent on the angle of incidence of the illumination beam. The non-uniformity is greater at higher angles of incidence. For this reason, grazing incidence illumination is not desirable. In one example, the illumination beam angle of incidence is 45 degrees.
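The geometric effect of off-axis illumination can be sketched with a simple foreshortening model, an approximation that assumes a flat sample and ignores lens aberrations: the square output face of the integration bar maps to a rectangle elongated by roughly 1/cos(theta) along the plane of incidence.

import math

def footprint_elongation(incidence_deg):
    """Approximate stretch of the illumination footprint along the plane of incidence
    when a square image is projected onto the sample at the given angle of incidence."""
    return 1.0 / math.cos(math.radians(incidence_deg))

print(footprint_elongation(45))   # ~1.41: a 6 x 6 mm bar face maps to roughly 6 x 8.5 mm
print(footprint_elongation(75))   # ~3.86: grazing incidence stretches the footprint and
                                  # concentrates the non-uniformity, which is why it is avoided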
8. Mounting Board
All of the elements described in sections 2-7 above, as shown in Fig. 5a, may be fixed in position on a mounting board 230. The elements would need to be initially aligned, but subsequently would not require adjustment.
The above combination of components provides an illumination system that is highly efficient and provides light of a geometry that may be selectively designed to illuminate only the specified area of the target field of view. The combination of elements makes possible the use of intense arc lamp light sources having small arc widths and producing intense illumination light. For each optical element included in an illumination system, there is some cost in light loss, which varies depending on the system component. In the present design, the number of optical elements is sufficiently few that loss is minimized. In addition, the design is quite compact. For example, in prior systems in which an optical fiber is used for transmission of light, two additional lenses are required, to focus the illumination light into the fiber and to focus the light emitted from the fiber. Such a configuration also requires considerable space. In the present device, highly uniform off-axis illumination is produced with a minimal space requirement. The combination of a small (under 2 mm) arc lamp and an integration bar allows a system in which unfocused, broad spectrum light is homogenized and directed onto a sample. This results in lower cost, a smaller footprint, and fewer optical constraints than prior systems that require a focusing lens for broad spectrum illumination using an integration optic relying on internal reflection homogenization.
9. Sample Stage
The illumination light is focused by the illumination lens onto a sample substrate on the sample stage. With reference to Figs. 6 and 8, the sample drawer 240 is configured to hold a microplate or a microplate-dimensioned device.
9.1 Slide Holder
Presently, many arrays are printed on microscope slides. A holder for up to four slides may be adapted to the microplate format. In its simplest form, such a holder is simply a microplate frame with the four sides of the frame having a lip and a groove, pins or other means for positioning the slides into place.
A slide holding frame can be a separate device that is able to both 1) securely hold the slide in place (assuming a fixed position in both translational and rotational axes with respect to the mechanical sample holder of the instrument) and 2) allow the slide to be manipulated by automation robotics that have been configured to process devices having specific dimensions.
One slide holder is described in U.S. Patent No. 6,118,582, hereby expressly incorporated by reference herein. This is a slide holder for holding one or more slides, having a generally rectangular frame and at least one slot for receiving one slide. Flexible retaining latches and retaining grooves are provided at each of the slots for facilitating the securing of the slides.
This product has a number of drawbacks. The array printed on the slide pictured in this patent has a rather large spot size, seemingly several millimeters in diameter for each spot. In biological arrays, array spots are rather small, commonly less than 200 microns across, and often smaller. Elements of the present system allow detection of spots which may be only several microns wide. Given this spot dimension, the positioning tolerances are quite small. In the above patented device, there is nothing to ensure repeatable horizontal positioning of the slide in the slide retaining slot. Some slides may be slightly larger or smaller (within manufacturing tolerances), so the slot would have to accommodate these slight differences by being sufficiently large. However, in an oversized slot the slide could move around, making imaging more difficult (especially tiling together different views). Ideally, the slide holder would provide a biasing force such that the slides are fixed on the holder in translational and rotational directions. This would force the slide 1) down onto the frame; and 2) against the sides of the frame. With reference to Fig. 11, a frame 503 has a plurality of slide holding regions 507 defined by pegs 509. The pegs define a region confining the back and sides of the slide to a specific horizontal and vertical position on the holder, with the slide resting on a lip on frame 503. A winged biasing clip secures the slide into place. Side bars 504 are attached by arms 515 to central bar 517 of the winged biasing clip. A bolt 501 is secured through washer 502, through bar 517, and attached to frame 503. When the winged biasing clips are secured to frame 503, the side bars rest in groove 511 and push slides against surface 513. When a slide is inserted into the holder, arms 515 flex away from the center of the frame. The biasing force both constrains movement of the slide generally in all three directions (along the translational and rotational axes) and also specifically presses the slide forward against the pins 509 on the side opposite the side bars 504. A single winged structure provides a simple biasing force for two slides using a structure (the winged biasing clip) that is about as tall as a standard microplate. The pins may be selected to be as high as the winged biasing clip, to more easily allow for stacking of the devices, as in the magazine of a robotic processing device.
This holder allows four slides (e.g., nucleic acid array slides) to be held and analyzed together. The device may be simply manufactured of metal or plastic or any other suitable material. The force of the winged biasing clip is sufficient that, even during automated movement of the holder either by a robotic loader or by the stages of the instrument, the slides maintain a fixed position.
An alternative holder/adapter could be used to allow scanning of a variety of objects, including gels, blots, or other samples. Such a sample would rest on a targeted substrate, such as a glass plate, positioned on the edges of the sample drawer. The illustrated holder may be tailored to hold slides made to various international or national standards, other glass or plastic substrates, or other non-slide shaped substrates, such as custom protein chips.
9.2 Sample Drawer
Returning to Figs. 6, 8, and 9, the sample stage is shown in various views. The sample drawer includes a plurality of pins 242 to secure the sample substrate in a fixed location. A biasing bar 250 exerts a force on the device held on the stage to press the device against the pins and hold the sample in place during scanning. Thus the sample assumes a fixed position on the sample stage, as biasing bar 250 forces the sample substrate against pins 242 and prevents the sample substrate from moving during scanning. The sample is thus at a fixed position, "corner crowded" into the sample drawer in a unique position as to both rotational and translational axes. As is disclosed below, the sample drawer 240 is mounted to the side of the z-axis arm. The sample drawer may then simply be unbolted and replaced with a new sample holding device if such a change is required. The sample mount, which is described below, allows positioning of the drawer in a fixed rotational position.
With respect to Fig. 9, the mounting of the components of the sample drawer is shown. A z-axis mounting bracket 460 is attached to drawer top piece 452, which is joined to drawer bottom piece 450. A screw 462 slides through a hole in top piece 452 and is affixed into bottom piece 450. A spring 464 is mounted annularly about screw 462 such that the spring presses against the head of screw 462, producing a biasing force of piece 450 against piece 452. On top piece 452, three V-shaped grooves 454, 456, 458 each receive the end of a round tipped screw that extends through piece 450. This functions as a kinematic mount, ensuring that the stage is not over- or under-constrained.
9.3 Sample Drawer Mount
As shown in Figs. 6 and 8, the sample drawer 240 is mounted on a z axis 244 to allow selective positioning along the z axis. The z axis is in turn mounted on an x axis 246, which is mounted on the y axis 248. The x axis, y axis and z axis are mechanically linked to motors 254, 252 and 243 respectively, allowing the drawer to be moved in three translational dimensions. These motors are precision stepper motors allowing 1 micron increment movement of the stage in each of the x-y-z directions.
9.4 Stage Mount Adjustment
One prospective use of the present system is for analysis using large format CCD arrays. However, one problem with the use of such arrays is that the detection surface is not flat with respect to the housing in which the detector is mounted. This is one of the reasons such large format detectors have not been adapted for the analysis of targets that are a few dozen microns in diameter: the positioning and movement of the stage and focusing on small targets are challenging when the detector surface is angled with respect to the stage. Thus the stage must be aligned with the detector not only along its x-y-z axes (z axis to focus, x-y coordinates to select the area on a sample substrate). It also must be adjusted in rotational yaw/pitch/roll coordinates to ensure that the surface of the detector and the sample drawer are aligned. Given the very high resolutions contemplated for various embodiments and the complexity of stitching together various views (e.g. adjacent areas imaged), misalignment between the sample and the CCD can lead to out of focus views, longer focusing times, and views that are extremely difficult to compare, overlay, stitch or otherwise process and analyze.
Ideally, the sample drawer would be mounted on arms such that the sample substrate analyzed remains close to the focus plane throughout all travel on the mounting arms. For single-axis motion along the x or y axis, the pixels in the detector should see the travel as purely vertical or horizontal with respect to the CCD.
Any optical mount's position can be defined uniquely in terms of six independent coordinates: three translations and three rotations with respect to some arbitrary fixed coordinate system. A mount is said to be kinematic when the number of degrees of freedom (axes of free motion) and the number of physical constraints applied to the mount total six. This is equivalent to saying that any physical constraints applied are independent (non-redundant). A kinematic optical mount therefore has six independent constraints.
The realized solution is to mount the sample drawer to the stage using adjustable kinematic mounts. The advantages of a kinematic mount are increased stability, distortion-free optical mounting, and, in the case of a kinematic base, removable and repeatable re-positioning. Features of the mounts are illustrated in Figs. 8, 10, and 13. The y arm is mounted on stage 280, which is fixed within the housing of the system. The plate also includes kinematic mounts for adjustment of the yaw/pitch/roll positioning of the stage. Adjustments for the stage and drawer are made at the kinematic mounts. These mounts include round tip set screws with the tips of the screws positioned into V-shaped grooves extending on a surface. With reference to Fig. 10, the system is mounted within a housing 292. The housing is secured to an internal frame consisting of beams 350, 354, 352, 354, 358, 360. The stage 280 is mounted on beams 350, 354 and 352. One style of kinematic mount is a set of three V-grooves and three oval tip screws. A close up of one V-groove and oval tip screw is shown in Fig. 13.
With reference to Fig. 13, each of the three contact points includes two screws, a screw with a soft flat tip 388 and a round tipped screw 382, which extend through brackets 384 and 386 respectively. The flat tip screw 388 is tightened against the flat surface of stage 280, while the round tip screw is fit into V-shaped groove 390. By adjusting these screws, the roll, pitch and height of the stage can be adjusted to match the plane of the detector surface. The bottom view of Fig. 15 provides another view of the V-shaped grooves.
With respect to Fig. 8, a screw extending from arc-shaped grooves 402, 404 and into arm 248 allows adjustment of yaw rotation following the pitch and roll alignment. The rotational alignment usually needs to be done only once, at the setup of the instrument. At that time, the stage would be in a fixed position with respect to the detector surface. At the same time, the movement of the arms would be calibrated to ensure that the drawer moves the sample in relation to the detector surface such that the movement is not skewed and is simply an x-y translation of the sample substrate relative to the detector surface. Given the high resolution (4 micron) of the system, and the large targeted area, which requires several views to image a substrate (such as a microplate), misalignment between the sample and the area array detector can result in areas of the sample substrate plane being out of focus. Variation in focus makes compilation of images into a single image problematic.
The detector surface commonly is not level with respect to its housing and hence not orthogonal to the optical axis when viewed at the depth of field required for cell detection or microarray analysis. The stage must be adjusted so that the angle of the detector surface matches as closely as possible the angle of the targeted substrate to the optical axis. This requires fine control of yaw, pitch and roll calibration of the sample stage. The present kinematic mounts allow for such calibration.
9.5 Arm Configuration
As shown, the x and y arms are stacked over each other, and the sample stage is offset from these two components. This is another feature that allows a more compact design, reducing the footprint of the instrument.
Alternative sample stage configurations could be possible. For example, Fig. 15a shows a configuration in which the sample holder could be positioned at the end of a long y positioning arm. However, this greatly increases the distance the stage must travel to analyze the sample and project the sample from the housing. In another alternative shown in Fig. 15b, a structure on the sample stage, such as an extendable drawer, could be used for access to the sample from outside of the instrument.
9.6 Instrument Housing
With respect to Fig. 7, the exterior housing of this embodiment of the instrument is shown. The stage and arms are configured to allow the sample holding stage to be extended through door 290 in housing 292. When this door is closed, the housing is optically sealed, allowing assay of devices held on the sample stage without interference from outside light. In addition, this access allows compatibility with standardized robotics that are able to manipulate microplate substrates. The sample drawer may be extended from the instrument housing such that the sample substrate has at least half an inch of clearance around it. This allows a robotic gripper to reach down and grab the sample holding device. In addition, there are some robots that reach forward to grab the sample holding device. The sample holding drawer may also allow the sample substrate to be grasped from the front or side. The use of robotics enhances the throughput of the imaging system. The present described sample stage, sample drawer and housing are adaptable to a range of sample holding devices. These would include slides made to various national/international standards, SBS standardized footprint multiwell plates, non-standardized plates, and customized sample holders. One such holder was described above. Other holders could include a microplate-dimensioned gel or membrane, or other similar device.
10. Focus
As noted, the present system includes z-axis stage movement that allows the sample to be moved up and down. This is similar to microscope focus. This focusing means allows for constant magnification in imaging a sample.
The use of z-axis focusing also aids in the ability to adjust to samples at varying heights. A microplate typically has a frame, with a substrate positioned within that frame at a specific depth. In Fig. 16b, Dtot is the thickness of the microplate, consisting of the well depth (Dwell), the thickness of the well bottom, and the gap (Dgap) between the bottom of the substrate making up the well and the bottom of the microplate. Dfoc is the distance required to focus to the targeted plane, the microplate well bottom. In prior systems, the ability to adjust to the differing thickness of samples was quite limited, at times requiring shims to adjust outlier heights of a sample holding device, such as a microplate, to bring the sample holding device within the focus frame.
In one possible implementation, focusing of the sample would be done by movement of the detector lens with relation to the sample. The sample stage/sample drawer would be in a fixed location during focusing. This results in variable magnification.
In another, preferred design shown in Fig. 6, the samples are moved for focusing, with the samples in focus at one plane with respect to the detector surface. A cartoon configuration is shown in Fig. 16a. This is the z-stage focusing allowed by the z-arm movement in Fig. 6. The samples, which are always the same distance from the detector, will all be detected at the same magnification (for the same sample types).
Another result of this change is the ability to focus through a greater depth. The present design allows a greater than 10 mm z-axis focus range. This is rather important if the user wants to focus on a number of different targets, including microplates, that have a variety of distances to the target plane on the device on which the sample is deposited. One implementation allows focusing in the z-axis over 12.7 mm.
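A small sketch of the plate-geometry bookkeeping from Fig. 16b follows; the plate dimensions below are hypothetical values chosen only to show that the spread of target-plane heights fits comfortably within the stated z travel:

def target_plane_height_mm(d_total, d_well, d_gap):
    """Height of the inner well-bottom surface above the plate's resting plane,
    using D_total = D_well + (well-bottom thickness) + D_gap as in Fig. 16b."""
    bottom_thickness = d_total - d_well - d_gap
    return d_gap + bottom_thickness

# hypothetical sample geometries (mm); a plain 1 mm slide is treated as its top surface
targets = {
    "microplate A (thin bottom)": target_plane_height_mm(14.4, 11.0, 2.5),   # 3.4 mm
    "microplate B": target_plane_height_mm(14.4, 10.0, 3.0),                 # 4.4 mm
    "microscope slide surface": 1.0,
}
spread = max(targets.values()) - min(targets.values())
print(targets)
print(spread, "mm of required refocus travel, well within a 12.7 mm z range")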
11. Light Collection Optics
Targets held on a substrate on the sample stage are illuminated by the illumination system, exciting fluorescence from the targets. With reference to Fig. 6, this emitted light is collected by objective lens 270, which collimates the light and directs the collected emission light through a filter 264 on filter wheel 260. The filter wheel is controlled by motor 262. The light directed through filter 264 then impinges on the detector focusing lens. The filter removes both light of the excitation wavelength and light emitted from the sample that is not of the selected wavelength (for example, autofluorescence from the sample, light from a different dye, etc.).
In one embodiment, the objective lens is a 50 mm F/1.8 lens, such as the Nikon AF D 520 lens. The detector lens is an 85 mm F/1.8 lens, such as the Nikon D AF (620). This lens is mounted to the detector using a photographic lens mount. The detector lens is mounted 1.615 inches from the detector with the back focal length of the lens facing the detector. The objective lens is mounted approximately one focal length away from the sample target with the back focal length of the lens facing the sample. In this embodiment, the objective lens, detector lens and detector are all fixed in position. Focusing is by stage movement. These elements would need to be initially aligned during initial instrument focus, and subsequently all focusing would be by movement of the sample stage.
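A back-of-the-envelope sketch of the resulting magnification, assuming the objective and detector lens behave as an infinity-corrected relay so that magnification is approximately the ratio of focal lengths:

def relay_magnification(objective_focal_mm, detector_lens_focal_mm):
    """Approximate magnification of an infinity-corrected pair: the objective collimates
    light from the sample and the detector lens refocuses it onto the detector."""
    return detector_lens_focal_mm / objective_focal_mm

m = relay_magnification(50.0, 85.0)     # ~1.7x for the 50 mm / 85 mm pair described above
detector_pixel_um = 6.7                 # detector pixel pitch quoted later in this description
print(m, detector_pixel_um / m)         # ~1.7 and ~3.9 um per pixel in object space (~4 um)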
In alternate embodiments, higher resolution may simply be achieved by selection and positioning of the lens. In some systems, multiple lenses mounted on a turret would allow for a wide range of sample magnifications. This may range from the 15 μm/pixel resolution disclosed in U.S. Patent No. 6,271,042, to the 4 μm resolution of the above embodiment, to a less than 1 μm resolution for contemplated systems. The above-described systems for alignment of the stage and uniform illumination make such resolution imaging feasible. The two sets of filters, the illumination filter and the emission filter, allow a single light source to be used with this system for a variety of targeted dyes. To detect a specific optical label, the filter corresponding to the excitation wavelength is selected for the illumination filter and the filter for the emission wavelength is selected for the emission filter. The lenses which collimate the illumination light prior to reaching the illumination filter and the collected light prior to reaching the detection filter aid in the efficiency of the filters. Such filters function most efficiently in areas of parallel light rays, as previously described with reference to Fig. 18. Efficiency is measured in two ways. First is the transmittance of the selected wavelength. Second is the blocking of nonselected wavelengths. Parallel rays enhance both of these measures of efficiency.
12. Detector
Emitted light focused by the detection focusing lens is detected by the area array detector 274. This detector is a cooled 4 megapixel CCD, with 50% quantum efficiency at 400 nm, 40% quantum efficiency at 550 nm, dark current less than 0.1 electron/sec/pixel, and 30,000 electrons per well. Such a detector has the following advantages.
Unlike laser scanners, the CCD or area array scanners defined in section 2, used with the rectangular area illuminator disclosed above, can detect an entire area of a sample substrate at one time. This allows both repeated analysis with short integration times for kinetic studies, and longer integration times to enhance sensitivity. Views acquired with either integration time can be combined using a cross correlation algorithm into a single mosaic image. It is generally agreed by those who are experts in microarray analysis that somewhere between 10 and 20 pixels are required to oversample a microarray spot. Thus, with a spot diameter of 100 microns, a pixel size of at most 10 microns would be required and 5 microns would be preferred. The need for oversampling is to assess the quality of the spotting and binding by using morphological factors within the spot.
In order to image a slide of 25 mm by 75 mm at a pixel resolution of 4 microns with a 1 MPixel detector, approximately 100 views must be tiled together. In order to image a microplate of standard size (~85 mm x 125 mm) with the above pixel resolution and detector type, approximately 500 views must be tiled together. By pixel resolution is meant the linear dimension of a pixel on the detector projected to object space. In this example the detector pixel resolution is approximately 6.7 microns; when projected to object space it is approximately 4 microns. If the detector is expanded to 4 MPixels, and the pixel resolution is held at 4 microns, then the number of views required to cover a slide or plate is reduced to approximately 25 views and 125 views respectively. The illumination area is the area of the detector projected into object space. For example, if the detector has a pixel size of 6.7 microns, and a pixel resolution of 4 microns in object space, then a detector with 1000 x 1000 pixels will produce an illumination area of 4 mm x 4 mm. In practice, it is desirable for the illumination area to be slightly larger than that strictly required to map to the detector, for purposes of ensuring that a region of the sample is never under-illuminated and for manufacturing efficiencies. Thus, in the above example an actual illumination area of 4.5 mm x 4.5 mm would be preferred. Within such an area an excitation filter for the Cy5 dye that has a bandpass of 70 nm with a center wavelength at 540 nm, together with the illuminator herein described, would pass a radiant power of at least several hundred milliwatts and preferably between 500 mW and 1 W. Actual illumination intensity will vary by the type of filter used, and the type of arc lamp used (broad or narrow spectrum). Typically, such an illumination area would contain from 400 to 1600 spots of 100 micron diameter.
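The view counts and illumination area above follow from straightforward arithmetic; the sketch below reproduces it, with the caveat that tile counts use simple ceiling division and assume no overlap, so they come out somewhat higher than the rounded approximations quoted above:

import math

def field_of_view_mm(n_pixels_x, n_pixels_y, pixel_res_um):
    """Detector extent projected into object space at the given pixel resolution."""
    return n_pixels_x * pixel_res_um / 1000.0, n_pixels_y * pixel_res_um / 1000.0

def views_to_tile(width_mm, height_mm, n_pixels_x, n_pixels_y, pixel_res_um):
    """Number of non-overlapping fields of view needed to cover a target area."""
    fov_x, fov_y = field_of_view_mm(n_pixels_x, n_pixels_y, pixel_res_um)
    return math.ceil(width_mm / fov_x) * math.ceil(height_mm / fov_y)

# 1 MPixel (1000 x 1000) detector at 4 um object-space resolution -> 4 x 4 mm field of view
print(views_to_tile(75, 25, 1000, 1000, 4))    # 133 views for a 25 x 75 mm slide (~100 above)
print(views_to_tile(125, 85, 1000, 1000, 4))   # 704 views for an 85 x 125 mm plate (~500 above)
# 4 MPixel (2000 x 2000) detector at the same resolution -> 8 x 8 mm field of view
print(views_to_tile(75, 25, 2000, 2000, 4))    # 40 views (~25 above)
print(views_to_tile(125, 85, 2000, 2000, 4))   # 176 views (~125 above)
# illumination area: the projected field of view plus a 0.5 mm margin in each dimension
fov_x, fov_y = field_of_view_mm(1000, 1000, 4)
print(fov_x + 0.5, "x", fov_y + 0.5, "mm")     # 4.5 x 4.5 mm, as in the example above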
13. Controller
A single controller can control all moving elements (including all stepper motors and filter wheel motors) and obtain a detector signal.
Alternately, a controller may be dedicated to each motorized component. In this configuration each motorized component has a slave controller which is controlled by a single master controller. Limit switches are used to indicate the end of travel or start of travel for a motorized component. Microswitches may be used as limit switches. Microswitches can provide accuracies of perhaps tenths of a millimeter in the starting or ending position of a motorized component. Microswitches may also be used to indicate special positions along the travel of a motorized component. For example, a microswitch may indicate the location of a filter in a filter wheel. Use of microswitches enhances the overall accuracy and safety of the motion control system.
In the XYZ stage, microswitches (such as those sold by CappUSA) are used to verify one or both ends of travel of a stage. In the excitation filter wheel, one switch is used as a home switch (the "zero" filter position, which is actually "lights off") and another switch is used to index filter positions 1-8. In the emission filter wheel, only one microswitch is used for the home position and an encoder is used to track indexing to high positions. In an alternate embodiment the emission filter wheel could use two microswitches and no encoder.
The controller would be able to effect the following processes.
i. Position sample drawer to receive sample holding device from outside system housing.
ii. Receive sample holding device on sample drawer, retract sample drawer and close drawer into system.
iii. Obtain plate or sample holding device information from user interface, plate identification scanner (e.g. bar code reader, RFID reader, etc.), or from prescan.
iv. Identify from device information associated dyes, excitation and emission wavelengths, target size, target area, resolution required, etc.
v. Select excitation and emission filters.
vi. If variable magnification, set magnification.
vii. Position sample drawer in position for initial scan.
viii. Illuminate area of sample and detect resulting emitted light.
ix. If required, change illumination and detection filters and detect second dye.
x. If required, change magnification and analyze at a higher resolution.
xi. Move sample drawer such that additional area of the sample is in view.
xii. Repeat steps v-xi for each area to be analyzed.
xiii. Process image, tile images, analyze data.
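One way such a master controller might sequence these steps is sketched below; the class, the Stage/FilterWheel/Detector objects, and their method names are hypothetical illustrations and do not describe the actual control software:

from dataclasses import dataclass

@dataclass
class DyeSettings:
    excitation_filter: int   # position on the excitation filter wheel
    emission_filter: int     # position on the emission filter wheel

class ScanController:
    """Hypothetical master controller coordinating stage, filter wheels, and detector."""
    def __init__(self, stage, ex_wheel, em_wheel, detector):
        self.stage, self.ex_wheel, self.em_wheel, self.detector = stage, ex_wheel, em_wheel, detector

    def run(self, dyes, field_positions):
        self.stage.extend_drawer()            # step i: present the drawer outside the housing
        self.stage.retract_drawer()           # step ii: receive the sample and close the drawer
        # steps iii-iv happen upstream: plate info (UI, bar code/RFID, prescan) yields `dyes`
        tiles = []
        for x, y in field_positions:          # steps vii and xi-xii: step over each field of view
            self.stage.move_to(x, y)
            images = {}
            for name, dye in dyes.items():    # steps v, viii-ix: set filters, illuminate, detect
                self.ex_wheel.select(dye.excitation_filter)
                self.em_wheel.select(dye.emission_filter)
                images[name] = self.detector.acquire()
            tiles.append(((x, y), images))
        return tiles                          # step xiii: tiling and analysis happen downstream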
Example 2: Alternative reader
With respect to Fig. 4, an illumination source 20 produces an illumination beam 24. The illumination source may be an arc lamp, an LED or another illumination source. The use of a broad spectrum illumination source allows bright illumination of the sample and selection of a broad spectrum of illumination, allowing selection of a range of wavelengths. A single source of illumination may be used, or multiple combined illumination light sources may be used. The illumination beam 24 passes through a heat elimination element 22 and onto condensing lens 26. Heat elimination element 22 is a hot mirror or other optical element that reflects, absorbs, dissipates or otherwise prevents UV and/or IR wavelengths from being transmitted to the other elements in the system.
Condensing lens 26 condenses the illumination light into an integrating bar 28. Light traveling through integration bar 28 is homogenized such that light emitted from the end of this bar is uniform over both the entire area of illumination and over time. For these purposes, uniform will mean removing most of the variation which would ordinarily be observed from arc lamp illumination.
Light emitted from the end of bar 28 is focused by relay lens 30 and directed onto a spatial light modulator 34 by steering mirror 32. The spatial light modulator allows selection of individual light pixels to be reflected onto the sample substrate.
A spatial light modulator is any optical device able to selectively modulate light. In the present example, the spatial light modulator receives illumination light and selectively reflects this light onto the sample surface. The spatial light modulator is pixelated and produces an array of illumination light pixels. This allows the light at each pixel either to be relayed onto the sample or to be blocked, in whole or in part. Such a pixelated array may be a multiple aperture array, a reflective array, a spatial absorbing array, or other such device. These devices include digital micromirror devices, ferroelectric liquid crystal devices, electrostatic microshutters, micro-opto-electromechanical systems (preferably with high contrast abilities) and other similar devices. A number of relevant devices are described in U.S. Patent Nos. 5,587,832; 6,663,560; 6,657,758; 6,483,641; and 6,388,809, hereby incorporated by reference herein. One advantage of the spatial light modulator is the ability to selectively control the transmission of illumination light to individual pixels. This can be done by calibration of the spatial light modulator and detection of the illumination light, either directly or indirectly using a target surface responsive to the illumination light, by an area array detector. This is further described in the above patents.
The combination of the light homogenization optics and the spatial light modulator allows illumination of a selected area of the sample surface, with the pixels each illuminated with enhanced uniformity. Selection of a desired illumination pattern (for example: 1024 by 768 pixels) allows assay of a selected surface while not exposing non-selected areas to potential photobleaching from illumination light.
The light directed from the spatial light modulator 34 passes through lens 36. This lens collimates the illumination light. This illumination light is directed through excitation filter 38. This filter may be mounted on a filter wheel holding a plurality of filters, allowing selection of the illumination wavelength by filter selection. The filter is preferably positioned in an area of parallel light rays, which enhances the efficiency of the filter. The combination of a filter wheel and a broad spectrum illumination source (such as an arc lamp) allows relatively high intensity of illumination and flexibility in selection of illumination wavelength. The light passing through filter 38 passes onto dichroic mirror 40. Mirror 40 reflects light of the illumination wavelength onto the objective lens 42, which images the illumination light pattern onto a sample surface 44 on sample holder 46.
The illumination light excites fluorescence from targets or otherwise optically activates targets (e.g. array spots, cells, beads, etc.) on the sample surface. This emitted light is collected by the objective lens, which functions as a high numerical aperture light collector. This light is directed onto the dichroic mirror 40, which transmits light of the emitted light wavelength. The illumination light and the collected light wavelengths are sufficiently different that they may be optically separated. In addition, it is preferred that the dichroic mirror, like the filters, be placed at a location where impinging light has parallel rays, allowing these elements to function at greatest efficiency.
The collected light 60 then passes through filter 52 and is focused by lens 54 onto an area array detector 56. The detector used may be any detector that is able to image a two dimensional area. Such detectors could include a charge coupled device (CCD), photodiode array, charge injection device (CID), complementary metal oxide semiconductor (CMOS) or any other detection device. A number of the elements may all be controlled by a central electronic control 50 (e.g. a microprocessor). These elements would include the illumination source 20, the filter wheels and other rotating mounts, the spatial light modulator 34, the sample stage motor 48 and the detector 56. In addition, any add-on robotics could also be controlled by the same device.
Alternative Embodiments
A person of ordinary skill in the art would understand that a number of modifications of the present invention are possible. One of the embodiments shows top down scanning of a substrate. This may be the preferred imaging configuration for scanning slides or other substrates in which the sample is positioned on the top of the substrate. Because the illumination light does not have to pass through the substrate on which the sample is deposited to reach the sample, background from the substrate is minimized. Bottom scanning may also be used and may be preferred for multiwell plates. For such plates, the sides of the wells are an obstacle to illumination and collection of light in a top imaging configuration. The sides of the well limit the numerical aperture from which light can be collected from the top. In addition, the working distances of the lenses may be too short to accommodate the depth of the well. The well sides would then prevent focusing on the well bottom entirely. In addition, the illumination light would need to reach the bottom of the well. The required angle would probably eliminate the ability to illuminate with off axis illumination, and would certainly restrict the angle of illumination. These and additional reasons may make bottom scanning preferred for analysis of microplate targets.
Bottom reading allows shorter working distance from the sample to the objective lens for multiwell plates, increasing the numerical aperture of light collected from the well bottom, enhancing sensitivity.
In addition, this configuration allows isolation of the sample into a sample holding chamber that can be robotically accessed to allow for automation and increased throughput. Furthermore, in this configuration the wells may be covered at their openings during analysis, reducing the risk of sample contamination and allowing the cells to be maintained at a more uniform temperature and environment .
Top reading systems have some fundamental limitations. For example, top reading systems must read microplates that do not contain any liquid. This is because the liquid in the microplate acts as a lens and refracts the light entering the well. Minute differences in the height of the columns of liquid in the wells also lead to differences in optical path length from well to well. Finally, the light that provides excitation of the target sample must pass through the liquid, causing autofluorescence, and the fluorescence emitted by the target sample must also pass through the liquid, causing additional autofluorescence. Also, some of the fluorescence emitted by the target sample in a top reading system may be absorbed by the liquid in the well, causing errors in quantitation.
Bottom reading systems have the advantage of being able to combine several different targets (array elements, cells, and beads) within a liquid-containing well. This is a very convenient and cost-effective format for high throughput research. This also makes for a highly versatile reading system.
Top reading systems may be employed, however, when the target sample is on a dry substrate. In this case, there is no optical surface between the sample and the collection optics, which reduces working distance and autofluorescence.
In another alternative, the present system is illustrated in an epi-illumination configuration, in which both illumination and collection of emitted light occur from the same side of the sample substrate. However, transillumination systems, in which one side of a transparent substrate is illuminated and emitted light is collected from the other side, are also contemplated.
In a similar manner, in Example 2 the illustrated illumination has the same optical axis as the collected light. In this on-axis illumination, positioning the pixels of the light from the spatial light modulator is simplified. However, off-axis illumination as shown in Example 1 is also contemplated for some applications, and may reduce reflection and scattering of illumination light into the collection optics (which otherwise must be removed by beam splitters or filters).
The disclosed elements make possible a number of different functions. These include a number of components, systems, and methods, some of which are listed below. 1. The spatial light modulator and the integration bar combine to provide illumination that may be adapted to the geometry of the target. The selected illumination area is illuminated with much greater uniformity than is possible using broad spectrum illumination light that is not conditioned. This results in much more consistent data. If the illumination light varies, the light emitted by the excited targets will also vary. It is expected that an illumination source would vary, both over time and over an illuminated area. By using the disclosed integration means in combination with a spatial light modulator, the projected pixels can illuminate targets with minimal variation. 2. The present system has a very large viewing area and is adaptable to analysis of multiple substrates in a single viewing. These arrays can be slide based arrays, multiwell plate arrays, or any other analytical substrate. The illumination can be configured so that only the substrate areas of interest are illuminated.
3. Illumination system. The present illumination system is highly efficient for a number of different reasons. First, the uniformity of the light better distributes light over optical elements like filters. Thus more illumination power may be used without degrading optical elements. Second, elimination of almost all of the UV and IR also allows much more illumination light to be used without degrading optical elements. The use of supplemental LED light modules allows the addition of light at specific wavelength(s) . This allows, for example, one of the dyes to be illuminated at significantly greater illumination strengths.
In addition to the efficiency of the system, the light produced is both much more uniform than the illumination produced by the system in the background section and is shaped to match the array and detector geometries, both of which are commonly square or rectangular. The light integration bar is a polygon of a specific order. This order may be selected to better match the uniformity requirements of the overall system, and/or to better match the tiling pattern. For example, there may be advantages to creating hexagonal shaped tiles rather than square tiles. It is known that hexagonal tiles are an effective pattern for tiling. In addition, the vignetting pattern caused by all lenses is circularly symmetric. This results in a bright central region on the detector with darker corners. A hexagonal tiling pattern may better match the vignetting pattern that is projected by the lens system on the detector. In such a system, it may be preferred to use an integration bar with a hexagonal cross section. 4. The IR and UV removal optic is described as used in an array reader optical analyzer, but is seen as advantageous for a range of applications in which elimination of IR and UV light is desired. This optical element may be modified to remove almost all of the unwanted wavelengths, while still transmitting most of the desired wavelengths. For example, many absorbing filters are known to autofluoresce even when illuminated at visible wavelengths. By reducing the bandwidth of light that falls on the absorbing filter with a reflecting interference film, autofluorescence is thereby reduced.
5. The illumination optics, including the IR/UV removal component, the integration bar, the collimating lens, the filter or filter wheel, the steering mirror and the illumination focusing lens may all be mounted on a single board mount in fixed positions that would require adjustment only at the set up of the instrument. In addition, in the off axis illumination mode, many different types of arc lamps with unique spectral qualities could be distributed in more than one illuminator system to illuminate the sample target at different angles of incidence, or at the same angle of incidence but distributed at different angles of rotation in the target plane. 6. The stage is mounted on a kinematic mount allowing the plane of the stage to be aligned parallel with the plane of the detector surface. Again, this is done once at instrument set up. The sample drawer also has a kinematic mount to ensure that the sample drawer is properly constrained and assumes the sample plane. This in turn allows use of a large format array at high resolutions where skew of the sample substrate and the detector surface would bring areas of the sample out of focus.
7. A slide adapter is disclosed which allows up to four array holding slides to be held, processed by a standardized automated system, and analyzed. The device holds slides in a fixed position of the holder, such that the slides do not move during analysis.
8. The present processing and control software allows parallel operation of various analytical and processing procedures. This includes the analysis and processing of previous samples as the system acquires new samples. The well-centric data structures allow specific well identification and tracking, automated or user defined well parameters, and processing of the well derived data to simplify image display and analysis. 9. The autofocus method allows simplified identification of the optimum focus position. These, among others, are the specific inventive components of the present system.
II. Software/Control/Data Processing
As with other analytical systems, this control system and software are able to perform a number of required functions, including focus, image analysis and data storage. In addition, the system would include a tiling method including the following steps. 1. Acquire an image of a sample substrate or substrates.
2. Reposition the stage so that at least some part of the substrate or substrates is in the field of view.
3. Acquire a second overlapping image. 4. Using discrete, detected spots on each image (preferably spots exceeding a selected threshold) , stitch a virtual line on each image to align the images. The intensity of the spots and the location of the spots allow confirmation that they are the same spots. This cross correlation technique allows use of the strongest, most defined signals for initial image matching.
5. Define the region of dimmest detected spots that overlap, using the virtual line for alignment. 6. Form a composite image with a stitch line at a second virtual line in a region of dim target intensity. 7. If desired, do a comparison of both bright and dim spots to normalize data, determine variability, confirm proper tiling, etc. A related idea is to analyze microarray data by stitching together the views of the multiwell plate. Once the full plate image has been created, the image is segmented into data regions from the composite, mosaic image. The data is then organized by well, with each well potentially corresponding to one array or one test condition.
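One way the bright-spot alignment check and the dim-region stitch line described above might be sketched (Python/NumPy; the 3-sigma threshold, the horizontal-adjacency assumption, and the function name are illustrative, not the system's actual routine):

```python
import numpy as np

def stitch_pair(left: np.ndarray, right: np.ndarray, overlap: int) -> np.ndarray:
    """Combine two horizontally adjacent views that share `overlap` columns.
    Bright spots (pixels above a 3-sigma threshold) in the shared region are
    used to confirm that the two views are aligned, and the seam is then
    placed at the dimmest column of the overlap so the stitch line avoids
    targets."""
    ov_left = left[:, -overlap:].astype(float)
    ov_right = right[:, :overlap].astype(float)

    thresh = left.mean() + 3.0 * left.std()
    matches = np.sum((ov_left > thresh) & (ov_right > thresh))   # same bright spots?
    total = max(np.sum(ov_left > thresh), 1)
    if matches / total < 0.5:
        raise ValueError("overlap regions do not share enough bright spots")

    seam = int(np.argmin(ov_left.sum(axis=0) + ov_right.sum(axis=0)))  # dim column
    return np.hstack([left[:, : left.shape[1] - overlap + seam], right[:, seam:]])
```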
Prescan
The present system was designed to provide higher resolution imaging of the targets as well as massively parallel target analysis. A number of elements, such as automated filter and dichroic mirror selection, further aid in providing a high throughput system capable of rapid multiplexing. This, combined with sample substrate introduction automated by using robotics, greatly extends the throughput of the system. A prescan further increases the throughput of sample scanning. Such a feature may be especially useful for non-ordered samples or if the target areas on a substrate are not known. With the present system, once the targeted area is defined, other elements of the system can be adjusted to allow for the targeted area. For example, the spatial light modulator allows illumination of a select area of the sample substrate. The detector need only record data from the area of interest.
This method requires an area array detector (e.g. a CCD) and a sample stage that is able to move at a known speed that can be coordinated with the detector's integration time.
As the sample is moved, the pixels from each integration interval are processed to form a single, low resolution image. This can be done without ever stopping the motion of the sample stage. One effect of this is that the sample target is blurred in the direction of motion. However, the degree of blur may be identified by software and, if desired, corrected for; the direction orthogonal to the motion is not blurred. The pre-scan would then be done at a low resolution defined by this image blur. However, the pre-scan would be done at a higher speed using this method. Once the targets of interest are identified, these areas could be viewed when the stage is not in motion, to provide a complete high-resolution view of the target.
For the optical analysis of multi-well plates a number of steps are needed. The local coordinates of a well must be determined. This step locates the well in relation to the global coordinates of the sample drawer. In addition, from an initial scan, user input or sample identification input (e.g. RFID, bar code ID, etc.), a number of well properties are input into the system. These include the number of wells (an integer), the type of wells (round or square), the bottom coordinates (which locate for the system the exterior surface of the plate as required for mechanical and optical alignment), the thickness of the bottom (again, required for mechanical and optical alignment), the plate code (specifically identifying the plate or sample, including the dyes to be targeted), a description of the plate for user interface display, part 11 fields, custom coordinates (a flag which, if on, enables specified fields), well one coordinates, and final well coordinates. All of this information is specified prior to a scan.
During scanning, specific information is attached to the analytical data. This includes an image identifier that is a combination of the plate code, row, and column that uniquely identifies images belonging to a specific well. In addition, the well shape (round or square), the well type (control, calibration, or experiment), and the x and y blocks (an integer which identifies the number of blocks in the array in the well, in the x and y dimensions) are recorded. In addition, the x and y block spacing, the spot diameter, the local background exclusion diameter (for autogridding, which sets the default diameter in which local background is excluded), autogridding parameters, and the x and y spot spacing (the center to center dimension for the spots in the array in the x and y dimensions) are specified. In addition, a description of part 11 fields may be included for the well. Furthermore, meta image data may be attached to images from each well. This would include a text image identifier and a parameter file identifier. The parameter file identifier is text which identifies the parameter file for this image, with focus, exposure, acquisition time, and other imaging parameters. The work flow proceeds as follows. The acquisition parameters are set, including all necessary user inputs for the data structures involved (such as well rotations, block locations, array element locations, spot size of the array elements, desired detection, statistics and methods, the imaging process options, autofocus options, auto exposure options, exposure time, fixed focus position, focus offset from a surface, etc.). Through use of an existing template structure most of this could be part of a template that the user selects, and gridding parameters could come from array descriptor files.
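The well-centric record attached to each image might be organized along the following lines (Python dataclass; all field names and defaults are illustrative, not the actual schema of the described software):

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class WellImageRecord:
    """Per-well metadata attached to acquired images (illustrative fields only)."""
    plate_code: str
    row: str                         # e.g. "A"
    column: int                      # e.g. 1
    well_shape: str = "round"        # "round" or "square"
    well_type: str = "experiment"    # "control", "calibration", or "experiment"
    x_blocks: int = 1                # number of array blocks in the well, x dimension
    y_blocks: int = 1
    spot_diameter_um: float = 100.0
    background_exclusion_um: float = 150.0
    parameter_file: str = ""         # focus, exposure, acquisition-time parameters
    image_files: List[str] = field(default_factory=list)

    @property
    def image_id(self) -> str:
        # plate code + row + column uniquely identifies images belonging to a well
        return f"{self.plate_code}-{self.row}{self.column:02d}"

rec = WellImageRecord(plate_code="PLT0042", row="C", column=7)
print(rec.image_id)   # "PLT0042-C07"
```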
An alternative work flow would be to use a pre-scan of the slide or microplate or sample substrate to help the user identify the location and dimensions of many of the parameters above. A pre-scan could yield information about spot size, grid pattern, grid size, key points on the slide or substrate to use for focus or exposure settings, and the best statistics and methods of analysis. Typically, this pre-scan would be performed at the beginning of the batch of microwell samples and the user defined parameters would then be used as nominal inputs for the rest of the microarray samples.
Images are stitched together in a well-centric manner (or block-centric manner in the case of slides), and stored in appropriate data structures. The storage of images in a well-centric manner includes defining the well geometry and excluding imaging of the area outside of the well. A central processor tracks the progress of stitching and any other data processing steps such as background correction, background subtraction, data intensity stretching, etc., and when at least one block or well is completed that block or well is then submitted by the processor to a sub-routine or module that performs the data analysis. This analysis runs concurrently with the microarray reader or scanner.
The analysis module lays down a grid over the array spots based on previously used input values. There are at least two options that the user might pre-select that would affect the subsequent work flow. 1) Autogrid alignment based on known alignment algorithms such as geometric segmentation or histogram equalization. 2) User alignment, wherein a user interface is provided to allow moving of the grid elements in whole or in part so as to best align them with the array spots.
Regardless of the path chosen above, changes to the default user input parameters are retained (i.e., "learned") and applied to the next block or well that is submitted to the analysis module. Learning need not be a linear compilation of positional changes. For example, learning may involve detection of complex array spotter errors that are pin or nozzle specific and then involve other mechanical properties of the array spotter (stage errors for example) . Learning may also involve the incorporation of normalization schemes using control spots on the array, or calibration of spot morphologies using calibration spots on the array. Learning may also be used to adjust subsequent reading parameters in order to optimize the detection of the array "on the fly" .
For analysis autogridding, using the assistance of the user defined gridding parameters and/or the array descriptor files, the analysis software will attempt to do a best fit to the data in the well. The user can interact with each well one at a time or "sync" a group of wells by rubber banding a group of wells. If wells are synced then the manual gridding tools operate on all the blocks and spots that are "in sync" (i.e., that have been defined as a group by the user). Otherwise the user can operate on one well at a time. For each well, a set of autogridding parameters is saved. If adjustments are made and a new analysis template is saved, then the autogridding adjustments become part of the new analysis template. In this manner, plates with common idiosyncrasies in terms of gridding can have their gridding adjustments inherited by newly acquired plates. At the conclusion of reading or scanning of the array there are two optional paths. 1) The analysis module may finish the remaining blocks or wells. Alternatively, 2) based on some analysis quality parameter (such as overall brightness, or control spots, for example) the reader or scanner may need to go back and read a previous portion of the array again.
This real time analysis may be performed in "batch mode" so that the analysis is being completed on one slide or plate as another is inserted into the reader by robotics and scanning commences. The disadvantage of this approach is that areas of the first slide or plate cannot be re-imaged. However, this limitation may be overcome by giving the reader or scanner the ability to read or scan a sufficient number of slides or plates such that as a new plate is being loaded, a previously read or scanned plate may be re-imaged.
Introduction
Microarray detection and analysis follows a well-defined sequence of steps: 1) Pre-scan [optional] , 2) Scan sample, 3) Grid scanned image, 4) Create report of gridded image. This sequence has been sufficient for the use of microarrays in a pure research environment. However, as microarrays move into use in drug discovery and diagnostics, the paradigm of many genes and few samples shifts to a few genes and many samples. This is the so-called array of arrays format where spots of DNA, RNA, antibodies, cells and other biological samples are spotted into microplates. Each well of the microplate can now represent a separate sample—for example from a different patient. This change of emphasis from many genes/few samples to few genes/many samples as microarrays move from basic research to diagnostics will also require changes in the accuracy, throughput and analytical capabilities of microarray systems. Decisions about patient treatment will require greater accuracy and reproducibility. Pre-clinical and clinical programs require results in hours not the days and weeks it currently takes to digest microarray data. And analysis must be faster and require less expertise. Thus, new uses of microarrays demand the highest possible accuracy, throughput, and improved analysis capabilities.
Non-real time example
An example of the current state of the art is indicated in Fig. AA. The first step is often an optional pre-scan. Pre-scans offer the opportunity for the user to get a quick look at their data, make decisions about system gain, exposure time and light level, and rubber band regions of interest where there is data. However, many of these decisions can be automated or made without a pre-scan. For example, in one embodiment, exposure time is determined on a view by view basis. A view is acquired, the pixel intensities within the view are measured, and then the cycle repeats until pixel intensities are in the desired range. The desired range is generally three or more background standard deviations above the mean background and less than saturation. More preferred is that the pixel intensities be within the range of 25% to 75% of the full scale range of the system. A similar iterative procedure is used for focusing. A view is acquired and then it is analyzed for best focus using classical image processing techniques. One such technique is a Sobel transformation. Another is to find the peak intensity of a region of interest of the view. Whatever focus metric is used, once the metric is optimized then the image is focused. Several different techniques for autoexposure and autofocus are detailed in the appendix.
Once the view is acquired with proper exposure time, focus and light level it may then be tiled. Tiling is the process of combining individual views into a single mosaic image. In order to combine views, there must be some overlap. The amount of overlap varies with the accuracy of the stages in the system. The overlap may range from one pixel up to 50% of the width of an image. In the preferred embodiment, balancing the cost of stage accuracy against the throughput of the system, the overlap region is 10% of the width of a view. Cross correlation techniques are then used to further refine the best location for a line along which to combine the views. Typically, this is achieved by first using the mechanical movement of the stages as a first approximation of the location of the line where views are to be combined. Then the cross correlation is performed to more accurately determine the line of combination of two views. Additional steps may involve the averaging of the signal levels of the pixels that are in the overlap region.
Once all views have been tiled, it is then possible to segment the image. For example, in the case of a microplate image it is desirable to segment the image into wells. The region between the wells does not contain any data. For purposes of clear presentation of data these regions may be masked with some color, or the data beneath them may be reduced in intensity to distinguish this non-data, non-well region. It is further desirable to indicate the location of wells based on some standard nomenclature. For example, a generally accepted nomenclature standard for microplate wells has been developed by the Society for Biomolecular Screening. It is based on naming the rows by alphabetical characters and the columns by numbers.
After segmentation the image may be corrected for background variations, and to equalize the intensity values from views with different exposure times. This image correction process may be quite computationally intensive and take several minutes to tens of minutes to complete for large microplate arrays at high resolution. For example, the image correction process for a microplate that is read using autoexposure settings, four microns resolution, with spots of 100 microns may take as long to complete as it takes to acquire the data in the first place. This is because the kernel for background correction image processing may be quite large in terms of the number of elements in the kernel required to cover an area in pixels that is greater than one spot diameter.
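As an illustration of why the background-correction kernel is large, a sketch using a morphological opening (one common approach, assuming NumPy and SciPy are available; the described instrument's actual correction method may differ):

```python
import numpy as np
from scipy.ndimage import grey_opening

def subtract_background(image: np.ndarray, spot_diameter_px: int) -> np.ndarray:
    """Estimate the slowly varying background with a grey-scale morphological
    opening whose window is larger than one spot, then subtract it.  The window
    must exceed the spot diameter so spots are not absorbed into the background,
    which is why this step is expensive for 100 micron spots at 4 micron pixels
    (a roughly 37 x 37 pixel window in that case)."""
    img = image.astype(np.float64)
    window = int(spot_diameter_px * 1.5) | 1            # odd-sized window, larger than a spot
    background = grey_opening(img, size=(window, window))
    return np.clip(img - background, 0.0, None)

# 100 micron spots imaged at 4 micron pixel resolution -> ~25 pixel spots
# corrected = subtract_background(raw_view, spot_diameter_px=25)
```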
At this point the image is now ready for gridding. Gridding is a process whereby either automatically or manually regions of interest for data analysis are selected. Methods for automated gridding include histogram equalization techniques and geometric techniques. In the manual gridding method, the user draws a matrix of regions of interest and locates each element of the matrix so that it fits around each of the spots in the array.
In the case of cells or beads, the gridding step is replaced with an object recognition step. In this case, objects are determined by some combination of intensity thresholding, matching of regions of interest to geometric shapes, or other method.
Finally, data for the report may be extracted from the grid. Data that may be extracted may include the mean pixel intensity of a spot, the ratio of the mean pixel intensity in one color channel to that of another color channel, median pixel intensity, standard deviation, variance, and many other statistical values.
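A minimal sketch of extracting such per-spot statistics from a gridded region of interest (NumPy; the field names and the two-channel ratio convention are illustrative):

```python
import numpy as np
from typing import Optional

def spot_statistics(spot: np.ndarray, other_channel: Optional[np.ndarray] = None) -> dict:
    """Per-spot values of the kind listed above; the ratio, when a second
    colour channel is supplied, is mean(this channel) / mean(other channel)."""
    stats = {
        "mean": float(np.mean(spot)),
        "median": float(np.median(spot)),
        "std": float(np.std(spot)),
        "variance": float(np.var(spot)),
    }
    if other_channel is not None and float(np.mean(other_channel)) > 0.0:
        stats["channel_ratio"] = stats["mean"] / float(np.mean(other_channel))
    return stats

# e.g. spot_statistics(red_channel_pixels, green_channel_pixels)
```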
The report is then analyzed by a statistical analysis package that determines the validity of the data overall, and the correlation of the data values to actual genetic activity. This is the object of the whole process.
A new generation of microarray detection and analysis is based on imaging by two-dimensional detectors, followed by tiling into a single contiguous image. One of the advantages of this type of detection is that it is well suited to both higher throughput and the new format. Two dimensional readers complete detection in contiguous blocks of data called tiles. These tiles are then put together using cross correlation algorithms to enhance the accuracy of combination. In this way, blocks of data are ready for analysis more quickly than in scanners. In addition, the tiling method is well suited to matching the format of microplates with their individual wells.
Significant savings in the overall detection and analysis time could be achieved by performing as many operations as possible in microarray detection and analysis in a parallel, real time fashion.
In addition to time savings, accuracy can be improved by allowing the user to interactively provide feedback to real time operations. It is obvious that several of the basic steps in real time analysis could be combined for purposes of optimization.
In addition to real time detection and analysis, considerable time savings, and ease of analysis can be had by storing data acquired in array of array experiments in a well centric manner.
Real Time example
In the real time example, it will be clear that significant time savings and improvements in accuracy and ease of analysis may be obtained. Referring to Fig. AB, acquisition begins with Block Acquisition. A Block is herein defined as a region of the sample where meaningful data may be extracted and analyzed. In the case of a microplate, for example, a Block is a well. Views are acquired using all the methods described above. However, views are deliberately acquired in a sequence that leads to the formation of a single Block as soon as possible. Once a block is acquired, another program thread begins to tile the views of that Block together.
It should be apparent that in place of the software term thread, other appropriate methods would be parallel processing, or coprocessors. The tiled Blocks are then stored in temporary memory for image segmentation. By temporary memory is meant either a disk storage device or actual physical memory.
The various Blocks are then segmented into well areas with appropriate masks applied for non-data regions. These segmented wells are then stored temporarily.
Image correction is then applied to the segmented wells. During the time that image correction is taking place, all the above steps may be occurring in parallel. In one example, image correction is the longest of all steps, taking as long as all other steps combined. However, it may not be apparent that there are considerable time savings to be had by doing these steps in real time. That is because there is overhead time during which nothing is happening in data acquisition. For example, it may take 3 seconds to move the stage from view to view. It may take 3 seconds to move a filter from one setting to another. Or, if autofocusing, it may take several images before one that has good focus is achieved. During these overhead times, image correction, as well as many other steps, may take place.
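A compact sketch of overlapping acquisition with block processing on a worker thread (Python; the acquire and process callables are stand-ins, and this is an illustration of the idea rather than the instrument's actual software):

```python
import queue
import threading
from typing import Callable, Iterable

def run_real_time(acquire: Callable[[object], object],
                  process: Callable[[object], None],
                  blocks: Iterable[object]) -> None:
    """Acquire blocks (e.g. wells) on the main thread while a worker thread
    tiles, segments and corrects previously acquired blocks, so that image
    processing overlaps stage moves, filter changes and autofocus overhead."""
    q: "queue.Queue" = queue.Queue()

    def worker() -> None:
        while True:
            block = q.get()
            if block is None:          # sentinel: acquisition finished
                break
            process(block)             # tile -> segment -> correct -> analyze
            q.task_done()

    t = threading.Thread(target=worker, daemon=True)
    t.start()
    for b in blocks:
        q.put(acquire(b))              # acquisition proceeds without waiting
    q.put(None)
    t.join()

# toy usage with stand-in functions
run_real_time(acquire=lambda b: f"views-of-{b}",
              process=lambda views: print("processed", views),
              blocks=["well-A1", "well-A2", "well-B1"])
```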
It should be clear that these steps may be combined in a way that is most efficient and best utilizes the resources of the system.
Auto Focus Precondition
Each auto focus test image must satisfy some exposure preconditions to ensure there is enough content on the image to calculate an auto focus metric. These preconditions are:
1) Test for Overexposure: a test image is considered overexposed if the top x percent of its pixels are at the maximum intensity. x is defined by the MaxPixelSaturation setting in the AutoFocusPrefs section. The default setting is 0.1.
2) Test for Underexposure: a test image is considered underexposed if the maximum pixel intensity is less than (2^x * maximum range). x is defined by the MinAECompensation setting in the AutoFocusPrefs section. The default setting is -3.0.
If the first image of an auto focus sequence does not pass the preconditions, the seed position is used. If subsequent test images do not meet criterion 1, the sequence is restarted at the current position. If subsequent test images do not meet criterion 2, the seed position is used.
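The two preconditions might be expressed as follows (Python/NumPy sketch; the 12-bit maximum range and the default settings mirror the values quoted above but are otherwise placeholders):

```python
import numpy as np

def exposure_preconditions(img: np.ndarray,
                           max_range: int = 4095,
                           max_pixel_saturation: float = 0.1,
                           min_ae_compensation: float = -3.0) -> str:
    """Check an autofocus test image against the exposure preconditions above.
    Defaults mirror MaxPixelSaturation (0.1 percent) and MinAECompensation
    (-3.0); returns "over", "under" or "ok"."""
    saturated_percent = float(np.mean(img >= max_range)) * 100.0
    if saturated_percent >= max_pixel_saturation:
        return "over"                   # top x percent of pixels at maximum intensity
    if img.max() < (2.0 ** min_ae_compensation) * max_range:
        return "under"                  # maximum intensity below 1/8 of full range
    return "ok"
```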
Auto Focus Algorithm
For a specific metric function, the best focus position ("in focus") is achieved at the absolute maximum of the metric.
1) Expose images at Focus Preset (x1), Focus Preset - Default Offset (x2), and Focus Preset + Default Offset (x3). The Default Offset is currently 0.05 of the full focus range.
2) Check if there is a peak defined by f(x1), f(x2), and f(x3), where f is the metric function. Vollath's F4 is currently the metric being used. A peak is defined by f(x1), f(x2), and f(x3) if f(x2) and f(x3) are both less than some percentage of f(x1). The current percentage used to define a peak is 97%. A peak is defined in this way because there are small peaks away from the absolute peak of f, and it must be determined whether the current position is at the absolute peak or not. If a peak exists, perform a quadratic curve fit on (x1, f(x1)), (x2, f(x2)), and (x3, f(x3)) to find the x where the maximum of f exists. The quadratic curve fit is performed by solving A'*X = b', where A' = A(T)*A, b' = A(T)*b, and (T) denotes the transpose. Since A*X = b, A(T)*A*X = A(T)*b, and A(T)*A is a square matrix. A'*X = b' is solved using LU decomposition. Use that x as the focus position.
3) If no peak exists, check whether f(x1), f(x2), and f(x3) are increasing or decreasing.
a. If f(x2), f(x1), and f(x3) are increasing (f(x2) < f(x1) < f(x3)), expose an image at x3 + Default Offset (x4). Remove (x2, f(x2)) and start from step 2 on the remaining points.
b. If f(x2), f(x1), and f(x3) are decreasing (f(x2) > f(x1) > f(x3)), expose an image at x1 - Default Offset (x4). Remove (x3, f(x3)) and start from step 2 on the remaining points.
4) If f(x1), f(x2), and f(x3) are neither increasing nor decreasing, check to see if a peak exists at or near one of the ends, i.e. the two points at one end are within MinDelta in y value, and the point at the other end is greater than 4 times MinDelta away. The current MinDelta is 3%. If so, expose the next image on the other side of the anticipated peak, one Default Offset step away, and start from step 2 on all the points. For example, if f(x2) - f(x1) > 4 * (f(x1) - f(x3)), then expose at x3 + Default Offset.
5) If none of the above conditions are met, then try to find the peak by exposing images at points outside the current range, and check the slopes to see if the direction of the peak can be determined. Expose images at x2 - (2 * Default Offset) (x4) and x3 + (2 * Default Offset) (x5). Find the slope between x2 and x4 (m1), and the slope between x5 and x3 (m2). a. U-Shaped Condition: If m1 and m2 are both positive or are both equal, then the points form a U-shaped curve and there is no way to determine in which direction the peak lies, so use the Focus Preset. b. Plateau Condition: If m1 and m2 are both negative, then the peak has been straddled. Of all the points, find the point xi such that xi-1 < xi < xi+1 and f(xi) is the maximum of the x's that satisfy the previous condition. Perform a quadratic curve fit on (xi-1, xi, xi+1) to find the x where the maximum of f exists, and use that x as the focus position. c. Peak to Left: If m1 > m2, then the peak is located to the left. Remove all points except x4, and start from step 1 again with x4 as the first point. d. Peak to Right: If m1 < m2, then the peak is located to the right. Remove all points except x5, and start from step 1 again with x5 as the first point. 6) If at any time one end of the focus range is reached (0 or 1), use that point in the metric calculation, i.e. try to expose an image at that focus position and calculate the metric. If an end is reached more than three times, return the limit reached.
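A sketch of the core of this procedure, the Vollath F4 metric and the three-point quadratic peak fit (Python/NumPy; np.linalg.solve is used for the normal equations, which internally performs an LU factorization, and the sample values are invented):

```python
import numpy as np

def vollath_f4(img: np.ndarray) -> float:
    """Vollath's F4 autocorrelation focus metric (one common formulation):
    larger values indicate a sharper image."""
    img = img.astype(np.float64)
    return float(np.sum(img[:-1, :] * img[1:, :]) - np.sum(img[:-2, :] * img[2:, :]))

def quadratic_peak(xs, fs) -> float:
    """Fit f = a*x^2 + b*x + c through three (x, f(x)) samples by solving the
    normal equations A'X = b' with A' = A^T A and b' = A^T b, then return the
    vertex position -b/(2a)."""
    xs = np.asarray(xs, dtype=np.float64)
    fs = np.asarray(fs, dtype=np.float64)
    A = np.column_stack([xs * xs, xs, np.ones_like(xs)])
    a2, a1, _ = np.linalg.solve(A.T @ A, A.T @ fs)
    return float(-a1 / (2.0 * a2))

# metric values at the Focus Preset and at +/- the Default Offset;
# f(x2) and f(x3) are below 97% of f(x1), so a peak is declared (step 2)
xs = [0.50, 0.45, 0.55]
fs = [1.00, 0.93, 0.91]
print(quadratic_peak(xs, fs))   # ~0.497: refined focus position
```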
Mapping
Auto Focus Options
Most images acquired by ArrayEase are composed of multiple fields of view stitched together in a rectangular array to create a larger image. Although an image can be set up to auto focus, it does not mean that every field of view (View) will be auto focused. The Auto Focus Option determines which Views will be auto focused. This is a batch setting that applies only to the images set up with auto focus. If multiple filter sets are to be acquired for the same sample, each View to be auto focused will be auto focused for each filter set. Calculations such as plane fitting are done independently on each filter set.
Full
Every View is auto focused.
Sampling
A percentage of Views will be auto focused. The percentage depends on the Adaptive or Percentage settings. These auto focus Views are spread evenly across the sample. All Views are acquired in the same order. Views that are not to be auto focused will use the previously calculated focus position. After all the Views are acquired, a plane is fitted using the auto focused positions, if auto focus was successful. Views that deviate from the plane by more than the FocusDeviationFactor (set in the Def file) will be reacquired.
Adaptive - The first View of each strip (in the longer axis) will be auto focused.
AFX - auto focused View; X - uses position found by AFX
Percentage (50%, 40%, 30%, 20%, 10%) - Defines the percentage of views to be auto focused.
Sampling with Adjustment
This option is similar to Sampling above. A percentage of Views will be auto focused. The percentage depends on the Adaptive or Percentage settings (same settings as above). These auto focus Views are spread evenly across the sample, and all Views are acquired in the same order. Views that are not to be auto focused will use the previously calculated focus position. After a minimum number of Views have been auto focused (currently set to 6), a plane fit is attempted after each auto focused View. If all auto focused positions fall within the FocusDeviationFactor (set in the Def file) from the plane, auto focusing is stopped and the rest of the Views will use focus positions from the plane fitting. After all the Views are acquired, any View that deviates from the plane by more than the FocusDeviationFactor will be reacquired.
Pre-Sampling
A percentage of Views will be auto focused. The percentage and location depends on the Adaptive or Percentage settings. Auto focused Views are imaged first. A plane is fitted to those positions. The rest of the views (and any auto focused View if it is more than a FocusDeviationFactor away from the plane) are acquired.
Adaptive - Auto focus ten Views: 2 in each corner and 2 in the center along the long axis. (X denotes an auto focus View)
Percentage (20%, 10%, 5%, 1%, 0.5%) - Defines the percentage of views to be auto focused. These Views are spread evenly across the sample.
Pre-Sampling with Adjustment
This option is similar to Pre-Sampling above. A percentage of Views will be auto focused. The percentage depends on the Percentage setting (note that Adaptive is not an option) . These auto focus Views are spread evenly across the sample. Auto focused Views are imaged first. A plane is fitted to those positions. The rest of the views (and any auto focused View if it is more than a FocusDeviationFactor away from the plane) are acquired. After a minimum number of Views have been auto focused (currently set to 6) , a plane fit is attempted after each auto focused View. If all auto focused positions fall within the FocusDeviationFactor (set in the Def file) from the plane, auto focusing is stopped and the rest of the Views will use focus positions from the plane fitting. After all the Views are acquired, any View that deviates from the plane by more than the FocusDeviationFactor will be reacquired.
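The plane fitting and FocusDeviationFactor check used by the Sampling and Pre-Sampling options might look like the following sketch (Python/NumPy; the interpretation of FocusDeviationFactor as an absolute focus-range tolerance is an assumption):

```python
import numpy as np

def fit_focus_plane(xy: np.ndarray, z: np.ndarray):
    """Least-squares fit of z = a*x + b*y + c to auto focused stage positions
    (xy is N x 2 stage coordinates, z is the N focus positions found by AF)."""
    A = np.column_stack([xy[:, 0], xy[:, 1], np.ones(len(z))])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs                                   # (a, b, c)

def views_to_reacquire(xy, z, coeffs, focus_deviation_factor: float):
    """Indices of Views whose focus deviates from the fitted plane by more
    than the FocusDeviationFactor."""
    a, b, c = coeffs
    predicted = a * xy[:, 0] + b * xy[:, 1] + c
    return np.flatnonzero(np.abs(z - predicted) > focus_deviation_factor)

# example: nine sampled Views on a slightly tilted sample, one outlier
xy = np.array([[x, y] for x in (0, 1, 2) for y in (0, 1, 2)], dtype=float)
z = 0.01 * xy[:, 0] + 0.02 * xy[:, 1] + 0.5
z[4] += 0.1
coeffs = fit_focus_plane(xy, z)
print(views_to_reacquire(xy, z, coeffs, focus_deviation_factor=0.05))   # [4]
```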
Sample
The first View of every sample is auto focused.
The calculated focus position is used for the rest of the sample.
Batch
Only the first View of the entire batch is auto focused. The calculated focus position is used for the rest of the batch. This option is not available for the Plate since Sample does the same thing.
AlphaArray Autofocus (AF) Mapping
Due to the large number of views and sample planarity issues, a low-overhead method to perform accurate focus is required.
The solutions for sample mapping fall into two classes. 1) Static (pre-acquisition) and 2) Dynamic (during acquisition) .
In the static class, a set of focus positions is determined prior to acquisition and can be derived from either a preview image or some sub-sampled image of the acquisition ROI. Once this set of focus positions is determined, the position for all views can be determined by various means and acquisition can begin. In the dynamic class, focus is determined during acquisition.
The proposed development path has three stages to full implementation of the most stable and robust formulation using the dynamic class method with an option to further develop the static class as a final formulation.
The first stage is a two option solution.
1) AF on the first view of each strip and use the focus position for the rest of the strip. This will result in a 6-fold reduction (20 AF calculations for a full slide, fewer for a smaller acquisition ROI) in the number of focus cycles required compared to full AF. This method is not amenable to plates (the strips are too long).
2) AF every "X" views, aka the sampling period method. AF is performed every "X" views and the next views use that focus position. The once-per-strip is a special case of this method. This method will work with plates. The second stage is to add consistency checks to the set of focus positions. Focus positions that are significant deviations from the expected are then refocused and reacguired. A similar type of acquisition is already supported for exposure times that are much different (ReExposeEnabled) in the autoexposure class. There are several schemes to check for consistency (e.g. deviation from a plane, or line, relative to depth of field) . The third stage is to add an adaptive feature that checks the set of determined focus positions for internal consistency to enable accurate curve fitting. Once a sufficiently accurate set of focus positions is collected and a surface model is fit the subsequent focus positions are derived from the fit. In essence the sampling period method proceeds with AF until enough points are in hand to extrapolate from a surface fit and then switch to basically a manual focus mode using the surface fit to determine the focus positions.
Autoexpose Algorithm - Simplified
There are three distinct phases to determining the final exposure time. 1) Boost phase: Characterized by low signal levels. 2) Calculation phase: Signals above a predetermined threshold.
3) Validation phase: Final testing of signal and exposure time.
A test image is acquired and measurements of signal levels are used to determine the course of action. Test images are typically filtered as set by the NoiseFilterEnable flag in the preferences. The final images are never filtered based on the NoiseFilterEnable flag.
Boost Phase
In the boost phase the exposure time is increased by a calculated factor and a new test image is acquired. The first test image is the seed image and SeedTime is the exposure time for the first test image.
NewTime = OldTime * (DataMax - Black) / (ImageMax - Black) * Threshold    (EQN 1)
OldTime is the test image exposure time. DataMax is the camera's maximum signal level (e.g. 4095 for 12 bit cameras). ImageMax is the test image maximum intensity. Threshold is a value set in the preferences ("AutoExposeLevel").
Black is the bias level that is read from the bias calibration file (average value). A check on ImageMax should be made: if ImageMax <= Black then ImageMax = Black + 1.
"Boosting" continues until (ImageMax- Black) >Threshold* (DataMax-Black) .
Calculation Phase
In the calculation phase signal levels are above the Threshold and an accurate prediction of final exposure levels can be made.
FinalTime = OldTime * (DataMax - Black) / (ImageMax - Black + Noise) * 2^EV    (EQN 2)
EV is the exposure level compensation factor set by the user. Noise is a noise level based on well capacity and photon noise.
SystemGain is the camera system gain setting (electrons per ADU) .
Noise = sqrt[SystemGain * (ImageMax - Black)] / 2    (EQN 3)
Validation Phase
Validation is a user selectable configuration option. Once a FinalTime is determined the signal levels are to be validated based on this option. A test image is acquired and signal levels are verified to be below saturation and above a lower limit set by preferences (for example, 0.85 * [DataMax - Black]).
Test image acquisition: TestTime = FinalTime * 2^-EV    (EQN 4)
Validation condition: 0.85 <= (ImageMax - Black) / (DataMax - Black) < 1.0
If the validation condition is true, then use FinalTime to acquire the final validated image.
If the validation condition is false and Threshold < (ImageMax - Black) / (DataMax - Black) < 0.85, then use EQN 2 and repeat the Validation Phase.
If the validation condition is false and (ImageMax - Black) / (DataMax - Black) >= 1.0, then use SeedTime = TestTime / 2.0 and repeat the Boost Phase.
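A sketch of the Boost and Calculation phases (EQN 1-3) in Python; the acquire callable, the parameter defaults and the toy camera are placeholders, and the Validation phase is omitted for brevity:

```python
import numpy as np

def autoexpose(acquire, seed_time, data_max=4095, black=100.0,
               threshold=0.5, ev=0.0, system_gain=2.0, max_boosts=10):
    """Boost the exposure time until the peak signal exceeds the Threshold
    (EQN 1), then predict the final exposure time from the noise-compensated
    scaling (EQN 2, EQN 3).  `acquire(t)` returns a test image exposed for t."""
    time = float(seed_time)
    for _ in range(max_boosts):
        image_max = max(float(acquire(time).max()), black + 1.0)   # ImageMax > Black guard
        if (image_max - black) > threshold * (data_max - black):
            break                                                   # boosting finished
        time *= (data_max - black) / (image_max - black) * threshold          # EQN 1
    noise = np.sqrt(system_gain * (image_max - black)) / 2.0                  # EQN 3
    return time * (data_max - black) / (image_max - black + noise) * 2.0 ** ev  # EQN 2

# toy camera whose peak signal grows slightly faster than linearly with time
fake_acquire = lambda t: np.array([[min(150.0 * t + 5.0 * t * t + 100.0, 4095.0)]])
print(autoexpose(fake_acquire, seed_time=0.01))    # final exposure time estimate
```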
III. Additional Features/Comments
If live cells are used in the assay, preferably the cells are kept at a condition at which the cells remain alive during the course of the assay. Using a bottom imaging, epi-fluorescent configuration allows the samples and the optics to be physically isolated from each other. This in turn provides the ability to control the environment of the cells and potentially change the condition of the cells mid-assay. This can be done in one of three ways.
1. Robotics
A separate, off the shelf incubator and robotic sample feeder could be used. These are commercially available, standardized, and adaptable to the present reader. The present reader is able to image an entire plate at once in a relatively short time period. This means that the samples in a microplate would spend minimal time outside the controlled environmental chamber. This add-on solution allows flexibility for users, who could separately configure the optical system and the environmental system.
2. Self Contained Sample Cartridge
A second solution would be to use a microplate contained in a cartridge that includes a microelement for temperature regulation and onboard fluidic circuits and gas environmental controls.
3. Environmental Control as Part of the System
Standardization of microplates and advances in robotics have led to the development of a variety of devices adapted for use with microplates. These include transfer devices (such as pipetters) that seal over wells of the microplate, preventing cross contamination through aerosol droplets of dispensed reagents. Such a device could be attached to reagent reservoirs or gas sources to feed the cells. In addition, a heating element used with each channel feeding each well would allow temperature control of the well. In any of the above three implementations, the bottom scanning configuration, the ability to detect individual cells, and the ability to image rapidly enable the live cell assay and provide enhanced functionality. This includes kinetic measurements. Presently, image processing algorithms are capable of providing size and shape characteristics of cells as well as dye concentration, relative fluorescence intensities and other imaging parameters. These in turn can be used to identify artifacts, including contaminants, debris, coincidence of cells, and out of focus cells or other targets.
In the illustrated embodiment, for cells in solution (which tend to be spherical), a pixel resolution in the object plane of slightly less than half of the diameter of the cell is sufficient to classify the object as a cell. Adherent cells, which tend to have irregular shapes, tend to be elongated and hence longer in one dimension. The resolution of the system described in one embodiment is able to classify both of these cell types as individual cells and discriminate cell cytoplasm from the nucleus.
The present system, components, and methods combine to provide high versatility: the system can analyze reporter assays, is adaptable to analysis of newly developed dyes, can evaluate the morphology of targets (beads, cells, etc.), and can perform kinetic assays, high content assays, and more.
IV. Examples of Applications and Test Results
The described systems and methods are adaptable to a number of different applications including the following:
Example 1: Cell arrays
Much remains to be learned about the function in living cells of proteins encoded by genes. Systematic, large-scale screening, confirmed by targeted sets of secondary assays, provides a validated approach to leveraging genomic data.
In leveraging genomic data, certain limitations in research instrumentation have become apparent. One current bottleneck is the speed of cellular assays. High throughput analytical techniques using cells remain laborious, expensive, and generally limited to certain types of functional assays. The assay of cellular function over time (kinetic assays) remains an even more difficult obstacle.
New cell-based assays that are emerging have the potential to accelerate studies of protein functions and the effects of small molecules on cellular function. Such assays are difficult to implement on a large scale because of a lack of adequate detection systems. The lack of proper detection systems is due in part to the complex requirements of living cells. The viability and function of such cells is adversely affected by variations in temperature, air composition, and humidity, among other variables. Additionally, cells grow relatively slowly and can be difficult to manipulate during an assay. Many cellular detection systems are sample destructive, prohibiting the ability to measure living cells over time (kinetic assay). New methods for detecting proteins within living cells in a high throughput system are needed by a large number of research applications. Cell arrays are one cell-based application where the existence of a better detection device can expand the opportunities for research. A cell array consists of an array of plasmids (e.g., from a plasmid library) bonded to a glass slide and then transduced into cells. This creates an array where each spot location consists of a living cell that potentially over expresses a gene from a defined plasmid.
A number of different assays, including immunologic, histochemical, and functional assays, may be adapted to the cell array format. Using fluorescent labels with a binding agent allows great flexibility, relatively low cost, and the advantage of a known reagent technology. With the optimized vector and promoter combination, very high levels of expression can be achieved for biochemical and functional detection. Cell arrays can be constructed in industry standard 8x12 centimeter multiwell plates. The current need is to have an optical analysis system which can efficiently detect and analyze such plates. When the spot with a desired signal is detected, its position allows identification of the gene that produced the signal.
In designing array substrates for such assays, the libraries selected for this array may be composed of two fundamentally different types: 1) a library in which each spot represents a unique gene (e.g., in an extreme example, an entire genome), or 2) a library in which each spot represents a different mutation of the same gene. By analogy to microarray analysis, it is also possible to use either of the two types of libraries above in an array format such that each cell array spot, which consists of a monolayer of many cells spread over a region, may be underlaid by an array of libraries. Thus, each cell array spot is composed of a monolayer of cells plus an underlying array of many spots taken from the library types above. Such a format may be more useful for the study of protein-protein interactions, cell-cell interactions such as occur in biological tissues, and for the study of complex kinetics where a compound mediates the expression of a protein.
Extending this concept, the array underlying the cell array spot so defined may also be an array of compounds designed to be inhibitors or mediators of cellular receptors.
In another embodiment of the above example of cell arrays, beads may also be incorporated as controls or calibration means indicating size or fluorescence intensity, or as ligands for cellular binding. One such class of ligands would be monoclonal antibodies which are attached to the surface of the beads and are specific for certain cellular receptors or proteins. Said beads as ligands might be used with cell arrays as agonists or antagonists of cellular receptors, for measurements of affinity or competition, or to produce up-regulation or down-regulation of cellular receptors. The fundamental difference between beads so used and the array underlying the cell array spot is that beads are mobile and may be concentrated, whereas the array elements underlying a cell array spot are fixed and non-mobile with a fixed concentration in relation to the cells.
The cell array is one cellular application where the existence of a better detection device will facilitate better data. Cell arrays are currently being used for the study of HIV Envelope and other genes with therapeutic and vaccine potential. In a typical assay, plasmids are arrayed, cells are transfected on the array, and the cells are fixed and stained to visualize gene expression. A representative cell array was imaged with dsRED expressed in an array of cells. The cell array pictured in Fig. B1 is composed of a monolayer of cells, but only cells that settle on a particular spot of DNA become transfected with the fluorescent reporter. Each of the spots on the array represents a cluster of 100-200 cells (Figs. B2 and B3).
Currently, a major limitation of the cell array is the absence of an affordable detection device capable of assaying living cells with sufficient resolution for cell array analysis. For example, arrays to date have been fixed, stained, coated with preservative, sealed under glass coverslips, and scanned using a laser-based detector where the slide must be dry, positioned within a fixed slide holder, and inserted into the machine through a narrow slot. It is not possible to keep cells alive during the process.
In the present examples, an area illuminator combined with an area array detector allows rapid analysis of fairly large areas (multiple wells) in a single view. In the detailed embodiments, broad spectrum illuminators are used rather than lasers. The practical result of this difference is that detection of samples of variable height, unusual dimensions, or with unusual environmental requirements can be readily accommodated.
A major benefit of being able to visualize genes within living cells is the ability to measure the kinetics of their expression over time, a major goal of many cell-based assays. In order to test whether this application was possible, we used a cell array expressing fluorescent proteins under the control of CMV promoters, and assayed expression of the array over time (Fig. B4, lower panels). Results indicate that kinetic measurement of expressed proteins within living cells is possible. Moreover, we were able to detect differences in expression using a promoter enhancer (sodium butyrate), suggesting that promoter and transcription factor research could benefit from this technology. Each spot on our test array expressed one of two reporter proteins, but could just as easily have expressed a library of different proteins. A different fluorescent marker or fusion protein could also be used, as could different promoters, transcription factors, or cell types, depending on the scientific inquiry.
In one experiment, GFP-expressing cells were serially diluted with non-expressing cells to obtain from 100,000 to 1,500 green cells per microtiter plate well (all wells contained 100,000 cells total). Imaging of the wells using one implementation of the optical reader revealed that as few as 1,500 cells could still be visualized (Fig. B5, left panel). When quantified, the presence of 1,500 green cells represented >70,000 positive pixels compared to the adjacent well that contained no green cells (Fig. B5, center panel). Wells containing 3,000 and 6,000 cells could also be detected, with over 100,000 positive pixels per well. By comparison, when the fluorescent signal from the entire well is averaged, as a conventional microtiter plate fluorometer would do, wells must contain >10,000 cells to elicit even a minimal signal (Fig. B5, right panel).
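The positive-pixel readout described above can be illustrated with a short sketch (not part of the original disclosure; the well geometry, background model, and threshold choice are assumptions): pixels in each well region that exceed a background-derived threshold are counted.

import numpy as np

def count_positive_pixels(image, well_slices, n_sigma=5.0):
    """Count above-background ("positive") pixels in each well region.

    image       : 2-D NumPy array of fluorescence intensities.
    well_slices : dict mapping a well name (e.g. "A1") to a (row_slice, col_slice)
                  pair covering that well in the image.
    n_sigma     : threshold in robust standard deviations above background.
    """
    results = {}
    for well, (rows, cols) in well_slices.items():
        region = image[rows, cols].astype(float)
        background = np.median(region)                            # robust background estimate
        noise = 1.4826 * np.median(np.abs(region - background))   # MAD scaled to sigma
        threshold = background + n_sigma * noise
        results[well] = int(np.count_nonzero(region > threshold))
    return results

# Illustrative usage with a synthetic two-well image.
rng = np.random.default_rng(0)
img = rng.normal(100, 5, size=(200, 400))   # background only
img[40:60, 240:260] += 300                  # bright "cells" in well A2
wells = {"A1": (slice(0, 200), slice(0, 200)),
         "A2": (slice(0, 200), slice(200, 400))}
print(count_positive_pixels(img, wells))    # A2 >> A1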
To discriminate single cells, an analyzer with 4 μm resolution was used to image cells stained with a cytoplasmic and a nuclear dye. Imaging of both dyes and merging of the images demonstrated that this reader was not only capable of distinguishing individual cells, but also of resolving nuclear from cytoplasmic staining (Fig. B6). To demonstrate its capabilities further, increased resolutions of 0.75 μm and 0.35 μm were also achieved.

Example 2: High Content Screening

There is additional interest in both academic and industry settings in high content analytical systems. High content analysis can identify functional associations between cellular events by screening for correlations of morphology, cellular localization, and event timing. Achieving these goals allows much more information to be derived from any specific individual analytical event. Imaging of an entire well bottom surface maximizes the usable area for cell arrays. The microplate format capability increases throughput by testing multiple conditions in parallel.
High content detection systems offer a fundamental advantage in both resolution and field of view, typically by imaging intercellular and intracellular features across a large surface area. In addition, such devices offer dramatically improved sensitivity by scoring individual cells as events. For example, a microplate well containing 100,000 cells, of which 1,000 are fluorescent, may average a signal of just 1%. However, if individual cells are scored, each cell is measured as a discrete event, increasing the fluorescence over the background and providing a statistically relevant sample group (e.g., 1,000 sample points rather than just one).
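A simple simulation makes the contrast concrete (the cell counts and intensity values below are illustrative assumptions, not measured data): a whole-well average barely registers 1,000 bright cells among 100,000, whereas per-cell scoring yields roughly 1,000 discrete events.

import numpy as np

rng = np.random.default_rng(1)

# Assumed per-cell intensities: 99,000 non-fluorescent cells at background
# level and 1,000 fluorescent cells at twice background (illustrative values).
background_cells = rng.normal(100.0, 5.0, size=99_000)
fluorescent_cells = rng.normal(200.0, 10.0, size=1_000)
all_cells = np.concatenate([background_cells, fluorescent_cells])

# Readout 1: whole-well average, as a conventional plate fluorometer reports.
# The 1,000 bright cells raise the mean by only about 1%.
well_average = all_cells.mean()

# Readout 2: per-cell event scoring -- count cells above a threshold set
# well above the non-fluorescent population.
threshold = background_cells.mean() + 10 * background_cells.std()
events = int(np.count_nonzero(all_cells > threshold))

print(f"whole-well average      : {well_average:.1f} (background ~100)")
print(f"scored fluorescent cells: {events} discrete events")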
In the present examples, modification for higher resolution simply requires selection of appropriate lenses. The targeted area could then be analyzed at intracellular resolutions.
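As a rough guide to how lens selection sets resolution (a sketch with assumed numbers, not parameters from this disclosure), the object-side pixel size is the detector pixel pitch divided by the lens magnification, and a feature should span at least about two pixels to be resolved; the 10 to 20 pixels per discrete target recited in the claims corresponds to comfortable oversampling.

# Minimal sampling sketch: object-side pixel size and pixels across a target
# for a few assumed lens magnifications (all numeric values are illustrative).

def object_pixel_size_um(detector_pitch_um: float, magnification: float) -> float:
    """Size of one detector pixel projected onto the sample plane."""
    return detector_pitch_um / magnification

def pixels_across(target_um: float, detector_pitch_um: float, magnification: float) -> float:
    """How many detector pixels span a target of the given size."""
    return target_um / object_pixel_size_um(detector_pitch_um, magnification)

detector_pitch = 9.0          # assumed CCD pixel pitch, micrometers
cell_diameter = 15.0          # assumed cell-sized target, micrometers

for mag in (0.5, 2.0, 10.0):  # assumed lens magnifications
    px = object_pixel_size_um(detector_pitch, mag)
    print(f"magnification {mag:4.1f}x -> {px:5.2f} um/pixel, "
          f"{pixels_across(cell_diameter, detector_pitch, mag):5.1f} pixels per cell")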
Example 3: Low Level Gene Expression
Cancer is a major public health problem. Worldwide, more than 6 million people die from cancer each year and more than 10 million new cases are detected. In developed countries, cancer is the second leading cause of death.
Lung cancer is the leading cause of cancer mortality worldwide, in both the developed and developing worlds. In the United States, 169,400 new cases and a staggering 154,900 deaths were expected in 2002. Lung cancer accounts for 28% of all cancer deaths. More patients die from lung cancer than from breast, colon, and prostate cancers combined. The five-year survival rate for lung cancer detected early enough to still be localized is 48%, while the overall five-year survival rate for lung cancer is 15%.
Advances in the detection and treatment of this disease have not resulted in significant improvement in mortality rates. In the last 50 years, lung cancer incidence increased by 249% and mortality by 259%.
Currently, people at risk of lung cancer undergo chest X-ray or spiral CT. Chest X-ray is capable of detecting tumors 1-2 cm in size, and CT can detect peripheral tumors smaller than one cm. Most lung cancers are detected in advanced stages (Stage II and greater), but the only patients who achieve long-term survival are the minority diagnosed with stage 0 or I disease. Clearly, alternative approaches to improve lung cancer screening and early detection of lesions likely to progress to malignancy are desperately needed.
The clinical classification of cancer and its precursors is currently based on phenotype markers, such as nuclear-to-cytoplasmic ratio and extent of invasion, which are then used to select therapy. In the last decade, enormous progress has been made to understand the molecular events that accompany carcinogenesis. The availability of technologies for detecting cancer associated molecular alterations has led to identification of molecular markers for cancer and the associated processes they modulate. Molecular signatures of carcinogenic change include altered DNA copy number at the level of genomic DNA or altered gene expression at messenger RNA (mRNA) or protein levels.
Altered gene expression may be observed as inappropriate in tissue (space), time, or level. The known hallmarks of cancer cell biology include loss of proliferation control as characterized by changes in cell cycle, cell cycle checkpoint function, apoptosis, angiogenesis, and other signaling pathways. Assays that detect molecular alterations associated with cancer progression may be useful in a number of contexts including early diagnosis, classification of cancer subtype, predicting efficacy of therapy, staging of disease progression and prognosis.
Cancer results from an accumulation of key mutations in expanding clones of cells originating from tissue-specific stem cells. Past research indicates that gene alterations observed in tumor tissue may not be causative for cancer, yet may still be useful for prognostic staging. In light of the complexity and heterogeneity of molecular events underlying cancer, it seems unlikely that we will soon have a definitive handful of critical diagnostic or prognostic assays. Rather, parallel evaluation of multiple genes may remain a critical tool for clinical evaluation for some time to come. Therefore, systems capable of providing sensitive and robust detection of multiple factors can provide sophisticated approaches to early diagnosis and guidance for treatment. Based on current knowledge, one can describe some of the features that would be desirable for cancer detection systems (see Table 1).
The recent availability of the human genome sequence, and the development of high-throughput genomic technologies and methods for isolating selected cell populations from fresh, frozen, and fixed tissue have introduced unprecedented opportunities for defining in precise genetic terms how human cancers develop. This information will provide the basis of more objective methods for diagnosing, classifying, and staging many neoplasms. It will also revolutionize cancer treatment through the discovery of disease-specific molecular targets. Even more importantly, it should reveal changes in early lesions that can serve as molecular targets for novel prevention strategies.
Table 1. Desirable features in Systems for Detection of Low Expression Prognostic Genes
The availability of the human genome sequence has spurred enormous interest and investment worldwide in the use of gene expression analysis to identify differences in gene expression levels between normal and cancerous cell populations. There are many promising multiplex methods to analyze gene expression, including gene expression microarrays and serial analysis of gene expression (SAGE) . These techniques have enabled measurement of the expression of thousands of genes, revealing many new, potentially important cancer genes. In addition, quantitative PCR (Q-PCR) can be used to quantitate changes in mRNA level and detect DNA polymorphisms while comparative genomic hybridization (CGH) reveals changes in DNA copy number.
Various approaches for gene expression microarrays have been described. One of the most widely used approaches spots nucleic acids representing individual genes onto a specially coated, microscope-slide-sized microarray substrate. In this approach, two mRNA samples are converted to complementary nucleic acid (cDNA or cRNA) and fluorescently labeled with Cy3 or Cy5. The samples are co-hybridized to the microarray, and differences in gene expression between the two samples are detected by changes in the fluorescence ratio between the two dyes.
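The two-dye readout just described reduces, per spot, to a log2 ratio of background-subtracted Cy5 and Cy3 intensities; the sketch below is illustrative only, and the spot values are invented for demonstration.

import math

def log2_ratio(cy5_signal, cy5_background, cy3_signal, cy3_background):
    """Per-spot log2(Cy5/Cy3) after background subtraction.

    Returns None when either background-subtracted channel is non-positive,
    since the ratio is then undefined.
    """
    cy5 = cy5_signal - cy5_background
    cy3 = cy3_signal - cy3_background
    if cy5 <= 0 or cy3 <= 0:
        return None
    return math.log2(cy5 / cy3)

# Illustrative spots: (Cy5 signal, Cy5 bg, Cy3 signal, Cy3 bg)
spots = {
    "gene_A": (4000, 200, 1000, 180),   # roughly 2 log2 units up
    "gene_B": (1100, 200, 1050, 180),   # essentially unchanged
    "gene_C": (260, 200, 2200, 180),    # strongly down
}
for name, values in spots.items():
    ratio = log2_ratio(*values)
    print(f"{name}: log2(Cy5/Cy3) = {ratio:+.2f}" if ratio is not None
          else f"{name}: ratio undefined")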
The ability to spot thousands of genes onto a substrate makes microarrays well suited for scanning gene expression across the entire genome. Even with prior knowledge of gene sequences, producing spotted microarrays presents substantial design, logistical, and manufacturing challenges. In practice, this has limited the number of researchers independently printing microarrays. Additional challenges for gene expression microarrays arise from the proportionality of output fluorescent signal to mRNA abundance and the complexities of nucleic acid hybridization. In vivo, RNA transcript abundance ranges over 10^5-fold. Additionally, relatively few genes are expressed at high copy numbers, and the majority of genes are expressed at low copy numbers. Thus, the regulatory genes likely to be of interest for cancer studies can be expected to generate low fluorescence. Failure to achieve equilibrium hybridization may further limit fluorescent signal. Nonetheless, gene expression microarray studies have made profound contributions to our understanding of cancer biology.
Serial Analysis of Gene Expression (SAGE)
SAGE enables detection of quantitative differences in gene expression across multiple samples without prior knowledge of the genes expressed. SAGE describes the relative abundance of transcripts in an mRNA population by enumerating the number of copies of each mRNA represented by unique sequence tags. SAGE has been validated in numerous studies, including several focused on cancer. Numerous improvements to the procedure have resulted in robust, up-to-date protocols (www.sagenet.org). Furthermore, the recent development of microSAGE has extended the utility of this technology to analyzing minute lesions. Combining microarrays and SAGE enables the measurement of specific known genes as well as the unbiased detection of genes not printed on the microarray.
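At its core the SAGE readout is a table of tag counts; the short sketch below (the tag sequences are invented for illustration) shows how relative transcript abundance follows from counting occurrences of each unique tag.

from collections import Counter

# Hypothetical 10-bp SAGE tags extracted from sequenced concatemers.
tags = [
    "GTGACCACGG", "TTGGGGTTTC", "GTGACCACGG", "CCCATCGTCC",
    "GTGACCACGG", "TTGGGGTTTC", "GTGACCACGG", "AGGCTACGGA",
]

counts = Counter(tags)            # copies of each unique tag
total = sum(counts.values())

for tag, n in counts.most_common():
    print(f"{tag}: {n} copies ({n / total:.1%} of tags)")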
Over 50 SAGE libraries have been constructed to describe the early stages of the neoplastic process, including over 45 SAGE libraries of non-small cell lung cancer and normal bronchial and lung tissues. These SAGE libraries represent the largest disease-specific SAGE data set constructed to date. Currently, this is a resource of over six million tags, with 400,000 unique tags.
A challenge for applying SAGE methodology to detect low expression genes is the need to sequence pools of sequence tags in which tags representing highly expressed genes occur much more frequently.
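This sampling problem can be made concrete with a back-of-the-envelope calculation (the abundances below are assumptions): under independent sampling, the probability of observing a given tag at least once among N sequenced tags is 1 - (1 - p)^N, where p is the tag's relative abundance, so rare transcripts demand very deep tag sequencing.

def p_detect(abundance: float, tags_sequenced: int) -> float:
    """Probability of seeing a tag at least once, assuming independent sampling."""
    return 1.0 - (1.0 - abundance) ** tags_sequenced

# Assumed relative abundances: an abundant transcript vs. a rare regulatory one.
for p in (1e-3, 1e-5):
    for n in (10_000, 100_000, 1_000_000):
        print(f"abundance {p:.0e}, {n:>9,} tags sequenced -> "
              f"P(detected) = {p_detect(p, n):.3f}")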
Array-Based Comparative Genomic Hybridization (CGH)

Until recently, localized deletion mapping using microsatellite markers represented the highest resolution method available to localize potential tumor suppressor genes. However, new approaches based on the use of genomic microarrays have been developed. To achieve high resolution, Pollack et al. made use of cDNA microarrays for analyzing genomic DNA-derived probes. However, this approach is hampered by suboptimal hybridization, because the genomic DNA used as probe contains introns that are absent from the spotted cDNA target. Complementary to these types of analyses, CGH allows the detection of segmental copy number changes.
Array CGH or matrix CGH offers high resolution for genome-wide detection of chromosomal alterations. This technique detects gain (or loss) of chromosomal regions through competitive hybridization of probes generated from tumors/preneoplastic lesions and reference (normal) genomic DNA to a microarray of specific chromosome segments, e.g. BAC DNA.
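A minimal sketch of how such competitive hybridization data are typically scored follows (the log2-ratio thresholds and per-locus intensities are illustrative assumptions, not values from this disclosure): each arrayed segment is called as gained, lost, or unchanged from the tumor-to-reference ratio.

import math

def call_copy_number(tumor_signal: float, reference_signal: float,
                     gain_threshold: float = 0.3, loss_threshold: float = -0.3) -> str:
    """Classify one locus from its tumor/reference log2 ratio."""
    ratio = math.log2(tumor_signal / reference_signal)
    if ratio >= gain_threshold:
        return "gain"
    if ratio <= loss_threshold:
        return "loss"
    return "no change"

# Illustrative BAC loci: (tumor channel, reference channel) intensities.
loci = {"BAC_017": (1500.0, 1000.0),   # ~1.5x -> gain
        "BAC_042": (520.0, 1000.0),    # ~0.5x -> loss
        "BAC_088": (980.0, 1000.0)}    # balanced
for name, (t, r) in loci.items():
    print(name, call_copy_number(t, r))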
Several groups have made use of array CGH to analyze small chromosomal regions in detail. However, for whole genome analysis, the most comprehensive attempts to date are a BAC array containing 2460 loci, covering less than 10% of the genome, and a commercial system from Spectral Genomics with 1300 loci (www.spectralgenomics.com).
Limitations of Current Microarray Detection Technology

Certainly, existing microarray readers have generated impressive research results. Broadly, the goal of these detection devices is quantitation of integrated fluorescent signals from microarray spots containing biomolecules of interest. The general workflow for such devices includes acquiring an image of emitted fluorescence, gridding the spots, and integrating fluorescence by statistical analysis of pixel values.
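The grid-and-integrate step of that workflow can be sketched in simplified form (the regular grid, fixed spot radius, and median statistic are simplifying assumptions; production readers use more elaborate spot finding): pixels within a fixed radius of each expected spot center are pooled and summarized.

import numpy as np

def integrate_spots(image, grid_origin, pitch, n_rows, n_cols, radius):
    """Summarize spot intensity at each expected grid position.

    image       : 2-D NumPy array (fluorescence image).
    grid_origin : (row, col) of the first spot center, in pixels.
    pitch       : spacing between spot centers, in pixels.
    radius      : pixels within this distance of a center count as the spot.
    Returns an (n_rows, n_cols) array of median spot intensities.
    """
    rr, cc = np.indices(image.shape)
    out = np.zeros((n_rows, n_cols))
    for i in range(n_rows):
        for j in range(n_cols):
            r0 = grid_origin[0] + i * pitch
            c0 = grid_origin[1] + j * pitch
            mask = (rr - r0) ** 2 + (cc - c0) ** 2 <= radius ** 2
            out[i, j] = np.median(image[mask])
    return out

# Illustrative 2x3 grid on a synthetic image.
img = np.full((80, 110), 50.0)
img[18:23, 18:23] = 400.0          # one bright spot at grid position (0, 0)
print(integrate_spots(img, grid_origin=(20, 20), pitch=30,
                      n_rows=2, n_cols=3, radius=4))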
The majority of current microarray readers are based on laser-PMT technology. The strength of laser scanners for microarrays is their availability, high sensitivity, and optimization for slides. At present, only one laser microarray scanner supports formats other than slides. While the laser offers an intense source of narrow wavelength illumination, it is limited to available and affordable laser wavelengths, thus limiting the choice of dyes, stains, and other fluorescent markers that can be used. Whether laser scanners are robust enough to survive as clinical workhorses remains to be determined. Perhaps the greatest challenges for microarray detection technology revolve around matching the biological range of gene expression in vivo. Certainly, most researchers desire improvements in "experimental sensitivity" and would prefer the overall limit of detection to be lower than the lowest physiologic levels of gene expression. In yeast, the range of mRNA transcript abundance spans about 10^5-fold, from the most highly expressed gene to the physiologic equivalent of zero. This greatly exceeds the "experimental dynamic range," meaning the ratio between the highest and lowest signals in a single fluorescent dye channel measurable in a single image using current detectors. Laser-PMT based microarray scanners operate using "constant exposure," in that laser power and PMT gain are not variable within a microarray image. In practice, one selects exposure settings to improve detection of low signal spots while sacrificing higher signal spots to saturation. Thus, multiple images must be taken to capture the full range of the biological information in the microarray.
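One way to work around a limited single-image dynamic range, consistent with taking multiple images as noted above, is to merge a short and a long exposure; the sketch below is one assumed way such merging could be done, not the method of this disclosure. Saturated pixels in the long exposure are replaced by short-exposure values scaled by the exposure-time ratio.

import numpy as np

def merge_exposures(short_img, long_img, short_time, long_time, saturation):
    """Merge two exposures into one extended-dynamic-range image.

    Pixels that saturate in the long exposure are taken from the short
    exposure and rescaled by the exposure-time ratio; all other pixels keep
    the higher-SNR long-exposure value (on the long-exposure scale).
    """
    scale = long_time / short_time
    saturated = long_img >= saturation
    merged = long_img.astype(float).copy()
    merged[saturated] = short_img[saturated].astype(float) * scale
    return merged

# Illustrative data: one dim spot, one spot clipping at a 16-bit ceiling.
short_exp = np.array([[120.0, 6000.0]])    # 0.1 s exposure
long_exp = np.array([[1200.0, 65535.0]])   # 1.0 s exposure (bright spot clipped)
print(merge_exposures(short_exp, long_exp, 0.1, 1.0, saturation=65535))
# -> [[ 1200. 60000.]]  (bright spot recovered beyond the single-image range)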
In addition, individual genes can be repressed or induced up to 1000-fold. Comparisons of SAGE, Q-PCR, and microarray data indicate that ratios between two samples/states determined by microarray show significant compression relative to the other two methods. The magnitude of change between mRNA samples (representing cellular states) is often used to set priorities for further research. Thus, "state ratio accuracy," as measured by the fluorescence ratio between channels, is another area where the current system and components provide improvements.
Advantages of Experimental Multiplexing With Cells, Arrays and Beads
Embodiments of the system described herein can detect single cells, arrays, and single beads. Data on Luminex beads were taken with the illuminator and software described in the background of the art section. Due to the limitations of the previous design, the beads were not imaged in solution but rather were under a cover slip at the bottom of a microplate well. Improvements in mechanical systems, alignment techniques, software, and illumination enable the use of beads in solution, which is of great advantage in the integration of the new instrument into automated experiments such as high throughput screening. In addition, the new instrument will improve signal-to-noise ratio and uniformity of illumination for reasons previously discussed.
Properties of Arrays Useful For Biological Discovery

Arrays are spotted in fixed locations. Each array element is a probe for a specific target. In some cases, the array element may be the target and the probes are different samples hybridized to the targets. The location of the array element is associated with some degree of biological specificity. Specificity may be for DNA, RNA, proteins, or other biological entities. The fixed location of the array element allows for ease of analysis in associating a specific probe with a target. Hence, the address of the array is a key that provides information about the behavior of genes, proteins, and other biological entities. Arrays are always ordered, because it is the essence of the technique to use the location of the array element as a key to specificity. Arrays may be placed on one surface or on both sides of a very thin surface. The instrument disclosed is capable of reading arrays that are placed in a three dimensional matrix due to the large range of focus of the opto-mechanical system described. All that is required is that proximal layers of array elements be semi-transparent to distal layers of array elements.

Properties of Cells Useful For Biological Discovery
Cells represent the basic unit of biological activity. As such, they have great predictive value for biological studies in that they contain most of the levels of functionality and complexity of any organism. Cells may be fixed, as in cell arrays, or they may be mobile, as in solution. A unique property of cells is that they are alive and will grow and divide under the proper environmental conditions. Growth may occur in solution or in cell arrays. As indicated in much of the experimental data provided herein, cells may be operated on by numerous biological entities such as proteins, antibodies, enzymes, viruses, and bacteria, among others.
Properties of Beads Useful For Biological Discovery
Beads may be ordered or disordered. They may have various probes attached to their surface, or contained within, that confer specificity, such as antibodies, proteins, DNA, or RNA, among others.
Beads are made by a process that can confer very accurate sizing and doping with dyes. Hence, beads are excellent for controls. Because beads are mobile in solution they can be concentrated; for example, they can indicate the concentration of receptors on a cell. Beads may be stained with a wide variety of dyes, and since they are not living organisms the use of these dyes has no impact on their behavior. In some contexts, beads may mimic cells in their size or specificity, but they are not living organisms.

Properties of Arrays, Beads, and Cells When Multiplexed Together
All the individual properties of these three entities (arrays, cells, and beads) are preserved in a multiplexed experiment. However, the complexity of multiplexing is significantly enhanced when all three entities are combined. Cell arrays may also be configured over fixed arrays so that biological material contained in the fixed arrays acts upon or is taken into the cells. Using the ability of the current instrument to image multiple focal planes, each entity may be ordered or disordered (cells and beads) in a three dimensional matrix. This may allow, for example, studies of the activity of cells that are not confined to cell array monolayers. Beads might also be used to act upon or transfer certain chemical agents into a cell. This could be combined with a cell array format in which the cells are growing over an array and are being transfected with different biological agents. In this way, the dimensionality of the experiment is increased not only geometrically but also in terms of the biological phenomena that may be studied.

Claims

1. A method of optical analysis of optically detectable discrete targets in a multiwell plate comprising: a) directing illumination light onto a target area on a multiwell plate; b) collecting emitted light from a plurality of discrete targets located within said target area; c) directing said emitted light to an area array detector; d) measuring emitted light to generate image data using said detector such that said plurality of discrete targets are detected by said detector in a single image detection; and e) organizing image data by well on said multiwell plate from which said image data was generated.
2. The method of claim 1, further including repeating steps a-d for a plurality of target areas on said multiwell plate; and stitching together a plurality of target area images into a composite image of multiple wells on said multiwell plate.
3. The method of claim 1 or 2, wherein said detector allows oversampling of each discrete target.
4. The method of claims 1-3, wherein the discrete targets are cell-sized targets.

5. The method of claims 1-3, wherein the discrete targets are microarray spot targets.

6. The method of claims 1-5, wherein said step a) includes a step of homogenizing said illumination light.

7. The method of claims 1-6, wherein each discrete target is detected with between 10 and 20 detector pixels.
8. The method of claims 1-7, wherein step a) includes selection of an illumination wavelength using a selectable illumination filter.
9. The method of claims 1-8, wherein step a) includes off-axis illumination with respect to an optical axis of collected light transmitted to the detector.
10. The method of claims 1-9, wherein steps a) and b) are configured for epi-illumination.
11. The method of claims 1-10, further comprising an initial step of autofocusing on a well bottom.
12. The method of claims 1-11, wherein step c) includes directing collected emitted light through a selectable emission light filter.
13. The method of claims 1-12, wherein step a) includes an initial conditioning of illumination light by removal of light wavelengths selected from a group consisting of ultraviolet wavelengths and infrared wavelengths.
14. The method of claims 1-13, wherein step a) includes shaping illumination light into a geometric shape matched to a geometry of a target area on said multiwell plate.
15. The method of claims 1-14, wherein step a) includes spatially modulating illumination light.
16. The method of claims 1-15, wherein step c) includes focusing light onto the detector using a focusing lens selected from a positionally selectable group of focus lenses.
17. The method of claims 1-16, wherein steps a) - d) are completed a first time as a prescan and subsequently steps a) - d) are repeated under a modified scanning modality.
18. The method of claims 1-17, wherein step e) includes creating a full multiwell plate image, segmenting said full plate image into data images, and organizing the data images by plate well.
19. The method of claims 1-18, wherein steps a) - d) are initially performed at a low resolution and are subsequently repeated at a higher resolution.
20. A device for optical analysis of a multiwell plate comprising: an illumination means configured to illuminate an area of a multiwell plate; a light collection means configured to collect light emitted from discrete targets in said area of said multiwell plate; an area array detector; a collection lens positioned to focus collected light onto the detector; and a data collection means which receives image data from said area array detector and organizes image data by well associated with said multiwell plate.
21. The device of claim 20, wherein said illumination means includes a broad spectrum light source.
22. The device of claims 20 or 21, wherein said illumination means includes a light conditioning optical element which removes ultraviolet and infrared wavelengths from said illumination light.
23. The device of claims 20-22, wherein the illumination means includes a light homogenization means.
24. The device of claims 20-22, wherein the illumination means includes a means for shaping illumination light into a selected illumination area.
25. The device of claims 20-24, further including a means to select illumination wavelength.
26. The device of claims 20-25, wherein the illumination means is configured to provide off axis illumination.
27. The device of claims 20-26, wherein the illumination means includes a secondary light source configured to provide light within a specific wavelength range.
28. The device of claims 20-27, further including a means for autofocusing.
29. The device of claims 20-28, further including a means for filtering emitted light.
30. The device of claims 20-29, wherein said detector is configured to detect discrete targets in said multiwell plate using 10 to 20 pixels per discrete target.
31. The device of claims 20-30, wherein said illumination means includes a spatial light modulator.
32. The device of claims 20-31, wherein said data collection means includes a data processor which creates a full plate image from a plurality of individual detection events, segments said data into images, and organizes said images by wells from said multiwell plate.
33. The device of claims 20-32, wherein said device includes a means for imaging said plate at a relatively lower resolution and a means for imaging said plate at a higher resolution.
PCT/US2005/031772 2004-09-09 2005-09-08 Microplate analysis system and method WO2006031537A2 (en)

Applications Claiming Priority (14)

Application Number Priority Date Filing Date Title
US60845004P 2004-09-09 2004-09-09
US60833004P 2004-09-09 2004-09-09
US60843004P 2004-09-09 2004-09-09
US60840704P 2004-09-09 2004-09-09
US60846104P 2004-09-09 2004-09-09
US60844204P 2004-09-09 2004-09-09
US60847004P 2004-09-09 2004-09-09
US60/608,442 2004-09-09
US60/608,330 2004-09-09
US60/608,407 2004-09-09
US60/608,430 2004-09-09
US60/608,461 2004-09-09
US60/608,470 2004-09-09
US60/608,450 2004-09-09

Publications (2)

Publication Number Publication Date
WO2006031537A2 true WO2006031537A2 (en) 2006-03-23
WO2006031537A3 WO2006031537A3 (en) 2006-08-17

Family

ID=36060532

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2005/031772 WO2006031537A2 (en) 2004-09-09 2005-09-08 Microplate analysis system and method

Country Status (1)

Country Link
WO (1) WO2006031537A2 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6271042B1 (en) * 1998-08-26 2001-08-07 Alpha Innotech Corporation Biochip detection system
US20010055764A1 (en) * 1999-05-07 2001-12-27 Empedocles Stephen A. Microarray methods utilizing semiconductor nanocrystals
US20020158211A1 (en) * 2001-04-16 2002-10-31 Dakota Technologies, Inc. Multi-dimensional fluorescence apparatus and method for rapid and highly sensitive quantitative analysis of mixtures

Cited By (22)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1857807A2 (en) 2006-05-19 2007-11-21 Wallac Oy Arrangement and method for illuminating an object
EP1857807A3 (en) * 2006-05-19 2007-12-05 Wallac Oy Arrangement and method for illuminating an object
US8798394B2 (en) 2006-12-20 2014-08-05 Ventana Medical Systems, Inc. Quantitave, multispectral image analysis of tissue specimens stained with quantum dots
EP2381242A1 (en) * 2006-12-20 2011-10-26 Ventana Medical Systems, Inc. Quantitative, multispectral image analysis of tissue specimens stained with quantum dots
US8244021B2 (en) 2006-12-20 2012-08-14 Ventana Medical Systems, Inc. Quantitative, multispectral image analysis of tissue specimens stained with quantum dots
US8280141B2 (en) 2006-12-20 2012-10-02 Ventana Medical Systems, Inc. Quantitative, multispectral image analysis of tissue specimens stained with quantum dots
US8285024B2 (en) 2006-12-20 2012-10-09 Ventana Medical Systems, Inc. Quantitative, multispectral image analysis of tissue specimens stained with quantum dots
US8290236B2 (en) 2006-12-20 2012-10-16 Ventana Medical Systems, Inc. Quantitative, multispectral image analysis of tissue specimens stained with quantum dots
US8290235B2 (en) 2006-12-20 2012-10-16 Ventana Medical Systems, Inc Quantitative, multispectral image analysis of tissue specimens stained with quantum dots
US10072982B2 (en) 2007-02-13 2018-09-11 Biotek Instruments, Inc. Universal multidetection system for microplates
US9557217B2 (en) 2007-02-13 2017-01-31 Bti Holdings, Inc. Universal multidetection system for microplates
US8987684B2 (en) 2007-12-19 2015-03-24 Koninklijke Philips N.V. Detection system and method
EP2375273A1 (en) * 2010-04-09 2011-10-12 Leica Microsystems CMS GmbH Fluorescence microscope and method for multiple positioning in a screening application
EP2639292A1 (en) 2012-03-14 2013-09-18 Tecan Trading AG Method and micro-plate reader for examining biological cells or cell cultures
US10527550B2 (en) 2012-03-14 2020-01-07 Tecan Trading Ag Method and microplate reader for investigating biological cells or cell cultures
EP3301431A1 (en) * 2016-09-29 2018-04-04 Roche Diagniostics GmbH Multi-chamber analysis device and method for analyzing
JP2018054613A (en) * 2016-09-29 2018-04-05 エフ.ホフマン−ラ ロシュ アーゲーF. Hoffmann−La Roche Aktiengesellschaft Multi-chamber analysis device and analysis method
CN107883994A (en) * 2016-09-29 2018-04-06 豪夫迈·罗氏有限公司 Cavity plate analytical equipment and analysis method
US10204281B2 (en) 2016-09-29 2019-02-12 Roche Molecular Systems, Inc. Multi-chamber analysis device and method for analyzing
CN107883994B (en) * 2016-09-29 2021-07-23 豪夫迈·罗氏有限公司 Multi-cavity plate analysis device and analysis method
GB2597502A (en) * 2020-07-24 2022-02-02 Ffei Ltd A whole slide imaging method for a microscope
WO2023115537A1 (en) * 2021-12-24 2023-06-29 Molecular Devices, Llc. Microplate reader

Also Published As

Publication number Publication date
WO2006031537A3 (en) 2006-08-17

Similar Documents

Publication Publication Date Title
WO2006031537A2 (en) Microplate analysis system and method
JP3837165B2 (en) Digital imaging system for testing in well plates, gels and blots
US10977478B2 (en) Methods and devices for reading microarrays
US7803609B2 (en) System, method, and product for generating patterned illumination
US7354389B2 (en) Microarray detector and methods
US20120126142A1 (en) Fluorescent analysis method
KR102136648B1 (en) Detection method, microarray analysis method and fluorescence reading device
US7682782B2 (en) System, method, and product for multiple wavelength detection using single source excitation
US8520976B2 (en) System, method, and product for imaging probe arrays with small feature size
US9445025B2 (en) System, method, and product for imaging probe arrays with small feature sizes
US20100087325A1 (en) Biological sample temperature control system and method
CN1308726A (en) Method and apparatus for computer controlled rell, including fetal cell, based diagnosis
WO2005113832A2 (en) Wide field imager for quantitative analysis of microarrays
TWI233487B (en) Apparatus and method for accessing and processing reflection image from microwell-plate-based biochip
CN1829907A (en) Analysing biological entities
EP1508027A2 (en) Microarray detector and methods
US20040224332A1 (en) System and method for calibration and focusing a scanner instrument using elements associated with a biological probe array
US20220187587A1 (en) Kinematic imaging system
US20230207056A1 (en) Automatically switching variant analysis model versions for genomic analysis applications
JP2003042956A (en) Data read method and scanner used therefor
US7504072B2 (en) Biopolymeric array scanning devices that focus on the far side of an array and methods for using the same
WO2023129764A1 (en) Automatically switching variant analysis model versions for genomic analysis applications
CN115605577A (en) Real-time quantitative polymerase chain reaction (qPCR) reactor system with sample detection and injection
KR101188233B1 (en) A diagnosis apparatus for biochip
Agroskin et al. Luminescence video analyzer of biological microchips

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NG NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU LV MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase