WO2001057805A2 - Image data processing method and apparatus - Google Patents

Image data processing method and apparatus Download PDF

Info

Publication number
WO2001057805A2
Authority
WO
WIPO (PCT)
Prior art keywords
image data
interior configuration
scanner
data
volumetric
Prior art date
Application number
PCT/GB2001/000389
Other languages
French (fr)
Other versions
WO2001057805A3 (en)
Inventor
Ivan Daniel Meir
Norman Ronald Smith
Guy Richard John Fowler
Original Assignee
Tct International Plc
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tct International Plc filed Critical Tct International Plc
Priority to AU2001230372A priority Critical patent/AU2001230372A1/en
Publication of WO2001057805A2 publication Critical patent/WO2001057805A2/en
Publication of WO2001057805A3 publication Critical patent/WO2001057805A3/en

Links

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformation in the plane of the image
    • G06T3/40Scaling the whole image or part thereof
    • G06T3/4053Super resolution, i.e. output image resolution higher than sensor resolution
    • G06T3/4061Super resolution, i.e. output image resolution higher than sensor resolution by injecting details from a different spectral band
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/30Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T7/33Determination of transform parameters for the alignment of images, i.e. image registration using feature-based methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/20Image preprocessing
    • G06V10/24Aligning, centring, orientation detection or correction of the image
    • G06V10/245Aligning, centring, orientation detection or correction of the image by locating a pattern; Special marks for positioning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10072Tomographic images
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30004Biomedical image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/03Recognition of patterns in medical or anatomical images

Definitions

  • the present invention relates to a method and apparatus for processing image data derived from an object such as a patient.
  • It would be convenient for a clinician to combine volumetric image data sets pertaining to the same patient but of one or more different modalities and/or obtained on different occasions.
  • each apparatus will have its own reference frame and furthermore in many cases there will be little or no overlap between the data sets (e.g. X-ray data and surface or sliced MRI data) which will make accurate registration difficult or impossible.
  • the configuration or shape of the patient will be somewhat different when the respective data sets are acquired, again tending to prevent accurate registration.
  • the patient may move during the acquisition of data; for example, it may take a number of minutes to acquire an MRI scan. Even the fastest CT scan can take several seconds, which will result in imaging errors unless the patient is kept absolutely rigid.
  • An object of the present invention is to overcome or alleviate at least some of the above problems.
  • the invention provides a method of processing configuration-sensitive data comprising acquiring configuration information in association with said configuration-sensitive data and enhancing the configuration-sensitive data using the configuration information.
  • the configuration-sensitive data comprises a volumetric data set relating to the interior of a subject.
  • the volumetric data set might define the shape of internal organs of a patient.
  • surface data is acquired by optical means and utilised to derive the configuration information.
  • the configuration information might be a digitised surface representation of the patient's body acquired by a 2D or preferably a 3D camera arrangement.
  • the surface data is acquired by optically tracking markers located on the surface of the subject.
  • the markers are detachable and are located in predetermined relationships to permanent or semi-permanent markings on the surface of the subject.
  • the configuration information could also be information derived from the digitised surface representation; for example, it could comprise a set of normals to the surface, or could comprise model data (derived from a physical or statistical model of the relevant part of the patient's body, for example) which could be used to enhance the configuration-sensitive data, e.g. by means of statistical correlation between the configuration information and the configuration-sensitive data.
  • the object (or the surface or volumetric data thereof) is modelled (e.g. as a rigid model, an affine model or a spline model such as a NURBS model) in such a manner as to constrain its allowable movement and the images or volumetric data sets are registered under the constraint that movement is limited to that allowed by the model.
  • Useful medical information can then be derived, e.g. by carrying over information from the surface data to the volumetric data. In certain embodiments such information can be derived with the aid of a statistical or physical model of the object.
  • the invention provides medical imaging apparatus arranged to acquire configuration information in association with configuration-sensitive medical data.
  • the configuration information comprises surface data representative of a subject.
  • the configuration information could be a surface representation of a patient's body and the apparatus could include one or more cameras for acquiring such a representation.
  • the apparatus is calibrated to output the surface representation and an internal representation (e.g. an X-ray image or a volumetric data set) referenced to a common reference frame.
  • the apparatus includes display means arranged to display both a present surface representation and a stored previous surface representation of the same patient and means for adjusting the position or orientation of the apparatus (optionally under the control of a user) in relation to the patient such that the two surface representations are registered.
  • This ensures that the previous and present internal representations are also registered and aids comparison of such representations (e.g. X-ray images) and also helps to prevent mistakes in alignment of the apparatus relative to the patient which might result in further visits from the patient and unnecessary radiation dosage, for example.
  • the invention provides a method of associating two sets of volumetric data (V1 and V2) comprising the step of associating sets ({S1, V1} and {S2, V2}) of surface data (S1 and S2) registered with the respective sets of volumetric data.
  • This aspect is related to the apparatus of the above-mentioned aspect in that each set of surface data associated with a set of volumetric data can be acquired with that apparatus.
  • the registration is performed on a model of the subject which is constrained to allow movement only in accordance with a predetermined model for example a rigid model, an affine model or a spline model.
  • the invention provides a frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part.
  • the frame is in the form of a mask shaped to fit the surface of a patient's face or head.
  • the invention also provides a method of making such a frame.
  • the invention further provides a method of processing image data relating to a three dimensional object of a variable positional disposition, the object having an outer surface and an interior configuration, the method comprising: acquiring, for a first positional disposition of the object, first interior configuration image data concerning the interior configuration of the object, and also first three dimensional outer surface image data concerning the outer surface of the object; acquiring, for a second positional disposition of the object, second interior configuration image data concerning the interior configuration of the object, and also second three dimensional outer surface image data concerning the outer surface of the object; and registering the first and second interior configuration image data as a function of the relationship between the first and second three dimensional outer surface image data.
  • FIG. 1 is a diagrammatic representation of scanning apparatus in accordance with one aspect of the invention.
  • Figure 2 is a flow diagram illustrating a method of processing images or volumetric data sets in accordance with another aspect of the invention;
  • Figures 3A and 3B are diagrammatic representations showing the distortion of a coordinate system to correspond to the movement of a patient as detected optically;
  • Figures 3C to 3E are diagrammatic representations showing the corresponding distortion of a coordinate system relative to which a volumetric data set is obtained;
  • Figure 4 is a diagrammatic representation illustrating a method of calibration of the apparatus of Figure 1;
  • Figure 5 is a diagrammatic representation of scanning apparatus in accordance with an aspect of the invention and the registration of surface and volumetric representations obtained with the apparatus;
  • Figure 6 is a diagrammatic representation of X-ray and video camera apparatus in accordance with one aspect of the invention.
  • Figure 7 is a diagrammatic profile of two sets of superimposed surface and volumetric data showing the registration in accordance with an aspect of the invention of the two volumetric data sets from the registration of each volumetric data set with its associated surface and the registration of the surfaces;
  • Figure 8A is a diagrammatic representation of the combination of two volumetric data sets in accordance with an aspect of the invention by the registration of their associated surfaces and Figure 8B is a similar diagrammatic representation utilising sets of sliced MRI data;
  • Figure 9 is a diagrammatic transverse cross-section of a scanner in accordance with an aspect of the invention showing the enhancement of the volumetric data with correction data derived from surface information;
  • Figure 10 is a diagrammatic elevation of a mask fitted to a patient's head and incorporating a biopsy needle for taking a biopsy at a defined 3D position in the patient's brain, and
  • FIG. 11 illustrates ceiling mounted stereoscopic cameras for use with a scanner, in accordance with the invention.

Detailed description
  • a volumetric scanner 10, which is suitably an MRI, CT or PET scanner for example, is shown and is arranged to generate a 3D internal volumetric image 30 of a patient P.
  • Such scanners are well known per se and accordingly no further details are given of the conventional features of such scanners.
  • Three transversely disposed stereoscopic camera arrangements are provided, comprising digital cameras C1 and C2 which acquire a 3D image of the head and shoulders of the patient P, digital cameras C3 and C4 which acquire a 3D image of the torso of the patient and digital cameras C5 and C6 which acquire a 3D image of the legs of the patient.
  • These images can be still or moving images and the left and right images of each pair can be correlated with the aid of a projected pattern in order to facilitate the derivation of a 3D surface representation.
  • the images acquired by the three sets of cameras are displayed to the user as indicated at 20A, 20B and 20C.
  • Figure 2 shows how the data acquired e.g. by the scanner of Figure 1 can be processed.
  • in step 100, images of a surface and/or volumetric data sets are obtained from the same subject (e.g. patient P) at different times, e.g. before and after treatment. It is assumed in this embodiment that little or no movement of the patient P occurs during data acquisition, which is a valid assumption if the data acquisition takes less than, say, 25 milliseconds or so.
  • a model of the object is provided and used to constrain allowable distortion of the images or volumetric data sets of step 100 when subsequently registering them (step 600).
  • the model of step 500 could be a rigid model 200 (in which case no distortion is allowed between the respective images or volumetric data sets acquired at different times), an affine model 300 (which allows shearing or stretching as shown, and which term includes a piecewise affine model comprising a patchwork of affine transforms allowing different shearing and stretching distortions in the different parts of the surface or volumetric data set), or a spline model 400, e.g. a NURBS model.
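The rigid/affine distinction above can be illustrated numerically: a rigid model admits only rotation and translation (recoverable with the Kabsch algorithm), while an affine model also admits shear and scaling (recoverable by least squares). A minimal NumPy sketch on synthetic landmark points (not from the patent; all data and names are illustrative):

```python
import numpy as np

def fit_rigid(src, dst):
    """Kabsch: best rotation R and translation t with dst ~ src @ R.T + t."""
    cs, cd = src.mean(0), dst.mean(0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))   # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    return R, cd - R @ cs

def fit_affine(src, dst):
    """Least-squares affine A, t with dst ~ src @ A.T + t."""
    X = np.hstack([src, np.ones((len(src), 1))])
    M, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return M[:3].T, M[3]

# Synthetic landmarks: dst is src sheared and shifted (a non-rigid motion).
rng = np.random.default_rng(0)
src = rng.normal(size=(20, 3))
shear = np.array([[1.0, 0.3, 0.0], [0.0, 1.0, 0.0], [0.0, 0.0, 1.0]])
dst = src @ shear.T + np.array([5.0, -2.0, 1.0])

A, t = fit_affine(src, dst)
R, tr = fit_rigid(src, dst)
affine_err = np.abs(src @ A.T + t - dst).max()   # shear is recovered exactly
rigid_err = np.abs(src @ R.T + tr - dst).max()   # rigid model cannot fit shear
```

The residuals show why the choice of model matters: constraining registration to a rigid model leaves the shear unexplained, whereas the affine model absorbs it.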
  • the models can take into account the characteristics of the scanner used to acquire the volumetric data sets.
  • the models 200, 300 and 400 characterise the relationship between the three dimensional image of the patient's outer surface and the interior configuration imaged by the scanner and captured in the volumetric data.
  • in step 600, the selected model is used to constrain relative movement or distortion of the images or volumetric data sets acquired at different times while they are registered as completely as possible within this constraint.
  • in step 700, the volumetric data is enhanced and output. More detailed information on this step is given in the description of the subsequent embodiments. This step is optionally performed with the aid of a physical model 800 and/or a statistical model 900 of the subject.
  • a physical model 800 can incorporate knowledge of the object's physical properties and can employ e.g. a finite element analysis technique to model the subject's deformations between the different occasions of data acquisition.
  • a statistical model 900 is employed.
  • Statistical models of the human face and other parts of the human body, e.g. internal organs, are known, e.g. from A Hill, A Thornham, C J Taylor, "Model-Based Interpretation of 3D Medical Images", Proc BMVC 1993, pp 339-348, which is incorporated herein by reference.
  • This reference describes Point Distribution Models.
  • a Point Distribution Model comprises an envelope in m-dimensional space defined by eigenvectors representative of at least the principal modes of variation of the envelope, each point within the envelope representing a potential instance of the model and defining the positions (in 2D or 3D physical space) of n landmark points which are located on characteristic features of the model.
  • the envelope is generated from a training set of examples of the subject being modelled (e.g. images of human faces if the human face is being modelled).
  • the main usefulness of such models lies in the possibility of applying Principal Component Analysis (PCA) to find the eigenvectors corresponding to the main modes of variation of the envelope derived from the training set which enables the envelope to be approximated by an envelope in fewer dimensions.
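The PCA step described above can be sketched as follows, using a synthetic training set whose variation is, by construction, governed by a single mode; a real PDM would use annotated landmark sets, so everything below is illustrative:

```python
import numpy as np

# Training set: N shapes, each n landmarks in 2D, flattened to vectors.
rng = np.random.default_rng(1)
n_shapes, n_landmarks = 50, 10
base = rng.normal(size=2 * n_landmarks)    # mean shape
mode = rng.normal(size=2 * n_landmarks)    # one true mode of variation
weights = rng.normal(size=n_shapes)
shapes = base + np.outer(weights, mode)    # (N, 2n) training matrix

# PCA: eigenvectors of the covariance matrix are the modes of variation.
mean = shapes.mean(0)
X = shapes - mean
cov = X.T @ X / (n_shapes - 1)
evals, evecs = np.linalg.eigh(cov)         # ascending eigenvalues
order = np.argsort(evals)[::-1]
evals, evecs = evals[order], evecs[:, order]

# A single mode explains essentially all variance in this synthetic set,
# so the 2n-dimensional envelope is well approximated in one dimension.
explained = evals[0] / evals.sum()

# Reconstruct a shape from its single shape parameter b = phi^T (x - mean).
phi = evecs[:, :1]
b = phi.T @ (shapes[0] - mean)
recon = mean + (phi @ b).ravel()
```

This is exactly the dimensionality reduction the passage describes: the training shapes live in 2n dimensions, but their variation is captured by a handful of eigenvectors.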
  • Point Distribution Models are described in more detail by A Hill et al, 'Model-Based Interpretation of 3-D Medical Images', Proc. 4th British Machine Vision Conference, pp 339-348, Sept 1993, which is incorporated herein by reference.
  • Active Shape Models are derived from Point Distribution Models and are used to generate new instances of the model, i.e. new shapes, represented by points within the envelope in the m-dimensional space.
  • given a new shape (e.g. the shape of a new human face) known to nearly conform to the envelope, its set of shape parameters can be found and the shape can then be manipulated, e.g. by rotation and scaling, to conform better to the envelope, preferably in an iterative process.
  • the PDM preferably incorporates grey level or other image information (e.g. colour information) besides shape, and e.g. the grey scale profile perpendicular to a boundary at a landmark point is compared in order to move the landmark point to make the new image conform more closely to the set of allowable shapes represented by the PDM.
  • an ASM consists of a shape model controlling a set of landmark points, together with a statistical model of image information e.g. grey levels, around each landmark.
  • Active Shape Models and the associated Grey-Level Models are described in more detail by T.F. Cootes et al in "Active Shape Models: Evaluation of a Multi-Resolution Method for Improving Image Search" Proc. British Machine Vision Conference 1994 pp 327-336, which is incorporated herein by reference.
  • the training set from which the statistical model 900 is derived is preferably composed of a variety of images or volumetric data sets derived from a single organ, patient or other subject, in order to model the possible variation of a given subject as opposed to the variance in the general population of such subjects.
  • the resulting analysis enables data which is not a function of normal variation in the subject, e.g. growth of a tumour, to be highlighted, and indeed quantified.
  • the model 200, 300 or 400 (Figure 2) is utilised to distort the coordinate frame of Figure 3A in such a manner that the model patient coordinates in Figure 3B are unchanged relative to the coordinate frame.
  • a given point on the modelled patient, e.g. the nose tip, has the same coordinates in the distorted coordinate frame of Figure 3B as in the undistorted coordinate frame of Figure 3A.
  • a complementary distortion is then applied to the volumetric image (data set) obtained on the second occasion as shown in Figure 3C to transform this data set to a data set (shown in Figure 3D) which would have been obtained on the second occasion if the patient had not moved.
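For the simplest case of a pure translation detected from the surface data, the complementary distortion amounts to resampling the second volume with the detected shift undone. A NumPy sketch with nearest-neighbour lookup (illustrative only; a practical implementation would use interpolated resampling, e.g. scipy.ndimage.affine_transform):

```python
import numpy as np

def resample_translated(volume, shift):
    """Undo a rigid translation: the value at voxel x is looked up at
    x + shift in the acquired volume (nearest-neighbour, no interpolation)."""
    shift = np.asarray(shift).reshape(3, 1, 1, 1)
    idx = np.indices(volume.shape)
    src = idx + shift
    bounds = np.array(volume.shape).reshape(3, 1, 1, 1)
    valid = np.all((src >= 0) & (src < bounds), axis=0)
    out = np.zeros_like(volume)
    out[valid] = volume[tuple(s[valid] for s in src)]
    return out

# First scan: a bright feature at voxel (8, 8, 8).
vol1 = np.zeros((16, 16, 16))
vol1[8, 8, 8] = 1.0

# Second scan: the patient has moved by (2, 0, -1) voxels,
# so the same feature appears at (10, 8, 7).
vol2 = np.zeros((16, 16, 16))
vol2[10, 8, 7] = 1.0

# Undo the movement detected from the surface data, yielding the volume
# that would have been obtained on the second occasion without motion.
vol2_corrected = resample_translated(vol2, (2, 0, -1))
```

After correction the feature lies at the same voxel in both data sets, so the two volumes can be compared directly, as the passage describes for Figures 3C and 3D.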
  • the volumetric image of Figure 3D can then be compared with the volumetric data set (not shown) obtained on the first occasion.
  • the surface of the patient P is permanently or semi-permanently marked by small locating tattoos (not shown), which are visible to the human eye on close examination.
  • Temporary markers M are applied to these tattoos and are sufficiently large to be tracked easily in real time by a stereoscopic camera arrangement.
  • the markers M can for example be in the form of detachable stickers or can be drawn over the tattoos with a marker pen. It is not essential for the markers M to be precisely located over the tattoos (although this will usually be the most practical option), but each marker should have a location which is precisely defined by the tattoos; for example, the markers could each be equidistant from two, three or more tattoos.
  • during a scan by a volumetric scanner (e.g. scanner 10 of Figure 1), the markers M are tracked optically and the scanner's volumetric coordinate frame is distorted to correspond with that distortion of the scanner's optical coordinate frame which would leave the 3D positions of the markers M unchanged, either during the scan or relative to a previous scan during which the markers were also used.
  • a calibration target T which is visible both to the cameras C1 and C2 and to the scanner 10 is imaged by both the scanner and the cameras, resulting in images I1 in the camera reference frame F1 and I2 in the scanner reference frame F2.
  • a geometrical transformation TR can be found in a known manner which will map I1 onto I2, and the same transformation can then be applied, e.g. in software or firmware, to move, scale and (if necessary) distort the visual image of a patient's body into registration with the scanner reference frame F2.
  • where the scanner is an MRI or a CT scanner, there will be data common to the visual data acquired by the cameras and the volumetric data, typically the patient's skin surface.
  • a calibration procedure such as that shown in Figure 5 can be used.
  • a patient P is imaged by both the cameras C1 and C2 and the scanner 10, and the resulting images p1 and p2 of the patient's surface in the respective reference frames F1 and F2 of the camera system and scanner can be registered by a transformation TR which can be found in a known manner by analysis.
  • This transformation TR can be stored and used subsequently to register the digitised surfaces acquired by the cameras to the volumetric data set acquired by the scanner 10.
  • Figure 6 shows a slightly different embodiment which consists essentially of an X-ray camera Cx rigidly mounted with a digital video camera Cv on a common supporting frame FR.
  • the X-ray image and visual image I1 acquired by the cameras Cx and Cv are processed by a computer PC and displayed on a display D (only I1 is shown).
  • the computer PC is provided with video memory arranged to store a visual image I2 of the same patient previously acquired by the video camera Cv when taking an X-ray image.
  • the cameras Cv and Cx are moved, e.g. under the control of the operator or possibly under control of a suitable image registration program, on their common mounting frame FR to superimpose image I2 on image I1 as shown by arrow a1, and the X-ray image is then captured.
  • the X-ray camera is movable with respect to the video camera and the movement of the X-ray camera required to register the new X-ray image with the previous X-ray image is derived from the movement needed to register the surface images.
  • the arrangement can instead image an array of markers M located by tattoos on the patient in a manner similar to that of Figure 3E.
  • X-ray images are standardised, which aids comparison and also facilitates further analysis such as that illustrated by blocks 700, 800 and 900 of Figure 2.
  • Other embodiments could utilise a standard type of volumetric scanner e.g. a CT scanner or an MRI scanner rather than an X-ray camera Cx, with similar advantages.
  • MRI scanning can be enhanced by scanning only the relevant volume known from previously acquired surface data to contain the region of interest.
  • stereoscopic video camera arrangements rather than a single video camera Cv could be employed and more sophisticated registration techniques analogous to those of blocks 200 to 600 of Figure 2 could be employed.
  • stereoscopic viewing arrangements rather than a screen could be used for the registration.
  • Figure 7 illustrates a further aspect of the invention involving using e.g. the MRI scanner arrangement of Figure 1 to capture a surface representation S1 of the patient's body and a volumetric data set I1 including the surface of the patient's body, and using a further scanner of different modality, e.g. a CT scanner, to capture a different volumetric data set I2 in association with a surface representation S2.
  • the volumetric data sets I1 and I2 are registered with their respective associated surface representations S1 and S2 by appropriate transformations r1 and r2 as shown (preferably utilising the results of an earlier calibration procedure as described above), and the surface representations are then registered with each other by a transformation R1. Since the volumetric data sets I1 and I2 are each referenced to the resulting common surface, they can be registered with each other by a transformation R2 which can be simply derived from R1, r1 and r2.
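Representing each transformation as a 4x4 homogeneous matrix, the derivation of R2 follows by composition: r1 maps the first volume into its surface frame, R1 maps that surface onto the second surface, and the inverse of r2 maps back into the frame of the second volume, so R2 = r2^-1 . R1 . r1. A sketch with arbitrary illustrative transforms (the numeric values are not from the patent):

```python
import numpy as np

def homogeneous(R, t):
    """Build a 4x4 homogeneous matrix from a 3x3 linear part and a translation."""
    M = np.eye(4)
    M[:3, :3] = R
    M[:3, 3] = t
    return M

# Illustrative transforms:
r1 = homogeneous(np.eye(3), [0.0, 0.0, 5.0])     # volume 1 -> surface 1
r2 = homogeneous(np.eye(3), [1.0, 0.0, 0.0])     # volume 2 -> surface 2
theta = np.pi / 2
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0, 0.0, 1.0]])
R1 = homogeneous(Rz, [0.0, 2.0, 0.0])            # surface 1 -> surface 2

# volume 1 -> volume 2: through surface 1, onto surface 2, back out of r2.
R2 = np.linalg.inv(r2) @ R1 @ r1

# Check with a point: step-by-step mapping agrees with the composite.
p = np.array([1.0, 2.0, 3.0, 1.0])
step = np.linalg.inv(r2) @ (R1 @ (r1 @ p))
composite = R2 @ p
```

This is the sense in which R2 "can be simply derived from R1, r1 and r2": it is a product of matrices already known from calibration and surface registration.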
  • such a combination of different features is shown in Figure 8A, wherein surface representations S registered with respective volumetric data sets Va and Vb of different modality are combined to generate a new volumetric data set Vc registered with a surface representation S' which is a composite of the surface representations S.
  • Such a technique can be used to combine not only volumetric data sets of different modality but also volumetric data sets of different resolution or accuracy or different size or shape.
  • the volumetric data sets acquired by different modalities have no overlap, i.e. no information in common, but are incorporated into a common reference frame by mutually registering surface representations which are acquired optically, e.g. by a camera arrangement similar to that of Figure 1, simultaneously with the respective volumetric data sets.
  • Figure 8B shows two composite images of an organ having cross-sections X1, X3 and X5, and X2 and X4, respectively, wherein the cross-sections are acquired by MRI and the surfaces S are acquired optically. By registering the surfaces S to a composite surface S', the MRI cross-sections are mutually aligned, as shown.
  • a further application of the invention concerns volumetric data sets which are acquired whilst the patient is moving.
  • an MRI scan can take 40 minutes and if the patient moves during this period the resulting scan is degraded by blurring.
  • the blurring could be alleviated by utilising the optically acquired surface data of the moving patient to distort the reference frame of the volumetric data set and reconstructing the volumetric data set with reference to the distorted reference frame.
  • This de-blurring technique is somewhat similar to the registration technique described above with reference to Figures 3A to 3D but unlike that technique, can involve a progressive distortion of the reference frame to follow movement of the patient.
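The progressive correction can be sketched in a deliberately simplified form: treat the scan as a sequence of slices, each acquired while the patient sat at an optically tracked offset, and undo each slice's offset before stacking. (Real MRI blurring arises in k-space, so this image-space sketch is only an analogy; all data below is synthetic.)

```python
import numpy as np

# Each "slice" of a long scan is acquired at a different time; the optically
# tracked in-plane shift at that time is undone before the slices are stacked.
n_slices, size = 8, 32
true_slice = np.zeros((size, size))
true_slice[16, 16] = 1.0                                 # a point feature

tracked_shifts = [(k % 3) - 1 for k in range(n_slices)]  # patient drift, pixels

# Acquisition: each slice is recorded with the patient displaced.
acquired = [np.roll(true_slice, s, axis=0) for s in tracked_shifts]

# Reconstruction: undo the tracked shift slice by slice.
corrected = np.stack([np.roll(a, -s, axis=0)
                      for a, s in zip(acquired, tracked_shifts)])

# Without correction the feature is smeared over three rows;
# with correction every slice has the feature back at row 16.
blurred_rows = int((np.stack(acquired).sum(axis=(0, 2)) > 0).sum())
sharp_rows = int((corrected.sum(axis=(0, 2)) > 0).sum())
```

The key point mirrors the passage: the reference frame follows the patient, so data acquired at different instants is reconstructed into a single consistent frame rather than averaged over the motion.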
  • the blurring can be alleviated by utilising a statistical or physical model of the patient to define a range of possible configurations of the patient in a mathematical space, generating volumetric data sets (in this case, artificial MRI scans) corresponding to the respective configurations allowed by the model, finding the generated volumetric data sets, corresponding to a path in the above mathematical space, whose appropriately weighted mean best matches (registers with) the actual blurred volumetric data set acquired by the scanner, and then processing the model and its associated volumetric data sets to find the volumetric data set which would have been obtained by the scanner if the patient had maintained a given configuration.
  • surface data acquired by one or more cameras can be used to aid directly the processing of volumetric scanning data to generate an accurate volumetric image of the interior of the patient.
  • a PET scanner 10' having an array of photon detectors D around its periphery and having a centrally located positron source is provided with at least one stereoscopic arrangement of digital cameras C which capture the surface S of the subject.
  • the camera array would be arranged to capture the entire 360 degree periphery of the subject and to this end could be arranged to rotate around the longitudinal axis of the scanner, for example.
  • True photon paths are shown by the full arrowed lines and include a path PT resulting from scattering at PS on the object surface.
  • the outputs of the relevant photon detectors would be interpreted to infer a photon path Pr, which is physically impossible because it does not pass through surface S. Accordingly such an erroneous interpretation can be avoided with the aid of the surface data acquired by the cameras C.
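The consistency test can be sketched geometrically: an inferred line of response between two detectors is rejected if it never passes through the optically acquired surface. Here the surface is stood in for by a sphere (an assumption for illustration; a real implementation would intersect against the captured surface mesh, and all names are illustrative):

```python
import numpy as np

def segment_hits_sphere(p1, p2, centre, radius):
    """True if the segment p1-p2 intersects a sphere standing in for the
    optically acquired body surface."""
    p1, p2, c = map(np.asarray, (p1, p2, centre))
    d = p2 - p1
    # Closest point on the segment to the sphere centre.
    t = np.clip(np.dot(c - p1, d) / np.dot(d, d), 0.0, 1.0)
    closest = p1 + t * d
    return np.linalg.norm(closest - c) <= radius

body_centre, body_radius = (0.0, 0.0, 0.0), 1.0

# A line of response passing through the body: a plausible true path.
ok = segment_hits_sphere((-2.0, 0.0, 0.0), (2.0, 0.0, 0.0),
                         body_centre, body_radius)
# A line of response missing the body entirely: physically impossible,
# so the detector coincidence would be rejected as scatter or noise.
bad = segment_hits_sphere((-2.0, 2.0, 0.0), (2.0, 2.0, 0.0),
                          body_centre, body_radius)
```

Coincidences whose inferred path fails this test are the erroneous interpretations the passage says can be avoided with the camera-derived surface data.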
  • the surface data acquired by the cameras C can be used to derive volumetric information which can facilitate the processing of the output signals of the detectors D, in particular the absorption and scattering can be estimated from the amount of tissue as determined from the surface acquired by the cameras C.
  • similar principles apply to X-ray imaging, which can be used with a knowledge of the patient's surface, e.g. to locate soft tissue.
  • a statistical model of the X-ray image or volumetric data set could be used to aid the derivation of an actual X-ray image or volumetric data set from a patient. This would result in higher accuracy and/or a lower required dosage.
  • One reconstruction method which would be applicable is based on level sets, as described by J. A. Sethian, "Level Set Methods and Fast Marching Methods", Cambridge University Press, 1999, which is hereby incorporated by reference. This method would involve computation of successive layers from the surface acquired by the cameras toward the centre of the subject.
  • a further application of the registration of surface and volumetric data in accordance with the present invention lies in the construction of a frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part.
  • the position or orientation can be set by utilising the volumetric data which has been registered with the surface data and by implication with the frame and its guide means.
  • the frame could be a mask that fits over the patient's face or head, or it could be arranged to be fitted to a rigid part of the leg or abdomen.
  • Mask M has an interior surface IS which matches a surface of the patient's head previously acquired by a stereoscopic camera arrangement, and is provided with a guide canal G for guiding a biopsy needle N to a defined position p in the patient's brain.
  • the orientation of the guide canal is predetermined with the aid of volumetric data, registered with the surface data and acquired by a scanner in accordance with the invention (e.g.
  • a scale SC is provided to enable the needle N to be advanced to a predetermined extent, until a reference mark on the needle is aligned with a predetermined graduation of the scale (also chosen on the basis of the volumetric data).
  • a stop could be used instead of scale SC.
  • whilst the described embodiments relate to the enhancement of volumetric data with the aid of surface data, other medical data could be enhanced with the aid of such surface data.
  • measurements of breathing could be combined with a moving 3D surface representation of the patient while such measurements are being made and the resulting measurements could be registered with previous measurements by a method of the type illustrated in Figure 2.
  • the invention may be used to advantage in order to correlate and register interior configuration image data for a patient obtained from a CT scanner and a PET scanner.
  • CT and PET scanners are large devices usually fixedly mounted in a dedicated room or area.
  • stereoscopic camera arrangements such as C1 and C3 for a PET scanner can be mounted on the wall, ceiling or an overhead gantry, separate from the scanner 10 itself.
  • the cameras are shown to be ceiling mounted in Figure 11. Similar camera mounting arrangements are provided for the CT scanner (not shown). Thus, there is no need to carry out modifications to the scanner itself, in order to install cameras at each of the CT and PET scanner locations.
  • a CT scan of the patient is taken with the patient in a first disposition, lying on the scanner table of the CT scanner, and a 3D surface image of the patient is captured using the cameras C1, C3, i.e. in the first disposition.
  • the patient is moved to the PET scanner and the process is repeated.
  • the image data from the PET and CT scans are then processed as previously described to bring the data into registry so that the data can be merged and used to analyse the patient's condition.
  • the 3D image data captured by the overhead cameras C for each of the CT and PET scanners is used as previously described to bring the scanner data into registry.
  • Volumetric data captured by a scanner can be gated on the basis of surface data acquired by a camera arrangement, e.g. to ensure that volumetric data is captured only when the patient is in a defined position or configuration, or at a particular time in the patient's breathing cycle.
  • the term 'optical' is to be construed to cover infra-red as well as visible wavelengths.
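The gating of volumetric acquisition on surface data could be sketched as a simple tolerance test on the live optical surface; this sketch is illustrative only, and the RMS criterion, tolerance value and function names are assumptions, not part of the disclosure:

```python
import numpy as np

def gate_acquisition(reference_surface, live_surface, tol=2.0):
    """Decide whether to capture volumetric data at this instant.

    reference_surface, live_surface: (N, 3) arrays of corresponding
        surface points from the camera arrangement.
    tol: allowed RMS deviation (e.g. in millimetres) from the defined
        reference configuration - an assumed criterion for this sketch.
    Returns True when the patient is close enough to the reference
    configuration for volumetric data to be captured.
    """
    diff = live_surface - reference_surface
    rms = np.sqrt(np.mean(np.sum(diff ** 2, axis=1)))
    return rms <= tol
```

The same test, applied against a surface recorded at a chosen phase of the breathing cycle, would gate acquisition to that phase.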

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Spectroscopy & Molecular Physics (AREA)
  • Computer Graphics (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Magnetic Resonance Imaging Apparatus (AREA)
  • Apparatus For Radiation Diagnosis (AREA)

Abstract

A volumetric scanner (10) such as a PET or MRI scanner for example is provided with an array of cameras (C1 to C6) which are arranged to acquire a 3D surface representation (20) of a patient (P) which is referenced to the same reference frame as the internal volumetric image (30) of the patient. The 3D surface data is used to enhance the volumetric data, e.g. by de-blurring the image by distorting the volumetric reference frame with the aid of the surface representation, optionally with the aid of a statistical model of the possible configurations of the patient. Volumetric data sets of different modalities (e.g. MRI and PET respectively) can be combined by registering the 3D surfaces captured by camera arrangements associated with the respective scanners.

Description

Image Data Processing Method and Apparatus
Field of the invention
The present invention relates to a method and apparatus for processing image data derived from an object such as a patient.
Background of the invention
It would be convenient for a clinician to combine volumetric image data sets pertaining to the same patient but of one or more different modalities and/or obtained on different occasions. However, it is not necessarily possible to do this with conventional apparatus, because each apparatus will have its own reference frame and, furthermore, in many cases there will be little or no overlap between the data sets (e.g. X-ray data and surface or sliced MRI data), which will make accurate registration difficult or impossible. Furthermore, in many cases the configuration or shape of the patient will be somewhat different when the respective data sets are acquired, again tending to prevent accurate registration. Additionally, the patient may move during the acquisition of data; for example, it may take a number of minutes to acquire an MRI scan. Even the fastest CT scan can take several seconds, which will result in imaging errors unless the patient is kept absolutely still.
An object of the present invention is to overcome or alleviate at least some of the above problems.
Summary of the invention
In one aspect the invention provides a method of processing configuration-sensitive data comprising acquiring configuration information in association with said configuration-sensitive data and enhancing the configuration-sensitive data using the configuration information.
Preferably the configuration-sensitive data comprises a volumetric data set relating to the interior of a subject. For example in one embodiment the volumetric data set might define the shape of internal organs of a patient.
Preferably surface data is acquired by optical means and utilised to derive the configuration information. For example in one embodiment the configuration information might be a digitised surface representation of the patient's body acquired by a 2D or preferably a 3D camera arrangement. In another embodiment the surface data is acquired by optically tracking markers located on the surface of the subject. Preferably the markers are detachable and are located in predetermined relationships to permanent or semi-permanent markings on the surface of the subject.
Since such surface data or such a digitised surface representation can be acquired extremely quickly and accurately, this offers great potential for improving the quality and integration of volumetric data, particularly in medical applications. The configuration information could also be information derived from the digitised surface representation; for example, it could comprise a set of normals to the surface, or could comprise model data (derived from a physical or statistical model of the relevant part of the patient's body, for example) which could be used to enhance the configuration-sensitive data, e.g. by means of statistical correlation between the configuration information and the configuration-sensitive data.
In preferred embodiments the object (or the surface or volumetric data thereof) is modelled (e.g. as a rigid model, an affine model or a spline model such as a NURBS model) in such a manner as to constrain its allowable movement and the images or volumetric data sets are registered under the constraint that movement is limited to that allowed by the model. Useful medical information can then be derived, e.g. by carrying over information from the surface data to the volumetric data. In certain embodiments such information can be derived with the aid of a statistical or physical model of the object.
In a related aspect the invention provides medical imaging apparatus arranged to acquire configuration information in association with configuration-sensitive medical data. Preferably the configuration information comprises surface data representative of a subject.
For example the configuration information could be a surface representation of a patient's body and the apparatus could include one or more cameras for acquiring such a representation. Preferably the apparatus is calibrated to output the surface representation and an internal representation (e.g. an X-ray image or a volumetric data set) referenced to a common reference frame.
In a preferred embodiment the apparatus includes display means arranged to display both a present surface representation and a stored previous surface representation of the same patient and means for adjusting the position or orientation of the apparatus (optionally under the control of a user) in relation to the patient such that the two surface representations are registered. This ensures that the previous and present internal representations are also registered and aids comparison of such representations (e.g. X-ray images) and also helps to prevent mistakes in alignment of the apparatus relative to the patient which might result in further visits from the patient and unnecessary radiation dosage, for example.
In another aspect the invention provides a method of associating two sets of volumetric data (V1 and V2) comprising the step of associating sets ({S1, V1} and {S2, V2}) of surface data (S1 and S2) registered with the respective sets of volumetric data. This aspect is related to the apparatus of the above-mentioned aspect in that each set of surface data associated with a set of volumetric data can be acquired with that apparatus.
Optionally the registration is performed on a model of the subject which is constrained to allow movement only in accordance with a predetermined model for example a rigid model, an affine model or a spline model.
In another aspect the invention provides a frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part. In one embodiment the frame is in the form of a mask shaped to fit the surface of a patient's face or head.
The invention also provides a method of making such a frame.
The invention further provides a method of processing image data relating to a three dimensional object of a variable positional disposition, the object having an outer surface and an interior configuration, the method comprising: acquiring, for a first positional disposition of the object, first interior configuration image data concerning the interior configuration of the object, and also first three dimensional outer surface image data concerning the outer surface of the object; acquiring, for a second positional disposition of the object, second interior configuration image data concerning the interior configuration of the object, and also second three dimensional outer surface image data concerning the outer surface of the object; and registering the first and second interior configuration image data as a function of the relationship between the first and second three dimensional outer surface image data.
Brief description of the drawings
Preferred embodiments of the invention are described below by way of example only with reference to the accompanying drawings, wherein:
Figure 1 is a diagrammatic representation of scanning apparatus in accordance with one aspect of the invention;
Figure 2 is a flow diagram illustrating a method of processing images or volumetric data sets in accordance with another aspect of the invention;
Figures 3A and 3B are diagrammatic representations showing the distortion of a coordinate system to correspond to the movement of a patient as detected optically; Figures 3C to 3E are diagrammatic representations showing the corresponding distortion of a coordinate system relative to which a volumetric data set is obtained;
Figure 4 is a diagrammatic representation illustrating a method of calibration of the apparatus of Figure 1;
Figure 5 is a diagrammatic representation of scanning apparatus in accordance with an aspect of the invention and the registration of surface and volumetric representations obtained with the apparatus;
Figure 6 is a diagrammatic representation of X-ray and video camera apparatus in accordance with one aspect of the invention;
Figure 7 is a diagrammatic profile of two sets of superimposed surface and volumetric data showing the registration in accordance with an aspect of the invention of the two volumetric data sets from the registration of each volumetric data set with its associated surface and the registration of the surfaces;
Figure 8A is a diagrammatic representation of the combination of two volumetric data sets in accordance with an aspect of the invention by the registration of their associated surfaces and Figure 8B is a similar diagrammatic representation utilising sets of sliced MRI data;
Figure 9 is a diagrammatic transverse cross-section of a scanner in accordance with an aspect of the invention showing the enhancement of the volumetric data with correction data derived from surface information;
Figure 10 is a diagrammatic elevation of a mask fitted to a patient's head and incorporating a biopsy needle for taking a biopsy at a defined 3D position in the patient's brain, and
Figure 11 illustrates ceiling-mounted stereoscopic cameras for use with a scanner, in accordance with the invention.
Detailed description
Referring to Figure 1, a volumetric scanner 10, which is suitably an MRI, CT or PET scanner for example, is shown and is arranged to generate a 3D internal volumetric image 30 of a patient P. Such scanners are well known per se and accordingly no further details are given of the conventional features of such scanners. Three transversely disposed stereoscopic camera arrangements are provided, comprising digital cameras C1 and C2 which acquire a 3D image of the head and shoulders of the patient P, digital cameras C3 and C4 which acquire a 3D image of the torso of the patient, and digital cameras C5 and C6 which acquire a 3D image of the legs of the patient. These images can be still or moving images, and the left and right images of each pair can be correlated with the aid of a projected pattern in order to facilitate the derivation of a 3D surface representation. The images acquired by the three sets of cameras are displayed to the user as indicated at 20A, 20B and 20C.
Multiple camera arrangements for acquiring 3D surfaces are commercially available, e.g. the S4M apparatus available from TCTi plc, formerly known as Tricorder Technology plc, and the camera arrangement of Figure 1 can be based on such known arrangements. Accordingly, the processing electronics used to process the 2D images into a 3D image is not described or shown in Figure 1. However, it should be noted that the fields of view of cameras C1 and C2 overlap with those of cameras C3 and C4, and that the fields of view of cameras C3 and C4 overlap with those of C5 and C6, to enable an overall 3D surface 20 of the patient P to be obtained.
Figure 2 shows how the data acquired e.g. by the scanner of Figure 1 can be processed. In step 100, images of a surface and/or volumetric data sets are obtained from the same subject (e.g. patient P) at different times, e.g. before and after treatment. It is assumed in this embodiment that little or no movement of the patient P occurs during data acquisition, which is a valid assumption if the data acquisition takes less than say 25 milliseconds or so.
It should be noted that other embodiments of the invention described hereinafter specifically address blurring of surface images or, more particularly, volumetric data sets caused by patient movement. In step 500, a model of the object is provided and used to constrain allowable distortion of the images or volumetric data sets of step 100 when subsequently registering them (step 600). The model of step 500 could be: a rigid model 200 (in which case no distortion is allowed between the respective images or volumetric data sets acquired at different times); an affine model 300, which allows shearing or stretching as shown (this term includes a piecewise affine model comprising a patchwork of affine transforms, which allows different shearing and stretching distortions in different parts of the surface or volumetric data set); or a spline model 400, e.g. a cubic spline or a NURBS representation, either of which could also be a piecewise representation. In each case the models can take into account the characteristics of the scanner used to acquire the volumetric data sets. Thus, the models 200, 300 and 400 characterise the relationship between the three dimensional image of the patient's outer surface and the interior configuration imaged by the scanner and captured in the volumetric data.
In step 600 the selected model is used to constrain relative movement or distortion of the images or volumetric data sets acquired at different times while they are registered as completely as possible within this constraint.
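By way of illustration only (this sketch is not part of the original disclosure), the affine model 300 can be fitted between corresponding surface points by least squares; the function names and array shapes are assumptions:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares affine transform mapping src points onto dst points.

    src, dst: (N, 3) arrays of corresponding surface points from the
        two acquisitions. Returns a 3x4 matrix A such that
        dst ~= apply_affine(A, src).
    """
    # Homogeneous coordinates: append a column of ones.
    src_h = np.hstack([src, np.ones((len(src), 1))])
    A, *_ = np.linalg.lstsq(src_h, dst, rcond=None)
    return A.T  # shape (3, 4)

def apply_affine(A, pts):
    """Apply a 3x4 affine transform to (N, 3) points."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])
    return pts_h @ A.T
```

A rigid model would restrict A to a rotation plus translation, and a piecewise affine model would fit one such A per surface patch.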
In step 700, the volumetric data is enhanced and output. More detailed information on this step is given in the description of the subsequent embodiments. This step is optionally performed with the aid of a physical model 800 and/or a statistical model 900 of the subject.
A physical model 800 can incorporate a knowledge of the object's physical properties and can employ e.g. a finite element analysis technique to model the subject's deformations between the different occasions of data acquisition.
Alternatively or additionally a statistical model 900 is employed. Statistical models of the human face and other parts of the human body, e.g. internal organs, are known, e.g. from A Hill, A Thornham, C J Taylor, "Model-Based Interpretation of 3D Medical Images", Proc. BMVC 1993, pp 339-348, which is incorporated herein by reference. This reference describes Point Distribution Models. Briefly, a Point Distribution Model (PDM) comprises an envelope in m-dimensional space defined by eigenvectors representative of at least the principal modes of variation of the envelope, each point within the envelope representing a potential instance of the model and defining the positions (in 2D or 3D physical space) of n landmark points which are located on characteristic features of the model. The envelope is generated from a training set of examples of the subject being modelled (e.g. images of human faces if the human face is being modelled). The main usefulness of such models lies in the possibility of applying Principal Component Analysis (PCA) to find the eigenvectors corresponding to the main modes of variation of the envelope derived from the training set, which enables the envelope to be approximated by an envelope in fewer dimensions.
Point Distribution Models are described in more detail by A Hill et al., 'Model-Based Interpretation of 3-D Medical Images', Proc. 4th British Machine Vision Conference, pp 339-348, Sept 1993, which is incorporated herein by reference.
Active Shape Models are derived from Point Distribution Models and are used to generate new instances of the model, i.e. new shapes, represented by points within the envelope in the m-dimensional space.
Given a new shape, e.g. the shape of a new human face known to nearly conform to the envelope, its set of shape parameters can be found and the shape can then be manipulated, e.g. by rotation and scaling, to conform better to the envelope, preferably in an iterative process. In order to improve the matching, the PDM preferably incorporates grey-level or other image information (e.g. colour information) besides shape; e.g. the grey-scale profile perpendicular to a boundary at a landmark point is compared in order to move the landmark point so as to make the new image conform more closely to the set of allowable shapes represented by the PDM. Thus the new example is deformed in ways that better fit the data represented by the training set. An ASM therefore consists of a shape model controlling a set of landmark points, together with a statistical model of image information, e.g. grey levels, around each landmark. Active Shape Models and the associated grey-level models are described in more detail by T.F. Cootes et al in "Active Shape Models: Evaluation of a Multi-Resolution Method for Improving Image Search", Proc. British Machine Vision Conference 1994, pp 327-336, which is incorporated herein by reference.
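The PCA step underlying a Point Distribution Model can be sketched as follows; this is an illustrative sketch only, and the function names, array shapes and mode count are assumptions:

```python
import numpy as np

def build_pdm(shapes, n_modes=2):
    """Build a Point Distribution Model from aligned training shapes.

    shapes: (S, n, 3) array - S training examples, each consisting of
        n 3D landmark points, assumed already aligned to a common pose.
    Returns (mean, modes, variances): the mean shape vector, the first
    n_modes eigenvectors (principal modes of variation) and their
    variances, obtained by PCA on the flattened landmark vectors.
    """
    S, n, d = shapes.shape
    X = shapes.reshape(S, n * d)
    mean = X.mean(axis=0)
    cov = np.cov(X - mean, rowvar=False)
    evals, evecs = np.linalg.eigh(cov)
    order = np.argsort(evals)[::-1][:n_modes]   # largest modes first
    return mean, evecs[:, order], evals[order]

def instance(mean, modes, b):
    """Generate a new shape instance from mode weights b."""
    return (mean + modes @ b).reshape(-1, 3)
```

Restricting `b` to a few standard deviations of each retained mode keeps generated instances within the envelope of allowable shapes.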
In the present application, the training set from which the statistical model 900 is derived preferably comprises a variety of images or volumetric data sets derived from a single organ, patient or other subject, in order to model the possible variation of a given subject as opposed to the variance in the general population of such subjects.
The resulting analysis enables data which is not a function of normal variation in the subject, e.g. growth of a tumour, to be highlighted and indeed quantified.
One method of performing the registration step 600 is illustrated in Figures 3A to 3D. It is assumed that a patient P is scanned optically, e.g. by the cameras C1 to C6 of scanner 10 of Figure 1, on two occasions, e.g. before and after surgery, and that there is relative movement of the patient. The resulting surface representations are shown in Figures 3A and 3B respectively, with the difference in configuration of the patient shown greatly exaggerated for the sake of clarity. The corresponding volumetric representations, e.g. 3D MRI images, are shown in Figures 3D and 3C respectively. In order to determine what the volumetric image (data set) would have been on the second occasion if the patient had not moved, and thereby enable an accurate comparison of the volumetric images obtained on the two occasions, the model 200, 300 or 400 (Figure 2) is utilised to distort the coordinate frame of Figure 3A in such a manner that the modelled patient coordinates in Figure 3B are unchanged relative to the coordinate frame. Thus a given point on the modelled patient, e.g. the nose tip, has the same coordinates in the distorted coordinate frame of Figure 3B as in the undistorted coordinate frame of Figure 3A.
A complementary distortion is then applied to the volumetric image (data set) obtained on the second occasion, as shown in Figure 3C, to transform this data set to the data set (shown in Figure 3D) which would have been obtained on the second occasion if the patient had not moved. The volumetric image of Figure 3D can then be compared with the volumetric data set (not shown) obtained on the first occasion.

In the embodiment of Figure 3E the surface of the patient P is permanently or semi-permanently marked by small locating tattoos (not shown), which are visible to the human eye on close examination. Temporary markers M are applied to these tattoos and are sufficiently large to be tracked easily in real time by a stereoscopic camera arrangement. The markers M can for example be in the form of detachable stickers or can be drawn over the tattoos with a marker pen. It is not essential for the markers M to be precisely located over the tattoos (although this will usually be the most practical option), but the markers should each have a location which is precisely defined by the tattoos; for example the markers could each be equidistant from two, three or more tattoos. On each occasion the patient P is scanned in a volumetric scanner (e.g. scanner 10 of Figure 1), the markers M are tracked optically and the scanner's volumetric coordinate frame is distorted to correspond with that distortion of the scanner's optical coordinate frame which would leave the 3D positions of the markers M unchanged, either during the scan or relative to a previous scan during which the markers were also used. As a result, either blurring of the volumetric image due to patient movement during the scan is avoided, or proper registration of the present volumetric scan with a volumetric scan acquired on a previous occasion is enabled.

The above description assumes that the surface data and volumetric data are initially referenced to a common reference frame. This can be achieved in a variety of ways.
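A minimal sketch of the complementary distortion, assuming for simplicity that the optically tracked movement reduces to a rigid shift expressed in voxels (the general case would use a full displacement field); the names and conventions here are assumptions, not part of the disclosure:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def undistort_volume(volume, displacement):
    """Resample a volumetric data set to compensate a rigid patient shift.

    volume: 3D array of scanner intensities from the second occasion.
    displacement: (dz, dy, dx) patient movement in voxels, as measured
        optically from tracked surface markers (an assumed input).
    Each output voxel is sampled at the position the tissue occupied
    before the shift, i.e. the complementary distortion of the frame.
    """
    idx = np.indices(volume.shape).astype(float)
    for axis, d in enumerate(displacement):
        idx[axis] += d  # sample where the tissue was before moving
    return map_coordinates(volume, idx, order=1, mode='nearest')
```

For non-rigid (affine or spline) models, `idx` would instead be offset by a per-voxel displacement field derived from the fitted model.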
Examples of registration techniques are illustrated in Figures 4, 5 and 6.
Referring to Figure 4, a calibration target T which is visible both to the cameras C1 and C2 and to the scanner 10 is imaged by both the scanner and the cameras, resulting in images I1 in the camera reference frame F1 and I2 in the scanner reference frame F2. A geometrical transformation TR can be found in a known manner which will map I1 onto I2, and the same transformation can then be applied, e.g. in software or firmware, to move, scale and (if necessary) distort the visual image of a patient's body into registration with the scanner reference frame F2.
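Assuming the calibration target yields corresponding 3D points in both reference frames, the rigid part of such a transformation TR can be estimated with the Kabsch/Procrustes method; this sketch and its names are illustrative only:

```python
import numpy as np

def rigid_calibration(pts_cam, pts_scan):
    """Kabsch estimate of the rigid transform from frame F1 to frame F2.

    pts_cam:  (N, 3) calibration-target points in the camera frame F1.
    pts_scan: (N, 3) the same points in the scanner frame F2.
    Returns (R, t) such that pts_scan ~= pts_cam @ R.T + t.
    """
    ca, cs = pts_cam.mean(axis=0), pts_scan.mean(axis=0)
    H = (pts_cam - ca).T @ (pts_scan - cs)     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution.
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T
    t = cs - R @ ca
    return R, t
```

Scaling or distortion components of TR would be fitted separately, e.g. by the affine least-squares approach sketched earlier.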
In some cases, e.g. if the scanner is an MRI or a CT scanner, there will be data common to the visual data acquired by the cameras and the volumetric data, typically the patient's skin surface. In such cases a calibration procedure such as that shown in Figure 5 can be used. A patient P is imaged by both the cameras C1 and C2 and the scanner 10, and the resulting images p1 and p2 of the patient's surface in the respective reference frames F1 and F2 of the camera system and scanner can be registered by a transformation TR which can be found in a known manner by analysis. This transformation TR can be stored and used subsequently to register the digitised surfaces acquired by the cameras to the volumetric data set acquired by the scanner 10.
Figure 6 shows a slightly different embodiment which consists essentially of an X-ray camera Cx rigidly mounted with a digital video camera Cv on a common supporting frame FR. The X-ray image and visual image I1 acquired by the cameras Cx and Cv are processed by a computer PC and displayed on a display D (only I1 is shown). The computer PC is provided with video memory arranged to store a visual image I2 of the same patient previously acquired by the video camera Cv when taking an X-ray image. The cameras Cv and Cx are moved, e.g. under the control of the operator or possibly under control of a suitable image registration program, on their common mounting frame FR to superimpose image I2 on image I1 as shown by arrow a1, and the X-ray image is then captured. Consequently this X-ray image is correctly aligned with the X-ray image captured on the previous occasion. In a variant of this embodiment, the X-ray camera is movable with respect to the video camera and the movement of the X-ray camera required to register the new X-ray image with the previous X-ray image is derived from the movement needed to register the surface images. Rather than utilising complete visual images I1 and I2, the arrangement can instead image an array of markers M located by tattoos on the patient in a manner similar to that of Figure 3E.
In either case, this results in a number of advantages:
a) mistakes in positioning of the apparatus (which might result in a further visit for an X-ray and hence a greater X-ray dose than is necessary, as well as inconvenience for the patient and clinical staff) are avoided, and
b) X-ray images are standardised, which aids comparison and also facilitates further analysis such as that illustrated by blocks 700, 800 and 900 of Figure 2. Other embodiments could utilise a standard type of volumetric scanner e.g. a CT scanner or an MRI scanner rather than an X-ray camera Cx, with similar advantages. In particular, MRI scanning can be enhanced by scanning only the relevant volume known from previously acquired surface data to contain the region of interest. Furthermore, stereoscopic video camera arrangements rather than a single video camera Cv could be employed and more sophisticated registration techniques analogous to those of blocks 200 to 600 of Figure 2 could be employed. Additionally, stereoscopic viewing arrangements rather than a screen could be used for the registration.
Figure 7 illustrates a further aspect of the invention involving using e.g. an MRI scanner arrangement of Figure 1 to capture a surface representation S1 of the patient's body and a volumetric data set I1 including the surface of the patient's body, and using a further scanner of different modality, e.g. a CT scanner, to capture a different volumetric data set I2 in association with a surface representation S2. The volumetric data sets I1 and I2 are registered with their respective associated surface representations S1 and S2 by appropriate transformations r1 and r2 as shown (preferably utilising the results of an earlier calibration procedure as described above), and the surface representations are then registered with each other by a transformation R1. Since the volumetric data sets I1 and I2 are each referenced to the resulting common surface, they can be registered with each other by a transformation R2 which can be simply derived from R1, r1 and r2.
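Assuming each of r1, r2 and R1 is expressed as a 4x4 homogeneous transform, with the direction conventions stated in the comments (which are assumptions made for this sketch), R2 follows by composition:

```python
import numpy as np

def compose_registration(r1, R1, r2):
    """Derive the volumetric registration R2 from R1, r1 and r2.

    All arguments are 4x4 homogeneous transforms with these assumed
    directions:
      r1: volumetric set I1 -> its associated surface S1
      r2: volumetric set I2 -> its associated surface S2
      R1: surface S1 -> surface S2
    Mapping I1 into I2 then chains I1 -> S1 -> S2 -> I2, i.e.
      R2 = inv(r2) @ R1 @ r1
    """
    return np.linalg.inv(r2) @ R1 @ r1
```

With a different convention for r1 and r2 (surface-to-volume rather than volume-to-surface) the inverses simply swap sides.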
Assuming that the volumetric data sets I1 and I2 show different features of the patient as a result of their differing modalities, the resulting combined volumetric data set conveys more information than either individually. Such a combination of different features is shown in Figure 8A, wherein surface representations S registered with respective volumetric data sets Va and Vb of different modality are combined to generate a new volumetric data set Vc registered with a surface representation S' which is a composite of the surface representations S. Such a technique can be used to combine not only volumetric data sets of different modality but also volumetric data sets of different resolution or accuracy or different size or shape. In a variant of this embodiment, the volumetric data sets acquired by different modalities have no overlap, i.e. no information in common, but are incorporated into a common reference frame by mutually registering surface representations which are acquired optically, e.g. by a camera arrangement similar to that of Figure 1, simultaneously with the respective volumetric data sets.
Figure 8B shows two composite images of an organ having cross-sections X1, X3 and X5, and X2 and X4, respectively, wherein the cross-sections are acquired by MRI and the surfaces S are acquired optically. By registering surfaces S to a composite surface S', the MRI cross-sections are mutually aligned, as shown.
In a variant of the embodiments of Figures 7, 8A and 8B, more sophisticated registration techniques analogous to those of blocks 200 to 600 of Figure 2 can be used. The resulting incompletely registered data sets can then be processed using the techniques of Figure 2 blocks 700 to 900.
A particularly useful application of such a variant lies in the correction of data sets, particularly volumetric data sets, which are acquired whilst the patient is moving. For example an MRI scan can take 40 minutes, and if the patient moves during this period the resulting scan is degraded by blurring. The blurring could be alleviated by utilising the optically acquired surface data of the moving patient to distort the reference frame of the volumetric data set and reconstructing the volumetric data set with reference to the distorted reference frame. This de-blurring technique is somewhat similar to the registration technique described above with reference to Figures 3A to 3D but, unlike that technique, can involve a progressive distortion of the reference frame to follow movement of the patient.
In another embodiment the blurring can be alleviated by utilising a statistical or physical model of the patient to define a range of possible configurations of the patient in a mathematical space, generating volumetric data sets (in this case, artificial MRI scans) corresponding to the respective configurations allowed by the model, finding the generated volumetric data sets, corresponding to a path in the above mathematical space, whose appropriately weighted mean best matches (registers with) the actual blurred volumetric data set acquired by the scanner, and then processing the model and its associated volumetric data sets to find the volumetric data set which would have been obtained by the scanner if the patient had maintained a given configuration.
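The search for the weighted mean that best matches the blurred scan can be sketched, in its simplest least-squares form, as follows; this is illustrative only, and a practical implementation would additionally constrain the weights to lie along a path in the configuration space:

```python
import numpy as np

def fit_blur_weights(candidates, blurred):
    """Weights whose mixture of candidate volumes best matches a
    motion-blurred scan (unconstrained least-squares sketch).

    candidates: (K, ...) stack of volumes generated for K patient
        configurations allowed by the model.
    blurred: observed volume, same shape as each candidate.
    Returns non-negative weights normalised to sum to one, i.e. the
    estimated fraction of scan time spent in each configuration.
    """
    A = candidates.reshape(len(candidates), -1).T   # voxels x K
    b = blurred.ravel()
    w, *_ = np.linalg.lstsq(A, b, rcond=None)
    w = np.clip(w, 0.0, None)                       # dwell times >= 0
    s = w.sum()
    return w / s if s > 0 else w
```

Given the fitted weights, the de-blurred result is simply the candidate volume for the configuration of interest.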
More generally, surface data acquired by one or more cameras can be used directly to aid the processing of volumetric scanning data to generate an accurate volumetric image of the interior of the patient. One example is shown in Figure 9, wherein a PET scanner 10', having an array of photon detectors D around its periphery and a centrally located positron source, is provided with at least one stereoscopic arrangement of digital cameras C which capture the surface S of the subject. Although only two cameras are shown for the sake of simplicity, in practice the camera array would be arranged to capture the entire 360 degree periphery of the subject and to this end could be arranged to rotate around the longitudinal axis of the scanner, for example.
True photon paths are shown by the full arrowed lines and include a path PT resulting from scattering at PS on the object surface. In the absence of any information about the surface S, the outputs of the relevant photon detectors would be interpreted to infer a photon path Pr, which is physically impossible because it does not pass through surface S. Accordingly such an erroneous interpretation can be avoided with the aid of the surface data acquired by the cameras C.
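A minimal sketch of rejecting such physically impossible lines of response, approximating the optically captured surface by the convex hull of its points (an assumption made for simplicity; the names below are also assumptions):

```python
import numpy as np
from scipy.spatial import Delaunay

def path_is_plausible(surface_pts, det_a, det_b, n_samples=50):
    """Check that the line between two firing detectors passes through
    the body surface captured by the cameras.

    surface_pts: (N, 3) optically acquired surface points; the body is
        approximated here by their convex hull.
    det_a, det_b: 3D positions of the two photon detectors.
    A line of response that never enters the hull (such as the inferred
    path Pr in Figure 9) is physically impossible and can be rejected.
    """
    hull = Delaunay(surface_pts)
    ts = np.linspace(0.0, 1.0, n_samples)
    pts = np.outer(1.0 - ts, det_a) + np.outer(ts, det_b)
    # find_simplex returns -1 for points outside the hull.
    return bool(np.any(hull.find_simplex(pts) >= 0))
```

A full implementation would intersect the line with the actual surface mesh rather than a convex hull, which also yields the entry and exit points needed for the attenuation estimate discussed next.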
Additionally, the surface data acquired by the cameras C can be used to derive volumetric information which can facilitate the processing of the output signals of the detectors D; in particular, absorption and scattering can be estimated from the amount of tissue, as determined from the surface acquired by the cameras C.
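As a rough illustration of how tissue thickness along a line of response translates into an absorption estimate, the Beer-Lambert law can be applied to the path length derived from the surface. The coefficient below is an assumed soft-tissue value at 511 keV, used for illustration only.

```python
import math

def attenuation_factor(path_length_cm, mu_per_cm=0.096):
    """Estimated fraction of photons surviving a tissue path of the given
    length (Beer-Lambert law). mu_per_cm is an assumed linear attenuation
    coefficient for soft tissue at PET energies; illustrative only."""
    return math.exp(-mu_per_cm * path_length_cm)

# Longer paths through tissue attenuate more strongly:
shallow = attenuation_factor(5.0)
deep = attenuation_factor(15.0)
```

In this way the camera-derived surface substitutes for (or supplements) a transmission scan when correcting the detector outputs for attenuation.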
Other applications utilising the surface information include X-ray imaging, in which knowledge of the patient's surface can be used, e.g., to locate soft tissue.
Furthermore, a statistical model of the X-ray image or volumetric data set, registered with the aid of the surface representation acquired by a stereoscopic camera arrangement, could be used to aid the derivation of an actual X-ray image or volumetric data set from a patient. This would result in higher accuracy and/or a lower required dosage. One applicable reconstruction method is based on level sets as described by J. A. Sethian, "Level Set Methods and Fast Marching Methods", Cambridge University Press, 1999, which is hereby incorporated by reference. This method would involve computation of successive layers from the surface acquired by the cameras toward the centre of the subject.
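The idea of marching inward layer by layer from the camera-acquired surface can be illustrated with a simple breadth-first labelling on a voxel grid. This is a crude discrete stand-in for the level-set / fast-marching computation cited above, not Sethian's algorithm itself, and all names are illustrative.

```python
import numpy as np
from collections import deque

def _neighbours(idx, shape):
    # 4-connected neighbours in 2D (the 3D analogue is 6-connected).
    for axis in range(len(shape)):
        for step in (-1, 1):
            n = list(idx)
            n[axis] += step
            if 0 <= n[axis] < shape[axis]:
                yield tuple(n)

def layers_from_surface(inside):
    """Number each voxel inside the camera-derived surface by its layer,
    counted inward from the boundary (layer 1 touches the exterior)."""
    layers = np.zeros(inside.shape, dtype=int)
    queue = deque()
    # Seed: interior voxels with at least one exterior neighbour.
    for idx in np.ndindex(inside.shape):
        if inside[idx] and any(not inside[n] for n in _neighbours(idx, inside.shape)):
            layers[idx] = 1
            queue.append(idx)
    # March inward one layer at a time.
    while queue:
        idx = queue.popleft()
        for n in _neighbours(idx, inside.shape):
            if inside[n] and layers[n] == 0:
                layers[n] = layers[idx] + 1
                queue.append(n)
    return layers

# 3x3 interior region centred in a 5x5 grid:
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
L = layers_from_surface(mask)
```

Reconstruction would then proceed per layer, conditioning each successive layer on those already computed, which is where the statistical model and reduced dosage come in.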
A further application of the registration of surface and volumetric data in accordance with the present invention lies in the construction of a frame to fit the surface of a scanned body part, the frame carrying guide means for locating the site of a medical procedure at a defined position or orientation at or beneath the surface of the body part. The position or orientation can be set by utilising the volumetric data which has been registered with the surface data and by implication with the frame and its guide means.
For example, the frame could be a mask that fits over the patient's face or head, or it could be arranged to be fitted to a rigid part of the leg or abdomen. Such a mask is shown in Figure 10. Mask M has an interior surface IS which matches a surface of the patient's head previously acquired by a stereoscopic camera arrangement, and is provided with a guide canal G for guiding a biopsy needle N to a defined position p in the patient's brain. To this end, the orientation of the guide canal is predetermined with the aid of volumetric data, registered with the surface data and acquired by a scanner in accordance with the invention (e.g. the scanner of Figure 1), and the guide canal is provided with a scale SC to enable the needle N to be advanced to a predetermined extent, until a reference mark on the needle is aligned with a predetermined graduation of the scale (also chosen on the basis of the volumetric data). In a variant of this embodiment, a stop could be used instead of scale SC.
Although the described embodiments relate to the enhancement of volumetric data with the aid of surface data, in principle other medical data could be enhanced with the aid of such surface data. For example, measurements of breathing could be combined with a moving 3D surface representation of the patient while such measurements are being made, and the resulting measurements could be registered with previous measurements by a method of the type illustrated in Figure 2.

The invention may be used to advantage in order to correlate and register interior configuration image data for a patient obtained from a CT scanner and a PET scanner. CT and PET scanners are large devices usually fixedly mounted in a dedicated room or area. Referring to Figure 11, in accordance with the invention, stereoscopic camera arrangements such as C1 and C3 for a PET scanner can be mounted on the wall, ceiling or an overhead gantry, separate from the scanner 10 itself. The cameras are shown to be ceiling mounted in Figure 11. Similar camera mounting arrangements are provided for the CT scanner (not shown). Thus, there is no need to carry out modifications to the scanner itself in order to install cameras at each of the CT and PET scanner locations. In use, a CT scan of the patient is taken when in a first disposition lying on the scanner table of the CT scanner, and a 3D surface image of the patient is captured using the cameras C1, C3, i.e. in the first disposition. Then, the patient is moved to the PET scanner and the process is repeated. The image data from the PET and CT scans are then processed as previously described to bring the data into registry so that the data can be merged and used to analyse the patient's condition. To this end, the 3D image data captured by the overhead cameras for each of the CT and PET scanners is used as previously described to bring the scanner data into registry.
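Bringing the two surface captures into registry amounts to estimating a rigid transform between the two surface point clouds; the same transform then registers the CT and PET volumes. Below is a minimal Kabsch/Procrustes sketch under the simplifying assumption of known point correspondences (a real pipeline would establish correspondences first, e.g. by ICP; the function name is illustrative).

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rotation R and translation t such that
    dst ~= (R @ src.T).T + t, for corresponding 3D surface points
    captured at the two scanner stations."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)          # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))       # guard against reflection
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = dst_c - R @ src_c
    return R, t

# Recover a known 30-degree rotation about z plus a shift:
theta = np.deg2rad(30)
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([5.0, -2.0, 1.0])
src = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1.0]])
dst = src @ R_true.T + t_true
R_est, t_est = rigid_transform(src, dst)
```

Applying the recovered (R, t) to the PET volume's coordinate frame brings it into registry with the CT volume without any scanner-mounted hardware.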
Volumetric data captured by a scanner can be gated on the basis of surface data acquired by a camera arrangement, e.g. to ensure that volumetric data is captured only when the patient is in a defined position or configuration, or at a particular time in the patient's breathing cycle.
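A minimal sketch of such gating follows. The function name, the RMS criterion and the threshold are illustrative assumptions; the patent leaves the gating criterion open.

```python
import numpy as np

def gate_frames(volumes, surfaces, reference_surface, tolerance):
    """Keep only the volumetric frames whose simultaneously captured
    surface lies within `tolerance` (RMS point distance) of a reference
    surface, e.g. frames taken at a chosen point in the breathing cycle."""
    kept = []
    for volume, surface in zip(volumes, surfaces):
        rms = np.sqrt(np.mean(np.sum((surface - reference_surface) ** 2, axis=1)))
        if rms <= tolerance:
            kept.append(volume)
    return kept

reference = np.zeros((10, 3))
still = reference + 0.01       # patient close to the reference pose
displaced = reference + 2.0    # chest risen during inhalation
kept = gate_frames(["frame_a", "frame_b"], [still, displaced], reference, 0.5)
```

The same comparison could equally run in real time, triggering acquisition rather than filtering frames after the fact.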
In the present specification the term 'optical' is to be construed to cover infra-red as well as visible wavelengths.

Claims
1. A method of processing image data relating to a three dimensional object of a variable positional disposition, the object having an outer surface and an interior configuration, the method comprising: acquiring for a first positional disposition of the object, first interior configuration image data concerning the interior configuration of the object, and also first three dimensional outer surface image data concerning the outer surface of the object, acquiring for a second positional disposition of the object, second interior configuration image data concerning the interior configuration of the object, and also second three dimensional outer surface image data concerning the outer surface of the object, and registering the first and second interior configuration image data as a function of the relationship between the first and second three dimensional outer surface image data.
2. A method according to claim 1 wherein the first and second interior configuration image data comprises data of the same modality derived by a scanner scanning the object on different occasions.
3. A method according to claim 1 wherein the first and second interior configuration image data comprises data of respective different modalities.
4. A method according to claim 3 including acquiring the first interior configuration image data utilising a first scanner operable according to a first modality, and acquiring the second interior configuration image data utilising a second scanner operable according to a second modality.
5. A method according to claim 4 including acquiring the first interior configuration image data with a CT scanner, and acquiring the second interior configuration image data with a PET scanner.
6. A method according to any preceding claim, including providing a deformation constraint model (200, 300, 400) of the object's interior configuration as a function of said variable positional disposition, said registering of the first and second interior configuration image data being carried out as a function of the relationship between the first and second outer surface image data and in dependence upon the model.
7. A method according to claim 6 wherein the deformation constraint model (200, 300, 400) defines a deformable frame, the method including deforming the frame as a function of the relationship between the first and second outer surface image data, wherein the registering of the first and second interior configuration image data is performed with reference to the deformed frame.
8. A method according to claim 7 comprising utilising a model of the object to define a range of possible configurations of the object in a mathematical space, generating volumetric data sets corresponding to the respective configurations allowed by the model, finding the generated volumetric data sets, corresponding to a path in said mathematical space, whose appropriately weighted mean best matches an actual volumetric data set acquired from the object corresponding to blurred image data for the interior configuration of the object, and then processing the model and its associated volumetric data sets to find the volumetric data set which would have been obtained if the subject had maintained a given configuration, whereby to compensate for movement of the object.
9. A method according to any preceding claim including providing a model (700, 800) of the object, and referencing the registered data to the model to provide an output.
10. A method according to any preceding claim including obtaining the first and second three dimensional outer surface image data using at least one pair of digital cameras.
11. A method according to any preceding claim including gating the data such that the acquired data corresponds to the object being in a particular positional configuration.
12. A method according to any preceding claim including, for an individual positional disposition of the object, referencing the interior configuration data and the outer surface image data to a common reference frame.
13. A method according to any preceding claim, wherein the object comprises a living object.
14. A method according to any preceding claim including placing markers on the object such that the image data for said outer surface contains data concerning the location of the markers.
15. A computer program to be run on data processing apparatus to perform a method as claimed in any preceding claim.
16. Apparatus configured to perform a method as claimed in any preceding claim.
17. Image processing apparatus for processing image data relating to a three dimensional object of a variable positional disposition, the object having an outer surface and an interior configuration, comprising: a data acquisition arrangement for acquiring, for a first positional disposition of the object, first interior configuration image data concerning the interior configuration of the object, and also first three dimensional outer surface image data concerning the outer surface of the object, and for acquiring, for a second positional disposition of the object, a second volumetric data set corresponding to interior configuration image data concerning the interior configuration of the object, and also second three dimensional outer surface image data concerning the outer surface of the object, and a processor configured to register the first and second interior configuration image data as a function of the relationship between the first and second three dimensional outer surface image data.
18. Apparatus according to claim 17 wherein the data acquisition arrangement includes a scanner to acquire the interior configuration image data and a stereoscopic camera arrangement to acquire the three dimensional outer surface image data.
19. Apparatus according to claim 17 wherein the data acquisition arrangement includes a first scanner to acquire the first interior configuration image data and a second scanner to acquire the second interior configuration image data.
20. Apparatus according to claim 19 wherein the first scanner comprises a CT scanner and the second scanner comprises a PET scanner.
21. Apparatus according to claim 18 wherein the scanner includes a table to receive a patient, and the stereoscopic camera arrangement is mounted separate from the scanner above the table.
22. Apparatus according to claim 21 wherein the stereoscopic camera arrangement is ceiling mounted or wall mounted.
23. A method of processing image data relating to a three dimensional object of a variable positional disposition, the object having an outer surface and an interior configuration, the method comprising: acquiring with a scanner for an individual disposition of the object, interior configuration image data concerning the interior configuration of the object, acquiring with a stereoscopic camera arrangement first three dimensional outer surface image data concerning the outer surface of the object in said individual disposition, and registering the interior configuration image data with the three dimensional outer surface image data.
PCT/GB2001/000389 2000-01-31 2001-01-31 Image data processing method and apparatus WO2001057805A2 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
AU2001230372A AU2001230372A1 (en) 2000-01-31 2001-01-31 Image data processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
GB0002181.6 2000-01-31
GB0002181A GB2358752A (en) 2000-01-31 2000-01-31 Surface or volumetric data processing method and apparatus

Publications (2)

Publication Number Publication Date
WO2001057805A2 true WO2001057805A2 (en) 2001-08-09
WO2001057805A3 WO2001057805A3 (en) 2002-03-21

Family

ID=9884669

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/GB2001/000389 WO2001057805A2 (en) 2000-01-31 2001-01-31 Image data processing method and apparatus

Country Status (3)

Country Link
AU (1) AU2001230372A1 (en)
GB (1) GB2358752A (en)
WO (1) WO2001057805A2 (en)


Families Citing this family (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7359748B1 (en) 2000-07-26 2008-04-15 Rhett Drugge Apparatus for total immersion photography
US7554007B2 (en) 2003-05-22 2009-06-30 Evogene Ltd. Methods of increasing abiotic stress tolerance and/or biomass in plants
AU2005234725B2 (en) 2003-05-22 2012-02-23 Evogene Ltd. Methods of Increasing Abiotic Stress Tolerance and/or Biomass in Plants and Plants Generated Thereby
EP1766058A4 (en) 2004-06-14 2008-05-21 Evogene Ltd Polynucleotides and polypeptides involved in plant fiber development and methods of using same
EP2716654A1 (en) 2005-10-24 2014-04-09 Evogene Ltd. Isolated polypeptides, polynucleotides encoding same, transgenic plants expressing same and methods of using same
GB2455926B (en) * 2006-01-30 2010-09-01 Axellis Ltd Method of preparing a medical restraint
WO2008122980A2 (en) 2007-04-09 2008-10-16 Evogene Ltd. Polynucleotides, polypeptides and methods for increasing oil content, growth rate and biomass of plants
EP2910638B1 (en) 2007-07-24 2018-05-30 Evogene Ltd. Polynucleotides, polypeptides encoded thereby, and methods of using same for increasing abiotic stress tolerance and/or biomass and/or yield in plants expressing same
CA2710941C (en) 2007-12-31 2017-01-03 Real Imaging Ltd. Method apparatus and system for analyzing images
WO2009083973A1 (en) * 2007-12-31 2009-07-09 Real Imaging Ltd. System and method for registration of imaging data
EP2265163B1 (en) 2008-03-28 2014-06-04 Real Imaging Ltd. Method apparatus and system for analyzing images
MX367882B (en) 2008-05-22 2019-09-10 Evogene Ltd Isolated polynucleotides and polypeptides and methods of using same for increasing plant utility.
BR122021014165B1 (en) 2008-08-18 2022-08-16 Evogene Ltd. METHOD TO INCREASE NITROGEN USE EFFICIENCY, FERTILIZER USE EFFICIENCY, PRODUCTION, BIOMASS AND/OR NITROGEN DEFICIENCY AND DROUGHT STRESS TOLERANCE OF A PLANT, AND CONSTRUCTION OF ISOLATED NUCLEIC ACID
SI3460062T1 (en) 2009-03-02 2021-09-30 Evogene Ltd. Isolated polynucleotides and polypeptides, and methods of using same for increasing plant yield and/or agricultural characteristics

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5902239A (en) * 1996-10-30 1999-05-11 U.S. Philips Corporation Image guided surgery system including a unit for transforming patient positions to image positions
US5999840A (en) * 1994-09-01 1999-12-07 Massachusetts Institute Of Technology System and method of registration of three-dimensional data sets

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
FR2733338B1 (en) * 1995-04-18 1997-06-06 Fouilloux Jean Pierre PROCESS FOR OBTAINING AN INDEFORMABLE SOLID TYPE MOVEMENT OF A SET OF MARKERS ARRANGED ON ANATOMICAL ELEMENTS, ESPECIALLY OF THE HUMAN BODY
GB2330913B (en) * 1996-07-09 2001-06-06 Secr Defence Method and apparatus for imaging artefact reduction
JPH1119080A (en) * 1997-07-08 1999-01-26 Shimadzu Corp X-ray ct device


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006095324A1 (en) 2005-03-10 2006-09-14 Koninklijke Philips Electronics N.V. Image processing system and method for registration of two-dimensional with three-dimensional volume data during interventional procedures
US7912262B2 (en) 2005-03-10 2011-03-22 Koninklijke Philips Electronics N.V. Image processing system and method for registration of two-dimensional with three-dimensional volume data during interventional procedures
CN111723837A (en) * 2019-03-20 2020-09-29 斯瑞克欧洲控股I公司 Techniques for processing patient-specific image data for computer-assisted surgical navigation
CN111723837B (en) * 2019-03-20 2024-03-12 斯瑞克欧洲控股I公司 Techniques for processing image data of a particular patient for computer-assisted surgical navigation

Also Published As

Publication number Publication date
GB2358752A (en) 2001-08-01
WO2001057805A3 (en) 2002-03-21
GB0002181D0 (en) 2000-03-22
AU2001230372A1 (en) 2001-08-14


Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY CA CH CN CR CU CZ DE DK DM DZ EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT TZ UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE TR BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP