US20080146932A1 - 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume - Google Patents

Info

Publication number
US20080146932A1
US20080146932A1 (application US11/925,887)
Authority
US
United States
Prior art keywords
image
amniotic fluid
ultrasound
scanplanes
transceiver
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/925,887
Inventor
Vikram Chalana
Yanwei Wang
Fuxing Yang
Susannah Helen Bloch
Stephen Dudycha
Gerald McMorrow
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Verathon Inc
Original Assignee
Vikram Chalana
Yanwei Wang
Fuxing Yang
Susannah Helen Bloch
Stephen Dudycha
Mcmorrow Gerald
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US10/165,556 (US6676605B2)
Priority claimed from US10/443,126 (US7041059B2)
Priority claimed from US10/633,186 (US7004904B2)
Priority claimed from PCT/US2005/030799 (WO2006026605A2)
Priority claimed from PCT/US2005/031755 (WO2006031526A2)
Priority claimed from PCT/US2005/043836 (WO2006062867A2)
Priority claimed from US11/295,043 (US7727150B2)
Priority claimed from US11/362,368 (US7744534B2)
Priority to US11/925,887
Application filed by Vikram Chalana, Yanwei Wang, Fuxing Yang, Susannah Helen Bloch, Stephen Dudycha, and Gerald McMorrow
Priority to US12/121,721 (US8167803B2)
Priority to US12/121,726 (US20090105585A1)
Priority to PCT/US2008/063987 (WO2008144570A1)
Publication of US20080146932A1
Priority to US12/537,985 (US8133181B2)
Assigned to VERATHON INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: DUDYCHA, STEPHEN; MCMORROW, GERALD J; CHALANA, VIKRAM
Legal status: Abandoned

Classifications

    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0858 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving measuring tissue layers, e.g. skin, interfaces
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/20 Measuring for diagnostic purposes; Identification of persons for measuring urological functions restricted to the evaluation of the urinary system
    • A61B 5/202 Assessing bladder functions, e.g. incontinence assessment
    • A61B 5/204 Determining bladder volume
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 5/00 Measuring for diagnostic purposes; Identification of persons
    • A61B 5/43 Detecting, measuring or recording for evaluating the reproductive systems
    • A61B 5/4306 Detecting, measuring or recording for evaluating the reproductive systems for evaluating the female reproductive systems, e.g. gynaecological evaluations
    • A61B 5/4343 Pregnancy and labour monitoring, e.g. for labour onset detection
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/08 Detecting organic movements or changes, e.g. tumours, cysts, swellings
    • A61B 8/0866 Detecting organic movements or changes, e.g. tumours, cysts, swellings involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/48 Diagnostic techniques
    • A61B 8/483 Diagnostic techniques involving the acquisition of a 3D volume of data
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/56 Details of data transmission or power supply
    • A61B 8/565 Details of data transmission or power supply involving data transmission via a network
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 15/00 Systems using the reflection or reradiation of acoustic waves, e.g. sonar systems
    • G01S 15/88 Sonar systems specially adapted for specific applications
    • G01S 15/89 Sonar systems specially adapted for specific applications for mapping or imaging
    • G01S 15/8906 Short-range imaging systems; Acoustic microscope systems using pulse-echo techniques
    • G01S 15/8993 Three dimensional imaging systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S 7/00 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00
    • G01S 7/52 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00
    • G01S 7/52017 Details of systems according to groups G01S13/00, G01S15/00, G01S17/00 of systems according to group G01S15/00 particularly adapted to short-range imaging
    • G01S 7/52053 Display arrangements
    • G01S 7/52057 Cathode ray tube displays
    • G01S 7/5206 Two-dimensional coordinated display of distance and direction; B-scan display
    • G01S 7/52065 Compound scan display, e.g. panoramic imaging
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/0002 Inspection of images, e.g. flaw detection
    • G06T 7/0012 Biomedical image inspection
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/143 Segmentation; Edge detection involving probabilistic approaches, e.g. Markov random field [MRF] modelling
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/155 Segmentation; Edge detection involving morphological operators
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/60 Analysis of geometric attributes
    • G06T 7/62 Analysis of geometric attributes of area, perimeter, diameter or volume
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/24 Aligning, centring, orientation detection or correction of the image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/20 Image preprocessing
    • G06V 10/26 Segmentation of patterns in the image field; Cutting or merging of image elements to establish the pattern region, e.g. clustering-based techniques; Detection of occlusion
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/13 Tomography
    • A61B 8/14 Echo-tomography
    • A HUMAN NECESSITIES
    • A61 MEDICAL OR VETERINARY SCIENCE; HYGIENE
    • A61B DIAGNOSIS; SURGERY; IDENTIFICATION
    • A61B 8/00 Diagnosis using ultrasonic, sonic or infrasonic waves
    • A61B 8/44 Constructional features of the ultrasonic, sonic or infrasonic diagnostic device
    • A61B 8/4444 Constructional features of the ultrasonic, sonic or infrasonic diagnostic device related to the probe
    • A61B 8/4472 Wireless probes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10132 Ultrasound image
    • G06T 2207/10136 3D ultrasound image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20048 Transform domain processing
    • G06T 2207/20061 Hough transform
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/30 Subject of image; Context of image processing
    • G06T 2207/30004 Biomedical image processing
    • G06T 2207/30044 Fetus; Embryo
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V 2201/03 Recognition of patterns in medical or anatomical images

Definitions

  • This invention pertains to the field of obstetrics, particularly to ultrasound-based non-invasive obstetric measurements.
  • Amniotic fluid volume is critical for assessing the kidney and lung function of a fetus and also for assessing the placental function of the mother.
  • Amniotic fluid volume is also a key measure to diagnose conditions such as polyhydramnios (too much AF) and oligohydramnios (too little AF). Polyhydramnios and oligohydramnios are diagnosed in about 7-8% of all pregnancies and these conditions are of concern because they may lead to birth defects or to delivery complications.
  • the amniotic fluid volume is also one of the important components of the fetal biophysical profile, a major indicator of fetal well-being.
  • AFI (amniotic fluid index): the ultrasound-based measure commonly used in clinical practice to estimate AF volume.
  • Some of the other methods that have been used to estimate AF volume include:
  • Dye dilution technique: an invasive method in which a dye is injected into the AF during amniocentesis and the final concentration of dye is measured in a sample of AF withdrawn after several minutes. This technique is the accepted gold standard for AF volume measurement; however, it is invasive and cumbersome and is not routinely used.
  • Pertaining to ultrasound-based determination of amniotic fluid volumes, Segiv et al. (Segiv C, Akselrod S, Tepper R, "Application of a semiautomatic boundary detection algorithm for the assessment of amniotic fluid quantity from ultrasound images," Ultrasound Med Biol 25(4):515-26, May 1999) describe a method for amniotic fluid segmentation from 2D images. However, the Segiv et al. method is interactive in nature, and the identification of amniotic fluid volume is very observer-dependent. Moreover, the system described is not a dedicated device for amniotic fluid volume assessment.
  • the clarity of ultrasound-acquired images is affected by motions of the examined subject, motions of organs and fluids within the examined subject, motion of the probing ultrasound transceiver, the coupling medium used between the transceiver and the examined subject, and the algorithms used for image processing.
  • Frequency-domain image-processing approaches have been utilized in the literature, including Wiener filters, which are implemented in the frequency domain and assume that the point spread function (PSF) is fixed and known. This assumption conflicts with the observation that received ultrasound signals are usually non-stationary and depth-dependent. Because the algorithm is implemented in the frequency domain, any error introduced in the PSF leaks across the spatial domain. As a result, the performance of Wiener filtering is not ideal.
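To illustrate the Wiener-filter limitation described above, the following Python sketch is an assumption for illustration only (the Gaussian PSF and the noise-to-signal constant are placeholders, not values from this disclosure); it shows a fixed-PSF, frequency-domain Wiener deconvolution in which a single noise-to-signal ratio is applied to the whole image:

```python
# Sketch of frequency-domain Wiener deconvolution with a fixed, known PSF.
# Assumed example only: the Gaussian PSF and noise-to-signal ratio are illustrative.
import numpy as np

def wiener_deconvolve(image, psf, nsr=0.01):
    """Deconvolve `image` by `psf` in the frequency domain.

    nsr: assumed noise-to-signal power ratio (a single fixed constant, which is
    exactly the limitation noted above for non-stationary, depth-dependent signals).
    """
    H = np.fft.fft2(psf, s=image.shape)           # PSF spectrum (zero-padded)
    G = np.fft.fft2(image)                        # degraded image spectrum
    wiener = np.conj(H) / (np.abs(H) ** 2 + nsr)  # Wiener filter
    return np.real(np.fft.ifft2(wiener * G))

# Illustrative 2D Gaussian PSF
x = np.arange(-3, 4)
xx, yy = np.meshgrid(x, x)
psf = np.exp(-(xx ** 2 + yy ** 2) / 2.0)
psf /= psf.sum()

blurred = np.random.rand(64, 64)                  # stand-in for an ultrasound frame
restored = wiener_deconvolve(blurred, psf)
```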
  • the most common container for dispensing ultrasound coupling gel is an 8 oz. plastic squeeze bottle with an open, tapered tip.
  • the tapered-tip bottle is inexpensive, is easy to refill from a larger reservoir in the form of a bag or pump-type container, and dispenses gel in a controlled manner.
  • Other embodiments include the Sontac® ultrasound gel pad, available from Verathon™ Medical, Bothell, Wash., USA, which is a pre-packaged, circular pad of moist, flexible coupling gel, 2.5 inches in diameter and 0.06 inches thick, and is advantageously used with the BladderScan devices.
  • the Sontac pad is simple to apply and to remove, and provides adequate coupling for a one-position ultrasound scan in most cases.
  • Yet others include the Aquaflex® gel pads, which perform in a similar manner to Sontac pads but are larger and thicker (2 cm thick × 9 cm diameter) and are traditionally used for therapeutic ultrasound or where some distance between the probe and the skin surface ("stand-off") must be maintained.
  • The main purpose of an ultrasonic coupling medium is to provide an air-free interface between an ultrasound transducer and the body surface.
  • Gels are used as coupling media since they are moist and deformable, but not runny: they wet both the transducer and the body surface, but stay where they are applied.
  • the plastic squeeze bottle, the most common delivery method for ultrasonic coupling gel, has several disadvantages. First, if the bottle has been stored upright, the gel falls to the bottom of the bottle, and vigorous shaking is required to get the gel back to the bottle tip, especially if the gel is cold. This motion can be particularly irritating to sonographers, who routinely suffer from wrist and arm pain from ultrasound scanning.
  • Second, the bottle tip is a two-way valve: squeezing the bottle releases gel at the tip, but releasing the bottle sucks air back into the bottle and into the gel.
  • the presence of air bubbles in the gel may detract from its performance as a coupling medium.
  • Third, while refilling the bottle from a central source is not a particularly difficult task, it is non-sterile and potentially messy.
  • Sontac pads and other solid gel coupling pads are simpler to use than gel: the user does not have to guess at an appropriate application amount, the pad is sterile, and it can be simply lifted off the patient and disposed of after use.
  • pads do not mold to the skin or transducer surface as well as the more liquefied coupling gels and therefore may not provide ideal coupling when used alone, especially on dry, hairy, curved, or wrinkled surfaces.
  • Sontac pads suffer from the additional disadvantage that they are thin and easily damaged by moderate pressure from the ultrasound transducer. (See Bishop S, Draper D O, Knight K L, Feland J B, Eggett D. “Human tissue-temperature rise during ultrasound treatments with the Aquaflex gel pad.” Journal of Athletic Training 39(2):126-131, 2004).
  • the preferred form of the invention is a three dimensional (3D) ultrasound-based system and method using a hand-held 3D ultrasound device to acquire at least one 3D data set of a uterus and having a plurality of automated processes optimized to robustly locate and measure the volume of amniotic fluid in the uterus without resorting to pre-conceived models of the shapes of amniotic fluid pockets in ultrasound images.
  • the automated process uses a plurality of algorithms in a sequence that includes steps for image enhancement, segmentation, and polishing.
  • a hand-held 3D ultrasound device is used to image the uterus trans-abdominally.
  • the user moves the device around on the maternal abdomen; using 2D image processing to locate the amniotic fluid areas, the device gives the user feedback about where to acquire the 3D image data sets.
  • the user acquires one or more 3D image data sets covering all of the amniotic fluid in the uterus and the data sets are then stored in the device or transferred to a host computer.
  • the 3D ultrasound device is configured to acquire the 3D image data sets in two formats.
  • the first format is a collection of two-dimensional scanplanes, each scanplane being separated from the other and representing a portion of the uterus being scanned.
  • Each scanplane is formed from one-dimensional ultrasound A-lines confined within the limits of the 2D scanplane.
  • the 3D data set is then represented as a 3D array of 2D scanplanes.
  • the 3D array of 2D scanplanes is an assembly of scanplanes, and may be assembled into a translational array, a wedge array, or a rotational array.
  • the 3D ultrasound device is configured to acquire the 3D image data sets from one-dimensional ultrasound A-lines distributed in the 3D space of the uterus to form a 3D scancone of 3D-distributed scanlines.
  • the 3D scancone is not an assembly of 2D scanplanes.
  • the 3D image datasets are then subjected to image enhancement and analysis processes.
  • the processes are implemented either on the device itself or on the host computer. Alternatively, the processes can also be implemented on a server or other computer to which the 3D ultrasound data sets are transferred.
  • each 2D image in the 3D dataset is first enhanced using non-linear filters by an image pre-filtering step.
  • the image pre-filtering step includes an image-smoothing step to reduce image noise followed by an image-sharpening step to obtain maximum contrast between organ wall boundaries.
  • a second process includes subjecting the resulting image of the first process to a location method to identify initial edge points between amniotic fluid and other fetal or maternal structures.
  • the location method automatically determines the leading and trailing regions of wall locations along an A-mode one-dimensional scan line.
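The patent does not give pseudocode for this location step here; the sketch below is an assumed, minimal 1D illustration of finding leading and trailing wall candidates along a single A-line by bracketing a dark (fluid-like) run with the strongest falling and rising intensity transitions. The threshold value is a placeholder.

```python
# Minimal sketch (assumption, not the patent's exact method): locate candidate
# leading/trailing wall positions on a 1D A-line as the strongest falling and
# rising intensity transitions bracketing a dark (fluid-like) run of samples.
import numpy as np

def leading_trailing_edges(a_line, fluid_threshold=60):
    a = np.asarray(a_line, dtype=float)
    grad = np.diff(a)                       # sample-to-sample intensity change
    dark = a < fluid_threshold              # candidate fluid samples
    if not dark.any():
        return None                         # no fluid seen on this line
    first = np.argmax(dark)                 # first dark sample
    last = len(a) - 1 - np.argmax(dark[::-1])   # last dark sample
    leading = np.argmin(grad[:first + 1]) if first > 0 else 0          # bright -> dark
    trailing = last + np.argmax(grad[last:]) if last < len(grad) else last  # dark -> bright
    return leading, trailing

# Example: synthetic A-line with a dark fluid pocket between two bright walls
a_line = np.r_[np.full(30, 200), np.full(40, 20), np.full(30, 180)]
print(leading_trailing_edges(a_line))
```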
  • a third process includes subjecting the image of the first process to an intensity-based segmentation process where dark pixels (representing fluid) are automatically separated from bright pixels (representing tissue and other structures).
  • the images resulting from the second and third steps are combined to result in a single image representing likely amniotic fluid regions.
  • the combined image is cleaned to make the output image smooth and to remove extraneous structures such as the fetal head and the fetal bladder.
  • boundary line contours are placed on each 2D image. Thereafter, the method calculates the total 3D volume of amniotic fluid in the uterus.
  • preferred alternate embodiments of the invention allow for acquiring at least two 3D data sets, preferably four, each 3D data set having at least a partial ultrasonic view of the uterus, each partial view obtained from a different anatomical site of the patient.
  • a 3D array of 2D scanplanes is assembled such that the 3D array presents a composite image of the uterus that displays the amniotic fluid regions to provide the basis for calculation of amniotic fluid volumes.
  • the user acquires the 3D data sets in quarter sections of the uterus when the patient is in a supine position.
  • four image cones of data are acquired near the midpoint of each uterine quadrant at substantially equally spaced intervals between quadrant centers.
  • Image processing as outlined above is conducted for each quadrant image, segmenting on the darker pixels or voxels associated with amniotic fluid.
  • Correcting algorithms are applied to compensate for any quadrant-to-quadrant image cone overlap by registering and fixing one quadrant's image to another.
  • the result is a fixed 3D mosaic image of the uterus and the amniotic fluid volumes or regions in the uterus from the four separate image cones.
  • the user acquires one or more 3D image data sets of quarter sections of the uterus when the patient is in a lateral position.
  • each image cone of data is acquired along a lateral line at substantially equally spaced intervals.
  • Each image cone is subjected to the image processing as outlined above, with emphasis given to segmenting on the darker pixels or voxels associated with amniotic fluid.
  • Scanplanes showing common pixel or voxel overlaps are registered into a common coordinate system along the lateral line. Correcting algorithms are applied to compensate for any image cone overlap along the lateral line. The result is a fixed 3D mosaic image of the uterus and the amniotic fluid volumes or regions in the uterus from the four separate image cones.
  • At least two 3D scancones of 3D-distributed scanlines are acquired at different anatomical sites, image-processed, registered, and fused into a 3D mosaic image composite. Amniotic fluid volumes are then calculated.
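The rigid registration algorithm itself is detailed with FIGS. 15-16 and 22. As an assumed, simplified illustration of the registration idea (translation only, estimated by FFT phase correlation; the patent's method also handles rotation), the following sketch aligns a "moving" image to a "fixed" image:

```python
# Simplified sketch (assumption): estimate a translational offset between a
# "fixed" and a "moving" 2D image by FFT phase correlation, then shift the
# moving image into the fixed image's coordinate frame.
import numpy as np

def phase_correlation_shift(fixed, moving):
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = F * np.conj(M)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.real(np.fft.ifft2(cross_power))
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    if dy > fixed.shape[0] // 2:
        dy -= fixed.shape[0]
    if dx > fixed.shape[1] // 2:
        dx -= fixed.shape[1]
    return dy, dx

def register_translation(fixed, moving):
    dy, dx = phase_correlation_shift(fixed, moving)
    return np.roll(np.roll(moving, dy, axis=0), dx, axis=1)

# Example: a synthetic image and a copy shifted by (5, -3) pixels
rng = np.random.default_rng(0)
fixed = rng.random((128, 128))
moving = np.roll(np.roll(fixed, -5, axis=0), 3, axis=1)
print(phase_correlation_shift(fixed, moving))   # expected approximately (5, -3)
```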
  • the system and method further provides an automatic method to detect and correct for any contribution the fetal head provides to the amniotic fluid volume.
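The fetal head correction described here relies on a circular Hough transform (see FIGS. 8E-8F and FIG. 9). The sketch below is an assumed illustration using OpenCV's built-in HoughCircles to find a roughly circular, skull-like structure and excise its interior from a fluid mask; all parameter values are placeholders rather than values from the patent.

```python
# Assumed illustration of circular Hough detection for a fetal-head-like structure;
# parameter values are placeholders. Requires OpenCV (cv2) and NumPy.
import cv2
import numpy as np

def detect_head_circles(image_u8, min_radius=20, max_radius=80):
    """Return candidate circles as (x, y, radius) in pixel units."""
    blurred = cv2.GaussianBlur(image_u8, (9, 9), 2)
    circles = cv2.HoughCircles(
        blurred, cv2.HOUGH_GRADIENT, dp=1, minDist=40,
        param1=100, param2=30, minRadius=min_radius, maxRadius=max_radius)
    return [] if circles is None else circles[0]

def remove_head_from_fluid_mask(fluid_mask, circles):
    """Zero out fluid-mask pixels that fall inside any detected head circle."""
    cleaned = fluid_mask.copy()
    yy, xx = np.indices(fluid_mask.shape)
    for x, y, r in circles:
        cleaned[(xx - x) ** 2 + (yy - y) ** 2 <= r ** 2] = 0
    return cleaned
```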
  • Such systems, methods, and devices include improved transducer aiming and time-domain deconvolution processes that address the non-stationary effects of ultrasound signals.
  • the deconvolution processes apply algorithms to improve the clarity or resolution of ultrasonic images by suppressing reverberation of ultrasound echoes.
  • the initially acquired and distorted ultrasound image is reconstructed to a clearer image by countering the effect of distortion operators.
  • An improved point spread function (PSF) of the imaging system is applied, utilizing a deconvolution algorithm, to improve the image resolution and to remove reverberations by modeling them as noise.
  • optical flow is a powerful motion analysis technique that is applied in many different research and commercial fields.
  • optical flow is able to estimate the velocity field of an image series, and the velocity vectors provide information about the contents of the image series.
  • the velocity information inside and around the target can differ from that in other parts of the field; otherwise, there is no valuable information in the current field and the scanning has to be adjusted.
  • As regards analyzing the motions of organs and fluid flows within an examined subject, there are new optical-flow-based methods for estimating heart motion from two-dimensional echocardiographic sequences, an optical-flow-guided active contour method for myocardial tracking in contrast echocardiography, and a method for shape-driven segmentation and tracking of the left ventricle.
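As a hedged illustration of the velocity-field idea (not the patent's own optical-flow implementation), the sketch below uses OpenCV's Farneback dense optical flow to compute per-pixel (vx, vy) velocity vectors between two consecutive frames; the frame data here is synthetic.

```python
# Assumed illustration: dense optical-flow velocity field between two consecutive
# grayscale frames using OpenCV's Farneback method.
import cv2
import numpy as np

def velocity_field(prev_frame, next_frame):
    """Return per-pixel (vx, vy) displacements and the speed magnitude."""
    flow = cv2.calcOpticalFlowFarneback(
        prev_frame, next_frame, None,
        pyr_scale=0.5, levels=3, winsize=15,
        iterations=3, poly_n=5, poly_sigma=1.2, flags=0)
    vx, vy = flow[..., 0], flow[..., 1]
    speed = np.hypot(vx, vy)
    return vx, vy, speed

# Example with two synthetic 8-bit frames
prev_frame = np.random.randint(0, 255, (128, 128), dtype=np.uint8)
next_frame = np.roll(prev_frame, 2, axis=1)       # contents shifted 2 pixels right
vx, vy, speed = velocity_field(prev_frame, next_frame)
print(float(speed.mean()))
```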
  • ultrasound visualization of the motion of the cannula is enabled by a cannula fitted with echogenic ultrasound micro-reflectors.
  • embodiments include an apparatus that dispenses a metered quantity of ultrasound coupling gel and enables one-handed gel application.
  • the apparatus also preserves the gel in a de-gassed state (no air bubbles), preserves the gel in a sterile state (no contact between gel applicator and patient), includes a method for easy container refill, and preserves the shape and volume of existing gel application bottles.
  • FIG. 1 is a side view of a microprocessor-controlled, hand-held ultrasound transceiver
  • FIG. 2A is a depiction of the hand-held transceiver in use for scanning a patient
  • FIG. 2B is a perspective view of the hand-held transceiver device sitting in a communication cradle;
  • FIG. 2C is a perspective view of an amniotic fluid volume measuring system
  • FIG. 3 is an alternate embodiment of an amniotic fluid volume measuring system in schematic view of a plurality of transceivers in connection with a server;
  • FIG. 4 is another alternate embodiment of an amniotic fluid volume measuring system in a schematic view of a plurality of transceivers in connection with a server over a network;
  • FIG. 5A is a graphical representation of a plurality of scan lines forming a single scan plane
  • FIG. 5B is a graphical representation of a plurality of scanplanes forming a three-dimensional array having a substantially conic shape
  • FIG. 5C is a graphical representation of a plurality of 3D distributed scanlines emanating from the transceiver forming a scancone;
  • FIG. 6 is a depiction of the hand-held transceiver placed laterally on a patient trans-abdominally to transmit ultrasound and receive ultrasound echoes for processing to determine amniotic fluid volumes;
  • FIG. 7 shows a block diagram overview of the two-dimensional and three-dimensional Input, Image Enhancement, Intensity-Based Segmentation, Edge-Based Segmentation, Combine, Polish, Output, and Compute algorithms to visualize and determine the volume or area of amniotic fluid;
  • FIG. 8A depicts the sub-algorithms of Image Enhancement
  • FIG. 8B depicts the sub-algorithms of Intensity-Based Segmentation
  • FIG. 8C depicts the sub-algorithms of Edge-Based Segmentation
  • FIG. 8D depicts the sub-algorithms of the Polish algorithm, including Close, Open, Remove Deep Regions, and Remove Fetal Head Regions;
  • FIG. 8E depicts the sub-algorithms of the Remove Fetal Head Regions sub-algorithm
  • FIG. 8F depicts the sub-algorithms of the Hough Transform sub-algorithm
  • FIG. 9 depicts the operation of a circular Hough transform algorithm
  • FIG. 10 shows results of sequentially applying the algorithm steps on a sample image
  • FIG. 11 illustrates a set of intermediate images of the fetal head detection process
  • FIG. 12 presents a 4-panel series of sonographer amniotic fluid pocket outlines and the algorithm output amniotic fluid pocket outlines;
  • FIG. 13 illustrates a 4-quadrant supine procedure to acquire multiple image cones
  • FIG. 14 illustrates an in-line lateral line procedure to acquire multiple image cones
  • FIG. 15 is a block diagram overview of the rigid registration and correcting algorithms used in processing multiple image cone data sets
  • FIG. 16 is a block diagram of the steps in the rigid registration algorithm
  • FIG. 17A is an example image showing a first view of a fixed scanplane
  • FIG. 17B is an example image showing a second view of a moving scanplane having some voxels in common with the first scanplane;
  • FIG. 17C is a composite image of the first (fixed) and second (moving) images
  • FIG. 18A is an example image showing a first view of a fixed scanplane
  • FIG. 18B is an example image showing a second view of a moving scanplane having some voxels in common with the first view and a third view;
  • FIG. 18C is a third view of a moving scanplane having some voxels in common with the second view
  • FIG. 18D is a composite image of the first (fixed), second (moving), and third (moving) views
  • FIG. 19 illustrates a 6-section supine procedure to acquire multiple image cones around the center point of the uterus of a patient
  • FIG. 20 is a block diagram algorithm overview of the registration and correcting algorithms used in processing the 6-section multiple image cone data sets depicted in FIG. 19 ;
  • FIG. 21 is an expansion of the Image Enhancement and Segmentation block 1010 of FIG. 20 ;
  • FIG. 22 is an expansion of the RigidRegistration block 1014 of FIG. 20 ;
  • FIG. 23 is a 4-panel image set that shows the effect of multiple iterations of the heat filter applied to an original image
  • FIG. 24 shows the effect of shock filtering and a combination heat-and-shock filtering on the pixel values of the image
  • FIG. 25 is a 7-panel image set progressively receiving application of the image enhancement and segmentation algorithms of FIG. 21 ;
  • FIG. 26 is a pixel difference kernel for obtaining X and Y derivatives to determine pixel gradient magnitudes for edge-based segmentation.
  • FIG. 27 is a 3-panel image set showing the progressive demarcation or edge detection of organ wall interfaces arising from edge-based segmentation algorithms.
  • FIGS. 1A-D depicts a partial schematic and a partial isometric view of a transceiver, a scan cone comprising a rotational array of scan planes, and a scan plane of the array of an ultrasound harmonic imaging system;
  • FIG. 2 depicts a partial schematic and partial isometric and side view of a transceiver, and a scan cone array comprised of 3D-distributed scan lines in alternate embodiment of an ultrasound harmonic imaging system;
  • FIG. 3 is a schematic illustration of a server-accessed local area network in communication with a plurality of ultrasound harmonic imaging systems
  • FIG. 4 is a schematic illustration of the Internet in communication with a plurality of ultrasound harmonic imaging systems
  • FIG. 5 schematically depicts a method flow chart algorithm 120 to acquire a clarity enhanced ultrasound image
  • FIG. 6 is an expansion of sub-algorithm 150 of master algorithm 120 of FIG. 5 ;
  • FIG. 7 is an expansion of sub-algorithms 200 of FIG. 5 ;
  • FIG. 8 is an expansion of sub-algorithm 300 of master algorithm illustrated in FIG. 5 ;
  • FIG. 9 is an expansion of sub-algorithms 400 A and 400 B of FIG. 5 ;
  • FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5 ;
  • FIG. 11 depicts a logarithm of a Cepstrum
  • FIGS. 12A-C depict histogram waveform plots derived from water tank pulse-echo experiments undergoing parametric and non-parametric analysis
  • FIGS. 13-25 are bladder sonograms that depict image clarity after undergoing image enhancement processing by algorithms described in FIGS. 5-10 ;
  • FIG. 13 is an unprocessed image that will undergo image enhancement processing
  • FIG. 14 illustrates an enclosed portion of a magnified region of FIG. 13 ;
  • FIG. 15 is the resultant image of FIG. 13 that has undergone image processing via nonparametric estimation under sub-algorithm 400 A;
  • FIG. 16 is the resultant image of FIG. 13 that has undergone image processing via parametric estimation under sub-algorithm 400 B;
  • FIG. 17 is the resultant image of an alternate image processing embodiment using a Wiener filter.
  • FIG. 18 is another unprocessed image that will undergo image enhancement processing
  • FIG. 19 illustrates an enclosed portion of a magnified region of FIG. 18 ;
  • FIG. 20 is the resultant image of FIG. 18 that has undergone image processing via nonparametric estimation under sub-algorithm 400 A;
  • FIG. 21 is the resultant image of FIG. 18 that has undergone image processing via parametric estimation under sub-algorithm 400 B;
  • FIG. 22 is another unprocessed image that will undergo image enhancement processing
  • FIG. 23 illustrates an enclosed portion of a magnified region of FIG. 22 ;
  • FIG. 24 is the resultant image of FIG. 22 that has undergone image processing via nonparametric estimation under sub-algorithm 400 A;
  • FIG. 25 is the resultant image of FIG. 22 that has undergone image processing via parametric estimation under sub-algorithm 400 B;
  • FIG. 26 depicts a schematic example of a time velocity map derived from sub-algorithm 310 ;
  • FIG. 27 depicts another schematic example of a time velocity map derived from sub-algorithm 310 ;
  • FIG. 28 illustrates a seven panel image series of a beating heart ventricle that will undergo the optical flow processes of sub-algorithm 300 in which at least two images are required;
  • FIG. 29 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 presented in a 2D flow pattern after undergoing sub-algorithm 310 ;
  • FIG. 30 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the X-axis direction or phi direction after undergoing sub-algorithm 310 ;
  • FIG. 31 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the Y-axis or radial direction after undergoing sub-algorithm 310 ;
  • FIG. 32 illustrates a 3D optical vector plot after undergoing sub-algorithm 310 and corresponds to the top row of FIG. 29 ;
  • FIG. 35 illustrates a 3D optical vector plot in the radial direction above a Y-axis threshold setting of 0.6 after undergoing sub-algorithm 310 and corresponds to FIG. 34 ; values less than the threshold T of 0.6 are set to 0;
  • FIGS. 36A-G depicts embodiments of the sonic gel dispenser
  • FIGS. 37 and 38 are diagrams showing one embodiment of the present invention.
  • FIG. 39 is a diagram showing additional detail for a needle shaft to be used with one embodiment of the invention.
  • FIGS. 40A and 40B are diagrams showing close-up views of surface features of the needle shaft shown in FIG. 38 ;
  • FIG. 41 is a diagram showing imaging components for use with the needle shaft shown in FIG. 38 ;
  • FIG. 42 is a diagram showing a representation of an image produced by the imaging components shown in FIG. 41 ;
  • FIG. 43 is a system diagram of an embodiment of the present invention.
  • FIG. 44 is a system diagram of an example embodiment showing additional detail for one of the components shown in FIG. 38 ;
  • FIGS. 45 and 46 are flowcharts of a method of displaying the trajectory of a cannula in accordance with an embodiment of the present invention.
  • the preferred portable embodiment of the ultrasound transceiver of the amniotic fluid volume measuring system is shown in FIGS. 1-4 .
  • the transceiver 10 includes a handle 12 having a trigger 14 and a top button 16 , a transceiver housing 18 attached to the handle 12 , and a transceiver dome 20 .
  • a display 24 for user interaction is attached to the transceiver housing 18 at an end opposite the transceiver dome 20 .
  • Housed within the transceiver 10 is a single element transducer (not shown) that converts ultrasound waves to electrical signals.
  • the transceiver 10 is held in position against the body of a patient by a user for image acquisition and signal processing.
  • the transceiver 10 transmits a radio frequency ultrasound signal at substantially 3.7 MHz to the body and then receives a returning echo signal.
  • the transceiver 10 can be adjusted to transmit a range of probing ultrasound energy from approximately 2 MHz to approximately 10 MHz radio frequencies.
  • the top button 16 selects for different acquisition volumes.
  • the transceiver is controlled by a microprocessor and software associated with the microprocessor and a digital signal processor of a computer system.
  • the term “computer system” broadly comprises any microprocessor-based or other computer system capable of executing operating instructions and manipulating data, and is not limited to a traditional desktop or notebook computer.
  • the display 24 presents alphanumeric or graphic data indicating the proper or optimal positioning of the transceiver 10 for initiating a series of scans.
  • the transceiver 10 is configured to initiate the series of scans to obtain and present 3D images as either a 3D array of 2D scanplanes or as a single 3D scancone of 3D distributed scanlines.
  • a suitable transceiver is the DCD372 made by Diagnostic Ultrasound.
  • the two- or three-dimensional image of a scan plane may be presented in the display 24 .
  • the transceiver need not be battery-operated or otherwise portable, need not have a top-mounted display 24 , and may include many other features or differences.
  • the display 24 may be a liquid crystal display (LCD), a light emitting diode (LED), a cathode ray tube (CRT), or any suitable display capable of presenting alphanumeric data or graphic images.
  • FIG. 2A is a photograph of the hand-held transceiver 10 for scanning a patient.
  • the transceiver 10 is then positioned over the patient's abdomen by a user holding the handle 12 to place the transceiver housing 18 against the patient's abdomen.
  • the top button 16 is centrally located on the handle 12 .
  • the transceiver 10 transmits an ultrasound signal at substantially 3.7 MHz into the uterus.
  • the transceiver 10 receives a return ultrasound echo signal emanating from the uterus and presents it on the display 24 .
  • FIG. 2B is a perspective view of the hand-held transceiver device sitting in a communication cradle.
  • the transceiver 10 sits in a communication cradle 42 via the handle 12 .
  • This cradle can be connected to a standard USB port of any personal computer, enabling all the data on the device to be transferred to the computer and enabling new programs to be transferred into the device from the computer.
  • FIG. 2C is a perspective view of an amniotic fluid volume measuring system 5 A.
  • the system 5 A includes the transceiver 10 cradled in the cradle 42 that is in signal communication with a computer 52 .
  • the transceiver 10 sits in a communication cradle 42 via the handle 12 .
  • This cradle can be connected to a standard USB port of any personal computer 52 , enabling all the data on the transceiver 10 to be transferred to the computer for analysis and determination of amniotic fluid volume.
  • FIG. 3 depicts an alternate embodiment of an amniotic fluid volume measuring system 5 B in a schematic view.
  • the system 5 B includes a plurality of systems 5 A in signal communication with a server 56 .
  • each transceiver 10 is in signal connection with the server 56 through connections via a plurality of computers 52 .
  • FIG. 3 depicts each transceiver 10 being used to send probing ultrasound radiation to a uterus of a patient and to subsequently retrieve ultrasound echoes returning from the uterus, convert the ultrasound echoes into digital echo signals, store the digital echo signals, and process the digital echo signals by algorithms of the invention.
  • a user holds the transceiver 10 by the handle 12 to send probing ultrasound signals and to receive incoming ultrasound echoes.
  • the transceiver 10 is placed in the communication cradle 42 that is in signal communication with a computer 52 , and operates as an amniotic fluid volume measuring system. Two amniotic fluid volume-measuring systems are depicted as representative though fewer or more systems may be used.
  • a “server” can be any computer software or hardware that responds to requests or issues commands to or from a client. Likewise, the server may be accessible by one or more client computers via the Internet, or may be in communication over a LAN or other network.
  • Each amniotic fluid volume measuring system includes the transceiver 10 for acquiring data from a patient.
  • the transceiver 10 is placed in the cradle 42 to establish signal communication with the computer 52 .
  • Signal communication as illustrated is by a wired connection from the cradle 42 to the computer 52 .
  • Signal communication between the transceiver 10 and the computer 52 may also be by wireless means, for example, infrared signals or radio frequency signals. The wireless means of signal communication may occur between the cradle 42 and the computer 52 , the transceiver 10 and the computer 52 , or the transceiver 10 and the cradle 42 .
  • a preferred first embodiment of the amniotic fluid volume measuring system includes each transceiver 10 being separately used on a patient and sending signals proportionate to the received and acquired ultrasound echoes to the computer 52 for storage.
  • Residing in each computer 52 are imaging programs having instructions to prepare and analyze a plurality of one dimensional (1D) images from the stored signals and transforms the plurality of 1D images into the plurality of 2D scanplanes.
  • the imaging programs also present 3D renderings from the plurality of 2D scanplanes.
  • Also residing in each computer 52 are instructions to perform the additional ultrasound image enhancement procedures, including instructions to implement the image processing algorithms.
  • a preferred second embodiment of the amniotic fluid volume measuring system is similar to the first embodiment, but the imaging programs and the instructions to perform the additional ultrasound enhancement procedures are located on the server 56 .
  • Each computer 52 from each amniotic fluid volume measuring system receives the acquired signals from the transceiver 10 via the cradle 42 and stores the signals in the memory of the computer 52 .
  • the computer 52 subsequently retrieves the imaging programs and the instructions to perform the additional ultrasound enhancement procedures from the server 56 .
  • each computer 52 prepares the 1D images, 2D images, 3D renderings, and enhanced images from the retrieved imaging and ultrasound enhancement procedures. Results from the data analysis procedures are sent to the server 56 for storage.
  • a preferred third embodiment of the amniotic fluid volume measuring system is similar to the first and second embodiments, but the imaging programs and the instructions to perform the additional ultrasound enhancement procedures are located on the server 56 and executed on the server 56 .
  • Each computer 52 from each amniotic fluid volume measuring system receives the acquired signals from the transceiver 10 via the cradle 42 and stores the acquired signals in the memory of the computer 52 .
  • the computer 52 subsequently sends the stored signals to the server 56 .
  • the imaging programs and the instructions to perform the additional ultrasound enhancement procedures are executed to prepare the 1D images, 2D images, 3D renderings, and enhanced images from the server 56 stored signals. Results from the data analysis procedures are kept on the server 56 , or alternatively, sent to the computer 52 .
  • FIG. 4 is another embodiment of an amniotic fluid volume measuring system 5 C presented in schematic view.
  • the system 5 C includes a plurality of amniotic fluid measuring systems 5 A connected to a server 56 over the Internet or other network 64 .
  • FIG. 4 represents any of the first, second, or third embodiments of the invention advantageously deployed to other servers and computer systems through connections via the network.
  • FIG. 5A is a graphical representation of a plurality of scan lines forming a single scan plane.
  • FIG. 5A illustrates how ultrasound signals are used to make analyzable images, more specifically how a series of one-dimensional (1D) scanlines are used to produce a two-dimensional (2D) image.
  • the 1D and 2D operational aspects of the single element transducer housed in the transceiver 10 are seen as it rotates mechanically through the tilt angle φ.
  • a scanline 214 of length r migrates between a first limiting position 218 and a second limiting position 222 as determined by the value of the tilt angle φ, creating a fan-like 2D scanplane 210 .
  • the transceiver 10 operates substantially at a 3.7 MHz frequency and creates an approximately 18 cm deep scan line 214 that migrates through the tilt angle φ in increments of approximately 0.027 radians.
  • a first motor tilts the transducer approximately 60° clockwise and then counterclockwise forming the fan-like 2D scanplane presenting an approximate 120° 2D sector image.
  • a plurality of scanlines, each scanline substantially equivalent to scanline 214 , is recorded between the first limiting position 218 and the second limiting position 222 , each scanline formed at a unique tilt angle φ.
  • the plurality of scanlines between the two extremes forms a scanplane 210 .
  • each scanplane contains 77 scan lines, although the number of lines can vary within the scope of this invention.
  • the tilt angle φ sweeps through angles approximately between −60° and +60° for a total arc of approximately 120°.
  • FIG. 5B is a graphical representation of a plurality of scanplanes forming a three-dimensional array (3D) 240 having a substantially conic shape.
  • FIG. 5B illustrates how a 3D rendering is obtained from the plurality of 2D scanplanes.
  • within each scanplane 210 are the plurality of scanlines, each scanline equivalent to the scanline 214 and sharing a common rotational angle θ.
  • each scanplane contains 77 scan lines, although the number of lines can vary within the scope of this invention.
  • Each 2D sector image scanplane 210 with tilt angle φ and range r collectively forms a 3D conic array 240 with rotation angle θ.
  • a second motor rotates the transducer by 3.75° or 7.5° to gather the next 120° sector image. This process is repeated until the transducer is rotated through 180°, resulting in the cone-shaped 3D conic array 240 data set with 24 planes rotationally assembled in the preferred embodiment.
  • the conic array could have fewer or more planes rotationally assembled.
  • preferred alternate embodiments of the conic array could include at least two scanplanes, or a range of scanplanes from 2 to 48 scanplanes. The upper range of the scanplanes can be greater than 48 scanplanes.
  • the tilt angle φ indicates the tilt of the scanline from the centerline of the 2D sector image, and the rotation angle θ identifies the particular rotation plane in which the sector image lies. Therefore, any point in this 3D data set can be isolated using coordinates expressed as three parameters, P(r, φ, θ).
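As an assumed geometry sketch (the axis conventions below are chosen for illustration and are not specified in this excerpt), a sample P(r, φ, θ) can be mapped to Cartesian coordinates by tilting the scanline by φ within its scanplane and rotating that plane by θ about the cone axis:

```python
# Assumed geometry sketch: map a scanline sample P(r, phi, theta) to Cartesian
# coordinates, taking the cone axis as the z (depth) axis, phi as the tilt of the
# scanline from the centerline within its scanplane, and theta as the rotation of
# that scanplane about the cone axis. These axis conventions are an illustration
# only, not a specification from the patent.
import numpy as np

def scan_point_to_cartesian(r, phi_deg, theta_deg):
    phi = np.radians(phi_deg)
    theta = np.radians(theta_deg)
    x = r * np.sin(phi) * np.cos(theta)
    y = r * np.sin(phi) * np.sin(theta)
    z = r * np.cos(phi)               # depth along the cone axis
    return x, y, z

# Example: 77 scanlines tilted from -60 to +60 degrees in one scanplane (theta = 0),
# sampled at an 18 cm range, matching the figures quoted above.
tilts = np.linspace(-60.0, 60.0, 77)
points = [scan_point_to_cartesian(18.0, t, 0.0) for t in tilts]
print(points[0], points[-1])
```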
  • the returning echoes are interpreted as analog electrical signals by a transducer, converted to digital signals by an analog-to-digital converter, and conveyed to the digital signal processor of the computer system for storage and analysis to determine the locations of the amniotic fluid walls.
  • the computer system is representationally depicted in FIGS. 3 and 4 and includes a microprocessor, random access memory (RAM), or other memory for storing processing instructions and data generated by the transceiver 10 .
  • FIG. 5C is a graphical representation of a plurality of 3D-distributed scanlines emanating from the transceiver 10 forming a scancone 300 .
  • the scancone 300 is formed by a plurality of 3D distributed scanlines that comprises a plurality of internal and peripheral scanlines.
  • the scanlines are one-dimensional ultrasound A-lines that emanate from the transceiver 10 at different coordinate directions and that, taken as an aggregate, form a conic shape.
  • the 3D-distributed A-lines (scanlines) are not necessarily confined within a scanplane, but instead are directed to sweep throughout the internal and along the periphery of the scancone 300 .
  • the 3D-distributed scanlines not only would occupy a given scanplane in a 3D array of 2D scanplanes, but also the inter-scanplane spaces, from the conic axis to and including the conic periphery.
  • the transceiver 10 shows the same illustrated features from FIG. 1 , but is configured to distribute the ultrasound A-lines throughout 3D space in different coordinate directions to form the scancone 300 .
  • the internal scanlines are represented by scanlines 312 A-C.
  • the number and location of the internal scanlines emanating from the transceiver 10 are chosen so that the internal scanlines are distributed within the scancone 300 , at different positional coordinates, sufficiently to visualize structures or images within the scancone 300 .
  • the internal scanlines are not peripheral scanlines.
  • the peripheral scanlines are represented by scanlines 314 A-F and occupy the conic periphery, thus representing the peripheral limits of the scancone 300 .
  • FIG. 6 is a depiction of the hand-held transceiver placed on a patient trans-abdominally to transmit probing ultrasound and receive ultrasound echoes for processing to determine amniotic fluid volumes.
  • the transceiver 10 is held by the handle 12 and positioned over a patient to measure the volume of amniotic fluid in an amniotic sac over a baby.
  • a plurality of axes for describing the orientation of the baby, the amniotic sac, and mother is illustrated.
  • the plurality of axes includes a vertical axis depicted on the line L(R)-L(L) for left and right orientations, a horizontal axis LI-LS for inferior and superior orientations, and a depth axis LA-LP for anterior and posterior orientations.
  • FIG. 6 is representative of a preferred data acquisition protocol used for amniotic fluid volume determination.
  • the transceiver 10 is the hand-held 3D ultrasound device (for example, model DCD372 from Diagnostic Ultrasound) and is used to image the uterus trans-abdominally.
  • the patient is in a supine position and the device is operated in a 2D continuous acquisition mode.
  • a 2D continuous mode is where the data is continuously acquired in 2D and presented as a scanplane similar to the scanplane 210 on the display 24 while an operator physically moves the transceiver 10 .
  • An operator moves the transceiver 10 around on the maternal abdomen, presses the trigger 14 of the transceiver 10 , and continuously acquires real-time feedback presented in 2D on the display 24 .
  • Amniotic fluid, where present, visually appears as dark regions, along with an alphanumeric indication of amniotic fluid area (for example, in cm²) on the display 24 .
  • the operator decides which side of the uterus has more amniotic fluid by the presentation on the display 24 .
  • the side having more amniotic fluid presents larger, darker regions on the display 24 .
  • the side displaying a large dark region registers a greater alphanumeric area, while the side with less fluid displays smaller dark regions and proportionately registers a smaller alphanumeric area on the display 24 .
  • Although amniotic fluid is present throughout the uterus, its distribution in the uterus depends upon where and how the fetus is positioned within the uterus. There is usually less amniotic fluid around the fetus's spine and back and more amniotic fluid in front of its abdomen and around the limbs.
  • Based on fetal position information acquired from data gathered under the continuous acquisition mode, the patient is placed in a lateral recumbent position such that the fetus is displaced towards the ground, creating a large pocket of amniotic fluid close to the abdominal surface where the transceiver 10 can be placed as shown in FIG. 6 .
  • For example, if large fluid pockets are found on the right side of the uterus, the patient is asked to turn with the left side down, and if large fluid pockets are found on the left side, the patient is asked to turn with the right side down.
  • the transceiver 10 is again operated in the 2D continuous acquisition mode and is moved around on the lateral surface of the patient's abdomen.
  • the operator finds the location that shows the largest amniotic fluid area based on acquiring the largest dark region imaged and the largest alphanumeric value displayed on the display 24 .
  • while the transceiver 10 is held in a fixed position, the trigger 14 is released to acquire a 3D image comprising a set of arrayed scanplanes.
  • the 3D image presents a rotational array of the scanplanes 210 similar to the 3D array 240 .
  • the operator can reposition the transceiver 10 to different abdominal locations to acquire new 3D images comprised of different scanplane arrays similar to the 3D array 240 .
  • Multiple scan cones obtained from different positions provide the operator the ability to image the entire amniotic fluid region from different view points. In the case of a single image cone being too small to accommodate a large AFV measurement, obtaining multiple 3D array 240 image cones ensures that the total volume of large AFV regions is determined.
  • Multiple 3D images may also be acquired by pressing the top button 16 to select multiple conic arrays similar to the 3D array 240 .
  • a single image scan may present an underestimated volume of AFV due to amniotic fluid pockets that remain hidden behind the limbs of the fetus.
  • the hidden amniotic fluid pockets present as unquantifiable shadow-regions.
  • repeated repositioning of the transceiver 10 and rescanning can be done to obtain more than one ultrasound view to maximize detection of amniotic fluid pockets.
  • Repositioning and rescanning provide multiple views as a plurality of the 3D array 240 image cones. Acquiring multiple image cones improves the probability of obtaining initial estimates of AFV that otherwise could remain undetected and un-quantified in a single scan.
  • the user determines and scans at only one location on the entire abdomen that shows the maximum amniotic fluid area while the patient is in the supine position.
  • 2D scanplane images equivalent to the scanplane 210 are continuously acquired and the amniotic fluid area on every image is automatically computed.
  • the user selects one location that shows the maximum amniotic fluid area. At this location, as the user releases the scan button, a full 3D data cone is acquired and stored in the device's memory.
  • FIG. 7 shows a block diagram overview of the image enhancement, segmentation, and polishing algorithms of the amniotic fluid volume measuring system.
  • the enhancement, segmentation, and polishing algorithms are applied to each scanplane 210 or to the entire scan cone 240 to automatically obtain amniotic fluid regions.
  • the algorithms are expressed in two-dimensional terms and use formulas to convert scanplane pixels (picture elements) into area units.
  • the algorithms are expressed in three-dimensional terms and use formulas to convert voxels (volume elements) into volume units.
  • the algorithms expressed in 2D terms are used during the targeting phase where the operator trans-abdominally positions and repositions the transceiver 10 to obtain real-time feedback about the amniotic fluid area in each scanplane.
  • the algorithms expressed in 3D terms are used to obtain the total amniotic fluid volume computed from the voxels contained within the calculated amniotic fluid regions in the 3D conic array 240 .
  • FIG. 7 represents an overview of a preferred method of the invention and includes a sequence of algorithms, many of which have sub-algorithms described in more specific detail in FIGS. 8A-F .
  • FIG. 7 begins with inputting data of an unprocessed image at step 410 .
  • once unprocessed image data 410 is entered (e.g., read from memory, scanned, or otherwise acquired), it is automatically subjected to an image enhancement algorithm 418 that reduces the noise in the data (including speckle noise) using one or more equations while preserving the salient edges on the image using one or more additional equations.
  • the enhanced images are segmented by two different methods whose results are eventually combined.
  • a first segmentation method applies an intensity-based segmentation algorithm 422 that determines all pixels that are potentially fluid pixels based on their intensities.
  • a second segmentation method applies an edge-based segmentation algorithm 438 that relies on detecting the fluid and tissue interfaces.
  • the images obtained by the first segmentation algorithm 422 and the images obtained by the second segmentation algorithm 438 are brought together via a combination algorithm 442 to provide a substantially segmented image.
  • the segmented image obtained from the combination algorithm 442 is then subjected to a polishing algorithm 464 in which the segmented image is cleaned up by filling gaps with pixels and removing unlikely regions.
  • the image obtained from the polishing algorithm 464 is outputted 480 for calculation of areas and volumes of segmented regions-of-interest.
  • the area or the volume of the segmented region-of-interest is computed 484 by multiplying pixels by a first resolution factor to obtain area, or voxels by a second resolution factor to obtain volume.
  • the first resolution or conversion factor for pixel area is equivalent to 0.64 mm².
  • the second resolution or conversion factor for voxel volume is equivalent to 0.512 mm³.
  • Different unit lengths for pixels and voxels may be assigned, with a proportional change in pixel area and voxel volume conversion factors.
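The conversion of step 484 reduces to counting segmented elements and multiplying by the stated factors (0.64 mm² per pixel and 0.512 mm³ per voxel, i.e., a 0.8 mm element side length). The sketch below is illustrative only; the array and function names are not from the patent.

```python
import numpy as np

# Illustrative conversion factors from the text: 0.8 mm element side length,
# so 0.8**2 = 0.64 mm^2 per pixel and 0.8**3 = 0.512 mm^3 per voxel.
PIXEL_AREA_MM2 = 0.64
VOXEL_VOLUME_MM3 = 0.512

def amniotic_fluid_area(segmented_scanplane: np.ndarray) -> float:
    """Area in mm^2 from a binary 2D segmentation (nonzero = fluid)."""
    return float(np.count_nonzero(segmented_scanplane)) * PIXEL_AREA_MM2

def amniotic_fluid_volume(segmented_scancone: np.ndarray) -> float:
    """Volume in mm^3 from a binary 3D segmentation of the scan cone."""
    return float(np.count_nonzero(segmented_scancone)) * VOXEL_VOLUME_MM3
```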
  • the enhancement, segmentation and polishing algorithms depicted in FIG. 7 for measuring amniotic fluid areas or volumes are not limited to scanplanes assembled into rotational arrays equivalent to the 3D array 240 .
  • the enhancement, segmentation and polishing algorithms depicted in FIG. 7 apply to translation arrays and wedge arrays.
  • Translation arrays are substantially rectilinear image plane slices from incrementally repositioned ultrasound transceivers that are configured to acquire ultrasound rectilinear scanplanes separated by regular or irregular rectilinear spaces.
  • the translation arrays can be made from transceivers configured to advance incrementally, or may be hand-positioned incrementally by an operator.
  • the operator obtains a wedge array from ultrasound transceivers configured to acquire wedge-shaped scanplanes separated by regular or irregular angular spaces, and either mechanistically advanced or hand-tilted incrementally.
  • Any number of scanplanes can be assembled into translation arrays or wedge arrays, preferably in ranges greater than 2 scanplanes.
  • Other preferred embodiments of the enhancement, segmentation and polishing algorithms depicted in FIG. 7 may be applied to images formed by line arrays, either spirally distributed or reconstructed from random lines.
  • the line arrays are defined using points identified by the coordinates expressed by the three parameters, P(r, φ, θ), where the values of r, φ, and θ can vary.
  • the enhancement, segmentation and polishing algorithms depicted in FIG. 7 are not limited to ultrasound applications but may be employed in other imaging technologies utilizing scanplane arrays or individual scanplanes.
  • biological-based and non-biological-based images acquired using infrared, visible light, ultraviolet light, microwave, x-ray computed tomography, magnetic resonance, gamma rays, and positron emission are images suitable for the algorithms depicted in FIG. 7 .
  • the algorithms depicted in FIG. 7 can be applied to facsimile transmitted images and documents.
  • FIGS. 8A-E depict expanded details of the preferred embodiments of enhancement, segmentation, and polishing algorithms described in FIG. 7 .
  • Each of the following more detailed algorithms is implemented either on the transceiver 10 itself or on the host computer 52 or on the server 56 computer to which the ultrasound data is transferred.
  • FIG. 8A depicts the sub-algorithms of Image Enhancement.
  • the sub-algorithms include a heat filter 514 to reduce noise and a shock filter 518 to sharpen edges.
  • a combination of the heat and shock filters works very well at reducing noise and sharpening the data while preserving the significant discontinuities.
  • the noisy signal is filtered using a 1D heat filter (Equation E1 below), which results in the reduction of noise and smoothing of edges.
  • This step is followed by a shock-filtering step 518 (Equation E2 below), which results in the sharpening of the blurred signal.
  • Noise reduction and edge sharpening are achieved by application of the following equations E1-E2.
  • the algorithm of the heat filter 514 uses a heat equation E1.
  • the heat equation E1 in partial differential equation (PDE) form for image processing is expressed as:
  • ∂u/∂t = ∂²u/∂x² + ∂²u/∂y²   (E1)
  • u is the image being processed.
  • the image u is 2D, and is comprised of an array of pixels arranged in rows along the x-axis, and an array of pixels arranged in columns along the y-axis.
  • I = the input image pixel intensity.
  • the value of I depends on the application and commonly occurs within ranges consistent with the application. For example, I can range from 0 to 1, occupy intermediate ranges of 0 to 127 or 0 to 512, or occupy higher ranges of 0 to 1024, 0 to 4096, or greater.
  • the heat equation E1 results in a smoothing of the image and is equivalent to Gaussian filtering of the image. The larger the number of iterations applied, the more the input image is smoothed or blurred and the more noise is reduced.
  • the shock filter 518 is a PDE used to sharpen images as detailed below.
  • the two dimensional shock filter E2 is expressed as:
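A minimal sketch of the enhancement stage is given below. The heat filter follows equation E1 as an explicit diffusion step; because the patent's shock filter equation E2 is not reproduced here, the sketch substitutes a common Osher-Rudin style form, -sign(Laplacian(u))·‖∇u‖, as a stand-in. Iteration counts and step sizes are illustrative.

```python
import numpy as np
from scipy.ndimage import laplace, sobel

def heat_filter(u, iterations=50, dt=0.1):
    """Equation E1 as an explicit diffusion: du/dt = u_xx + u_yy (denoises, blurs edges)."""
    u = u.astype(float).copy()
    for _ in range(iterations):
        u += dt * laplace(u)
    return u

def shock_filter(u, iterations=10, dt=0.05):
    """Edge re-sharpening; uses a common shock-filter form as a stand-in for equation E2."""
    u = u.astype(float).copy()
    for _ in range(iterations):
        gx, gy = sobel(u, axis=1), sobel(u, axis=0)
        u -= dt * np.sign(laplace(u)) * np.hypot(gx, gy)
    return u

# enhanced = shock_filter(heat_filter(raw_scanplane))  # heat 514 followed by shock 518
```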
  • FIG. 8B depicts the sub-algorithms of Intensity-Based Segmentation (step 422 in FIG. 7 ).
  • the intensity-based segmentation step 422 uses a “k-means” intensity clustering 522 technique where the enhanced image is subjected to a categorizing “k-means” clustering algorithm.
  • the “k-means” algorithm categorizes pixel intensities into white, gray, and black pixel groups. Given the number of desired clusters or groups of intensities (k), the k-means algorithm is an iterative algorithm comprising four steps:
  • each image is clustered independently of the neighboring images.
  • the entire volume is clustered together. To make this step faster, pixels are down-sampled by a factor of 2 (or any other sampling-rate factor) before determining the cluster boundaries. The cluster boundaries determined from the down-sampled data are then applied to the entire data set.
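The sketch below illustrates the k-means intensity clustering 522 with the down-sampling speed-up just described: cluster centers are learned from every second voxel (or any other sampling factor) and then applied to the full data, with k = 3 for the white, gray, and black groups. All names are illustrative.

```python
import numpy as np

def kmeans_intensity_centers(samples, k=3, iterations=20):
    """Plain k-means on pixel intensities; returns sorted cluster centers."""
    centers = np.linspace(samples.min(), samples.max(), k)   # simple initial guesses
    for _ in range(iterations):
        labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = samples[labels == j].mean()
    return np.sort(centers)

def cluster_volume(volume, k=3, downsample=2):
    """Learn cluster boundaries on down-sampled voxels, then label every voxel."""
    sample = volume[::downsample, ::downsample, ::downsample].ravel().astype(float)
    centers = kmeans_intensity_centers(sample, k)
    return np.argmin(np.abs(volume[..., None].astype(float) - centers), axis=-1)
```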
  • FIG. 8C depicts the sub-algorithms of Edge-Based Segmentation (step 438 in FIG. 7 ) and uses a sequence of four sub-algorithms.
  • the sequence includes a spatial gradients 526 algorithm, a hysteresis threshold 530 algorithm, a Region-of-Interest (ROI) 534 algorithm, and a matching edges filter 538 algorithm.
  • the spatial gradient 526 computes the x-directional and y-directional spatial gradients of the enhanced image.
  • the Hysteresis threshold 530 algorithm detects salient edges. Once the edges are detected, the regions defined by the edges are selected by a user employing the ROI 534 algorithm to select regions-of-interest deemed relevant for analysis.
  • the edge points can be easily determined by taking x- and y-derivatives using backward differences along x- and y-directions.
  • the pixel gradient magnitude ‖∇I‖ is then computed from the x- and y-derivative images in equation E5 as: ‖∇I‖ = √(I²x + I²y), where I²x is the square of the x-derivative of intensity and I²y is the square of the y-derivative of intensity.
  • In hysteresis thresholding 530 , two threshold values, a lower threshold and a higher threshold, are used. First, the image is thresholded at the lower threshold value and a connected component labeling is carried out on the resulting image. Next, each connected edge component is preserved which has at least one edge pixel having a gradient magnitude greater than the upper threshold. This kind of thresholding scheme is good at retaining long connected edges that have one or more high-gradient points.
  • the two thresholds are automatically estimated.
  • the upper gradient threshold is estimated at a value such that at most 97% of the image pixels are marked as non-edges.
  • the lower threshold is set at 50% of the value of the upper threshold. These percentages could be different in different implementations.
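Putting the spatial gradient 526 and hysteresis threshold 530 steps together, a sketch of the automatic two-threshold scheme might look like the following; the 97% and 50% figures come from the text, while the use of scipy's connected-component labeling is an assumption for illustration.

```python
import numpy as np
from scipy.ndimage import label

def hysteresis_edges(image):
    """Detect salient edges with automatically estimated dual thresholds."""
    img = image.astype(float)
    ix = np.zeros_like(img); ix[:, 1:] = np.diff(img, axis=1)   # backward x-difference
    iy = np.zeros_like(img); iy[1:, :] = np.diff(img, axis=0)   # backward y-difference
    grad = np.hypot(ix, iy)

    upper = np.percentile(grad, 97.0)   # ~97% of pixels end up as non-edges
    lower = 0.5 * upper                 # lower threshold at 50% of the upper

    weak = grad >= lower
    components, n = label(weak)         # connected components of the low-threshold image
    edges = np.zeros_like(weak)
    for c in range(1, n + 1):
        mask = components == c
        if grad[mask].max() >= upper:   # keep components with at least one strong pixel
            edges |= mask
    return edges
```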
  • edge points that lie within a desired region-of-interest are selected 534 . This region of interest selection 534 excludes points lying at the image boundaries and points lying too close to or too far from the transceiver 10 .
  • the matching edge filter 538 is applied to remove outlier edge points and fill in the area between the matching edge points.
  • the edge-matching algorithm 538 is applied to establish valid boundary edges and remove spurious edges while filling the regions between boundary edges.
  • Edge points on an image have a directional component indicating the direction of the gradient.
  • Pixels in scanlines crossing a boundary edge location will exhibit two gradient transitions depending on the pixel intensity directionality.
  • Each gradient transition is given a positive or negative value depending on the pixel intensity directionality. For example, if the scanline approaches an echo reflective bright wall from a darker region, then an ascending transition is established as the pixel intensity gradient increases to a maximum value, i.e., as the transition ascends from a dark region to a bright region. The ascending transition is given a positive numerical value. Similarly, as the scanline recedes from the echo reflective wall, a descending transition is established as the pixel intensity gradient decreases to or approaches a minimum value. The descending transition is given a negative numerical value.
  • Valid boundary edges are those that exhibit ascending and descending pixel intensity gradients, or equivalently, exhibit paired or matched positive and negative numerical values. The valid boundary edges are retained in the image. Spurious or invalid boundary edges do not exhibit paired ascending-descending pixel intensity gradients, i.e., do not exhibit paired or matched positive and negative numerical values. The spurious boundary edges are removed from the image.
  • edge points for amniotic fluid surround a dark, closed region, with directions pointing inwards towards the center of the region.
  • given the direction of the gradient for any edge point, the edge point having a gradient direction approximately opposite to that of the current point represents the matching edge point.
  • those edge points exhibiting an assigned positive and negative value are kept as valid edge points on the image because the negative value is paired with its positive value counterpart.
  • those edge point candidates having unmatched values, i.e., those edge point candidates not having a negative-positive value pair, are deemed not to be true or valid edge points and are discarded from the image.
  • the matching edge point algorithm 538 delineates edge points not lying on the boundary for removal from the desired dark regions. Thereafter, the region between any two matching edge points is filled in with non-zero pixels to establish edge-based segmentation. In a preferred embodiment of the invention, only edge points whose directions are primarily oriented co-linearly with the scanline are sought to permit the detection of matching front wall and back wall pairs.
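A much-simplified 1D sketch of the matching-edges idea on a single scanline is shown below: descending (negative) transitions into a dark region are paired with the next ascending (positive) transition out of it, unmatched candidates contribute nothing, and the span between a matched pair is filled with non-zero pixels. The gradient threshold and pairing rule are illustrative, not the device's exact criteria.

```python
import numpy as np

def fill_matched_edges(scanline, grad_threshold=10.0):
    """Fill the dark span between matched descending(-)/ascending(+) edge pairs."""
    grad = np.diff(scanline.astype(float))
    ascending = np.where(grad > grad_threshold)[0]     # dark-to-bright transitions (+)
    descending = np.where(grad < -grad_threshold)[0]   # bright-to-dark transitions (-)
    filled = np.zeros(scanline.shape, dtype=bool)
    for d in descending:
        later = ascending[ascending > d]
        if later.size:                                  # matched pair: fill the region between
            filled[d + 1:later[0] + 1] = True
    return filled
```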
  • the results of the intensity-based segmentation 422 step and the edge-based segmentation 438 step are combined using an AND Operator of Images 442 .
  • the AND Operator of Images 442 is achieved by a pixel-wise Boolean AND operator 442 step to produce a segmented image by computing the pixel intersection of two images.
  • the Boolean AND operation 442 represents the pixels as binary numbers and assigns an intersection value of 1 or 0 to the combination of any two pixels. For example, consider any two pixels, say pixel A and pixel B , each of which can have 1 or 0 as its assigned value.
  • the Boolean AND operation 542 takes any two binary digital images as input, and outputs a third image with the pixel values made equivalent to the intersection of the two input images.
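For binary masks the AND Operator of Images reduces to an element-wise logical conjunction (1 AND 1 = 1; any combination containing a 0 gives 0), as in this minimal sketch:

```python
import numpy as np

def combine_segmentations(intensity_mask: np.ndarray, edge_mask: np.ndarray) -> np.ndarray:
    """Pixel-wise Boolean AND: fluid only where both segmentations agree."""
    return np.logical_and(intensity_mask.astype(bool), edge_mask.astype(bool))
```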
  • the polish 464 algorithm of FIG. 7 is comprised of multiple sub-algorithms.
  • FIG. 8D depicts the sub-algorithms of the Polish 464 algorithm, including a Close 546 algorithm, an Open 550 algorithm, a Remove Deep Regions 554 algorithm, and a Remove Fetal Head Regions 560 algorithm.
  • Closing and opening algorithms are operations that process images based on the knowledge of the shape of objects contained on a black and white image, where white represents foreground regions and black represents background regions. Closing serves to remove background features on the image that are smaller than a specified size. Opening serves to remove foreground features on the image that are smaller than a specified size. The size of the features to be removed is specified as an input to these operations.
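A sketch of such closing and opening using scipy's binary morphology follows; the square structuring element stands in for the "specified size" input mentioned above and is purely illustrative.

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def close_then_open(fluid_mask: np.ndarray, size: int = 5) -> np.ndarray:
    """Close small background gaps, then open away small foreground specks."""
    structure = np.ones((size, size), dtype=bool)   # size of features to remove
    closed = binary_closing(fluid_mask.astype(bool), structure=structure)
    return binary_opening(closed, structure=structure)
```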
  • the opening algorithm 550 removes unlikely amniotic fluid regions from the segmented image based on a-priori knowledge of the size and location of amniotic fluid pockets.
  • the closing 546 algorithm obtains the Apparent Amniotic Fluid Area (AAFA) or Volume (AAFV) values.
  • AAFA and AAFV values are “Apparent” and maximal because these values may contain region areas or region volumes of non-amniotic origin unknowingly contributing to and obscuring what otherwise would be the true amniotic fluid volume.
  • the AAFA and AAFV values contain the true amniotic fluid areas and volumes, and possibly also areas or volumes attributable to deep tissues and undetected fetal head regions.
  • the apparent area and volume values require correction or adjustments due to unknown contributions of deep tissue and of the fetal head in order to determine an Adjusted Amniotic Fluid Area (AdAFA) value or Volume (AdAVA) value 568 .
  • the AAFA and AAFV values obtained by the Close 546 algorithm are reduced by the morphological opening algorithm 550 . Thereafter, these values are further reduced by removing areas and volumes attributable to deep regions using the Remove Deep Regions 554 algorithm. Thereafter, the polishing algorithm 464 continues by applying a fetal head region detection algorithm 560 .
  • FIG. 8E depicts the sub-algorithms of the Remove Fetal Head Regions sub-algorithm 560 .
  • the basic idea of the sub-algorithms of the fetal head detection algorithm 560 is that the edge points that potentially represent a fetal skull are detected. Thereafter, a circle finding algorithm to determine the best-fitting circle to these fetal skull edges is implemented. The radii of the circles that are searched are known a priori based on the fetus' gestational age. The best fitting circle whose fitting metric lies above a certain pre-specified threshold is marked as the fetal head and the region inside this circle is the fetal head region.
  • the algorithms include a gestational Age 726 input, a determine head diameter factor 730 algorithm, a Head Edge Detection algorithm, 734 , and a Hough transform procedure 736 .
  • Fetal brain tissue has substantially similar ultrasound echo qualities as presented by amniotic fluid. If not detected and subtracted from amniotic fluid volumes, fetal brain tissue volumes will be measured as part of the total amniotic fluid volumes and lead to an overestimation and false diagnosis of oligohydramniotic or polyhydramniotic conditions. Thus detecting fetal head position, measuring fetal brain matter volumes, and deducting the fetal brain matter volumes from the amniotic fluid volumes to obtain a corrected amniotic fluid volume serves to establish accurately measured amniotic fluid volumes.
  • the gestational age input 726 begins the fetal head detection algorithm 560 and uses a head dimension table to obtain ranges of head bi-parietal diameters (BPD) to search for (e.g., 30 week gestational age corresponds to a 6 cm head diameter).
  • the head diameter range is input to both the Head Edge Detection, 734 , and the Hough Transform, 736 .
  • the head edge detection 734 algorithm seeks out the distinctively bright ultrasound echoes from the anterior and posterior walls of the fetal skull while the Hough Transform algorithm, 736 , finds the fetal head using circular shapes as models for the fetal head in the Cartesian image (pre-scan conversion to polar form).
  • Scanplanes processed by steps 522 , 538 , 530 are input to the head edge detection step 734 .
  • Applied as the first step in the fetal head detection algorithm 734 is the detection of the potential head edges from among the edges found by the matching edge filter.
  • the matching edge 538 filter outputs pairs of edge points potentially belonging to front walls or back walls. Not all of these walls correspond to fetal head locations.
  • the edge points representing the fetal head are determined using the following heuristics:
  • the pixels found satisfying these features are then vertically dilated to produce a set of thick fetal head edges as the output of Head Edge Detection, 734 .
  • FIG. 8F depicts the sub-algorithms of the Hough transform procedure 736 .
  • the sub-algorithms include a Polar Hough Transform 738 algorithm, a find maximum Hough value 742 algorithm, and a fill circle region 746 algorithm.
  • the Polar Hough Transform algorithm looks for fetal head structures in polar coordinate terms by converting from Cartesian coordinates using a plurality of equations.
  • the fetal head which appears like a circle in a 3D scan-converted Cartesian coordinate image, has a different shape in the pre-scan converted polar space.
  • the fetal head shape is expressed in polar coordinate terms as follows:
  • the Hough transform 736 algorithm using equations E5 and E6 attempts to find the best-fit circle to the edges of an image.
  • a circle in the polar space is defined by a set of three parameters, (r₀, φ₀, R), representing the center and the radius of the circle.
  • the basic idea for the Hough transform 736 is as follows. Suppose a circle is sought having a fixed radius (say, R1) for which the best center of the circle is similarly sought. Every edge point on the input image lies on a potential circle whose center lies R1 pixels away from it. The set of potential centers themselves forms a circle of radius R1 around each edge pixel. Drawing potential circles of radius R1 around each edge pixel, the point at which most circles intersect is the center of the circle that represents a best-fit circle to the given edge points. Therefore, each pixel in the Hough transform output contains a likelihood value that is simply the count of the number of circles passing through that point.
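The voting scheme just described can be sketched compactly for a fixed radius; for clarity the example below works in Cartesian pixel coordinates, whereas the device's Polar Hough Transform 738 operates on pre-scan-converted polar data. All names are illustrative.

```python
import numpy as np

def hough_circle_fixed_radius(edge_mask: np.ndarray, radius: float):
    """Each edge pixel votes for every candidate center at the given radius;
    the accumulator maximum is the best-fit circle center."""
    h, w = edge_mask.shape
    accumulator = np.zeros((h, w), dtype=int)
    angles = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    ys, xs = np.nonzero(edge_mask)
    for y, x in zip(ys, xs):
        cy = np.round(y + radius * np.sin(angles)).astype(int)
        cx = np.round(x + radius * np.cos(angles)).astype(int)
        ok = (cy >= 0) & (cy < h) & (cx >= 0) & (cx < w)
        np.add.at(accumulator, (cy[ok], cx[ok]), 1)
    center = np.unravel_index(np.argmax(accumulator), accumulator.shape)
    return center, int(accumulator[center])
```

Repeating this over the small, gestational-age-dependent set of radii and keeping the largest vote count mirrors the search used by the find maximum Hough value 742 step.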
  • FIG. 9 illustrates the Hough Transform 736 algorithm for a plurality of circles with a fixed radius in a Cartesian coordinate system.
  • a portion of the plurality of circles is represented by a first circle 804 a, a second circle 804 b, and a third circle 804 c.
  • a plurality of edge pixels are represented as gray squares and an edge pixel 808 is shown.
  • a circle is drawn around each edge pixel to distinguish a center location 812 of a best-fit circle 816 passing through each edge pixel point; the point of the center location through which most such circles pass (shown by a gray star 812 ) is the center of the best-fit circle 816 presented as a thick dark line.
  • the circumference of the best fit circle 816 passes substantially through the central portion of each edge pixel, represented as a series of squares substantially equivalent to the edge pixel 808 .
  • This search for best fitting circles can be easily extended to circles with varying radii by adding one more degree of freedom; however, a discrete set of radii around the mean radius for a given gestational age makes the search significantly faster, as it is not necessary to search all possible radii.
  • the next step in the head detection algorithm is selecting or rejecting best-fit circles based on their likelihood, in the find maximum Hough Value 742 algorithm.
  • a 2D metric as a maximum Hough value 742 of the Hough transform 736 output is defined for every image in a dataset.
  • the 3D metric is defined as the maximum of the 2D metrics for the entire 3D dataset.
  • a fetal head is selected on an image depending on whether its 3D metric value exceeds a preset 3D threshold and also whether the 2D metric exceeds a preset 2D threshold.
  • the 3D threshold is currently set at 7 and the 2D threshold is currently set at 5.
  • the fetal head detection algorithm concludes with a fill circle region 746 that incorporates pixels to the image within the detected circle.
  • the fill circle region 746 algorithm fills the inside of the best fitting polar circle. Accordingly, the fill circle region 746 algorithm encloses and defines the area of the fetal brain tissue, permitting the area and volume to be calculated and deducted via algorithm 554 from the apparent amniotic fluid area and volume (AAFA or AAFV) to obtain a computation of the corrected amniotic fluid area or volume via algorithm 484 .
  • FIG. 10 shows the results of sequentially applying the algorithm steps of FIGS. 7 and 8 A-D on an unprocessed sample image 820 presented within the confines of a scanplane substantially equivalent to the scanplane 210 .
  • the results of applying the heat filter 514 and shock filter 518 in enhancing the unprocessed sample is shown in enhanced image 840 .
  • the result of intensity-based segmentation algorithms 522 is shown in image 850 .
  • the result of the edge-based segmentation 438 algorithm, using sub-algorithms 526 , 530 , 534 and 538 on the enhanced image 840 , is shown in segmented image 858 .
  • the result of the combination 442 utilizing the Boolean AND images 442 algorithm is shown in image 862 where white represents the amniotic fluid area.
  • the result of the polishing 464 algorithm employing algorithms 542 , 546 , 550 , 554 , 560 , and 564 is shown in image 864 , which depicts the amniotic fluid area overlaid on the unprocessed sample image 810 .
  • FIG. 11 depicts a series of images showing the results of the above method to automatically detect, locate, and measure the area and volume of a fetal head using the algorithms outlined in FIGS. 7 and 8 A-F.
  • the fetal head image is marked by distinctive bright echoes from the anterior and posterior walls of the fetal skull and a circular shape of the fetal head in the Cartesian image.
  • the fetal head detection algorithm 734 operates on the polar coordinate data (i.e., pre-scan version, not yet converted to Cartesian coordinates).
  • An example output of applying the head edge detection 734 algorithm to detect potential head edges is shown in image 930 . Occupying the space between the anterior and posterior walls are dilated black pixels 932 (stacks or short lines of black pixels representing thick edges). An example of the polar Hough transform 738 for one actual data sample for a specific radius is shown in polar coordinate image 940 .
  • An example of the best-fit circle on real polar data is shown in polar coordinate image 950 , which has undergone the find maximum Hough value step 742 .
  • the polar coordinate image 950 is scan-converted to Cartesian data in image 960 , where the effects of the find maximum Hough value 742 algorithm are seen in Cartesian format.
  • FIG. 12 presents a 4-panel series of sonographer amniotic fluid pocket outlines compared to the algorithm's output in a scanplane equivalent to scanplane 210 .
  • the top two panels depict the sonographer's outlines of amniotic fluid pockets obtained by manual interactions with the display while the bottom two panels show the resulting amniotic fluid boundaries obtained from the instant invention's automatic application of 2D algorithms, 3D algorithms, combination heat and shock filter algorithms, and segmentation algorithms.
  • multiple cones of data acquired at multiple anatomical sampling sites may be advantageous.
  • the pregnant uterus may be too large to completely fit in one cone of data sampled from a single measurement or anatomical site of the patient (patient location). That is, the transceiver 10 is moved to different anatomical locations of the patient to obtain different 3D views of the uterus from each measurement or transceiver location.
  • Obtaining multiple 3D views may be especially needed during the third trimester of pregnancy, or when twins or triplets are involved.
  • multiple data cones can be sampled from different anatomical sites at known intervals and then combined into a composite image mosaic to present a large uterus in one, continuous image.
  • at least two 3D image cones are generally preferred, with one image cone defined as fixed, and the other image cone defined as moving.
  • the 3D image cones obtained from each anatomical site may be in the form of 3D arrays of 2D scanplanes, similar to the 3D array 240 . Furthermore, the 3D image cone may be in the form of a wedge or a translational array of 2D scanplanes. Alternatively, the 3D image cone obtained from each anatomical site may be a 3D scancone of 3D-distributed scanlines, similar to the scancone 300 .
  • registration with reference to digital images means the determination of a geometrical transformation or mapping that aligns viewpoint pixels or voxels from one data cone sample of the object (in this embodiment, the uterus) with viewpoint pixels or voxels from another data cone sampled at a different location from the object. That is, registration involves mathematically determining and converting the coordinates of common regions of an object from one viewpoint to the coordinates of another viewpoint. After registration of at least two data cones to a common coordinate system, the registered data cone images are then fused together by combining the two registered data images by producing a reoriented version from the view of one of the registered data cones.
  • a second data cone's view is merged into a first data cone's view by translating and rotating the pixels of the second data cone's pixels that are common with the pixels of the first data cone. Knowing how much to translate and rotate the second data cone's common pixels or voxels allows the pixels or voxels in common between both data cones to be superimposed into approximately the same x, y, z, spatial coordinates so as to accurately portray the object being imaged.
  • the more precise and accurate the pixel or voxel rotation and translation the more precise and accurate is the common pixel or voxel superimposition or overlap between adjacent image cones.
  • the precise and accurate overlap between the images assures the construction of an anatomically correct composite image mosaic substantially devoid of duplicated anatomical regions.
  • the preferred geometrical transformation that fosters obtaining an anatomically accurate mosaic image is a rigid transformation that does not permit the distortion or deforming of the geometrical parameters or coordinates between the pixels or voxels common to both image cones.
  • the preferred rigid transformation first converts the polar coordinate scanplanes from adjacent image cones into x, y, z Cartesian axes.
  • a rigid transformation, T is determined from the scanplanes of adjacent image cones having pixels in common.
  • the transformation represents a shift and rotation conversion factor that aligns and overlaps common pixels from the scanplanes of the adjacent image cones.
  • the common pixels used for the purposes of establishing registration of three-dimensional images are the boundaries of the amniotic fluid regions as determined by the amniotic fluid segmentation algorithm described above.
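For reference, applying a rigid transformation T = (R, t) to the moving cone's Cartesian voxel coordinates looks like the sketch below; R is a 3 x 3 rotation matrix (here composed from Euler angles, with an Rz·Ry·Rx ordering assumed for illustration) and t a translation vector, and no scaling or shear is allowed.

```python
import numpy as np

def rotation_from_euler(theta_x, theta_y, theta_z):
    """Rotation matrix from Euler angles in radians (Rz @ Ry @ Rx order assumed)."""
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_rigid_transform(points_xyz: np.ndarray, R: np.ndarray, t) -> np.ndarray:
    """Map an (N, 3) array of voxel coordinates with x' = R @ x + t."""
    return points_xyz @ R.T + np.asarray(t)
```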
  • Several different protocols that may be used to collect and process multiple cones of data from more than one measurement site are described in FIGS. 13-14 .
  • FIG. 13 illustrates a 4-quadrant supine procedure to acquire multiple image cones around the center point of uterine quadrants of a patient in a supine procedure.
  • the patient lies supine (on her back) displacing most or all of the amniotic fluid towards the top.
  • the uterus is divided into 4 quadrants defined by the umbilicus (the navel) and the linea-nigra (the vertical center line of the abdomen) and a single 3D scan is acquired at each quadrant.
  • the 4-quadrant supine protocol acquires four different 3D scans in a two dimensional grid, each corner of the grid being a quadrant midpoint.
  • FIG. 14 illustrates a multiple lateral line procedure to acquire multiple image cones in a linear array.
  • the patient lies laterally (on her side), displacing most or all of the amniotic fluid towards the top.
  • Four 3D image cones of data are acquired along a line at substantially equally spaced intervals.
  • the transceiver 10 moves along the lateral line at position 1 , position 2 , position 3 , and position 4 .
  • the inter-position distance or interval is approximately 6 cm.
  • the preferred embodiment for making a composite image mosaic involves obtaining four multiple image cones where the transceiver 10 is placed at four measurement sites over the patient in a supine or lateral position such that at least a portion of the uterus is ultrasonically viewable at each measurement site.
  • the first measurement site is originally defined as fixed, and the second site is defined as moving and placed at a first known inter-site distance relative to the first site.
  • the second site images are registered and fused to the first site images. After fusing the second site images to the first site images, the third measurement site is defined as moving and placed at a second known inter-site distance relative to the fused second site, now defined as fixed.
  • the third site images are registered and fused to the second site images. Similarly, after fusing the third site images to the second site images, the fourth measurement site is defined as moving and placed at a third known inter-site distance relative to the fused third site, now defined as fixed. The fourth site images are registered and fused to the third site images.
  • the four measurement sites may be along a line or in an array.
  • the array may include rectangles, squares, diamond patterns, or other shapes.
  • the patient is positioned such that the baby moves downward with gravity in the uterus and displaces the amniotic fluid upwards toward the measuring positions of the transceiver 10 .
  • the interval or distance between each measurement site is approximately equal, or may be unequal.
  • the second site is spaced approximately 6 cm from the first site
  • the third site is spaced approximately 6 cm from the second site
  • the fourth site is spaced approximately 6 cm from the third site.
  • the spacing for unequal intervals could be, for example, the second site is spaced approximately 4 cm from the first site
  • the third site is spaced approximately 8 cm from the second site
  • the fourth site is spaced approximately 6 cm from the third site.
  • the interval distance between measurement sites may be varied as long as there are mutually viewable regions of portions of the uterus between adjacent measurement sites.
  • two and three measurement sites may be sufficient for making a composite 3D image mosaic.
  • a triangular array is possible, with equal or unequal intervals.
  • if the second and third measurement sites have regions mutually viewable from the first measurement site, the second interval may be measured from the first measurement site instead of from the second measurement site.
  • each measurement site is ultrasonically viewable for at least a portion of the uterus.
  • a pentagon array is possible, with equal or unequal intervals.
  • a hexagon array is possible, with equal or unequal intervals between each measurement site.
  • Other polygonal arrays are possible with increasing numbers of measurement sites.
  • the position of each image cone must be ascertained so that overlapping regions can be identified between any two image cones, permitting the combining of adjacent neighboring cones so that a single 3D mosaic composite image is produced from the 4-quadrant or in-line laterally acquired images.
  • the alignment of each moving cone with the voxels common to the stationary image cone is guided by an inputted initial transform that has the expected translational and rotational values.
  • the distance separating the transceiver 10 between image cone acquisitions predicts the expected translational and rotational values. For example, as shown in FIG. 14 , if 6 cm separates the image cones, then the expected translational and rotational values are proportionally estimated.
  • FIG. 15 is a block diagram algorithm overview of the registration and correcting algorithms used in processing multiple image cone data sets.
  • the algorithm overview 1000 shows how the entire amniotic fluid volume measurement process occurs from the multiply acquired image cones.
  • each of the input cones 1004 is segmented 1008 to detect all amniotic fluid regions.
  • the segmentation 1008 step is substantially similar to steps 418 - 480 of FIG. 7 .
  • these segmented regions are used to align (register) the different cones into one common coordinate system using a Rigid Registration 1012 algorithm.
  • the registered datasets from each image cone are fused with each other using a Fuse Data 1016 algorithm to produce a composite 3D mosaic image.
  • the total amniotic fluid volume is computed 1020 from the fused or composite 3D mosaic image.
  • FIG. 16 is a block diagram of the steps of the rigid registration algorithm 1012 .
  • the rigid algorithm 1012 is a 3D image registration algorithm and is a modification of the Iterated Closest Point (ICP) algorithm published by P J Besl and N D McKay, in “A Method for Registration of 3-D Shapes,” IEEE Trans. Pattern Analysis & Machine Intelligence, vol. 14, no. 2, February 1992, pp. 239-256.
  • the steps of the rigid registration algorithm 1012 serve to correct for overlap between adjacent 3D scan cones acquired in either the 4-quadrant supine grid procedure or the lateral line multi data cone acquisition procedure.
  • the rigid algorithm 1012 first converts the fixed image 1104 from polar coordinate terms to Cartesian coordinate terms using the 3D Scan Convert 1108 algorithm.
  • the moving image 1124 is also converted to Cartesian coordinates using the 3D Scan Convert 1128 algorithm.
  • the edges of the amniotic fluid regions on the fixed and moving images are determined and converted into point sets p and q respectively by a 3D edge detection process 1112 and 1132 .
  • the fixed image point set, p undergoes a 3D distance transform process 1116 which maps every voxel in a 3D image to a number representing the distance to the closest edge point in p. Pre-computing this distance transform makes subsequent distance calculations and closest point determinations very efficient.
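A minimal sketch of that pre-computation, using scipy's Euclidean distance transform as an illustrative stand-in: every voxel gets its distance to, and the coordinates of, the nearest edge voxel of p, so later closest-point queries become simple lookups.

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def precompute_edge_distances(edge_mask_p: np.ndarray):
    """Distance from every voxel to the nearest fixed-image edge voxel, plus the
    coordinates of that nearest edge voxel (for closest-point lookups)."""
    distances, nearest_idx = distance_transform_edt(~edge_mask_p.astype(bool),
                                                    return_indices=True)
    return distances, nearest_idx
```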
  • the known initial transform 1136 , for example, (6, 0, 0) for the Cartesian Tx, Ty, Tz terms and (0, 0, 0) for the θx, θy, θz Euler angle terms for an inter-transceiver interval of 6 cm, is subsequently applied to the moving image by the Apply Transform 1140 step.
  • This transformed image is then compared to the fixed image to examine for the quantitative occurrence of overlapping voxels. If the overlap is less than 20%, there are not enough common voxels available for registration and the initial transform is considered sufficient for fusing at step 1016 .
  • the q-voxels of the initial transform are subjected to an iterative sequence of rigid registration.
  • a transformation T serves to register a first voxel point set p from the first image cone by merging or overlapping a second voxel point set q from a second image cone that is common to p of the first image cone.
  • n is the number of points in each point set, and p̄ and q̄ are the central points (centroids) of the two voxel point sets.
  • Cpq is the cross-covariance matrix of the two voxel point sets.
  • the preferred embodiment uses a statistical process known as the Singular Value Decomposition (SVD) originally developed by Eckart and Young (C. Eckart and G. Young, 1936, The Approximation of One Matrix by Another of Lower Rank, Psychometrika 1, 211-218).
  • the SVD is applied to the matrix, and the resulting SVD values are determined to solve for the best fitting rotation transform R to be applied to the moving voxel point set q to align with the fixed voxel point set p to acquire optimum overlapping accuracy of the pixel or voxels common to the fixed and moving images.
  • Equation E9 gives the SVD of the cross-covariance Cpq: Cpq = U D Vᵀ   (E9)
  • D is a 3 ⁇ 3 diagonal matrix and U and V are orthogonal 3 ⁇ 3 matrices
  • Equation E10 further defines the rotational R description of the transformation T in terms of U and V orthogonal 3 ⁇ 3 matrices as:
  • Equation E11 further defines the translation transform t description of the transformation T in terms of p , q and R as:
  • Equations E8 through E11 present a method to determine the rigid transformation between two point sets p and q; this process corresponds to step 1152 in FIG. 16 .
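A sketch of that step in the spirit of equations E8-E11 is shown below: the cross-covariance of the centered, paired point sets is decomposed by SVD, the rotation is assembled from U and V, and the translation follows from the centroids. This follows the standard Arun/Besl-McKay formulation; the patent's exact sign and ordering conventions in E10-E11 are not reproduced here.

```python
import numpy as np

def rigid_transform_from_pairs(p: np.ndarray, q: np.ndarray):
    """Best rigid (R, t) mapping moving points q onto fixed points p.
    p and q are (N, 3) arrays already paired by the closest-point step."""
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)       # centroids (cf. E8)
    C_pq = (q - q_bar).T @ (p - p_bar) / len(p)          # cross-covariance matrix
    U, D, Vt = np.linalg.svd(C_pq)                       # SVD (cf. E9)
    R = Vt.T @ U.T                                       # rotation (cf. E10)
    if np.linalg.det(R) < 0:                             # guard against a reflection
        Vt[-1, :] *= -1
        R = Vt.T @ U.T
    t = p_bar - R @ q_bar                                # translation (cf. E11)
    return R, t
```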
  • the steps of the registration algorithm are applied iteratively until convergence.
  • the iterative sequence includes a Find Closest Points on Fixed Image 1148 step, a Determine New Transform 1152 step, a Calculate Distances 1156 step, and Converged decision 1160 step.
  • the Determine New Transform 1152 step calculates the rotation R via SVD analysis using equations E8-E10 and translation transform t via equation E11.
  • the predicted-q pixel candidates are considered converged and suitable for receiving the transforms R and t to rigidly register the moving image Transform 1136 onto the common voxels p of the 3D Scan Converted 1108 image.
  • the rigid registration process is complete as closest proximity between voxel or pixel sets has occurred between the fixed and moving images, and the process continues with fusion at step 1016 .
  • A representative example of the application of the preferred embodiment for the registration and fusion of a moving image onto a fixed image is shown in FIGS. 17A-17C .
  • FIG. 17A is a first measurement view of a fixed scanplane 1200 A from a 3D data set measurement taken at a first site.
  • a first pixel set p consistent for the dark pixels of AFV is shown in a region 1204 A.
  • the region 1204 A has approximate x-y coordinates of (150, 120), which is closest to the dark edge.
  • FIG. 17B is a second measurement view of a moving scanplane 1200 B from a 3D data set measurement taken at a second site.
  • a second pixel set q consistent for the dark pixels of AFV is shown in a region 1204 B.
  • the region 1204 B has approximate x-y coordinates of (50, 125), which is closest to the dark edge.
  • FIG. 17C is a composite image 1200 C of the first (fixed) 1200 A and second (moving) 1200 B images in which the common pixels 1204 B at approximate coordinates (50, 125) are aligned or overlapped with the voxels 1204 A at approximate coordinates (150, 120). That is, the region 1204 B pixel set q is linearly and rotationally transformed consistent with the closest edge selection methodology, as shown in FIGS. 17A and 17B , from employing the 3D Edge Detection 1112 step.
  • the composite image 1200 C is a mosaic image from scanplanes having approximately the same tilt θ and rotation φ angles.
  • the registration and fusing of common pixel sets p and q from scanplanes having approximately the same tilt θ and rotation φ angles can be repeated for other scanplanes in each 3D data set taken at the first (fixed) and second (moving) anatomical sites. For example, if the composite image 1200 C above was for scanplane # 1 , then the process may be repeated for the remaining scanplanes # 2 - 24 or # 2 - 48 or greater as needed to capture a completed uterine mosaic image.
  • an array similar to the 3D array 240 from FIG. 5B is assembled, except this time the scanplane array is made of composite images, each composited image belonging to a scanplane having approximately the same tilt θ and rotation φ angles.
  • the respective registration, fusing, and assembling into scanplane arrays of composited images is undertaken with the same procedures.
  • the scanplane composite array similar to the 3D array 240 is composed of a greater mosaic number of registered and fused scanplane images.
  • A representative example of the fusing of two moving images onto a fixed image is shown in FIGS. 18A-18D .
  • FIG. 18A is a first view of a fixed scanplane 1220 A.
  • Region 1224 A is identified as p voxels approximately at the coordinates (150, 70).
  • FIG. 18B is a second view of a first moving scanplane 1220 B having some q voxels 1224 B at x-y coordinates (300, 100) in common with the first measurement's p voxels at x-y coordinates (150, 70). Another set of voxels 1234 A is shown roughly near the intersection of x-y coordinates (200, 125). As the transceiver 10 was moved only translationally, the scanplane 1220 B from the second site has approximately the same tilt θ and rotation φ angles as the fixed scanplane 1220 A taken from the first lateral in-line site.
  • FIG. 18C is a third view of a moving scanplane 1220 C.
  • a region 1234 B is identified as q voxels approximately at the x-y coordinates (250, 100) that are common with the second view's q voxels 1234 A.
  • the scanplane 1220 C from the third lateral in-line site has approximately the same tilt θ and rotation φ angles as the fixed scanplane 1220 A taken from the first lateral in-line site and the first moving scanplane 1220 B taken from the second lateral in-line site.
  • FIG. 18D is a composite mosaic image 1220 D of the first (fixed) 1220 A image, the second (moving) 1220 B image, and the third (moving) 1220 C image representing the sequential alignment and fusing of q voxel sets 1224 B to 1224 A, and 1234 B with 1234 A.
  • a fourth image similarly could be made to bring about a 4-image mosaic from scanplanes from a fourth 3D data set acquired from the transceiver 10 taking measurements at a fourth anatomical site, where the fourth 3D data set is acquired with approximately the same tilt θ and rotation φ angles.
  • the transceiver 10 is moved to different anatomical sites by hand placement by an operator to collect 3D data sets. Such hand placement could result in 3D data sets acquired under conditions in which the tilt θ and rotation φ angles are not approximately equal, but differ enough to cause some measurement error requiring correction in order to use the rigid registration 1012 algorithm.
  • the built-in accelerometer measures the changes in tilt θ and rotation φ angles and compensates accordingly so that acquired moving images are presented as though they were acquired under approximately equal tilt θ and rotation φ angle conditions.
  • FIG. 19 illustrates a 6-section supine procedure to acquire multiple image cones around the center point of a uterus of a patient in a supine position.
  • Each of the 6 segments is scanned in the order indicated, starting with segment 1 on the lower right side of the patient.
  • the display on the scanner 10 is configured to indicate how many segments have been scanned, so that the display shows “0 of 6,” “1 of 6,” . . . “6 of 6.”
  • the scans are positioned such that the lateral distances between each scanning position (except between positions 3 and 4 ) are approximately 8 cm.
  • the top button of the scanner 10 is repetitively depressed, so that it returns the scan to “0 of 6,” to permit a user to repeat all six scans again.
  • the scanner 10 is returned to the cradle to upload the raw ultrasound data to a computer, intranet, or the Internet as depicted in FIGS. 2C , 3 , and 4 for algorithmic processing, as will be described in detail below.
  • a result is generated that includes an estimate of the amniotic fluid volume.
  • the six-segment procedure ensures that the measurement process detects all amniotic fluid regions.
  • the transceiver 10 projects outgoing ultrasound signals, in this case into the uterine region of a patient, at six anatomical locations, and receives incoming echoes reflected back from the regions of interest to the transceiver 10 positioned at a given anatomical location.
  • An array of scanplane images is obtained for each anatomical location based upon the incoming echo signals.
  • Image-enhanced and segmented regions for the scanplane images are determined for each scanplane array, which may be a rotational, wedge, or translationally configured scanplane array.
  • the segmented regions are used to align or register the different scancones into one common coordinate system. Thereafter, the registered datasets are merged with each other so that the total amniotic fluid volume is computed from the resulting fused image.
  • FIG. 20 is a block diagrammatic overview of an algorithm for the registration and correction processing of the 6-section multiple image cone data sets depicted in FIG. 19 .
  • a six-section algorithm overview 1000 A includes many of the same blocks of algorithm overview 1000 depicted in FIG. 15 . However, the segmentation registration procedures are modified for the 6-section multiple image cones.
  • the subprocesses include the InputCones block 1004 , an Image Enhancement and Segmentation block 1010 , a RigidRegistration block 1014 , the FuseData block 1016 , and the CalculateVolume block 1020 .
  • the Image Enhancement and Segmentation block 1010 reduces the effects of noise, which may include speckle noise, in the data while preserving the salient edges on the image.
  • the enhanced images are then segmented by an edge-based and intensity-based method, and the results of each segmentation method are then subsequently combined.
  • the results of the combined segmentation method are then cleaned up to fill gaps and to remove outliers.
  • the area and/or the volume of the segmented regions is then computed.
  • FIG. 21 is a more detailed view of the Image Enhancement and Segmentation block 1010 of FIG. 20 .
  • the enhancement-segmentation block 1010 begins with an input data block 1010 A 2 , wherein the signals of pixel image data are subjected to a blurring and speckle removal process followed by a sharpening or deblurring process.
  • the combination of blurring and speckle removal followed by sharpening or deblurring enhances the appearance of the pixel-based input image.
  • the blurring and deblurring is achieved by a combination of heat and shock filters.
  • the inputted pixel related data from process 1010 A 2 is first subjected to a heat filter process block 1010 A 4 .
  • the heat filter block 1010 A 4 applies a Laplacian-based filter that reduces speckle noise and smooths or otherwise blurs the edges in the image.
  • the heat filter block 1010 A 4 is modified via a user-determined stored data block 1010 A 6 wherein the number of heat filter iterations and step sizes are defined by the user and are applied to the inputted data 1010 A 2 in the heat filter process block 1010 A 4 .
  • The effect of the heat filter iteration number in progressively blurring and removing speckle from an original image as the number of iteration cycles is increased is shown in FIG. 23 .
  • the pixel image data is further processed by a shock filter block 1010 A 8 .
  • the shock filter block 1010 A 8 is subjected to a user-determined stored data block 1010 A 10 wherein the number of shock filter iterations, step sizes, and gradient threshold are specified by the user.
  • the foregoing values are then applied to heat filtered pixel data in the shock filter block 1010 A 8 .
  • the effect of shock iteration number, step sizes, and gradient thresholds in reducing the blurring is seen in signal plots (a) and (b) of FIG. 24 .
  • the heat- and shock-filtered pixel data is processed in parallel along two algorithm pathways, as defined by blocks 1010 B 2 - 6 (Intensity-Based Segmentation Group) and blocks 1010 C 2 - 4 (Edge-Based Segmentation Group).
  • the Intensity-based Segmentation relies on the observation that amniotic fluid is usually darker than the rest of the image. Pixels associated with fluids are classified based upon a threshold intensity level. Thus pixels below this intensity threshold level are interpreted as fluid, and pixels above this intensity threshold are interpreted as solid or non-fluid tissues. However, pixel values within a dataset can vary widely, so a means to automatically determine a threshold level within a given dataset is required in order to distinguish between fluid and non-fluid pixels.
  • the intensity-based segmentation is divided into three steps. A first step includes estimating the fetal body and shadow regions, a second step includes determining an automatic thresholding for the fluid region after removing the body region, and a third step includes removing the shadow and fetal body regions from the potential fluid regions.
  • the Intensity-Based Segmentation Group includes a fetal body region block 1010 B 2 , wherein an estimate of the fetal shadow and body regions is obtained.
  • the fetal body regions in ultrasound images appear bright and are relatively easily detected.
  • anterior bright regions typically correspond with the dome reverberation of the transceiver 10 , and the darker appearing uterus is easily discerned against the bright pixel regions formed by the more echogenic fetal body that commonly appears posterior to the amniotic fluid region.
  • the fetal body and shadow are found in scanlines that extend between the bright dome reverberation region and the posterior bright-appearing fetal body.
  • a magnitude of the estimate of fetal and body region is then modified by a user-determined input parameter stored in a body threshold data block 1010 B 4 , and a pixel value is chosen by the user. For example, a pixel value of 40 may be selected by the user.
  • An example of the image obtained from blocks 1010 B 2 - 4 is panel (c) of FIG. 25 .
  • an automatic region threshold block 1010 B 6 is applied to this estimate to determine which pixels are fluid related and which pixels are non-fluid related.
  • the automatic region threshold block 1010 B 6 uses a version of the Otsu threshold algorithm (R M Haralick and L G Shapiro, Computer and Robot Vision, vol.
  • the Otsu algorithm determines a threshold value from an assumed bimodal pixel value histogram that generally corresponds to fluid and some soft tissue (non-fluid) such as placental or other fetal or maternal soft tissue. All pixel values less than the threshold value as determined by the Otsu algorithm are designated as potential fluid pixels.
  • the first pathway is completed by removing body regions above this threshold value in block 1010 B 8 so that the amniotic fluid regions are isolated.
  • An example of the effect of the Intensity-based segmentation group is shown in panel (d) of FIG. 25 .
  • the isolated amniotic fluid region image thus obtained from the intensity-based segmentation process is then processed for subsequent combination with the end result of the second edge-based segmentation method.
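A numpy sketch of an Otsu-style automatic threshold (block 1010 B 6) follows: the threshold that maximizes the between-class variance of the assumed bimodal histogram separates fluid from non-fluid intensities, and pixels below it become potential fluid pixels. The 256-bin histogram and function names are illustrative choices.

```python
import numpy as np

def otsu_threshold(pixels: np.ndarray, bins: int = 256) -> float:
    """Return the intensity threshold maximizing between-class variance."""
    hist, edges = np.histogram(pixels.ravel(), bins=bins)
    hist = hist.astype(float)
    centers = 0.5 * (edges[:-1] + edges[1:])
    w1 = np.cumsum(hist)                    # class-1 weight (intensities <= bin)
    w2 = hist.sum() - w1                    # class-2 weight (intensities > bin)
    m1 = np.cumsum(hist * centers)          # un-normalized class-1 mean
    mean_total = m1[-1] / hist.sum()
    # between-class variance (up to a constant factor); epsilon guards empty classes
    var_between = (mean_total * w1 - m1) ** 2 / np.maximum(w1 * w2, 1e-12)
    return float(centers[np.argmax(var_between)])

# potential_fluid = enhanced_image < otsu_threshold(enhanced_image)
```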
  • the edge-based segmentation process begins processing the shock filtered 1010 A 8 pixel data via a spatial gradients block 1010 C 2 in which the gradient magnitude of a given pixel neighborhood within the image is determined.
  • the gradient magnitude is determined by taking the X and Y derivatives using the difference kernels shown in FIG. 26 .
  • the gradient magnitude of the image is given by Equation E7:
  • pixel edge points are determined by a hysteresis threshold of gradients process block 1010 C 4 .
  • a lower and upper threshold value is selected.
  • the image is then thresholded using the lower value and a connected component labeling is carried out on the resulting image.
  • the pixel value of each connected component is measured to determine which pixel edge points have gradient magnitude pixel values equal to or greater than the upper threshold value. Those pixel edge points having gradient magnitude pixel values equal to or exceeding the upper threshold are retained. This retention of pixels having strong gradient values serves to retain selected long connected edges which have one or more high gradient points.
  • the hysteresis threshold 1010 C 4 is modified by a user-determined edge threshold block 1010 C 6 .
  • An example of an application of the second pathway is shown in panels (b) for the spatial gradients block 1010 C 2 and (c) for the threshold of gradients process block 1010 C 4 of FIG. 27 .
  • Another example of application of the edge detection block group for blocks 1010 C 2 and 1010 C 4 can also be seen in panel (e) of FIG. 25 .
  • the first and second pathways are merged at a combine region and edges process block 1010 D 2 .
  • the combining process avoids erroneous segmentation arising from either the intensity-based or edge-based segmentation processes.
  • the goal of the combining process is to ensure that good edges are reliably identified so that fluid regions are bounded by strong edges. Intensity-based segmentation may underestimate fluid volume, so that the boundaries need to be corrected using the edge-based segmentation information.
  • the beginning and end of each scanline within the segmented region is determined by searching for edge pixels on each scanline. If no edge pixels are found in the search region, the segmentation on that scanline is removed. If edge pixels are found, then the region boundary locations are moved to the location of these edge pixels.
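A simplified sketch of that per-scanline correction: a segmented run survives only if supporting edge pixels are found near its ends, and its boundaries are snapped to those edge pixels. The search window size and names are assumptions for illustration, not the device's exact parameters.

```python
def snap_run_to_edges(run, edge_indices, search=10):
    """run: (start, end) indices of a segmented span on one scanline.
    edge_indices: indices of detected edge pixels on the same scanline.
    Returns the corrected (start, end), or None to drop an unsupported run."""
    start, end = run
    near_start = [e for e in edge_indices if abs(e - start) <= search]
    near_end = [e for e in edge_indices if abs(e - end) <= search]
    if not near_start or not near_end:
        return None                                   # no edge support: remove the run
    new_start = min(near_start, key=lambda e: abs(e - start))
    new_end = min(near_end, key=lambda e: abs(e - end))
    return new_start, new_end
```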
  • Panel (f) of FIG. 25 illustrates the effects of the combining block 1010 D 2 .
  • a cleanup stage helps ensure consistency of segmented regions in a single scanplane and between scanplanes.
  • the cleanup stage uses morphological operators (such as erosion, dilation, opening, closing) using the Markov Random Fields (MRFs) as disclosed in Forbes et al. (Florence Forbes and Adrian E. Raftery, “Bayesian morphology: Fast Unsupervised Bayesian Image Analysis,” Journal of the American Statistical Association, June 1999, herein incorporated by reference).
  • the In-plane opening-closing block 1010 D 4 is a morphological operator wherein pixel regions are opened to remove pixel outliers from the segmented region, and gaps and holes in the segmented region are filled in or "closed", within a given scanplane.
  • Block 1010 D 4 uses a one-dimensional structuring element extending through five scanlines.
  • the closing-opening block is affected by a user-determined width, height, and depth parameter block 1010 D 6 .
  • an Out-of-plane Closing and Opening processing block 1010 D 8 is applied.
  • the block 1010 D 8 applies a set of out-of-plane morphological closings and openings using a one-dimensional structuring element extending through three scanlines. Pixel inconsistencies are accordingly removed between the scanplanes.
  • Panel (g) of FIG. 25 illustrates the effects of the blocks 1010 D 4 - 8 .
  • FIG. 22 is an expansion of the RigidRegistration block 1014 of FIG. 20 . Similar in purpose and general operation using the previously described ICP algorithm as used in the RigidRegistration block 1012 of FIG. 16 , the block 1014 begins with parallel inputs of a fixed Image 1014 A, a Moving Image 1014 B, and an Initial Transform input 1014 B 10 .
  • the steps of the rigid registration algorithm 1014 correct any overlaps between adjacent 3D scan cones acquired in the 6-section supine grid procedure.
  • the rigid algorithm 1014 first converts the fixed image 1014 A 2 from polar coordinate terms to Cartesian coordinate terms using the 3D Scan Convert 1014 A 4 algorithm. Separately, the moving image 1014 B 2 is also converted to Cartesian coordinates using the 3D Scan Convert 1014 B 4 algorithm. Next, the edges of the amniotic fluid regions on the fixed and moving images are determined and converted into point sets p and q, respectively, by a 3D edge detection process 1014 A 6 and 1014 B 6 .
  • the fixed image point set p undergoes a 3D distance transform process 1014 B 8 which maps every voxel in a 3D image to a number representing the distance to the closest edge point in p. Pre-computing this distance transform makes subsequent distance calculations and closest point determinations very efficient.
  • the known initial transform 1014 B 10 , for example, (6, 0, 0) for the Cartesian Tx, Ty, Tz terms and (0, 0, 0) for the θx, θy, θz Euler angle terms, for an inter-transceiver interval of 6 cm, is subsequently applied to the moving image by the transform edges 1014 B 8 block.
  • This transformed image is then subjected to the Find Closest Points on Fixed Image block 1014 C 2 , similar in operation to the block 1148 of FIG. 16 .
  • a new transform is determined in block 1014 C 4 , and the new transform is queried for convergence at decision diamond 1014 C 8 . If convergence is attained, the RigidRegistration 1014 is done at terminus 1014 C 10 . Alternatively, if convergence is not attained, then a return to the transform edges block 1014 B 8 occurs to start another iterative cycle.
  • the RigidRegistration block 1014 typically converges in less than 20 iterations. After applying the initial transformation, the entire registration process is carried out in case there are any overlapping segmented regions between any two images. Similar to the process described in connection with FIG. 16 , an overlap threshold of approximately 20% is currently set as an input parameter.
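  • The registration loop of blocks 1014 B 8 through 1014 C 10 can be sketched as below. This is a simplified, translation-only illustration (the Euler-angle terms are held fixed) rather than the full rigid registration; it pre-computes a distance transform of the fixed edge map so that closest-point lookups reduce to array indexing. Array shapes, the convergence tolerance, and the clamping behavior are assumptions for the example.

```python
import numpy as np
from scipy import ndimage

def icp_translation(fixed_edges, moving_points, t0=(6.0, 0.0, 0.0), max_iter=20, tol=1e-3):
    """Translation-only ICP between a fixed binary edge volume and moving edge points.

    fixed_edges:   bool array (z, y, x) marking edge voxels of the fixed image.
    moving_points: (N, 3) float array of edge-point coordinates (z, y, x) from the moving image.
    t0:            initial transform, e.g. a 6 cm inter-transceiver offset.
    """
    # Distance transform of the fixed edge map; also return, for every voxel, the indices of
    # its nearest edge voxel so that closest-point lookups reduce to array indexing.
    dist, (iz, iy, ix) = ndimage.distance_transform_edt(~fixed_edges, return_indices=True)
    # dist itself could be used to gauge residual registration error if desired.

    t = np.asarray(t0, dtype=float)
    for _ in range(max_iter):
        moved = moving_points + t
        idx = np.clip(np.round(moved).astype(int), 0, np.array(fixed_edges.shape) - 1)
        closest = np.stack([iz[tuple(idx.T)], iy[tuple(idx.T)], ix[tuple(idx.T)]], axis=1)
        # New translation that best aligns the moved points with their closest fixed points.
        t_new = t + (closest - moved).mean(axis=0)
        if np.linalg.norm(t_new - t) < tol:      # convergence test (cf. decision diamond 1014 C 8)
            return t_new
        t = t_new
    return t
```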
  • FIG. 23 is a 4-panel image set that shows the effect of multiple iterations of the heat filter applied to an original image.
  • the effect of heat filter iteration number in progressively blurring and removing speckle from an original image as the number of iterations increases is shown in FIG. 23 .
  • the heat filter is described by process blocks 1010 A 4 and A 6 of FIG. 21 .
  • an original image of a bladder is shown in panel (a) having visible speckle spread throughout the image.
  • Some blurring is seen with the 10 iteration image in panel (b), followed by more progressive blurring at 50 iterations in panel (c) and 100 iterations in panel (d).
  • the speckle progressively decreases.
  • FIG. 24 shows the effect of shock filtering and a combination heat-and-shock filtering to the pixel values of the image.
  • the effect of shock iteration number, step sizes, and gradient thresholds on the deblurring of a heat-filtered signal is seen in ultrasound signal plots (a) and (b) of FIG. 24 .
  • Signal plot (a) depicts a smoothed or blurred signal gradient as a sigmoidal long dashed line that is subsequently shock filtered.
  • the magnitude of the shock filtered signal (short dashed line) approaches that of the original signal (solid line) without the choppy or noisy pattern associated with speckle. For the most part there is virtually a complete overlap of the shock filtered signal with the original signal through the pixel plot range.
  • ultrasound signal plot (b) depicts the effects of applying a shock filter to a noisy (speckle rich) signal line (sinuous long dash line) that has been smoothed or blurred by the heat filter (short dashed line with sigmoidal appearance).
  • the shock filter results in a general deblurring or sharpening of the edges of the image that were previously blurred.
  • a more abrupt or steep stepped signal plot after shock filtering is obtained without significant removal of speckle.
  • Depending on the gradient threshold, step size, and iteration number imposed by block 1010 A 10 upon the shock block 1010 A 8 , different degrees of overlap between the shock-filtered line and the original line are obtained.
  • FIG. 25 is a 7-panel image set generated by the image enhancement and segmentation algorithms of FIG. 21 .
  • Panel (a) is the original uterine image.
  • Panel (b) is the image that is produced from the image enhancement processes primarily described in blocks 1010 A 4 - 6 (heat filters) and blocks 1010 A 8 - 10 (shock filters) of FIG. 21 .
  • Panel (c) shows the effects of the processing obtained from blocks 1010 B 2 - 4 (Estimate Shadow and Fetal Body Regions/Body Threshold).
  • Panel (d) is the image when processed by the Intensity-Based Segmentation Block Group 1010 B 2 - 8 .
  • Panel (e) results from application of the Edge-Based Segmentation Block Group 1010 C 2 - 6 .
  • Panel (f) shows the combined result of the intensity-based and edge-based segmentations produced by the combining block 1010 D 2 .
  • Panel (g) illustrates the effects of the In-plane and Out-of-plane opening and closing processing blocks 1010 D 4 - 8 .
  • FIG. 26 is a pixel difference kernel for obtaining X and Y derivatives to determine pixel gradient magnitudes for edge-based segmentation. As illustrated, a simplest-case convolution is used for the first derivative computation, where K x and K y are convolution constants.
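  • A brief sketch of this first-derivative convolution is given below; the backward-difference kernel values shown are the simplest case and are illustrative assumptions rather than the exact kernels of FIG. 26.

```python
import numpy as np
from scipy import ndimage

# Simplest-case backward-difference kernels for the x- and y-derivatives (illustrative values).
Kx = np.array([[-1.0, 1.0, 0.0]])
Ky = Kx.T

def gradient_magnitude(image):
    """Pixel gradient magnitude computed from the x- and y-derivative images."""
    ix = ndimage.convolve(image.astype(float), Kx, mode="nearest")
    iy = ndimage.convolve(image.astype(float), Ky, mode="nearest")
    return np.sqrt(ix**2 + iy**2)
```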
  • FIG. 27 is a 3-panel image set showing the progressive demarcation or edge detection of organ wall interfaces arising from edge-based segmentation algorithms.
  • Panel (a) is the enhanced input image.
  • Panel (b) is the image result when the enhanced input image is subjected to the spatial gradients block 1010 C 2 .
  • Panel (c) is the image result when the enhanced and spatial gradients 1010 C 2 processed image is further processed by the threshold of gradients process block 1010 C 4 .
  • Appendix 1 Examples of Algorithmic Steps.
  • Source code of the algorithms of the present invention is provided in Appendix 2: Matlab Source Code.
  • the clarity of ultrasound imaging requires the efficient coordination of ultrasound transfer or communication to and from an examined subject, image acquisition from the communicated ultrasound, and microprocessor-based image processing. Oftentimes the examined subject moves while image acquisition occurs, the ultrasound transducer moves, and/or movement occurs within the scanned region of interest; such motion requires the refinements described below to secure clear images.
  • the ultrasound transceivers or DCD devices developed by Diagnostic Ultrasound are capable of collecting in vivo three-dimensional (3-D) cone-shaped ultrasound images of a patient. Based on these 3-D ultrasound images, various applications have been developed such as bladder volume and mass estimation.
  • a pulsed ultrasound field is transmitted into the body, and the back-scattered “echoes” are detected as a one-dimensional (1-D) voltage trace, which is also referred to as an RF line.
  • a set of 1-D data samples is interpolated to form a two-dimensional (2-D) or 3-D ultrasound image.
  • FIGS. 1A-D depict a partial schematic and a partial isometric view of a transceiver, a scan cone comprising a rotational array of scan planes, and a scan plane of the array of various ultrasound harmonic imaging systems 60 A-D illustrated in FIGS. 3 and 4 below.
  • FIG. 1A is a side elevation view of an ultrasound transceiver 10 A that includes an inertial reference unit, according to an embodiment of the invention.
  • the transceiver 10 A includes a transceiver housing 18 having an outwardly extending handle 12 suitably configured to allow a user to manipulate the transceiver 10 A relative to a patient.
  • the handle 12 includes a trigger 14 that allows the user to initiate an ultrasound scan of a selected anatomical portion, and a cavity selector 16 .
  • the cavity selector 16 will be described in greater detail below.
  • the transceiver 10 A also includes a transceiver dome 20 that contacts a surface portion of the patient when the selected anatomical portion is scanned.
  • the dome 20 generally provides an appropriate acoustical impedance match to the anatomical portion and/or permits ultrasound energy to be properly focused as it is projected into the anatomical portion.
  • the transceiver 10 A further includes one, or preferably an array of, separately excitable ultrasound transducer elements (not shown in FIG. 1A ) positioned within or otherwise adjacent to the housing 18 .
  • the transducer elements may be suitably positioned within the housing 18 or otherwise to project ultrasound energy outwardly from the dome 20 , and to permit reception of acoustic reflections generated by internal structures within the anatomical portion.
  • the one or more arrays of ultrasound elements may include a one-dimensional or a two-dimensional array of piezoelectric elements that may be moved within the housing 18 by a motor. Alternately, the array may be stationary with respect to the housing 18 so that the selected anatomical region may be scanned by selectively energizing the elements in the array.
  • a directional indicator panel 22 includes a plurality of arrows that may be illuminated for initial targeting and for guiding a user toward an organ or structure within an ROI.
  • If the organ is centered relative to the transceiver 10 A, the directional arrows may not be illuminated. If the organ is off-center, an arrow or set of arrows may be illuminated to direct the user to reposition the transceiver 10 A acoustically at a second or subsequent dermal location of the subject.
  • the acoustic coupling may be achieved by liquid sonic gel applied to the skin of the patient or by sonic gel pads against which the transceiver dome 20 is placed.
  • the directional indicator panel 22 may be presented on the display 54 of computer 52 in harmonic imaging subsystems described in FIGS. 3 and 4 below, or alternatively, presented on the transceiver display 24 .
  • Transceiver 10 A includes an inertial reference unit that includes an accelerometer 22 and/or gyroscope 23 positioned preferably within or adjacent to housing 18 .
  • the accelerometer 22 may be operable to sense an acceleration of the transceiver 10 A, preferably relative to a coordinate system, while the gyroscope 23 may be operable to sense an angular velocity of the transceiver 10 A relative to the same or another coordinate system.
  • the gyroscope 23 may be of conventional configuration that employs dynamic elements, or it may be an optoelectronic device, such as the known optical ring gyroscope.
  • the accelerometer 22 and the gyroscope 23 may include a commonly packaged and/or solid-state device.
  • One suitable commonly packaged device may be the MT6 miniature inertial measurement unit, available from Omni Instruments, Incorporated, although other suitable alternatives exist.
  • the accelerometer 22 and/or the gyroscope 23 may include commonly packaged micro-electromechanical system (MEMS) devices, which are commercially available from MEMSense, Incorporated. As described in greater detail below, the accelerometer 22 and the gyroscope 23 cooperatively permit the determination of positional and/or angular changes relative to a known position that is proximate to an anatomical region of interest in the patient.
  • the transceiver 10 A includes (or is capable of being in signal communication with) a display 24 operable to view processed results from an ultrasound scan, and/or to allow an operational interaction between the user and the transceiver 10 A.
  • the display 24 may be configured to display alphanumeric data that indicates a proper and/or an optimal position of the transceiver 10 A relative to the selected anatomical portion.
  • Display 24 may be used to view two- or three-dimensional images of the selected anatomical region.
  • the display 24 may be a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, or other suitable display devices operable to present alphanumeric data and/or graphical images to a user.
  • a cavity selector 16 may be operable to adjustably adapt the transmission and reception of ultrasound signals to the anatomy of a selected patient.
  • the cavity selector 16 adapts the transceiver 10 A to accommodate various anatomical details of male and female patients.
  • the transceiver 10 A may be suitably configured to locate a single cavity, such as a urinary bladder in the male patient.
  • the transceiver 10 A may be configured to image an anatomical portion having multiple cavities, such as a bodily region that includes a bladder and a uterus.
  • Alternate embodiments of the transceiver 10 A may include a cavity selector 16 configured to select a single cavity scanning mode, or a multiple cavity-scanning mode that may be used with male and/or female patients.
  • the cavity selector 16 may thus permit a single cavity region to be imaged, or a multiple cavity region, such as a region that includes a lung and a heart to be imaged.
  • the transceiver dome 20 of the transceiver 10 A may be positioned against a surface portion of a patient that is proximate to the anatomical portion to be scanned.
  • the user actuates the transceiver 10 A by depressing the trigger 14 .
  • the transceiver 10 A transmits ultrasound signals into the body and receives corresponding return echo signals that may be at least partially processed by the transceiver 10 A to generate an ultrasound image of the selected anatomical portion.
  • the transceiver 10 A transmits ultrasound signals in a range that extends from approximately two megahertz (MHz) to approximately ten MHz.
  • the transceiver 10 A may be operably coupled to an ultrasound system that may be configured to generate ultrasound energy at a predetermined frequency and/or pulse repetition rate and to transfer the ultrasound energy to the transceiver 10 A.
  • the system also includes a processor that may be configured to process reflected ultrasound energy that is received by the transceiver 10 A to produce an image of the scanned anatomical region.
  • the system generally includes a viewing device, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display device, or other similar display devices, that may be used to view the generated image.
  • the system may also include one or more peripheral devices that cooperatively assist the processor to control the operation of the transceiver 10 A, such as a keyboard, a pointing device, or other similar devices.
  • the transceiver 10 A may be a self-contained device that includes a microprocessor positioned within the housing 18 and software associated with the microprocessor to operably control the transceiver 10 A, and to process the reflected ultrasound energy to generate the ultrasound image.
  • the display 24 may be used to display the generated image and/or to view other information associated with the operation of the transceiver 10 A.
  • the information may include alphanumeric data that indicates a preferred position of the transceiver 10 A prior to performing a series of scans.
  • the transceiver 10 A may be operably coupled to a general-purpose computer, such as a laptop or a desktop computer that includes software that at least partially controls the operation of the transceiver 10 A, and also includes software to process information transferred from the transceiver 10 A, so that an image of the scanned anatomical region may be generated.
  • the transceiver 10 A may also be optionally equipped with electrical contacts to make communication with receiving cradles 50 as discussed in FIGS. 3 and 4 below.
  • Although the transceiver 10 A of FIG. 1A may be used in any of the foregoing embodiments, other transceivers may also be used.
  • the transceiver may lack one or more features of the transceiver 10 A.
  • a suitable transceiver need not be a manually portable device, and/or need not have a top-mounted display, and/or may selectively lack other features or exhibit further differences.
  • FIG. 1A is a graphical representation of a plurality of scan planes that form a three-dimensional (3D) array having a substantially conical shape.
  • An ultrasound scan cone 40 formed by a rotational array of two-dimensional scan planes 42 projects outwardly from the dome 20 of the transceivers 10 A.
  • the other transceiver embodiments 10 B- 10 E may also be configured to develop a scan cone 40 formed by a rotational array of two-dimensional scan planes 42 .
  • the pluralities of scan planes 40 may be oriented about an axis 11 extending through the transceivers 10 A- 10 E.
  • each of the scan planes 42 may be positioned about the axis 11 , preferably, but not necessarily, at a predetermined angular position θ.
  • the scan planes 42 may be mutually spaced apart by angles θ 1 and θ 2 .
  • the scan lines within each of the scan planes 42 may be spaced apart by angles φ 1 and φ 2 .
  • although the angles θ 1 and θ 2 are depicted as approximately equal, it is understood that the angles θ 1 and θ 2 may have different values.
  • although the angles φ 1 and φ 2 are shown as approximately equal, the angles φ 1 and φ 2 may also have different values.
  • Other scan cone configurations are possible. For example, a wedge-shaped scan cone, or other similar shapes may be generated by the transceiver 10 A, 10 B and 10 C.
  • FIG. 1B is a graphical representation of a scan plane 42 .
  • the scan plane 42 includes the peripheral scan lines 44 and 46 , and an internal scan line 48 having a length r that extends outwardly from the transceivers 10 A- 10 E.
  • a selected point along the peripheral scan lines 44 and 46 and the internal scan line 48 may be defined with reference to the distance r and angular coordinate values θ and φ.
  • the length r preferably extends to approximately 18 to 20 centimeters (cm), although any length is possible.
  • Particular embodiments include approximately seventy-seven scan lines 48 that extend outwardly from the dome 20 , although any number of scan lines is possible.
  • FIG. 1C is a graphical representation of a plurality of scan lines emanating from a hand-held ultrasound transceiver forming a single scan plane 42 extending through a cross-section of an internal bodily organ.
  • the number and location of the internal scan lines emanating from the transceivers 10 A- 10 E within a given scan plane 42 may thus be distributed at different positional coordinates about the axis line 11 as required to sufficiently visualize structures or images within the scan plane 42 .
  • four portions of an off-centered region-of-interest (ROI) are exhibited as irregular regions 49 . Three portions may be viewable within the scan plane 42 in totality, and one may be truncated by the peripheral scan line 44 .
  • the angular movement of the transducer may be mechanically effected and/or it may be electronically or otherwise generated.
  • the number of lines 48 and the length of the lines may vary, so that the tilt angle φ sweeps through angles approximately between −60° and +60° for a total arc of approximately 120°.
  • the transceiver 10 A may be configured to generate approximately seventy-seven scan lines between the first limiting scan line 44 and a second limiting scan line 46 .
  • each of the scan lines has a length of approximately 18 to 20 centimeters (cm).
  • the angular separation between adjacent scan lines 48 ( FIG. 1B ) may be uniform or non-uniform.
  • the angular separation φ 1 and φ 2 may be about 1.5°.
  • the angular separation φ 1 and φ 2 may be a sequence wherein adjacent angles may be ordered to include angles of 1.5°, 6.8°, 15.5°, 7.2°, and so on, where a 1.5° separation is between a first scan line and a second scan line, a 6.8° separation is between the second scan line and a third scan line, a 15.5° separation is between the third scan line and a fourth scan line, a 7.2° separation is between the fourth scan line and a fifth scan line, and so on.
  • the angular separation between adjacent scan lines may also be a combination of uniform and non-uniform angular spacings, for example, a sequence of angles may be ordered to include 1.5°, 1.5°, 1.5°, 7.2°, 14.3°, 20.2°, 8.0°, 8.0°, 8.0°, 4.3°, 7.8°, and so on.
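  • For illustration, the sketch below converts a sample at depth r on a scan line tilted by angle φ within a scan plane rotated by angle θ about axis 11 into Cartesian coordinates; the axis convention is an assumption made for the example and is not taken from the figures.

```python
import numpy as np

def scanline_sample_to_cartesian(r, phi, theta):
    """Convert a sample at depth r on a scan line tilted by phi within a scan plane
    rotated by theta about axis 11 into Cartesian (x, y, z).

    Convention (an assumption for this example): z is depth along axis 11, phi is the
    in-plane tilt of the scan line, and theta is the rotational position of the plane.
    """
    x = r * np.sin(phi) * np.cos(theta)
    y = r * np.sin(phi) * np.sin(theta)
    z = r * np.cos(phi)
    return x, y, z

# Example: 77 scan lines spanning roughly -60 to +60 degrees, 20 cm deep,
# in a scan plane rotated 30 degrees about the axis.
phis = np.deg2rad(np.linspace(-60.0, 60.0, 77))
points = [scanline_sample_to_cartesian(20.0, p, np.deg2rad(30.0)) for p in phis]
```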
  • FIG. 1D is an isometric view of an ultrasound scan cone that projects outwardly from the transceivers of FIGS. 1A-E .
  • Three-dimensional images of a region of interest may be presented within a scan cone 40 that comprises a plurality of 2D images formed in an array of scan planes 42 .
  • a dome cutout 41 that is complementary to the dome 20 of the transceivers 10 A- 10 E is shown at the top of the scan cone 40 .
  • FIG. 2 depicts a partial schematic and partial isometric and side view of a transceiver, and a scan cone array comprised of 3D-distributed scan lines in an alternate embodiment of an ultrasound harmonic imaging system.
  • a plurality of three-dimensional (3D) distributed scan lines emanate from a transceiver and cooperatively form a scan cone 30 .
  • each of the scan lines has a length r that projects outwardly from the transceivers 10 A- 10 E of FIGS. 1A-1E .
  • the transceiver 10 A emits 3D-distributed scan lines within the scan cone 30 that may be one-dimensional ultrasound A-lines.
  • the other transceiver embodiments 10 B- 10 E may also be configured to emit 3D-distributed scan lines.
  • these 3D-distributed A-lines define the conical shape of the scan cone 30 .
  • the ultrasound scan cone 30 extends outwardly from the dome 20 of the transceiver 10 A, 10 B and 10 C centered about an axis line 11 .
  • the 3D-distributed scan lines of the scan cone 30 include a plurality of internal and peripheral scan lines that may be distributed within a volume defined by a perimeter of the scan cone 30 . Accordingly, the peripheral scan lines 31 A- 31 F define an outer surface of the scan cone 30 , while the internal scan lines 34 A- 34 C may be distributed between the respective peripheral scan lines 31 A- 31 F.
  • Scan line 34 B may be generally collinear with the axis 11 , and the scan cone 30 may be generally and coaxially centered on the axis line 11 .
  • the locations of the internal and peripheral scan lines may be further defined by an angular spacing from the center scan line 34 B and between internal and peripheral scan lines.
  • the angular spacing between scan line 34 B and peripheral or internal scan lines may be designated by angle Φ and angular spacings between internal or peripheral scan lines may be designated by angle Ø.
  • the angles Φ 1 , Φ 2 , and Φ 3 respectively define the angular spacings from scan line 34 B to scan lines 34 A, 34 C, and 31 D.
  • angles Ø 1 , Ø 2 , and Ø 3 respectively define the angular spacings between scan line 31 B and 31 C, 31 C and 34 A, and 31 D and 31 E.
  • the plurality of peripheral scan lines 31 A-E and the plurality of internal scan lines 34 A-D may be three dimensionally distributed A-lines (scan lines) that are not necessarily confined within a scan plane, but instead may sweep throughout the internal regions and along the periphery of the scan cone 30 .
  • a given point within the scan cone 30 may be identified by the coordinates r, Φ, and Ø whose values generally vary.
  • the number and location of the internal scan lines emanating from the transceivers 10 A- 10 E may thus be distributed within the scan cone 30 at different positional coordinates as required to sufficiently visualize structures or images within a region of interest (ROI) in a patient.
  • the angular movement of the ultrasound transducer within the transceiver 10 A- 10 E may be mechanically effected, and/or it may be electronically generated.
  • the number of lines and the length of the lines may be uniform or otherwise vary, so that angle Φ sweeps through angles approximately between −60° between scan line 34 B and 31 A, and +60° between scan line 34 B and 31 B.
  • angle Φ in this example presents a total arc of approximately 120°.
  • the transceiver 10 A, 10 B and 10 C may be configured to generate a plurality of 3D-distributed scan lines within the scan cone 30 having a length r of approximately 18 to 20 centimeters (cm).
  • FIG. 3 is a schematic illustration of a server-accessed local area network in communication with a plurality of ultrasound harmonic imaging systems.
  • An ultrasound harmonic imaging system 100 includes one or more personal computer devices 52 that may be coupled to a server 56 by a communications system 55 .
  • the devices 52 may be, in turn, coupled to one or more ultrasound transceivers 10 A and/or 10 B, for example in the ultrasound harmonic sub-systems 60 A- 60 D.
  • Ultrasound based images of organs or other regions of interest derived from either the signals of echoes from fundamental frequency ultrasound and/or harmonics thereof, may be shown within scan cone 30 or 40 presented on display 54 .
  • the server 56 may be operable to provide additional processing of ultrasound information, or it may be coupled to still other servers (not shown in FIG. 3 ) and devices.
  • Transceivers 10 A or 10 B may be in wireless communication with computer 52 in sub-system 60 A, in wired signal communication in sub-system 60 B, in wireless communication with computer 52 via receiving cradle 50 in sub-system 60 C, or in wired communication with computer 52 via receiving cradle 50 in sub-system 60 D.
  • FIG. 4 is a schematic illustration of the Internet in communication with a plurality of ultrasound harmonic imaging systems.
  • An Internet system 110 may be coupled or otherwise in communication with the ultrasound harmonic sub-systems 60 A- 60 D.
  • FIG. 5 schematically depicts a master method flow chart algorithm 120 to acquire a clarity enhanced ultrasound image.
  • Algorithm 120 begins with process block 150 , in which an acoustic coupling or sonic gel is applied to the dermal surface near the region-of-interest (ROI) using a degassing gel dispenser. Embodiments illustrating the degassing gel dispenser and its uses are depicted in FIGS. 36A-G below.
  • decision diamond 170 is reached with the query “Targeting a moving structure?”, and if negative to this query, algorithm 120 continues to process block 200 .
  • the ultrasound transceiver dome 20 of transceivers 10 A,B is placed into the dermal-residing sonic gel and pulsed ultrasound energy is transmitted to the ROI. Thereafter, echoes of the fundamental ultrasound frequency and/or harmonics thereof are captured by the transceiver 10 A,B and converted to echogenic signals. If the answer to decision diamond 170 is affirmative for targeting a moving structure within the ROI, the ROI is re-targeted, at process block 300 , using optical flow real-time analysis.
  • algorithm 120 continues with processing blocks 400 A or 400 B.
  • Processing blocks 400 A and 400 B process echogenic datasets of the echogenic signals from process blocks 200 and 300 using point spread function algorithms to compensate for or otherwise suppress motion-induced reverberations within the ROI echogenic data sets.
  • Processing block 400 A employs nonparametric analysis
  • processing block 400 B employs parametric analysis, as described in FIG. 9 below.
  • algorithm 120 continues with processing block 500 to segment image sections derived from the distortion-compensated data sets.
  • at process block 600 , areas of the segmented sections within 2D images and/or 3D volumes are determined.
  • master algorithm 120 completes at process block 700 in which segmented structures within the static or moving ROI are displayed along with any segmented section area and/or volume measurements.
  • FIG. 6 is an expansion of sub-algorithm 150 of master algorithm 120 of FIG. 5 .
  • sub-algorithm 150 starts at process block 152 wherein a metered volume of sonic gel is applied from the volume-controlled dispenser to the dermal surface believed to overlap the ROI. Thereafter, at process block 154 , any gas pockets within the applied gel are expelled by a roller pressing action. Sub-algorithm 150 is then completed and exits to sub-algorithm 200 .
  • FIG. 7 is an expansion of sub-algorithms 200 of FIG. 5 .
  • sub-algorithm 200 starts at process block 202 wherein the transceiver dome 20 of transceivers 10 A,B is placed into the gas-purged sonic gel to obtain a firm sonic coupling, and then at process block 206 , pulsed frequency ultrasound is transmitted to the underlying ROI. Thereafter, at process block 210 , ultrasound echoes from the ROI and any intervening structure are collected by the transceivers 10 A,B and converted to the echogenic data sets for presentation of an image of the ROI.
  • sub-algorithm 200 continues to process block 222 in which the transceiver is moved to a new anatomical location for re-routing to process block 202 .
  • the ultrasound transceiver dome 20 of transceivers 10 A,B is placed into the dermal-residing sonic gel and pulsed ultrasound energy is transmitted to the ROI.
  • sub-algorithm 200 continues to process block 226 in which a 3D echogenic data set array of the ROI is acquired using at least one of an ultrasound fundamental and/or harmonic frequency. Sub-algorithm 200 is then completed and exits to sub-algorithm 300 .
  • FIG. 8 is an expansion of sub-algorithm 300 of master algorithm illustrated in FIG. 5 .
  • sub-algorithm 300 begins in processing block 302 by making a transceiver 10 A,B-to-ROI sonic coupling similar to process block 202 , transmitting pulsed frequency ultrasound at process block 306 , and thereafter, at processing block 310 , acquiring ultrasound echoes, converting them to echogenic data sets, and presenting a currently displayed image “i” of the ROI, which is compared with any predecessor image “i-1” of the ROI, if available.
  • at process block 314 , pixel movement along the Cartesian axes is ascertained to determine the X-axis and Y-axis pixel centers-of-optical-flow, and similarly, at process block 318 , pixel movement along the phi angle is ascertained to determine a rotational center-of-optical-flow.
  • the optical flow velocity maps are then examined to ascertain whether the axial and rotational vectors exceed a pre-defined threshold OFR value.
  • decision diamond 326 is reached with the query “Does optical flow velocity map match the expected pattern for the structure being imaged?”, and if negative, sub-algorithm 300 re-routes to process block 306 for retransmission of ultrasound to the ROI via the sonically coupled transceiver 10 A,B. If affirmative for a matched velocity map and expected pattern of the structure being imaged, sub-algorithm 300 continues with process block 330 in which a 3D echogenic data set array of the ROI is acquired using at least one of an ultrasound fundamental and/or harmonic frequency. Sub-algorithm 300 is then completed and exits to sub-algorithms 400 A and 400 B.
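  • As a simplified stand-in for the per-pixel velocity maps of blocks 314 and 318 (not the patented computation), the sketch below estimates a single global motion vector between a current image “i” and its predecessor “i-1” from the optical-flow constraint; its magnitude could then be tested against a pre-defined threshold OFR value.

```python
import numpy as np

def global_optical_flow(prev_img, curr_img):
    """Least-squares estimate of a single (vx, vy) motion vector between two frames,
    using the optical-flow constraint Ix*vx + Iy*vy + It = 0 summed over all pixels.
    A simplified stand-in for the per-pixel velocity maps of sub-algorithm 310."""
    prev_img = prev_img.astype(float)
    curr_img = curr_img.astype(float)
    iy, ix = np.gradient(prev_img)            # spatial gradients (axis 0 = y, axis 1 = x)
    it = curr_img - prev_img                  # temporal gradient between image i-1 and image i
    A = np.array([[np.sum(ix * ix), np.sum(ix * iy)],
                  [np.sum(ix * iy), np.sum(iy * iy)]])
    b = -np.array([np.sum(ix * it), np.sum(iy * it)])
    vx, vy = np.linalg.solve(A, b)            # assumes the image has enough texture (A invertible)
    return vx, vy

# The magnitude of (vx, vy) could then be compared against a pre-defined threshold
# (the OFR value) to decide whether the ROI has moved between frames.
```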
  • FIG. 9 depicts expansion of subalgorithms 400 A and 400 B of FIG. 5 .
  • Sub-algorithm 400 A employs nonparametric pulse estimation and 400 B employs parametric pulse estimation.
  • Sub-algorithm 400 A describes an implementation of the CLEAN algorithm for reducing reverberation and noise in the ultrasound signals and comprises an RF line processing block 400 A- 2 , a non-parametric pulse estimation block 400 A- 4 , a CLEAN iteration block 400 A- 6 , a decision diamond block 400 A- 8 having the query “STOP?”, and a Scan Convert processing block 400 A- 10 .
  • each RF line uses its own unique estimate of the point spread function of the transducer (or pulse estimate).
  • the algorithm is iterative, re-routing to the Non parametric pulse estimation block 400 A- 4 , in that the point spread function is estimated, the CLEAN sub-algorithm is applied, and the pulse is then re-estimated from the output of the CLEAN sub-algorithm.
  • the iterations are stopped after a maximum number of iterations is reached or the changes in the signal are sufficiently small. Thereafter, once the iteration has stopped, the signals are converted for presentation as part of a scan plane image at process block 400 A- 10 .
  • Sub-algorithm 400 A is then completed and exits to sub-algorithms 500 .
  • Sub-algorithm 400 B (parametric analysis) employs an implementation of the CLEAN algorithm that is not iterative.
  • Sub-algorithm 400 B comprises an RF line processing block 400 B- 2 , a parametric pulse estimation block 400 B- 4 , a CLEAN algorithm block 400 B- 6 , a CLEAN iteration block 400 B- 8 , and a Scan Convert processing block 400 B- 10 .
  • the point spread function of the transducer is estimated once and becomes a priori information used in the CLEAN algorithm.
  • a single estimate of the pulse is applied to all RF lines in a scan plane and the CLEAN algorithm is applied once to each line.
  • the signal output is then converted for presentation as part of a scan plane image at process block 400 B- 10 .
  • Sub-algorithm 400 B is then completed and exits to sub-algorithms 500 .
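  • The per-RF-line CLEAN step can be sketched generically as below: the pulse estimate is matched against the residual line, the strongest reflector is found, and a scaled, shifted copy of the pulse is subtracted, repeating until the residual is small. The loop gain, stopping rule, and alignment details are illustrative assumptions rather than the disclosed implementation.

```python
import numpy as np

def clean_rf_line(rf, pulse, gain=0.5, max_components=200, tol=1e-3):
    """Generic CLEAN deconvolution of one RF line given a pulse (point spread function) estimate.

    Repeatedly finds the strongest match of the pulse in the residual, subtracts a scaled,
    shifted copy of the pulse, and records the reflector position and amplitude.
    """
    residual = np.asarray(rf, dtype=float).copy()
    pulse = np.asarray(pulse, dtype=float)
    pulse_energy = float(np.dot(pulse, pulse))
    stop_level = tol * np.max(np.abs(residual))
    components = []
    for _ in range(max_components):
        corr = np.correlate(residual, pulse, mode="same")
        k = int(np.argmax(np.abs(corr)))
        amp_full = corr[k] / pulse_energy          # least-squares amplitude at the best lag
        if np.abs(amp_full) < stop_level:          # changes in the signal are sufficiently small
            break
        amp = gain * amp_full
        shifted = np.zeros_like(residual)          # scaled, shifted copy of the pulse centered at k
        start = k - len(pulse) // 2
        lo, hi = max(start, 0), min(start + len(pulse), len(residual))
        shifted[lo:hi] = pulse[lo - start:hi - start]
        residual -= amp * shifted
        components.append((k, amp))
    return components, residual
```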
  • FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5 .
  • 3D data sets from processing blocks 400 A- 10 or 400 B- 10 of sub-algorithms 400 A or 400 B are entered at input data process block 504 and then undergo a 2-step image enhancement procedure at process block 506 .
  • the 2-step image enhancement includes performing a heat filter to reduce noise followed by a shock filter to sharpen edges of structures within the 3D data sets.
  • the heat and shock filters are partial differential equations (PDE) defined respectively in Equations E1 and E2 below:
  • u in the heat filter represents the image being processed.
  • the image u is 2D, and is comprised of an array of pixels arranged in rows along the x-axis, and an array of pixels arranged in columns along the y-axis.
  • I is the initial input image pixel intensity.
  • the value of I depends on the application, and commonly occurs within ranges consistent with the application. For example, I can be as low as 0 to 1, or occupy middle ranges from 0 to 127 or 0 to 512. Similarly, I may have values occupying higher ranges of 0 to 1024 and 0 to 4096, or greater.
  • Equation E9 relates to equation E6 as follows:
  • t is a threshold on the pixel gradient value ‖∇u‖.
  • the combination of heat filtering and shock filtering produces an enhanced image ready to undergo the intensity-based and edge-based segmentation algorithms as discussed below.
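  • As an illustration only, the sketch below implements a standard explicit heat-diffusion update and an Osher-Rudin-style shock-filter update, which are assumed stand-ins for Equations E1 and E2; the step sizes, iteration counts, and gradient threshold t used here are illustrative.

```python
import numpy as np
from scipy import ndimage

LAPLACIAN = np.array([[0.0, 1.0, 0.0],
                      [1.0, -4.0, 1.0],
                      [0.0, 1.0, 0.0]])

def heat_filter(u, iterations=50, dt=0.1):
    """Explicit heat-diffusion filtering (assumed form): noise and speckle are blurred away."""
    u = u.astype(float).copy()
    for _ in range(iterations):
        u += dt * ndimage.convolve(u, LAPLACIAN, mode="nearest")
    return u

def shock_filter(u, iterations=20, dt=0.1, grad_thresh=0.0):
    """Osher-Rudin-style shock filtering (assumed form): edges blurred by the heat filter
    are re-sharpened by moving pixel values against the sign of the Laplacian."""
    u = u.astype(float).copy()
    for _ in range(iterations):
        gy, gx = np.gradient(u)
        grad_mag = np.sqrt(gx**2 + gy**2)
        grad_mag[grad_mag < grad_thresh] = 0.0         # optional gradient threshold t
        u -= dt * np.sign(ndimage.convolve(u, LAPLACIAN, mode="nearest")) * grad_mag
    return u

# Two-step enhancement of block 506: denoise first, then re-sharpen.
# enhanced = shock_filter(heat_filter(noisy_image))
```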
  • the enhanced 3D data sets are then subjected to a parallel process of intensity-based segmentation at process block 510 and edge-based segmentation at process block 512 .
  • the intensity-based segmentation step uses a “k-means” intensity clustering technique where the enhanced image is subjected to a categorizing “k-means” clustering algorithm.
  • the “k-means” algorithm categorizes pixel intensities into white, gray, and black pixel groups.
  • the k-means algorithm is an iterative algorithm comprising four steps. First, cluster boundaries are initialized by defining a minimum and a maximum pixel intensity value for each of the white, gray, and black groups or k-clusters so that the clusters are equally spaced across the entire intensity range. Second, each pixel is assigned to one of the white, gray, or black k-clusters based on the currently set cluster boundaries. Third, a mean intensity is calculated for each pixel intensity k-cluster or group based on the current assignment of pixels into the different k-clusters; the calculated mean intensity is defined as a cluster center, and new cluster boundaries are then determined as midpoints between cluster centers.
  • the fourth and final step of intensity-based segmentation determines whether the cluster boundaries significantly change locations from their previous values. Should the cluster boundaries change significantly, the algorithm iterates back to the second step until the cluster centers do not change significantly between iterations. Visually, the clustering process is manifest in the segmented image, and repeated iterations continue until the segmented image does not change between iterations.
  • each image is clustered independently of the neighboring images.
  • the entire volume is clustered together. To make this step faster, pixels are sampled at a down-sampling factor of 2 or any other multiple sampling rate factor before determining the cluster boundaries. The cluster boundaries determined from the down-sampled data are then applied to the entire data set.
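  • A compact sketch of the three-cluster intensity k-means described above, including the down-sampling shortcut, is given below; the cluster count, sampling factor, and convergence tolerance are illustrative assumptions.

```python
import numpy as np

def kmeans_intensity_clusters(volume, k=3, sample_step=2, max_iter=50, tol=0.5):
    """Cluster pixel intensities into k groups (e.g. black, gray, white).

    Cluster centers are estimated on a down-sampled copy of the data and the resulting
    boundaries are then applied to every pixel, as described above."""
    data = np.asarray(volume, dtype=float)
    sample = data[(slice(None, None, sample_step),) * data.ndim].ravel()

    # Step 1: centers equally spaced across the entire intensity range.
    centers = np.linspace(sample.min(), sample.max(), k)
    for _ in range(max_iter):
        # Step 2: assign each sampled pixel to the nearest cluster center.
        labels = np.argmin(np.abs(sample[:, None] - centers[None, :]), axis=1)
        # Step 3: recompute each cluster center as the mean intensity of its members.
        new_centers = np.array([sample[labels == i].mean() if np.any(labels == i) else centers[i]
                                for i in range(k)])
        # Step 4: stop when the centers (and hence the boundaries) no longer move significantly.
        if np.max(np.abs(new_centers - centers)) < tol:
            centers = new_centers
            break
        centers = new_centers

    # Cluster boundaries are midpoints between adjacent centers; label every pixel with them.
    boundaries = (centers[:-1] + centers[1:]) / 2.0
    return np.digitize(data, boundaries), centers
```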
  • the edge-based segmentation process block 512 uses a sequence of four sub-algorithms.
  • the sequence includes a spatial gradients algorithm, a hysteresis threshold algorithm, a Region-of-Interest (ROI) algorithm, and a matching edges filter algorithm.
  • the spatial gradient algorithm computes the x-directional and y-directional spatial gradients of the enhanced image.
  • the hysteresis threshold algorithm detects salient edges. Once the edges are detected, the regions defined by the edges are selected by a user employing the ROI algorithm to select regions-of-interest deemed relevant for analysis.
  • the edge points can be easily determined by taking x- and y-derivatives using backward differences along x- and y-directions.
  • the pixel gradient magnitude ∥I∥ is then computed from the x- and y-derivative images as given in equation E5: ∥I∥ = √(Ix² + Iy²), where Ix² is the square of the x-derivative of intensity along the x-axis and Iy² is the square of the y-derivative of intensity along the y-axis.
  • Significant edge points are then determined by thresholding the gradient magnitudes using a hysteresis thresholding operation.
  • Other thresholding methods could also be used.
  • In hysteresis thresholding, two threshold values, a lower threshold and a higher threshold, are used. First, the image is thresholded at the lower threshold value and a connected component labeling is carried out on the resulting image. Next, each connected edge component is preserved which has at least one edge pixel having a gradient magnitude greater than the upper threshold. This kind of thresholding scheme is good at retaining long connected edges that have one or more high-gradient points.
  • the two thresholds are automatically estimated.
  • the upper gradient threshold is estimated at a value such that at most 97% of the image pixels are marked as non-edges.
  • the lower threshold is set at 50% of the value of the upper threshold. These percentages could be different in different implementations.
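  • The automatic hysteresis thresholding described above can be sketched as follows; the percentile-based upper threshold and the 50% lower threshold follow the text, while the connected-component machinery and variable names are illustrative.

```python
import numpy as np
from scipy import ndimage

def hysteresis_threshold(grad_mag, upper_percentile=97.0, lower_fraction=0.5):
    """Keep connected edge components from the low-threshold image that contain
    at least one pixel whose gradient magnitude exceeds the high threshold."""
    upper = np.percentile(grad_mag, upper_percentile)   # ~97% of pixels become non-edges
    lower = lower_fraction * upper                      # lower threshold at 50% of the upper

    low_mask = grad_mag >= lower
    labels, n = ndimage.label(low_mask)                 # connected-component labeling
    if n == 0:
        return np.zeros(grad_mag.shape, dtype=bool)

    # Maximum gradient magnitude within each connected component.
    comp_max = ndimage.maximum(grad_mag, labels=labels, index=np.arange(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = comp_max >= upper
    return keep[labels]
```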
  • edge points that lie within a desired region-of-interest are selected. This region of interest algorithm excludes points lying at the image boundaries and points lying too close to or too far from the transceivers 10 A,B.
  • the matching edge filter is applied to remove outlier edge points and fill in the area between the matching edge points.
  • the edge-matching algorithm is applied to establish valid boundary edges and remove spurious edges while filling the regions between boundary edges.
  • Edge points on an image have a directional component indicating the direction of the gradient.
  • Pixels in scanlines crossing a boundary edge location can exhibit two gradient transitions depending on the pixel intensity directionality.
  • Each gradient transition is given a positive or negative value depending on the pixel intensity directionality. For example, if the scanline approaches an echo reflective bright wall from a darker region, then an ascending transition is established as the pixel intensity gradient increases to a maximum value, i.e., as the transition ascends from a dark region to a bright region. The ascending transition is given a positive numerical value. Similarly, as the scanline recedes from the echo reflective wall, a descending transition is established as the pixel intensity gradient decreases to or approaches a minimum value. The descending transition is given a negative numerical value.
  • Valid boundary edges are those that exhibit ascending and descending pixel intensity gradients, or equivalently, exhibit paired or matched positive and negative numerical values. The valid boundary edges are retained in the image. Spurious or invalid boundary edges do not exhibit paired ascending-descending pixel intensity gradients, i.e., do not exhibit paired or matched positive and negative numerical values. The spurious boundary edges are removed from the image.
  • edge points for blood fluid surround a dark, closed region, with directions pointing inwards towards the center of the region.
  • given the direction of the gradient for any edge point, the edge point having a gradient direction approximately opposite to that of the current point represents the matching edge point.
  • those edge points exhibiting an assigned positive and negative value are kept as valid edge points on the image because the negative value is paired with its positive value counterpart.
  • those edge point candidates having unmatched values, i.e., those edge point candidates not having a negative-positive value pair, are deemed not to be true or valid edge points and are discarded from the image.
  • the matching edge point algorithm delineates edge points not lying on the boundary for removal from the desired dark regions. Thereafter, the region between any two matching edge points is filled in with non-zero pixels to establish edge-based segmentation.
  • edge points whose directions are primarily oriented co-linearly with the scanline are sought to permit the detection of matching front wall and back wall pairs of a cavity, for example a bladder or a left or right heart ventricle.
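  • A minimal one-dimensional illustration of the matching-edge idea, pairing opposite-signed intensity transitions along a single scanline and filling the span between a matched pair, is sketched below; the sign convention (a dark region bounded by bright walls) and the gradient threshold are assumptions for the example.

```python
import numpy as np

def match_and_fill_scanline(intensity, grad_thresh=10.0):
    """Pair a descending (-) transition with the next ascending (+) transition along one
    scanline and fill the enclosed dark span with non-zero pixels.  The sign convention
    (a dark region bounded by bright walls) and the threshold are illustrative."""
    intensity = np.asarray(intensity, dtype=float)
    grad = np.diff(intensity)
    filled = np.zeros(intensity.shape, dtype=np.uint8)
    opening = None
    for i, g in enumerate(grad):
        if g < -grad_thresh and opening is None:
            opening = i                     # descending transition: bright wall into dark region
        elif g > grad_thresh and opening is not None:
            filled[opening:i + 1] = 1       # matched pair found: fill the region between them
            opening = None
    # Transitions left unmatched are treated as spurious edges and nothing is filled for them.
    return filled
```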
  • results from the respective segmentation procedures are then combined at process block 514 and subsequently undergoes a cleanup algorithm process at process block 516 .
  • the combining process of block 514 uses a pixel-wise Boolean AND operator step to produce a segmented image by computing the pixel intersection of two images.
  • the Boolean AND operation represents the pixels of each scan plane of the 3D data sets as binary numbers, and the combination of any two pixels is assigned an intersection value of binary 1 or 0. For example, consider any two pixels, say pixel A and pixel B , which can have a 1 or 0 as assigned values.
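  • A tiny sketch of the pixel-wise Boolean AND combination follows; the array names are illustrative.

```python
import numpy as np

# Pixel-wise Boolean AND of the two segmentation results (array names are illustrative):
# a pixel is kept only when both the intensity-based and the edge-based segmentations
# mark it, i.e. 1 AND 1 -> 1, otherwise 0.
intensity_seg = np.array([[1, 1, 0], [0, 1, 1]], dtype=np.uint8)
edge_seg      = np.array([[1, 0, 0], [0, 1, 0]], dtype=np.uint8)
combined = np.logical_and(intensity_seg, edge_seg).astype(np.uint8)
# combined == [[1, 0, 0], [0, 1, 0]]
```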
  • Cleanup 516 includes filling gaps with pixels and removing pixel groups unlikely to be related to the ROI undergoing study, for example pixel groups unrelated to bladder cavity structures.
  • Sub-algorithm 500 is then completed and exits to sub-algorithm 600 .
  • FIG. 11 depicts a logarithm of a Cepstrum.
  • the Cepstrum is used in sub-algorithm 400 A for the pulse estimation via application of point spread functions to the echogenic data sets generated by the transceivers 10 A,B.
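  • As a generic illustration of cepstrum-based pulse estimation (not necessarily the exact computation behind FIG. 11), the sketch below computes a real cepstrum of an RF line and keeps only its low-quefrency part to obtain a smoothed spectrum attributable to the pulse; the liftering cutoff is an assumption.

```python
import numpy as np

def real_cepstrum(rf_line, eps=1e-12):
    """Real cepstrum of an RF line: inverse FFT of the log magnitude spectrum."""
    spectrum = np.fft.fft(rf_line)
    return np.real(np.fft.ifft(np.log(np.abs(spectrum) + eps)))

def liftered_pulse_spectrum(rf_line, cutoff=30):
    """Smoothed log-magnitude spectrum attributable to the pulse, obtained by keeping
    only the low-quefrency part of the cepstrum (the cutoff is illustrative)."""
    ceps = real_cepstrum(rf_line)
    liftered = ceps.copy()
    liftered[cutoff:-cutoff] = 0.0             # discard high-quefrency (reflector) detail
    return np.real(np.fft.fft(liftered))
```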
  • FIGS. 12A-C depict histogram waveform plots derived from water tank pulse-echo experiments undergoing parametric and non-parametric analysis.
  • FIG. 12A is a measure plot.
  • FIG. 12B is a nonparametric pulse estimated pattern derived from sub-algorithm 400 A.
  • FIG. 12 c is a parametric pulse estimated pattern derived from sub-algorithm 400 B.
  • FIGS. 13-25 are bladder sonograms that depict image clarity after undergoing image enhancement processing by algorithms described in FIGS. 5-10 .
  • FIG. 13 is an unprocessed image that will undergo image enhancement processing.
  • FIG. 14 illustrates an enclosed portion of a magnified region of FIG. 13 .
  • FIG. 15 is the resultant image of FIG. 13 that has undergone image processing via nonparametric estimation under sub-algorithm 400 A.
  • the low echogenic region within the circle inset has more contrast than the unprocessed image of FIGS. 13 and 14 .
  • FIG. 16 is the resultant image of FIG. 13 that has undergone image processing via parametric estimation under sub-algorithm 400 B.
  • the circle inset is in the echogenic musculature region encircling the bladder and is shown with enhanced contrast and clarity compared to the magnified, unprocessed image of FIG. 14 .
  • FIG. 17 is the resultant image of an alternate image-processing embodiment using a Wiener filter.
  • the Wiener-filtered image has neither the clarity nor the contrast in the low echogenic bladder region of FIG. 15 (compare circle insets).
  • FIG. 18 is another unprocessed image that will undergo image enhancement processing.
  • FIG. 19 illustrates an enclosed portion of a magnified region of FIG. 18 .
  • FIG. 20 is the resultant image of FIG. 18 that has undergone image processing via nonparametric estimation under sub-algorithm 400 A.
  • the low echogenic region is darker and the echogenic regions are brighter with more contrast than the magnified, unprocessed image of FIG. 19 .
  • FIG. 21 is the resultant image of FIG. 18 that has undergone image processing via parametric estimation under sub-algorithm 400 B.
  • the low echogenic region is darker and the echogenic regions are brighter, with enhanced contrast and clarity compared to the magnified, unprocessed image of FIG. 19 .
  • FIG. 22 is another unprocessed image that will undergo image enhancement processing.
  • FIG. 23 illustrates an enclosed portion of a magnified region of FIG. 22 .
  • FIG. 24 is the resultant image of FIG. 22 that has undergone image processing via nonparametric estimation under sub-algorithm 400 A.
  • the low echogenic region is darker and the echogenic regions are brighter with more contrast than the magnified, unprocessed image of FIG. 23 .
  • FIG. 25 is the resultant image of FIG. 22 that has undergone image processing via parametric estimation under sub-algorithm 400 B.
  • the low echogenic region is darker and the echogenic regions are brighter, with enhanced contrast and clarity compared to the magnified, unprocessed image of FIG. 23 .
  • FIG. 26 depicts a schematic example of a time velocity map derived from sub-algorithm 310 .
  • FIG. 27 depicts another schematic example of a time velocity map derived from sub-algorithm 310 .
  • FIG. 28 illustrates a seven panel image series of a beating heart ventricle that will undergo the optical flow processes of sub-algorithm 300 in which at least two images are required.
  • FIG. 29 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 presented in a 2D flow pattern after undergoing sub-algorithm 310 .
  • FIG. 30 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the X-axis direction or phi direction after undergoing sub-algorithm 310 .
  • FIG. 31 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the Y-axis direction, or radial direction, after undergoing sub-algorithm 310 .
  • FIG. 32 illustrates a 3D optical vector plot after undergoing sub-algorithm 310 and corresponds to the top row of FIG. 29 .
  • FIG. 35 illustrates a 3D optical vector plot in the radial direction above a Y-axis threshold setting of 0.6 after undergoing sub-algorithm 310 and corresponds to FIG. 34 ; values below the threshold T of 0.6 are set to 0.
  • FIGS. 36A-G depicts embodiments of the sonic gel dispenser.
  • FIG. 36A illustrates the metered dispensing of sonic gel by calibrated rotation of a compressing wheel.
  • the peristaltic mechanism using the compressing wheel is shown in a partial side view.
  • FIG. 36B illustrates in cross-section the inside of the dispenser, showing a collapsible bag that is engaged by the compressing wheel. As more rotational action is conveyed to the compressing wheel, the bag progressively collapses.
  • FIG. 36C illustrates an alternative embodiment employing compression by hand gripping.
  • FIG. 36D illustrates an alternative embodiment employing push button or lever compression to dispense metered quantities of sonic gel.
  • FIG. 36E illustrates an alternative embodiment employing air valves to limit re-gassing of internal sonic gel volume stores within the sonic gel dispenser. The valve is pinched closed when the gripping or compressing wheel pressure is lessened and springs open when the gripping or compressing wheel pressure is increased to allow sonic gel to be dispensed.
  • FIG. 36F illustrates a side, cross-sectional view of the gel dispensing system that includes a pre-packaged collapsible bottle with a refill bag, a bottle holder that positions the pre-packaged bottle for use, and a sealed tip that may be clipped open.
  • FIG. 36G illustrates a side view of the pre-packaged collapsible bottle of FIG. 36F .
  • a particular embodiment includes an eight-ounce squeeze bottle.
  • FIGS. 37-46 concern cannula insertion viewed by ultrasonic systems, in which cannula motion detection during insertion is optimized and enhanced with method algorithms directed to detecting a moving cannula fitted with echogenic ultrasound micro reflectors.
  • An embodiment related to cannula insertion generally includes an ultrasound probe attached to a first camera and a second camera and a processing and display generating system that is in signal communication with the ultrasound probe, the first camera, and/or the second camera.
  • a user of the system scans tissue containing a target vein using the ultrasound probe and a cross-sectional image of the target vein is displayed.
  • the first camera records a first image of a cannula in a first direction and the second camera records a second image of the cannula in a second direction orthogonal to the first direction.
  • the first and/or the second images are processed by the processing and display generating system along with the relative positions of the ultrasound probe, the first camera, and/or the second camera to determine the trajectory of the cannula.
  • a representation of the determined trajectory of the cannula is then displayed on the ultrasound image.
  • FIG. 37 is a diagram illustrating a side view of one embodiment of the present invention.
  • a two-dimensional (2D) ultrasound probe 1010 is attached to a first camera 1014 that takes images in a first direction.
  • the ultrasound probe 1010 is also attached to a second camera 1018 via a member 1016 .
  • the member 1016 may link the first camera 1014 to the second camera 1018 or the member 1016 may be absent, with the second camera 1018 being directly attached to a specially configured ultrasound probe.
  • the second camera 1018 is oriented such that the second camera 1018 takes images in a second direction that is orthogonal to the first direction of the images taken by the first camera 1014 .
  • the placement of the cameras 1014 , 1018 may be such that they can both take images of a cannula 1020 when the cannula 1020 is placed before the cameras 1014 , 1018 .
  • a needle may also be used in place of a cannula.
  • the cameras 1014 , 1018 and the ultrasound probe 1010 are geometrically interlocked such that the cannula 1020 trajectory can be related to an ultrasound image.
  • the second camera 1018 is behind the cannula 1020 when looking into the plane of the page.
  • the cameras 1014 , 1018 take images at a rapid frame rate of approximately 30 frames per second.
  • the ultrasound probe 1010 and/or the cameras 1014 , 1018 are in signal communication with a processing and display generating system 1061 .
  • a user employs the ultrasound probe 1010 and the processing and display generating system 1061 to generate a cross-sectional image of a patient's arm tissue containing a vein to be cannulated (“target vein”) 1019 .
  • the user identifies the target vein 1019 in the image using methods such as simple compression which differentiates between arteries and/or veins by using the fact that veins collapse easily while arteries do not.
  • the ultrasound probe 1010 is affixed to the patient's arm over the previously identified target vein 1019 using a magnetic tape material 1012 .
  • the ultrasound probe 1010 and the processing and display generating system 1061 continue to generate a 2D cross-sectional image of the tissue containing the target vein 1019 . Images from the cameras 1014 , 1018 are provided to the processing and display generating system 1061 as the cannula 1020 is approaching and/or entering the arm of the patient.
  • the processing and display generating system 1061 locates the cannula 1020 in the images provided by the cameras 1014 , 1018 and determines the projected location at which the cannula 1020 will penetrate the cross-sectional ultrasound image being displayed.
  • the trajectory of the cannula 1020 is determined in some embodiments by using image processing to identify bright spots corresponding to micro reflectors previously machined into the shaft of the cannula 1020 or a needle used alone or in combination with the cannula 1020 .
  • Image processing uses the bright spots to determine the angles of the cannula 1020 relative to the cameras 1014 , 1018 and then generates a projected trajectory by using the determined angles and/or the known positions of the cameras 1014 , 1018 in relation to the ultrasound probe 1010 .
  • determination of the cannula 1020 trajectory is performed using edge-detection algorithms in combination with the known positions of the cameras 1014 , 1018 in relation to the ultrasound probe 1010 , for example.
  • the projected location may be indicated on the displayed image as a computer-generated cross-hair 1066 , the intersection of which is where the cannula 1020 is projected to penetrate the image.
  • the ultrasound image confirms that the cannula 1020 penetrated at the location of the cross-hair 1066 .
  • This gives the user a real-time ultrasound image of the target vein 1019 with an overlaid real-time computer-generated image of the position in the ultrasound image that the cannula 1020 will penetrate.
  • This allows the user to adjust the location and/or angle of the cannula 1020 before and/or during insertion to increase the likelihood they will penetrate the target vein 1019 . Risks of pneumothorax and other adverse outcomes should be substantially reduced since a user will be able to use normal “free” insertion procedures but have the added knowledge of knowing where the cannula 1020 trajectory will lead.
  • FIG. 38 is a diagram illustrating a top view of the embodiment shown in FIG. 37 . It is more easily seen from this view that the second camera 1018 is positioned behind the cannula 1020 . The positioning of the cameras 1014 , 1018 relative to the cannula 1020 allows the cameras 1014 , 1018 to capture images of the cannula 1020 from two different directions, thus making it easier to determine the trajectory of the cannula 1020 .
  • FIG. 39 is a diagram showing additional detail for a needle shaft 1022 to be used with one embodiment of the invention.
  • the needle shaft 1022 includes a plurality of micro corner reflectors 1024 .
  • the micro corner reflectors 1024 are cut into the needle shaft 1022 at defined intervals Δl in symmetrical patterns about the circumference of the needle shaft 1022 .
  • the micro corner reflectors 1024 could be cut with a laser, for example.
  • FIGS. 40A and 40B are diagrams showing close-up views of surface features of the needle shaft 1022 shown in FIG. 39 .
  • FIG. 40A shows a first input ray with a first incident angle of approximately 90° striking one of the micro corner reflectors 1024 on the needle shaft 1022 .
  • a first output ray is shown exiting the micro corner reflector 1024 in a direction toward the source of the first input ray.
  • FIG. 40B shows a second input ray with a second incident angle other than 90° striking a micro corner reflector 1025 on the needle shaft 1022 .
  • a second output ray is shown exiting the micro corner reflector 1025 in a direction toward the source of the second input ray.
  • FIGS. 40A and 40B illustrate that the micro corner reflectors 1024 , 1025 are useful because they tend to reflect an output ray in the direction from which an input ray originated.
  • FIG. 41 is a diagram showing imaging components for use with the needle shaft 1022 shown in FIG. 39 in accordance with one embodiment of the invention.
  • the imaging components are shown to include a first light source 1026 , a second light source 1028 , a lens 1030 , and a sensor chip 1032 .
  • the first and/or second light sources 1026 , 1028 may be light emitting diodes (LEDs), for example.
  • the light sources 1026 , 1028 are infra-red LEDs.
  • an infra-red source is advantageous because it is not visible to the human eye, but when an image of the needle shaft 1022 is recorded, the image will show strong bright dots where the micro corner reflectors 1024 are located because silicon sensor chips are sensitive to infra-red light and the micro corner reflectors 1024 tend to reflect output rays in the direction from which input rays originate, as discussed with reference to FIGS. 40A and 40B .
  • a single light source may be used.
  • the sensor chip 1032 is encased in a housing behind the lens 1030 and the sensor chip 1032 and light sources 1026 , 1028 are in electrical communication with the processing and display generating system 1061 .
  • the sensor chip 1032 and/or the lens 1030 form a part of the first and second cameras 1014 , 1018 in some embodiments.
  • the light sources 1026 , 1028 are pulsed on at the time the sensor chip 1032 captures an image. In other embodiments, the light sources 1026 , 1028 are left on during video image capture.
  • FIG. 42 is a diagram showing a representation of an image 1034 produced by the imaging components shown in FIG. 41 .
  • the image 1034 may include a needle shaft image 1036 that corresponds to a portion of the needle shaft 1022 shown in FIG. 41 .
  • the image 1034 also may include a series of bright dots 1038 running along the center of the needle shaft image 1036 that correspond to the micro corner reflectors 1024 shown in FIG. 41 .
  • a center line 1040 is shown in FIG. 42 to illustrate how an angle theta (θ) could be obtained by image processing to recognize the bright dots 1038 and determine a line through them.
  • the angle theta represents the degree to which the needle shaft 1022 is inclined with respect to a reference line 1042 that is related to the fixed position of the sensor chip 1032 .
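  • For illustration only, the following minimal sketch (in Python with NumPy) shows one way the bright dots 1038 could be located by intensity thresholding and a line fitted through them to recover the inclination angle theta relative to a reference line; the threshold value and the least-squares fit are illustrative assumptions, not the specific processing used by the embodiment.

```python
import numpy as np

def needle_angle(image, threshold=0.9):
    """Estimate the needle inclination angle (radians) from an image in
    which the micro corner reflectors appear as bright dots.

    image     -- 2D array of pixel intensities, normalized to [0, 1]
    threshold -- fraction of the maximum intensity treated as "bright"
    """
    # Coordinates (row, col) of pixels bright enough to be reflector dots.
    rows, cols = np.nonzero(image >= threshold * image.max())
    if len(cols) < 2:
        raise ValueError("not enough bright dots to fit a line")
    # Least-squares line fit row = m*col + b through the dot locations.
    m, b = np.polyfit(cols, rows, 1)
    # Angle of the fitted line relative to the horizontal reference line.
    return np.arctan(m)

# Synthetic example: a diagonal row of bright dots on a dark background.
img = np.zeros((100, 100))
for c in range(10, 90, 10):
    img[int(0.5 * c) + 10, c] = 1.0
print(np.degrees(needle_angle(img)))   # approximately 26.6 degrees
```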
  • FIG. 43 is a system diagram of an embodiment of the present invention and shows additional detail for the processing and display generating system 1061 in accordance with an example embodiment of the invention.
  • the ultrasound probe 1010 is shown connected to the processing and display generating system via M control lines and N data lines.
  • the M and N variables are for convenience and appear simply to indicate that the connections may be composed of one or more transmission paths.
  • the control lines allow the processing and display generating system 1061 to direct the ultrasound probe 1010 to properly perform an ultrasound scan and the data lines allow responses from the ultrasound scan to be transmitted to the processing and display generating system 1061.
  • the first and second cameras 1014 , 1018 are also each shown to be connected to the processing and display generating system 1061 via N lines. Although the same variable N is used, it is simply indicating that one or more lines may be present, not that each device with a label of N lines has the same number of lines.
  • the processing and display generating system 1061 is composed of a display 1064 and a block 1062 containing a computer, a digital signal processor (DSP), and analog to digital (A/D) converters. As discussed for FIG. 37 , the display 1064 will display a cross-sectional ultrasound image.
  • the computer-generated cross hair 1066 is shown over a representation of a cross-sectional view of the target vein 1019 in FIG. 43.
  • the cross hair 1066 consists of an x-crosshair 1068 and a z-crosshair 1070 .
  • the DSP and the computer in the block 1062 use images from the first camera 1014 to determine the plane in which the cannula 1020 will penetrate the ultrasound image and then write the z-crosshair 1070 on the ultrasound image provided to the display 1064 .
  • the DSP and the computer in the block 1062 use images from the second camera 1018 , which are orthogonal to the images provided by the first camera 1014 as discussed for FIG. 37 , to write the x-crosshair 1068 on the ultrasound image.
  • FIG. 44 is a system diagram of an example embodiment showing additional detail for the block 1062 shown in FIG. 43.
  • the block 1062 includes a first A/D converter 1080 , a second A/D converter 1082 , and a third A/D converter 1084 .
  • the first A/D converter 1080 receives signals from the ultrasound probe 1010 and converts them to digital information that is provided to a DSP 1086 .
  • the second and third A/D converters 1082 , 1084 receive signals from the first and second cameras 1014 , 1018 respectively and convert the signals to digital information that is provided to the DSP 1086 . In alternative embodiments, some or all of the A/D converters are not present.
  • video from the cameras 1014 , 1018 may be provided to the DSP 1086 directly in digital form rather than being created in analog form before passing through A/D converters 1082 , 1084 .
  • the DSP 1086 is in data communication with a computer 1088 that includes a central processing unit (CPU) 1090 in data communication with a memory component 1092 .
  • the computer 1088 is in signal communication with the ultrasound probe 1010 and is able to control the ultrasound probe 1010 using this connection.
  • the computer 1088 is also connected to the display 1064 and produces a video signal used to drive the display 1064.
  • FIG. 45 is a flowchart of a method of displaying the trajectory of a cannula in accordance with an embodiment of the present invention.
  • At a block 1200, an ultrasound image of a vein cross-section is produced and/or displayed.
  • At a block 1210, the trajectory of a cannula is determined.
  • The determined trajectory of the cannula is then displayed on the ultrasound image.
  • FIG. 46 is a flowchart showing additional detail for the block 1210 depicted in FIG. 45 .
  • the block 1210 includes a block 1212 where a first image of a cannula is recorded using a first camera.
  • a second image of the cannula orthogonal to the first image of the cannula is recorded using a second camera.
  • the first and second images are processed to determine the trajectory of the cannula.
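  • A minimal sketch of how two orthogonal camera views might be combined into a trajectory, assuming each camera has already been calibrated so that the cannula appears as a straight line expressed in millimetre coordinates of its own view; the coordinate conventions, function name, and parameters below are hypothetical and only illustrate the geometry of projecting the shaft onto the ultrasound scan plane for the crosshair.

```python
import numpy as np

def cannula_trajectory(slope_xz, intercept_xz, slope_yz, intercept_yz, plane_z):
    """Combine two orthogonal camera views of a cannula into a 3D trajectory.

    Camera 1 (side view) sees the projection onto the x-z plane: x = slope_xz*z + intercept_xz
    Camera 2 (top view)  sees the projection onto the y-z plane: y = slope_yz*z + intercept_yz
    plane_z is the depth of the ultrasound scan plane along the insertion axis z.

    Returns the (x, y) point where the extended cannula crosses the scan plane
    (i.e. a candidate crosshair position) plus a unit direction vector for the shaft.
    """
    x = slope_xz * plane_z + intercept_xz
    y = slope_yz * plane_z + intercept_yz
    direction = np.array([slope_xz, slope_yz, 1.0])
    return (x, y), direction / np.linalg.norm(direction)

# Example: a cannula angled toward a target 20 mm beyond the skin entry point.
crosshair_xy, direction = cannula_trajectory(0.3, 2.0, -0.1, 5.0, plane_z=20.0)
print(crosshair_xy, direction)
```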
  • a three dimensional ultrasound system could be used rather than a 2D system.
  • different numbers of cameras could be used along with image processing that determines the cannula 1020 trajectory based on the number of cameras used.
  • the two cameras 1014 , 1018 could also be placed in a non-orthogonal relationship so long as the image processing was adjusted to properly determine the orientation and/or projected trajectory of the cannula 1020 .
  • an embodiment of the invention could be used for needles and/or other devices which are to be inserted in the body of a patient. Additionally, an embodiment of the invention could be used in places other than arm veins.
  • Regions of the patient's body other than an arm could be used and/or biological structures other than veins may be the focus of interest.
  • while the disclosed algorithms are ultrasound-based, alternate embodiments may be configured for image acquisitions other than ultrasound, for example X-ray, visible light, and infrared light acquired images. Accordingly, the scope of the invention is not limited by the disclosure of the preferred embodiment.

Abstract

A hand-held 3D ultrasound instrument is disclosed which is used to non-invasively and automatically measure amniotic fluid volume in the uterus requiring a minimum of operator intervention. Using a 2D image-processing algorithm, the instrument gives automatic feedback to the user about where to acquire the 3D image set. The user acquires one or more 3D data sets covering all of the amniotic fluid in the uterus and this data is then processed using an optimized 3D algorithm to output the total amniotic fluid volume corrected for any fetal head brain volume contributions.

Description

  • The following applications are incorporated by reference as if fully set forth herein: U.S. application Ser. No. 11/119,355 filed Apr. 29, 2005; Ser. No. 11/362,368 filed Feb. 26, 2006; Ser. No. 11/680,380 filed Feb. 28, 2007 and Ser. No. 11/925,654 filed Oct. 26, 2007.
  • FIELD OF THE INVENTION
  • This invention pertains to the field of obstetrics, particularly to ultrasound-based non-invasive obstetric measurements.
  • BACKGROUND OF THE INVENTION
  • Measurement of the amount of Amniotic Fluid (AF) volume is critical for assessing the kidney and lung function of a fetus and also for assessing the placental function of the mother. Amniotic fluid volume is also a key measure to diagnose conditions such as polyhydramnios (too much AF) and oligohydramnios (too little AF). Polyhydramnios and oligohydramnios are diagnosed in about 7-8% of all pregnancies and these conditions are of concern because they may lead to birth defects or to delivery complications. The amniotic fluid volume is also one of the important components of the fetal biophysical profile, a major indicator of fetal well-being.
  • The currently practiced and accepted method of quantitatively estimating the AF volume is from two-dimensional (2D) ultrasound images. The most commonly used measure is the amniotic fluid index (AFI). AFI is the sum of vertical lengths of the largest AF pockets in each of the 4 quadrants. The four quadrants are defined by the umbilicus (the navel) and the linea nigra (the vertical mid-line of the abdomen). The transducer head is placed on the maternal abdomen along the longitudinal axis with the patient in the supine position. This measure was first proposed by Phelan et al (Phelan J P, Smith C V, Broussard P, Small M., “Amniotic fluid volume assessment with the four-quadrant technique at 36-42 weeks' gestation,” J Reprod Med July; 32(7): 540-2, 1987) and then recorded for a large normal population over time by Moore and Cayle (Moore T R, Cayle J E. “The amniotic fluid index in normal human pregnancy,” Am J Obstet Gynecol May; 162(5): 1168-73, 1990).
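  • As a worked illustration of the AFI arithmetic, assuming the deepest cord-free pocket has already been measured in each quadrant, the index is simply the sum of the four vertical depths; the numeric values below are illustrative only.

```python
def amniotic_fluid_index(pocket_depths_cm):
    """AFI: sum of the deepest vertical cord-free pocket depth (in cm)
    measured in each of the four uterine quadrants."""
    assert len(pocket_depths_cm) == 4
    return sum(pocket_depths_cm)

# Example: pockets of 3.1, 4.0, 2.2 and 3.7 cm give an AFI of 13.0 cm.
print(amniotic_fluid_index([3.1, 4.0, 2.2, 3.7]))
```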
  • Even though the AFI measure is routinely used, studies have shown a very poor correlation of the AFI with the true AF volume (Sepulveda W, Flack N J, Fisk N M., “Direct volume measurement at midtrimester amnioinfusion in relation to ultrasonographic indexes of amniotic fluid volume,” Am J Obstet Gynecol April; 170(4): 1160-3, 1994). The correlation coefficient was found to be as low as 0.55, even for experienced sonographers. The use of vertical diameter only and the use of only one pocket in each quadrant are two reasons why the AFI is not a very good measure of AF Volume (AFV).
  • Some of the other methods that have been used to estimate AF volume include:
  • Dye dilution technique. This is an invasive method where a dye is injected into the AF during amniocentesis and the final concentration of dye is measured from a sample of AF removed after several minutes. This technique is the accepted gold standard for AF volume measurement; however, it is an invasive and cumbersome method and is not routinely used.
  • Subjective interpretation from ultrasound images. This technique is obviously dependent on observer experience and has not been found to be very good or consistent at diagnosing oligo- or poly-hydramnios.
  • Vertical length of the largest single cord-free pocket. This is an earlier variation of the AFI where the diameter of only one pocket is measured to estimate the AF volume.
  • Two-diameter areas of the largest AF pockets in the four quadrants. This is similar to the AFI; however, in this case, two diameters are measured instead of only one for the largest pocket. This two diameter area has been recently shown to be better than AFI or the single pocket measurement in identifying oligohydramnios (Magann E F, Perry K G Jr, Chauhan S P, Anfanger P J, Whitworth N S, Morrison J C., “The accuracy of ultrasound evaluation of amniotic fluid volume in singleton pregnancies: the effect of operator experience and ultrasound interpretative technique,” J Clin Ultrasound, June; 25(5):249-53, 1997).
  • The measurement of various anatomical structures using computational constructs is described, for example, in U.S. Pat. No. 6,346,124 to Geiser, et al. (Autonomous Boundary Detection System For Echocardiographic Images). Similarly, the measurement of bladder structures is covered in U.S. Pat. No. 6,213,949 to Ganguly, et al. (System For Estimating Bladder Volume) and U.S. Pat. No. 5,235,985 to McMorrow, et al., (Automatic Bladder Scanning Apparatus). The measurement of fetal head structures is described in U.S. Pat. No. 5,605,155 to Chalana, et al., (Ultrasound System For Automatically Measuring Fetal Head Size). The measurement of fetal weight is described in U.S. Pat. No. 6,375,616 to Soferman, et al. (Automatic Fetal Weight Determination).
  • Pertaining to ultrasound-based determination of amniotic fluid volumes, Segiv et al. (in Segiv C, Akselrod S, Tepper R., “Application of a semiautomatic boundary detection algorithm for the assessment of amniotic fluid quantity from ultrasound images.” Ultrasound Med Biol, May; 25(4): 515-26, 1999) describe a method for amniotic fluid segmentation from 2D images. However, the Segiv et al. method is interactive in nature and the identification of amniotic fluid volume is very observer dependent. Moreover, the system described is not a dedicated device for amniotic fluid volume assessment.
  • Grover et al. (Grover J, Mentakis E A, Ross M G, “Three-dimensional method for determination of amniotic fluid volume in intrauterine pockets.” Obstet Gynecol, December; 90(6): 1007-10, 1997) describe the use of a urinary bladder volume instrument for amniotic fluid volume measurement. The Grover et al. method makes use of the bladder volume instrument without any modifications and uses shape and other anatomical assumptions specific to the bladder that do not generalize to amniotic fluid pockets. Amniotic fluid pockets having shapes not consistent with the Grover et al. bladder model introduce analytical errors. Moreover, the bladder volume instrument does not allow for the possibility of more than one amniotic fluid pocket in one image scan. Therefore, the amniotic fluid volume measurements made by the Grover et al. system may not be correct or accurate.
  • None of the currently used methods for AF volume estimation are ideal. Therefore, there is a need for better, non-invasive, and easier ways to accurately measure amniotic fluid volume.
  • The clarity of ultrasound-acquired images is affected by motions of the examined subject, the motions of organs and fluids within the examined subject, the motion of the probing ultrasound transceiver, the coupling medium used between the transceiver and the examined subject, and the algorithms used for image processing. As regards image processing, frequency-domain approaches have been utilized in the literature, including Wiener filters, which are implemented in the frequency domain and assume that the point spread function (PSF) is fixed and known. This assumption conflicts with the observation that the received ultrasound signals are usually non-stationary and depth-dependent. Since the algorithm is implemented in the frequency domain, any error introduced in the PSF will leak across the spatial domain. As a result, the performance of Wiener filtering is not ideal.
  • As regards prior uses of coupling media, the most common container for dispensing ultrasound coupling gel is an 8 oz. plastic squeeze bottle with an open, tapered tip. The tapered-tip bottle is inexpensive, is easy to refill from a larger reservoir in the form of a bag or pump-type container, and dispenses gel in a controlled manner. Other coupling media include the Sontac® ultrasound gel pad, available from Verathon™ Medical, Bothell, Wash., USA, which is a pre-packaged, circular pad of moist, flexible coupling gel 2.5 inches in diameter and 0.06 inches thick that is advantageously used with the BladderScan devices. The Sontac pad is simple to apply and to remove, and provides adequate coupling for a one-position ultrasound scan in most cases. Yet others include the Aquaflex® gel pads, which perform in a similar manner to Sontac pads but are larger and thicker (2 cm thick×9 cm diameter) and are traditionally used for therapeutic ultrasound or where some distance between the probe and the skin surface (“stand-off”) must be maintained.
  • The main purpose of an ultrasonic coupling medium is to provide an air-free interface between an ultrasound transducer and the body surface. Gels are used as coupling media since they are moist and deformable, but not runny: they wet both the transducer and the body surface, but stay where they are applied. The most common delivery method for ultrasonic coupling gel, the plastic squeeze bottle, has several disadvantages. First, if the bottle has been stored upright the gel will fall to the bottom of the bottle, and vigorous shaking is required to get the gel back to the bottle tip, especially if the gel is cold. This motion can be particularly irritating to sonographers, who routinely suffer from wrist and arm pain from ultrasound scanning. Second, the bottle tip is a two-way valve: squeezing the bottle releases gel at the tip, but releasing the bottle sucks air back into the bottle and into the gel. The presence of air bubbles in the gel may detract from its performance as a coupling medium. Third, there is no standard application amount: inexperienced users such as Diagnostic Ultrasound customers have to make an educated guess about how much gel to use. Fourth, when the squeeze bottle is nearly empty it is next to impossible to coax the final 5-10% of gel into the bottle's tip for dispensing. Finally, although refilling the bottle from a central source is not a particularly difficult task, it is non-sterile and potentially messy.
  • Sontac pads and other solid gel coupling pads are simpler to use than gel: the user does not have to guess at an appropriate application amount, the pad is sterile, and it can be simply lifted off the patient and disposed of after use. However, pads do not mold to the skin or transducer surface as well as the more liquefied coupling gels and therefore may not provide ideal coupling when used alone, especially on dry, hairy, curved, or wrinkled surfaces. Sontac pads suffer from the additional disadvantage that they are thin and easily damaged by moderate pressure from the ultrasound transducer. (See Bishop S, Draper D O, Knight K L, Feland J B, Eggett D. “Human tissue-temperature rise during ultrasound treatments with the Aquaflex gel pad.” Journal of Athletic Training 39(2):126-131, 2004).
  • Relating to cannula insertion, unsuccessful insertion and/or removal of a cannula, a needle, or other similar devices into vascular tissue may cause vascular wall damage that may lead to serious complications or even death. Image guided placement of a cannula or needle into the vascular tissue reduces the risk of injury and increases the confidence of healthcare providers in using the foregoing devices. Current image guided placement methods generally use a guidance system for holding specific cannula or needle sizes. The motion and force required to disengage the cannula from the guidance system may, however, contribute to a vessel wall injury, which may result in extravasation. Complications arising from extravasation resulting in morbidity are well documented. Therefore, there is a need for image guided placement of a cannula or needle into vascular tissue while still allowing a health care practitioner to use standard “free” insertion procedures that do not require a guidance system to hold the cannula or needle.
  • SUMMARY OF THE INVENTION
  • The preferred form of the invention is a three dimensional (3D) ultrasound-based system and method using a hand-held 3D ultrasound device to acquire at least one 3D data set of a uterus and having a plurality of automated processes optimized to robustly locate and measure the volume of amniotic fluid in the uterus without resorting to pre-conceived models of the shapes of amniotic fluid pockets in ultrasound images. The automated process uses a plurality of algorithms in a sequence that includes steps for image enhancement, segmentation, and polishing.
  • A hand-held 3D ultrasound device is used to image the uterus trans-abdominally. The user moves the device around on the maternal abdomen and, using 2D image processing to locate the amniotic fluid areas, the device gives feedback to the user about where to acquire the 3D image data sets. The user acquires one or more 3D image data sets covering all of the amniotic fluid in the uterus and the data sets are then stored in the device or transferred to a host computer.
  • The 3D ultrasound device is configured to acquire the 3D image data sets in two formats. The first format is a collection of two-dimensional scanplanes, each scanplane being separated from the other and representing a portion of the uterus being scanned. Each scanplane is formed from one-dimensional ultrasound A-lines confined within the limits of the 2D scanplane. The 3D data set is then represented as a 3D array of 2D scanplanes. The 3D array of 2D scanplanes is an assembly of scanplanes, and may be assembled into a translational array, a wedge array, or a rotational array.
  • Alternatively, the 3D ultrasound device is configured to acquire the 3D image data sets from one-dimensional ultrasound A-lines distributed in the 3D space of the uterus to form a 3D scancone of 3D-distributed scanlines. The 3D scancone is not an assembly of 2D scanplanes.
  • The 3D image datasets, either as discrete scanplanes or 3D-distributed scanlines, are then subjected to image enhancement and analysis processes. The processes are implemented either on the device itself or on the host computer. Alternatively, the processes can also be implemented on a server or other computer to which the 3D ultrasound data sets are transferred.
  • In a preferred image enhancement process, each 2D image in the 3D dataset is first enhanced using non-linear filters by an image pre-filtering step. The image pre-filtering step includes an image-smoothing step to reduce image noise followed by an image-sharpening step to obtain maximum contrast between organ wall boundaries.
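  • A minimal sketch of such a pre-filtering step, assuming the 2D image is available as a NumPy array: smoothing is performed by iterating the heat (diffusion) equation, and a simple unsharp-mask step stands in for the edge-sharpening stage (the preferred embodiment uses a shock filter for sharpening, discussed with FIG. 24); the iteration count, time step, and sharpening amount are illustrative assumptions.

```python
import numpy as np

def heat_filter(image, iterations=10, dt=0.2):
    """Smooth an image by explicit iteration of the heat (diffusion) equation.
    Each step adds dt times the discrete Laplacian to every pixel."""
    u = image.astype(float).copy()
    for _ in range(iterations):
        lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
               np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
        u += dt * lap
    return u

def sharpen(image, smoothed, amount=1.0):
    """Unsharp-mask sharpening: boost the detail removed by smoothing."""
    return image.astype(float) + amount * (image - smoothed)

# Usage sketch: smooth speckle first, then restore edge contrast.
# img = ...  2D numpy array of pixel intensities
# enhanced = sharpen(img, heat_filter(img))
```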
  • A second process includes subjecting the resulting image of the first process to a location method to identify initial edge points between amniotic fluid and other fetal or maternal structures. The location method automatically determines the leading and trailing regions of wall locations along an A-mode one-dimensional scan line.
  • A third process includes subjecting the image of the first process to an intensity-based segmentation process where dark pixels (representing fluid) are automatically separated from bright pixels (representing tissue and other structures).
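  • A minimal sketch of an intensity-based separation, assuming fluid appears as the darkest pixels of the enhanced image; the fixed-quantile threshold used here is an illustrative stand-in for the threshold computation of the actual sub-algorithm (FIG. 8B).

```python
import numpy as np

def intensity_segmentation(image, fluid_fraction=0.3):
    """Label pixels as candidate amniotic fluid (True) when they are darker
    than an automatically chosen intensity threshold.

    fluid_fraction is an illustrative parameter: the threshold is taken at
    that quantile of the image histogram, so roughly the darkest 30% of
    pixels are marked as potential fluid."""
    threshold = np.quantile(image, fluid_fraction)
    return image <= threshold

# fluid_mask = intensity_segmentation(enhanced_image)
```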
  • In a fourth process, the images resulting from the second and third processes are combined to produce a single image representing likely amniotic fluid regions.
  • In a fifth process, the combined image is cleaned to make the output image smooth and to remove extraneous structures such as the fetal head and the fetal bladder.
  • In a sixth process, boundary line contours are placed on each 2D image. Thereafter, the method then calculates the total 3D volume of amniotic fluid in the uterus.
  • In cases in which the uterus is too large to fit in a single 3D array of 2D scanplanes or a single 3D scancone of 3D distributed scanlines, especially as occurs during the second and third trimesters of pregnancy, preferred alternate embodiments of the invention allow for acquiring at least two 3D data sets, preferably four, each 3D data set having at least a partial ultrasonic view of the uterus, each partial view obtained from a different anatomical site of the patient.
  • In one embodiment a 3D array of 2D scanplanes is assembled such that the 3D array presents a composite image of the uterus that displays the amniotic fluid regions to provide the basis for calculation of amniotic fluid volumes. In a preferred alternate embodiment, the user acquires the 3D data sets in quarter sections of the uterus when the patient is in a supine position. In this 4-quadrant supine procedure, four image cones of data are acquired near the midpoint of each uterine quadrant at substantially equally spaced intervals between quadrant centers. Image processing as outlined above is conducted for each quadrant image, segmenting on the darker pixels or voxels associated with amniotic fluid. Correcting algorithms are applied to compensate for any quadrant-to-quadrant image cone overlap by registering and fixing one quadrant's image to another. The result is a fixed 3D mosaic image of the uterus and the amniotic fluid volumes or regions in the uterus from the four separate image cones.
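  • To illustrate the registration idea, the sketch below estimates the translational offset between two overlapping quadrant images by phase correlation; the rigid registration of the preferred embodiment (FIG. 16) also accounts for rotation, so this covers only the translational component and assumes both images have the same dimensions.

```python
import numpy as np

def translation_offset(fixed, moving):
    """Estimate the integer (row, col) shift that best aligns `moving` to
    `fixed` using phase correlation of the overlapping content."""
    F = np.fft.fft2(fixed)
    M = np.fft.fft2(moving)
    cross_power = F * np.conj(M)
    cross_power /= np.abs(cross_power) + 1e-12
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Shifts larger than half the image size wrap around to negative values.
    shifts = [p - s if p > s // 2 else p for p, s in zip(peak, corr.shape)]
    return tuple(shifts)

# d_row, d_col = translation_offset(quadrant_1, quadrant_2)
# quadrant_2 would then be shifted by (d_row, d_col) before fusing the mosaic.
```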
  • Similarly, in another preferred alternate embodiment, the user acquires one or more 3D image data sets of quarter sections of the uterus when the patient is in a lateral position. In this multi-image cone lateral procedure, each image cone of data is acquired along a lateral line at substantially equally spaced intervals. Each image cone is subjected to the image processing as outlined above, with emphasis given to segmenting on the darker pixels or voxels associated with amniotic fluid. Scanplanes showing common pixel or voxel overlaps are registered into a common coordinate system along the lateral line. Correcting algorithms are applied to compensate for any image cone overlap along the lateral line. The result is a fixed 3D mosaic image of the uterus and the amniotic fluid volumes or regions in the uterus from the four separate image cones.
  • In yet other preferred embodiments, at least two 3D scancones of 3D distributed scanlines are acquired at different anatomical sites, image processed, registered, and fused into a 3D mosaic image composite. Amniotic fluid volumes are then calculated.
  • The system and method further provides an automatic method to detect and correct for any contribution the fetal head provides to the amniotic fluid volume.
  • Systems, methods, and devices for image clarity of ultrasound-based images are described. Such systems, methods, and devices include improved transducer aiming and utilize time-domain deconvolution processes upon the non-stationary effects of ultrasound signals. The deconvolution processes apply algorithms to improve the clarity or resolution of ultrasonic images by suppressing reverberation of ultrasound echoes. The initially acquired and distorted ultrasound image is reconstructed to a clearer image by countering the effect of distortion operators. An improved point spread function (PSF) of the imaging system is applied, utilizing a deconvolution algorithm, to improve the image resolution and to remove reverberations by modeling them as noise.
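  • A minimal sketch of a time-domain deconvolution of a single A-line, assuming the PSF has already been estimated and that reverberation and noise are absorbed into a simple Tikhonov regularization term; the PSF estimation itself (parametric or non-parametric, sub-algorithms 400A and 400B) is not shown, and the regularization weight is an illustrative assumption.

```python
import numpy as np

def deconvolve_time_domain(rf_line, psf, reg=1e-2):
    """Recover a reflectivity estimate from one ultrasound A-line by
    time-domain least squares: minimize ||H x - y||^2 + reg*||x||^2,
    where H is the convolution matrix built from the (assumed known) PSF."""
    n = len(rf_line)
    # Build the convolution matrix column by column.
    H = np.zeros((n, n))
    for i in range(n):
        seg = psf[: n - i]
        H[i : i + len(seg), i] = seg
    # Regularized normal equations (Tikhonov) solved in the time domain.
    A = H.T @ H + reg * np.eye(n)
    return np.linalg.solve(A, H.T @ rf_line)

# Example with a short synthetic pulse and two point reflectors.
psf = np.array([0.2, 1.0, 0.5, 0.1])
x_true = np.zeros(64); x_true[20] = 1.0; x_true[35] = 0.6
y = np.convolve(x_true, psf)[:64]
x_est = deconvolve_time_domain(y, psf)
```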
  • As regards improved transducer aiming, particular embodiments employ novel applications of computer vision techniques to perform real-time analysis. First, a computer vision method is introduced: optical flow, a powerful motion analysis technique applied in many different research and commercial fields. Optical flow estimates the velocity field of an image series, and the velocity vectors provide information about the contents of the image series. In the current field of view, if the target undergoes large motion in a specific pattern, such as a consistent orientation of movement, the velocity information inside and around the target will differ from that of other parts of the field. Otherwise, there is no valuable information in the current field of view and the scanning position has to be adjusted.
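  • A minimal sketch of a dense optical-flow estimate between two consecutive frames using the classic Lucas-Kanade least-squares formulation; the window size and conditioning test are illustrative choices, and the embodiment's specific optical flow formulation (sub-algorithm 310) may differ. Regions containing a strongly moving target would show velocity magnitudes noticeably larger than the surrounding field, which is the cue used for aiming feedback.

```python
import numpy as np

def lucas_kanade_flow(frame1, frame2, window=7):
    """Estimate a dense velocity field between two frames using the classic
    Lucas-Kanade least-squares method over a local window."""
    f1 = frame1.astype(float)
    f2 = frame2.astype(float)
    Ix = np.gradient(f1, axis=1)          # spatial derivatives
    Iy = np.gradient(f1, axis=0)
    It = f2 - f1                          # temporal derivative
    half = window // 2
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for r in range(half, f1.shape[0] - half):
        for c in range(half, f1.shape[1] - half):
            ix = Ix[r - half:r + half + 1, c - half:c + half + 1].ravel()
            iy = Iy[r - half:r + half + 1, c - half:c + half + 1].ravel()
            it = It[r - half:r + half + 1, c - half:c + half + 1].ravel()
            A = np.stack([ix, iy], axis=1)
            AtA = A.T @ A
            if np.linalg.cond(AtA) < 1e4:     # skip flat, ill-conditioned patches
                u[r, c], v[r, c] = np.linalg.solve(AtA, -A.T @ it)
    return u, v   # per-pixel horizontal and vertical velocity components
```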
  • As regards analyzing the motions of organs and fluid flows within an examined subject, embodiments include new optical-flow-based methods for estimating heart motion from two-dimensional echocardiographic sequences, an optical-flow-guided active contour method for myocardial tracking in contrast echocardiography, and a method for shape-driven segmentation and tracking of the left ventricle.
  • As regards cannula insertion, ultrasound tracking of cannula motion is provided by a cannula fitted with echogenic ultrasound micro reflectors.
  • As regards sonic coupling gel media to improve ultrasound communication between a transducer and the examined subject, embodiments include an apparatus that dispenses a metered quantity of ultrasound coupling gel and enables one-handed gel application. The apparatus also preserves the gel in a de-gassed state (no air bubbles), preserves the gel in a sterile state (no contact between gel applicator and patient), includes a method for easy container refill, and preserves the shape and volume of existing gel application bottles.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • FIG. 1 is a side view of a microprocessor-controlled, hand-held ultrasound transceiver;
  • FIG. 2A is a depiction of the hand-held transceiver in use for scanning a patient;
  • FIG. 2B is a perspective view of the hand-held transceiver device sitting in a communication cradle;
  • FIG. 2C is a perspective view of an amniotic fluid volume measuring system;
  • FIG. 3 is an alternate embodiment of an amniotic fluid volume measuring system in schematic view of a plurality of transceivers in connection with a server;
  • FIG. 4 is another alternate embodiment of an amniotic fluid volume measuring system in a schematic view of a plurality of transceivers in connection with a server over a network;
  • FIG. 5A is a graphical representation of a plurality of scan lines forming a single scan plane;
  • FIG. 5B is a graphical representation of a plurality of scanplanes forming a three-dimensional array having a substantially conic shape;
  • FIG. 5C is a graphical representation of a plurality of 3D distributed scanlines emanating from the transceiver forming a scancone;
  • FIG. 6 is a depiction of the hand-held transceiver placed laterally on a patient trans-abdominally to transmit ultrasound and receive ultrasound echoes for processing to determine amniotic fluid volumes;
  • FIG. 7 shows a block diagram overview of the two-dimensional and three-dimensional Input, Image Enhancement, Intensity-Based Segmentation, Edge-Based Segmentation, Combine, Polish, Output, and Compute algorithms to visualize and determine the volume or area of amniotic fluid;
  • FIG. 8A depicts the sub-algorithms of Image Enhancement;
  • FIG. 8B depicts the sub-algorithms of Intensity-Based Segmentation;
  • FIG. 8C depicts the sub-algorithms of Edge-Based Segmentation;
  • FIG. 8D depicts the sub-algorithms of the Polish algorithm, including Close, Open, Remove Deep Regions, and Remove Fetal Head Regions;
  • FIG. 8E depicts the sub-algorithms of the Remove Fetal Head Regions sub-algorithm;
  • FIG. 8F depicts the sub-algorithms of the Hough Transform sub-algorithm;
  • FIG. 9 depicts the operation of a circular Hough transform algorithm;
  • FIG. 10 shows results of sequentially applying the algorithm steps on a sample image;
  • FIG. 11 illustrates a set of intermediate images of the fetal head detection process;
  • FIG. 12 presents a 4-panel series of sonographer amniotic fluid pocket outlines and the algorithm output amniotic fluid pocket outlines;
  • FIG. 13 illustrates a 4-quadrant supine procedure to acquire multiple image cones;
  • FIG. 14 illustrates an in-line lateral line procedure to acquire multiple image cones;
  • FIG. 15 is a block diagram overview of the rigid registration and correcting algorithms used in processing multiple image cone data sets;
  • FIG. 16 is a block diagram of the steps in the rigid registration algorithm;
  • FIG. 17A is an example image showing a first view of a fixed scanplane;
  • FIG. 17B is an example image showing a second view of a moving scanplane having some voxels in common with the first scanplane;
  • FIG. 17C is a composite image of the first (fixed) and second (moving) images;
  • FIG. 18A is an example image showing a first view of a fixed scanplane;
  • FIG. 18B is an example image showing a second view of a moving scanplane having some voxels in common with the first view and a third view;
  • FIG. 18C is a third view of a moving scanplane having some voxels in common with the second view;
  • FIG. 18D is a composite image of the first (fixed), second (moving), and third (moving) views;
  • FIG. 19 illustrates a 6-section supine procedure to acquire multiple image cones around the center point of uterus of a patient in a supine procedure;
  • FIG. 20 is a block diagram algorithm overview of the registration and correcting algorithms used in processing the 6-section multiple image cone data sets depicted in FIG. 19;
  • FIG. 21 is an expansion of the Image Enhancement and Segmentation block 1010 of FIG. 20;
  • FIG. 22 is an expansion of the RigidRegistration block 1014 of FIG. 20;
  • FIG. 23 is a 4-panel image set that shows the effect of multiple iterations of the heat filter applied to an original image;
  • FIG. 24 shows the effect of shock filtering and of combined heat-and-shock filtering on the pixel values of the image;
  • FIG. 25 is a 7-panel image set progressively receiving application of the image enhancement and segmentation algorithms of FIG. 21;
  • FIG. 26 is a pixel difference kernel for obtaining X and Y derivatives to determine pixel gradient magnitudes for edge-based segmentation; and
  • FIG. 27 is a 3-panel image set showing the progressive demarcation or edge detection of organ wall interfaces arising from edge-based segmentation algorithms.
  • FIGS. 1A-D depicts a partial schematic and a partial isometric view of a transceiver, a scan cone comprising a rotational array of scan planes, and a scan plane of the array of an ultrasound harmonic imaging system;
  • FIG. 2 depicts a partial schematic and partial isometric and side view of a transceiver, and a scan cone array comprised of 3D-distributed scan lines in alternate embodiment of an ultrasound harmonic imaging system;
  • FIG. 3 is a schematic illustration of a server-accessed local area network in communication with a plurality of ultrasound harmonic imaging systems;
  • FIG. 4 is a schematic illustration of the Internet in communication with a plurality of ultrasound harmonic imaging systems;
  • FIG. 5 schematically depicts a method flow chart algorithm 120 to acquire a clarity enhanced ultrasound image;
  • FIG. 6 is an expansion of sub-algorithm 150 of master algorithm 120 of FIG. 5;
  • FIG. 7 is an expansion of sub-algorithms 200 of FIG. 5;
  • FIG. 8 is an expansion of sub-algorithm 300 of master algorithm illustrated in FIG. 5;
  • FIG. 9 is an expansion of sub-algorithms 400A and 400B of FIG. 5;
  • FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5;
  • FIG. 11 depicts a logarithm of a Cepstrum;
  • FIGS. 12A-C depict histogram waveform plots derived from water tank pulse-echo experiments undergoing parametric and non-parametric analysis;
  • FIGS. 13-25 are bladder sonograms that depict image clarity after undergoing image enhancement processing by algorithms described in FIGS. 5-10;
  • FIG. 13 is an unprocessed image that will undergo image enhancement processing;
  • FIG. 14 illustrates an enclosed portion of a magnified region of FIG. 13;
  • FIG. 15 is the resultant image of FIG. 13 that has undergone image processing via nonparametric estimation under sub-algorithm 400A;
  • FIG. 16 is the resultant image of FIG. 13 that has undergone image processing via parametric estimation under sub-algorithm 400B;
  • FIG. 17 is the resultant image of an alternate image processing embodiment using a Wiener filter;
  • FIG. 18 is another unprocessed image that will undergo image enhancement processing;
  • FIG. 19 illustrates an enclosed portion of a magnified region of FIG. 18;
  • FIG. 20 is the resultant image of FIG. 18 that has undergone image processing via nonparametric estimation under sub-algorithm 400A;
  • FIG. 21 is the resultant image of FIG. 18 that has undergone image processing via parametric estimation under sub-algorithm 400B;
  • FIG. 22 is another unprocessed image that will undergo image enhancement processing;
  • FIG. 23 illustrates an enclosed portion of a magnified region of FIG. 22;
  • FIG. 24 is the resultant image of FIG. 22 that has undergone image processing via nonparametric estimation under sub-algorithm 400A;
  • FIG. 25 is the resultant image of FIG. 22 that has undergone image processing via parametric estimation under sub-algorithm 400B;
  • FIG. 26 depicts a schematic example of a time velocity map derived from sub-algorithm 310;
  • FIG. 27 depicts another schematic example of a time velocity map derived from sub-algorithm 310;
  • FIG. 28 illustrates a seven panel image series of a beating heart ventricle that will undergo the optical flow processes of sub-algorithm 300 in which at least two images are required;
  • FIG. 29 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 presented in a 2D flow pattern after undergoing sub-algorithm 310;
  • FIG. 30 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the X-axis direction or phi direction after undergoing sub-algorithm 310;
  • FIG. 31 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the Y-axis or radial direction after undergoing sub-algorithm 310;
  • FIG. 32 illustrates a 3D optical vector plot after undergoing sub-algorithm 310 and corresponds to the top row of FIG. 29;
  • FIG. 33 illustrates a 3D optical vector plot in the phi direction after undergoing sub-algorithm 310 and corresponds to FIG. 30 at T=1;
  • FIG. 34 illustrates a 3D optical vector plot in the radial direction after undergoing sub-algorithm 310 and corresponds to FIG. 31 at T=1;
  • FIG. 35 illustrates a 3D optical vector plot in the radial direction above a Y-axis threshold setting of 0.6 after undergoing sub-algorithm 310 and corresponds to FIG. 34 with values below the threshold T of 0.6 set to 0;
  • FIGS. 36A-G depicts embodiments of the sonic gel dispenser;
  • FIGS. 37 and 38 are diagrams showing one embodiment of the present invention;
  • FIG. 39 is a diagram showing additional detail for a needle shaft to be used with one embodiment of the invention;
  • FIGS. 40A and 40B are diagrams showing close-up views of surface features of the needle shaft shown in FIG. 39;
  • FIG. 41 is a diagram showing imaging components for use with the needle shaft shown in FIG. 39;
  • FIG. 42 is a diagram showing a representation of an image produced by the imaging components shown in FIG. 41;
  • FIG. 43 is a system diagram of an embodiment of the present invention;
  • FIG. 44 is a system diagram of an example embodiment showing additional detail for one of the components shown in FIG. 43; and
  • FIGS. 45 and 46 are flowcharts of a method of displaying the trajectory of a cannula in accordance with an embodiment of the present invention.
  • DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT
  • The preferred portable embodiment of the ultrasound transceiver of the amniotic fluid volume measuring system is shown in FIGS. 1-4. The transceiver 10 includes a handle 12 having a trigger 14 and a top button 16, a transceiver housing 18 attached to the handle 12, and a transceiver dome 20. A display 24 for user interaction is attached to the transceiver housing 18 at an end opposite the transceiver dome 20. Housed within the transceiver 10 is a single element transducer (not shown) that converts ultrasound waves to electrical signals. The transceiver 10 is held in position against the body of a patient by a user for image acquisition and signal processing. In operation, the transceiver 10 transmits a radio frequency ultrasound signal at substantially 3.7 MHz to the body and then receives a returning echo signal. To accommodate different patients having a variable range of obesity, the transceiver 10 can be adjusted to transmit a range of probing ultrasound energy from approximately 2 MHz to approximately 10 MHz radio frequencies.
  • The top button 16 selects for different acquisition volumes. The transceiver is controlled by a microprocessor and software associated with the microprocessor and a digital signal processor of a computer system. As used in this invention, the term “computer system” broadly comprises any microprocessor-based or other computer system capable of executing operating instructions and manipulating data, and is not limited to a traditional desktop or notebook computer. The display 24 presents alphanumeric or graphic data indicating the proper or optimal positioning of the transceiver 10 for initiating a series of scans. The transceiver 10 is configured to initiate the series of scans to obtain and present 3D images as either a 3D array of 2D scanplanes or as a single 3D scancone of 3D distributed scanlines. A suitable transceiver is the DCD372 made by Diagnostic Ultrasound. In alternate embodiments, the two- or three-dimensional image of a scan plane may be presented in the display 24.
  • Although the preferred ultrasound transceiver is described above, other transceivers may also be used. For example, the transceiver need not be battery-operated or otherwise portable, need not have a top-mounted display 24, and may include many other features or differences. The display 24 may be a liquid crystal display (LCD), a light emitting diode (LED), a cathode ray tube (CRT), or any suitable display capable of presenting alphanumeric data or graphic images.
  • FIG. 2A depicts the hand-held transceiver 10 in use for scanning a patient. The transceiver 10 is positioned over the patient's abdomen by a user holding the handle 12 to place the transceiver housing 18 against the patient's abdomen. The top button 16 is centrally located on the handle 12. Once optimally positioned over the abdomen for scanning, the transceiver 10 transmits an ultrasound signal at substantially 3.7 MHz into the uterus. The transceiver 10 receives a return ultrasound echo signal emanating from the uterus and presents it on the display 24.
  • FIG. 2B is a perspective view of the hand-held transceiver device sitting in a communication cradle. The transceiver 10 sits in a communication cradle 42 via the handle 12. This cradle can be connected to a standard USB port of any personal computer, enabling all the data on the device to be transferred to the computer and enabling new programs to be transferred into the device from the computer.
  • FIG. 2C is a perspective view of an amniotic fluid volume measuring system 5A. The system 5A includes the transceiver 10 cradled in the cradle 42 that is in signal communication with a computer 52. The transceiver 10 sits in a communication cradle 42 via the handle 12. This cradle can be connected to a standard USB port of any personal computer 52, enabling all the data on the transceiver 10 to be transferred to the computer for analysis and determination of amniotic fluid volume.
  • FIG. 3 depicts an alternate embodiment of an amniotic fluid volume measuring system 5B in a schematic view. The system 5B includes a plurality of systems 5A in signal communication with a server 56. As illustrated, each transceiver 10 is in signal connection with the server 56 through connections via a plurality of computers 52. FIG. 3, by example, depicts each transceiver 10 being used to send probing ultrasound radiation to a uterus of a patient and to subsequently retrieve ultrasound echoes returning from the uterus, convert the ultrasound echoes into digital echo signals, store the digital echo signals, and process the digital echo signals by algorithms of the invention. A user holds the transceiver 10 by the handle 12 to send probing ultrasound signals and to receive incoming ultrasound echoes. The transceiver 10 is placed in the communication cradle 42 that is in signal communication with a computer 52, and operates as an amniotic fluid volume measuring system. Two amniotic fluid volume-measuring systems are depicted as representative though fewer or more systems may be used. As used in this invention, a “server” can be any computer software or hardware that responds to requests or issues commands to or from a client. Likewise, the server may be accessible by one or more client computers via the Internet, or may be in communication over a LAN or other network.
  • Each amniotic fluid volume measuring system includes the transceiver 10 for acquiring data from a patient. The transceiver 10 is placed in the cradle 42 to establish signal communication with the computer 52. Signal communication as illustrated is by a wired connection from the cradle 42 to the computer 52. Signal communication between the transceiver 10 and the computer 52 may also be by wireless means, for example, infrared signals or radio frequency signals. The wireless means of signal communication may occur between the cradle 42 and the computer 52, the transceiver 10 and the computer 52, or the transceiver 10 and the cradle 42.
  • A preferred first embodiment of the amniotic fluid volume measuring system includes each transceiver 10 being separately used on a patient and sending signals proportionate to the received and acquired ultrasound echoes to the computer 52 for storage. Residing in each computer 52 are imaging programs having instructions to prepare and analyze a plurality of one-dimensional (1D) images from the stored signals and to transform the plurality of 1D images into a plurality of 2D scanplanes. The imaging programs also present 3D renderings from the plurality of 2D scanplanes. Also residing in each computer 52 are instructions to perform the additional ultrasound image enhancement procedures, including instructions to implement the image processing algorithms.
  • A preferred second embodiment of the amniotic fluid volume measuring system is similar to the first embodiment, but the imaging programs and the instructions to perform the additional ultrasound enhancement procedures are located on the server 56. Each computer 52 from each amniotic fluid volume measuring system receives the acquired signals from the transceiver 10 via the cradle 42 and stores the signals in the memory of the computer 52. The computer 52 subsequently retrieves the imaging programs and the instructions to perform the additional ultrasound enhancement procedures from the server 56. Thereafter, each computer 52 prepares the 1D images, 2D images, 3D renderings, and enhanced images from the retrieved imaging and ultrasound enhancement procedures. Results from the data analysis procedures are sent to the server 56 for storage.
  • A preferred third embodiment of the amniotic fluid volume measuring system is similar to the first and second embodiments, but the imaging programs and the instructions to perform the additional ultrasound enhancement procedures are located on the server 56 and executed on the server 56. Each computer 52 from each amniotic fluid volume measuring system receives the acquired signals from the transceiver 10 via the cradle 42 and stores the acquired signals in the memory of the computer 52. The computer 52 subsequently sends the stored signals to the server 56. In the server 56, the imaging programs and the instructions to perform the additional ultrasound enhancement procedures are executed to prepare the 1D images, 2D images, 3D renderings, and enhanced images from the signals stored on the server 56. Results from the data analysis procedures are kept on the server 56, or alternatively, sent to the computer 52.
  • FIG. 4 is another embodiment of an amniotic fluid volume measuring system 5C presented in schematic view. The system 5C includes a plurality of amniotic fluid measuring systems 5A connected to a server 56 over the Internet or other network 64. FIG. 4 represents any of the first, second, or third embodiments of the invention advantageously deployed to other servers and computer systems through connections via the network.
  • FIG. 5A is a graphical representation of a plurality of scan lines forming a single scan plane. FIG. 5A illustrates how ultrasound signals are used to make analyzable images, more specifically how a series of one-dimensional (1D) scanlines are used to produce a two-dimensional (2D) image. The 1D and 2D operational aspects of the single element transducer housed in the transceiver 10 are seen as it rotates mechanically about an angle φ. A scanline 214 of length r migrates between a first limiting position 218 and a second limiting position 222 as determined by the value of the angle φ, creating a fan-like 2D scanplane 210. In one preferred form, the transceiver 10 operates substantially at a 3.7 MHz frequency and creates an approximately 18 cm deep scan line 214 that migrates within the angle φ in increments of approximately 0.027 radians. A first motor tilts the transducer approximately 60° clockwise and then counterclockwise, forming the fan-like 2D scanplane presenting an approximate 120° 2D sector image. A plurality of scanlines, each scanline substantially equivalent to scanline 214, is recorded between the first limiting position 218 and the second limiting position 222 formed by the unique tilt angle φ. The plurality of scanlines between the two extremes forms a scanplane 210. In the preferred embodiment, each scanplane contains 77 scan lines, although the number of lines can vary within the scope of this invention. The tilt angle φ sweeps through angles approximately between −60° and +60° for a total arc of approximately 120°.
  • FIG. 5B is a graphical representation of a plurality of scanplanes forming a three-dimensional array (3D) 240 having a substantially conic shape. FIG. 5B illustrates how a 3D rendering is obtained from the plurality of 2D scanplanes. Within each scanplane 210 are the plurality of scanlines, each scanline equivalent to the scanline 214 and sharing a common rotational angle θ. In the preferred embodiment, each scanplane contains 77 scan lines, although the number of lines can vary within the scope of this invention. Each 2D sector image scanplane 210 with tilt angle φ and range r (equivalent to the scanline 214) collectively forms a 3D conic array 240 with rotation angle θ. After gathering the 2D sector image, a second motor rotates the transducer by 3.75° or 7.5° to gather the next 120° sector image. This process is repeated until the transducer is rotated through 180°, resulting in the cone-shaped 3D conic array 240 data set with 24 planes rotationally assembled in the preferred embodiment. The conic array could have fewer or more planes rotationally assembled. For example, preferred alternate embodiments of the conic array could include at least two scanplanes, or a range of scanplanes from 2 to 48 scanplanes. The upper range of the scanplanes can be greater than 48 scanplanes. The tilt angle φ indicates the tilt of the scanline from the centerline in the 2D sector image, and the rotation angle θ identifies the particular rotation plane the sector image lies in. Therefore, any point in this 3D data set can be isolated using coordinates expressed as three parameters, P(r,φ,θ).
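  • As an illustration of the P(r,φ,θ) parameterization, the sketch below converts a scan sample to Cartesian coordinates, assuming the cone axis is taken as z, φ is the scanline tilt from that axis, and θ is the scanplane rotation about it; the axis and sign conventions are assumptions made only for the example.

```python
import numpy as np

def scan_point_to_cartesian(r, phi_deg, theta_deg):
    """Convert a sample P(r, phi, theta) from the conic scan geometry to
    Cartesian coordinates, taking the cone axis as z, phi as the scanline
    tilt from that axis, and theta as the scanplane rotation about it."""
    phi = np.radians(phi_deg)
    theta = np.radians(theta_deg)
    x = r * np.sin(phi) * np.cos(theta)
    y = r * np.sin(phi) * np.sin(theta)
    z = r * np.cos(phi)
    return x, y, z

# One scanplane: 77 scanlines tilted between -60 and +60 degrees;
# 24 scanplanes rotated in 7.5 degree steps through 180 degrees.
tilts = np.linspace(-60, 60, 77)
rotations = np.arange(0, 180, 7.5)
print(scan_point_to_cartesian(10.0, tilts[0], rotations[3]))
```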
  • As the scanlines are transmitted and received, the returning echoes are interpreted as analog electrical signals by a transducer, converted to digital signals by an analog-to-digital converter, and conveyed to the digital signal processor of the computer system for storage and analysis to determine the locations of the amniotic fluid walls. The computer system is representationally depicted in FIGS. 3 and 4 and includes a microprocessor, random access memory (RAM), or other memory for storing processing instructions and data generated by the transceiver 10.
  • FIG. 5C is a graphical representation of a plurality of 3D-distributed scanlines emanating from the transceiver 10 forming a scancone 300. The scancone 300 is formed by a plurality of 3D-distributed scanlines that comprise a plurality of internal and peripheral scanlines. The scanlines are one-dimensional ultrasound A-lines that emanate from the transceiver 10 at different coordinate directions and that, taken as an aggregate, form a conic shape. The 3D-distributed A-lines (scanlines) are not necessarily confined within a scanplane, but instead are directed to sweep throughout the interior and along the periphery of the scancone 300. The 3D-distributed scanlines thus occupy not only the positions of a given scanplane in a 3D array of 2D scanplanes, but also the inter-scanplane spaces, from the conic axis to and including the conic periphery. The transceiver 10 shows the same illustrated features from FIG. 1, but is configured to distribute the ultrasound A-lines throughout 3D space in different coordinate directions to form the scancone 300.
  • The internal scanlines are represented by scanlines 312A-C. The number and location of the internal scanlines emanating from the transceiver 10 are chosen so that the internal scanlines are distributed within the scancone 300, at different positional coordinates, sufficiently to visualize structures or images within the scancone 300. The internal scanlines are not peripheral scanlines. The peripheral scanlines are represented by scanlines 314A-F and occupy the conic periphery, thus representing the peripheral limits of the scancone 300.
  • FIG. 6 is a depiction of the hand-held transceiver placed on a patient trans-abdominally to transmit probing ultrasound and receive ultrasound echoes for processing to determine amniotic fluid volumes. The transceiver 10 is held by the handle 12 to position over a patient to measure the volume of amniotic fluid in an amniotic sac over a baby. A plurality of axes for describing the orientation of the baby, the amniotic sac, and mother is illustrated. The plurality of axes includes a vertical axis depicted on the line L(R)-L(L) for left and right orientations, a horizontal axis LI-LS for inferior and superior orientations, and a depth axis LA-LP for anterior and posterior orientations.
  • FIG. 6 is representative of a preferred data acquisition protocol used for amniotic fluid volume determination. In this protocol, the transceiver 10 is the hand-held 3D ultrasound device (for example, model DCD372 from Diagnostic Ultrasound) and is used to image the uterus trans-abdominally. Initially, during the targeting phase, the patient is in a supine position and the device is operated in a 2D continuous acquisition mode. A 2D continuous mode is where the data is continuously acquired in 2D and presented as a scanplane similar to the scanplane 210 on the display 24 while an operator physically moves the transceiver 10. The operator moves the transceiver 10 around on the maternal abdomen, presses the trigger 14 of the transceiver 10, and continuously acquires real-time feedback presented in 2D on the display 24. Amniotic fluid, where present, visually appears as dark regions along with an alphanumeric indication of amniotic fluid area (for example, in cm2) on the display 24. Based on this real-time information in terms of the relative position of the transceiver 10 to the fetus, the operator decides which side of the uterus has more amniotic fluid by the presentation on the display 24. The side having more amniotic fluid presents larger, darker regions on the display 24. Accordingly, the side displaying a large dark region registers a greater alphanumeric area, while the side with less fluid displays smaller dark regions and proportionately registers a smaller alphanumeric area on the display 24. While amniotic fluid is present throughout the uterus, its distribution in the uterus depends upon where and how the fetus is positioned within the uterus. There is usually less amniotic fluid around the fetus's spine and back and more amniotic fluid in front of its abdomen and around the limbs.
  • Based on fetal position information acquired from data gathered under continuous acquisition mode, the patient is placed in a lateral recumbent position such that the fetus is displaced towards the ground creating a large pocket of amniotic fluid close to abdominal surface where the transceiver 10 can be placed as shown in FIG. 6. For example, if large fluid pockets are found on the right side of the patient, the patient is asked to turn with the left side down and if large fluid pockets are found on the left side, the patient is asked to turn with the right side down.
  • After the patient has been placed in the desired position, the transceiver 10 is again operated in the 2D continuous acquisition mode and is moved around on the lateral surface of the patient's abdomen. The operator finds the location that shows the largest amniotic fluid area based on acquiring the largest dark region imaged and the largest alphanumeric value displayed on the display 24. At the lateral abdominal location providing the largest dark region, the transceiver 10 is held in a fixed position and the trigger 14 is released to acquire a 3D image comprising a set of arrayed scanplanes. The 3D image presents a rotational array of the scanplanes 210 similar to the 3D array 240.
  • In a preferred alternate data acquisition protocol, the operator can reposition the transceiver 10 to different abdominal locations to acquire new 3D images comprised of different scanplane arrays similar to the 3D array 240. Multiple scan cones obtained from different positions provide the operator the ability to image the entire amniotic fluid region from different view points. In the case of a single image cone being too small to accommodate a large AFV measurement, obtaining multiple 3D array 240 image cones ensures that the total volume of large AFV regions is determined. Multiple 3D images may also be acquired by pressing the top button 16 to select multiple conic arrays similar to the 3D array 240.
  • Depending on the position of the fetus relative to the location of the transceiver 10, a single image scan may present an underestimated volume of AFV due to amniotic fluid pockets that remain hidden behind the limbs of the fetus. The hidden amniotic fluid pockets present as unquantifiable shadow-regions.
  • To guard against underestimating AFV, the transceiver 10 can be repeatedly repositioned and the patient rescanned to obtain more than one ultrasound view and thereby maximize detection of amniotic fluid pockets. Repositioning and rescanning provide multiple views as a plurality of the 3D array 240 image cones. Acquiring multiple image cones improves the probability of obtaining initial estimates of AFV that otherwise could remain undetected and un-quantified in a single scan.
  • In an alternative scan protocol, the user determines and scans at only one location on the entire abdomen that shows the maximum amniotic fluid area while the patient is in the supine position. As before, when the user presses the top button 16, 2D scanplane images equivalent to the scanplane 210 are continuously acquired and the amniotic fluid area on every image is automatically computed. The user selects the one location that shows the maximum amniotic fluid area. At this location, as the user releases the scan button, a full 3D data cone is acquired and stored in the device's memory.
  • FIG. 7 shows a block diagram overview of the image enhancement, segmentation, and polishing algorithms of the amniotic fluid volume measuring system. The enhancement, segmentation, and polishing algorithms are applied to each scanplane 210 or to the entire scan cone 240 to automatically obtain amniotic fluid regions. For scanplanes substantially equivalent to scanplane 210, the algorithms are expressed in two-dimensional terms and use formulas to convert scanplane pixels (picture elements) into area units. For scan cones substantially equivalent to the 3D conic array 240, the algorithms are expressed in three-dimensional terms and use formulas to convert voxels (volume elements) into volume units.
  • The algorithms expressed in 2D terms are used during the targeting phase where the operator trans-abdominally positions and repositions the transceiver 10 to obtain real-time feedback about the amniotic fluid area in each scanplane. The algorithms expressed in 3D terms are used to obtain the total amniotic fluid volume computed from the voxels contained within the calculated amniotic fluid regions in the 3D conic array 240.
  • FIG. 7 represents an overview of a preferred method of the invention and includes a sequence of algorithms, many of which have sub-algorithms described in more specific detail in FIGS. 8A-F. FIG. 7 begins with inputting data of an unprocessed image at step 410. After unprocessed image data 410 is entered (e.g., read from memory, scanned, or otherwise acquired), it is automatically subjected to an image enhancement algorithm 418 that reduces the noise in the data (including speckle noise) using one or more equations while preserving the salient edges on the image using one or more additional equations. Next, the enhanced images are segmented by two different methods whose results are eventually combined. A first segmentation method applies an intensity-based segmentation algorithm 422 that determines all pixels that are potentially fluid pixels based on their intensities. A second segmentation method applies an edge-based segmentation algorithm 438 that relies on detecting the fluid and tissue interfaces. The images obtained by the first segmentation algorithm 422 and the images obtained by the second segmentation algorithm 438 are brought together via a combination algorithm 442 to provide a substantially segmented image. The segmented image obtained from the combination algorithm 442 is then subjected to a polishing algorithm 464 in which the segmented image is cleaned up by filling gaps with pixels and removing unlikely regions. The image obtained from the polishing algorithm 464 is outputted 480 for calculation of areas and volumes of segmented regions-of-interest. Finally, the area or the volume of the segmented region-of-interest is computed 484 by multiplying pixels by a first resolution factor to obtain area, or voxels by a second resolution factor to obtain volume. For example, for pixels having a size of 0.8 mm by 0.8 mm, the first resolution or conversion factor for pixel area is equivalent to 0.64 mm2, and the second resolution or conversion factor for voxel volume is equivalent to 0.512 mm3. Different unit lengths for pixels and voxels may be assigned, with a proportional change in pixel area and voxel volume conversion factors.
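  • For illustration only, the final computation step 484 can be sketched in a few lines of Python/NumPy, assuming a binary mask of segmented fluid pixels or voxels is already available from the preceding steps; the function names and the use of NumPy are illustrative assumptions, not part of the invention:

```python
import numpy as np

def fluid_area_mm2(mask_2d, pixel_mm=0.8):
    # Scanplane area: count of segmented pixels times the per-pixel area
    # (0.8 mm x 0.8 mm pixels give 0.64 mm^2 per pixel).
    return int(np.count_nonzero(mask_2d)) * pixel_mm ** 2

def fluid_volume_mm3(mask_3d, voxel_mm=0.8):
    # Scan-cone volume: count of segmented voxels times the per-voxel volume
    # (0.8 mm cubic voxels give 0.512 mm^3 per voxel).
    return int(np.count_nonzero(mask_3d)) * voxel_mm ** 3

# Example: a 10 x 10 patch of fluid pixels at 0.8 mm spacing covers 64 mm^2.
print(fluid_area_mm2(np.ones((10, 10), dtype=bool)))
```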
  • The enhancement, segmentation and polishing algorithms depicted in FIG. 7 for measuring amniotic fluid areas or volumes are not limited to scanplanes assembled into rotational arrays equivalent to the 3D array 240. As additional examples, the enhancement, segmentation and polishing algorithms depicted in FIG. 7 apply to translation arrays and wedge arrays. Translation arrays are substantially rectilinear image plane slices from incrementally repositioned ultrasound transceivers that are configured to acquire ultrasound rectilinear scanplanes separated by regular or irregular rectilinear spaces. The translation arrays can be made from transceivers configured to advance incrementally, or may be hand-positioned incrementally by an operator. The operator obtains a wedge array from ultrasound transceivers configured to acquire wedge-shaped scanplanes separated by regular or irregular angular spaces, and either mechanistically advanced or hand-tilted incrementally. Any number of scanplanes can be either translationally assembled or wedge-assembled, but preferably more than 2 scanplanes are used.
  • Other preferred embodiments of the enhancement, segmentation and polishing algorithms depicted in FIG. 7 may be applied to images formed by line arrays, either spiral distributed or reconstructed random-lines. The line arrays are defined using points identified by the coordinates expressed by the three parameters, P(r,φ,θ), where the values of r, φ, and θ can vary.
  • The enhancement, segmentation and polishing algorithms depicted in FIG. 7 are not limited to ultrasound applications but may be employed in other imaging technologies utilizing scanplane arrays or individual scanplanes. For example, biological-based and non-biological-based images acquired using infrared, visible light, ultraviolet light, microwave, x-ray computed tomography, magnetic resonance, gamma rays, and positron emission are images suitable for the algorithms depicted in FIG. 7. Furthermore, the algorithms depicted in FIG. 7 can be applied to facsimile transmitted images and documents.
  • FIGS. 8A-E depict expanded details of the preferred embodiments of enhancement, segmentation, and polishing algorithms described in FIG. 7. Each of the following more detailed algorithms is implemented either on the transceiver 10 itself, or on the host computer 52 or the server 56 computer to which the ultrasound data is transferred.
  • FIG. 8A depicts the sub-algorithms of Image Enhancement. The sub-algorithms include a heat filter 514 to reduce noise and a shock filter 518 to sharpen edges. A combination of the heat and shock filters works very well at reducing noise and sharpening the data while preserving the significant discontinuities. First, the noisy signal is filtered using a 1D heat filter (Equation E1 below), which results in the reduction of noise and smoothing of edges. This step is followed by a shock-filtering step 518 (Equation E2 below), which results in the sharpening of the blurred signal. Noise reduction and edge sharpening are achieved by application of the following equations E1-E2. The algorithm of the heat filter 514 uses a heat equation E1. The heat equation E1 in partial differential equation (PDE) form for image processing is expressed as:
  • ∂u/∂t = ∂²u/∂x² + ∂²u/∂y²,   E1
  • where u is the image being processed. The image u is 2D, and is comprised of an array of pixels arranged in rows along the x-axis, and an array of pixels arranged in columns along the y-axis. The pixel intensity of each pixel in the image u has an initial input image pixel intensity (I) defined as u0=I. The value of I depends on the application, and commonly occurs within ranges consistent with the application. For example, I can be as low as 0 to 1, or occupy middle ranges between 0 to 127 or 0 to 512. Similarly, I may have values occupying higher ranges of 0 to 1024 and 0 to 4096, or greater. The heat equation E1 results in a smoothing of the image and is equivalent to Gaussian filtering of the image. The larger the number of iterations for which it is applied, the more the input image is smoothed or blurred and the more the noise is reduced.
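  • A minimal sketch of an explicit finite-difference implementation of equation E1 is shown below for illustration; it assumes wrap-around image borders and an illustrative step size, and it stands in for, rather than reproduces, the heat filter 514:

```python
import numpy as np

def heat_filter(u, iterations=10, step=0.1):
    # Explicit diffusion per equation E1: each iteration adds a fraction of the
    # discrete Laplacian, so more iterations give a smoother, more blurred image.
    u = u.astype(float).copy()
    for _ in range(iterations):
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
               np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
        u += step * lap
    return u
```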
  • The shock filter 518 is a PDE used to sharpen images as detailed below. The two-dimensional shock filter E2 is expressed as:
  • ∂u/∂t = −F(ℓ(u))·‖∇u‖,   E2
  • where u is the image being processed whose initial value is the input image pixel intensity (I): u0=I, the ℓ(u) term is the Laplacian of the image u, F is a function of the Laplacian, and ‖∇u‖ is the 2D gradient magnitude of image intensity defined by equation E3.

  • ‖∇u‖ = √(ux² + uy²),   E3
      • where
        • ux² = the square of the partial derivative of the pixel intensity (u) along the x-axis,
        • uy² = the square of the partial derivative of the pixel intensity (u) along the y-axis, and
        • the Laplacian ℓ(u) of the image, u, is expressed in equation E4 as

  • ℓ(u) = uxx·ux² + 2·uxy·ux·uy + uyy·uy²   E4
      • where equation E4 relates to equation E1 as follows:
        • ux is the first partial derivative ∂u/∂x of u along the x-axis,
        • uy is the first partial derivative ∂u/∂y of u along the y-axis,
        • ux² is the square of the first partial derivative of u along the x-axis,
        • uy² is the square of the first partial derivative of u along the y-axis,
        • uxx is the second partial derivative ∂²u/∂x² of u along the x-axis,
        • uyy is the second partial derivative ∂²u/∂y² of u along the y-axis,
        • uxy is the mixed partial derivative ∂²u/∂x∂y of u along the x- and y-axes, and
      • the sign of the function F modifies the Laplacian by the image gradient values selected to avoid placing spurious edges at points with small gradient values:

  • F(ℓ(u)) = 1 if ℓ(u) > 0 and ‖∇u‖ > t; = −1 if ℓ(u) < 0 and ‖∇u‖ > t; = 0 otherwise,
        • where t is a threshold on the pixel gradient value ‖∇u‖.
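  • The shock filter of equation E2 can likewise be sketched with finite differences; in the illustration below a simple Laplacian is a stand-in for the directional second derivative ℓ(u) of equation E4, and the iteration count, step size, and gradient threshold are assumptions of the sketch:

```python
import numpy as np

def shock_filter(u, iterations=10, step=0.1, grad_threshold=1.0):
    # Equation E2: move each pixel against the sign of a second-derivative
    # measure, scaled by the gradient magnitude, to re-sharpen blurred edges.
    u = u.astype(float).copy()
    for _ in range(iterations):
        ux = (np.roll(u, -1, axis=1) - np.roll(u, 1, axis=1)) / 2.0
        uy = (np.roll(u, -1, axis=0) - np.roll(u, 1, axis=0)) / 2.0
        grad = np.sqrt(ux ** 2 + uy ** 2)
        lap = (np.roll(u, 1, axis=0) + np.roll(u, -1, axis=0) +
               np.roll(u, 1, axis=1) + np.roll(u, -1, axis=1) - 4.0 * u)
        F = np.sign(lap)
        F[grad <= grad_threshold] = 0.0   # avoid spurious edges at small gradients
        u -= step * F * grad              # u_t = -F(l(u)) * ||grad u||
    return u
```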
  • The combination of heat filtering and shock filtering produces an enhanced image ready to undergo the intensity-based and edge-based segmentation algorithms as discussed below.
  • FIG. 8B depicts the sub-algorithms of Intensity-Based Segmentation (step 422 in FIG. 7). The intensity-based segmentation step 422 uses a “k-means” intensity clustering 522 technique where the enhanced image is subjected to a categorizing “k-means” clustering algorithm. The “k-means” algorithm categorizes pixel intensities into white, gray, and black pixel groups. Given the number of desired clusters or groups of intensities (k), the k-means algorithm is an iterative algorithm comprising four steps:
  • 1. Initially determine or categorize cluster boundaries by defining a minimum and a maximum pixel intensity value for each of the white, gray, and black groups, or k-clusters, such that the clusters are equally spaced over the entire intensity range.
  • 2. Assign each pixel to one of the white, gray or black k-clusters based on the currently set cluster boundaries.
  • 3. Calculate a mean intensity for each pixel intensity k-cluster or group based on the current assignment of pixels into the different k-clusters. The calculated mean intensity is defined as a cluster center. Thereafter, new cluster boundaries are determined as mid points between cluster centers.
  • 4. Determine if the cluster boundaries significantly change locations from their previous values. Should the cluster boundaries change significantly from their previous values, iterate back to step 2, until the cluster centers do not change significantly between iterations. Visually, the clustering process is manifest by the segmented image and repeated iterations continue until the segmented image does not change between the iterations.
  • The pixels in the cluster having the lowest intensity value—the darkest cluster—are defined as pixels associated with amniotic fluid. For the 2D algorithm, each image is clustered independently of the neighboring images. For the 3D algorithm, the entire volume is clustered together. To make this step faster, pixels are down-sampled by a factor of 2 (or any other sampling rate factor) before determining the cluster boundaries. The cluster boundaries determined from the down-sampled data are then applied to the entire data.
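  • A compact sketch of this clustering step, assuming a simple one-dimensional k-means over pixel intensities (a hypothetical helper, not the patented implementation), is given below; the darkest cluster is returned as the candidate fluid mask and the cluster centers are learned on down-sampled data:

```python
import numpy as np

def darkest_cluster_mask(image, k=3, iters=50, sample_step=2):
    # 1D k-means over intensities: white, gray, and black groups when k = 3.
    samples = image[::sample_step, ::sample_step].ravel().astype(float)
    centers = np.linspace(samples.min(), samples.max(), k)   # equally spaced start
    for _ in range(iters):
        labels = np.argmin(np.abs(samples[:, None] - centers[None, :]), axis=1)
        new_centers = np.array([samples[labels == i].mean() if np.any(labels == i)
                                else centers[i] for i in range(k)])
        if np.allclose(new_centers, centers):   # cluster centers stopped moving
            break
        centers = new_centers
    # Apply the learned cluster centers to the full-resolution image.
    full_labels = np.argmin(np.abs(image.astype(float)[..., None] - centers), axis=-1)
    return full_labels == np.argmin(centers)    # mask of the darkest cluster
```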
  • FIG. 8C depicts the sub-algorithms of Edge-Based Segmentation (step 438 in FIG. 7) and uses a sequence of four sub-algorithms. The sequence includes a spatial gradients 526 algorithm, a hysteresis threshold 530 algorithm, a Region-of-Interest (ROI) 534 algorithm, and a matching edges filter 538 algorithm.
  • The spatial gradient 526 computes the x-directional and y-directional spatial gradients of the enhanced image. The Hysteresis threshold 530 algorithm detects salient edges. Once the edges are detected, the regions defined by the edges are selected by a user employing the ROI 534 algorithm to select regions-of-interest deemed relevant for analysis.
  • Since the enhanced image has very sharp transitions, the edge points can be easily determined by taking x- and y-derivatives using backward differences along x- and y-directions. The pixel gradient magnitude ∥∇I∥ is then computed from the x- and y-derivative image in equation E5 as:

  • ‖∇I‖ = √(Ix² + Iy²)   E5
      • where Ix² = the square of the x-derivative of intensity, and
      • Iy² = the square of the y-derivative of intensity.
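  • For illustration, the backward-difference derivatives and the magnitude of equation E5 might be computed as follows (wrap-around borders are an assumption of this sketch):

```python
import numpy as np

def gradient_magnitude(image):
    # Backward differences along x (columns) and y (rows), then equation E5.
    img = image.astype(float)
    ix = img - np.roll(img, 1, axis=1)
    iy = img - np.roll(img, 1, axis=0)
    return np.sqrt(ix ** 2 + iy ** 2)
```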
  • Significant edge points are then determined by thresholding the gradient magnitudes using a hysteresis thresholding operation. Other thresholding methods could also be used. In hysteresis thresholding 530, two threshold values, a lower threshold and a higher threshold, are used. First, the image is thresholded at the lower threshold value and a connected component labeling is carried out on the resulting image. Next, each connected edge component is preserved which has at least one edge pixel having a gradient magnitude greater than the upper threshold. This kind of thresholding scheme is good at retaining long connected edges that have one or more high gradient points.
  • In the preferred embodiment, the two thresholds are automatically estimated. The upper gradient threshold is estimated at a value such that at most 97% of the image pixels are marked as non-edges. The lower threshold is set at 50% of the value of the upper threshold. These percentages could be different in different implementations. Next, edge points that lie within a desired region-of-interest are selected 534. This region of interest selection 534 excludes points lying at the image boundaries and points lying too close to or too far from the transceiver 10. Finally, the matching edge filter 538 is applied to remove outlier edge points and fill in the area between the matching edge points.
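  • A hedged sketch of the hysteresis step, using SciPy connected-component labeling and the percentile-based threshold estimates described above, could look like the following (the exact threshold estimation in the device may differ):

```python
import numpy as np
from scipy.ndimage import label

def hysteresis_threshold(grad_mag, upper_percentile=97.0):
    # Upper threshold: the 97th percentile of gradient magnitudes, so that at
    # most ~97% of pixels are non-edges; lower threshold: 50% of the upper.
    upper = np.percentile(grad_mag, upper_percentile)
    lower = 0.5 * upper
    components, n = label(grad_mag > lower)          # weak-edge connected components
    keep = np.zeros(n + 1, dtype=bool)
    keep[np.unique(components[grad_mag > upper])] = True   # components with a strong pixel
    keep[0] = False                                  # background label is never an edge
    return keep[components]
```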
  • The edge-matching algorithm 538 is applied to establish valid boundary edges and remove spurious edges while filling the regions between boundary edges. Edge points on an image have a directional component indicating the direction of the gradient. Pixels in scanlines crossing a boundary edge location will exhibit two gradient transitions depending on the pixel intensity directionality. Each gradient transition is given a positive or negative value depending on the pixel intensity directionality. For example, if the scanline approaches an echo reflective bright wall from a darker region, then an ascending transition is established as the pixel intensity gradient increases to a maximum value, i.e., as the transition ascends from a dark region to a bright region. The ascending transition is given a positive numerical value. Similarly, as the scanline recedes from the echo reflective wall, a descending transition is established as the pixel intensity gradient decreases to or approaches a minimum value. The descending transition is given a negative numerical value.
  • Valid boundary edges are those that exhibit ascending and descending pixel intensity gradients, or equivalently, exhibit paired or matched positive and negative numerical values. The valid boundary edges are retained in the image. Spurious or invalid boundary edges do not exhibit paired ascending-descending pixel intensity gradients, i.e., do not exhibit paired or matched positive and negative numerical values. The spurious boundary edges are removed from the image.
  • For amniotic fluid volume related applications, most edge points for amniotic fluid surround a dark, closed region, with directions pointing inwards towards the center of the region. Thus, for a convex-shaped region, for any edge point, the edge point whose gradient direction is approximately opposite to that of the current point represents the matching edge point. Those edge points exhibiting an assigned positive and negative value are kept as valid edge points on the image because the negative value is paired with its positive value counterpart. Similarly, those edge point candidates having unmatched values, i.e., those edge point candidates not having a negative-positive value pair, are deemed not to be true or valid edge points and are discarded from the image.
  • The matching edge point algorithm 538 delineates edge points not lying on the boundary for removal from the desired dark regions. Thereafter, the region between any two matching edge points is filled in with non-zero pixels to establish edge-based segmentation. In a preferred embodiment of the invention, only edge points whose directions are primarily oriented co-linearly with the scanline are sought to permit the detection of matching front wall and back wall pairs.
  • Returning to FIG. 7, once Intensity-Based 422 and Edge-Based Segmentation 438 are completed, both segmentation methods use a combining step that combines the results of the intensity-based segmentation 422 step and the edge-based segmentation 438 step using an AND Operator of Images 442. The AND Operator of Images 442 is achieved by a pixel-wise Boolean AND operator 442 step to produce a segmented image by computing the pixel intersection of two images. The Boolean AND operation 442 represents the pixels as binary numbers and the corresponding assignment of an assigned intersection value as a binary number 1 or 0 by the combination of any two pixels. For example, consider any two pixels, say pixelA and pixelB, which can have a 1 or 0 as assigned values. If pixelA's value is 1, and pixelB's value is 1, the assigned intersection value of pixelA and pixelB is 1. If the binary values of pixelA and pixelB are both 0, or if either pixelA or pixelB is 0, then the assigned intersection value of pixelA and pixelB is 0. The Boolean AND operation 542 takes any two binary digital images as input, and outputs a third image with the pixel values made equivalent to the intersection of the two input images.
  • Upon completion of the AND Operator of Images 442 algorithm, the polish 464 algorithm of FIG. 7 is comprised of multiple sub-algorithms. FIG. 8D depicts the sub-algorithms of the Polish 464 algorithm, including a Close 546 algorithm, an Open 550 algorithm, a Remove Deep Regions 554 algorithm, and a Remove Fetal Head Regions 560 algorithm.
  • Closing and opening algorithms are operations that process images based on the knowledge of the shape of objects contained on a black and white image, where white represents foreground regions and black represents background regions. Closing serves to remove background features on the image that are smaller than a specified size. Opening serves to remove foreground features on the image that are smaller than a specified size. The size of the features to be removed is specified as an input to these operations. The opening algorithm 550 removes unlikely amniotic fluid regions from the segmented image based on a-priori knowledge of the size and location of amniotic fluid pockets.
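  • As a sketch only, SciPy's standard binary morphology operators can stand in for the Close 546 and Open 550 steps; the structuring-element size below is an illustrative assumption, not the device's setting:

```python
import numpy as np
from scipy.ndimage import binary_closing, binary_opening

def close_then_open(mask, feature_px=5):
    # Closing removes background (black) features smaller than the structuring
    # element; opening removes foreground (white) features smaller than it.
    selem = np.ones((feature_px, feature_px), dtype=bool)
    closed = binary_closing(mask, structure=selem)   # fill small gaps in fluid regions
    return binary_opening(closed, structure=selem)   # drop unlikely small fluid regions
```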
  • Referring to FIG. 8D, the closing 546 algorithm obtains the Apparent Amniotic Fluid Area (AAFA) or Volume (AAFV) values. The AAFA and AAFV values are “Apparent” and maximal because these values may contain region areas or region volumes of non-amniotic origin unknowingly contributing to and obscuring what otherwise would be the true amniotic fluid volume. For example, the AAFA and AAFV values contain the true amniotic volumes, and possibly as well areas or volumes due to deep tissues and undetected fetal head volumes. Thus the apparent area and volume values require correction or adjustments due to unknown contributions of deep tissue and of the fetal head in order to determine an Adjusted Amniotic Fluid Area (AdAFA) value or Volume (AdAVA) value 568.
  • The AdAFA and AdAVA values obtained by the Close 546 algorithm are reduced by the morphological opening algorithm 550. Thereafter, the AdAFA and AdAVA values are further reduced by removing areas and volumes attributable to deep regions by using the Remove Deep Regions 554 algorithm. Thereafter, the polishing algorithm 464 continues by applying a fetal head region detection algorithm 560.
  • FIG. 8E depicts the sub-algorithms of the Remove Fetal Head Regions sub-algorithm 560. The basic idea of the sub-algorithms of the fetal head detection algorithm 560 is that the edge points that potentially represent a fetal skull are detected. Thereafter, a circle finding algorithm to determine the best-fitting circle to these fetal skull edges is implemented. The radii of the circles that are searched are known a priori based on the fetus' gestational age. The best fitting circle whose fitting metric lies above a certain pre-specified threshold is marked as the fetal head and the region inside this circle is the fetal head region. The algorithms include a gestational Age 726 input, a determine head diameter factor 730 algorithm, a Head Edge Detection algorithm, 734, and a Hough transform procedure 736.
  • Fetal brain tissue has substantially similar ultrasound echo qualities as presented by amniotic fluid. If not detected and subtracted from amniotic fluid volumes, fetal brain tissue volumes will be measured as part of the total amniotic fluid volumes and lead to an overestimation and false diagnosis of oligo- or polyhydramniotic conditions. Thus detecting the fetal head position, measuring the fetal brain matter volume, and deducting the fetal brain matter volume from the amniotic fluid volume to obtain a corrected amniotic fluid volume serves to establish accurately measured amniotic fluid volumes.
  • The gestational age input 726 begins the fetal head detection algorithm 560 and uses a head dimension table to obtain ranges of head bi-parietal diameters (BPD) to search for (e.g., 30 week gestational age corresponds to a 6 cm head diameter). The head diameter range is input to both the Head Edge Detection, 734, and the Hough Transform, 736. The head edge detection 734 algorithm seeks out the distinctively bright ultrasound echoes from the anterior and posterior walls of the fetal skull while the Hough Transform algorithm, 736, finds the fetal head using circular shapes as models for the fetal head in the Cartesian image (pre-scan conversion to polar form).
  • Scanplanes processed by steps 522, 538, and 530 are input to the head edge detection step 734. Applied as the first step in the fetal head detection algorithm 734 is the detection of the potential head edges from among the edges found by the matching edge filter. The matching edge 538 filter outputs pairs of edge points potentially belonging to front walls or back walls. Not all of these walls correspond to fetal head locations. The edge points representing the fetal head are determined using the following heuristics:
      • (1) Looking along a one-dimensional A-mode scan line, fetal head locations present a corresponding matching gradient in the opposing direction within a short distance approximately the same size as the thickness of the fetal skull. This distance is currently set to a value of 1 cm.
      • (2) The front wall and the back wall locations of the fetal head are within a range of diameters corresponding to the expected diameter 730 for the gestational age 726 of the fetus. Walls that are too close or too far are not likely to be head locations.
      • (3) A majority of the pixels between the front and back wall locations of the fetal head lie within the minimum intensity cluster as defined by the output of the clustering algorithm 422. The percentage of pixels that need to be dark is currently defined to be 80%.
  • The pixels found satisfying these features are then vertically dilated to produce a set of thick fetal head edges as the output of Head Edge Detection, 734.
  • FIG. 8F depicts the sub-algorithms of the Hough transform procedure 736. The sub-algorithms include a Polar Hough Transform 738 algorithm, a find maximum Hough value 742 algorithm, and a fill circle region 746 algorithm. The Polar Hough Transform algorithm looks for fetal head structures in polar coordinate terms by converting from Cartesian coordinates using a plurality of equations. The fetal head, which appears like a circle in a 3D scan-converted Cartesian coordinate image, has a different shape in the pre-scan converted polar space. The fetal head shape is expressed in polar coordinate terms explained as follows:
  • The coordinates of a circle in the Cartesian space (x,y) with center (x0,y0) and radius R, defined for an angle θ, are derived in equation E5 as:

  • x = R cos θ + x0
  • y = R sin θ + y0
  • ⇒ (x − x0)² + (y − y0)² = R²   E5
  • In polar space, the coordinates (r,φ), with respect to the center (r00), are derived and defined in equation E6 as:

  • r sin φ = R cos θ + r0 sin φ0
  • r cos φ = R sin θ + r0 cos φ0
  • ⇒ (r sin φ − r0 sin φ0)² + (r cos φ − r0 cos φ0)² = R²   E6
  • The Hough transform 736 algorithm using equations E5 and E6 attempts to find the best-fit circle to the edges of an image. A circle in the polar space is defined by a set of three parameters, (r00, R) representing the center and the radius of the circle.
  • The basic idea for the Hough transform 736 is as follows. Suppose a circle is sought having a fixed radius (say, R1) for which the best center of the circle is similarly sought. Now, every edge point on the input image lies on a potential circle whose center lies R1 pixels away from it. The set of potential centers themselves form a circle of radius R1 around each edge pixel. Now, drawing potential circles of radius R1 around each edge pixel, the point at which most of these circles intersect is the center of the best-fit circle to the given edge points. Therefore, each pixel in the Hough transform output contains a likelihood value that is simply the count of the number of circles passing through that point.
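  • The accumulation idea can be sketched for a single fixed radius in Cartesian coordinates as follows; the patented procedure operates on the pre-scan-converted polar data via the Polar Hough Transform 738, so this Cartesian version is an illustration only:

```python
import numpy as np

def hough_circle_fixed_radius(edge_mask, radius):
    # Every edge pixel votes for all candidate centres lying `radius` away;
    # the accumulator maximum is the centre of the best-fitting circle and its
    # count is the likelihood value described above.
    acc = np.zeros(edge_mask.shape, dtype=int)
    thetas = np.linspace(0.0, 2.0 * np.pi, 360, endpoint=False)
    for y, x in zip(*np.nonzero(edge_mask)):
        cy = np.round(y + radius * np.sin(thetas)).astype(int)
        cx = np.round(x + radius * np.cos(thetas)).astype(int)
        ok = (cy >= 0) & (cy < acc.shape[0]) & (cx >= 0) & (cx < acc.shape[1])
        acc[cy[ok], cx[ok]] += 1
    centre = np.unravel_index(np.argmax(acc), acc.shape)
    return centre, acc[centre]
```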
  • FIG. 9 illustrates the Hough Transform 736 algorithm for a plurality of circles with a fixed radius in a Cartesian coordinate system. A portion of the plurality of circles is represented by a first circle 804 a, a second circle 804 b, and a third circle 804 c. A plurality of edge pixels are represented as gray squares and an edge pixel 808 is shown. A circle is drawn around each edge pixel to distinguish a center location 812 of a best-fit circle 816 passing through each edge pixel point; the point of the center location through which most such circles pass (shown by a gray star 812) is the center of the best-fit circle 816 presented as a thick dark line. The circumference of the best-fit circle 816 passes substantially through the central portion of each edge pixel, represented as a series of squares substantially equivalent to the edge pixel 808.
  • This search for best fitting circles can be easily extended to circles with varying radii by adding one more degree of freedom—however, a discrete set of radii around the mean radii for a given gestational age makes the search significantly faster, as it is not necessary to search all possible radii.
  • The next step in the head detection algorithm is selecting or rejecting best-fit circles based on their likelihood, in the find maximum Hough Value 742 algorithm. The greater the number of circles passing through a given point in the Hough-space, the more likely it is to be the center of a best-fit circle. A 2D metric as a maximum Hough value 742 of the Hough transform 736 output is defined for every image in a dataset. The 3D metric is defined as the maximum of the 2D metrics for the entire 3D dataset. A fetal head is selected on an image depending on whether its 3D metric value exceeds a preset 3D threshold and also whether the 2D metric exceeds a preset 2D threshold. The 3D threshold is currently set at 7 and the 2D threshold is currently set at 5. These thresholds have been determined by extensive training on images where the fetal head was known to be present or absent.
  • Thereafter, the fetal head detection algorithm concludes with a fill circle region 746 that incorporates pixels to the image within the detected circle. The fill circle region 746 algorithm fills the inside of the best fitting polar circle. Accordingly, the fill circle region 746 algorithm encloses and defines the area of the fetal brain tissue, permitting the area and volume to be calculated and deducted via algorithm 554 from the apparent amniotic fluid area and volume (AAFA or AAFV) to obtain a computation of the corrected amniotic fluid area or volume via algorithm 484.
  • FIG. 10 shows the results of sequentially applying the algorithm steps of FIGS. 7 and 8A-D on an unprocessed sample image 820 presented within the confines of a scanplane substantially equivalent to the scanplane 210. The results of applying the heat filter 514 and shock filter 518 in enhancing the unprocessed sample are shown in enhanced image 840. The result of the intensity-based segmentation algorithm 522 is shown in image 850. The result of the edge-based segmentation 438 algorithm using sub-algorithms 526, 530, 534 and 538 on the enhanced image 840 is shown in segmented image 858. The result of the combination 442 utilizing the Boolean AND images 442 algorithm is shown in image 862 where white represents the amniotic fluid area. The result of applying the polishing 464 algorithm employing algorithms 542, 546, 550, 554, 560, and 564 is shown in image 864, which depicts the amniotic fluid area overlaid on the unprocessed sample image 810.
  • FIG. 11 depicts a series of images showing the results of the above method to automatically detect, locate, and measure the area and volume of a fetal head using the algorithms outlined in FIGS. 7 and 8A-F. Beginning with an input image in polar coordinate form 920, the fetal head image is marked by distinctive bright echoes from the anterior and posterior walls of the fetal skull and a circular shape of the fetal head in the Cartesian image. The fetal head detection algorithm 734 operates on the polar coordinate data (i.e., pre-scan version, not yet converted to Cartesian coordinates).
  • An example output of applying the head edge detection 734 algorithm to detect potential head edges is shown in image 930. Occupying the space between the anterior and posterior walls are dilated black pixels 932 (stacks or short lines of black pixels representing thick edges). An example of the polar Hough transform 738 for one actual data sample for a specific radius is shown in polar coordinate image 940.
  • An example of the best-fit circle on real polar data is shown in polar coordinate image 950 that has undergone the find maximum Hough value step 742. The polar coordinate image 950 is scan-converted to Cartesian data in image 960 where the effects of the find maximum Hough value 742 algorithm are seen in Cartesian format.
  • FIG. 12 presents a 4-panel series of sonographer amniotic fluid pocket outlines compared to the algorithm's output in a scanplane equivalent to scanplane 210. The top two panels depict the sonographer's outlines of amniotic fluid pockets obtained by manual interactions with the display while the bottom two panels show the resulting amniotic fluid boundaries obtained from the instant invention's automatic application of 2D algorithms, 3D algorithms, combination heat and shock filter algorithms, and segmentation algorithms.
  • After the contours on all the images have been delineated, the volume of the segmented structure is computed. Two specific techniques for doing so are disclosed in detail in U.S. Pat. No. 5,235,985 to McMorrow et al, herein incorporated by reference. This patent provides detailed explanations for non-invasively transmitting, receiving and processing ultrasound for calculating volumes of anatomical structures.
  • Multiple Image Cone Acquisition and Image Processing Procedures:
  • In some embodiments, multiple cones of data acquired at multiple anatomical sampling sites may be advantageous. For example, in some instances, the pregnant uterus may be too large to completely fit in one cone of data sampled from a single measurement or anatomical site of the patient (patient location). That is, the transceiver 10 is moved to different anatomical locations of the patient to obtain different 3D views of the uterus from each measurement or transceiver location.
  • Obtaining multiple 3D views may be especially needed during the third trimester of pregnancy, or when twins or triplets are involved. In such cases, multiple data cones can be sampled from different anatomical sites at known intervals and then combined into a composite image mosaic to present a large uterus in one, continuous image. In order to make a composite image mosaic that is anatomically accurate without duplicating the anatomical regions mutually viewed by adjacent data cones, ordinarily it is advantageous to obtain images from adjacent data cones and then register and subsequently fuse them together. In a preferred embodiment, to acquire and process multiple 3D data sets or image cones, at least two 3D image cones are generally preferred, with one image cone defined as fixed, and the other image cone defined as moving.
  • The 3D image cones obtained from each anatomical site may be in the form of 3D arrays of 2D scanplanes, similar to the 3D array 240. Furthermore, the 3D image cone may be in the form of a wedge or a translational array of 2D scanplanes. Alternatively, the 3D image cone obtained from each anatomical site may be a 3D scancone of 3D-distributed scanlines, similar to the scancone 300.
  • The term “registration” with reference to digital images means the determination of a geometrical transformation or mapping that aligns viewpoint pixels or voxels from one data cone sample of the object (in this embodiment, the uterus) with viewpoint pixels or voxels from another data cone sampled at a different location from the object. That is, registration involves mathematically determining and converting the coordinates of common regions of an object from one viewpoint to the coordinates of another viewpoint. After registration of at least two data cones to a common coordinate system, the registered data cone images are then fused together by combining the two registered data images by producing a reoriented version from the view of one of the registered data cones. That is, for example, a second data cone's view is merged into a first data cone's view by translating and rotating the pixels of the second data cone's pixels that are common with the pixels of the first data cone. Knowing how much to translate and rotate the second data cone's common pixels or voxels allows the pixels or voxels in common between both data cones to be superimposed into approximately the same x, y, z, spatial coordinates so as to accurately portray the object being imaged. The more precise and accurate the pixel or voxel rotation and translation, the more precise and accurate is the common pixel or voxel superimposition or overlap between adjacent image cones. The precise and accurate overlap between the images assures the construction of an anatomically correct composite image mosaic substantially devoid of duplicated anatomical regions.
  • To obtain the precise and accurate overlap of common pixels or voxels between the adjacent data cones, it is advantageous to utilize a geometrical transformation that substantially preserves most or all distances regarding line straightness, surface planarity, and angles between the lines as defined by the image pixels or voxels. That is, the preferred geometrical transformation that fosters obtaining an anatomically accurate mosaic image is a rigid transformation that doesn't permit the distortion or deforming of the geometrical parameters or coordinates between the pixels or voxels common to both image cones.
  • The preferred rigid transformation first converts the polar coordinate scanplanes from adjacent image cones into x, y, z Cartesian axes. After converting the scanplanes into the Cartesian system, a rigid transformation, T, is determined from the scanplanes of adjacent image cones having pixels in common. The transformation T is a combination of a three-dimensional translation vector expressed in Cartesian terms as t=(Tx, Ty, Tz), and a three-dimensional rotation matrix R expressed as a function of Euler angles θx, θy, θz around the x, y, and z axes. The transformation represents a shift and rotation conversion factor that aligns and overlaps common pixels from the scanplanes of the adjacent image cones.
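  • For illustration, a rigid transformation T of this form can be constructed and applied to Cartesian points as sketched below; the particular Euler-angle composition order is an assumption of the sketch, not a statement of the device's convention:

```python
import numpy as np

def euler_rotation(theta_x, theta_y, theta_z):
    # Rotation matrix from Euler angles (radians) about the x, y, and z axes.
    cx, sx = np.cos(theta_x), np.sin(theta_x)
    cy, sy = np.cos(theta_y), np.sin(theta_y)
    cz, sz = np.cos(theta_z), np.sin(theta_z)
    rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return rz @ ry @ rx   # one conventional composition order (assumed)

def apply_rigid_transform(points_xyz, rotation, translation):
    # Apply T = (R, t) to an (N, 3) array of Cartesian points: p' = R p + t.
    return np.asarray(points_xyz, float) @ rotation.T + np.asarray(translation, float)
```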
  • In the preferred embodiment of the present invention, the common pixels used for the purposes of establishing registration of three-dimensional images are the boundaries of the amniotic fluid regions as determined by the amniotic fluid segmentation algorithm described above.
  • Several different protocols that may be used to collect and process multiple cones of data from more than one measurement site are described in FIGS. 13-14.
  • FIG. 13 illustrates a 4-quadrant supine procedure to acquire multiple image cones around the center point of uterine quadrants of a patient in a supine procedure. Here the patient lies supine (on her back) displacing most or all of the amniotic fluid towards the top. The uterus is divided into 4 quadrants defined by the umbilicus (the navel) and the linea-nigra (the vertical center line of the abdomen) and a single 3D scan is acquired at each quadrant. The 4-quadrant supine protocol acquires four different 3D scans in a two dimensional grid, each corner of the grid being a quadrant midpoint. Four cones of data are acquired by the transceiver 10 along the midpoints of quadrant 1, quadrant 2, quadrant 3, and quadrant 4. Thus, one 3D data cone per uterine quadrant midpoint is acquired such that each quadrant midpoint is mutually substantially equally spaced from each other in a four-corner grid array.
  • FIG. 14 illustrates a multiple lateral line procedure to acquire multiple image cones in a linear array. Here the patient lies laterally (on her side), displacing most or all of the amniotic fluid towards the top. Four 3D image cones of data are acquired along a line at substantially equally spaced intervals. As illustrated, the transceiver 10 moves along the lateral line at position 1, position 2, position 3, and position 4. As illustrated in FIG. 14, the inter-position distance or interval is approximately 6 cm.
  • The preferred embodiment for making a composite image mosaic involves obtaining four multiple image cones where the transceiver 10 is placed at four measurement sites over the patient in a supine or lateral position such that at least a portion of the uterus is ultrasonically viewable at each measurement site. The first measurement site is originally defined as fixed, and the second site is defined as moving and placed at a first known inter-site distance relative to the first site. The second site images are registered and fused to the first site images. After fusing the second site images to the first site images, the third measurement site is defined as moving and placed at a second known inter-site distance relative to the fused second site, now defined as fixed. The third site images are registered and fused to the second site images. Similarly, after fusing the third site images to the second site images, the fourth measurement site is defined as moving and placed at a third known inter-site distance relative to the fused third site, now defined as fixed. The fourth site images are registered and fused to the third site images.
  • The four measurement sites may be along a line or in an array. The array may include rectangles, squares, diamond patterns, or other shapes. Preferably, the patient is positioned such that the baby moves downward with gravity in the uterus and displaces the amniotic fluid upwards toward the measuring positions of the transceiver 10.
  • The interval or distance between each measurement site is approximately equal, or may be unequal. For example, in the lateral protocol, the second site is spaced approximately 6 cm from the first site, the third site is spaced approximately 6 cm from the second site, and the fourth site is spaced approximately 6 cm from the third site. The spacing for unequal intervals could be, for example, the second site spaced approximately 4 cm from the first site, the third site spaced approximately 8 cm from the second site, and the fourth site spaced approximately 6 cm from the third site. The interval distance between measurement sites may be varied as long as there are mutually viewable regions of portions of the uterus between adjacent measurement sites.
  • For uteruses not so large as to require four measurement sites, two or three measurement sites may be sufficient for making a composite 3D image mosaic. For three measurement sites, a triangular array is possible, with equal or unequal intervals. Furthermore, in the case where the second and third measurement sites have regions mutually viewable from the first measurement site, the second interval may be measured from the first measurement site instead of from the second measurement site.
  • For very large uteruses not fully captured by four measurement or anatomical sites, greater than four measurement sites may be used to make a composite 3D image mosaic provided that each measurement site is ultrasonically viewable for at least a portion of the uterus. For five measurement sites, a pentagon array is possible, with equal or unequal intervals. Similarly, for six measurement sites, a hexagon array is possible, with equal or unequal intervals between each measurement site. Other polygonal arrays are possible with increasing numbers of measurement sites.
  • The geometrical relationship between each image cone must be ascertained so that overlapping regions can be identified between any two image cones to permit the combining of adjacent neighboring cones so that a single 3D mosaic composite image is produced from the 4-quadrant or in-line laterally acquired images.
  • The translational and rotational adjustments of each moving cone to conform with the voxels common to the stationary image cone are guided by an inputted initial transform that has the expected translational and rotational values. The distance separating the transceiver 10 between image cone acquisitions predicts the expected translational and rotational values. For example, as shown in FIG. 14, if 6 cm separates the image cones, then the expected translational and rotational values are proportionally estimated; the Cartesian translation terms (Tx, Ty, Tz) and the Euler angle terms (θx, θy, θz) are defined respectively as (6 cm, 0 cm, 0 cm) and (0 deg, 0 deg, 0 deg).
  • FIG. 15 is a block diagram algorithm overview of the registration and correcting algorithms used in processing multiple image cone data sets. The algorithm overview 1000 shows how the entire amniotic fluid volume measurement process occurs from the multiply acquired image cones. First, each of the input cones 1004 is segmented 1008 to detect all amniotic fluid regions. The segmentation 1008 step is substantially similar to steps 418-480 of FIG. 7. Next, these segmented regions are used to align (register) the different cones into one common coordinate system using a Rigid Registration 1012 algorithm. Next, the registered datasets from each image cone are fused with each other using a Fuse Data 1016 algorithm to produce a composite 3D mosaic image. Thereafter, the total amniotic fluid volume is computed 1020 from the fused or composite 3D mosaic image.
  • FIG. 16 is a block diagram of the steps of the rigid registration algorithm 1012. The rigid algorithm 1012 is a 3D image registration algorithm and is a modification of the Iterated Closest Point (ICP) algorithm published by P J Besl and N D McKay, in “A Method for Registration of 3-D Shapes,” IEEE Trans. Pattern Analysis & Machine Intelligence, vol. 14, no. 2, February 1992, pp. 239-256. The steps of the rigid registration algorithm 1012 serve to correct for overlap between adjacent 3D scan cones acquired in either the 4-quadrant supine grid procedure or the lateral line multi data cone acquisition procedure. The rigid algorithm 1012 first processes the fixed image 1104 from polar coordinate terms to Cartesian coordinate terms using the 3D Scan Convert 1108 algorithm. Separately, the moving image 1124 is also converted to Cartesian coordinates using the 3D Scan Convert 1128 algorithm. Next, the edges of the amniotic fluid regions on the fixed and moving images are determined and converted into point sets p and q respectively by a 3D edge detection process 1112 and 1132. Also, the fixed image point set, p, undergoes a 3D distance transform process 1116 which maps every voxel in a 3D image to a number representing the distance to the closest edge point in p. Pre-computing this distance transform makes subsequent distance calculations and closest point determinations very efficient.
  • Next, the known initial transform 1136, for example, (6, 0, 0) for the Cartesian Tx, Ty, Tz terms and (0, 0, 0) for the θx, θy, θz Euler angle terms for an inter-transceiver interval of 6 cm, is subsequently applied to the moving image by the Apply Transform 1140 step. This transformed image is then compared to the fixed image to examine for the quantitative occurrence of overlapping voxels. If the overlap is less than 20%, there are not enough common voxels available for registration and the initial transform is considered sufficient for fusing at step 1016.
  • If the overlapping voxel sets by the initial transform exceed 20% of the fixed image p voxel sets, the q-voxels of the initial transform are subjected to an iterative sequence of rigid registration.
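  • The overlap test itself reduces to counting common fluid voxels; a minimal sketch, assuming binary fluid masks already resampled onto a common grid, is:

```python
import numpy as np

def overlap_fraction(fixed_mask, transformed_moving_mask):
    # Fraction of the fixed image's fluid voxels that the transformed moving
    # image also marks as fluid; below ~20% the initial transform is used as-is.
    fixed_count = np.count_nonzero(fixed_mask)
    if fixed_count == 0:
        return 0.0
    common = np.count_nonzero(np.logical_and(fixed_mask, transformed_moving_mask))
    return common / fixed_count
```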
  • A transformation T serves to register a first voxel point set p from the first image cone by merging or overlapping a second voxel point set q from a second image cone that is common to p of the first image cone. A point in the first voxel point set p may be defined as pi=(xi, yi, zi) and a point in the second voxel point set q may similarly be defined as qj=(xj, yj, zj). If the first image cone is considered to be a fixed landmark, then the T factor is applied to align (translate and rotate) the moving voxel point set q onto the fixed voxel point set p.
  • The precision of T is often affected by noise in the images, which accordingly affects the precision of t and R, and so the variability of each voxel point set will in turn affect the overall variability of the matrix equations solved for each point set. The composite variability between the fixed voxel point set p and a corresponding moving voxel point set q is defined to have a cross-covariance matrix Cpq, more fully described in equation E8 as:
  • Cpq = (1/n) Σi=1..n (pi − p̄)(qi − q̄)T   E8
  • where n is the number of points in each point set and p̄ and q̄ are the centroids of the two voxel point sets. How strong the correlation is between the two data sets is determined by statistically analyzing the cross-covariance Cpq. The preferred embodiment uses a statistical process known as Singular Value Decomposition (SVD), originally developed by Eckart and Young (G. Eckart and G. Young, 1936, The Approximation of One Matrix by Another of Lower Rank, Psychometrika 1, 211-218). When numerical data is organized into matrix form, the SVD is applied to the matrix, and the resulting SVD values are determined to solve for the best-fitting rotation transform R to be applied to the moving voxel point set q to align with the fixed voxel point set p to acquire optimum overlapping accuracy of the pixels or voxels common to the fixed and moving images.
  • Equation E9 gives the SVD value of the cross-covariance Cpq:

  • Cpq = U D VT   E9
  • where D is a 3×3 diagonal matrix and U and V are orthogonal 3×3 matrices.
  • Equation E10 further defines the rotational R description of the transformation T in terms of U and V orthogonal 3×3 matrices as:

  • R = U VT   E10
  • Equation E11 further defines the translation transform t description of the transformation T in terms of p̄, q̄ and R as:

  • t = p̄ − R q̄   E11
  • Equations E8 through E11 present a method to determine the rigid transformation between two point sets p and q; this process corresponds to the Determine New Transform step 1152 in FIG. 16.
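  • A compact sketch of equations E8-E11 in Python/NumPy follows; the degenerate reflection case (where the fitted matrix has determinant −1) is ignored for brevity, and the helper name is illustrative:

```python
import numpy as np

def best_rigid_transform(p, q):
    # Fit R and t so that p_i is approximately R q_i + t for matched point sets.
    p = np.asarray(p, dtype=float)           # (n, 3) fixed edge points
    q = np.asarray(q, dtype=float)           # (n, 3) matched moving edge points
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    cpq = (p - p_bar).T @ (q - q_bar) / len(p)   # 3x3 cross-covariance (E8)
    u, d, vt = np.linalg.svd(cpq)                # Cpq = U D V^T (E9)
    r = u @ vt                                   # R = U V^T (E10)
    t = p_bar - r @ q_bar                        # t = p_bar - R q_bar (E11)
    return r, t
```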
  • The steps of the registration algorithm are applied iteratively until convergence. The iterative sequence includes a Find Closest Points on Fixed Image 1148 step, a Determine New Transform 1152 step, a Calculate Distances 1156 step, and Converged decision 1160 step.
  • In the Find Closest Points on Fixed Image 1148 step, corresponding q points are found for each point in the fixed set p. Correspondence is defined by determining the closest edge point on q to the edge point of p. The distance transform image helps locate these closest points. Once p and the closest q pixels are identified, the Determine New Transform 1152 step calculates the rotation R via SVD analysis using equations E8-E10 and the translation transform t via equation E11. If, at decision step 1160, the change in the average closest point distance between two iterations is less than 5%, then the predicted q pixel candidates are considered converged and suitable for receiving the transforms R and t to rigidly register the moving image 1136 onto the common voxels p of the 3D Scan Converted 1108 image. At this point, the rigid registration process is complete as closest proximity between voxel or pixel sets has occurred between the fixed and moving images, and the process continues with fusion at step 1016.
  • If, however, there is a >5% change between the predicted q pixels and the p pixels, another iteration cycle is applied via the Apply Transform 1140 step to the Find Closest Points on Fixed Image 1148 step, and is cycled through the Converged 1160 decision block. Usually 3 cycles, though as many as 20 iterative cycles, are engaged until the transformation T is considered converged.
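  • The iteration can be sketched as follows, with a k-d tree standing in for the pre-computed 3D distance transform and the SVD fit of equations E8-E11 repeated at every cycle; point-set names and tolerances mirror the description above, but the code is an illustration only:

```python
import numpy as np
from scipy.spatial import cKDTree

def fit_rigid(p, q):
    # SVD fit of E8-E11 mapping matched points q onto points p (see earlier sketch).
    p_bar, q_bar = p.mean(axis=0), q.mean(axis=0)
    u, _, vt = np.linalg.svd((p - p_bar).T @ (q - q_bar) / len(p))
    r = u @ vt
    return r, p_bar - r @ q_bar

def icp_register(fixed_pts, moving_pts, r0, t0, max_iters=20, tol=0.05):
    # Iterated closest point: match, re-fit, and stop once the mean closest-point
    # distance changes by less than 5% between iterations (typically ~3 cycles).
    fixed_pts = np.asarray(fixed_pts, float)
    moving_pts = np.asarray(moving_pts, float)
    tree = cKDTree(fixed_pts)                 # fast closest-edge-point lookup
    r, t = np.asarray(r0, float), np.asarray(t0, float)
    prev = np.inf
    for _ in range(max_iters):
        moved = moving_pts @ r.T + t          # apply the current transform
        dists, idx = tree.query(moved)        # closest fixed edge point for each
        if np.isfinite(prev) and abs(prev - dists.mean()) < tol * prev:
            break                             # converged
        prev = dists.mean()
        r, t = fit_rigid(fixed_pts[idx], moving_pts)   # new transform from matches
    return r, t
```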
  • A representative example for the application of the preferred embodiment for the registration and fusion of a moving image onto a fixed image is shown in FIGS. 17A-17C.
  • FIG. 17A is a first measurement view of a fixed scanplane 1200A from a 3D data set measurement taken at a first site. A first pixel set p consistent with the dark pixels of AFV is shown in a region 1204A. The region 1204A has approximate x-y coordinates of (150, 120) closest to the dark edge.
  • FIG. 17B is a second measurement view of a moving scanplane 1200B from a 3D data set measurement taken at a second site. A second pixel set q consistent with the dark pixels of AFV is shown in a region 1204B. The region 1204B has approximate x-y coordinates of (50, 125) closest to the dark edge.
  • FIG. 17C is a composite image 1200C of the first (fixed) 1200A and second (moving) 1200B images in which the common pixels 1204B at approximate coordinates (50, 125) are aligned or overlapped with the voxels 1204A at approximate coordinates (150, 120). That is, the region 1204B pixel set q is linearly and rotationally transformed consistent with the closest edge selection methodology as shown in FIGS. 17A and 17B from employing the 3D Edge Detection 1112 step. The composite image 1200C is a mosaic image from scanplanes having approximately the same φ and rotation θ angles.
  • The registration and fusing of common pixel sets p and q from scanplanes having approximately the same φ and rotation θ angles can be repeated for other scanplanes in each 3D data set taken at the first (fixed) and second (moving) anatomical sites. For example, if the composite image 1200C above was for scanplane # 1, then the process may be repeated for the remaining scanplanes #2-24 or #2-48 or greater as needed to capture a completed uterine mosaic image. Thus an array similar to the 3D array 240 from FIG. 5B is assembled, except this time the scanplane array is made of composite images, each composited image belonging to a scanplane having approximately the same φ and rotation θ angles.
  • If a third and a fourth 3D data sets are taken, the respective registration, fusing, and assembling into scanplane arrays of composited images is undertaken with the same procedures. In this case, the scanplane composite array similar to the 3D array 240 is composed of a greater mosaic number of registered and fused scanplane images.
  • A representative example of the fusing of two moving images onto a fixed image is shown in FIGS. 18A-18D.
  • FIG. 18A is a first view of a fixed scanplane 1220A. Region 1224A is identified as p voxels approximately at the coordinates (150, 70).
  • FIG. 18B is a second view of a first moving scanplane 1220B having some q voxels 1224B at x-y coordinates (300, 100) common with the first measurement's p voxels at x-y coordinates (150, 70). Another set of voxels 1234A is shown roughly near the intersection of x-y coordinates (200, 125). As the transceiver 10 was moved only translationally, the scanplane 1220B from the second site has approximately the same tilt φ and rotation θ angles as the fixed scanplane 1220A taken from the first lateral in-line site.
  • FIG. 18C is a third view of a moving scanplane 1220C. A region 1234B is identified as q voxels approximately at the x-y coordinates (250, 100) that are common with the second view's q voxels 1234A. The scanplane 1220C from the third lateral in-line site has approximately the same tilt φ and rotation θ angles as the fixed scanplane 1220A taken from the first lateral in-line site and the first moving scanplane 1220B taken from the second lateral in-line site.
  • FIG. 18D is a composite mosaic image 1220D of the first (fixed) 1220A image, the second (moving) 1220B image, and the third (moving) 1220C image representing the sequential alignment and fusing of q voxel sets 1224B to 1224A, and 1234B with 1234A.
  • A fourth image similarly could be made to bring about a 4-image mosaic from scanplanes from a fourth 3D data set acquired from the transceiver 10 taking measurements at a fourth anatomical site where the fourth 3D data set is acquired with approximately the same tilt φ and rotation θ angles.
  • The transceiver 10 is moved to different anatomical sites by hand placement by an operator to collect 3D data sets. Such hand placement could result in 3D data sets acquired under conditions in which the tilt φ and rotation θ angles are not approximately equal, but differ enough to cause some measurement error requiring correction before the rigid registration 1012 algorithm can be used. In the event that the 3D data sets between anatomical sites, either between a moving supine site and its beginning fixed site, or between a moving lateral site and its beginning fixed site, cannot be acquired with the tilt φ and rotation θ angles being approximately the same, the built-in accelerometer measures the changes in tilt φ and rotation θ angles and compensates accordingly so that acquired moving images are presented as though they were acquired under approximately equal tilt φ and rotation θ angle conditions.
  • FIG. 19 illustrates a 6-section supine procedure to acquire multiple image cones around the center point of a uterus of a patient in a supine position. Each of the 6 segments is scanned in the order indicated, starting with segment 1 on the lower right side of the patient. The display on the scanner 10 is configured to indicate how many segments have been scanned, so that the display shows “0 of 6,” “1 of 6,” . . . “6 of 6.” The scans are positioned such that the lateral distances between each scanning position (except between positions 3 and 4) are approximately 8 cm.
  • To repeat the scan, the top button of the scanner 10 is repetitively depressed, so that it returns the scan to “0 of 6,” to permit a user to repeat all six scans again. Finally, the scanner 10 is returned to the cradle to upload the raw ultrasound data to computer, intranet, or Internet as depicted in FIGS. 2C, 3, and 4 for algorithmic processing, as will be described in detail below. Within a predetermined time period, a result is generated that includes an estimate of the amniotic fluid volume.
  • As with the quadrant and the four in-line scancone measuring methods described earlier, the six-segment procedure ensures that the measurement process detects all amniotic fluid regions. The transceiver 10 projects outgoing ultrasound signals, in this case into the uterine region of a patient, at six anatomical locations, and receives incoming echoes reflected back from the regions of interest to the transceiver 10 positioned at a given anatomical location. An array of scanplane images is obtained for each anatomical location based upon the incoming echo signals. Image-enhanced and segmented regions are determined for each scanplane array, which may be a rotational, wedge, or translationally configured scanplane array. The segmented regions are used to align or register the different scancones into one common coordinate system. Thereafter, the registered datasets are merged with each other so that the total amniotic fluid volume is computed from the resulting fused image.
  • FIG. 20 is a block diagrammatic overview of an algorithm for the registration and correction processing of the 6-section multiple image cone data sets depicted in FIG. 19. A six-section algorithm overview 1000A includes many of the same blocks of algorithm overview 1000 depicted in FIG. 15. However, the segmentation and registration procedures are modified for the 6-section multiple image cones. In the algorithm overview 1000A, the subprocesses include the InputCones block 1004, an Image Enhancement and Segmentation block 1010, a RigidRegistration block 1014, the FuseData block 1016, and the CalculateVolume block 1020. Generally, the Image Enhancement and Segmentation block 1010 reduces the effects of noise, which may include speckle noise, in the data while preserving the salient edges on the image. The enhanced images are then segmented by an edge-based and an intensity-based method, and the results of each segmentation method are subsequently combined. The results of the combined segmentation method are then cleaned up to fill gaps and to remove outliers. The area and/or the volume of the segmented regions is then computed.
  • FIG. 21 is a more detailed view of the Image Enhancement and Segmentation block 1010 of FIG. 20. Very similar to the algorithm processes of Image Enhancement 418, Intensity-based segmentation 422, and Edge-based segmentation 438 explained for FIG. 7, the enhancement-segmentation block 1010 begins with an input data block 1010A2, wherein the signals of pixel image data are subjected to a blurring and speckle removal process followed by a sharpening or deblurring process. The combination of blurring and speckle removal followed by sharpening or deblurring enhances the appearance of the pixel-based input image.
  • The blurring and deblurring is achieved by a combination of heat and shock filters. The inputted pixel-related data from process 1010A2 is first subjected to a heat filter process block 1010A4. The heat filter block 1010A4 is a Laplacian-based filter that reduces the speckle noise and smooths or otherwise blurs the edges in the image. The heat filter block 1010A4 is modified via a user-determined stored data block 1010A6, wherein the number of heat filter iterations and the step sizes are defined by the user and applied to the inputted data 1010A2 in the heat filter process block 1010A4. The effect of the heat iteration number in progressively blurring and removing speckle from an original image as the number of iteration cycles increases is shown in FIG. 23. Once the pixel image data has been heat filter processed, the pixel image data is further processed by a shock filter block 1010A8. The shock filter block 1010A8 is subjected to a user-determined stored data block 1010A10, wherein the number of shock filter iterations, the step sizes, and the gradient threshold are specified by the user. The foregoing values are then applied to the heat filtered pixel data in the shock filter block 1010A8. The effect of shock iteration number, step sizes, and gradient thresholds in reducing the blurring is seen in signal plots (a) and (b) of FIG. 24. Thereafter, the heat- and shock-filtered pixel data are parallel processed in two algorithm pathways, as defined by blocks 1010B2-6 (Intensity-Based Segmentation Group) and blocks 1010C2-4 (Edge-Based Segmentation Group).
  • The Intensity-based Segmentation relies on the observation that amniotic fluid is usually darker than the rest of the image. Pixels associated with fluids are classified based upon a threshold intensity level. Thus pixels below this intensity threshold level are interpreted as fluid, and pixels above this intensity threshold are interpreted as solid or non-fluid tissues. However, pixel values within a dataset can vary widely, so a means to automatically determine a threshold level within a given dataset is required in order to distinguish between fluid and non-fluid pixels. The intensity-based segmentation is divided into three steps. A first step includes estimating the fetal body and shadow regions, a second step includes determining an automatic thresholding for the fluid region after removing the body region, and a third step includes removing the shadow and fetal body regions from the potential fluid regions.
  • The Intensity-Based Segmentation Group includes a fetal body region block 1010B2, wherein an estimate of the fetal shadow and body regions is obtained. Generally, the fetal body regions in ultrasound images appear bright and are relatively easily detected. Anterior bright regions typically correspond with the dome reverberation of the transceiver 10, and the darker appearing uterus is easily discerned against the bright pixel regions formed by the more echogenic fetal body that commonly appears posterior to the amniotic fluid region. In fetal body region block 1010B2, the fetal body and shadow are found on scanlines that extend between the bright dome reverberation region and the posterior bright-appearing fetal body. A magnitude of the estimate of the fetal and body region is then modified by a user-determined input parameter stored in a body threshold data block 1010B4, wherein a pixel value is chosen by the user. For example, a pixel value of 40 may be selected. An example of the image obtained from blocks 1010B2-4 is panel (c) of FIG. 25. Once the fetal body regions and the shadow have been estimated, an automatic region threshold block 1010B6 is applied to this estimate to determine which pixels are fluid related and which pixels are non-fluid related. The automatic region threshold block 1010B6 uses a version of the Otsu algorithm (R M Haralick and L G Shapiro, Computer and Robot Vision, vol. 1, Addison Wesley 1992, page 11, incorporated by reference). Briefly, and in general terms, the Otsu algorithm determines a threshold value from an assumed bimodal pixel value histogram that generally corresponds to fluid and some soft tissue (non-fluid) such as placental or other fetal or maternal soft tissue. All pixel values less than the threshold value as determined by the Otsu algorithm are designated as potential fluid pixels. Using the Otsu-determined threshold value, the first pathway is completed by removing body regions above this threshold value in block 1010B8 so that the amniotic fluid regions are isolated. An example of the effect of the Intensity-Based Segmentation Group is shown in panel (d) of FIG. 25. The isolated amniotic fluid region image thus obtained from the intensity-based segmentation process is then processed for subsequent combination with the end result of the second, edge-based segmentation method.
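  • By way of illustration only, a minimal Python/NumPy sketch of Otsu-style automatic thresholding, as used conceptually in block 1010B6, is given below; the patent's own implementation is the Matlab source of Appendix 2, and the function and variable names here are hypothetical.

      import numpy as np

      def otsu_threshold(pixels, nbins=256):
          # Threshold that maximizes between-class variance for an assumed
          # bimodal histogram (fluid versus non-fluid pixel intensities).
          hist, edges = np.histogram(pixels, bins=nbins)
          hist = hist.astype(float) / hist.sum()
          centers = (edges[:-1] + edges[1:]) / 2.0
          w0 = np.cumsum(hist)                    # cumulative weight of the dark class
          w1 = 1.0 - w0                           # weight of the bright class
          mu = np.cumsum(hist * centers)          # cumulative mean intensity
          with np.errstate(divide='ignore', invalid='ignore'):
              between = (mu[-1] * w0 - mu) ** 2 / (w0 * w1)
          between[np.isnan(between)] = 0.0
          return centers[np.argmax(between)]

      # Pixels darker than the threshold are designated potential fluid pixels, e.g.:
      # fluid_mask = image < otsu_threshold(image[~body_and_shadow_mask])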
  • Referring now to the second pathway, the Edge-Based Segmentation Group, the procedural blocks find pixel points on an image having high spatial gradient magnitudes. The edge-based segmentation process begins processing the shock filtered 1010A8 pixel data via a spatial gradients block 1010C2, in which the gradient magnitude of a given pixel neighborhood within the image is determined. The gradient magnitude is determined by taking the X and Y derivatives using the difference kernels shown in FIG. 26. The gradient magnitude of the image is given by Equation E7:

  • ∥∇I∥ = √(Ix² + Iy²)
  • Ix = I*Kx
  • Iy = I*Ky   E7
  • where * is the convolution operator.
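  • As a hedged illustration of Equation E7, the following Python/NumPy sketch computes Ix and Iy by convolution and forms the gradient magnitude; the simple first-difference kernels shown are assumptions standing in for the kernels of FIG. 26.

      import numpy as np
      from scipy.ndimage import convolve

      # Assumed simple difference kernels; the actual Kx and Ky of FIG. 26 may differ.
      Kx = np.array([[-1.0, 1.0]])      # x-derivative kernel
      Ky = np.array([[-1.0], [1.0]])    # y-derivative kernel

      def gradient_magnitude(I):
          # Equation E7: Ix = I*Kx, Iy = I*Ky, ||grad I|| = sqrt(Ix^2 + Iy^2)
          Ix = convolve(I.astype(float), Kx)
          Iy = convolve(I.astype(float), Ky)
          return np.sqrt(Ix ** 2 + Iy ** 2)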
  • Once the gradient magnitude is determined, pixel edge points are determined by a hysteresis threshold of gradients process block 1010C4. In block 1010C4, a lower and an upper threshold value are selected. The image is thresholded using the lower value, and a connected component labeling is carried out on the resulting image. Each connected component is then examined to determine whether it contains edge points whose gradient magnitudes equal or exceed the upper threshold value; components containing such points are retained. This retention of components having strong gradient values serves to retain selected long connected edges which have one or more high gradient points.
  • The image is also thresholded at the upper value to identify the strong edge points used in this retention step; components containing none of these points are discarded. The hysteresis threshold block 1010C4 is modified by a user-determined edge threshold block 1010C6. An example of an application of the second pathway is shown in panels (b), for the spatial gradients block 1010C2, and (c), for the threshold of gradients process block 1010C4, of FIG. 27. Another example of application of the edge detection block group for blocks 1010C2 and 1010C4 can also be seen in panel (e) of FIG. 25.
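  • A minimal sketch of the hysteresis thresholding of block 1010C4, assuming the lower and upper threshold values are supplied by the user-determined edge threshold block 1010C6, might read as follows (illustrative Python, not the patent's source):

      import numpy as np
      from scipy import ndimage

      def hysteresis_threshold(grad_mag, low, high):
          # Keep connected edge components (thresholded at `low`) that contain
          # at least one pixel whose gradient magnitude reaches `high`.
          weak = grad_mag >= low
          labels, _ = ndimage.label(weak)                  # connected component labeling
          strong = np.unique(labels[grad_mag >= high])     # labels touching a strong pixel
          strong = strong[strong != 0]                     # drop the background label
          return np.isin(labels, strong)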
  • Referring again to FIG. 21, the first and second pathways are merged at a combine region and edges process block 1010D2. The combining process avoids erroneous segmentation arising from either the intensity-based or edge-based segmentation processes. The goal of the combining process is to ensure that good edges are reliably identified so that fluid regions are bounded by strong edges. Intensity-based segmentation may underestimate fluid volume, so that the boundaries need to be corrected using the edge-based segmentation information. In block 1010D2, the beginning and end of each scanline within the segmented region is determined by searching for edge pixels on each scanline. If no edge pixels are found in the search region, the segmentation on that scanline is removed. If edge pixels are found, then the region boundary locations are moved to the location of these edge pixels. Panel (f) of FIG. 25 illustrates the effects of the combining block 1010D2.
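  • The scanline-by-scanline correction of block 1010D2 can be pictured with the following illustrative sketch, in which each row of the arrays stands for one scanline; searching the entire scanline for edge pixels, rather than a limited search region, is a simplification assumed here.

      import numpy as np

      def combine_region_and_edges(region_mask, edge_mask):
          # If a segmented scanline contains no edge pixels, its segmentation is
          # removed; otherwise the region boundaries are moved to the edge pixels.
          out = np.zeros_like(region_mask, dtype=bool)
          for r in range(region_mask.shape[0]):
              seg = np.flatnonzero(region_mask[r])
              edges = np.flatnonzero(edge_mask[r])
              if seg.size == 0 or edges.size == 0:
                  continue                                  # no edges found: drop this scanline
              out[r, edges.min():edges.max() + 1] = True    # snap boundaries to edge pixels
          return out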
  • The segmentation resulting from the combination of region and edge information occasionally includes extraneous regions or even holes. A cleanup stage helps ensure consistency of segmented regions in a single scanplane and between scanplanes. The cleanup stage applies morphological operators (such as erosion, dilation, opening, and closing) within the framework of Markov Random Fields (MRFs), as disclosed in Forbes et al. (Florence Forbes and Adrian E. Raftery, “Bayesian morphology: Fast Unsupervised Bayesian Image Analysis,” Journal of the American Statistical Association, June 1999, herein incorporated by reference). The combined segmentation images are first subjected to an In-plane Closing and Opening process block 1010D4. The in-plane opening-closing block 1010D4 is a morphological operator that opens pixel regions to remove pixel outliers from the segmented region and fills in or “closes” gaps and holes in the segmented region within a given scanplane. Block 1010D4 uses a one-dimensional structuring element extending through five scanlines. The closing-opening block is affected by a user-determined width, height, and depth parameter block 1010D6. Thereafter, an Out-of-plane Closing and Opening processing block 1010D8 is applied. The block 1010D8 applies a set of out-of-plane morphological closings and openings using a one-dimensional structuring element extending through three scanplanes. Pixel inconsistencies are accordingly removed between the scanplanes. Panel (g) of FIG. 25 illustrates the effects of the blocks 1010D4-8.
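  • The plain morphological part of the cleanup stage (the Bayesian-morphology refinement of Forbes et al. is omitted here) might be sketched as below, with the five-scanline in-plane element taken from the text and the out-of-plane element assumed to span three scanplanes:

      import numpy as np
      from scipy import ndimage

      def cleanup(seg):
          # seg: boolean array shaped (scanplanes, scanlines, samples)
          in_plane = np.ones((1, 5, 1), dtype=bool)                # block 1010D4 element
          seg = ndimage.binary_closing(seg, structure=in_plane)    # fill gaps and holes
          seg = ndimage.binary_opening(seg, structure=in_plane)    # remove pixel outliers
          out_plane = np.ones((3, 1, 1), dtype=bool)               # block 1010D8 element
          seg = ndimage.binary_closing(seg, structure=out_plane)
          seg = ndimage.binary_opening(seg, structure=out_plane)
          return seg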
  • FIG. 22 is an expansion of the RigidRegistration block 1014 of FIG. 20. Similar in purpose and general operation to the RigidRegistration block 1012 of FIG. 16, and using the previously described ICP algorithm, the block 1014 begins with parallel inputs of a fixed Image 1014A, a Moving Image 1014B, and an Initial Transform input 1014B10.
  • The steps of the rigid registration algorithm 1014 correct any overlaps between adjacent 3D scan cones acquired in the 6-section supine grid procedure. The rigid registration algorithm 1014 first converts the fixed image 1014A2 from polar coordinate terms to Cartesian coordinate terms using the 3D Scan Convert 1014A4 algorithm. Separately, the moving image 1014B2 is also converted to Cartesian coordinates using the 3D Scan Convert 1014B4 algorithm. Next, the edges of the amniotic fluid regions on the fixed and moving images are determined and converted into point sets p and q, respectively, by the 3D edge detection processes 1014A6 and 1014B6. Also, the fixed image point set, p, undergoes a 3D distance transform process which maps every voxel in a 3D image to a number representing the distance to the closest edge point in p. Pre-computing this distance transform makes subsequent distance calculations and closest point determinations very efficient.
  • Next, the known initial transform 1014B10, for example (6, 0, 0) for the Cartesian Tx, Ty, Tz terms and (0, 0, 0) for the θx, θy, θz Euler angle terms for an inter-transceiver interval of 6 cm, is applied to the moving image by the transform edges block 1014B8. This transformed image is then subjected to the Find Closest Points on Fixed Image block 1014C2, similar in operation to the block 1148 of FIG. 16. Thereafter, a new transform is determined in block 1014C4, and the new transform is queried for convergence at decision diamond 1014C8. If convergence is attained, the RigidRegistration 1014 is done at terminus 1014C10. Alternatively, if convergence is not attained, a return to the transform edges block 1014B8 occurs to start another iterative cycle.
  • The RigidRegistration block 1014 typically converges in less than 20 iterations. After applying the initial transformation, the entire registration process is carried out in case there are any overlapping segmented regions between any two images. Similar to the process described in connection with FIG. 16, an overlap threshold of approximately 20% is currently set as an input parameter.
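  • The iterative loop of block 1014 can be illustrated with the translation-only sketch below, which precomputes the 3D distance transform of the fixed edge set and then alternates the transform-edges, find-closest-points, and new-transform steps until convergence; the full algorithm also estimates the Euler angles, and all names here are hypothetical.

      import numpy as np
      from scipy import ndimage

      def register_translation(fixed_edges, moving_pts, t0=(6.0, 0.0, 0.0),
                               max_iter=20, tol=1e-3):
          # fixed_edges: boolean 3D volume of fixed-image edge voxels
          # moving_pts:  (N, 3) array of moving-image edge voxel coordinates
          # t0: initial transform, e.g. 6 cm along x for the inter-transceiver interval
          _, nearest = ndimage.distance_transform_edt(~fixed_edges, return_indices=True)
          t = np.asarray(t0, dtype=float)
          shape = np.array(fixed_edges.shape)
          for _ in range(max_iter):
              moved = moving_pts + t                              # transform edges
              idx = np.clip(np.round(moved).astype(int), 0, shape - 1).T
              closest = np.array([nearest[d][tuple(idx)] for d in range(3)]).T
              t_new = t + (closest - moved).mean(axis=0)          # new transform estimate
              if np.linalg.norm(t_new - t) < tol:                 # convergence check
                  return t_new
              t = t_new
          return t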
  • FIG. 23 is a 4-panel image set that shows the effect of multiple iterations of the heat filter applied to an original image. The effect of heat iteration number in progressively blurring and removing speckle from an original image as the number of iterations increases is shown in FIG. 23. In this case the heat filter is described by process blocks 1010A4 and A6 of FIG. 21. In this example, an original image of a bladder is shown in panel (a) having visible speckle spread throughout the image. Some blurring is seen in the 10-iteration image in panel (b), followed by progressively more blurring at 50 iterations in panel (c) and 100 iterations in panel (d). As the blurring increases with iteration number, the speckle progressively decreases.
  • FIG. 24 shows the effect of shock filtering, and of combined heat-and-shock filtering, on the pixel values of the image. The effect of shock iteration number, step sizes, and gradient thresholds in reducing the blurring introduced by the heat filter is seen in ultrasound signal plots (a) and (b) of FIG. 24. Signal plot (a) depicts a smoothed or blurred signal gradient as a sigmoidal long dashed line that is subsequently shock filtered. As can be seen by the more abrupt or steeply stepped signal plot after shock filtering, the magnitude of the shock filtered signal (short dashed line) approaches that of the original signal (solid line) without the choppy or noisy pattern associated with speckle. For the most part there is virtually a complete overlap of the shock filtered signal with the original signal through the pixel plot range.
  • Similarly, ultrasound signal plot (b) depicts the effects of applying a shock filter to a noisy (speckle rich) signal line (sinuous long dash line) that has been smoothed or blurred by the heat filter (short dashed line with sigmoidal appearance). In operation the shock filter results in a general deblurring or sharpening of the edges of the image that were previously blurred. Adjacent with, but not entirely overlapping, the original signal (solid line) throughout the pixel plot range, the shock filtered plot substantially overlaps the vertical portion of the original signal, but stays elevated in the low and high pixel ranges. As in (a), a more abrupt or steeply stepped signal plot after shock filtering is obtained without significant removal of speckle. Depending on the gradient threshold, step size, and iteration number imposed by block 1010A10 upon shock block 1010A8, different levels of overlap of the shock filtered line with the original are obtained.
  • FIG. 25 is a 7-panel image set generated by the image enhancement and segmentation algorithms of FIG. 21. Panel (a) is the original uterine image. Panel (b) is the image produced by the image enhancement processes primarily described in blocks 1010A4-6 (heat filters) and blocks 1010A8-10 (shock filters) of FIG. 21. Panel (c) shows the effects of the processing obtained from blocks 1010B2-4 (Estimate Shadow and Fetal Body Regions/Body Threshold). Panel (d) is the image when processed by the Intensity-Based Segmentation Block Group 1010B2-8. Panel (e) results from application of the Edge-Based Segmentation Block Group 1010C2-6. Thereafter, the Intensity-based and Edge-based block groups are combined (combining block 1010D2) to produce the image shown in panel (f). Panel (g) illustrates the effects of the In-plane and Out-of-plane opening and closing processing blocks 1010D4-8.
  • FIG. 26 is a pixel difference kernel for obtaining X and Y derivatives to determine pixel gradient magnitudes for edge-based segmentation. As illustrated, a simplest-case convolution is obtained for a first derivative computation, where Kx and Ky are the convolution kernels.
  • FIG. 27 is a 3-panel image set showing the progressive demarcation or edge detection of organ wall interfaces arising from edge-based segmentation algorithms. Panel (a) is the enhanced input image. Panel (b) is the image result when the enhanced input image is subjected to the spatial gradients block 1010C2. Panel (c) is the image result when the enhanced and spatial gradients 1010C2 processed image is further processed by the threshold of gradients process block 1010C4.
  • Demonstrations of the algorithmic manipulation of pixels of the present invention are provided in Appendix 1: Examples of Algorithmic Steps. Source code of the algorithms of the present invention is provided in Appendix 2: Matlab Source Code.
  • While the preferred embodiment of the invention has been illustrated and described, as noted above, many changes can be made without departing from the spirit and scope of the invention. For example, other uses of the invention include determining the areas and volumes of the prostate, heart, bladder, and other organs and body regions of clinical interest. Accordingly, the scope of the invention is not limited by the disclosure of the preferred embodiment.
  • Systems, methods, and devices for image clarity of ultrasound-based images are described and illustrated in the following figures. The clarity of ultrasound imaging requires the efficient coordination of ultrasound transfer or communication to and from an examined subject, image acquisition from the communicated ultrasound, and microprocessor-based image processing. Oftentimes the examined subject moves during image acquisition, the ultrasound transducer moves, and/or movement occurs within the scanned region of interest; such motion requires the refinements described below to secure clear images.
  • The ultrasound transceivers or DCD devices developed by Diagnostic Ultrasound are capable of collecting in vivo three-dimensional (3-D) cone-shaped ultrasound images of a patient. Based on these 3-D ultrasound images, various applications have been developed such as bladder volume and mass estimation.
  • During the data collection process initiated by the DCD, a pulsed ultrasound field is transmitted into the body, and the back-scattered “echoes” are detected as a one-dimensional (1-D) voltage trace, which is also referred to as an RF line. After envelope detection, a set of 1-D data samples is interpolated to form a two-dimensional (2-D) or 3-D ultrasound image.
  • FIGS. 1A-D depict a partial schematic and a partial isometric view of a transceiver, a scan cone comprising a rotational array of scan planes, and a scan plane of the array of the various ultrasound harmonic imaging systems 60A-D illustrated in FIGS. 3 and 4 below.
  • FIG. 1A is a side elevation view of an ultrasound transceiver 10A that includes an inertial reference unit, according to an embodiment of the invention. The transceiver 10A includes a transceiver housing 18 having an outwardly extending handle 12 suitably configured to allow a user to manipulate the transceiver 10A relative to a patient. The handle 12 includes a trigger 14 that allows the user to initiate an ultrasound scan of a selected anatomical portion, and a cavity selector 16. The cavity selector 16 will be described in greater detail below. The transceiver 10A also includes a transceiver dome 20 that contacts a surface portion of the patient when the selected anatomical portion is scanned. The dome 20 generally provides an appropriate acoustical impedance match to the anatomical portion and/or permits ultrasound energy to be properly focused as it is projected into the anatomical portion. The transceiver 10A further includes one, or preferably an array of, separately excitable ultrasound transducer elements (not shown in FIG. 1A) positioned within or otherwise adjacent to the housing 18. The transducer elements may be suitably positioned within the housing 18 or otherwise to project ultrasound energy outwardly from the dome 20, and to permit reception of acoustic reflections generated by internal structures within the anatomical portion. The one or more arrays of ultrasound elements may include a one-dimensional or a two-dimensional array of piezoelectric elements that may be moved within the housing 18 by a motor. Alternately, the array may be stationary with respect to the housing 18 so that the selected anatomical region may be scanned by selectively energizing the elements in the array.
  • A directional indicator panel 22 includes a plurality of arrows that may be illuminated for initial targeting and for guiding a user in targeting an organ or structure within an ROI. In particular embodiments, if the organ or structure is centered when the transceiver 10A is acoustically placed against the dermal surface at a first location of the subject, the directional arrows may not be illuminated. If the organ is off-center, an arrow or set of arrows may be illuminated to direct the user to reposition the transceiver 10A acoustically at a second or subsequent dermal location of the subject. The acoustic coupling may be achieved by liquid sonic gel applied to the skin of the patient or by sonic gel pads against which the transceiver dome 20 is placed. The directional indicator panel 22 may be presented on the display 54 of computer 52 in the harmonic imaging subsystems described in FIGS. 3 and 4 below, or alternatively, presented on the transceiver display 16.
  • Transceiver 10A includes an inertial reference unit that includes an accelerometer 22 and/or gyroscope 23 positioned preferably within or adjacent to housing 18. The accelerometer 22 may be operable to sense an acceleration of the transceiver 10A, preferably relative to a coordinate system, while the gyroscope 23 may be operable to sense an angular velocity of the transceiver 10A relative to the same or another coordinate system. Accordingly, the gyroscope 23 may be of conventional configuration that employs dynamic elements, or it may be an optoelectronic device, such as the known optical ring gyroscope. In one embodiment, the accelerometer 22 and the gyroscope 23 may include a commonly packaged and/or solid-state device. One suitable commonly packaged device may be the MT6 miniature inertial measurement unit, available from Omni Instruments, Incorporated, although other suitable alternatives exist. In other embodiments, the accelerometer 22 and/or the gyroscope 23 may include commonly packaged micro-electromechanical system (MEMS) devices, which are commercially available from MEMSense, Incorporated. As described in greater detail below, the accelerometer 22 and the gyroscope 23 cooperatively permit the determination of positional and/or angular changes relative to a known position that is proximate to an anatomical region of interest in the patient. Other configurations related to the accelerometer 22 and gyroscope 23 concerning transceivers 10A,B equipped with inertial reference units and the operations thereto may be obtained from copending U.S. patent application Ser. No. 11/222,360 filed Sep. 8, 2005, herein incorporated by reference.
  • The transceiver 10A includes (or is capable of being in signal communication with) a display 24 operable to view processed results from an ultrasound scan, and/or to allow an operational interaction between the user and the transceiver 10A. For example, the display 24 may be configured to display alphanumeric data that indicates a proper and/or an optimal position of the transceiver 10A relative to the selected anatomical portion. Display 24 may be used to view two- or three-dimensional images of the selected anatomical region. Accordingly, the display 24 may be a liquid crystal display (LCD), a light emitting diode (LED) display, a cathode ray tube (CRT) display, or other suitable display device operable to present alphanumeric data and/or graphical images to a user.
  • Still referring to FIG. 1A, a cavity selector 16 may be operable to adjustably adapt the transmission and reception of ultrasound signals to the anatomy of a selected patient. In particular, the cavity selector 16 adapts the transceiver 10A to accommodate various anatomical details of male and female patients. For example, when the cavity selector 16 is adjusted to accommodate a male patient, the transceiver 10A may be suitably configured to locate a single cavity, such as a urinary bladder in the male patient. In contrast, when the cavity selector 16 is adjusted to accommodate a female patient, the transceiver 10A may be configured to image an anatomical portion having multiple cavities, such as a bodily region that includes a bladder and a uterus. Alternate embodiments of the transceiver 10A may include a cavity selector 16 configured to select a single cavity scanning mode, or a multiple cavity-scanning mode that may be used with male and/or female patients. The cavity selector 16 may thus permit a single cavity region to be imaged, or a multiple cavity region, such as a region that includes a lung and a heart to be imaged.
  • To scan a selected anatomical portion of a patient, the transceiver dome 20 of the transceiver 10A may be positioned against a surface portion of a patient that is proximate to the anatomical portion to be scanned. The user actuates the transceiver 10A by depressing the trigger 14. In response, the transceiver 10A transmits ultrasound signals into the body, and receives corresponding return echo signals that may be at least partially processed by the transceiver 10A to generate an ultrasound image of the selected anatomical portion. In a particular embodiment, the transceiver 10A transmits ultrasound signals in a range that extends from approximately two megahertz (MHz) to approximately ten MHz.
  • In one embodiment, the transceiver 10A may be operably coupled to an ultrasound system that may be configured to generate ultrasound energy at a predetermined frequency and/or pulse repetition rate and to transfer the ultrasound energy to the transceiver 10A. The system also includes a processor that may be configured to process reflected ultrasound energy that is received by the transceiver 10A to produce an image of the scanned anatomical region. Accordingly, the system generally includes a viewing device, such as a cathode ray tube (CRT), a liquid crystal display (LCD), a plasma display device, or other similar display devices, that may be used to view the generated image. The system may also include one or more peripheral devices that cooperatively assist the processor to control the operation of the transceiver 10A, such as a keyboard, a pointing device, or other similar devices. In still another particular embodiment, the transceiver 10A may be a self-contained device that includes a microprocessor positioned within the housing 18 and software associated with the microprocessor to operably control the transceiver 10A, and to process the reflected ultrasound energy to generate the ultrasound image. Accordingly, the display 24 may be used to display the generated image and/or to view other information associated with the operation of the transceiver 10A. For example, the information may include alphanumeric data that indicates a preferred position of the transceiver 10A prior to performing a series of scans. In yet another particular embodiment, the transceiver 10A may be operably coupled to a general-purpose computer, such as a laptop or a desktop computer, that includes software that at least partially controls the operation of the transceiver 10A, and also includes software to process information transferred from the transceiver 10A, so that an image of the scanned anatomical region may be generated. The transceiver 10A may also be optionally equipped with electrical contacts to communicate with the receiving cradles 50 discussed in FIGS. 3 and 4 below. Although the transceiver 10A of FIG. 1A may be used in any of the foregoing embodiments, other transceivers may also be used. For example, the transceiver may lack one or more features of the transceiver 10A. For example, a suitable transceiver need not be a manually portable device, and/or need not have a top-mounted display, and/or may selectively lack other features or exhibit further differences.
  • Still referring to FIG. 1A, a plurality of scan planes forms a three-dimensional (3D) array having a substantially conical shape. An ultrasound scan cone 40 formed by a rotational array of two-dimensional scan planes 42 projects outwardly from the dome 20 of the transceiver 10A. The other transceiver embodiments 10B-10E may also be configured to develop a scan cone 40 formed by a rotational array of two-dimensional scan planes 42. The plurality of scan planes 42 may be oriented about an axis 11 extending through the transceivers 10A-10E. One or more, or preferably each, of the scan planes 42 may be positioned about the axis 11, preferably, but not necessarily, at a predetermined angular position θ. The scan planes 42 may be mutually spaced apart by angles θ1 and θ2. Correspondingly, the scan lines within each of the scan planes 42 may be spaced apart by angles φ1 and φ2. Although the angles θ1 and θ2 are depicted as approximately equal, it is understood that the angles θ1 and θ2 may have different values. Similarly, although the angles φ1 and φ2 are shown as approximately equal, the angles φ1 and φ2 may also have different values. Other scan cone configurations are possible. For example, a wedge-shaped scan cone, or other similar shapes, may be generated by the transceiver 10A, 10B and 10C.
  • FIG. 1B is a graphical representation of a scan plane 42. The scan plane 42 includes the peripheral scan lines 44 and 46, and an internal scan line 48 having a length r that extends outwardly from the transceivers 10A-10E. Thus, a selected point along the peripheral scan lines 44 and 46 and the internal scan line 48 may be defined with reference to the distance r and angular coordinate values φ and θ. The length r preferably extends to approximately 18 to 20 centimeters (cm), although any length is possible. Particular embodiments include approximately seventy-seven scan lines 48 that extend outwardly from the dome 20, although any number of scan lines is possible.
  • FIG. 1C is a graphical representation of a plurality of scan lines emanating from a hand-held ultrasound transceiver forming a single scan plane 42 extending through a cross-section of an internal bodily organ. The number and location of the internal scan lines emanating from the transceivers 10A-10E within a given scan plane 42 may thus be distributed at different positional coordinates about the axis line 11 as required to sufficiently visualize structures or images within the scan plane 42. As shown, four portions of an off-centered region-of-interest (ROI) are exhibited as irregular regions 49. Three portions may be viewable within the scan plane 42 in totality, and one may be truncated by the peripheral scan line 44.
  • As described above, the angular movement of the transducer may be mechanically effected and/or it may be electronically or otherwise generated. In either case, the number of lines 48 and the length of the lines may vary, so that the tilt angle φ sweeps through angles approximately between −60° and +60° for a total arc of approximately 120°. In one particular embodiment, the transceiver 10 may be configured to generate approximately about seventy-seven scan lines between the first limiting scan line 44 and a second limiting scan line 46. In another particular embodiment, each of the scan lines has a length of approximately about 18 to 20 centimeters (cm). The angular separation between adjacent scan lines 48 (FIG. 1B) may be uniform or non-uniform. For example, and in another particular embodiment, the angular separation φ1 and φ2 (as shown in FIG. 5C) may be about 1.5°. Alternately, and in another particular embodiment, the angular separation φ1 and φ2 may be a sequence wherein adjacent angles may be ordered to include angles of 1.5°, 6.8°, 15.5°, 7.2°, and so on, where a 1.5° separation is between a first scan line and a second scan line, a 6.8° separation is between the second scan line and a third scan line, a 15.5° separation is between the third scan line and a fourth scan line, a 7.2° separation is between the fourth scan line and a fifth scan line, and so on. The angular separation between adjacent scan lines may also be a combination of uniform and non-uniform angular spacings, for example, a sequence of angles may be ordered to include 1.5°, 1.5°, 1.5°, 7.2°, 14.3°, 20.2°, 8.0°, 8.0°, 8.0°, 4.3°, 7.8°, and so on.
  • FIG. 1D is an isometric view of an ultrasound scan cone that projects outwardly from the transceivers of FIGS. 1A-E. Three-dimensional images of a region of interest may be presented within a scan cone 40 that comprises a plurality of 2D images formed in an array of scan planes 42. A dome cutout 41 that is complementary to the dome 20 of the transceivers 10A-10E is shown at the top of the scan cone 40.
  • FIG. 2 depicts a partial schematic and partial isometric and side view of a transceiver, and a scan cone array comprised of 3D-distributed scan lines, in an alternate embodiment of an ultrasound harmonic imaging system. A plurality of three-dimensional (3D) distributed scan lines emanating from a transceiver cooperatively forms a scan cone 30. Each of the scan lines has a length r that projects outwardly from the transceivers 10A-10E of FIGS. 1A-1E. As illustrated, the transceiver 10A emits 3D-distributed scan lines within the scan cone 30 that may be one-dimensional ultrasound A-lines. The other transceiver embodiments 10B-10E may also be configured to emit 3D-distributed scan lines. Taken as an aggregate, these 3D-distributed A-lines define the conical shape of the scan cone 30. The ultrasound scan cone 30 extends outwardly from the dome 20 of the transceiver 10A, 10B and 10C, centered about an axis line 11. The 3D-distributed scan lines of the scan cone 30 include a plurality of internal and peripheral scan lines that may be distributed within a volume defined by a perimeter of the scan cone 30. Accordingly, the peripheral scan lines 31A-31F define an outer surface of the scan cone 30, while the internal scan lines 34A-34C may be distributed between the respective peripheral scan lines 31A-31F. Scan line 34B may be generally collinear with the axis 11, and the scan cone 30 may be generally and coaxially centered on the axis line 11.
  • The locations of the internal and peripheral scan lines may be further defined by an angular spacing from the center scan line 34B and between internal and peripheral scan lines. The angular spacing between scan line 34B and peripheral or internal scan lines may be designated by angle Φ and angular spacings between internal or peripheral scan lines may be designated by angle Ø. The angles Φ1, Φ2, and Φ3 respectively define the angular spacings from scan line 34B to scan lines 34A, 34C, and 31D. Similarly, angles Ø1, Ø2, and Ø3 respectively define the angular spacings between scan line 31B and 31C, 31C and 34A, and 31D and 31E.
  • With continued reference to FIG. 2, the plurality of peripheral scan lines 31A-E and the plurality of internal scan lines 34A-D may be three dimensionally distributed A-lines (scan lines) that are not necessarily confined within a scan plane, but instead may sweep throughout the internal regions and along the periphery of the scan cone 30. Thus, a given point within the scan cone 30 may be identified by the coordinates r, Φ, and Ø whose values generally vary. The number and location of the internal scan lines emanating from the transceivers 10A-10E may thus be distributed within the scan cone 30 at different positional coordinates as required to sufficiently visualize structures or images within a region of interest (ROI) in a patient. The angular movement of the ultrasound transducer within the transceiver 10A-10E may be mechanically effected, and/or it may be electronically generated. In any case, the number of lines and the length of the lines may be uniform or otherwise vary, so that angle Φ sweeps through angles approximately between −60° between scan line 34B and 31A, and +60° between scan line 34B and 31B. Thus angle Φ in this example presents a total arc of approximately 120°. In one embodiment, the transceiver 10A, 10B and 10C may be configured to generate a plurality of 3D-distributed scan lines within the scan cone 30 having a length r of approximately 18 to 20 centimeters (cm).
  • FIG. 3 is a schematic illustration of a server-accessed local area network in communication with a plurality of ultrasound harmonic imaging systems. An ultrasound harmonic imaging system 100 includes one or more personal computer devices 52 that may be coupled to a server 56 by a communications system 55. The devices 52 may be, in turn, coupled to one or more ultrasound transceivers 10A and/or 10B, for example in the ultrasound harmonic sub-systems 60A-60D. Ultrasound-based images of organs or other regions of interest, derived from the signals of echoes from fundamental frequency ultrasound and/or harmonics thereof, may be shown within scan cone 30 or 40 presented on display 54. The server 56 may be operable to provide additional processing of ultrasound information, or it may be coupled to still other servers (not shown in FIG. 3) and devices. Transceivers 10A or 10B may be in wireless communication with computer 52 in sub-system 60A, in wired signal communication in sub-system 60B, in wireless communication with computer 52 via receiving cradle 50 in sub-system 60C, or in wired communication with computer 52 via receiving cradle 50 in sub-system 60D.
  • FIG. 4 is a schematic illustration of the Internet in communication with a plurality of ultrasound harmonic imaging systems. An Internet system 110 may be coupled or otherwise in communication with the ultrasound harmonic sub-systems 60A-60D.
  • FIG. 5 schematically depicts a master method flow chart algorithm 120 to acquire a clarity-enhanced ultrasound image. Algorithm 120 begins with process block 150, in which an acoustic coupling or sonic gel is applied to the dermal surface near the region-of-interest (ROI) using a degassing gel dispenser. Embodiments illustrating the degassing gel dispenser and its uses are depicted in FIGS. 36A-G below. After applying the sonic gel, decision diamond 170 is reached with the query “Targeting a moving structure?”, and if the answer to this query is negative, algorithm 120 continues to process block 200. At process block 200, the ultrasound transceiver dome 20 of transceiver 10A,B is placed into the sonic gel residing on the dermal surface, and pulsed ultrasound energy is transmitted to the ROI. Thereafter, echoes of the fundamental ultrasound frequency and/or harmonics thereof are captured by the transceiver 10A,B and converted to echogenic signals. If the answer to decision diamond 170 is affirmative for targeting a moving structure within the ROI, the ROI is re-targeted, at process block 300, using optical flow real-time analysis.
  • Whether receiving echogenic signals from non-moving targets within the ROI from process block 200, or from moving targets within the ROI from process block 300, algorithm 120 continues with processing blocks 400A or 400B. Processing blocks 400A and 400B process the echogenic datasets of the echogenic signals from process blocks 200 and 300 using point spread function algorithms to compensate for or otherwise suppress motion-induced reverberations within the ROI echogenic data sets. Processing block 400A employs nonparametric analysis, and processing block 400B employs parametric analysis, as described in FIG. 9 below. Once motion artifacts are corrected, algorithm 120 continues with processing block 500 to segment image sections derived from the distortion-compensated data sets. At process block 600, areas of the segmented sections within 2D images and/or 3D volumes are determined. Thereafter, master algorithm 120 completes at process block 700, in which segmented structures within the static or moving ROI are displayed along with any segmented section area and/or volume measurements.
  • FIG. 6 is an expansion of sub-algorithm 150 of master algorithm 120 of FIG. 5. Beginning from the entry point of master algorithm 120, sub-algorithm 150 starts at process block 152, wherein a metered volume of sonic gel is applied from the volume-controlled dispenser to the dermal surface believed to overlap the ROI. Thereafter, at process block 154, any gas pockets within the applied gel are expelled by a roller pressing action. Sub-algorithm 150 is then completed and exits to sub-algorithm 200.
  • FIG. 7 is an expansion of sub-algorithm 200 of FIG. 5. Entering from process block 154, sub-algorithm 200 starts at process block 202, wherein the transceiver dome 20 of transceiver 10A,B is placed into the gas-purged sonic gel to get a firm sonic coupling, and then, at process block 206, pulsed frequency ultrasound is transmitted to the underlying ROI. Thereafter, at process block 210, ultrasound echoes from the ROI and any intervening structure are collected by the transceiver 10A,B and converted to echogenic data sets for presentation of an image of the ROI. Once the image of the ROI is displayed, decision diamond 218 is reached with the query “Are structures of interest (SOI) sufficiently in view within the ROI?”, and if the answer to this query is negative, sub-algorithm 200 continues to process block 222, in which the transceiver is moved to a new anatomical location for re-routing to process block 202, where the transceiver dome 20 is again placed into the dermal-residing sonic gel and pulsed ultrasound energy is transmitted to the ROI. If the answer to the decision diamond 218 is affirmative for a sufficiently viewed SOI, sub-algorithm 200 continues to process block 226, in which a 3D echogenic data set array of the ROI is acquired using at least one of an ultrasound fundamental and/or harmonic frequency. Sub-algorithm 200 is then completed and exits to sub-algorithm 300.
  • FIG. 8 is an expansion of sub-algorithm 300 of the master algorithm illustrated in FIG. 5. Entering from decision diamond 170, sub-algorithm 300 begins in process block 302 by making a transceiver 10A,B-to-ROI sonic coupling similar to process block 202, then transmits pulsed frequency ultrasound at process block 306, and thereafter, at process block 310, acquires ultrasound echoes, converts them to echogenic data sets, presents a currently displayed image “i” of the ROI, and compares “i” with any predecessor image “i-1” of the ROI, if available. Thereafter, at process block 314, pixel movement along the Cartesian axes is ascertained to determine the X- and Y-axis pixel center-of-optical flow, followed similarly by process block 318, in which pixel movement along the phi angle is ascertained to determine a rotational center-of-optical flow. Thereafter, at process block 322, optical flow velocity maps are computed to ascertain whether the axial and rotational vectors exceed a pre-defined threshold OFR value. Once the velocity maps are obtained, decision diamond 326 is reached with the query “Does optical flow velocity map match the expected pattern for the structure being imaged?”, and if the answer is negative, sub-algorithm 300 re-routes to process block 306 for retransmission of ultrasound to the ROI via the sonically coupled transceiver 10A,B. If the answer is affirmative for a matched velocity map and expected pattern of the structure being imaged, sub-algorithm 300 continues with process block 330, in which a 3D echogenic data set array of the ROI is acquired using at least one of an ultrasound fundamental and/or harmonic frequency. Sub-algorithm 300 is then completed and exits to sub-algorithms 400A and 400B.
  • FIG. 9 depicts expansions of sub-algorithms 400A and 400B of FIG. 5. Sub-algorithm 400A employs nonparametric pulse estimation and sub-algorithm 400B employs parametric pulse estimation. Sub-algorithm 400A describes an implementation of the CLEAN algorithm for reducing reverberation and noise in the ultrasound signals and comprises an RF line processing block 400A-2, a non-parametric pulse estimation block 400A-4, a CLEAN iteration block 400A-6, a decision diamond block 400A-8 having the query “STOP?”, and a Scan Convert processing block 400A-10. The same algorithm is applied to each RF line in a scan plane, but each RF line uses its own unique estimate of the point spread function of the transducer (or pulse estimate). The algorithm is made iterative by re-routing to the non-parametric pulse estimation block 400A-4: the point spread function is estimated, the CLEAN sub-algorithm is applied, and then the pulse is re-estimated from the output of the CLEAN sub-algorithm. The iterations are stopped after a maximum number of iterations is reached or when the changes in the signal are sufficiently small. Thereafter, once the iteration has stopped, the signals are converted for presentation as part of a scan plane image at process block 400A-10. Sub-algorithm 400A is then completed and exits to sub-algorithm 500.
  • Referring to sub-algorithm 400B, the parametric analysis employs an implementation of the CLEAN algorithm that is not iterative. Sub-algorithm 400B comprises an RF line processing block 400B-2, a parametric pulse estimation block 400B-4, a CLEAN algorithm block 400B-6, a CLEAN iteration block 400B-8, and a Scan Convert processing block 400B-10. The point spread function of the transducer is estimated once and becomes a priori information used in the CLEAN algorithm. A single estimate of the pulse is applied to all RF lines in a scan plane and the CLEAN algorithm is applied once to each line. The signal output is then converted for presentation as part of a scan plane image at process block 400B-10. Sub-algorithm 400B is then completed and exits to sub-algorithm 500.
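  • A highly simplified, one-dimensional sketch of a CLEAN-style iteration on a single RF line is given below for illustration; the loop gain, iteration limit, and stopping tolerance are assumptions, and the patent's actual CLEAN blocks 400A-6 and 400B-6 operate as described above.

      import numpy as np

      def clean_rf_line(rf, pulse, gain=0.1, max_iter=200, tol=1e-6):
          # rf: one RF line; pulse: estimated point spread function (centered)
          residual = rf.astype(float).copy()
          components = np.zeros_like(residual)
          half = len(pulse) // 2
          for _ in range(max_iter):
              k = int(np.argmax(np.abs(residual)))        # strongest remaining reflector
              amp = gain * residual[k]
              lo = max(0, k - half)
              hi = min(len(residual), k - half + len(pulse))
              residual[lo:hi] -= amp * pulse[lo - (k - half):hi - (k - half)]
              components[k] += amp
              if abs(amp) < tol:                          # change is sufficiently small
                  break
          return components, residual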
  • FIG. 10 is an expansion of sub-algorithm 500 of FIG. 5. 3D data sets from processing blocks 400A-10 or 400B-10 of sub-algorithms 400A or 400B are entered at input data process block 504 and then undergo a 2-step image enhancement procedure at process block 506. The 2-step image enhancement includes performing a heat filter to reduce noise followed by a shock filter to sharpen edges of structures within the 3D data sets. The heat and shock filters are partial differential equations (PDE) defined respectively in Equations E1 and E2 below:
  • ∂u/∂t = ∂²u/∂x² + ∂²u/∂y²   (Heat Filter)   E1
  • ∂u/∂t = −F(l(u))·∥∇u∥   (Shock Filter)   E2
  • Here u in the heat filter represents the image being processed. The image u is 2D, and is comprised of an array of pixels arranged in rows along the x-axis and in columns along the y-axis. The pixel intensity of each pixel in the image u has an initial input image pixel intensity (I) defined as u0=I. The value of I depends on the application, and commonly occurs within ranges consistent with the application. For example, I can be as low as 0 to 1, or occupy middle ranges between 0 to 127 or 0 to 512. Similarly, I may have values occupying higher ranges of 0 to 1024 and 0 to 4096, or greater. For the shock filter, u likewise represents the image being processed, whose initial value is the input image pixel intensity (I): u0=I. The l(u) term is the Laplacian of the image u, F is a function of the Laplacian, and ∥∇u∥ is the 2D gradient magnitude of image intensity defined by equation E3:

  • ∥∇u∥ = √(ux² + uy²)   E3
  • where ux² is the square of the partial derivative of the pixel intensity u along the x-axis, and uy² is the square of the partial derivative of the pixel intensity u along the y-axis. The Laplacian l(u) of the image u is expressed in equation E4:
  • l(u) = uxx·ux² + 2·uxy·ux·uy + uyy·uy²   E4
  • The terms in equations E3 and E4 are defined as follows:
  • ux is the first partial derivative ∂u/∂x of u along the x-axis,
  • uy is the first partial derivative ∂u/∂y of u along the y-axis,
  • ux² is the square of the first partial derivative ∂u/∂x of u along the x-axis,
  • uy² is the square of the first partial derivative ∂u/∂y of u along the y-axis,
  • uxx is the second partial derivative ∂²u/∂x² of u along the x-axis,
  • uyy is the second partial derivative ∂²u/∂y² of u along the y-axis,
  • uxy is the cross partial derivative ∂²u/∂x∂y of u along the x and y axes, and
  • the sign of the function F modifies the Laplacian by the image gradient values selected to avoid placing spurious edges at points with small gradient values:
  • F(l(u)) = 1, if l(u) > 0 and ∥∇u∥ > t
  • F(l(u)) = −1, if l(u) < 0 and ∥∇u∥ > t
  • F(l(u)) = 0, otherwise
  • where t is a threshold on the pixel gradient value ∥∇u∥.
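  • For illustration, equations E1 through E4 and the function F can be discretized explicitly as in the Python sketch below; the iteration counts, step size dt, and gradient threshold t stand in for the user inputs of blocks 1010A6 and 1010A10, and the simple finite differences used here are assumptions of this sketch.

      import numpy as np

      def heat_filter(u, n_iter=50, dt=0.1):
          # Equation E1: du/dt = u_xx + u_yy, iterated n_iter times with step dt
          u = u.astype(float).copy()
          for _ in range(n_iter):
              lap = (np.roll(u, 1, 0) + np.roll(u, -1, 0) +
                     np.roll(u, 1, 1) + np.roll(u, -1, 1) - 4.0 * u)
              u += dt * lap
          return u

      def shock_filter(u, n_iter=20, dt=0.1, t=1.0):
          # Equation E2: du/dt = -F(l(u)) * ||grad u||, with l(u) from equation E4
          u = u.astype(float).copy()
          for _ in range(n_iter):
              uy, ux = np.gradient(u)
              uyy, _ = np.gradient(uy)
              uxy, uxx = np.gradient(ux)
              grad = np.sqrt(ux ** 2 + uy ** 2)
              l = uxx * ux ** 2 + 2.0 * uxy * ux * uy + uyy * uy ** 2
              F = np.where((l > 0) & (grad > t), 1.0,
                           np.where((l < 0) & (grad > t), -1.0, 0.0))
              u -= dt * F * grad
          return u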
  • The combination of heat filtering and shock filtering produces an enhanced image ready to undergo the intensity-based and edge-based segmentation algorithms discussed below. The enhanced 3D data sets are then subjected to a parallel process of intensity-based segmentation at process block 510 and edge-based segmentation at process block 512. The intensity-based segmentation step uses a “k-means” intensity clustering technique, wherein the enhanced image is subjected to a categorizing “k-means” clustering algorithm. The “k-means” algorithm categorizes pixel intensities into white, gray, and black pixel groups. Given the number of desired clusters or groups of intensities (k), the k-means algorithm is an iterative algorithm comprising four steps. First, cluster boundaries are determined by defining a minimum and a maximum pixel intensity value for each of the white, gray, and black groups or k-clusters, equally spaced over the entire intensity range. Second, each pixel is assigned to one of the white, gray, or black k-clusters based on the currently set cluster boundaries. Third, a mean intensity is calculated for each pixel intensity k-cluster or group based on the current assignment of pixels into the different k-clusters; the calculated mean intensity is defined as a cluster center, and new cluster boundaries are then determined as mid points between cluster centers. The fourth and final step determines whether the cluster boundaries significantly change location from their previous values; should the cluster boundaries change significantly, the algorithm iterates back to the second step until the cluster centers do not change significantly between iterations. Visually, the clustering process is manifest in the segmented image, and repeated iterations continue until the segmented image does not change between iterations.
  • The pixels in the cluster having the lowest intensity value, the darkest cluster, are defined as pixels associated with internal cavity regions of bladders. For the 2D algorithm, each image is clustered independently of the neighboring images. For the 3D algorithm, the entire volume is clustered together. To make this step faster, pixels are down-sampled by a factor of 2, or any other sampling rate factor, before determining the cluster boundaries. The cluster boundaries determined from the down-sampled data are then applied to the entire data set.
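  • An illustrative one-dimensional k-means on pixel intensities (k = 3 for the black, gray, and white groups of process block 510) might be sketched as follows; assigning pixels to the nearest center is equivalent to using the midpoint boundaries described above, and the tolerance value is an assumption.

      import numpy as np

      def kmeans_intensity(pixels, k=3, max_iter=100, tol=1e-3):
          # pixels: flattened array of intensities; cluster 0 is the darkest group
          centers = np.linspace(pixels.min(), pixels.max(), k)   # step 1: equally spaced
          labels = np.zeros(pixels.shape, dtype=int)
          for _ in range(max_iter):
              # step 2: assign each pixel to the nearest cluster center
              labels = np.argmin(np.abs(pixels[:, None] - centers[None, :]), axis=1)
              # step 3: recompute each center as the mean intensity of its cluster
              new_centers = np.array([pixels[labels == j].mean() if np.any(labels == j)
                                      else centers[j] for j in range(k)])
              # step 4: stop when the centers (hence boundaries) no longer move
              if np.max(np.abs(new_centers - centers)) < tol:
                  return new_centers, labels
              centers = new_centers
          return centers, labels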
  • The edge-based segmentation process block 512 uses a sequence of four sub-algorithms. The sequence includes a spatial gradients algorithm, a hysteresis threshold algorithm, a Region-of-Interest (ROI) algorithm, and a matching edges filter algorithm. The spatial gradient algorithm computes the x-directional and y-directional spatial gradients of the enhanced image. The hysteresis threshold algorithm detects salient edges. Once the edges are detected, the regions defined by the edges are selected by a user employing the ROI algorithm to select regions-of-interest deemed relevant for analysis.
  • Since the enhanced image has very sharp transitions, the edge points can be easily determined by taking x- and y-derivatives using backward differences along x- and y-directions. The pixel gradient magnitude ∥∇I∥ is then computed from the x- and y-derivative image in equation E5 as:

  • ∥∇I∥ = √(Ix² + Iy²)
  • where Ix² is the square of the x-derivative of intensity along the x-axis and Iy² is the square of the y-derivative of intensity along the y-axis.
  • Significant edge points are then determined by thresholding the gradient magnitudes using a hysteresis thresholding operation. Other thresholding methods could also be used. In hysteresis thresholding, two threshold values, a lower threshold and a higher threshold, are used. First, the image is thresholded at the lower threshold value and a connected component labeling is carried out on the resulting image. Next, each connected edge component is preserved which has at least one edge pixel having a gradient magnitude greater than the upper threshold. This kind of thresholding scheme is good at retaining long connected edges that have one or more high gradient points.
  • In the preferred embodiment, the two thresholds are automatically estimated. The upper gradient threshold is estimated at a value such that at most 97% of the image pixels are marked as non-edges. The lower threshold is set at 50% of the value of the upper threshold. These percentages could be different in different implementations. Next, edge points that lie within a desired region-of-interest are selected. This region of interest algorithm excludes points lying at the image boundaries and points lying too close to or too far from the transceivers 10A,B. Finally, the matching edge filter is applied to remove outlier edge points and fill in the area between the matching edge points.
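  • In code, the automatic estimate described above reduces to a percentile computation; the sketch below assumes the 97% and 50% figures of the preferred embodiment and feeds the result to the hysteresis labeling sketched earlier.

      import numpy as np

      def estimate_hysteresis_thresholds(grad_mag, non_edge_fraction=0.97,
                                         lower_fraction=0.5):
          # Upper threshold: at most ~97% of pixels fall below it (marked non-edge);
          # lower threshold: 50% of the upper threshold.
          upper = np.percentile(grad_mag, non_edge_fraction * 100.0)
          lower = lower_fraction * upper
          return lower, upper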
  • The edge-matching algorithm is applied to establish valid boundary edges and remove spurious edges while filling the regions between boundary edges. Edge points on an image have a directional component indicating the direction of the gradient. Pixels in scanlines crossing a boundary edge location can exhibit two gradient transitions depending on the pixel intensity directionality. Each gradient transition is given a positive or negative value depending on the pixel intensity directionality. For example, if the scanline approaches an echo reflective bright wall from a darker region, then an ascending transition is established as the pixel intensity gradient increases to a maximum value, i.e., as the transition ascends from a dark region to a bright region. The ascending transition is given a positive numerical value. Similarly, as the scanline recedes from the echo reflective wall, a descending transition is established as the pixel intensity gradient decreases to or approaches a minimum value. The descending transition is given a negative numerical value.
  • Valid boundary edges are those that exhibit ascending and descending pixel intensity gradients, or equivalently, exhibit paired or matched positive and negative numerical values. The valid boundary edges are retained in the image. Spurious or invalid boundary edges do not exhibit paired ascending-descending pixel intensity gradients, i.e., do not exhibit paired or matched positive and negative numerical values. The spurious boundary edges are removed from the image.
  • For bladder cavity volumes, most edge points for the cavity fluid surround a dark, closed region, with gradient directions pointing inwards towards the center of the region. Thus, for a convex-shaped region, the matching edge point for any given edge point is the edge point whose gradient direction is approximately opposite to that of the given point. Those edge points exhibiting an assigned positive and negative value are kept as valid edge points on the image because the negative value is paired with its positive value counterpart. Similarly, those edge point candidates having unmatched values, i.e., those edge point candidates not having a negative-positive value pair, are deemed not to be true or valid edge points and are discarded from the image.
  • The matching edge point algorithm marks edge points that do not lie on the boundary of the desired dark regions for removal. Thereafter, the region between any two matching edge points is filled in with non-zero pixels to establish edge-based segmentation. In a preferred embodiment of the invention, only edge points whose directions are primarily oriented co-linearly with the scanline are sought to permit the detection of matching front wall and back wall pairs of a cavity, for example a bladder cavity or the left or right ventricle.
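  • A simplified sketch of the matching-edges idea: along each scanline, a descending (negative) transition is paired with the next ascending (positive) transition and the span between them is filled with non-zero pixels. The transition threshold and the row-wise scanline assumption are illustrative simplifications, not the exact filter of the preferred embodiment.

```python
import numpy as np

def fill_matched_edges(image, threshold=10.0):
    """Fill the dark span between paired descending/ascending intensity transitions."""
    img = image.astype(float)
    filled = np.zeros(img.shape, dtype=bool)
    for r in range(img.shape[0]):                    # treat each row as a scanline
        grad = np.diff(img[r])
        ascending = np.where(grad > threshold)[0]    # dark-to-bright (positive) transitions
        descending = np.where(grad < -threshold)[0]  # bright-to-dark (negative) transitions
        for d in descending:
            later = ascending[ascending > d]
            if later.size:                           # matched front wall / back wall pair
                filled[r, d:later[0] + 1] = True
    return filled
```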
  • Referring again to FIG. 10, results from the respective segmentation procedures are then combined at process block 514 and subsequently undergo a cleanup algorithm process at process block 516. The combining process of block 514 uses a pixel-wise Boolean AND operator step to produce a segmented image by computing the pixel intersection of two images. The Boolean AND operation represents the pixels of each scan plane of the 3D data sets as binary numbers and assigns an intersection value of 1 or 0 to the combination of any two pixels. For example, consider any two pixels, say pixelA and pixelB, each of which can have an assigned value of 1 or 0. If pixelA's value is 1 and pixelB's value is 1, the assigned intersection value of pixelA and pixelB is 1. If both pixelA and pixelB are 0, or if either pixelA or pixelB is 0, then the assigned intersection value of pixelA and pixelB is 0. The Boolean AND operation takes any two binary digital images as input and outputs a third image whose pixel values are the intersection of the two input images.
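  • A brief sketch of the combining step of block 514, assuming the two segmentation results are binary NumPy arrays of 0s and 1s; the function name is illustrative.

```python
import numpy as np

def combine_segmentations(region_mask, edge_mask):
    """Pixel-wise Boolean AND: the intersection is 1 only where both inputs are 1."""
    return np.logical_and(region_mask, edge_mask).astype(np.uint8)
```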
  • After combining the segmentation results, in a fifth process the combined pixel information in the 3D data sets is cleaned at process block 516 to make the output image smooth and to remove extraneous structures not relevant to bladder cavities. Cleanup 516 includes filling gaps with pixels and removing pixel groups unlikely to be related to the ROI undergoing study, for example pixel groups unrelated to bladder cavity structures. Sub-algorithm 500 is then completed and exits to sub-algorithm 600.
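  • A minimal sketch of the cleanup step of block 516, assuming SciPy morphology: gaps in the segmented region are filled and pixel groups below a size threshold are discarded. The minimum component size is an illustrative value, not one taken from the specification.

```python
import numpy as np
from scipy import ndimage

def cleanup(mask, min_size=100):
    """Fill gaps in the binary mask and drop small, unrelated pixel groups."""
    closed = ndimage.binary_fill_holes(ndimage.binary_closing(mask))
    labels, n = ndimage.label(closed)
    sizes = ndimage.sum(closed, labels, index=np.arange(n + 1))  # pixel count per component
    keep = sizes >= min_size
    keep[0] = False                                              # background label
    return keep[labels]
```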
  • FIG. 11 depicts a logarithm of a Cepstrum. The Cepstrum is used in sub-algorithm 400A for the pulse estimation via application of point spread functions to the echogenic data sets generated by the transceivers 10A,B.
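  • For illustration only, the real log-cepstrum of a one-dimensional echo line can be sketched as the inverse FFT of the log-magnitude spectrum; this is a generic cepstrum computation, not necessarily the exact formulation used in sub-algorithm 400A.

```python
import numpy as np

def real_cepstrum(signal):
    """Real cepstrum of a 1-D echo line: IFFT of the log-magnitude spectrum."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + 1e-12)   # small offset avoids log(0)
    return np.real(np.fft.ifft(log_mag))
```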
  • FIGS. 12A-C depict histogram waveform plots derived from water tank pulse-echo experiments undergoing parametric and non-parametric analysis. FIG. 12A is a measured plot. FIG. 12B is a nonparametric pulse estimated pattern derived from sub-algorithm 400A. FIG. 12C is a parametric pulse estimated pattern derived from sub-algorithm 400B.
  • FIGS. 13-25 are bladder sonograms that depict image clarity after undergoing image enhancement processing by algorithms described in FIGS. 5-10.
  • FIG. 13 is an unprocessed image that will undergo image enhancement processing.
  • FIG. 14 illustrates an enclosed portion of a magnified region of FIG. 13.
  • FIG. 15 is the resultant image of FIG. 13 that has undergone image processing via nonparametric estimation under sub-algorithm 400A. The low echogenic region within the circle inset has more contrast than the unprocessed image of FIGS. 13 and 14.
  • FIG. 16 is the resultant image of FIG. 13 that has undergone image processing via parametric estimation under sub-algorithm 400B. Here the circle inset is in the echogenic musculature region encircling the bladder and is shown with greater contrast and clarity than in the magnified, unprocessed image of FIG. 14.
  • FIG. 17 is the resultant image of an alternate image-processing embodiment using a Wiener filter. The Wiener-filtered image has neither the clarity nor the contrast in the low echogenic bladder region of FIG. 15 (compare circle insets).
  • FIG. 18 is another unprocessed image that will undergo image enhancement processing.
  • FIG. 19 illustrates an enclosed portion of a magnified region of FIG. 18.
  • FIG. 20 is the resultant image of FIG. 18 that has undergone image processing via nonparametric estimation under sub-algorithm 400A. The low echogenic region is darker and the echogenic regions are brighter with more contrast than the magnified, unprocessed image of FIG. 19.
  • FIG. 21 is the resultant image of FIG. 18 that has undergone image processing via parametric estimation under sub-algorithm 400B. The low echogenic region is darker and the echogenic regions are brighter, with greater contrast and clarity than in the magnified, unprocessed image of FIG. 19.
  • FIG. 22 is another unprocessed image that will undergo image enhancement processing.
  • FIG. 23 illustrates an enclosed portion of a magnified region of FIG. 22.
  • FIG. 24 is the resultant image of FIG. 22 that has undergone image processing via nonparametric estimation under sub-algorithm 400A. The low echogenic region is darker and the echogenic regions are brighter with more contrast than the magnified, unprocessed image of FIG. 23.
  • FIG. 25 is the resultant image of FIG. 22 that has undergone image processing via parametric estimation under sub-algorithm 400B. The low echogenic region is darker and the echogenic regions are brighter, with greater contrast and clarity than in the magnified, unprocessed image of FIG. 23.
  • FIG. 26 depicts a schematic example of a time velocity map derived from sub-algorithm 310.
  • FIG. 27 depicts another schematic example of a time velocity map derived from sub-algorithm 310.
  • FIG. 28 illustrates a seven panel image series of a beating heart ventricle that will undergo the optical flow processes of sub-algorithm 300 in which at least two images are required.
  • FIG. 29 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 presented in a 2D flow pattern after undergoing sub-algorithm 310.
  • FIG. 30 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the X-axis direction or phi direction after undergoing sub-algorithm 310.
  • FIG. 31 illustrates an optical flow velocity map plot of the seven panel image series of FIG. 28 along the Y-axis direction or radial direction after undergoing sub-algorithm 310.
  • FIG. 32 illustrates a 3D optical vector plot after undergoing sub-algorithm 310 and corresponds to the top row of FIG. 29.
  • FIG. 33 illustrates a 3D optical vector plot in the phi direction after undergoing sub-algorithm 310 and corresponds to FIG. 30 at threshold value T=1.
  • FIG. 34 illustrates a 3D optical vector plot in the radial direction after undergoing sub-algorithm 310 and corresponds to FIG. 31 at T=1.
  • FIG. 35 illustrates a 3D optical vector plot in the radial direction above a Y-axis threshold setting of 0.6 after undergoing sub-algorithm 310 and corresponds to FIG. 34, in which values of the threshold T less than 0.6 are set to 0.
  • FIGS. 36A-G depict embodiments of the sonic gel dispenser.
  • FIG. 36A illustrates the metered dispensing of sonic gel by calibrated rotation of a compressing wheel. The peristaltic mechanism using the compressing wheel is shown in a partial side view.
  • FIG. 36B illustrates in cross-section the inside of the dispenser, showing a collapsible bag that is engaged by the compressing wheel. As more rotational action is conveyed to the compressing wheel, the bag progressively collapses.
  • FIG. 36C illustrates an alternative embodiment employing compression by hand gripping.
  • FIG. 36D illustrates an alternative embodiment employing push button or lever compression to dispense metered quantities of sonic gel.
  • FIG. 36E illustrates an alternative embodiment employing air valves to limit re-gassing of internal sonic gel volume stores within the sonic gel dispenser. The valve is pinched closed when the gripping or compressing wheel pressure is lessened and springs open when the gripping or compressing wheel pressure is increased, allowing sonic gel to be dispensed.
  • FIG. 36F illustrates a side, cross-sectional view of the gel dispensing system that includes a pre-packaged collapsible bottle with a refill bag, a bottle holder that positions the pre-packaged bottle for use, and a sealed tip that may be clipped open.
  • FIG. 36G illustrates a side view of the pre-packaged collapsible bottle of FIG. 36F. A particular embodiment includes an eight-ounce squeeze bottle.
  • FIGS. 37-46 concern cannula insertion viewed by ultrasound systems, in which cannula motion detection during insertion is enhanced by algorithms directed to detecting a moving cannula fitted with echogenic ultrasound micro reflectors.
  • An embodiment related to cannula insertion generally includes an ultrasound probe attached to a first camera and a second camera and a processing and display generating system that is in signal communication with the ultrasound probe, the first camera, and/or the second camera. A user of the system scans tissue containing a target vein using the ultrasound probe and a cross-sectional image of the target vein is displayed. The first camera records a first image of a cannula in a first direction and the second camera records a second image of the cannula in a second direction orthogonal to the first direction. The first and/or the second images are processed by the processing and display generating system along with the relative positions of the ultrasound probe, the first camera, and/or the second camera to determine the trajectory of the cannula. A representation of the determined trajectory of the cannula is then displayed on the ultrasound image.
  • FIG. 37 is a diagram illustrating a side view of one embodiment of the present invention. A two-dimensional (2D) ultrasound probe 1010 is attached to a first camera 1014 that takes images in a first direction. The ultrasound probe 1010 is also attached to a second camera 1018 via a member 1016. In other embodiments, the member 1016 may link the first camera 1014 to the second camera 1018 or the member 1016 may be absent, with the second camera 1018 being directly attached to a specially configured ultrasound probe. The second camera 1018 is oriented such that the second camera 1018 takes images in a second direction that is orthogonal to the first direction of the images taken by the first camera 1014. The placement of the cameras 1014, 1018 may be such that they can both take images of a cannula 1020 when the cannula 1020 is placed before the cameras 1014, 1018. A needle may also be used in place of a cannula. The cameras 1014, 1018 and the ultrasound probe 1010 are geometrically interlocked such that the cannula 1020 trajectory can be related to an ultrasound image. In FIG. 37, the second camera 1018 is behind the cannula 1020 when looking into the plane of the page. The cameras 1014, 1018 take images at a rapid frame rate of approximately 1030 frames per second. The ultrasound probe 1010 and/or the cameras 1014, 1018 are in signal communication with a processing and display generating system 1061.
  • First, a user employs the ultrasound probe 1010 and the processing and display generating system 1061 to generate a cross-sectional image of a patient's arm tissue containing a vein to be cannulated (“target vein”) 1019. This could be done by one of the methods disclosed in the related patents and/or patent applications which are herein incorporated by reference, for example. The user then identifies the target vein 1019 in the image using methods such as simple compression which differentiates between arteries and/or veins by using the fact that veins collapse easily while arteries do not. After the user has identified the target vein 1019, the ultrasound probe 1010 is affixed to the patient's arm over the previously identified target vein 1019 using a magnetic tape material 1012. The ultrasound probe 1010 and the processing and display generating system 1061 continue to generate a 2D cross-sectional image of the tissue containing the target vein 1019. Images from the cameras 1014, 1018 are provided to the processing and display generating system 1061 as the cannula 1020 is approaching and/or entering the arm of the patient.
  • The processing and display generating system 1061 locates the cannula 1020 in the images provided by the cameras 1014, 1018 and determines the projected location at which the cannula 1020 will penetrate the cross-sectional ultrasound image being displayed. The trajectory of the cannula 1020 is determined in some embodiments by using image processing to identify bright spots corresponding to micro reflectors previously machined into the shaft of the cannula 1020 or a needle used alone or in combination with the cannula 1020. Image processing uses the bright spots to determine the angles of the cannula 1020 relative to the cameras 1014, 1018 and then generates a projected trajectory by using the determined angles and/or the known positions of the cameras 1014, 1018 in relation to the ultrasound probe 1010. In other embodiments, determination of the cannula 1020 trajectory is performed using edge-detection algorithms in combination with the known positions of the cameras 1014, 1018 in relation to the ultrasound probe 1010, for example.
  • The projected location may be indicated on the displayed image as a computer-generated cross-hair 1066, the intersection of which is where the cannula 1020 is projected to penetrate the image. When the cannula 1020 does penetrate the cross-sectional plane of the scan produced by the ultrasound probe 1010, the ultrasound image confirms that the cannula 1020 penetrated at the location of the cross-hair 1066. This gives the user a real-time ultrasound image of the target vein 1019 with an overlaid real-time computer-generated image of the position in the ultrasound image that the cannula 1020 will penetrate. This allows the user to adjust the location and/or angle of the cannula 1020 before and/or during insertion to increase the likelihood of penetrating the target vein 1019. Risks of pneumothorax and other adverse outcomes should be substantially reduced since a user will be able to use normal “free” insertion procedures but with the added benefit of knowing where the cannula 1020 trajectory will lead.
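  • A hedged geometric sketch of the trajectory projection: given an observed tip position of the cannula and the two inclination angles measured by the orthogonal cameras, a straight line is extrapolated and intersected with the ultrasound scan plane (taken here as the plane z=0). The coordinate conventions and the tip-position input are illustrative assumptions, not the calibration actually used by the processing and display generating system 1061.

```python
import numpy as np

def project_crosshair(tip_xyz, theta_x, theta_y):
    """Return the (x, y) point where the extrapolated cannula line crosses z = 0.

    tip_xyz  -- assumed known cannula tip position (x, y, z), with z > 0 above the scan plane
    theta_x  -- inclination angle (radians) measured by the first camera
    theta_y  -- inclination angle (radians) measured by the second, orthogonal camera
    """
    x0, y0, z0 = tip_xyz
    # Direction of advance toward the scan plane, tilted by the two measured angles.
    direction = np.array([np.tan(theta_x), np.tan(theta_y), -1.0])
    t = z0                                     # parameter value at which z reaches 0
    point = np.array([x0, y0, z0]) + t * direction
    return point[0], point[1]                  # location to mark with the cross-hair
```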
  • FIG. 38 is a diagram illustrating a top view of the embodiment shown in FIG. 37. It is more easily seen from this view that the second camera 1018 is positioned behind the cannula 1020. The positioning of the cameras 1014, 1018 relative to the cannula 1020 allows the cameras 1014, 1018 to capture images of the cannula 1020 from two different directions, thus making it easier to determine the trajectory of the cannula 1020.
  • FIG. 39 is diagram showing additional detail for a needle shaft 1022 to be used with one embodiment of the invention. The needle shaft 1022 includes a plurality of micro corner reflectors 1024. The micro corner reflectors 1024 are cut into the needle shaft 1022 at defined intervals Δl in symmetrical patterns about the circumference of the needle shaft 1022. The micro corner reflectors 1024 could be cut with a laser, for example.
  • FIGS. 40A and 40B are diagrams showing close-up views of surface features of the needle shaft 1022 shown in FIG. 39. FIG. 40A shows a first input ray with a first incident angle of approximately 90° striking one of the micro corner reflectors 1024 on the needle shaft 1022. A first output ray is shown exiting the micro corner reflector 1024 in a direction toward the source of the first input ray. FIG. 40B shows a second input ray with a second incident angle other than 90° striking a micro corner reflector 1025 on the needle shaft 1022. A second output ray is shown exiting the micro corner reflector 1025 in a direction toward the source of the second input ray. FIGS. 40A and 40B illustrate that the micro corner reflectors 1024, 1025 are useful because they tend to reflect an output ray in the direction from which an input ray originated.
  • FIG. 41 is a diagram showing imaging components for use with the needle shaft 1022 shown in FIG. 39 in accordance with one embodiment of the invention. The imaging components are shown to include a first light source 1026, a second light source 1028, a lens 1030, and a sensor chip 1032. The first and/or second light sources 1026, 1028 may be light emitting diodes (LEDs), for example. In an example embodiment, the light sources 1026, 1028 are infra-red LEDs. Use of an infra-red source is advantageous because it is not visible to the human eye, but when an image of the needle shaft 1022 is recorded, the image will show strong bright dots where the micro corner reflectors 1024 are located because silicon sensor chips are sensitive to infra-red light and the micro corner reflectors 1024 tend to reflect output rays in the direction from which input rays originate, as discussed with reference to FIGS. 40A and 40B. In alternative embodiments, a single light source may be used. Although not shown, the sensor chip 1032 is encased in a housing behind the lens 1030 and the sensor chip 1032 and light sources 1026, 1028 are in electrical communication with the processing and display generating system 1061. The sensor chip 1032 and/or the lens 1030 form a part of the first and second cameras 1014, 1018 in some embodiments. In an example embodiment, the light sources 1026, 1028 are pulsed on at the time the sensor chip 1032 captures an image. In other embodiments, the light sources 1026, 1028 are left on during video image capture.
  • FIG. 42 is a diagram showing a representation of an image 1034 produced by the imaging components shown in FIG. 41. The image 1034 may include a needle shaft image 1036 that corresponds to a portion of the needle shaft 1022 shown in FIG. 41. The image 1034 also may include a series of bright dots 1038 running along the center of the needle shaft image 1036 that correspond to the micro corner reflectors 1024 shown in FIG. 41. A center line 1040 is shown in FIG. 42 to illustrate how an angle theta (θ) could be obtained by image processing to recognize the bright dots 1038 and determine a line through them. The angle theta represents the degree to which the needle shaft 1022 is inclined with respect to a reference line 1042 that is related to the fixed position of the sensor chip 1032.
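  • A small sketch of the angle estimate illustrated in FIG. 42, assuming the bright-dot centroids have already been detected as (row, column) pixel coordinates: a least-squares line is fit through the dots and the angle theta relative to the reference line 1042 is returned. The function name and inputs are illustrative.

```python
import numpy as np

def needle_angle_degrees(dot_rows, dot_cols):
    """Fit a line through the detected bright dots and return its inclination theta."""
    slope, _intercept = np.polyfit(np.asarray(dot_cols), np.asarray(dot_rows), 1)
    return np.degrees(np.arctan(slope))   # angle relative to the horizontal reference line
```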
  • FIG. 43 is a system diagram of an embodiment of the present invention and shows additional detail for the processing and display generating system 1061 in accordance with an example embodiment of the invention. The ultrasound probe 1010 is shown connected to the processing and display generating system via M control lines and N data lines. The M and N variables are for convenience and appear simply to indicate that the connections may be composed of one or more transmission paths. The control lines allow the processing and display generating system 1061 to direct the ultrasound probe 1010 to properly perform an ultrasound scan and the data lines allow responses from the ultrasound scan to be transmitted to the processing and display generating system 1061. The first and second cameras 1014, 1018 are also each shown to be connected to the processing and display generating system 1061 via N lines. Although the same variable N is used, it is simply indicating that one or more lines may be present, not that each device with a label of N lines has the same number of lines.
  • The processing and display generating system 1061 is composed of a display 1064 and a block 1062 containing a computer, a digital signal processor (DSP), and analog to digital (A/D) converters. As discussed for FIG. 37, the display 1064 will display a cross-sectional ultrasound image. The computer-generated cross hair 1066 is shown over a representation of a cross-sectional view of the target vein 1019 in FIG. 43. The cross hair 1066 consists of an x-crosshair 1068 and a z-crosshair 1070. The DSP and the computer in the block 1062 use images from the first camera 1014 to determine the plane in which the cannula 1020 will penetrate the ultrasound image and then write the z-crosshair 1070 on the ultrasound image provided to the display 1064. Similarly, the DSP and the computer in the block 1062 use images from the second camera 1018, which are orthogonal to the images provided by the first camera 1014 as discussed for FIG. 37, to write the x-crosshair 1068 on the ultrasound image.
  • FIG. 44 is a system diagram of an example embodiment showing additional detail for the block 1062 shown in FIG. 43. The block 1062 includes a first A/D converter 1080, a second A/D converter 1082, and a third A/D converter 1084. The first A/D converter 1080 receives signals from the ultrasound probe 1010 and converts them to digital information that is provided to a DSP 1086. The second and third A/D converters 1082, 1084 receive signals from the first and second cameras 1014, 1018 respectively and convert the signals to digital information that is provided to the DSP 1086. In alternative embodiments, some or all of the A/D converters are not present. For example, video from the cameras 1014, 1018 may be provided to the DSP 1086 directly in digital form rather than being created in analog form before passing through A/D converters 1082, 1084. The DSP 1086 is in data communication with a computer 1088 that includes a central processing unit (CPU) 1090 in data communication with a memory component 1092. The computer 1088 is in signal communication with the ultrasound probe 1010 and is able to control the ultrasound probe 1010 using this connection. The computer 1088 is also connected to the display 1064 and produces a video signal used to drive the display 1064.
  • FIG. 45 is a flowchart of a method of displaying the trajectory of a cannula in accordance with an embodiment of the present invention. First, at a block 1200, an ultrasound image of a vein cross-section is produced and/or displayed. Next, at a block 1210, the trajectory of a cannula is determined. Then, at a block 1220, the determined trajectory of the cannula is displayed on the ultrasound image.
  • FIG. 46 is a flowchart showing additional detail for the block 1210 depicted in FIG. 45. The block 1210 includes a block 1212 where a first image of a cannula is recorded using a first camera. Next, at a block 1214, a second image of the cannula orthogonal to the first image of the cannula is recorded using a second camera. Then, at a block 1216, the first and second images are processed to determine the trajectory of the cannula.
  • While the preferred embodiment of the invention has been illustrated and described, as noted above, many changes can be made without departing from the spirit and scope of the invention. For example, a three dimensional ultrasound system could be used rather than a 2D system. In addition, different numbers of cameras could be used along with image processing that determines the cannula 1020 trajectory based on the number of cameras used. The two cameras 1014, 1018 could also be placed in a non-orthogonal relationship so long as the image processing was adjusted to properly determine the orientation and/or projected trajectory of the cannula 1020. Also, an embodiment of the invention could be used for needles and/or other devices which are to be inserted in the body of a patient. Additionally, an embodiment of the invention could be used in places other than arm veins. Regions of the patient's body other than an arm could be used and/or biological structures other than veins may be the focus of interest. As regards ultrasound-based algorithms, alternate embodiments may be configured for image acquisitions other than ultrasound, for example X-ray, visible light, and infrared light acquired images. Accordingly, the scope of the invention is not limited by the disclosure of the preferred embodiment.

Claims (25)

1. A method to determine amniotic fluid volume in digital images, the method comprising:
positioning an ultrasound transceiver to probe a first portion of a uterus of a patient, the transceiver adapted to obtain a first plurality of scanplanes;
re-positioning the ultrasound transceiver to probe a second portion of the uterus to obtain a second plurality of scanplanes;
enhancing the images of the amniotic fluid regions in the scanplanes with a plurality of algorithms;
registering the scanplanes of the first plurality with the second plurality;
associating the registered scanplanes into a composite array, and
determining the amniotic fluid volume of the amniotic fluid regions within the composite array.
2. The method of claim 1, wherein the plurality of scanplanes is acquired from a rotational array, a translational array, or a wedge array.
3. The method of claim 1, wherein the plurality of algorithms includes algorithms for image enhancement, segmentation, and polishing.
4. The method of claim 3, wherein segmentation further includes an intensity clustering step, a spatial gradients step, a hysteresis threshold step, a Region-of-Interest selection step, and a matching edges filter step.
5. The method of claim 4, wherein the intensity clustering step is performed in a first parallel operation, and the spatial gradients, hysteresis threshold, Region-of-Interest selection, and matching edges filter steps are performed in a second parallel operation, and further wherein the results from the first parallel operation are combined with the results from the second parallel operation.
6. The method of claim 3, wherein image enhancement further includes applying a heat filter and a shock filter to the digital images.
7. The method of claim 6 wherein the heat filter is applied to the digital images followed by application of the shock filter to the digital images.
8. The method of claim 1, wherein the amniotic fluid volume is adjusted for underestimation or overestimation.
9. The method of claim 8, wherein the amniotic fluid volume is adjusted for underestimation by probing with adjustable ultrasound frequencies to penetrate deep tissues and by repositioning the transceiver to establish that deep tissues are exposed to probing ultrasound of sufficient strength to provide a reflecting ultrasound echo receivable by the transceiver, such that more than one rotational array to detect deep tissue and regions of the fetal head is obtained.
10. The method of claim 8, wherein amniotic fluid volume is adjusted for overestimation by automatically determining fetal head volume contribution to amniotic fluid volume and deducting it from the amniotic fluid volume.
11. The method of claim 10, wherein the steps to adjust for overestimated amniotic fluid volumes include a 2D clustering step, a matching edges step, an all edges step, a gestational age factor step, a head diameter step, a head edge detection step, and a Hough transform step.
12. The method of claim 11, wherein the Hough transform step includes a polar Hough Transform step, a Find Maximum Hough value step, and a fill circle region step.
13. The method of claim 12, wherein the polar Hough Transform step includes a first Hough transform to look for lines of a specified shape, and a second Hough transform to look for fetal head structures.
14. The method of claim 1, wherein the positions include lateral and transverse.
15. A method to determine amniotic fluid volume in digital images, the method comprising:
positioning an ultrasound transceiver to probe a first portion of a uterus of a patient, the transceiver adapted to obtain a first plurality of scanplanes;
re-positioning the ultrasound transceiver to probe a second and a third portion of the uterus to obtain a second and third plurality of scanplanes;
enhancing the images of the amniotic fluid regions in the scanplanes with a plurality of algorithms;
registering the scanplanes of the first plurality through the third plurality;
associating the registered scanplanes into a composite array, and
determining the amniotic fluid volume of the amniotic fluid regions within the composite array.
16. A method to determine amniotic fluid volume in digital images, the method comprising:
positioning an ultrasound transceiver to probe a first portion of a uterus of a patient, the transceiver adapted to obtain a first plurality of scanplanes;
re-positioning the ultrasound transceiver to probe a second through fourth portion of the uterus to obtain a second through fourth plurality of scanplanes;
enhancing the images of the amniotic fluid regions in the scanplanes with a plurality of algorithms;
registering the scanplanes of the first through fourth plurality;
associating the registered scanplanes into a composite array, and
determining the amniotic fluid volume of the amniotic fluid regions within the composite array.
17. A method to determine amniotic fluid volume in digital images, the method comprising:
positioning an ultrasound transceiver to probe a first portion of a uterus of a patient, the transceiver adapted to obtain a first plurality of scanplanes;
re-positioning the ultrasound transceiver to probe a second through fifth portion of the uterus to obtain a second through fifth plurality of scanplanes;
enhancing the images of the amniotic fluid regions in the scanplanes with a plurality of algorithms;
registering the scanplanes of the first through the fifth plurality;
associating the registered scanplanes into a composite array, and
determining the amniotic fluid volume of the amniotic fluid regions within the composite array.
18. A system for determining amniotic fluid volume, the system comprising:
a transceiver positioned from two to six locations of a patient, the transceiver configured to deliver radio frequency ultrasound pulses to amniotic fluid regions of a patient, to receive echoes of the pulses reflected from the amniotic fluid regions, to convert the echoes to digital form, and to obtain a plurality of scanplanes in the form of an array for each location;
a computer system in communication with the transceiver, the computer system having a microprocessor and a memory, the memory further containing stored programming instructions operable by the microprocessor to associate the plurality of scanplanes of each array, and
the memory further containing instructions operable by the microprocessor to determine the presence of an amniotic fluid region in each array and determine the amniotic fluid volume in each array.
19. The system of claim 18, wherein the array includes rotational, wedge, and translational arrays.
20. The system of claim 18, wherein stored programming instructions further include aligning scanplanes having overlapping regions from each location into a plurality of registered composite scanplanes.
21. The system of claim 20, wherein the stored programming instructions further include fusing the amniotic fluid regions of the registered composite scanplanes of each array.
22. The system of claim 21 wherein the stored programming instructions further include arranging the fused composite scanplanes into a composite array.
23. The system of claim 18, wherein the computer system is configured for remote operation via a local area network or an Internet web-based system, the internet web-based system having a plurality of programs that collect, analyze, and store amniotic fluid volume.
24. A method to determine amniotic fluid volume, the method comprising:
positioning an ultrasound transceiver to probe at least a portion of a uterus of a patient, the transceiver configured to obtain a plurality of scanlines;
enhancing the images of the amniotic fluid regions in the scanline plurality with a plurality of algorithms;
associating the registered scan lines into a composite array, and
determining the amniotic fluid volume of the amniotic fluid regions within the composite array.
25. A system to improve image clarity in ultrasound images comprising:
an ultrasound transducer connected with a microprocessor configured to collect and process echoes returning from at least two ultrasound-based images from a scanned region-of-interest,
wherein motion sections are compensated with respect to the stationary sections within the scanned region of interest.
US11/925,887 2002-06-07 2007-10-27 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume Abandoned US20080146932A1 (en)

Priority Applications (5)

Application Number Priority Date Filing Date Title
US11/925,887 US20080146932A1 (en) 2002-06-07 2007-10-27 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume
US12/121,726 US20090105585A1 (en) 2007-05-16 2008-05-15 System and method for ultrasonic harmonic imaging
US12/121,721 US8167803B2 (en) 2007-05-16 2008-05-15 System and method for bladder detection using harmonic imaging
PCT/US2008/063987 WO2008144570A1 (en) 2007-05-16 2008-05-16 Systems and methods for testing the functionality of ultrasound transducers
US12/537,985 US8133181B2 (en) 2007-05-16 2009-08-07 Device, system and method to measure abdominal aortic aneurysm diameter

Applications Claiming Priority (19)

Application Number Priority Date Filing Date Title
US10/165,556 US6676605B2 (en) 2002-06-07 2002-06-07 Bladder wall thickness measurement system and methods
US40062402P 2002-08-02 2002-08-02
US42388102P 2002-11-05 2002-11-05
PCT/US2003/014785 WO2003103499A1 (en) 2002-06-07 2003-05-09 Bladder wall thickness measurement system and methods
US47052503P 2003-05-12 2003-05-12
US10/443,126 US7041059B2 (en) 2002-08-02 2003-05-20 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US10/633,186 US7004904B2 (en) 2002-08-02 2003-07-31 Image enhancement and segmentation of structures in 3D ultrasound images for volume measurements
PCT/US2003/024368 WO2004012584A2 (en) 2002-08-02 2003-08-01 Image enhancing and segmentation of structures in 3d ultrasound
US10/701,955 US7087022B2 (en) 2002-06-07 2003-11-05 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US56682304P 2004-04-30 2004-04-30
US63348504P 2004-12-06 2004-12-06
US11/119,355 US7520857B2 (en) 2002-06-07 2005-04-29 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
PCT/US2005/030799 WO2006026605A2 (en) 2002-06-07 2005-08-29 Systems and methods for quantification and classification of fluids in human cavities in ultrasound images
PCT/US2005/031755 WO2006031526A2 (en) 2004-09-09 2005-09-09 Systems and methods for ultrasound imaging using an inertial reference unit
PCT/US2005/043836 WO2006062867A2 (en) 2004-12-06 2005-12-06 System and method for determining organ wall mass by three-dimensional ultrasound
US11/295,043 US7727150B2 (en) 2002-06-07 2005-12-06 Systems and methods for determining organ wall mass by three-dimensional ultrasound
US76067706P 2006-01-20 2006-01-20
US11/362,368 US7744534B2 (en) 2002-06-07 2006-02-24 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US11/925,887 US20080146932A1 (en) 2002-06-07 2007-10-27 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume

Related Parent Applications (2)

Application Number Title Priority Date Filing Date
US11/119,355 Continuation US7520857B2 (en) 2002-06-07 2005-04-29 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US11/925,896 Continuation-In-Part US20080249414A1 (en) 2002-06-07 2007-10-27 System and method to measure cardiac ejection fraction

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US11/926,522 Continuation-In-Part US20080139938A1 (en) 2002-06-07 2007-10-29 System and method to identify and measure organ wall boundaries

Publications (1)

Publication Number Publication Date
US20080146932A1 true US20080146932A1 (en) 2008-06-19

Family

ID=35320697

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/119,355 Expired - Fee Related US7520857B2 (en) 2002-06-07 2005-04-29 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US11/925,654 Abandoned US20080242985A1 (en) 2003-05-20 2007-10-26 3d ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US11/925,887 Abandoned US20080146932A1 (en) 2002-06-07 2007-10-27 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US11/119,355 Expired - Fee Related US7520857B2 (en) 2002-06-07 2005-04-29 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US11/925,654 Abandoned US20080242985A1 (en) 2003-05-20 2007-10-26 3d ultrasound-based instrument for non-invasive measurement of amniotic fluid volume

Country Status (2)

Country Link
US (3) US7520857B2 (en)
WO (1) WO2005107581A2 (en)

Cited By (43)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20060173292A1 (en) * 2002-09-12 2006-08-03 Hirotaka Baba Biological tissue motion trace method and image diagnosis device using the trace method
US20060235301A1 (en) * 2002-06-07 2006-10-19 Vikram Chalana 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US20090062644A1 (en) * 2002-06-07 2009-03-05 Mcmorrow Gerald System and method for ultrasound harmonic imaging
US20090105585A1 (en) * 2007-05-16 2009-04-23 Yanwei Wang System and method for ultrasonic harmonic imaging
US20090137906A1 (en) * 2005-07-25 2009-05-28 Hakko Co., Ltd. Ultrasonic Piercing Needle
US20100125201A1 (en) * 2008-11-18 2010-05-20 Kabushiki Kaisha Toshiba Ultrasound imaging apparatus
WO2010084446A1 (en) * 2009-01-23 2010-07-29 Koninklijke Philips Electronics N.V. Cardiac image processing and analysis
US20100198075A1 (en) * 2002-08-09 2010-08-05 Verathon Inc. Instantaneous ultrasonic echo measurement of bladder volume with a limited number of ultrasound beams
WO2010106379A1 (en) * 2009-03-20 2010-09-23 Mediwatch Uk Limited Ultrasound probe with accelerometer
US7819806B2 (en) 2002-06-07 2010-10-26 Verathon Inc. System and method to identify and measure organ wall boundaries
US20110295121A1 (en) * 2010-05-31 2011-12-01 Medison Co., Ltd. 3d ultrasound system and method for operating 3d ultrasound system
US20120045111A1 (en) * 2010-08-23 2012-02-23 Palma Giovanni Image processing method to determine suspect regions in a tissue matrix, and use thereof for 3d navigation through the tissue matrix
US8133181B2 (en) 2007-05-16 2012-03-13 Verathon Inc. Device, system and method to measure abdominal aortic aneurysm diameter
US8167803B2 (en) 2007-05-16 2012-05-01 Verathon Inc. System and method for bladder detection using harmonic imaging
WO2012088471A1 (en) * 2010-12-22 2012-06-28 Veebot, Llc Systems and methods for autonomous intravenous needle insertion
US8221321B2 (en) 2002-06-07 2012-07-17 Verathon Inc. Systems and methods for quantification and classification of fluids in human cavities in ultrasound images
US8221322B2 (en) 2002-06-07 2012-07-17 Verathon Inc. Systems and methods to improve clarity in ultrasound images
US20120215107A1 (en) * 2011-02-17 2012-08-23 Fujifilm Corporation Ultrasound probe and ultrasound diagnostic apparatus
CN102670260A (en) * 2011-03-18 2012-09-19 富士胶片株式会社 Ultrasound diagnostic apparatus and ultrasound image producing method
WO2012129695A1 (en) * 2011-03-31 2012-10-04 Ats Automation Tooling Systems Inc. Three dimensional optical sensing through optical media
US20130158405A1 (en) * 2010-08-31 2013-06-20 B-K Medical Aps 3D View of 2D Ultrasound Images
US20130197370A1 (en) * 2012-01-30 2013-08-01 The Johns Hopkins University Automated Pneumothorax Detection
KR101386102B1 (en) * 2012-03-09 2014-04-16 삼성메디슨 주식회사 Method for providing ultrasound images and ultrasound apparatus thereof
US20140303498A1 (en) * 2011-08-08 2014-10-09 Canon Kabushiki Kaisha Object information acquisition apparatus, object information acquisition system, display control method, display method, and program
US20140328526A1 (en) * 2013-05-02 2014-11-06 Toshiba Medical Systems Corporation Medical imaging data processing apparatus and method
US20140371587A1 (en) * 2013-06-12 2014-12-18 Wisconsin Alumni Research Foundation Ultrasound Machine Providing Composite Image Data
US20170055955A1 (en) * 2015-09-02 2017-03-02 Aningbo Youchang Ultrasonic Technology Co.,Ltd Wireless intelligent ultrasound fetal imaging system
US20170245837A1 (en) * 2015-09-01 2017-08-31 Shenzhen Institutes Of Advanced Technology, Chinese Academy Of Sciences Ultrasound probe calibration phantom, ultrasound probe calibration system and calibration method thereof
US20170258386A1 (en) * 2014-11-27 2017-09-14 Umc Utrecht Holding B.V. Wearable ultrasound device for signalling changes in a human or animal body
WO2018031754A1 (en) * 2016-08-10 2018-02-15 U.S. Government As Represented By The Secretary Of The Army Automated three and four-dimensional ultrasound quantification and surveillance of free fluid in body cavities and intravascular volume
US20180042577A1 (en) * 2016-08-12 2018-02-15 General Electric Company Methods and systems for ultrasound imaging
US9947097B2 (en) * 2016-01-19 2018-04-17 General Electric Company Method and system for enhanced fetal visualization by detecting and displaying a fetal head position with cross-plane ultrasound images
US20180158190A1 (en) * 2013-03-15 2018-06-07 Conavi Medical Inc. Data display and processing algorithms for 3d imaging systems
US10074191B1 (en) 2015-07-05 2018-09-11 Cognex Corporation System and method for determination of object volume with multiple three-dimensional sensors
KR20210002197A (en) * 2019-06-27 2021-01-07 고려대학교 산학협력단 Method for automatic measurement of amniotic fluid volume based on artificial intelligence model
KR20210002198A (en) * 2019-06-27 2021-01-07 고려대학교 산학협력단 Method for automatic measurement of amniotic fluid volume with camera angle correction function
US20210096243A1 (en) * 2019-09-27 2021-04-01 Butterfly Network, Inc. Methods and apparatus for configuring an ultrasound system with imaging parameter values
US11304676B2 (en) 2015-01-23 2022-04-19 The University Of North Carolina At Chapel Hill Apparatuses, systems, and methods for preclinical ultrasound imaging of subjects
WO2022169808A1 (en) * 2021-02-02 2022-08-11 Thormed Innovation Llc Systems and methods for real time noninvasive urine output assessment
US11426142B2 (en) 2018-08-13 2022-08-30 Rutgers, The State University Of New Jersey Computer vision systems and methods for real-time localization of needles in ultrasound images
US11523799B2 (en) * 2016-03-09 2022-12-13 Koninklijke Philips N.V. Fetal imaging system and method
US11564656B2 (en) * 2018-03-13 2023-01-31 Verathon Inc. Generalized interlaced scanning with an ultrasound probe
US11638569B2 (en) * 2018-06-08 2023-05-02 Rutgers, The State University Of New Jersey Computer vision systems and methods for real-time needle detection, enhancement and localization in ultrasound

Families Citing this family (37)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040127797A1 (en) * 2002-06-07 2004-07-01 Bill Barnard System and method for measuring bladder wall thickness and presenting a bladder virtual image
US20100036252A1 (en) * 2002-06-07 2010-02-11 Vikram Chalana Ultrasound system and method for measuring bladder wall thickness and mass
US7520857B2 (en) * 2002-06-07 2009-04-21 Verathon Inc. 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US20080262356A1 (en) * 2002-06-07 2008-10-23 Vikram Chalana Systems and methods for ultrasound imaging using an inertial reference unit
US20060025689A1 (en) * 2002-06-07 2006-02-02 Vikram Chalana System and method to measure cardiac ejection fraction
US20090112089A1 (en) * 2007-10-27 2009-04-30 Bill Barnard System and method for measuring bladder wall thickness and presenting a bladder virtual image
US7920908B2 (en) * 2003-10-16 2011-04-05 David Hattery Multispectral imaging for quantitative contrast of functional and structural features of layers inside optically dense media such as tissue
US20070255137A1 (en) * 2006-05-01 2007-11-01 Siemens Medical Solutions Usa, Inc. Extended volume ultrasound data display and measurement
US8184927B2 (en) * 2006-07-31 2012-05-22 Stc.Unm System and method for reduction of speckle noise in an image
US7961975B2 (en) * 2006-07-31 2011-06-14 Stc. Unm System and method for reduction of speckle noise in an image
CN101441401B (en) * 2007-11-20 2012-07-04 深圳迈瑞生物医疗电子股份有限公司 Method and device for rapidly determining imaging area in imaging system
US8225998B2 (en) * 2008-07-11 2012-07-24 Es&S Innovations Llc Secure ballot box
WO2010033113A1 (en) * 2008-09-17 2010-03-25 Nippon Steel Corporation Method for detecting defect in material and system for the method
KR101401734B1 (en) * 2010-09-17 2014-05-30 한국전자통신연구원 Apparatus and method for detecting examination tissue using ultrasound signal
KR101562204B1 (en) * 2012-01-17 2015-10-21 삼성전자주식회사 Probe device, server, ultrasound image diagnosis system, and ultrasound image processing method
JP6037447B2 (en) * 2012-03-12 2016-12-07 東芝メディカルシステムズ株式会社 Ultrasonic diagnostic equipment
US9081097B2 (en) * 2012-05-01 2015-07-14 Siemens Medical Solutions Usa, Inc. Component frame enhancement for spatial compounding in ultrasound imaging
US9357905B2 (en) 2012-06-01 2016-06-07 Robert Molnar Airway device, airway assist device and the method of using same
US9415179B2 (en) 2012-06-01 2016-08-16 Wm & Dg, Inc. Medical device, and the methods of using same
EP3108456B1 (en) * 2014-02-19 2020-06-24 Koninklijke Philips N.V. Motion adaptive visualization in medical 4d imaging
US11633093B2 (en) 2014-08-08 2023-04-25 Wm & Dg, Inc. Medical devices and methods of placement
US10722110B2 (en) 2014-08-08 2020-07-28 Wm & Dg, Inc. Medical devices and methods of placement
US9918618B2 (en) 2014-08-08 2018-03-20 Wm & Dg, Inc. Medical devices and methods of placement
US11147442B2 (en) 2014-08-08 2021-10-19 Wm & Dg, Inc. Medical devices and methods of placement
WO2016081321A2 (en) 2014-11-18 2016-05-26 C.R. Bard, Inc. Ultrasound imaging system having automatic image presentation
EP3220828B1 (en) 2014-11-18 2021-12-22 C.R. Bard, Inc. Ultrasound imaging system having automatic image presentation
US9655592B2 (en) * 2014-11-21 2017-05-23 General Electric Corporation Method and apparatus for rendering an ultrasound image
CN110338840B (en) * 2015-02-16 2022-06-21 深圳迈瑞生物医疗电子股份有限公司 Three-dimensional imaging data display processing method and three-dimensional ultrasonic imaging method and system
WO2017000988A1 (en) * 2015-06-30 2017-01-05 Brainlab Ag Medical image fusion with reduced search space
CN105476666A (en) * 2015-11-27 2016-04-13 张艳萍 Integrated amniotic fluid examination device for obstetrical department
US10380993B1 (en) 2016-01-22 2019-08-13 United Services Automobile Association (Usaa) Voice commands for the visually impaired to move a camera relative to a document
US20170273662A1 (en) * 2016-03-24 2017-09-28 Elwha Llc Ultrasonic fetal imaging with shear waves
US11051682B2 (en) 2017-08-31 2021-07-06 Wm & Dg, Inc. Medical devices with camera and methods of placement
US10489677B2 (en) * 2017-09-07 2019-11-26 Symbol Technologies, Llc Method and apparatus for shelf edge detection
US10653307B2 (en) 2018-10-10 2020-05-19 Wm & Dg, Inc. Medical devices for airway management and methods of placement
EP3639751A1 (en) * 2018-10-15 2020-04-22 Koninklijke Philips N.V. Systems and methods for guiding the acquisition of an ultrasound image
US11497394B2 (en) 2020-10-12 2022-11-15 Wm & Dg, Inc. Laryngoscope and intubation methods

Citations (84)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3613069A (en) * 1969-09-22 1971-10-12 Gen Dynamics Corp Sonar system
US4431007A (en) * 1981-02-04 1984-02-14 General Electric Company Referenced real-time ultrasonic image display
US4556066A (en) * 1983-11-04 1985-12-03 The Kendall Company Ultrasound acoustical coupling pad
US4757921A (en) * 1985-05-24 1988-07-19 Wm Still & Sons Limited Water dispensers and methods
US4771205A (en) * 1983-08-31 1988-09-13 U.S. Philips Corporation Ultrasound transducer
US4821210A (en) * 1987-04-02 1989-04-11 General Electric Co. Fast display of three-dimensional images
US4844080A (en) * 1987-02-19 1989-07-04 Michael Frass Ultrasound contact medium dispenser
US4926871A (en) * 1985-05-08 1990-05-22 International Biomedics, Inc. Apparatus and method for non-invasively and automatically measuring the volume of urine in a human bladder
US5058591A (en) * 1987-11-10 1991-10-22 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Rapidly quantifying the relative distention of a human bladder
US5060515A (en) * 1989-03-01 1991-10-29 Kabushiki Kaisha Toshiba Image signal processing circuit for ultrasonic imaging apparatus
US5078149A (en) * 1989-09-29 1992-01-07 Terumo Kabushiki Kaisha Ultrasonic coupler and method for production thereof
US5125410A (en) * 1989-10-13 1992-06-30 Olympus Optical Co., Ltd. Integrated ultrasonic diagnosis device utilizing intra-blood-vessel probe
US5148809A (en) * 1990-02-28 1992-09-22 Asgard Medical Systems, Inc. Method and apparatus for detecting blood vessels and displaying an enhanced video image from an ultrasound scan
US5151856A (en) * 1989-08-30 1992-09-29 Technion R & D Found. Ltd. Method of displaying coronary function
US5159931A (en) * 1988-11-25 1992-11-03 Riccardo Pini Apparatus for obtaining a three-dimensional reconstruction of anatomic structures through the acquisition of echographic images
US5197019A (en) * 1989-07-20 1993-03-23 Asulab S.A. Method of measuring distance using ultrasonic waves
US5235985A (en) * 1992-04-30 1993-08-17 Mcmorrow Gerald J Automatic bladder scanning apparatus
US5265614A (en) * 1988-08-30 1993-11-30 Fujitsu Limited Acoustic coupler
US5299577A (en) * 1989-04-20 1994-04-05 National Fertility Institute Apparatus and method for image processing including one-dimensional clean approximation
US5381794A (en) * 1993-01-21 1995-01-17 Aloka Co., Ltd. Ultrasonic probe apparatus
US5432310A (en) * 1992-07-22 1995-07-11 Stoeger; Helmut Fluid pressure operated switch with piston actuator
US5435310A (en) * 1993-06-23 1995-07-25 University Of Washington Determining cardiac wall thickness and motion by imaging and three-dimensional modeling
US5465721A (en) * 1994-04-22 1995-11-14 Hitachi Medical Corporation Ultrasonic diagnostic apparatus and ultrasonic diagnosis method
US5473555A (en) * 1988-08-18 1995-12-05 Hewlett-Packard Company Method and apparatus for enhancing frequency domain analysis
US5487388A (en) * 1994-11-01 1996-01-30 Interspec. Inc. Three dimensional ultrasonic scanning devices and techniques
US5494038A (en) * 1995-04-25 1996-02-27 Abbott Laboratories Apparatus for ultrasound testing
US5503153A (en) * 1995-06-30 1996-04-02 Siemens Medical Systems, Inc. Noise suppression method utilizing motion compensation for ultrasound images
US5503152A (en) * 1994-09-28 1996-04-02 Tetrad Corporation Ultrasonic transducer assembly and method for three-dimensional imaging
US5526816A (en) * 1994-09-22 1996-06-18 Bracco Research S.A. Ultrasonic spectral contrast imaging
US5553618A (en) * 1993-03-12 1996-09-10 Kabushiki Kaisha Toshiba Method and apparatus for ultrasound medical treatment
US5575291A (en) * 1993-11-17 1996-11-19 Fujitsu Ltd. Ultrasonic coupler
US5575286A (en) * 1995-03-31 1996-11-19 Siemens Medical Systems, Inc. Method and apparatus for generating large compound ultrasound image
US5577506A (en) * 1994-08-10 1996-11-26 Hewlett Packard Company Catheter probe having a fixed acoustic reflector for full-circle imaging
US5588435A (en) * 1995-11-22 1996-12-31 Siemens Medical Systems, Inc. System and method for automatic measurement of body structures
US5601084A (en) * 1993-06-23 1997-02-11 University Of Washington Determining cardiac wall thickness and motion by imaging and three-dimensional modeling
US5605155A (en) * 1996-03-29 1997-02-25 University Of Washington Ultrasound system for automatically measuring fetal head size
US5615680A (en) * 1994-07-22 1997-04-01 Kabushiki Kaisha Toshiba Method of imaging in ultrasound diagnosis and diagnostic ultrasound system
US5644513A (en) * 1989-12-22 1997-07-01 Rudin; Leonid I. System incorporating feature-oriented signal enhancement using shock filters
US5645077A (en) * 1994-06-16 1997-07-08 Massachusetts Institute Of Technology Inertial orientation tracker apparatus having automatic drift compensation for tracking human head and other similarly sized body
US5655539A (en) * 1996-02-26 1997-08-12 Abbott Laboratories Method for conducting an ultrasound procedure using an ultrasound transmissive pad
US5698549A (en) * 1994-05-12 1997-12-16 Uva Patent Foundation Method of treating hyperactive voiding with calcium channel blockers
US5697525A (en) * 1993-02-10 1997-12-16 Daniel Joseph O'Reilly Bag for dispensing fluid material and a dispenser having the bag
US5724101A (en) * 1987-04-09 1998-03-03 Prevail, Inc. System for conversion of non standard video signals to standard formats for transmission and presentation
US5735282A (en) * 1996-05-30 1998-04-07 Acuson Corporation Flexible ultrasonic transducers and related systems
US5738097A (en) * 1996-11-08 1998-04-14 Diagnostics Ultrasound Corporation Vector Doppler system for stroke screening
US5770801A (en) * 1995-04-25 1998-06-23 Abbott Laboratories Ultrasound transmissive pad
US5776063A (en) * 1996-09-30 1998-07-07 Molecular Biosystems, Inc. Analysis of ultrasound images in the presence of contrast agent
US5782767A (en) * 1996-12-31 1998-07-21 Diagnostic Ultrasound Corporation Coupling pad for use with medical ultrasound devices
US5806521A (en) * 1996-03-26 1998-09-15 Sandia Corporation Composite ultrasound imaging apparatus and method
US5840033A (en) * 1996-05-29 1998-11-24 Ge Yokogawa Medical Systems, Limited Method and apparatus for ultrasound imaging
US5841889A (en) * 1995-12-29 1998-11-24 General Electric Company Ultrasound image texture control using adaptive speckle control algorithm
US5846202A (en) * 1996-07-30 1998-12-08 Acuson Corporation Ultrasound method and system for imaging
US5851186A (en) * 1996-02-27 1998-12-22 Atl Ultrasound, Inc. Ultrasonic diagnostic imaging system with universal access to diagnostic information and images
US5873829A (en) * 1996-01-29 1999-02-23 Kabushiki Kaisha Toshiba Diagnostic ultrasound system using harmonic echo imaging
US5892843A (en) * 1997-01-21 1999-04-06 Matsushita Electric Industrial Co., Ltd. Title, caption and photo extraction from scanned document images
US5898793A (en) * 1993-04-13 1999-04-27 Karron; Daniel System and method for surface rendering of internal structures within the interior of a solid object
US5903664A (en) * 1996-11-01 1999-05-11 General Electric Company Fast segmentation of cardiac images
US5908390A (en) * 1994-05-10 1999-06-01 Fujitsu Limited Ultrasonic diagnostic apparatus
US5913823A (en) * 1997-07-15 1999-06-22 Acuson Corporation Ultrasound imaging method and system for transmit signal generation for an ultrasonic imaging system capable of harmonic imaging
US5928151A (en) * 1997-08-22 1999-07-27 Acuson Corporation Ultrasonic system and method for harmonic imaging in three dimensions
US5945770A (en) * 1997-08-20 1999-08-31 Acuson Corporation Multilayer ultrasound transducer and the method of manufacture thereof
US5964710A (en) * 1998-03-13 1999-10-12 Srs Medical, Inc. System for estimating bladder volume
US5972023A (en) * 1994-08-15 1999-10-26 Eva Corporation Implantation device for an aortic graft method of treating aortic aneurysm
US5971923A (en) * 1997-12-31 1999-10-26 Acuson Corporation Ultrasound system and method for interfacing with peripherals
US5980459A (en) * 1998-03-31 1999-11-09 General Electric Company Ultrasound imaging using coded excitation on transmit and selective filtering of fundamental and (sub)harmonic signals on receive
US5993390A (en) * 1998-09-18 1999-11-30 Hewlett-Packard Company Segmented 3-D cardiac ultrasound imaging method and apparatus
US6008813A (en) * 1997-08-01 1999-12-28 Mitsubishi Electric Information Technology Center America, Inc. (Ita) Real-time PC based volume rendering system
US6030344A (en) * 1996-12-04 2000-02-29 Acuson Corporation Methods and apparatus for ultrasound image quantification
US6042545A (en) * 1998-11-25 2000-03-28 Acuson Corporation Medical diagnostic ultrasound system and method for transform ultrasound processing
US6048312A (en) * 1998-04-23 2000-04-11 Ishrak; Syed Omar Method and apparatus for three-dimensional ultrasound imaging of biopsy needle
US6063033A (en) * 1999-05-28 2000-05-16 General Electric Company Ultrasound imaging with higher-order nonlinearities
US6064906A (en) * 1997-03-14 2000-05-16 Emory University Method, system and apparatus for determining prognosis in atrial fibrillation
US6071242A (en) * 1998-06-30 2000-06-06 Diasonics Ultrasound, Inc. Method and apparatus for cross-sectional color doppler volume flow measurement
US6102858A (en) * 1998-04-23 2000-08-15 General Electric Company Method and apparatus for three-dimensional ultrasound imaging using contrast agents and harmonic echoes
US6106465A (en) * 1997-08-22 2000-08-22 Acuson Corporation Ultrasonic method and system for boundary detection of an object of interest in an ultrasound image
US6110111A (en) * 1999-05-26 2000-08-29 Diagnostic Ultrasound Corporation System for quantizing bladder distension due to pressure using normalized surface area of the bladder
US6117080A (en) * 1997-06-04 2000-09-12 Atl Ultrasound Ultrasonic imaging apparatus and method for breast cancer diagnosis with the use of volume rendering
US6122538A (en) * 1997-01-16 2000-09-19 Acuson Corporation Motion--Monitoring method and system for medical devices
US6123669A (en) * 1998-05-13 2000-09-26 Kabushiki Kaisha Toshiba 3D ultrasound imaging using 2D array
US6309353B1 (en) * 1998-10-27 2001-10-30 Mitani Sangyo Co., Ltd. Methods and apparatus for tumor diagnosis
US6939301B2 (en) * 2001-03-16 2005-09-06 Yaakov Abdelhak Automatic volume measurements: an application for 3D ultrasound
US7041059B2 (en) * 2002-08-02 2006-05-09 Diagnostic Ultrasound Corporation 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US7087022B2 (en) * 2002-06-07 2006-08-08 Diagnostic Ultrasound Corporation 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US20080242985A1 (en) * 2003-05-20 2008-10-02 Vikram Chalana 3d ultrasound-based instrument for non-invasive measurement of amniotic fluid volume

Family Cites Families (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6196226B1 (en) * 1990-08-10 2001-03-06 University Of Washington Methods and apparatus for optically imaging neuronal tissue and activity
DE69736549T2 (en) * 1996-02-29 2007-08-23 Acuson Corp., Mountain View System, method and converter for orienting multiple ultrasound images
US6569101B2 (en) * 2001-04-19 2003-05-27 Sonosite, Inc. Medical diagnostic ultrasound instrument with ECG module, authorization mechanism and methods of use
US6343936B1 (en) * 1996-09-16 2002-02-05 The Research Foundation Of State University Of New York System and method for performing a three-dimensional virtual examination, navigation and visualization
US6045508A (en) * 1997-02-27 2000-04-04 Acuson Corporation Ultrasonic probe, system and method for two-dimensional imaging or three-dimensional reconstruction
US6213949B1 (en) * 1999-05-10 2001-04-10 Srs Medical Systems, Inc. System for estimating bladder volume
US6200266B1 (en) * 1998-03-31 2001-03-13 Case Western Reserve University Method and apparatus for ultrasound imaging using acoustic impedance reconstruction
US6511325B1 (en) * 1998-05-04 2003-01-28 Advanced Research & Technology Institute Aortic stent-graft calibration and training model
US6511426B1 (en) * 1998-06-02 2003-01-28 Acuson Corporation Medical diagnostic ultrasound system and method for versatile processing
US6359190B1 (en) * 1998-06-29 2002-03-19 The Procter & Gamble Company Device for measuring the volume of a body cavity
US7672708B2 (en) * 1998-07-23 2010-03-02 David Roberts Method and apparatus for the non-invasive imaging of anatomic tissue structures
US6346124B1 (en) * 1998-08-25 2002-02-12 University Of Florida Autonomous boundary detection system for echocardiographic images
US6545678B1 (en) * 1998-11-05 2003-04-08 Duke University Methods, systems, and computer program products for generating tissue surfaces from volumetric data thereof using boundary traces
US6524249B2 (en) * 1998-11-11 2003-02-25 Spentech, Inc. Doppler ultrasound method and apparatus for monitoring blood flow and detecting emboli
JP2000139917A (en) * 1998-11-12 2000-05-23 Toshiba Corp Ultrasonograph
US6193657B1 (en) * 1998-12-31 2001-02-27 Ge Medical Systems Global Technology Company, Llc Image based probe position and orientation detection
US6213951B1 (en) * 1999-02-19 2001-04-10 Acuson Corporation Medical diagnostic ultrasound method and system for contrast specific frequency imaging
US6544181B1 (en) * 1999-03-05 2003-04-08 The General Hospital Corporation Method and apparatus for measuring volume flow and area for a dynamic orifice
US6400848B1 (en) * 1999-03-30 2002-06-04 Eastman Kodak Company Method for modifying the perspective of a digital image
US6210327B1 (en) * 1999-04-28 2001-04-03 General Electric Company Method and apparatus for sending ultrasound image data to remotely located device
US6259945B1 (en) * 1999-04-30 2001-07-10 Uromed Corporation Method and device for locating a nerve
US6235038B1 (en) * 1999-10-28 2001-05-22 Medtronic Surgical Navigation Technologies System for translation of electromagnetic and optical localization systems
US6466817B1 (en) * 1999-11-24 2002-10-15 Nuvasive, Inc. Nerve proximity and status detection system and method
US6338716B1 (en) * 1999-11-24 2002-01-15 Acuson Corporation Medical diagnostic ultrasonic transducer probe and imaging system for use with a position and orientation sensor
US6350239B1 (en) * 1999-12-28 2002-02-26 Ge Medical Systems Global Technology Company, Llc Method and apparatus for distributed software architecture for medical diagnostic systems
US6515657B1 (en) * 2000-02-11 2003-02-04 Claudio I. Zanelli Ultrasonic imager
US6406431B1 (en) * 2000-02-17 2002-06-18 Diagnostic Ultrasound Corporation System for imaging the bladder during voiding
US6551246B1 (en) * 2000-03-06 2003-04-22 Acuson Corporation Method and apparatus for forming medical ultrasound images
US6511427B1 (en) * 2000-03-10 2003-01-28 Acuson Corporation System and method for assessing body-tissue properties using a medical ultrasound transducer probe with a body-tissue parameter measurement mechanism
US6238344B1 (en) * 2000-03-30 2001-05-29 Acuson Corporation Medical diagnostic ultrasound imaging system with a wirelessly-controlled peripheral
US6503204B1 (en) * 2000-03-31 2003-01-07 Acuson Corporation Two-dimensional ultrasonic transducer array having transducer elements in a non-rectangular or hexagonal grid for medical diagnostic ultrasonic imaging and ultrasound imaging system using same
US20020016545A1 (en) * 2000-04-13 2002-02-07 Quistgaard Jens U. Mobile ultrasound diagnostic instrument and system using wireless video transmission
US6682473B1 (en) * 2000-04-14 2004-01-27 Solace Therapeutics, Inc. Devices and methods for attenuation of pressure waves in the body
EP1162476A1 (en) * 2000-06-06 2001-12-12 Kretztechnik Aktiengesellschaft Method for examining objects with ultrasound
KR100350026B1 (en) * 2000-06-17 2002-08-24 Medison Co., Ltd. Ultrasound imaging method and apparatus based on pulse compression technique using a spread spectrum signal
US6569097B1 (en) * 2000-07-21 2003-05-27 Diagnostics Ultrasound Corporation System for remote evaluation of ultrasound information obtained by a programmed application-specific data collection device
US6375616B1 (en) * 2000-11-10 2002-04-23 Biomedicom Ltd. Automatic fetal weight determination
US6491636B2 (en) * 2000-12-07 2002-12-10 Koninklijke Philips Electronics N.V. Automated border detection in ultrasonic diagnostic images
US6540679B2 (en) * 2000-12-28 2003-04-01 Guided Therapy Systems, Inc. Visual imaging system for ultrasonic probe
US6868594B2 (en) * 2001-01-05 2005-03-22 Koninklijke Philips Electronics, N.V. Method for making a transducer
US7042386B2 (en) * 2001-12-11 2006-05-09 Essex Corporation Sub-aperture sidelobe and alias mitigation techniques
US6544179B1 (en) * 2001-12-14 2003-04-08 Koninklijke Philips Electronics, Nv Ultrasound imaging system and method having automatically selected transmit focal positions
KR100406098B1 (en) * 2001-12-26 2003-11-14 Medison Co., Ltd. Ultrasound imaging system and method based on simultaneous multiple transmit-focusing using the weighted orthogonal chirp signals
US6878115B2 (en) * 2002-03-28 2005-04-12 Ultrasound Detection Systems, Llc Three-dimensional ultrasound computed tomography imaging system
US6705993B2 (en) * 2002-05-10 2004-03-16 Regents Of The University Of Minnesota Ultrasound imaging system and method using non-linear post-beamforming filter
US7727150B2 (en) * 2002-06-07 2010-06-01 Verathon Inc. Systems and methods for determining organ wall mass by three-dimensional ultrasound
US20090062644A1 (en) * 2002-06-07 2009-03-05 Mcmorrow Gerald System and method for ultrasound harmonic imaging
US8221321B2 (en) * 2002-06-07 2012-07-17 Verathon Inc. Systems and methods for quantification and classification of fluids in human cavities in ultrasound images
US7004904B2 (en) * 2002-08-02 2006-02-28 Diagnostic Ultrasound Corporation Image enhancement and segmentation of structures in 3D ultrasound images for volume measurements
US6884217B2 (en) * 2003-06-27 2005-04-26 Diagnostic Ultrasound Corporation System for aiming ultrasonic bladder instruments
US20060025689A1 (en) * 2002-06-07 2006-02-02 Vikram Chalana System and method to measure cardiac ejection fraction
US20090112089A1 (en) * 2007-10-27 2009-04-30 Bill Barnard System and method for measuring bladder wall thickness and presenting a bladder virtual image
GB2391625A (en) * 2002-08-09 2004-02-11 Diagnostic Ultrasound Europ B Instantaneous ultrasonic echo measurement of bladder urine volume with a limited number of ultrasound beams
US6780152B2 (en) * 2002-06-26 2004-08-24 Acuson Corporation Method and apparatus for ultrasound imaging of the heart
US6905468B2 (en) * 2002-09-18 2005-06-14 Diagnostic Ultrasound Corporation Three-dimensional system for abdominal aortic aneurysm evaluation
US6695780B1 (en) * 2002-10-17 2004-02-24 Gerard Georges Nahum Methods, systems, and computer program products for estimating fetal weight at birth and risk of macrosomia
US8708909B2 (en) * 2004-01-20 2014-04-29 Fujifilm Visualsonics, Inc. High frequency ultrasound imaging using contrast agents
US7846103B2 (en) * 2004-09-17 2010-12-07 Medical Equipment Diversified Services, Inc. Probe guide for use with medical imaging systems
WO2008144449A2 (en) * 2007-05-16 2008-11-27 Verathon Inc. System and method for bladder detection using ultrasonic harmonic imaging
WO2009032778A2 (en) * 2007-08-29 2009-03-12 Verathon Inc. System and methods for nerve response mapping

Patent Citations (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3613069A (en) * 1969-09-22 1971-10-12 Gen Dynamics Corp Sonar system
US4431007A (en) * 1981-02-04 1984-02-14 General Electric Company Referenced real-time ultrasonic image display
US4771205A (en) * 1983-08-31 1988-09-13 U.S. Philips Corporation Ultrasound transducer
US4556066A (en) * 1983-11-04 1985-12-03 The Kendall Company Ultrasound acoustical coupling pad
US4926871A (en) * 1985-05-08 1990-05-22 International Biomedics, Inc. Apparatus and method for non-invasively and automatically measuring the volume of urine in a human bladder
US4757921A (en) * 1985-05-24 1988-07-19 Wm Still & Sons Limited Water dispensers and methods
US4844080A (en) * 1987-02-19 1989-07-04 Michael Frass Ultrasound contact medium dispenser
US4821210A (en) * 1987-04-02 1989-04-11 General Electric Co. Fast display of three-dimensional images
US5724101A (en) * 1987-04-09 1998-03-03 Prevail, Inc. System for conversion of non standard video signals to standard formats for transmission and presentation
US5058591A (en) * 1987-11-10 1991-10-22 The United States Of America As Represented By The Administrator Of The National Aeronautics And Space Administration Rapidly quantifying the relative distention of a human bladder
US5473555A (en) * 1988-08-18 1995-12-05 Hewlett-Packard Company Method and apparatus for enhancing frequency domain analysis
US5265614A (en) * 1988-08-30 1993-11-30 Fujitsu Limited Acoustic coupler
US5159931A (en) * 1988-11-25 1992-11-03 Riccardo Pini Apparatus for obtaining a three-dimensional reconstruction of anatomic structures through the acquisition of echographic images
US5060515A (en) * 1989-03-01 1991-10-29 Kabushiki Kaisha Toshiba Image signal processing circuit for ultrasonic imaging apparatus
US5299577A (en) * 1989-04-20 1994-04-05 National Fertility Institute Apparatus and method for image processing including one-dimensional clean approximation
US5197019A (en) * 1989-07-20 1993-03-23 Asulab S.A. Method of measuring distance using ultrasonic waves
US5151856A (en) * 1989-08-30 1992-09-29 Technion R & D Found. Ltd. Method of displaying coronary function
US5078149A (en) * 1989-09-29 1992-01-07 Terumo Kabushiki Kaisha Ultrasonic coupler and method for production thereof
US5125410A (en) * 1989-10-13 1992-06-30 Olympus Optical Co., Ltd. Integrated ultrasonic diagnosis device utilizing intra-blood-vessel probe
US5644513A (en) * 1989-12-22 1997-07-01 Rudin; Leonid I. System incorporating feature-oriented signal enhancement using shock filters
US5148809A (en) * 1990-02-28 1992-09-22 Asgard Medical Systems, Inc. Method and apparatus for detecting blood vessels and displaying an enhanced video image from an ultrasound scan
US5235985A (en) * 1992-04-30 1993-08-17 Mcmorrow Gerald J Automatic bladder scanning apparatus
US5432310A (en) * 1992-07-22 1995-07-11 Stoeger; Helmut Fluid pressure operated switch with piston actuator
US5381794A (en) * 1993-01-21 1995-01-17 Aloka Co., Ltd. Ultrasonic probe apparatus
US5697525A (en) * 1993-02-10 1997-12-16 Daniel Joseph O'Reilly Bag for dispensing fluid material and a dispenser having the bag
US5553618A (en) * 1993-03-12 1996-09-10 Kabushiki Kaisha Toshiba Method and apparatus for ultrasound medical treatment
US5898793A (en) * 1993-04-13 1999-04-27 Karron; Daniel System and method for surface rendering of internal structures within the interior of a solid object
US5435310A (en) * 1993-06-23 1995-07-25 University Of Washington Determining cardiac wall thickness and motion by imaging and three-dimensional modeling
US5601084A (en) * 1993-06-23 1997-02-11 University Of Washington Determining cardiac wall thickness and motion by imaging and three-dimensional modeling
US5575291A (en) * 1993-11-17 1996-11-19 Fujitsu Ltd. Ultrasonic coupler
US5465721A (en) * 1994-04-22 1995-11-14 Hitachi Medical Corporation Ultrasonic diagnostic apparatus and ultrasonic diagnosis method
US5908390A (en) * 1994-05-10 1999-06-01 Fujitsu Limited Ultrasonic diagnostic apparatus
US5698549A (en) * 1994-05-12 1997-12-16 Uva Patent Foundation Method of treating hyperactive voiding with calcium channel blockers
US5645077A (en) * 1994-06-16 1997-07-08 Massachusetts Institute Of Technology Inertial orientation tracker apparatus having automatic drift compensation for tracking human head and other similarly sized body
US5615680A (en) * 1994-07-22 1997-04-01 Kabushiki Kaisha Toshiba Method of imaging in ultrasound diagnosis and diagnostic ultrasound system
US5577506A (en) * 1994-08-10 1996-11-26 Hewlett Packard Company Catheter probe having a fixed acoustic reflector for full-circle imaging
US5972023A (en) * 1994-08-15 1999-10-26 Eva Corporation Implantation device for an aortic graft method of treating aortic aneurysm
US5526816A (en) * 1994-09-22 1996-06-18 Bracco Research S.A. Ultrasonic spectral contrast imaging
US5503152A (en) * 1994-09-28 1996-04-02 Tetrad Corporation Ultrasonic transducer assembly and method for three-dimensional imaging
US5487388A (en) * 1994-11-01 1996-01-30 Interspec. Inc. Three dimensional ultrasonic scanning devices and techniques
US5575286A (en) * 1995-03-31 1996-11-19 Siemens Medical Systems, Inc. Method and apparatus for generating large compound ultrasound image
US5494038A (en) * 1995-04-25 1996-02-27 Abbott Laboratories Apparatus for ultrasound testing
US5770801A (en) * 1995-04-25 1998-06-23 Abbott Laboratories Ultrasound transmissive pad
US5503153A (en) * 1995-06-30 1996-04-02 Siemens Medical Systems, Inc. Noise suppression method utilizing motion compensation for ultrasound images
US5588435A (en) * 1995-11-22 1996-12-31 Siemens Medical Systems, Inc. System and method for automatic measurement of body structures
US5841889A (en) * 1995-12-29 1998-11-24 General Electric Company Ultrasound image texture control using adaptive speckle control algorithm
US5873829A (en) * 1996-01-29 1999-02-23 Kabushiki Kaisha Toshiba Diagnostic ultrasound system using harmonic echo imaging
US5655539A (en) * 1996-02-26 1997-08-12 Abbott Laboratories Method for conducting an ultrasound procedure using an ultrasound transmissive pad
US5851186A (en) * 1996-02-27 1998-12-22 Atl Ultrasound, Inc. Ultrasonic diagnostic imaging system with universal access to diagnostic information and images
US5806521A (en) * 1996-03-26 1998-09-15 Sandia Corporation Composite ultrasound imaging apparatus and method
US5605155A (en) * 1996-03-29 1997-02-25 University Of Washington Ultrasound system for automatically measuring fetal head size
US5840033A (en) * 1996-05-29 1998-11-24 Ge Yokogawa Medical Systems, Limited Method and apparatus for ultrasound imaging
US5735282A (en) * 1996-05-30 1998-04-07 Acuson Corporation Flexible ultrasonic transducers and related systems
US5846202A (en) * 1996-07-30 1998-12-08 Acuson Corporation Ultrasound method and system for imaging
US5776063A (en) * 1996-09-30 1998-07-07 Molecular Biosystems, Inc. Analysis of ultrasound images in the presence of contrast agent
US5903664A (en) * 1996-11-01 1999-05-11 General Electric Company Fast segmentation of cardiac images
US5738097A (en) * 1996-11-08 1998-04-14 Diagnostics Ultrasound Corporation Vector Doppler system for stroke screening
US6030344A (en) * 1996-12-04 2000-02-29 Acuson Corporation Methods and apparatus for ultrasound image quantification
US5782767A (en) * 1996-12-31 1998-07-21 Diagnostic Ultrasound Corporation Coupling pad for use with medical ultrasound devices
US6122538A (en) * 1997-01-16 2000-09-19 Acuson Corporation Motion--Monitoring method and system for medical devices
US5892843A (en) * 1997-01-21 1999-04-06 Matsushita Electric Industrial Co., Ltd. Title, caption and photo extraction from scanned document images
US6064906A (en) * 1997-03-14 2000-05-16 Emory University Method, system and apparatus for determining prognosis in atrial fibrillation
US6117080A (en) * 1997-06-04 2000-09-12 Atl Ultrasound Ultrasonic imaging apparatus and method for breast cancer diagnosis with the use of volume rendering
US5913823A (en) * 1997-07-15 1999-06-22 Acuson Corporation Ultrasound imaging method and system for transmit signal generation for an ultrasonic imaging system capable of harmonic imaging
US6008813A (en) * 1997-08-01 1999-12-28 Mitsubishi Electric Information Technology Center America, Inc. (Ita) Real-time PC based volume rendering system
US5945770A (en) * 1997-08-20 1999-08-31 Acuson Corporation Multilayer ultrasound transducer and the method of manufacture thereof
US5928151A (en) * 1997-08-22 1999-07-27 Acuson Corporation Ultrasonic system and method for harmonic imaging in three dimensions
US6106465A (en) * 1997-08-22 2000-08-22 Acuson Corporation Ultrasonic method and system for boundary detection of an object of interest in an ultrasound image
US5971923A (en) * 1997-12-31 1999-10-26 Acuson Corporation Ultrasound system and method for interfacing with peripherals
US5964710A (en) * 1998-03-13 1999-10-12 Srs Medical, Inc. System for estimating bladder volume
US5980459A (en) * 1998-03-31 1999-11-09 General Electric Company Ultrasound imaging using coded excitation on transmit and selective filtering of fundamental and (sub)harmonic signals on receive
US6102858A (en) * 1998-04-23 2000-08-15 General Electric Company Method and apparatus for three-dimensional ultrasound imaging using contrast agents and harmonic echoes
US6048312A (en) * 1998-04-23 2000-04-11 Ishrak; Syed Omar Method and apparatus for three-dimensional ultrasound imaging of biopsy needle
US6123669A (en) * 1998-05-13 2000-09-26 Kabushiki Kaisha Toshiba 3D ultrasound imaging using 2D array
US6071242A (en) * 1998-06-30 2000-06-06 Diasonics Ultrasound, Inc. Method and apparatus for cross-sectional color doppler volume flow measurement
US5993390A (en) * 1998-09-18 1999-11-30 Hewlett-Packard Company Segmented 3-D cardiac ultrasound imaging method and apparatus
US6309353B1 (en) * 1998-10-27 2001-10-30 Mitani Sangyo Co., Ltd. Methods and apparatus for tumor diagnosis
US6042545A (en) * 1998-11-25 2000-03-28 Acuson Corporation Medical diagnostic ultrasound system and method for transform ultrasound processing
US6110111A (en) * 1999-05-26 2000-08-29 Diagnostic Ultrasound Corporation System for quantizing bladder distension due to pressure using normalized surface area of the bladder
US6063033A (en) * 1999-05-28 2000-05-16 General Electric Company Ultrasound imaging with higher-order nonlinearities
US6939301B2 (en) * 2001-03-16 2005-09-06 Yaakov Abdelhak Automatic volume measurements: an application for 3D ultrasound
US7087022B2 (en) * 2002-06-07 2006-08-08 Diagnostic Ultrasound Corporation 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US7520857B2 (en) * 2002-06-07 2009-04-21 Verathon Inc. 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US7041059B2 (en) * 2002-08-02 2006-05-09 Diagnostic Ultrasound Corporation 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US20080242985A1 (en) * 2003-05-20 2008-10-02 Vikram Chalana 3d ultrasound-based instrument for non-invasive measurement of amniotic fluid volume

Cited By (80)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8221322B2 (en) 2002-06-07 2012-07-17 Verathon Inc. Systems and methods to improve clarity in ultrasound images
US20060235301A1 (en) * 2002-06-07 2006-10-19 Vikram Chalana 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US20090062644A1 (en) * 2002-06-07 2009-03-05 Mcmorrow Gerald System and method for ultrasound harmonic imaging
US8221321B2 (en) 2002-06-07 2012-07-17 Verathon Inc. Systems and methods for quantification and classification of fluids in human cavities in ultrasound images
US7819806B2 (en) 2002-06-07 2010-10-26 Verathon Inc. System and method to identify and measure organ wall boundaries
US7744534B2 (en) * 2002-06-07 2010-06-29 Verathon Inc. 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US20100198075A1 (en) * 2002-08-09 2010-08-05 Verathon Inc. Instantaneous ultrasonic echo measurement of bladder volume with a limited number of ultrasound beams
US8308644B2 (en) 2002-08-09 2012-11-13 Verathon Inc. Instantaneous ultrasonic measurement of bladder volume
US9993225B2 (en) 2002-08-09 2018-06-12 Verathon Inc. Instantaneous ultrasonic echo measurement of bladder volume with a limited number of ultrasound beams
US8167802B2 (en) * 2002-09-12 2012-05-01 Hitachi Medical Corporation Biological tissue motion trace method and image diagnosis device using the trace method
US20060173292A1 (en) * 2002-09-12 2006-08-03 Hirotaka Baba Biological tissue motion trace method and image diagnosis device using the trace method
US20090137906A1 (en) * 2005-07-25 2009-05-28 Hakko Co., Ltd. Ultrasonic Piercing Needle
US20120117807A1 (en) * 2005-07-25 2012-05-17 Hakko Co., Ltd. Ultrasonic piercing needle
US8133181B2 (en) 2007-05-16 2012-03-13 Verathon Inc. Device, system and method to measure abdominal aortic aneurysm diameter
US8167803B2 (en) 2007-05-16 2012-05-01 Verathon Inc. System and method for bladder detection using harmonic imaging
US20090105585A1 (en) * 2007-05-16 2009-04-23 Yanwei Wang System and method for ultrasonic harmonic imaging
US20100125201A1 (en) * 2008-11-18 2010-05-20 Kabushiki Kaisha Toshiba Ultrasound imaging apparatus
CN102292744A (en) * 2009-01-23 2011-12-21 Koninklijke Philips Electronics N.V. Cardiac image processing and analysis
US8848989B2 (en) 2009-01-23 2014-09-30 Koninklijke Philips N.V. Cardiac image processing and analysis
WO2010084446A1 (en) * 2009-01-23 2010-07-29 Koninklijke Philips Electronics N.V. Cardiac image processing and analysis
WO2010106379A1 (en) * 2009-03-20 2010-09-23 Mediwatch Uk Limited Ultrasound probe with accelerometer
US8914245B2 (en) 2009-03-20 2014-12-16 Andrew David Hopkins Ultrasound probe with accelerometer
US20110295121A1 (en) * 2010-05-31 2011-12-01 Medison Co., Ltd. 3d ultrasound system and method for operating 3d ultrasound system
US8491480B2 (en) * 2010-05-31 2013-07-23 Medison Co., Ltd. 3D ultrasound system and method for operating 3D ultrasound system
US9119591B2 (en) 2010-05-31 2015-09-01 Samsung Medison Co., Ltd. 3D ultrasound system and method for operating 3D ultrasound system
US9408590B2 (en) 2010-05-31 2016-08-09 Samsung Medison Co., Ltd. 3D ultrasound system and method for operating 3D ultrasound system
US8824761B2 (en) * 2010-08-23 2014-09-02 General Electric Company Image processing method to determine suspect regions in a tissue matrix, and use thereof for 3D navigation through the tissue matrix
US20120045111A1 (en) * 2010-08-23 2012-02-23 Palma Giovanni Image processing method to determine suspect regions in a tissue matrix, and use thereof for 3d navigation through the tissue matrix
US20130158405A1 (en) * 2010-08-31 2013-06-20 B-K Medical Aps 3D View of 2D Ultrasound Images
US9386964B2 (en) * 2010-08-31 2016-07-12 B-K Medical Aps 3D view of 2D ultrasound images
US11224369B2 (en) 2010-12-22 2022-01-18 Veebot Systems, Inc. Systems and methods for autonomous intravenous needle insertion
WO2012088471A1 (en) * 2010-12-22 2012-06-28 Veebot, Llc Systems and methods for autonomous intravenous needle insertion
US10238327B2 (en) 2010-12-22 2019-03-26 Veebot Systems Inc Systems and methods for autonomous intravenous needle insertion
US11751782B2 (en) 2010-12-22 2023-09-12 Veebot Systems, Inc. Systems and methods for autonomous intravenous needle insertion
US9913605B2 (en) 2010-12-22 2018-03-13 Veebot Systems, Inc. Systems and methods for autonomous intravenous needle insertion
US9364171B2 (en) 2010-12-22 2016-06-14 Veebot Systems, Inc. Systems and methods for autonomous intravenous needle insertion
US20120215107A1 (en) * 2011-02-17 2012-08-23 Fujifilm Corporation Ultrasound probe and ultrasound diagnostic apparatus
CN102670260A (en) * 2011-03-18 2012-09-19 Fujifilm Corporation Ultrasound diagnostic apparatus and ultrasound image producing method
US8657751B2 (en) * 2011-03-18 2014-02-25 Fujifilm Corporation Ultrasound diagnostic apparatus and ultrasound image producing method
US20120238878A1 (en) * 2011-03-18 2012-09-20 Fujifilm Corporation Ultrasound diagnostic apparatus and ultrasound image producing method
US9551570B2 (en) 2011-03-31 2017-01-24 Ats Automation Tooling Systems Inc. Three dimensional optical sensing through optical media
WO2012129695A1 (en) * 2011-03-31 2012-10-04 Ats Automation Tooling Systems Inc. Three dimensional optical sensing through optical media
US20140303498A1 (en) * 2011-08-08 2014-10-09 Canon Kabushiki Kaisha Object information acquisition apparatus, object information acquisition system, display control method, display method, and program
US9277877B2 (en) * 2012-01-30 2016-03-08 The Johns Hopkins University Automated pneumothorax detection
US20150065849A1 (en) * 2012-01-30 2015-03-05 The Johns Hopkins University Automated Pneumothorax Detection
US20130197370A1 (en) * 2012-01-30 2013-08-01 The Johns Hopkins University Automated Pneumothorax Detection
US8914097B2 (en) * 2012-01-30 2014-12-16 The Johns Hopkins University Automated pneumothorax detection
US9220482B2 (en) 2012-03-09 2015-12-29 Samsung Medison Co., Ltd. Method for providing ultrasound images and ultrasound apparatus
KR101386102B1 (en) * 2012-03-09 2014-04-16 Samsung Medison Co., Ltd. Method for providing ultrasound images and ultrasound apparatus thereof
US20180158190A1 (en) * 2013-03-15 2018-06-07 Conavi Medical Inc. Data display and processing algorithms for 3d imaging systems
US10699411B2 (en) * 2013-03-15 2020-06-30 Sunnybrook Research Institute Data display and processing algorithms for 3D imaging systems
US20140328526A1 (en) * 2013-05-02 2014-11-06 Toshiba Medical Systems Corporation Medical imaging data processing apparatus and method
US9468420B2 (en) * 2013-05-02 2016-10-18 Toshiba Medical Systems Corporation Medical imaging data processing apparatus and method
US20140371587A1 (en) * 2013-06-12 2014-12-18 Wisconsin Alumni Research Foundation Ultrasound Machine Providing Composite Image Data
US9386965B2 (en) * 2013-06-12 2016-07-12 Wisconsin Alumni Research Foundation Ultrasound machine providing composite image data
US11064924B2 (en) * 2014-11-27 2021-07-20 Novioscan B.V. Wearable ultrasound device for signalling changes in a human or animal body
US20170258386A1 (en) * 2014-11-27 2017-09-14 Umc Utrecht Holding B.V. Wearable ultrasound device for signalling changes in a human or animal body
US11304676B2 (en) 2015-01-23 2022-04-19 The University Of North Carolina At Chapel Hill Apparatuses, systems, and methods for preclinical ultrasound imaging of subjects
US10074191B1 (en) 2015-07-05 2018-09-11 Cognex Corporation System and method for determination of object volume with multiple three-dimensional sensors
US10555724B2 (en) * 2015-09-01 2020-02-11 Shenzhen Institutes Of Advanced Technology Chinese Academy Of Sciences Ultrasound probe calibration phantom, ultrasound probe calibration system and calibration method thereof
US20170245837A1 (en) * 2015-09-01 2017-08-31 Shenzhen Institutes Of Advanced Technology, Chinese Academy Of Sciences Ultrasound probe calibration phantom, ultrasound probe calibration system and calibration method thereof
US10342517B2 (en) * 2015-09-02 2019-07-09 Ningbo Youchang Ultrasonic Technology Co., Ltd Wireless intelligent ultrasound fetal imaging system
US20170055955A1 (en) * 2015-09-02 2017-03-02 Ningbo Youchang Ultrasonic Technology Co., Ltd Wireless intelligent ultrasound fetal imaging system
US9947097B2 (en) * 2016-01-19 2018-04-17 General Electric Company Method and system for enhanced fetal visualization by detecting and displaying a fetal head position with cross-plane ultrasound images
US11523799B2 (en) * 2016-03-09 2022-12-13 Koninklijke Philips N.V. Fetal imaging system and method
US20210259665A1 (en) * 2016-08-10 2021-08-26 The Government Of The United States As Represented By The Secretary Of The Army Automated Three and Four-Dimensional Ultrasound Quantification and Surveillance of Free Fluid in Body Cavities and Intravascular Volume
US11123042B2 (en) 2016-08-10 2021-09-21 The Government Of The United States As Represented By The Secretary Of The Army Automated three and four-dimensional ultrasound quantification and surveillance of free fluid in body cavities and intravascular volume
WO2018031754A1 (en) * 2016-08-10 2018-02-15 U.S. Government As Represented By The Secretary Of The Army Automated three and four-dimensional ultrasound quantification and surveillance of free fluid in body cavities and intravascular volume
US20180042577A1 (en) * 2016-08-12 2018-02-15 General Electric Company Methods and systems for ultrasound imaging
US11432803B2 (en) * 2016-08-12 2022-09-06 General Electric Company Method and system for generating a visualization plane from 3D ultrasound data
US11564656B2 (en) * 2018-03-13 2023-01-31 Verathon Inc. Generalized interlaced scanning with an ultrasound probe
US11638569B2 (en) * 2018-06-08 2023-05-02 Rutgers, The State University Of New Jersey Computer vision systems and methods for real-time needle detection, enhancement and localization in ultrasound
US11426142B2 (en) 2018-08-13 2022-08-30 Rutgers, The State University Of New Jersey Computer vision systems and methods for real-time localization of needles in ultrasound images
KR20210002198A (en) * 2019-06-27 2021-01-07 Korea University Research and Business Foundation Method for automatic measurement of amniotic fluid volume with camera angle correction function
KR102318155B1 2019-06-27 2021-10-28 Korea University Research and Business Foundation Method for automatic measurement of amniotic fluid volume with camera angle correction function
KR102270917B1 2019-06-27 2021-07-01 Korea University Research and Business Foundation Method for automatic measurement of amniotic fluid volume based on artificial intelligence model
KR20210002197A (en) * 2019-06-27 2021-01-07 Korea University Research and Business Foundation Method for automatic measurement of amniotic fluid volume based on artificial intelligence model
CN114513989A (en) * 2019-09-27 2022-05-17 Bfly Operations Co., Ltd. Method and apparatus for configuring imaging parameter values for ultrasound systems
US20210096243A1 (en) * 2019-09-27 2021-04-01 Butterfly Network, Inc. Methods and apparatus for configuring an ultrasound system with imaging parameter values
WO2022169808A1 (en) * 2021-02-02 2022-08-11 Thormed Innovation Llc Systems and methods for real time noninvasive urine output assessment

Also Published As

Publication number Publication date
US7520857B2 (en) 2009-04-21
WO2005107581A3 (en) 2007-03-08
US20080242985A1 (en) 2008-10-02
WO2005107581A8 (en) 2007-09-20
WO2005107581A2 (en) 2005-11-17
US20050251039A1 (en) 2005-11-10

Similar Documents

Publication Publication Date Title
US20080146932A1 (en) 3D ultrasound-based instrument for non-invasive measurement of Amniotic Fluid Volume
US8221322B2 (en) Systems and methods to improve clarity in ultrasound images
US7744534B2 (en) 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US7087022B2 (en) 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
US7041059B2 (en) 3D ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
EP1538986B1 (en) 3d ultrasound-based instrument for non-invasive measurement of fluid-filled and non fluid-filled structures
US7819806B2 (en) System and method to identify and measure organ wall boundaries
JP6902625B2 (en) Ultrasonography based on probability map
Gee et al. Engineering a freehand 3D ultrasound system
Rohling et al. Automatic registration of 3-D ultrasound images
US8435181B2 (en) System and method to identify and measure organ wall boundaries
US20080139938A1 (en) System and method to identify and measure organ wall boundaries
EP1781176B1 (en) System and method for measuring bladder wall thickness and mass
US20080181479A1 (en) System and method for cardiac imaging
US20050228278A1 (en) Ultrasound system and method for measuring bladder wall thickness and mass
US20090112089A1 (en) System and method for measuring bladder wall thickness and presenting a bladder virtual image
US20100036252A1 (en) Ultrasound system and method for measuring bladder wall thickness and mass
US20130150718A1 (en) Ultrasound imaging system and method for imaging an endometrium
WO2007049207A1 (en) System and method for generating for display two-dimensional echocardiography views from a three-dimensional image
AU2009326864B2 (en) Medical diagnostic method and apparatus
CA2541798A1 (en) 3d ultrasound-based instrument for non-invasive measurement of amniotic fluid volume
CN114466620A (en) System and method for ultrasound perfusion imaging
Nada Fetus ultrasound 3D image construction
Brett Volume segmentation and visualisation for a 3D ultrasound acquisition system

Legal Events

Date Code Title Description
AS Assignment

Owner name: VERATHON INC.,WASHINGTON

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:MCMORROW, GERALD J;CHALANA, VIKRAM;DUDYCHA, STEPHEN;SIGNING DATES FROM 20100220 TO 20100308;REEL/FRAME:024088/0943

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION