WO2008127202A1 - Apparatus and method for manipulating a three-dimensional object/volume - Google Patents

Apparatus and method for manipulating a three-dimensional object/volume

Info

Publication number
WO2008127202A1
Authority
WO
WIPO (PCT)
Prior art keywords
dimensional object
matrix
determination module
user
orientation
Application number
PCT/SG2008/000125
Other languages
French (fr)
Inventor
Jerome Chan Lee
Lin Chia Goh
Luis Serra
Original Assignee
Bracco Imaging S.P.A.
Application filed by Bracco Imaging S.P.A.
Publication of WO2008127202A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0481Interaction techniques based on graphical user interfaces [GUI] based on specific properties of the displayed interaction object or a metaphor-based environment, e.g. interaction with desktop elements like windows or icons, or assisted by a cursor's changing behaviour or appearance
    • G06F3/04815Interaction with a metaphor-based environment or interaction object displayed as three-dimensional, e.g. changing the user viewpoint with respect to the environment or object
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F3/00Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
    • G06F3/01Input arrangements or combined input and output arrangements for interaction between user and computer
    • G06F3/048Interaction techniques based on graphical user interfaces [GUI]
    • G06F3/0484Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range
    • G06F3/04845Interaction techniques based on graphical user interfaces [GUI] for the control of specific functions or operations, e.g. selecting or manipulating an object, an image or a displayed text element, setting a parameter value or selecting a range for image manipulation, e.g. dragging, rotation, expansion or change of colour
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T19/00Manipulating 3D models or images for computer graphics
    • G06T19/20Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2219/00Indexing scheme for manipulating 3D models or images for computer graphics
    • G06T2219/20Indexing scheme for editing of 3D models
    • G06T2219/2016Rotation, translation, scaling

Definitions

  • the invention relates to an apparatus and method for orienting a three-dimensional (3D) object in a pre-defined orientation responsive to a user selection.
  • the invention also relates to an apparatus and method for positioning a 3D object to a translated position responsive to a user selection.
  • the invention also relates to an apparatus and method for defining a surgical trajectory in a 3D object displaying medical imaging data.
  • 3D object may also encompass a 3D volume, voxel, etc. in, say, the space of a virtual reality environment.
  • Volume Interactions Pte Ltd of Singapore provides expertise in interactive three-dimensional virtual reality volumetric imaging techniques.
  • a family of products for implementing such techniques is the Dextroscope ® family.
  • the Dextroscope is an interactive console allowing a user to interact intuitively, comfortably and easily with virtual-reality 3D graphics generated by one or more modules from the RadioDexterTM suite of software programs.
  • the Dextroscope is one of the hardware platforms on which RadioDexter can be used. It is designed as a personal planning system, although it also allows collaborative work as a team of people can view the data simultaneously.
  • the Dextroscope allows a user to perceive stereoscopic virtual images within natural reach, in front of a user's eyes.
  • Liquid crystal display (LCD) shutter glasses are worn by the user to perceive the 3D image and plural users can simultaneously view and discuss the 3D data.
  • a Dextroscope is described in, for example, commonly-assigned International Patent Application No. PCT/SG01/00048, herein incorporated by reference in its entirety.
  • the RadioDexter suite of medical imaging visualisation software modules offers real-time volumetric and 3D surface rendering functionality combined with state-of-the-art Virtual Reality (VR) technology. Radiologists and surgeons may work with complex multimodal imaging data with comfort, intuition and speed, for a clear 3D understanding of anatomy and improved treatment planning.
  • RadioDexter generates a stereoscopic Virtual Reality environment in which a user can work interactively with real-time 3D data by "reaching into it” with both hands in a working volume in, say, the Dextroscope.
  • RadioDexter modules process data from Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) scanning processes, as well as volumetric ultrasound, etc.
  • a variety of virtual tools for visualization and surgical planning are accessible while working inside RadioDexter's 3D virtual workspace. This allows the user to work with complex multi-modal imaging data in a fast and intuitive way, for a clear three-dimensional understanding of the patient's anatomy and for improved treatment planning.
  • RadioDexter modules include: perspective stereoscopic shaded volume and surface rendering; multimodality image fusion (rendering of several volumes together); automatic volume registration and verification of the registered objects; segmentation with a click of a button; advanced surgical exploration tools: cropping, cutting, drilling, restoring, cloning, roaming, linear & volumetric measurement; easy-to-use colour and transparency mapping with volume rendering presets; capture of 3D interactive manipulations, with stereoscopic playback mode and video export capabilities to AVI; and easy reporting with tools incorporating 3D images, labeling, HTML exporting and printing.
  • RadioDexter can also optionally import Diffusion Weighted Imaging (DWI) datasets, generate Diffusion Tensor Imaging (DTI) fiber tracts and visualize them.
  • a disclosed technique is for the generation of a Surgical Trajectory.
  • Other disclosed techniques are for manipulating 3D objects/volumes/voxels to a pre-defined orientation and/or position.
  • the pre-defined orientation/position may be defined with respect to, for example, a surgical trajectory.
  • a new module of the RadioDexter suite of programs allows for the provision of a "Surgical Plan" which defines a Surgical Trajectory.
  • a Surgical Trajectory is a plan of a trajectory of a surgeon's route during a planned surgical procedure.
  • Surgical trajectory planning is an important part of pre-surgical planning.
  • RadioDexter allows a user to map out a pre-defined plan/route/trajectory for the surgery, taking due cognisance of subject patient data from medical imaging scanning processes.
  • the Surgical Trajectory may be defined by target and entry point pairs, each of which can be defined by a user as 3D point pairs in the coordinate space of Dextroscope/RadioDexter.
  • When used to implement one or more of the techniques disclosed herein, RadioDexter incorporates a feature which allows automatic orientation of an object, responsive to the user's selection, in a predefined orientation along, say, a Surgical Trajectory. Such an orientation allows a user to look into and along the pre-defined orientation. This is a particularly useful tool when the pre-defined orientation defines a Surgical Trajectory.
  • the pre-defined direction is an "up" direction, which might be up in the sense of a patient's normal upright position (e.g. when sitting or standing) or might correspond with a prone position that the patient might be in during surgery.
  • an "up" vector or matrix is added to the 3D object in the same way as a Surgical Trajectory (described below) to allow the 3D object to be oriented with respect to the vector/matrix.
  • the surgical trajectory has a vector orthogonal to the trajectory to indicate its orientation. This vector is used to constrain the views, and define an 'up' direction, so that the display in the virtual planning can be made to correspond to the views that would be obtained during actual surgery.
  • the 3D object is (re)positioned by positioning a source position (say a point on the Surgical Trajectory) with respect to a pre-defined point on the display. Again, this process is executed responsive to the user selection.
  • This can allow the re-oriented image to be displayed in an optimal position in, say, the centre of a display screen. This is particularly useful if the image is being displayed in, say, a zoom mode prior to the re-orientation, where the surgical trajectory is "off-centre" in the display screen prior to the orientation/positioning.
  • an adjustable cutting or clipping plane oriented with respect to the surgical trajectory - for example oriented perpendicularly - is also provided. The user can move this cutting/clipping plane along a surgical path and view a cross-sectional view of the 3D object and all the objects along the surgical path. This allows a surgeon to explore carefully the surgical trajectory and objects in and around the path of the procedure.
  • Figure 1 is a diagram illustrating a perspective view of a surgical trajectory object
  • Figure 2 is a series of screen captures from RadioDexter illustrating a virtual reality interface showing a 3D object (a brain surgery case with CT, MRI, MRA) and a surgical trajectory object;
  • Figure 3 is a screen capture illustrating a virtual reality interface for the creation and/or modification and/or deletion of a surgical trajectory;
  • Figure 4 is a block diagram illustrating a first architecture for orienting a 3D object in a pre-defined orientation responsive to a user selection;
  • Figure 5 is a process flow diagram illustrating a first process flow for orienting a 3D object with the architecture of Figure 4;
  • Figure 6 is a block diagram illustrating a second architecture for positioning a 3D object to a translated position responsive to a user selection;
  • Figure 7 is a process flow diagram illustrating a second process flow for positioning a 3D object to a translated position with the architecture of Figure 6;
  • Figure 8 is a series of screen captures illustrating a snap/alignment of a 3D object in normal mode using the processes of Figures 5 and 7;
  • Figure 9 is a series of screen captures illustrating a snap/alignment of a 3D object in zoom box mode
  • Figure 10 is a series of screen captures illustrating a snap/alignment of a 3D object in cutting plane mode
  • Figure 11 is a decision flow chart illustrating the various modes of operation of the described techniques
  • Figure 12 is a series of screen captures illustrating the addition of a surgical trajectory
  • Figure 14 is a series of screen captures illustrating the editing of a surgical trajectory
  • Figure 15 is a series of screen captures illustrating the deletion of a surgical trajectory
  • Figure 16 is a series of screen captures illustrating a process for editing a diameter of a surgical trajectory
  • Figure 17 is a series of screen captures illustrating implementation of cutting plane techniques
  • Figure 18 is a series of screen captures illustrating a technique for rotating a surgical trajectory
  • Figure 19 is a series of screen captures illustrating a technique for implementing an auto-shrink feature for a surgical trajectory.
  • a volume is defined by a matrix of values that corresponds to the sampling of a real object.
  • the matrix can be considered a 3D matrix.
  • 3D objects may result from scanning, or may simply be 3D objects defined in coordinate space.
  • Manipulation of the 3D objects is geometric manipulation in the coordinate space of Dextroscope/RadioDexter.
  • Surgical trajectories are defined as paths showing the corridor of the surgical procedure. These, and manipulations of these, offer an efficient way to plan surgery with 3D images.
  • the surgical trajectory is identified manually by a user using an interface.
  • each surgical trajectory is specified mainly by an entry point and a target point, and can also be defined by a colour and a volumetric arrangement (e.g. a cross-sectional diameter if it has a body in cylindrical form).
  • the interface enables a user to position target and entry points directly in 3D using the Dextroscope (or other) tools.
  • Each element or property (e.g. entry point, target point, colour and diameter, etc.) of a surgical trajectory is editable.
  • a surgical trajectory can be represented in a variety of ways.
  • the surgical trajectory 2 has an entry point 4 depicting the "entry point" in the subject for the surgical procedure and a target point 6, for locating adjacent the target object in the subject, usually an object which is to be the subject of the surgical procedure.
  • the surgical trajectory 2 can be defined in the xyz-plane as illustrated.
  • Imaging data 10 of a subject patient is illustrated.
  • the imaging data includes, in this example, CT, MRI and MRA data of the subject (a brain surgery case).
  • the surgical trajectory 18 is illustrated as a surgical corridor - the route the surgical procedure will take - having an entry point 18 and a target point 20 (partially obscured in the view of Figure 2).
  • the surgical corridor is defined as a graphical object defined by a volume, and the target/entry points.
  • the surgical trajectory is specified in three dimensions, with respect to another 3D object (in this case, a patient's volumetric data).
  • a second example is shown in context in Figure 2b, where part of a skull object has been removed to show the path of the surgical trajectory.
  • the surgical trajectory sub-module is available in the virtual reality environment of the RadioDexter application.
  • a user launches the VR environment 30 by clicking on the "VR" button in a 2D loader provided on a graphical user interface displayed in, for example, the Dextroscope display.
  • Objects and/or data sets defining the medical imaging data of the subject are loaded into the VR environment.
  • the user can activate the surgical trajectory planning mode by clicking on the "Surgical Path" button 32 in the virtual toolbar 34, which will activate the surgical trajectory module (or sub-module).
  • in the surgical trajectory module, the user can carry out the following actions: create/add a plan with the add surgical trajectory button 36;
  • edit a plan and/or modify its properties with buttons 37a, 37b, ..., 37n;
  • the user switches to the "Add Surgical Plan” tool by clicking the add surgical trajectory button 36.
  • a virtual surgical trajectory (not shown in Figure 3) may be illustrated. It is possible to add plural surgical trajectories, up to, say, a total of ten.
  • the user specifies a first point for the Surgical Trajectory: the target point 6 of Figure 1.
  • the user presses and holds a button on the Dextroscope stylus tool to set the target point, and then moves to the entry point before releasing the stylus button.
  • An alternative implementation allows a user to click once to set the first target point, and then a rubber band type surgical trajectory (stretching in response to the user's movement of the stylus) is created.
  • a second click at the second point, the entry point, defines and positions the entry point.
  • FIG 12a shows a screen capture illustrating a perspective of a 3D object 500 when a user has activated the "Add" surgical plan button 36 of Figure 3.
  • 3D object 500 comprises skull 502 and brain component 508 (partially obscured by skull 502) represented by prism 508.
  • brain component 508 and other brain components 504, 506 are represented as simple geometric shapes for the purposes of clarity.
  • typical components of the brain will comprise, say, a subject's blood vessels, brain matter and tumour matter, represented by imaged objects, or components of object 500.
  • a section 510 of skull 502 is removed from the view of object 500 as if a section of skull 502 has been cut away in order that a surgeon user might view the brain components one or more of which is to be the subject of the surgical procedure for which the plan is being made.
  • RadioDexter stylus 512 is shown adjacent skull 502 in the VR environment.
  • a surgical trajectory 514 (not shown in Figures 12a and 12b) is defined by two ends 514a, 514b and a volume, in this example a body having a generally cylindrical shape, the properties of which may be modified, as will be discussed in detail below.
  • RadioDexter stylus 512 is used by the user to insert the surgical trajectory 514 in object 500.
  • the stylus 512, positioning the surgical trajectory 514, is moved by the user towards the tumour object, cylinder 504.
  • "Clipping" of object 500 is described in greater detail below, but in clipping the object image to allow a user to select a target point on or at a brain object 504, the object is "clipped” in a plane perpendicular to the clipping direction as demonstrated by sectional face 504a of object (or object component) 504 and section face 506a of object (or object component) 506.
  • the object 504 is "stripped” or clipped back with movement of the stylus 512 and this is illustrated by a section 502a of the skull 502 being exposed. Any other objects in the 3D object are also clipped back.
  • the user activates (e.g. by "clicking") the surgical tool to select the surgical trajectory target point 514a in or on the tumour 504 (or to select the tumour 504 object itself).
  • the user has moved stylus 512 away from tumour 504 (stylus 512 is now shown as being semi-transparent 512a to represent this) to define the path of the surgical trajectory 514 away from surgical trajectory target point 514a. While doing so, the clipping plane recedes away from the target point in accordance with movement of the stylus 512a, for example, with respect to the tip of stylus 512a. Accordingly, the clipping plane moves away from objects 504, 506 (now shown as whole objects) and into the volume of object 509, which has a sectional face 509a across it where the clipping plane intersects with the volume of this object.
  • the user has moved stylus 512a outside of the object 500 and, therefore, the "clipping" effect no longer has any effect on object 500; skull 502 is displayed in its entirety save for the cut-out section 510 first illustrated in Figure 12a.
  • the user de-activates (or again activates) stylus 512a.
  • Adding a surgical trajectory will automatically set it as the selected object.
  • the user continues to use the "Add Surgical Plan" tool 36. If the tool shows a virtual surgical trajectory, the user uses it to add or move surgical trajectories. Otherwise, the user uses it to move existing surgical trajectories.
  • To modify the surgical trajectory object, the user must first place the tip of the stylus near the segment of interest of the surgical trajectory.
  • there are three segments of the object that a user can pick to control: the entry point, the target point, or the entire surgical trajectory as a whole including both the entry and target point.
  • the surgical trajectory's segment is highlighted to indicate that it is active.
  • the user places the tip of the stylus near the respective segment in the surgical trajectory.
  • the user places the tip of the stylus near the segment connecting the entry point and the target point.
  • When the tool is activated (i.e. while the stylus button is pressed) the affected segment(s) will be translated and/or oriented by following the tip of the stylus position and orientation.
  • in one approach, the coordinates of the point to be modified are simply made to be those of the tip of the stylus.
  • with this approach, however, the point will jump from its position to that of the tip.
  • a preferred technique instead preserves the distance between the tip of the stylus and the position of the point at the time the user starts pressing the stylus button. This approach provides better interaction since it avoids the jump. In addition, minor incremental adjustments are possible because the applied position is relative to the tip of the stylus rather than to its absolute position, as illustrated in the sketch below.
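The relative-drag behaviour described above might be realised along the following lines. This is a minimal illustrative sketch, not code from the patent; the class and method names are assumptions.

```python
import numpy as np

class PointDragger:
    """Sketch of the relative-drag technique: the offset between the stylus tip and
    the edited point is captured when the stylus button is pressed and preserved
    while dragging, so the point never jumps to the tip."""

    def __init__(self, point_position):
        self.point = np.asarray(point_position, dtype=float)  # e.g. entry or target point
        self._offset = None

    def on_button_press(self, stylus_tip):
        # Remember the vector from the tip to the point at press time.
        self._offset = self.point - np.asarray(stylus_tip, dtype=float)

    def on_stylus_move(self, stylus_tip):
        # While the button is held, the point follows the tip plus the stored offset,
        # so small stylus movements give small, incremental adjustments.
        if self._offset is not None:
            self.point = np.asarray(stylus_tip, dtype=float) + self._offset

    def on_button_release(self):
        self._offset = None

# Usage: the point keeps its 2-unit offset from the tip throughout the drag.
dragger = PointDragger([10.0, 0.0, 0.0])
dragger.on_button_press([8.0, 0.0, 0.0])
dragger.on_stylus_move([8.5, 1.0, 0.0])
print(dragger.point)  # [10.5, 1.0, 0.0]
```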
  • Figure 14a illustrates an object 1400 in the virtual space comprising skull 1402 having a section 1410 cut away for the user to see brain component 1408 similar or identical to those described above.
  • Stylus 1412 having a tip 1412b is illustrated as is a surgical trajectory 1414 previously created according to the techniques described.
  • the user selects the edit surgical trajectory button 37a of Figure 3 and, as shown in Figure 14b, moves the stylus 1412 until the tip 1412b of stylus 1412 is as close as possible to any of the three parts of the surgical trajectory: the entry point 1414b, the target point 1414a, and the body (generally referred to by 1414) of the surgical trajectory.
  • the selected part of surgical trajectory 1414 is highlighted and stylus 1412 is shown as being semi- transparent 1412a.
  • portion 1414c of surgical trajectory 1414 adjacent target point 1414a is highlighted.
  • portion 1414d of surgical trajectory 1414 adjacent entry point 1414b is highlighted. Then, the user presses the stylus button to move the selected part of the surgical trajectory. So, for example, entry point 1414b is moved by the user by manipulation of stylus 1412a and the body of surgical trajectory 1414 moves accordingly to a moved position (not shown) in which target point 1414a is not moved, but body 1414 of the surgical trajectory is modified to follow a route from target point 1414a to new entry point 1414b.
  • Moving a surgical trajectory will automatically set it as the selected object.
  • Deletion of a surgical trajectory is illustrated in Figure 15.
  • a user deletes a surgical trajectory by clicking on the "Delete Surgical Plan” button 38 illustrated in Figure 3 and then positions tip 1512b of the stylus adjacent surgical trajectory 1514 to select it for deletion.
  • Surgical trajectory 1514 (or portion thereof) is highlighted 1514e to indicate it is selected for deletion and the stylus is shown as being semitransparent 1512a. Clicking the stylus button deletes the surgical trajectory 1514 from view, optionally permanently for the session.
  • the user can modify a number of the other properties of the surgical trajectory.
  • the user clicks on the edit button 1637 of the virtual toolbar 1602 in virtual environment 1600 to modify the cross-sectional area, (e.g. diameter) of the surgical trajectory 1614.
  • Surgical trajectory 1614 has a highlighted portion 1614e to indicate that the surgical trajectory has been selected for modification/editing.
  • the user may also select the surgical trajectory to be modified using the object list if it is not already selected.
  • the user activates the "Diameter" slider 1637a with stylus 1612a to adjust the diameter of the selected surgical trajectory 1614.
  • the range of the diameter is, in the example of Figure 16, from 1 mm to 20 mm.
  • the diameter of the surgical trajectory 1614 has been reduced to provide a reduced-diameter surgical trajectory 1614f by activation of slider button 1637a.
  • the diameter of the surgical trajectory 1614 has been increased to provide an increased-diameter surgical trajectory 1614g by activation of slider button 1637a.
  • the user selects the surgical trajectory to be modified using the object list if it is not already selected. Then the user activates the Colour Editor (invoked by clicking on the "Colour” button 37b of Figure 3) to change the colour of the selected surgical trajectory.
  • the user selects the surgical trajectory to be modified using the object list if it is not already selected. Then, the user activates the "Transparency" slider 37c to adjust the transparency value of the selected surgical trajectory.
  • the range of transparency value is from 0.3 to 1.0.
  • the user can manipulate the 3D object with respect to the surgical trajectory. The manipulation can be a reorientation of the 3D object with respect to the surgical trajectory. Additionally, or alternatively, the 3D object can be repositioned on the display with respect to the surgical trajectory. Further, a clipping plane can be moved with respect to (e.g. along) an axis of the surgical trajectory to display graded sectional views of the 3D object. The clipping plane can be set perpendicular to the surgical trajectory axis, or parallel and on the surgical trajectory, oriented so that the user views a cross section of the objects intersected along the surgical trajectory.
  • the clip plane provides the functionality for the user to clip away all imaging data defining the subject patient's anatomy from the clip plane facing the entry point onwards.
  • This clipping plane is defined with respect to the surgical trajectory, say perpendicularly to the surgical trajectory. Moving the position of the clipping plane from the entry point to the target point and vice versa gives a perception of the cut across anatomy. This serves as a preview of the anatomy before the actual surgery.
  • Object 1700 is defined in virtual space and comprises skull 1702 with cut-away portion 1710 providing a view of brain component 1708 (partially obscured by skull 1702).
  • Surgical trajectory 1714 has been defined in accordance with the techniques described above. The user moves the clip plane along the longitudinal axis of the surgical trajectory 1714 by the user's activation of clip plane slider 1742 by stylus 1726.
  • A view of the clip plane intersecting with object 1700 is given in Figure 17b.
  • object 1700 has been stripped back along the axis of the surgical trajectory 1714 from the plane of the clip plane, back in the direction of the surgical trajectory entry point; thus, a section of skull 1702 is exposed and brain components 1704, 1706 and 1709 are now at least partially visible to the user. All objects (or object parts) from the clip plane back in the direction of the surgical trajectory target point (not shown) are removed from the image.
  • Another stripped-back view is given in Figure 17c, where the clipping has continued to the target point of the surgical trajectory; the surgical trajectory object is thus removed from the view of Figure 17c.
  • the surgical trajectory may be rotated under the control of the user.
  • an aligned surgical trajectory 1814 is shown in skull object 1802 of virtual space 1800.
  • the user may now rotate the surgical trajectory; this mode is represented by stylus 1826a being shown as semi-transparent.
  • Vertical (in the example of Figure 18a) line 1814a associated with circle 1814 (representing the surgical trajectory) defines an "up" vector, described above, to constrain the views, and define an 'up' direction, so that the display in the virtual planning can be made to correspond to the views that would be obtained during actual surgery.
  • the user moves stylus 1826a within the virtual environment relative to the surgical trajectory 1814 to rotate the surgical path.
  • the object is locked to the surgical path and rotates with it, responsive to the user's manipulation of stylus 1826a.
  • stylus 1826a of Figure 18a has been moved to new position 1826b and, with it, the object and skull 1802 and all brain components have rotated too, around the centre line 1814b of the axis of surgical trajectory 1814.
  • line 1814a associated with surgical trajectory 1814 defining the up vector is now more or less inverted - as represented by new position and direction of line 1814a - corresponding to the user's movement of stylus 1826b.
  • line portion 1814a of surgical trajectory defining the up vector re-aligns itself to define a new up vector, again as represented by line 1814a as illustrated in Figure 18c.
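By way of illustration only (this is not code from the patent), rotating the object about the centre line of the surgical trajectory, as described for Figure 18, amounts to a rotation about an arbitrary axis in space. The sketch below uses Rodrigues' rotation formula; the function names and the choice of the entry point as the point on the axis are assumptions.

```python
import numpy as np

def axis_angle_rotation(axis, angle_rad):
    """Rodrigues' formula: 3x3 matrix rotating points about a unit axis by angle_rad."""
    a = np.asarray(axis, dtype=float)
    a = a / np.linalg.norm(a)
    K = np.array([[0, -a[2], a[1]],
                  [a[2], 0, -a[0]],
                  [-a[1], a[0], 0]])
    return np.eye(3) + np.sin(angle_rad) * K + (1 - np.cos(angle_rad)) * (K @ K)

def rotate_about_trajectory(point, entry, target, angle_rad):
    """Rotate a world-space point about the trajectory's centre line (entry -> target)."""
    entry, target, point = (np.asarray(v, dtype=float) for v in (entry, target, point))
    R = axis_angle_rotation(target - entry, angle_rad)
    return entry + R @ (point - entry)   # rotate about a line passing through 'entry'

# Example: the tip of the "up" vector, rotated half a turn about the trajectory,
# ends up inverted, as described for Figure 18b.
entry, target = [0, 0, 0], [0, 0, 10]
up_tip = [0, 1, 0]
print(np.round(rotate_about_trajectory(up_tip, entry, target, np.pi), 6))  # ~[0, -1, 0]
```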
  • the surgical trajectory is particularly useful when it intersects the volume of the patient data.
  • the interface(s) described above allow a user to position the entry point and target point anywhere in the virtual workspace.
  • the virtual workspace 1900 comprises skull object 1902, surgical trajectory object 1914 and stylus tool object 1912.
  • surgical trajectory 1914 is generally elongate in comparison with the surgical trajectories of the other examples disclosed herein, as the user, with stylus 1912, has placed entry point 1914b at the point in virtual workspace 1900 as shown.
  • Portion 1914c of surgical trajectory object 1914 is highlighted to illustrate this portion is being acted upon.
  • the target point is omitted for the sake of clarity; it is, in the views of Figure 19, obscured by skull object 1902 as it is "inside" the skull object.
  • an optional "auto-shrink” feature allows automatic adjustment of the entry point to just outside the skull object at a user-configurable distance (e.g. 5, 10 or 15 cm) outside the skull object in the virtual environment 1900.
  • the apparatus may be configured for this feature to be activated when a new surgical trajectory is added, or when an existing surgical trajectory is modified.
  • One characteristic of a surgical trajectory is that the target point is located inside the object of interest while the entry point is located outside the object of interest. When this condition is satisfied, the surgical trajectory's auto-shrink feature will shorten the distance between the entry point 1914b and the target point by repositioning the entry point nearer to the target point while preserving the direction of the surgical trajectory, as illustrated in Figure 19b.
  • the entry point will be positioned just outside the object of interest.
  • Figure 4 illustrates an apparatus architecture for orienting a 3D object in a pre-defined orientation responsive to a user selection.
  • the apparatus architecture 100 comprises a source direction determination module 102 for determining a source direction matrix corresponding with the pre-defined orientation.
  • the source direction matrix defines the pre-defined direction.
  • the pre-defined orientation corresponds with a longitudinal axis of surgical trajectory 20 of Figure 2 and is defined by two points - the surgical trajectory entry and target points - in the coordinate space of Dextroscope/RadioDexter.
  • the source direction matrix (and other matrices discussed herein) could be defined by a matrix of a single row (a vector) or a matrix/vector of, say, polar coordinate values.
  • Apparatus 100 also comprises perspective direction determination module 104 for determining a perspective direction matrix.
  • the perspective direction matrix defines the user's perspective direction, and is defined by two points in the coordinate space of Dextroscope/RadioDexter: the surgical trajectory target point and a user's "virtual viewpoint" (generation of a "virtual viewpoint" being a well-known technique in computer image generation).
  • Apparatus architecture 100 comprises orientation matrix determination module 106 for determining an orientation rotation matrix for the 3D object from the source direction matrix and the perspective direction matrix.
  • the orientation matrix defines the operating parameters to orient the 3D object to the predefined orientation.
  • 3D object manipulation module 108 applies the orientation matrix to a 3D object defining display parameters for the 3D object responsive to the user selection, thereby to orient the 3D object to the pre-defined orientation.
  • architecture 100 also utilises stand-alone 3D object positioning module 110, described below with respect to Figures 6 and 7.
  • the modules of apparatus architecture 100 may interact with and be supported by ancillary architecture modules 112 including, at least, image generation module 114, database 116, display device 118 and I/O devices 120.
  • 3D object manipulation module 108 can be a stand-alone module or be incorporated in (e.g. as a sub-module of) image generation module 114.
  • the 3D object for display on display 118 is defined by a dataset stored in database 116.
  • I/O devices 120 comprise, say, a user keyboard, a computer mouse, and/or Dextroscope tools for generation and manipulation of images.
  • Figure 5 illustrates a process flow for the orientation of a 3D object with respect to the pre-defined orientation.
  • the process flow starts at step 150 where an image is rendered for display on a Dextroscope display 118 at step 152.
  • a source direction (in the present example, the direction of a longitudinal axis of a surgical trajectory) is determined by source direction determination module 102.
  • the source direction is the direction to which the 3D object is to be oriented. If the source direction is the direction of the longitudinal axis of the surgical trajectory, the 3D object will be oriented so that the surgical trajectory appears perpendicular to the plane of the display for the user's viewing.
  • a perspective direction matrix (e.g. a user's perspective direction) is then determined by perspective direction determination module 104.
  • orientation matrix determination module 106 determines the orientation rotation matrix for the 3D object from the source direction matrix and the perspective direction matrix.
  • the process loops at step 160 waiting for a user selection or instruction to orient the 3D object.
  • the 3D object manipulation module 108 applies the orientation matrix to the 3D object (to, say, the object data set) to orient the 3D object to the pre-defined orientation.
  • the 3D object is oriented with respect to the pre-defined orientation; in this example, the user looks along a longitudinal axis of a surgical trajectory.
  • the process ends at step 166.
  • source direction determination module 102 determines the source direction as a first surgical path in the 3D object.
  • the source direction determination module 102 may do this by taking due cognisance of coordinate values in the coordinate space for the first and second (entry and target) points in the surgical path.
  • the perspective direction determination module determines the perspective direction matrix to correspond with a user's perspective from a second path, the second path being a path between a point (e.g. the target or entry point of the surgical trajectory) in the 3D object and a user viewpoint, for example a "virtual viewpoint".
  • the apparatus is configured to detect a user perspective by receiving input signals from user tracking modules external to the apparatus.
  • the source direction is defined with a source direction matrix comprising a set of data entries defining two points in coordinate space: the entry and target points of the surgical trajectory.
  • the source direction also has an associated source rotation matrix which comprises data elements defining orientation data for the surgical trajectory/source direction such as axial tilt, rotational parameters etc.
  • the data elements of the source rotation matrix are modified in response to the changes.
  • the perspective direction is defined with a perspective direction matrix.
  • the perspective direction also has an associated perspective rotation matrix comprising a set of data entries for two points which define the perspective direction.
  • the two points are a "virtual viewpoint" and the surgical trajectory target point.
  • the perspective direction orientation matrix comprises data elements defining orientation data for the perspective direction.
  • the orientation rotation matrix is derived by orientation matrix determination module 106 from a manipulation of the source rotation matrix and the perspective rotation matrix. In this example, this manipulation comprises the multiplication (or product) of the perspective rotation matrix with an inverse of the source direction rotation matrix.
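The patent expresses the orientation rotation matrix as the product of the perspective rotation matrix with the inverse of the source direction rotation matrix. The sketch below (not code from the patent) computes what is, under the assumption that each direction's rotation matrix maps a common reference direction onto that direction, the equivalent net rotation: the single rotation taking the source direction (entry point to target point) onto the perspective direction (viewpoint to target point). Function names are assumptions.

```python
import numpy as np

def rotation_between(src_dir, dst_dir):
    """3x3 rotation taking unit vector src_dir onto unit vector dst_dir
    (axis-angle construction; handles the parallel and anti-parallel cases)."""
    s = np.asarray(src_dir, float); s = s / np.linalg.norm(s)
    d = np.asarray(dst_dir, float); d = d / np.linalg.norm(d)
    axis = np.cross(s, d)
    sin_a, cos_a = np.linalg.norm(axis), float(np.dot(s, d))
    if sin_a < 1e-12:                       # parallel or anti-parallel directions
        if cos_a > 0:
            return np.eye(3)
        # 180-degree turn about any axis perpendicular to s
        perp = np.cross(s, [1.0, 0.0, 0.0])
        if np.linalg.norm(perp) < 1e-6:
            perp = np.cross(s, [0.0, 1.0, 0.0])
        perp = perp / np.linalg.norm(perp)
        return 2.0 * np.outer(perp, perp) - np.eye(3)
    axis = axis / sin_a
    K = np.array([[0, -axis[2], axis[1]],
                  [axis[2], 0, -axis[0]],
                  [-axis[1], axis[0], 0]])
    return np.eye(3) + sin_a * K + (1.0 - cos_a) * (K @ K)

# Source direction: surgical trajectory, entry point -> target point.
# Perspective direction: viewpoint -> target point.
entry, target, viewpoint = map(np.array, ([0., 0., 0.], [10., 0., 0.], [5., 0., 50.]))
S = target - entry
D = target - viewpoint
M = rotation_between(S, D)               # orientation rotation matrix
print(np.allclose(M @ (S / np.linalg.norm(S)), D / np.linalg.norm(D)))  # True
```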
  • the 3D object comprises a plurality of objects which, collectively, are defined as a grouped "root object".
  • the root object can be considered as a single object and vastly simplifies on-screen manipulation of the multiple objects defining the root object.
  • Objects in the grouped root object include multi-modal imaging data gathered from the CT and MRI scans, etc.
  • the root object has a root object rotation matrix and the 3D object manipulation module 108 orients this object to the pre-defined orientation of the surgical trajectory by applying the orientation rotation matrix to the root object rotation matrix. This is done by matrix multiplication, and the result of the multiplication defines a new or updated root object rotation matrix.
  • the new/updated root object rotation matrix defines the (re)oriented 3D object at step 164.
  • Figure 6 illustrates an architecture for the imaging positioning module 110 illustrated in dashed lines in Figure 4.
  • the 3D object positioning techniques can be used either in stand-alone mode or in conjunction with the 3D object orientation techniques of Figures 4 and 5, thereby to provide a "snap" where the 3D object is both re-oriented and re-positioned in the display responsive to a user selection of the "align" button 40 of Figure 3.
  • The synergy provided by utilising both orientation and positioning algorithms of Figures 5 and 7 respectively is particularly advantageous; the 3D object is both oriented to the pre-defined direction of the surgical trajectory and re-positioned for display on display 118. Alternatively, one or both of these algorithms can be launched on activation of the "align" button 40, or equivalent.
  • Figure 6 illustrates a source position determination module 122 for determining a source position for the 3D object. The source position is the "start" position of the 3D object prior to re-positioning.
  • 3D object positioning module 110 also comprises a translated position determination module 124 for determining a translated position for the 3D object. The translated position is the position to which the 3D object is to be re-positioned.
  • Translation matrix determination module 126 is for determining a translation matrix for the 3D object from the source position to the translated position.
  • 3D object manipulation module (which can be either the module 108 of Figure 4 or a second module 128) applies the translation matrix to the 3D object responsive to the user selection, thereby positioning the 3D object to the translated position.
  • Figure 7 illustrates a process flow for the positioning of a 3D object to a translated position with the architecture of Figure 6.
  • the translated position is the centre of a display screen or the centre of the projected image.
  • the 3D object can be positioned for stereoscopic viewing.
  • the 3D object is, optionally, rotated and translated to be at an optimal viewing point for stereo, taking into account its position on the screen of display 118, as well as depth for the nearest point in the 3D object so that it is not too close to the viewpoint, so as to avoid difficulty in stereo convergence.
  • the process of Figure 7 starts at step 200.
  • this process can be used either as a stand-alone module to provide positioning of the 3D object only or in conjunction with the process of Figure 5 also to provide orientation. Therefore, optionally, the 3D object is already oriented as at step 164 of Figure 5.
  • alternatively, the process proceeds directly after starting (i.e. without 3D object orientation) to determine the source position at step 204 with source position determination module 122.
  • translated position determination module 124 determines the position to which the 3D object is to be repositioned, the "translated" position.
  • Translation matrix determination module 126 determines a translation matrix defining the movement of the 3D object from the source to the translated position.
  • the process loops round at step 210 waiting for the user selection of the re-positioning (by activation of the "Align" button 40 of Figure 3 or equivalent) and upon detection of the user selection, 3D object manipulation module 108/128 applies the translation matrix to the 3D object data set (or the root object data) to position the 3D object at the translated position at step 214.
  • a real synergy can be provided by a "snap" of the 3D object by running 3D object orientation steps 162, 164 of Figure 5 in parallel with steps 212, 214 of Figure 7, so that the 3D object is "snapped" by re-orienting and re-positioning it.
  • the process ends at step 216.
  • the disclosed techniques are directed at reorientation and repositioning of a surgical trajectory, together with the other objects.
  • orienting is performed to all objects in the scene with reference to the position and orientation of a surgical trajectory.
  • a surgical trajectory is reoriented to be perpendicular to a viewing plane (the plane in which the image is oriented for viewing) and repositioned to an arbitrary position (in this implementation, it is positioned approximately at the centre of the viewing plane).
  • the representation of the viewing plane varies according to the different modes of operation.
  • in Zoom Box mode (ZM), the viewing plane is the front plane (the nearest plane the user can see) of the Zoom Box.
  • in Cut Plane mode (CM), the Cut Plane is used as the viewing plane.
  • NM denotes Normal Mode.
  • a clipping plane can be moved with respect to (e.g. along, in an orientation perpendicular to a surgical trajectory) an axis of the surgical trajectory to display graded sectional views of the 3D object.
  • Use of the clip plane provides functionality for the user to clip away all imaging data defining the subject patient's anatomy from the clip plane facing the entry point onwards.
  • the clipping plane can be either perpendicular to the surgical trajectory, or constrained along the surgical trajectory, oriented to be seen perpendicularly by the user
  • An algorithm to provide this functionality requires that a "clipping point" along the surgical trajectory is defined.
  • the clipping plane is defined corresponding with and in relation to the clipping point and the sectioned patient subject's image is generated for display accordingly.
  • the distance along the length of the surgical trajectory from the entry point to the target point is normalised, so that the clipping point is defined with respect to proportion of the normalised length along the surgical trajectory.
  • no translation of the 3D object for display is required and the clipped 3D object is displayed.
  • Clipping functionality is achieved by clipping from the surgical trajectory entry point plane to the surgical trajectory target point plane when the slider 42 is slid from left to right.
  • the entry point plane is a plane at entry point 4 that is perpendicular to the surgical trajectory direction
  • the target point plane is a plane at target point 6 that is perpendicular to the surgical trajectory direction.
  • Numerous clipping planes can be generated from an entry point plane to a target point plane with a pre-specified interval between two consecutive planes. At any one interval, only one plane is displayed. In the current implementation, a total of 101 clipping planes can be viewed from a normalized slider bar showing values from 0.00 (at the entry point) to 1.00 (at the target point) with interval 0.01.
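A minimal sketch (not code from the patent) of how the clipping plane for a given normalized slider value might be computed: the plane passes through the point that fraction of the way along the trajectory and is perpendicular to the trajectory direction. The plane-equation convention and function name are assumptions.

```python
import numpy as np

def clip_plane_along_trajectory(entry, target, p):
    """Clipping plane for normalized slider value p (0.0 at the entry point plane,
    1.0 at the target point plane). The plane passes through a point p of the way
    along the trajectory and is perpendicular to the trajectory direction."""
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    direction = target - entry
    normal = direction / np.linalg.norm(direction)   # entry -> target
    point_on_plane = entry + p * direction
    # Plane expressed as (normal, d) with n.x + d = 0, an OpenGL-style clip plane.
    d = -float(np.dot(normal, point_on_plane))
    return normal, d

# 101 planes at interval 0.01, as in the described implementation; only the plane
# for the current slider value is active at any one time.
entry, target = [0.0, 0.0, 0.0], [0.0, 0.0, 50.0]
for p in np.linspace(0.0, 1.0, 101):
    normal, d = clip_plane_along_trajectory(entry, target, p)
print(normal, d)   # last plane: normal [0, 0, 1], d = -50.0 (through the target point)
```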
  • Three modes of operation are referred to herein: Normal Mode (NM), Zoom Box mode (ZM) and Cut Plane mode (CM).
  • a Zoom Box is a defined area as described in, for example, PCT/US03/38078 noted above.
  • a zoom box is defined as a rectangular prism shape/space volume in which 3D objects inside the volume are displayed in enlarged scale while 3D objects or parts of the objects outside of the zoom box are not displayed.
  • a zoom box is generated for display in the Dextroscope display 118 when the zoom factor of the displayed image is increased by the user sliding the zoom slider 37d of Figure 3.
  • the Zoom Box helps to maintain the interaction speed of the application while still allowing a user to view a volume of interest of the objects within the Zoom Box. Without the Zoom Box, the objects can occupy up to the whole display volume and the interaction will be severely limited as display objects will obscure the virtual control panel of Figure 3. The interaction speed may also be slower than normal since the whole scene has to be rendered (and iteratively re-rendered), responsive to user manipulations of the 3D object.
  • when re-positioning the 3D object as described above but in Zoom Box mode of operation, the apparatus generates for display an enlarged portion of the 3D object in the zoom box, and the translated position determination module is configured to determine the translated position for the 3D object enlarged portion as a centre point of a plane of the zoom box.
  • the enlarged 3D object is re-positioned at the centre of the "front" (from a user's perspective) plane of the zoom box.
  • both the Align and Clip operations act on the clipping plane that is used to clip the objects.
  • the clipping plane moves with respect to the objects while keeping the objects stationary.
  • the clipping uses OpenGL clipping planes that support graphics hardware acceleration.
  • OpenGL (Open Graphics Library, by Silicon Graphics Inc.) is a standard specification defining a cross-language, cross-platform API (application programming interface) for writing applications that produce, amongst other things, 3D computer graphics.
  • OpenGL supports more than six planes for the zoom box, but use of six planes is chosen in the present example in order to avoid a degradation of performance which could happen with use of more than six planes.
  • when the zoom box front-facing plane acts as the clipping plane, an alternative functionality can be assigned to the clip plane slider button 42 so that this can be used to move the object, say back and forth, on the display 118.
  • the clip plane slider button 42 is renamed in the virtual toolbar of Figure 3 to an appropriate title to reflect the change in functionality. No software clipping planes are used, since these would severely impact the graphics rendering speed and hence the speed of interaction.
  • when the objects are viewed at normal magnifications and the Cut Plane is enabled, the Cut Plane will act as the Align and Clip operation's clipping plane. If the user "snaps" when the Cut Plane is active, the Cut Plane will be reoriented perpendicular to the user's viewpoint. When the user enables the sliding of the clipping plane along the perpendicular path of the surgical trajectory, the objects move with respect to the Cut Plane while keeping the Cut Plane (and hence the clipping plane) stationary.
  • a surgical trajectory 250 is shown oriented with respect to a 3D object of a subject patient's skull 254.
  • a surgical trajectory is superimposed with the skull 254 at point 252.
  • a Dextroscope virtual tool 256 and virtual toolbar 34 are shown rendered in the image with the virtual tool 256 poised over virtual toolbar 34 ready to activate the align button 40.
  • the user (not shown) has utilised virtual tool 256 to activate the align button 40 and a "snap" has been effected utilising the algorithms of Figures 5 and 7 so that the 3D object is oriented in a direction of the surgical trajectory and re-positioned in the axis 258 of the display.
  • the surgical trajectory 250 is represented in this view by circle 252, superimposed on the skull object 254 at 252.
  • Line object 1814a (and its operation and functionality) is described with reference to Figure 18.
  • In Figure 9, a clip along the surgical trajectory when in Zoom Box mode is illustrated.
  • the "front" face of the Zoom Box acts as the clipping plane.
  • Figure 9a shows surgical trajectory 250 in aligned mode after the snap from activation of "align" button 40.
  • a magnified view of the object is shown in the zoom box 260 including a magnified view of the skull 254 and brain component represented by cylinder 262 for the sake of simplicity.
  • the surgical trajectory itself is made visible as the point of intersection of the surgical trajectory and the plane cutting the 3D objects.
  • the surgical trajectory itself is not displayed as a 3D object in stereo in zoom mode, since if it is displayed as a line in front of a user's two eyes, the user may find it difficult to converge on the surgical trajectory (since it appears as two diverging lines) and it may disturb the user's view.
  • in zoom mode, a point of intersection is provided, displayed over all the 3D objects (that is, not obscured by them), so that the surgeon can see the point where the surgical trajectory enters the 3D object. It may not be sufficient simply to hide the surgical trajectory intersection point, since then the "front" plane of the zoom box could cut the 3D object at points that are not the surgical trajectory points (if the 3D object has a protrusion closer to the viewpoint than the surgical trajectory intersection point).
  • When the surgical trajectory is in Align mode and the user initiates a zoom operation, the surgical trajectory will be given priority for determining the zoom point.
  • the apparatus casts a "ray" (not shown) starting from the surgical trajectory's entry point (as illustrated by, for example, entry point 18 of Figure 2a) with a direction from the entry point to the target point (e.g. target point 20 of Figure 2a). If a visible part of an object is intersected (i.e. a visible voxel of a volume or a visible triangle of a mesh), this location will be used as the new zoom point. If there is no intersection, then the usual algorithm for computing the zoom point will be used.
  • an additional criterion for the surgical trajectory's generated zoom point is that it should be located inside the zoom box. If the computed result is not within the zoom box, the nearest side of the zoom box's center will be used as the new zoom point.
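The zoom-point priority rule described above might be sketched as follows. This is not code from the patent; the intersection callback, the fallback zoom point and the axis-aligned clamping of the result into the zoom box are all assumptions.

```python
import numpy as np

def zoom_point_for_trajectory(entry, target, intersect_visible, default_zoom_point,
                              box_min, box_max):
    """Zoom-point selection when a surgical trajectory is in Align mode (sketch).

    intersect_visible(origin, direction) is a hypothetical callback returning the
    first visible voxel/triangle hit along the ray, or None if there is no hit.
    """
    entry, target = np.asarray(entry, float), np.asarray(target, float)
    direction = target - entry
    direction = direction / np.linalg.norm(direction)

    hit = intersect_visible(entry, direction)
    zoom_point = np.asarray(hit, float) if hit is not None else np.asarray(default_zoom_point, float)

    # Keep the zoom point inside the zoom box (assumed axis-aligned here).
    return np.clip(zoom_point, np.asarray(box_min, float), np.asarray(box_max, float))

# Example: a stand-in intersection callback reporting a hit at (0, 0, 20).
zp = zoom_point_for_trajectory(entry=[0, 0, 60], target=[0, 0, 0],
                               intersect_visible=lambda origin, direction: np.array([0.0, 0.0, 20.0]),
                               default_zoom_point=[0, 0, 0],
                               box_min=[-15, -15, -15], box_max=[15, 15, 15])
print(zp)   # hit at z = 20, clamped into the zoom box: [0, 0, 15]
```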
  • Front plane 264 of zoom box 260 acts as the cutting plane.
  • skull object 254 is "stripped back" with movement of the zoom box front plane 264 along the surgical trajectory 250 responsive to movement of the slider 42, as illustrated by section 254a of skull 254.
  • the view presented to the user is of the brain components 262, 264, 266, 268, 270 represented by simple geometric objects for the sake of simplicity.
  • a decision flow chart illustrating the various modes of operation of the described techniques is given.
  • the process starts at step 300.
  • a decision is made as to whether the scene is to be oriented with reference to a surgical trajectory. If the decision returns "Yes", the algorithm checks to determine whether the RadioDexter software is operating in Zoom Box mode at step 304. If it is detected that Zoom Box mode is activated, the scene is oriented with reference to the surgical trajectory in Zoom Box mode at step 306. If the decision at step 304 returns "No", the algorithm checks to determine whether the RadioDexter software is operating in Cut Plane mode at step 308. If so, the scene is oriented with reference to a surgical trajectory in cut plane mode at step 310.
  • if the check at step 308 also returns "No", the algorithm determines that the RadioDexter software is operating in normal mode at step 312 and orients the scene accordingly. Regardless of the outcome of the checks at steps 304 and 308, a next check is made at step 314 to determine whether a clip plane is to move along the surgical trajectory. If "Yes", the algorithm checks to determine whether Zoom Box is activated at step 316. If it is activated, the clip plane is moved along the surgical trajectory in Zoom Box mode at step 318 as described above. If the Zoom Box is not activated, the algorithm checks to determine whether Cut Plane is activated at step 320 and, if so, the clip plane is moved along a surgical trajectory in Cut Plane mode at step 322. If neither Zoom Box nor Cut Plane modes are activated, the algorithm adopts the default position of normal mode and moves a clip plane along the surgical trajectory in normal mode at step 324. The process ends at step 326.
  • S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
  • D represents a perspective direction in world coordinates. Set the value of D to the direction from the viewpoint to the surgical trajectory's target point.
  • M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
  • ⁇ S represents a source position in world coordinates. Set the value of S as the world position of the surgical trajectory's entry point. D represents a desired position in world coordinates. Set the value as the center of the projected image.
  • V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
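Given the rotation matrix M and translation vector V defined in the listing above, the "snap" might be applied to the grouped root object's 4x4 world transform along the following lines. This is an illustrative sketch rather than the patent's implementation; the column-vector convention, the composition order and the choice of the trajectory entry point as the rotation pivot are assumptions.

```python
import numpy as np

def snap_root_object(root_world, M, V, pivot):
    """Apply the Align "snap" to a root object's 4x4 world transform (sketch).

    root_world : 4x4 column-vector world transform of the grouped root object
    M          : 3x3 orientation rotation matrix (source direction -> perspective direction)
    V          : 3-vector translation (desired position - source position)
    pivot      : world point the rotation is applied about (e.g. the trajectory entry point)
    """
    pivot = np.asarray(pivot, float)

    rotate = np.eye(4)
    rotate[:3, :3] = M
    rotate[:3, 3] = pivot - M @ pivot          # rotate about the pivot, not the world origin

    translate = np.eye(4)
    translate[:3, 3] = np.asarray(V, float)

    # New world transform: rotate about the pivot, then translate to the desired position.
    return translate @ rotate @ np.asarray(root_world, float)

# Example: identity root object, 90-degree rotation about Y, entry point moved to the origin.
M = np.array([[0., 0., 1.], [0., 1., 0.], [-1., 0., 0.]])
entry = np.array([10., 0., 0.])
new_world = snap_root_object(np.eye(4), M, V=-entry, pivot=entry)
print(new_world @ np.array([10., 0., 0., 1.]))   # the entry point lands at the origin
```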
  • S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
  • D represents a perspective direction in world coordinates. Set the value of D to the direction from the viewpoint to the surgical trajectory's target point.
  • M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
  • S represents a source position in world coordinates. Set the value of S as the world position of the surgical trajectory's entry point.
  • D represents a desired position in world coordinates. Set the value as the Zoom Box's center point projected to the Zoom Box plane in world coordinates.
  • V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
  • S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
  • D represents a perspective direction in world coordinates. Set the value to the negative z-direction.
  • M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
  • S represents a source position in world coordinates. Set the value of S as the world position of the surgical trajectory's entry point.
  • D represents a desired position in world coordinates. Set the value as the center point of the Cut Plane.
  • V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
  • S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
  • D represents a perspective direction in world coordinates. Set the value of D to the direction from the viewpoint to the surgical trajectory's target point.
  • M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
  • ⁇ p represents the normalized distance from the entry point (0.0) to the target point (1.0).
  • D represents a desired position in world coordinates. Set the value as the Zoom Box's center point projected to the front face of the Zoom Box in world coordinates.
  • V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
  • S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
  • D represents a perspective direction in world coordinates. Set the value to the negative z-direction.
• M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix (a consolidated sketch of this computation is given after this list).
• p represents the normalized distance from the entry point (0.0) to the target point (1.0).
  • S represents a source position in world coordinates. Set the value of S as a point along the surgical trajectory from the entry point to the target point.
  • D represents a desired position in world coordinates. Set the value as the center point of the Cut Plane.
  • V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
  • R represents the direction D's rotation matrix in the Root Object's object coordinate space.
• p represents the normalized distance from the entry point (0.0) to the target point (1.0).
  • R represents the point D in the Root Object's object coordinate space.
• Pseudo code for modifying the entry point when editing the surgical trajectory is now given. _V is a vector declared as a global variable.
• E represents the position of the entry point in world space
  • T represents the position of the target point in world space
  • D represents the direction of the surgical trajectory in world space
  • P represents the world position of the stylus
  • E represents the position of the entry point in world space
  • T represents the position of the target point in world space
  • D represents the direction of the surgical trajectory in world space
  • P represents the world position of the stylus
  • E represents the position of the entry point in world space
  • T represents the position of the target point in world space
  • D represents the direction of the surgical trajectory in world space
  • R represents the ray formed by the initial position E and direction D
  • P represents the nearest intersection point of the ray R and the object of interest(s) in world space
  • V represents the vector from T to P
  • N represents the length of the surgical trajectory protruding out of the object of interest(s) in world space
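
The S/D/M/V definitions repeated above all reduce to the same two operations: a rotation M that takes the source direction (the surgical trajectory, entry point to target point) onto the desired viewing direction, and a translation V that takes the source position onto the desired display position. The following is a minimal, non-authoritative sketch of those two steps in Python/NumPy; the numeric values are placeholders, and the rotation_between helper, which builds the rotation directly from the two unit directions, is an assumed equivalent of composing the inverse of S's rotation matrix with D's rotation matrix.

```python
import numpy as np

def rotation_between(src_dir, dst_dir):
    """Rotation matrix taking unit vector src_dir onto unit vector dst_dir
    (Rodrigues' rotation formula)."""
    s = np.asarray(src_dir, float); s /= np.linalg.norm(s)
    d = np.asarray(dst_dir, float); d /= np.linalg.norm(d)
    v = np.cross(s, d)                 # rotation axis (unnormalised)
    c = np.dot(s, d)                   # cosine of the rotation angle
    if np.isclose(c, -1.0):            # opposite directions: 180 degree turn
        axis = np.cross(s, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-6:
            axis = np.cross(s, [0.0, 1.0, 0.0])
        axis /= np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    vx = np.array([[0.0, -v[2], v[1]],
                   [v[2], 0.0, -v[0]],
                   [-v[1], v[0], 0.0]])
    return np.eye(3) + vx + vx @ vx / (1.0 + c)

# S: trajectory direction (entry -> target); D: viewing direction (here the
# negative z-direction, as in the Cut Plane case). Placeholder values only.
S_dir = np.array([0.3, 0.2, 1.0])
D_dir = np.array([0.0, 0.0, -1.0])
M = rotation_between(S_dir, D_dir)     # orientation rotation matrix

# S: source position (entry point); D: desired position (e.g. image centre).
S_pos = np.array([12.0, -4.0, 30.0])
D_pos = np.array([0.0, 0.0, 0.0])
V = D_pos - S_pos                      # translation vector, D minus S
```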

Abstract

An apparatus orients a three-dimensional object in a pre-defined orientation. A source direction determination module determines a source direction matrix corresponding with the pre-defined orientation. A perspective direction determination module determines a perspective direction matrix. An orientation matrix determination module determines an orientation rotation matrix for the three-dimensional object from the source direction matrix and the perspective direction matrix. A three-dimensional object manipulation module applies the orientation matrix to a three-dimensional object to orient the three-dimensional object. Optionally, the apparatus positions the three-dimensional object. A source position determination module determines a source position. A translated position determination module determines a translated position. A translation matrix determination module determines a translation matrix for the three-dimensional object from the source position and the translated position. The three-dimensional object is positioned by applying the translation matrix to the three-dimensional object responsive to the user selection.

Description

APPARATUS AND METHOD FOR MANIPULATING A THREE-DIMENSIONAL OBJECT/VOLUME
The invention relates to an apparatus and method for orienting a three-dimensional (3D) object in a pre-defined orientation responsive to a user selection. The invention also relates to an apparatus and method for positioning a 3D object to a translated position responsive to a user selection. The invention also relates to an apparatus and method for defining a surgical trajectory in a 3D object displaying medical imaging data.
The term 3D object may also encompass a 3D volume, voxel, etc. in, say, the space of a virtual reality environment.
Volume Interactions Pte Ltd of Singapore provides expertise in interactive three-dimensional virtual reality volumetric imaging techniques. A family of products for implementing such techniques is the Dextroscope® family. The Dextroscope is an interactive console allowing a user to interact intuitively, comfortably and easily with virtual-reality 3D graphics generated by one or more modules from the RadioDexter™ suite of software programs. The Dextroscope is one of the hardware platforms on which RadioDexter can be used. It is designed as a personal planning system, although it also allows collaborative work as a team of people can view the data simultaneously. The Dextroscope allows a user to perceive stereoscopic virtual images within natural reach, in front of a user's eyes. This can be achieved by reflecting an image displayed by a monitor, so that a user's hands are in fact moving in the workspace behind the mirror and interacting with the virtual image produced by the reflection. This allows the user to manipulate the 3D object directly with both hands without obscuring it. The fact that object and hand movements take place in the same apparent position allows for careful, dexterous work.
The user's interaction with the 3D object is not just hand-eye coordinated, but hand-eye collocated, which results in enhanced depth-cues. Liquid crystal display (LCD) shutter glasses are worn by the user to perceive the 3D image and plural users can simultaneously view and discuss the 3D data. A Dextroscope is described in, for example, commonly-assigned International Patent Application No. PCT/SG01/00048, herein incorporated by reference in its entirety.
The RadioDexter suite of medical imaging visualisation software modules offers realtime volumetric and 3D surface rendering functionality combined with state-of-the-art Virtual Reality (VR) technology. Radiologists and surgeons may work with complex multimodal imaging data with comfort, intuition and speed - for clear 3D understanding of anatomy and improved treatment planning.
RadioDexter generates a stereoscopic Virtual Reality environment in which a user can work interactively with real-time 3D data by "reaching into it" with both hands in a working volume in, say, the Dextroscope. RadioDexter modules process data from Computed Tomography (CT) or Magnetic Resonance Imaging (MRI) scanning processes, as well as volumetric ultrasound etc. A variety of virtual tools for visualization and surgical planning are accessible while working inside RadioDexter's 3D virtual workspace. This allows the user to work with complex multi-modal imaging data in a fast and intuitive way, for a clear three-dimensional understanding of the patient's anatomy and for improved treatment planning.
Functionality provided by RadioDexter modules includes: perspective stereoscopic shaded volume and surface rendering; multimodality image fusion (rendering of several volumes together); automatic volume registration and verification of the registered objects; segmentation with a click of a button; advanced surgical exploration tools - cropping, cutting, drilling, restoring, cloning, roaming, linear & volumetric measurement; easy-to-use colour and transparency mapping with volume rendering presets; capture of 3D interactive manipulations, with stereoscopic playback mode and video export capabilities to AVI; and easy reporting with tools incorporating 3D images, labeling, HTML exporting and printing.
Some of these features and others are described in, for example, the following patent applications, herein incorporated by reference in their entirety: PCT/US03/38077; PCT/US03/38078; PCT/US03/38053; PCT/EP04/53155; US 60/845,654; PCT/EP2005/056269; PCT/EP2005/056273; PCT/EP2005/056275; and PCT/SG2007/000002. RadioDexter can also optionally import Diffusion Weighted Imaging (DWI) datasets, generate Diffusion Tensor Imaging (DTI) fiber tracts and visualize them.
The invention is defined in the independent claims. Some optional features are defined in the dependent claims.
A disclosed technique is for the generation of a Surgical Trajectory. Other disclosed techniques are for manipulating 3D objects/volumes/voxels to a pre-defined orientation and/or position. The pre-defined orientation/position may be defined with respect to, for example, a surgical trajectory.
In one implementation, a new module of the RadioDexter suite of programs allows for the provision of a "Surgical Plan" which defines a Surgical Trajectory. In one definition, a Surgical Trajectory is a plan of a trajectory of a surgeon's route during a planned surgical procedure. Surgical trajectory planning is an important part of pre-surgical planning. RadioDexter allows a user to map out a pre-defined plan/route/trajectory for the surgery, taking due cognisance of subject patient data from medical imaging scanning processes. The Surgical Trajectory may be defined by target and entry point pairs, each of which can be defined by a user as 3D point pairs in the coordinate space of Dextroscope/RadioDexter.
When used to implement one or more of the techniques disclosed herein, RadioDexter incorporates a feature which allows automatic orientation of an object, responsive to the user's selection, in a predefined orientation along, say, a Surgical Trajectory. Such an orientation allows a user to look into and along the pre-defined orientation. This is a particularly useful tool when the pre-defined orientation defines a Surgical Trajectory. In another useful implementation, the pre-defined direction is an "up" direction, which might be up in the sense of a patient's normal upright (e.g. when sitting or standing) position or might correspond with a prone position that the patient might be in during surgery. In this implementation, an "up" vector or matrix is added to the 3D object in the same way as a Surgical Trajectory (described below) to allow the 3D object to be oriented with respect to the vector/matrix. Alternatively or additionally, the surgical trajectory has a vector orthogonal to the trajectory to indicate its orientation. This vector is used to constrain the views, and define an 'up' direction, so that the display in the virtual planning can be made to correspond to the views that would be obtained during actual surgery.
Alternatively or additionally, the 3D object is (re)positioned by positioning a source position (say a point on the Surgical Trajectory) with respect to a pre-defined point on the display. Again, this process is executed responsive to the user selection. This can allow the re-oriented image to be displayed in an optimal position in, say, the centre of a display screen. This is particularly useful if the image is being displayed in, say, a zoom mode prior to the re-orientation, where the surgical trajectory is "off-centre" in the display screen prior to the orientation/positioning. As a further option, an adjustable cutting or clipping plane oriented with respect to the surgical trajectory - for example oriented perpendicularly - is also provided. The user can move this cutting/clipping plane along a surgical path and view a cross-sectional view of the 3D object and all the objects along the surgical path. This allows a surgeon to explore carefully the surgical trajectory and objects in and around the path of the procedure.
The present invention will now be described, by way of example only, and with reference to the accompanying drawings in which:
Figure 1 is a diagram illustrating a perspective view of a surgical trajectory object;
Figure 2 is a series of screen captures from RadioDexter illustrating a virtual reality interface showing a 3D object (a brain surgery case with CT, MRI, MRA) and a surgical trajectory object;
Figure 3 is a screen capture illustrating a virtual reality interface for the creation and/or modification and/or deletion of a surgical trajectory;
Figure 4 is a block diagram illustrating a first architecture for orienting a 3D object in a pre-defined orientation responsive to a user selection;
Figure 5 is a process flow diagram illustrating a first process flow for orienting a 3D object with the architecture of Figure 4;
Figure 6 is a block diagram illustrating a second architecture for positioning a 3D object to a translated position responsive to a user selection;
Figure 7 is a process flow diagram illustrating a second process flow for positioning a 3D object with the architecture of Figure 6;
Figure 8 is a series of screen captures illustrating a snap/alignment of a 3D object in normal mode using the processes of Figures 5 and 7;
Figure 9 is a series of screen captures illustrating a snap/alignment of a 3D object in zoom box mode;
Figure 10 is a series of screen captures illustrating a snap/alignment of a 3D object in cutting plane mode;
Figure 11 is a decision flow chart illustrating the various modes of operation of the described techniques;
Figure 12 is a series of screen captures illustrating the addition of a surgical trajectory;
Figure 13 (not used);
Figure 14 is a series of screen captures illustrating the editing of a surgical trajectory;
Figure 15 is a series of screen captures illustrating the deletion of a surgical trajectory;
Figure 16 is a series of screen captures illustrating a process for editing a diameter of a surgical trajectory;
Figure 17 is a series of screen captures illustrating implementation of cutting plane techniques;
Figure 18 is a series of screen captures illustrating a technique for rotating a surgical trajectory; and
Figure 19 is a series of screen captures illustrating a technique for implementing an auto-shrink feature for a surgical trajectory.
In the context of the exemplary description of the disclosed techniques below a volume is defined by a matrix of values that corresponds to the sampling of a real object. The matrix can be considered a 3D matrix. 3D objects may result from scanning, or are just 3D objects defined in coordinate space. Manipulation of the 3D objects is geometric manipulation in the coordinate space of Dextroscope/Radiodexter.
Surgical trajectories are defined as paths showing the corridor of the surgical procedure. These, and manipulations of these, offer an efficient way to plan surgery with 3D images.
The surgical trajectory is identified manually by a user using an interface. In Dextroscope and/or RadioDexter each surgical trajectory is specified mainly by an entry point and a target point, and can also be defined by a colour and a volumetric arrangement (e.g. cross-sectional diameter if having a body in cylindrical form). The interface enables a user to position target and entry points directly in 3D using the Dextroscope (or other) tools. Each element or property (e.g. entry point, target point, colour and diameter, etc.) of a surgical trajectory is editable. A surgical trajectory can be represented in a variety of geometrical shapes, such as a hollow tube having a diameter and a pointed end signifying the target point. One such surgical trajectory object 2 is illustrated in Figure 1. The surgical trajectory 2 has an entry point 4 depicting the "entry point" in the subject for the surgical procedure and a target point 6, for locating adjacent the target object in the subject, usually an object which is to be the subject of the surgical procedure such as, say, a tumour. The surgical trajectory 2 can be defined in the xyz-plane as illustrated.
An example of a surgical trajectory is shown in its "normal" context in Figure 2a. Medical imaging data 10 of a subject patient is illustrated. The imaging data includes
brain 12, skull 14 and vessel 16 image data derived from CT, MRI, ultrasound imaging data and the like. The surgical trajectory 18 is illustrated as a surgical corridor - the route the surgical procedure will take - having an entry point 18 and a target point 20 (partially obscured in the view of Figure 2). The surgical corridor is defined as a graphical object defined by a volume, and the target/entry points. A surgical trajectory
is specified in three dimensions, with respect to another 3D object (in this case, a patient's volumetric data). A second example is shown in context in Figure 2b, where part of a skull object has been removed to show the path of the surgical trajectory.
Generation of a surgical trajectory is discussed with respect to Figure 3. The surgical trajectory sub-module is available in the virtual reality environment of the RadioDexter application. A user launches the VR environment 30 by clicking on the "VR" button in a 2D loader provided on a graphical user interface displayed in, for example, the Dextroscope display. Objects and/or data sets defining the medical imaging data of the subject are loaded into the VR environment.
Once a user is in the VR environment of Figure 3, the user can activate the surgical trajectory planning mode by clicking on the "Surgical Path" button 32 in the virtual toolbar 34, which will activate the surgical trajectory module (or sub-module). Using the surgical trajectory module, the user can carry out the following actions:
• create/add a plan with the add surgical trajectory button 36;
• edit a plan and/or modify its properties with buttons 37a, 37b,..., 37n;
• delete a surgical trajectory with the delete surgical trajectory button 38;
• "snap" to a surgical trajectory view, with the align button 40, as will be described in detail below; and
• cut along the surgical trajectory with the move clip plane slider button 42, as will be described in further detail below.
The user switches to the "Add Surgical Plan" tool by clicking the add surgical trajectory button 36. When the "Add Surgical Plan" tool is active, a virtual surgical trajectory (not shown in Figure 3) may be illustrated. It is possible to add plural surgical trajectories, up to, say, a total of ten.
The user specifies a first point for the Surgical Trajectory: the target point 6 of Figure 1. The user presses and holds a button on the Dextroscope stylus tool to set the target point, and then moves to the entry point before releasing the stylus button. An alternative implementation allows a user to click once to set the first target point, and then a rubber band type surgical trajectory (stretching in response to the user's movement of the stylus) is created. A second click at the second point, the entry point, defines and positions the entry point.
Figure 12a shows a screen capture illustrating a perspective of a 3D object 500 when a user has activated the "Add" surgical plan button 36 of Figure 3. 3D object 500 comprises skull 502 and brain component 508 (partially obscured by skull 502) represented by prism 508. (Brain component 508 and other brain components 504, 506 (not shown in Figure 12a) are represented as simple geometric shapes for the purposes of clarity. The reader will appreciate, of course, that typical components of the brain will comprise, say, a subject's blood vessels, brain matter and tumour matter, represented by imaged objects, or components of object 500.) A section 510 of skull 502 is removed from the view of object 500 as if a section of skull 502 has been cut away in order that a surgeon user might view the brain components one or more of which is to be the subject of the surgical procedure for which the plan is being made. RadioDexter stylus 512 is shown adjacent skull 502 in the VR environment.
A surgical trajectory 514 (not shown in Figures 12a and 12b) is defined by two ends 514a, 514b and a volume, in this example a body having a generally cylindrical shape, the properties of which may be modified, as will be discussed in detail below.
RadioDexter stylus 512 is used by the user to insert the surgical trajectory 514 in object 500. Referring to Figure 12b, the stylus 512 positioning the surgical trajectory 514 is moved by the user towards the (tumour object) cylinder 504. "Clipping" of object 500 is described in greater detail below, but in clipping the object image to allow a user to select a target point on or at a brain object 504, the object is "clipped" in a plane perpendicular to the clipping direction as demonstrated by sectional face 504a of object (or object component) 504 and section face 506a of object (or object component) 506. That is, the object 504 is "stripped" or clipped back with movement of the stylus 512 and this is illustrated by a section 502a of the skull 502 being exposed. Any other objects in the 3D object are also clipped back. When the tumour 504 - the target - is reached, the user activates (e.g. by "clicking") the surgical tool to select the surgical trajectory target point 514a in or on the tumour 504 (or to select the tumour 504 object itself).
Referring now to Figure 12c, the user has moved stylus 512 away from tumour 504 (stylus 512 is now shown as being semi-transparent 512a to represent this) to define the path of the surgical trajectory 514 away from surgical trajectory target point 514a. While doing so, the clipping plane recedes away from the target point in accordance with movement of the stylus 512a, for example, with respect to the tip of stylus 512a. Accordingly, the clipping plane moves away from objects 504, 506 (now shown as whole objects) and into the volume of object 509, which has a sectional face 509a across it where the clipping plane intersects with the volume of this object.
As illustrated in Figure 12d, the user has moved stylus 512a outside of the object 500 and, therefore, the "clipping" effect no longer has any effect on object 500; skull 502 is displayed in its entirety save for the cut-out section 510 first illustrated in Figure 12a. To define entry point 514b of the surgical trajectory 514, the user de-activates (or again activates) stylus 512a.
Adding a surgical trajectory will automatically set it as the selected object.
To add a surgical trajectory, the user continues to use the "Add Surgical Plan" tool 36. If the tool shows a virtual surgical trajectory, the user uses it to add or move surgical trajectories. Otherwise, the user uses it to move existing surgical trajectories.
To modify the surgical trajectory object, the user must first place the tip of the stylus near the segment of interest of the surgical trajectory.
In the described technique, there are three segments of the object that a user can pick to control: the entry point, the target point, or the entire surgical trajectory as a whole including both the entry and target point. When the stylus is near the segment of interest, the surgical trajectory's segment is highlighted to indicate that it is active. To activate the entry point or the target point, the user places the tip of the stylus near the respective segment in the surgical trajectory. To move the whole surgical trajectory, while maintaining the length between entry point and target point, the user places the tip of the stylus near the segment connecting the entry point and the target point.
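
One straightforward way to decide which segment is active is to compare the stylus tip's distance to the entry point, the target point and the connecting segment, giving the endpoints priority so that they remain selectable. This is only an illustrative sketch; the pick radius and function names are assumptions, not taken from RadioDexter.

```python
import numpy as np

PICK_RADIUS = 5.0  # assumed activation threshold, in world units

def point_to_segment_distance(p, a, b):
    """Distance from point p to the line segment joining a and b."""
    p, a, b = (np.asarray(x, float) for x in (p, a, b))
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def pick_segment(tip, entry, target, radius=PICK_RADIUS):
    """Return 'entry', 'target', 'body' or None for the highlighted segment.
    Endpoints are tested first, since the body segment is never farther from
    the tip than either endpoint."""
    tip, entry, target = (np.asarray(x, float) for x in (tip, entry, target))
    if np.linalg.norm(tip - entry) <= radius:
        return 'entry'
    if np.linalg.norm(tip - target) <= radius:
        return 'target'
    if point_to_segment_distance(tip, entry, target) <= radius:
        return 'body'
    return None
```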
When the tool is activated (i.e. while the stylus button is pressed) the affected segment(s) will be translated and/or oriented by following the tip of the stylus position and orientation.
There are a number of ways to implement the way the point of interest is modified with respect to the stylus position and orientation. Two methods are described. In the first, "Absolute" technique, the tip of the stylus is made to coincide exactly with the point that is being modified. In the second, "Relative" technique, the relative position of the tip to the point to modify is preserved.
In the Absolute technique, the coordinates of the point to be modified are made to be those of the tip of the stylus. When the user activates the segment and the coordinates of the tip and the point are a few millimeters apart, the point will jump from its position to that of the tip.
The Relative technique preserves the distance between the tip of the stylus and the position of the point at the time the user starts pressing the stylus button. This approach provides better interaction since it avoids the jump. In addition, minor incremental adjustments can be made because the point's position is applied relative to the tip of the stylus rather than being fixed to it.
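
The two techniques differ only in how the edited point is updated from the stylus tip. A minimal sketch of both update rules follows; the class names are illustrative and the offset is captured at the moment the stylus button is pressed.

```python
import numpy as np

class AbsoluteDrag:
    """The edited point is made to coincide with the stylus tip, so the point
    may visibly 'jump' to the tip when the button is first pressed."""
    def update(self, tip_pos):
        return np.asarray(tip_pos, float)

class RelativeDrag:
    """The tip-to-point offset at button-press time is preserved, avoiding the
    jump and allowing fine incremental adjustment."""
    def __init__(self, tip_at_press, point_at_press):
        self.offset = np.asarray(point_at_press, float) - np.asarray(tip_at_press, float)
    def update(self, tip_pos):
        return np.asarray(tip_pos, float) + self.offset
```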
Referring now to Figure 14, an example of a user editing a surgical trajectory is now given. Figure 14a illustrates an object 1400 in the virtual space comprising skull 1402 having a section 1410 cut away for the user to see brain component 1408 similar or identical to those described above. Stylus 1412 having a tip 1412b is illustrated as is a surgical trajectory 1414 previously created according to the techniques described. To move surgical trajectory 1414, the user selects the edit surgical trajectory button 37a of Figure 3 and, as shown in Figure 14b, moves the stylus 1412 until the tip 1412b of stylus 1412 is as close as possible to any of the three parts of the surgical trajectory: the entry point 1414b, the target point 1414a, and the body (generally referred to by 1414) of the surgical trajectory. When the tool tip 1412b is close enough to any of these, the selected part of surgical trajectory 1414 is highlighted and stylus 1412 is shown as being semi-transparent 1412a. In the example of Figure 14b, portion 1414c of surgical trajectory 1414 adjacent target point 1414a is highlighted. Then, the user presses the stylus button to move the selected part of the surgical trajectory. So, for example, target point 1414a is moved by the user by manipulation of stylus 1412a and the body of surgical trajectory 1414 moves accordingly to a moved position (not shown) in which entry point 1414b is not moved, but body 1414 of the surgical trajectory is modified to follow a route from entry point 1414b to new target point 1414a. In the example of Figure 14c, portion 1414d of surgical trajectory 1414 adjacent entry point 1414b is highlighted. Then, the user presses the stylus button to move the selected part of the surgical trajectory. So, for example, entry point 1414b is moved by the user by manipulation of stylus 1412a and the body of surgical trajectory 1414 moves accordingly to a moved position (not shown) in which target point 1414a is not moved, but body 1414 of the surgical trajectory is modified to follow a route from target point 1414a to new entry point 1414b.
If the user selects the body of the surgical trajectory by positioning tip 1412b of semi-transparent stylus 1412a adjacent the body of the surgical trajectory 1414 as in Figure 14d (thereby highlighting the entire surgical trajectory (or a portion thereof) 1414e), further manipulation of stylus 1412a will move both target and entry points 1414a, 1414b and the volume of surgical trajectory 1414 at the same time, while maintaining the distance between the two endpoints.
Moving a surgical trajectory will automatically set it as the selected object.
Deletion of a surgical trajectory is illustrated in Figure 15. A user deletes a surgical trajectory by clicking on the "Delete Surgical Plan" button 38 illustrated in Figure 3 and then positions tip 1512b of the stylus adjacent surgical trajectory 1514 to select it for deletion. Surgical trajectory 1514 (or portion thereof) is highlighted 1514e to indicate it is selected for deletion and the stylus is shown as being semitransparent 1512a. Clicking the stylus button deletes the surgical trajectory 1514 from view, optionally permanently for the session.
The user can modify a number of the other properties of the surgical trajectory.
For example, and with respect to Figure 16, the user clicks on the edit button 1637 of the virtual toolbar 1602 in virtual environment 1600 to modify the cross-sectional area (e.g. diameter) of the surgical trajectory 1614. Surgical trajectory 1614 has a highlighted portion 1614e to indicate that the surgical trajectory has been selected for modification/editing. The user may also select the surgical trajectory to be modified using the object list if it is not already selected. Then, the user activates the "Diameter" slider 1637a with stylus 1612a to adjust the diameter of the selected surgical trajectory 1614. The range of the diameter is, in the example of Figure 16, from 1 mm to 20 mm. As shown in Figure 16b the diameter of the surgical trajectory 1614 has been reduced to provide a reduced-diameter surgical trajectory 1614f by activation of slider button 1637a. As shown in Figure 16c the diameter of the surgical trajectory 1614 has been increased to provide an increased-diameter surgical trajectory 1614g by activation of slider button 1637a.
To modify the colour of the surgical trajectory, the user selects the surgical trajectory to be modified using the object list if it is not already selected. Then the user activates the Colour Editor (invoked by clicking on the "Colour" button 37b of Figure 3) to change the colour of the selected surgical trajectory.
If the user wishes to adjust the transparency of the surgical trajectory, the user selects the surgical trajectory to be modified using the object list if it is not already selected. Then, the user activates the "Transparency" slider 37c to adjust the transparency value of the selected surgical trajectory. The range of transparency value is from 0.3 to 1.0. Once a surgical trajectory has been added, the user can manipulate the 3D object with respect to the surgical trajectory. The manipulation can be a reorientation of the 3D object with respect to the surgical trajectory. Additionally, or alternatively, the 3D object can be repositioned on the display with respect to the surgical trajectory. Further, a clipping plane can be moved with respect to (e.g. along) an axis of the surgical trajectory to display graded sectional views of the 3D object. The clipping plane can be set perpendicular to the surgical trajectory axis, or parallel and on the surgical trajectory, oriented so that the user views a cross section of the objects intersected along the surgical trajectory.
Use of the clip plane provides the functionality for the user to clip away all imaging data defining the subject patient's anatomy from the clip plane facing the entry point onwards. This clipping plane is defined with respect to the surgical trajectory, say perpendicularly to the surgical trajectory. Moving the position of the clipping plane from the entry point to the target point and vice versa gives a perception of the cut across anatomy. This serves as a preview of the anatomy before the actual surgery.
An illustration of this is given in Figure 17. Object 1700 is defined in virtual space and comprises skull 1702 with cut away partition 1710 providing a view of brain component 1708 (partially obscured by skull 1702). Surgical trajectory 1714 has been defined in accordance with the techniques described above. The user moves the clip plane along the longitudinal axis of the surgical trajectory 1714 by activating clip plane slider 1742 with stylus 1726.
A view of the clip plane intersecting with object 1700 is given in Figure 17b. As shown, object 1700 has been stripped back along the axis of the surgical trajectory 1714 from the plane of the clip plane, back in the direction of the surgical trajectory entry point; thus, skull 1702 is exposed at a section 1702a and brain components 1704, 1706 and 1709 are now at least partially visible to the user. All objects (or object parts) from the clip plane back in the direction of the surgical trajectory target point (not shown) are removed from the image. Another stripped back view is given in Figure 17c, where the clipping has continued to the target point of the surgical trajectory; the surgical trajectory object is thus removed from the view of Figure 17c.
The surgical trajectory may be rotated under the control of the user. Referring to Figure 18a, an aligned surgical trajectory 1814 is shown in skull object 1802 of virtual space 1800. The user may now rotate the surgical trajectory, this mode being represented by stylus 1826a being shown as semi-transparent. Vertical (in the example of Figure 18a) line 1814a associated with circle 1814 (representing the surgical trajectory) defines an "up" vector, described above, to constrain the views, and define an 'up' direction, so that the display in the virtual planning can be made to correspond to the views that would be obtained during actual surgery. In this mode of operation, the user moves stylus 1826a within the virtual environment relative to the surgical trajectory 1814 to rotate the surgical path. In this implementation, the object is locked to the surgical path and rotates with it, responsive to the user's manipulation of stylus 1826a. Thus, in Figure 18b, stylus 1826a of Figure 18a has been moved to new position 1826b and, with it, the object and skull 1802 and all brain components have rotated too, around the centre line 1814b of the axis of surgical trajectory 1814. Thus, line 1814a associated with surgical trajectory 1814 defining the up vector is now more or less inverted - as represented by the new position and direction of line 1814a - corresponding to the user's movement of stylus 1826b. When the user de-activates the rotation operation, line portion 1814a of the surgical trajectory defining the up vector re-aligns itself to define a new up vector, again as represented by line 1814a as illustrated in Figure 18c.
The surgical trajectory is particularly useful when it intersects the volume of the patient data. However, the interface(s) described above allow a user to position the entry point and target point anywhere in the virtual workspace. This is demonstrated in Figure 19a. The virtual workspace 1900 comprises skull object 1902, surgical trajectory object 1914 and stylus tool object 1912. As shown, surgical trajectory 1914 is generally elongate in comparison to those of the other examples disclosed herein, as the user, with stylus 1912, has placed entry point 1914b at the point in virtual workspace 1900 as shown. Portion 1914c of surgical trajectory object 1914 is highlighted to illustrate this portion is being acted upon.
The target point is omitted for the sake of clarity; it is, in the views of Figure 19, obscured by skull object 1902 as it is "inside" skull object.
Thus, an optional "auto-shrink" feature allows automatic adjustment of the entry point to just outside the skull object, at a user-configurable distance (e.g. 5, 10 or 15 cm), in the virtual environment 1900. The apparatus may be configured for this feature to be activated when a new surgical trajectory is added, or when an existing surgical trajectory is modified. One characteristic of a surgical trajectory is that the target point is located inside the object of interest while the entry point is located outside the object of interest. When this condition is satisfied, the surgical trajectory's auto-shrink feature will shorten the distance between the entry point 1914b and the target point by repositioning the entry point 1914b nearer to the target point while preserving the direction of the surgical trajectory, as illustrated in Figure 19b.
Ideally the entry point will be positioned just outside the object of interest.
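
Under that condition (target inside, entry outside the object of interest), auto-shrink reduces to finding where the trajectory leaves the object and pulling the entry point back to just beyond that exit point, without changing the trajectory direction. The sketch below is a hedged illustration: intersect_surface is an assumed callback returning the nearest surface intersection of a ray with the object of interest, and the margin stands in for the user-configurable distance mentioned above.

```python
import numpy as np

def auto_shrink_entry(entry, target, intersect_surface, margin=10.0):
    """Reposition the entry point just outside the object of interest, along
    the unchanged entry->target direction; return the original entry point if
    the pre-condition is not met."""
    entry = np.asarray(entry, float)
    target = np.asarray(target, float)
    outward = entry - target
    length = np.linalg.norm(outward)
    if length == 0.0:
        return entry
    outward /= length
    # Cast from the target (inside the object) towards the entry point; the
    # first hit is where the trajectory protrudes out of the object.
    exit_point = intersect_surface(target, outward)
    if exit_point is None:
        return entry
    new_entry = np.asarray(exit_point, float) + margin * outward
    # Only ever shorten the trajectory, never lengthen it.
    return new_entry if np.linalg.norm(new_entry - target) < length else entry
```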
Apparatus architectures and process flows allowing implementation of these techniques are now described. The techniques described can be implemented in hardware, say computer hardware, computer program software or a combination thereof.
Figure 4 illustrates an apparatus architecture for orienting a 3D object in a pre-defined orientation responsive to a user selection. The apparatus architecture 100 comprises a source direction determination module 102 for determining a source direction matrix corresponding with the pre-defined orientation. The source direction matrix defines the pre-defined direction. In the present example, the pre-defined orientation corresponds with a longitudinal axis of surgical trajectory 20 of Figure 2 and is defined by two points - the surgical trajectory entry and target points - in the coordinate space of Dextroscope/RadioDexter. Of course, the source direction matrix (and other matrices discussed herein) could be defined by a matrix of a single row (a vector) or a matrix/vector of, say, polar coordinate values. Apparatus 100 also comprises perspective direction determination module 104 for determining a perspective direction matrix. In the present example, the perspective direction matrix defines the user's perspective direction, and is defined by two points - the surgical trajectory target point - in the coordinate space of Dextroscope/RadioDexter - and a user's "virtual viewpoint" (generation of a "virtual viewpoint" being a well-known technique in computer image generation). Apparatus architecture 100 comprises orientation matrix determination module 106 for determining an orientation rotation matrix for the 3D object from the source direction matrix and the perspective direction matrix. The orientation matrix defines the operating parameters to orient the 3D object to the predefined orientation. 3D object manipulation module 108 applies the orientation matrix to a 3D object defining display parameters for the 3D object responsive to the user selection, thereby to orient the 3D object to the pre-defined orientation.
Optionally, architecture 100 also utilises stand-alone 3D object positioning module 110, described below with respect to Figures 6 and 7.
The modules of apparatus architecture 100 may interact with and be supported by ancillary architecture modules 112 including, at least, image generation module 114, database 116, display device 118 and I/O devices 120.
3D object manipulation module 108 can be a stand-alone module or be incorporated in (e.g. as a sub module of) image generation module 114.
The 3D object for display on display 118 is defined by a dataset stored in database 116.
I/O devices 120 comprise, say, a user keyboard, a computer mouse, and/or Dextroscope tools for generation and manipulation of images.
Figure 5 illustrates a process flow for the orientation of a 3D object with respect to the pre-defined orientation. The process flow starts at step 150 where an image is rendered for display on a Dextroscope display 118 at step 152. At step 154, a source direction (in the present example, the direction of a longitudinal axis of a surgical trajectory) matrix is determined by source direction determination module 102. The source direction is the direction to which the 3D object is to be oriented. If the source direction is the direction of the longitudinal axis of the surgical trajectory, the 3D object will be oriented so that the surgical trajectory appears perpendicular to the plane of the display for the user's viewing. At step 156, a perspective direction (e.g. a user's perspective) matrix is determined by perspective direction determination module 104. At step 158, orientation matrix determination module 106 determines the orientation rotation matrix for the 3D object from the source direction matrix and the perspective direction matrix. The process loops at step 160 waiting for a user selection or instruction to orient the 3D object. Upon detection of a user selection to orient the 3D object, the 3D object manipulation module 108 applies the orientation matrix to the 3D object (to, say, the object data set) to orient the 3D object to the pre-defined orientation. At step 164, the 3D object is oriented with respect to the pre-defined orientation - in this example, looking along a longitudinal axis of a surgical trajectory. The process ends at step 166.
Referring back to step 154, source direction determination module 102 determines the source direction as a first surgical path in the 3D object. The source direction determination module 102 may do this by taking due cognisance of coordinate values in the coordinate space for the first and second (entry and target) points in the surgical path.
At step 156, the perspective direction determination module determines the perspective direction matrix to correspond with a user's perspective from a second path, the second path being a path between a point (e.g. the target or entry point of the surgical trajectory) in the 3D object and a user viewpoint, for example a "virtual viewpoint". Alternatively, the apparatus is configured to detect a user perspective by receiving input signals from user tracking modules external to the apparatus. As noted above, the source direction is defined with a source direction matrix comprising a set of data entries defining two points in coordinate space: the entry and target points of the surgical trajectory. The source direction also has an associated source rotation matrix which comprises data elements defining orientation data for the surgical trajectory/source direction such as axial tilt, rotational parameters etc. As the user creates, moves or changes the surgical trajectory on screen of, say, Figure 3 in display 118 using I/O devices 120, the data elements of the source rotation matrix are modified in response to the changes.
In this example, the perspective direction is defined with a perspective direction matrix. The perspective direction also has an associated perspective rotation matrix comprising a set of data entries for two points which define the perspective direction. In this example, the two points are a "virtual viewpoint" and the surgical trajectory target point. The perspective direction orientation matrix comprises data elements defining orientation data for the perspective direction. At step 158, the orientation rotation matrix is derived by orientation matrix determination module 106 from a manipulation of the source rotation matrix and the perspective rotation matrix. In this example, this manipulation comprises the multiplication (or product) of the perspective rotation matrix with an inverse of the source direction rotation matrix.
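
In matrix terms the derivation is a single composition: the orientation rotation matrix is the product of the perspective rotation matrix with the inverse of the source (surgical trajectory) rotation matrix, and it is later applied to the root object's rotation matrix (described in the next paragraph). A minimal sketch follows, assuming 3x3 NumPy rotation matrices and a column-vector convention; the multiplication order shown is illustrative rather than definitive.

```python
import numpy as np

def orientation_rotation(source_rot, perspective_rot):
    """Orientation rotation matrix: perspective rotation composed with the
    inverse of the source rotation. For orthonormal rotation matrices the
    inverse is simply the transpose."""
    return perspective_rot @ np.linalg.inv(source_rot)

def reorient_root_object(root_rot, source_rot, perspective_rot):
    """Apply the orientation rotation to the root object's rotation matrix to
    obtain the new/updated root object rotation."""
    return orientation_rotation(source_rot, perspective_rot) @ root_rot
```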
The 3D object comprises a plurality of objects which, collectively, are defined as a grouped "root object". The root object can be considered as a single object and vastly simplifies on-screen manipulation of the multiple objects defining the root object. Objects in the grouped root object include multi-modal imaging data gathered from the CT and MRI scans, etc. The root object has a root object rotation matrix and the 3D object manipulation module 108 orients this object to the pre-defined orientation of the surgical trajectory by applying the orientation rotation matrix to the root object rotation matrix. This is done by matrix multiplication, and the result of the multiplication defines a new or updated root object rotation matrix. The new/updated root object rotation matrix defines the (re)oriented 3D object at step 164.

Figure 6 illustrates an architecture for the 3D object positioning module 110 illustrated in dashed lines in Figure 4. The 3D object positioning techniques can be used either in stand-alone mode or in conjunction with the 3D object orientation techniques of Figures 4 and 5, thereby to provide a "snap" where the 3D object is both re-orientated and re-positioned in the display responsive to a user selection of the "align" button 40 of
Figure 3. The synergy provided by utilising both orientation and positioning algorithms of Figures 5 and 7 respectively is particularly advantageous; the 3D object is both orientated to the pre-defined direction of the surgical trajectory and re-positioned for display on display 118. Alternatively, one or both of these algorithms can be launched on activation of the "align" button 40, or equivalent. Thus, Figure 6 illustrates a source position determination module 122 for determining a source position for the 3D object. The source position is the "start" position of the 3D object prior to re-positioning. 3D object positioning module 110 also comprises a translated position determination module 124 for determining a translated position for the 3D object. The translated position is the position to which the 3D object is to be re-positioned. Translation matrix determination module 126 is for determining a translation matrix for the 3D object from the source position to the translated position. 3D object manipulation module (which can be either the module 108 of Figure 4 or a second module 128) applies the translation matrix to the 3D object responsive to the user selection, thereby positioning the 3D object to the translated position.
Figure 7 illustrates a process flow for the positioning of a 3D object to a translated position with the architecture of Figure 6. In this example, the translated position is the centre of a display screen or the centre of the projected image. The 3D object can be positioned for stereoscopic viewing. The 3D object is, optionally, rotated and translated to be at an optimal viewing point for stereo, taking into account its position on the screen of display 118, as well as depth for the nearest point in the 3D object so that it is not too close to the viewpoint, so as to avoid difficulty in stereo convergence.
The process of Figure 7 starts at step 200. As mentioned above, this process can be used either as a stand-alone module to provide positioning of the 3D object only or in conjunction with the process of Figure 5 also to provide orientation. Therefore, optionally, the 3D object is already oriented as at step 164 of Figure 5. When operating in stand-alone mode, the process proceeds directly after starting (i.e. without 3D object orientation) to determine the source position at step 204 with source position determination module 122. At step 206, translated position determination module 124 determines the position to which the 3D object is to be repositioned, the "translated" position. Translation matrix determination module 126 determines a translation matrix defining the movement of the 3D object from the source to the translated position. This can be a relatively straightforward matrix subtraction of the source point from the translated point. The process loops round at step 210 waiting for the user selection of the re-positioning (by activation of the "Align" button 40 of Figure 3 or equivalent) and, upon detection of the user selection, 3D object manipulation module 108/128 applies the translation matrix to the 3D object data set (or the root object data) to position the 3D object at the translated position at step 214. As noted above, a real synergy can be provided by a "snap" of the 3D object by running 3D object orientation steps 162, 164 of Figure 5 in parallel with steps 212, 214 of Figure 7, so that the 3D object is "snapped" by re-orienting and re-positioning it. The process ends at step 216.
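
The translation step is a plain subtraction of the source point from the translated point, and a "snap" simply applies the orientation rotation and the translation in one user action. A minimal sketch using 4x4 homogeneous matrices follows; the representation and the multiplication order are illustrative choices, not the only possible ones.

```python
import numpy as np

def translation_matrix(source_pos, translated_pos):
    """4x4 homogeneous translation taking the source position (e.g. trajectory
    entry point) to the translated position (e.g. display centre)."""
    T = np.eye(4)
    T[:3, 3] = np.asarray(translated_pos, float) - np.asarray(source_pos, float)
    return T

def snap(root_transform, orientation_rot, source_pos, translated_pos):
    """Re-orient and re-position the root object in one step, as triggered by
    the 'Align' button: rotation first, then translation."""
    R = np.eye(4)
    R[:3, :3] = orientation_rot
    return translation_matrix(source_pos, translated_pos) @ R @ root_transform
```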
The disclosed techniques are directed at reorientation and repositioning of a surgical trajectory, together with the other objects. In other words, orienting is performed on all objects in the scene with reference to the position and orientation of a surgical trajectory. A surgical trajectory is reoriented to be perpendicular to a viewing plane (the plane in which the image is oriented for viewing) and repositioned to an arbitrary position (in this implementation, it is positioned approximately at the centre of the viewing plane).
The representation of the viewing plane varies according to the different modes of operation. In Zoom Box mode (ZM, discussed further below), the viewing plane is the front plane (the nearest plane the user can see) of the Zoom Box. In Cut Plane mode (CM), the Cut Plane is used as the viewing plane. Lastly, the viewing plane in Normal Mode (NM) is placed perpendicular to the user's viewpoint but no visible representation is shown in this mode. As noted above, it is possible for a user to "clip" the 3D object using a clip plane function.
With the present techniques, a clipping plane can be moved with respect to (e.g. along, in an orientation perpendicular to a surgical trajectory) an axis of the surgical trajectory to display graded sectional views of the 3D object. Use of the clip plane provides functionality for the user to clip away all imaging data defining the subject patient's anatomy from the clip plane facing the entry point onwards. The clipping plane can be either perpendicular to the surgical trajectory, or constrained along the surgical trajectory, oriented to be seen perpendicularly by the user.
An algorithm to provide this functionality requires that a "clipping point" along the surgical trajectory is defined. The clipping plane is defined corresponding with and in relation to the clipping point and the sectioned patient subject's image is generated for display accordingly. The distance along the length of the surgical trajectory from the entry point to the target point is normalised, so that the clipping point is defined with respect to proportion of the normalised length along the surgical trajectory. In normal mode, no translation of the 3D object for display is required and the clipped 3D object is displayed.
Clipping functionality is achieved by clipping from the surgical trajectory entry point plane to the surgical trajectory target point plane when sliding through the slider 42 from left to right. The entry point plane is a plane at entry point 4 that is perpendicular to the surgical trajectory direction, whereas the target point plane is a plane at target point 6 that is perpendicular to the surgical trajectory direction. Numerous clipping planes can be generated from an entry point plane to a target point plane with a pre-specified interval between two consecutive planes. At any one interval, only one plane is displayed. In the current implementation, a total of 101 clipping planes can be viewed from a normalized slider bar showing values from 0.00 (at the entry point) to 1.00 (at the target point) with an interval of 0.01. The foregoing describes basic 3D object manipulation techniques used in Normal Mode (NM). These techniques are also implemented in two other modes: Zoom Box mode (ZM) and Cut Plane mode (CM), but further refinement of the algorithms is required for the special technical considerations necessitated by these modes.
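
Because the slider value is a normalised distance along the trajectory, deriving the displayed clipping plane from it is a short interpolation between the entry point and the target point. The following is a minimal sketch; quantising the value to 0.01 steps yields the 101 plane positions mentioned above.

```python
import numpy as np

def clip_plane_from_slider(entry, target, slider_value):
    """Return (point, normal) of the clipping plane for a slider value in
    [0.0, 1.0]: 0.00 is the entry-point plane, 1.00 the target-point plane,
    and the plane is perpendicular to the surgical trajectory direction."""
    entry = np.asarray(entry, float)
    target = np.asarray(target, float)
    direction = target - entry
    normal = direction / np.linalg.norm(direction)
    t = float(np.clip(round(slider_value, 2), 0.0, 1.0))  # 0.01 steps -> 101 planes
    return entry + t * direction, normal
```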
A Zoom Box is a defined area as described in, for example, PCT/US03/38078 noted above. A zoom box is defined as a rectangular prism shape/space volume in which 3D objects inside the volume are displayed in enlarged scale while 3D objects or parts of the objects outside of the zoom box are not displayed.
A zoom box is generated for display in the Dextroscope display 118 when the zoom factor of the displayed image is increased by the user sliding the zoom slider 37d of Figure 3.
The Zoom Box helps to maintain the interaction speed of the application while still allowing a user to view a volume of interest of the objects within the Zoom Box. Without the Zoom Box, the objects can occupy up to the whole display volume and the interaction will be severely limited as display objects will obscure the virtual control panel of Figure 3. The interaction speed may also be slower than normal since the whole scene has to be rendered (and iteratively re-rendered), responsive to user manipulations of the 3D object.
In normal mode, no translation of the 3D object for display is required. However, in Cut and Zoom modes, it is a technical requirement - due to constraints imposed by the graphics libraries - to translate the 3D object with respect to the Cut Plane or the Zoom Box. A translation vector is calculated, defining a translation for the clipping point to the desired point for display. In Zoom Box mode, the desired point is the centre of the "front" (from the user's viewing perspective) face of the Zoom Box. In Cut Plane Mode, the desired point is the centre of the cut plane. In each of the Zoom Box and Cut plane modes, the orientation matrix for the 3D object (or root object) is calculated and then Zoom Box and Cut Plane techniques implemented.
Therefore when re-positioning the 3D object as described above but in Zoom Box mode of operation, the apparatus generates for display an enlarged portion of the 3D object in the zoom box, and the translated position determination module is configured to determine the translated position for the 3D object enlarged portion as a centre point of a plane of the zoom box. In this example, the enlarged 3D object is re-positioned at the centre of the "front" (from a user's perspective) plane of the zoom box.
When rendered objects are viewed at normal magnification (i.e. not in Zoom Mode) both the Align and Clip operations act on the clipping plane that is used to clip the objects. When the user enables the clipping plane along the perpendicular path of the surgical trajectory (by moving the slider button of the "Move Clip" slider), the clipping plane moves with respect to the objects while keeping the objects stationary.
With the Zoom Box enabled and the Align and Clip operation active, the front facing side of the Zoom Box is used as the clipping plane. This is because the Zoom Box uses six (6) OpenGL clipping planes (that support graphics hardware acceleration). (OpenGL (Open Graphics Library by Silicon Graphics Inc.) is a standard specification defining a cross-language cross-platform API (application programming interface) for writing applications that produce, amongst other things, 3D computer graphics.) OpenGL supports more than six planes for the zoom box, but use of six planes is chosen in the present example in order to avoid a degradation of performance which could happen with use of more than six planes.
When the zoom box front-facing plane acts as the clipping plane, an alternative functionality can be assigned to the clip plane slider button 42 so that this can be used to move the object, say back and forth, on the display 118. In one implementation, the clip plane slider button 42 is renamed in the virtual toolbar of Figure 3 to an appropriate title to reflect the change in functionality. No software clipping planes are used since that will severely impact the graphics rendering speed and hence the speed of interaction. When the user enables the sliding of the clipping plane along the perpendicular path of the surgical trajectory, the objects move with respect to the Zoom Box while keeping the Zoom Box (and hence the clipping plane) stationary.
When the objects are viewed at normal magnifications and the Cut Plane is enabled, the Cut Plane will act as the Align and Clip operation's clipping plane. If the user "snaps" when the Cut Plane is active, the Cut Plane will be reoriented perpendicular to the user's viewpoint. When the user enables the sliding of the clipping plane along the perpendicular path of the surgical trajectory, the objects move with respect to the Cut Plane while keeping the Cut Plane (and hence the clipping plane) stationary.
Referring to Figure 8a, a surgical trajectory 250 is shown oriented with respect to a 3D object of a subject patient's skull 254. A surgical trajectory is superimposed with the skull 254 at point 252. For the sake of illustration only, a Dextroscope virtual tool 256 and virtual toolbar 34 are shown rendered in the image with the virtual tool 256 poised over virtual toolbar 34 ready to activate the align button 40. Referring to Figure 8b, the user (not shown) has utilised virtual tool 256 to activate the align button 40 and a "snap" has been effected utilising the algorithms of Figures 5 and 7 so that the 3D object is oriented in a direction of the surgical trajectory and re-positioned in the axis 258 of the display. The surgical trajectory 250 is represented in this view by circle 252, superimposed on the skull object 254 at 252. Line object 1814a (and its operation and functionality) is described with reference to Figure 18.
Referring to Figure 9, a clip along the surgical trajectory when in Zoom Box mode is illustrated. As noted above, in Zoom Box mode the "front" face of the Zoom Box acts as the clipping plane. Figure 9a shows surgical trajectory 250 in aligned mode after the snap from activation of "align" button 40. As can be seen, a magnified view of the object is shown in the zoom box 260 including a magnified view of the skull 254 and brain component represented by cylinder 262 for the sake of simplicity. In all modes, the surgical trajectory itself is made visible as the point of intersection of the surgical trajectory and the plane cutting the 3D objects. The surgical trajectory itself is not displayed as a 3D object in stereo in zoom mode, since if it is displayed as a line in front of a user's two eyes, the user may find it difficult to converge on the surgical trajectory (since it appears as two diverging lines) and it may disturb the user's view. In zoom mode, a point of intersection is provided, displayed over all the 3D objects (that is, not obscured by them), so that the surgeon can see the point where the surgical trajectory enters the 3D object. It may not be sufficient to hide the surgical trajectory intersection point, since then the "front" plane of the zoom box could cut the 3D object at points that are not the surgical trajectory points (if the 3D object has a protrusion closer to the viewpoint than the surgical trajectory intersection point).
When the surgical trajectory is in Align mode and the user initiates a zoom operation, the surgical trajectory will be given priority for determining the zoom point. The apparatus casts a "ray" (not shown) starting from the surgical trajectory's entry point (as illustrated by, for example, entry point 18 of Figure 2a) with a direction from the entry point to the target point (e.g. target point 20 of Figure 2a). If a visible part of an object is intersected (i.e. a visible voxel of a volume or a visible triangle of a mesh), this location will be used as the new zoom point. If there is no intersection, then the usual algorithm for computing the zoom point will be used.
If the zoom box is shown, an additional criterion for the surgical trajectory's generated zoom point is that it should be located inside the zoom box. If the computed result is not within the zoom box, the centre of the zoom box's nearest side will be used as the new zoom point.
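By way of illustration only, a minimal sketch of how such a trajectory-driven zoom point might be computed is given below (in Python with numpy). The intersect_first_visible helper, the box_center and box_half_extents parameters are assumptions standing in for the apparatus' own ray-casting and zoom box description; clamping the hit to the box is one possible reading of using the nearest side of the zoom box when the computed point falls outside it.

import numpy as np

def trajectory_zoom_point(entry, target, intersect_first_visible,
                          box_center=None, box_half_extents=None):
    # Cast a ray from the entry point towards the target point.
    entry = np.asarray(entry, dtype=float)
    target = np.asarray(target, dtype=float)
    direction = target - entry
    direction = direction / np.linalg.norm(direction)
    hit = intersect_first_visible(entry, direction)   # hypothetical: nearest visible voxel/triangle, or None
    if hit is None:
        return None                                   # fall back to the usual zoom-point algorithm
    hit = np.asarray(hit, dtype=float)
    if box_center is not None:
        # Additional criterion: the zoom point should lie inside the zoom box.
        center = np.asarray(box_center, dtype=float)
        half = np.asarray(box_half_extents, dtype=float)
        offset = hit - center
        if np.any(np.abs(offset) > half):
            hit = center + np.clip(offset, -half, half)   # snap to the nearest face of the box instead
    return hit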
With the zoom box 260 activated, the user moves the focus plane by activating cut plane slider 42 with virtual tool 256, as illustrated in Figure 9b. Front plane 264 of zoom box 260 acts as the cutting plane. As shown, skull object 254 is "stripped back" with movement of the zoom box front plane 264 along the surgical trajectory 250 responsive to movement of the slider 42, as illustrated by section 254a of skull 254. As the skull 254 is stripped back the view presented to the user is of the brain components 262, 264, 266, 268, 270 represented by simple geometric objects for the sake of simplicity.
As the user continues to slide slider 42 to the right (from the user's perspective) under control of virtual tool 256, the cutting plane continues to cut along the axis of the surgical trajectory 250. In the view of Figure 9c, skull 254 has been stripped farther back, as represented by section 254b. Additionally, zoom box front plane 264 cuts into brain components 262, 268, as represented by sections 262a, 268a respectively.
The operation of the cutting plane in cut plane mode is illustrated with respect to Figure 10, in which the cut plane itself "strips" back the object.
Referring now to Figure 11, a decision flow chart illustrating the various modes of operation of the described techniques is given. The process starts at step 300. At step 302, a decision is made as to whether the scene is to be oriented with reference to a surgical trajectory. If the decision returns "Yes", the algorithm checks to determine whether the RadioDexter software is operating in Zoom Box mode at step 304. If Zoom Box mode is detected to be activated, the scene is oriented with reference to the surgical trajectory in Zoom Box mode at step 306. If the decision at step 304 returns "No", the algorithm checks to determine whether the RadioDexter software is operating in Cut Plane mode at step 308; if so, the scene is oriented with reference to the surgical trajectory in Cut Plane mode at step 310. If the decision at step 308 returns "No", the algorithm determines that the RadioDexter software is operating in normal mode at step 312 and orients the scene accordingly. Regardless of the outcome of the checks at steps 304 and 308, a next check is made at step 314 to determine whether a clip plane is to move along the surgical trajectory. If "Yes", the algorithm checks to determine whether Zoom Box mode is activated at step 316. If it is activated, the clip plane is moved along the surgical trajectory in Zoom Box mode at step 318, as described above. If the Zoom Box is not activated, the algorithm checks to determine whether Cut Plane mode is activated at step 320 and, if so, the clip plane is moved along the surgical trajectory in Cut Plane mode at step 322. If neither Zoom Box nor Cut Plane mode is activated, the algorithm adopts the default of normal mode and moves the clip plane along the surgical trajectory in normal mode at step 324. The process ends at step 326.
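The decision flow of Figure 11 may be summarised by a small dispatcher. The sketch below is illustrative only; the scene object, its orient_to_trajectory and clip_along_trajectory methods and the boolean mode flags are assumptions used to mirror steps 302 to 324, not part of the RadioDexter software itself.

def handle_trajectory_request(scene, orient_requested, clip_requested,
                              zoom_box_active, cut_plane_active):
    # Steps 302-312: orient the whole scene with reference to the surgical trajectory.
    if orient_requested:
        if zoom_box_active:
            scene.orient_to_trajectory(mode="zoom_box")      # step 306
        elif cut_plane_active:
            scene.orient_to_trajectory(mode="cut_plane")     # step 310
        else:
            scene.orient_to_trajectory(mode="normal")        # step 312
    # Steps 314-324: move a clip plane along the surgical trajectory.
    if clip_requested:
        if zoom_box_active:
            scene.clip_along_trajectory(mode="zoom_box")     # step 318
        elif cut_plane_active:
            scene.clip_along_trajectory(mode="cut_plane")    # step 322
        else:
            scene.clip_along_trajectory(mode="normal")       # step 324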
Pseudo-code for the method to align to a surgical trajectory in three modes (normal mode, Zoom Box mode and Cut Plane mode) is given in the following subsections.
Normal Mode
// Orienting the whole scene with reference to a surgical trajectory in normal mode
{
// Computing the New Rotation Matrix of the Root Object
{
S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
D represents a perspective direction in world coordinates. Set the value of D to the direction from the viewpoint to the surgical trajectory's target point.
M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
Multiply the Root Object's rotation matrix with the M matrix and use it as the Root Object's new rotation matrix.
}
// Computing the New Position of the Root Object
{
S represents a source position in world coordinates. Set the value of S as the world position of the surgical trajectory's entry point.
D represents a desired position in world coordinates. Set the value as the center of the projected image.
V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
Apply the V vector to the Root Object's position with an addition operation and use it as the Root Object's new position.
}
}
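A minimal numerical sketch of the normal-mode alignment is given below, assuming the Root Object carries a 3x3 rotation matrix and a position vector and that rotations pre-multiply column vectors; rather than composing the inverse of S's rotation matrix with D's rotation matrix, the sketch builds the equivalent rotation that takes the S vector onto the D vector directly (Rodrigues' formula). Attribute names on root are illustrative assumptions.

import numpy as np

def rotation_between(s, d):
    # Rotation matrix that rotates unit vector s onto unit vector d.
    s = s / np.linalg.norm(s)
    d = d / np.linalg.norm(d)
    v = np.cross(s, d)
    c = float(np.dot(s, d))
    if np.isclose(c, -1.0):
        # Opposite vectors: rotate 180 degrees about any axis perpendicular to s.
        axis = np.cross(s, [1.0, 0.0, 0.0])
        if np.linalg.norm(axis) < 1e-8:
            axis = np.cross(s, [0.0, 1.0, 0.0])
        axis = axis / np.linalg.norm(axis)
        return 2.0 * np.outer(axis, axis) - np.eye(3)
    k = np.array([[0.0, -v[2], v[1]],
                  [v[2], 0.0, -v[0]],
                  [-v[1], v[0], 0.0]])
    return np.eye(3) + k + (k @ k) / (1.0 + c)

def align_scene_normal_mode(root, entry, target, viewpoint, image_center):
    # S: trajectory direction (entry -> target); D: viewpoint -> target direction.
    s = np.asarray(target, dtype=float) - np.asarray(entry, dtype=float)
    d = np.asarray(target, dtype=float) - np.asarray(viewpoint, dtype=float)
    m = rotation_between(s, d)
    root.rotation = m @ root.rotation            # new rotation of the Root Object
    # Translate so that the entry point lands at the centre of the projected image.
    v = np.asarray(image_center, dtype=float) - np.asarray(entry, dtype=float)
    root.position = np.asarray(root.position, dtype=float) + v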
Zoom Box Mode
// Orienting the whole scene with reference to a surgical trajectory in Zoom Box mode
{
// Computing the New Rotation Matrix of the Root Object
{
S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
D represents a perspective direction in world coordinates. Set the value of D to the direction from the viewpoint to the surgical trajectory's target point.
M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
Multiply the Root Object's rotation matrix with the M matrix and use it as the Root Object's new rotation matrix.
}
// Computing the New Position of the Root Object
{
S represents a source position in world coordinates. Set the value of S as the world position of the surgical trajectory's entry point.
D represents a desired position in world coordinates. Set the value as the Zoom Box's center point projected to the Zoom Box plane in world coordinates.
V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
Apply the V vector to the Root Object's position with an addition operation and use it as the Root Object's new position.
}
}
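The only difference from normal mode above is the destination position D, namely the Zoom Box's centre point projected onto the Zoom Box plane. A small sketch of that projection follows, assuming the plane is described by any point on it and a normal vector; the variable names are illustrative.

import numpy as np

def project_point_onto_plane(point, plane_point, plane_normal):
    # Orthogonal projection of `point` onto the plane through `plane_point`
    # with normal `plane_normal`.
    point = np.asarray(point, dtype=float)
    plane_point = np.asarray(plane_point, dtype=float)
    n = np.asarray(plane_normal, dtype=float)
    n = n / np.linalg.norm(n)
    return point - np.dot(point - plane_point, n) * n

# For example, the destination D of the Zoom Box mode pseudo-code could then be:
# D = project_point_onto_plane(zoom_box_center, zoom_box_plane_point, zoom_box_plane_normal)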
Cut Plane Mode
// Orienting the whole scene with reference to a surgical trajectory in Cut Plane mode
{
// Computing the New Rotation Matrix of the Root Object and the Cut Plane
{
S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
D represents a perspective direction in world coordinates. Set the value to the negative z-direction.
M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
Multiply the Root Object's rotation matrix with the M matrix and use it as the Root Object's new rotation matrix.
Use the negated D's rotation matrix as the new rotation matrix for the Cut Plane.
}
// Computing the New Position of the Root Object and the Cut Plane
{
S represents a source position in world coordinates. Set the value of S as the world position of the surgical trajectory's entry point.
D represents a desired position in world coordinates. Set the value as the center point of the Cut Plane.
V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
Apply the V vector to the Root Object's position with an addition operation and use it as the Root Object's new position.
Use the D point as the new position for the Cut Plane.
}
}
Pseudo-code for the Clip (the method to move a clip plane along a surgical trajectory) in three modes (normal mode, Zoom Box mode and Cut Plane mode) is given in the following subsections:
Zoom Box Mode
// Moving A Clip Plane Along A Surgical Plan In Zoom Box Mode
{
// Computing the New Rotation Matrix of the Root Object
{
S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
D represents a perspective direction in world coordinates. Set the value of D to the direction from the viewpoint to the surgical trajectory's target point.
M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
Multiply the Root Object's rotation matrix with the M matrix and use it as the Root Object's new rotation matrix.
}
// Computing the New Position of the Root Object
{
p represents the normalized distance from the entry point (0.0) to the target point (1.0).
S represents a source position in world coordinates. Set the value of S as a point along the surgical trajectory from the entry point to the target point.
S = entry point + ( target point - entry point ) * p
D represents a desired position in world coordinates. Set the value as the Zoom Box's center point projected to the front face of the Zoom Box in world coordinates.
V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
Apply the V vector to the Root Object's position with an addition operation and use it as the Root Object's new position.
}
}
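A sketch of the position update for the Zoom Box clip follows; the normalized slider value p comes from the clip plane slider, and front_face_point stands for the Zoom Box centre projected onto the front face (for instance computed with the projection helper sketched earlier). The Root Object attributes are assumptions as before.

import numpy as np

def clip_along_trajectory_zoom_box(root, entry, target, p, front_face_point):
    # p in [0.0, 1.0]: normalized distance along the trajectory (0 = entry point, 1 = target point).
    entry = np.asarray(entry, dtype=float)
    target = np.asarray(target, dtype=float)
    s = entry + (target - entry) * p                       # point on the surgical trajectory
    d = np.asarray(front_face_point, dtype=float)          # Zoom Box centre projected to the front face
    v = d - s                                              # translation bringing s onto the clipping plane
    root.position = np.asarray(root.position, dtype=float) + v

Sliding the slider only changes p, so the objects translate along the trajectory while the Zoom Box, and hence its front clipping plane, remains stationary.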
Cut Plane Mode
// Moving A Clip Plane Along A Surgical Plan In Cut Plane Mode
{
// Computing the New Rotation Matrix of the Root Object and the Cut Plane
{
S represents a source direction in world coordinates. Set the value of S as the direction of the surgical trajectory's entry point to the target point in world coordinates.
D represents a perspective direction in world coordinates. Set the value to the negative z-direction.
M represents a rotation matrix from the S vector to the D vector. Set the value of M as the inverse of the S's rotation matrix, multiplied by the D's rotation matrix.
Multiply the Root Object's rotation matrix with the M matrix and use it as the Root Object's new rotation matrix.
}
// Computing the New Position of the Root Object and the Cut Plane
{
p represents the normalized distance from the entry point (0.0) to the target point (1.0).
S represents a source position in world coordinates. Set the value of S as a point along the surgical trajectory from the entry point to the target point.
S = entry point + ( target point - entry point ) * p
D represents a desired position in world coordinates. Set the value as the center point of the Cut Plane.
V represents the translation vector from the S point to the D point. Set the value of V by subtracting the S point from the D point.
Apply the V vector to the Root Object's position with an addition operation and use it as the Root Object's new position.
}
}
Normal Mode
// Moving A Clip Plane Along A Surgical Plan In Normal Mode
{
// Computing the New Rotation Matrix of the Clip Plane
{
D represents a direction in world coordinates. Set the value of D as the direction of the surgical trajectory's entry point to the target point in world coordinates.
R represents the direction D's rotation matrix in the Root Object's object coordinate space.
Use R as the new orientation for the Clip Plane.
}
// Computing the New Position of the Clip Plane
{
p represents the normalized distance from the entry point (0.0) to the target point (1.0).
D represents a point in world coordinates. Set the value of D as a point along the surgical trajectory from the entry point to the target point.
D = entry point + ( target point - entry point ) * p
R represents the point D in the Root Object's object coordinate space.
Use R as the new position for the Clip Plane.
}
}
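A sketch of expressing the clip plane pose in the Root Object's object space is given below. It assumes the Root Object's world pose is a pure rotation plus translation (no scaling), so the inverse rotation is simply the transpose; if scaling were present, a full inverse of the model matrix would be needed.

import numpy as np

def clip_plane_in_root_space(root_rotation, root_position, entry, target, p):
    # root_rotation: 3x3 matrix mapping object coordinates to world coordinates.
    root_rotation = np.asarray(root_rotation, dtype=float)
    root_position = np.asarray(root_position, dtype=float)
    entry = np.asarray(entry, dtype=float)
    target = np.asarray(target, dtype=float)

    d_world = target - entry
    d_world = d_world / np.linalg.norm(d_world)            # trajectory direction in world space
    p_world = entry + (target - entry) * p                 # point along the trajectory in world space

    inv_rot = root_rotation.T                              # inverse of a pure rotation
    clip_normal_obj = inv_rot @ d_world                    # new orientation for the Clip Plane
    clip_point_obj = inv_rot @ (p_world - root_position)   # new position for the Clip Plane
    return clip_normal_obj, clip_point_obj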
Pseudo-code for modifying the entry point when editing the surgical trajectory is now given.
_V is a vector declared as a global variable
Start Action // called once when the stylus button is pressed
{
E represents the position of the entry point in world space
T represents the position of the target point in world space
D represents the direction of the surgical trajectory in world space
O represents the inverse orientation of D
P represents the world position of the stylus
Set _V as the vector from P to E
Multiply _V by O and store it to _V
}
on // called every time until the stylus button is released
{
E represents the position of the entry point in world space
T represents the position of the target point in world space
D represents the direction of the surgical trajectory in world space
P represents the world position of the stylus
Set V as the value of _V
Multiply V by D and store it to V
X represents P + V
Transform X to the surgical trajectory's object space
Set the surgical trajectory's entry point to X
}
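An illustrative sketch of this entry-point edit follows. The trajectory object and its entry_point_world, direction_rotation_world, world_to_object and set_entry_point methods are assumptions that stand in for the surgical trajectory object described above; storing the stylus-to-entry offset in the trajectory's direction frame at button-press time is what makes the entry point follow the stylus during the drag.

import numpy as np

class EntryPointDragger:
    def __init__(self, trajectory):
        self.trajectory = trajectory
        self._v = np.zeros(3)            # corresponds to the global vector _V

    def start_action(self, stylus_world_pos):
        # Called once when the stylus button is pressed.
        e = np.asarray(self.trajectory.entry_point_world(), dtype=float)
        d_rot = np.asarray(self.trajectory.direction_rotation_world(), dtype=float)  # 3x3 orientation of D
        p = np.asarray(stylus_world_pos, dtype=float)
        # Store the stylus-to-entry offset, expressed in the trajectory's direction frame.
        self._v = d_rot.T @ (e - p)      # transpose = inverse orientation of D

    def drag(self, stylus_world_pos):
        # Called every time until the stylus button is released.
        d_rot = np.asarray(self.trajectory.direction_rotation_world(), dtype=float)
        p = np.asarray(stylus_world_pos, dtype=float)
        v = d_rot @ self._v              # offset back in world space
        x_world = p + v                  # candidate entry point in world space
        x_obj = self.trajectory.world_to_object(x_world)
        self.trajectory.set_entry_point(x_obj)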
Pseudo-code for the auto-shrink feature described above is now given.
If surgical trajectory is added or modified then
If target point is inside the object of interest and entry point is outside the object of interest then
E represents the position of the entry point in world space
T represents the position of the target point in world space
D represents the direction of the surgical trajectory in world space
R represents the ray formed by the initial position E and direction D
P represents the nearest intersection point of the ray R and the object of interest(s) in world space
V represents the vector from T to P
N represents the length of the surgical trajectory protruding out of the object of interest(s) in world space
Extend the vector V by N units in world space
Set the new entry point E as the target point T plus the vector V
End
End
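An illustrative sketch of the auto-shrink step follows. It assumes the caller has already checked that the target point lies inside, and the entry point outside, the object of interest; intersect_nearest is a hypothetical ray-casting helper, and protrusion_length corresponds to N, the length of trajectory left protruding outside the object of interest.

import numpy as np

def auto_shrink_entry_point(entry, target, intersect_nearest, protrusion_length):
    # Pull the entry point in towards the object of interest so that only
    # `protrusion_length` units of the trajectory remain outside it.
    entry = np.asarray(entry, dtype=float)
    target = np.asarray(target, dtype=float)
    d = target - entry
    d = d / np.linalg.norm(d)                              # trajectory direction in world space
    hit = intersect_nearest(entry, d)                      # nearest intersection with the object(s), or None
    if hit is None:
        return entry                                       # nothing to shrink against
    v = np.asarray(hit, dtype=float) - target              # vector from the target point to the surface hit
    v = v + protrusion_length * (v / np.linalg.norm(v))    # extend V by N units
    return target + v                                      # new entry point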
It will be appreciated that the invention has been described by way of example only and various modifications in detail may be made without departing from the spirit and scope of the appended claims. Features presented in relation to one aspect of the invention may be presented with respect to another aspect of the invention.

Claims
1. Apparatus for orienting a three-dimensional object in a pre-defined orientation responsive to a user selection, the three-dimensional object having an associated data set defining parameters for the three-dimensional object, the apparatus comprising: a source direction determination module configured to determine a source direction matrix corresponding with the pre-defined orientation; a perspective direction determination module configured to determine a perspective direction matrix; an orientation matrix determination module configured to determine an orientation rotation matrix for the three-dimensional object from the source direction matrix and the perspective direction matrix; and an object manipulation module configured to apply the orientation matrix to the three-dimensional object responsive to the user selection, thereby to orient the three-dimensional object to the pre-defined orientation.
2. Apparatus according to claim 1, wherein the source direction determination module is configured to determine the source direction matrix from a first path in the three-dimensional object.
3. Apparatus according to claim 2, wherein the source direction determination module is configured to determine the source direction matrix from coordinate values for first and second points in the first path.
4. Apparatus according to any preceding claim, wherein the perspective direction determination module is configured to determine the perspective direction matrix to correspond with a user's perspective of the displayed three-dimensional object.
5. Apparatus according to claim 4, wherein the perspective direction determination module is configured to determine the perspective direction matrix to correspond with a user's perspective from a second path, the second path being a path between a point in the three-dimensional object and a user viewpoint.
6. Apparatus according to any preceding claim, wherein: the source direction determination module is configured to define a source direction with the source direction matrix, the source direction having an associated source rotation matrix comprising elements defining orientation data for the source direction; the perspective direction determination module is configured to define a perspective direction with the perspective direction matrix, the perspective direction having an associated perspective rotation matrix comprising elements defining orientation data for the perspective direction; and the orientation matrix determination module is configured to determine the orientation rotation matrix from a manipulation of the source rotation matrix and the perspective rotation matrix.
7. Apparatus according to claim 6, wherein the orientation matrix determination module is configured to determine the orientation matrix from a product of the perspective rotation matrix and an inverse of the source rotation matrix.
8. Apparatus according to any preceding claim, wherein the three-dimensional object comprises a plurality of objects grouped as a root object, the root object having a root object orientation matrix and the three-dimensional object manipulation module is configured to orient the three-dimensional object to the pre-defined orientation by application of the orientation rotation matrix to the root object orientation matrix.
9. Apparatus according to any preceding claim, further configured to position the three-dimensional object, the apparatus further comprising: a source position determination module configured to determine a source position for the three-dimensional object; a translated position determination module configured to determine a translated position for the three-dimensional object; a translation matrix determination module configured to determine a translation matrix for the three-dimensional object from the source position and the translated position; and wherein the object manipulation module is configured to apply the translation matrix to the three-dimensional object responsive to the user selection, thereby to position the three-dimensional object to the translated position.
10. Apparatus for positioning a three-dimensional object to a translated position responsive to a user selection, the three-dimensional object having an associated data set defining display parameters for the three-dimensional object, the apparatus comprising: a source position determination module configured to determine a source position for the three-dimensional object; a translated position determination module configured to determine a translated position for the three-dimensional object; a translation matrix determination module configured to determine a translation matrix for the three-dimensional object from the source position and the translated position; and an object manipulation module configured to apply the translation matrix to the three-dimensional object responsive to the user selection, thereby to position the three-dimensional object to the translated position.
11. Apparatus according to claim 9 or claim 10, wherein the source position determination module is configured to determine the source position for the three-dimensional object from a first point in a or the first path of the three-dimensional object.
12. Apparatus according to any of claims 9 to 11, wherein the translated position determination module is configured to determine the translated position for the three-dimensional object as an origin point in a space for rendering the three-dimensional object in.
13. Apparatus according to any of claims 9 to 12, wherein the three-dimensional object comprises a plurality of objects grouped as a or the root object, the root object having a root object position, and the object manipulation module is configured to reposition the three-dimensional object to the translated position by application of the translation matrix to the root object position.
14. Apparatus according to any of claims 9 to 13, wherein the apparatus is configured to generate for display an enlarged portion of the three-dimensional object in a zoom box, and the translated position determination module is configured to determine the translated position for the three-dimensional object enlarged portion as a centre point of a plane of the zoom box.
15. Apparatus according to any of claims 9 to 14, wherein the apparatus is configured to generate for display a section of the three-dimensional object in a cut plane, and wherein: the perspective direction determination module is configured to determine the perspective direction matrix as an axis of a or the space for rendering the three-dimensional object in; and the translated position determination module is configured to determine the translated position for the three-dimensional object as a centre point of the cut plane.
16. Apparatus according to claim 15, wherein the apparatus is configured to reposition the cut plane to the translated position for display of another section of the three-dimensional object corresponding to the repositioned cut plane.
17. Apparatus according to any preceding claim, the object manipulation module being configured to generate for display one or more sectional views of the three-dimensional object relative to a or the first path responsive to a user instruction, the one or more sectional views simulating movement of a clip plane clipping the three-dimensional object.
18. Apparatus according to claim 17, the object manipulation module being configured to generate the one or more sectional views in a plane perpendicular to the first path.
19. Apparatus according to any preceding claim, configured to generate for display a surgical trajectory in the three-dimensional object, the three-dimensional object being medical imaging data, the apparatus being configured to define the surgical trajectory as an object in coordinate space having a first end and a second end, and to allow a user to define the first and second ends by the user's selection of coordinates in the coordinate space.
20. Apparatus according to any preceding claim, comprising an image generation module configured to generate, for display, three-dimensional objects representing medical imaging data.
21. Apparatus for displaying a surgical trajectory in three-dimensional medical image data, the apparatus comprising an image generation or manipulation module configured to generate for display the surgical trajectory in the three-dimensional object, the apparatus being configured to define the surgical trajectory as an object in coordinate space having a first end and a second end, and to allow a user to define the first and second ends by the user's selection of coordinates in the coordinate space, the apparatus being further configured to clip the three-dimensional object for display responsive to a user's manipulation of the surgical trajectory in defining the first end.
22. The apparatus of claim 21, configured to clip the three-dimensional object in a plane orthogonal to a point on a user's virtual tool for manipulating the surgical trajectory.
23. The apparatus of claim 21, configured to allow the user to define the first and second ends by interacting directly in three-dimensional space.
24. A method for orienting a three-dimensional object in a pre-defined orientation responsive to a user selection, the three-dimensional object having an associated data set defining display parameters for the three-dimensional object, the method comprising: determining, with a source direction determination module, a source direction matrix corresponding with the pre-defined orientation; determining, with a perspective direction determination module, a perspective direction matrix; determining, with an orientation matrix determination module, an orientation rotation matrix for the three-dimensional object from the source direction matrix and the perspective direction matrix; and applying, with a three-dimensional object manipulation module, the orientation matrix to the three-dimensional object responsive to the user selection, thereby to orient the three-dimensional object to the pre-defined orientation.
25. A method for orienting a three-dimensional object in a pre-defined orientation responsive to a user selection using the apparatus of any of claims 1 to 9 and 20.
26. A method for positioning a three-dimensional object to a translated position responsive to a user selection, the three-dimensional object having an associated data set defining display parameters for the three-dimensional object, the method comprising: determining, with a source position determination module, a source position for the three-dimensional object; determining, with a translated position determination module, a translated position for the three-dimensional object; determining, with a translation matrix determination module, a translation matrix for the three-dimensional object from the source position and the translated position; and applying, with a three-dimensional object manipulation module, the translation matrix to the three-dimensional object responsive to the user selection, thereby to position the three-dimensional object to the translated position.
27. A method for positioning a three-dimensional object to a translated position responsive to a user selection using the apparatus of any of claims 10 to 20.
28. A method for displaying a surgical trajectory in three-dimensional medical image data, the method comprising generating for display, with an image generation or manipulation module, the surgical trajectory in the three-dimensional object, the surgical trajectory being defined as an object in coordinate space having a first end and a second end, and receiving from a user a selection of coordinates in the coordinate space to define the first and second ends.
29. A method for displaying a surgical trajectory in three-dimensional medical imaging data using the apparatus of any of claims 21 to 23.
30. A computer storage media having computer code stored thereon for orienting a three-dimensional object in a pre-defined orientation responsive to a user selection, the three-dimensional object having an associated data set defining display parameters for the three-dimensional object, the computer code comprising instructions for: determining, with a source direction determination module, a source direction matrix corresponding with the pre-defined orientation; determining, with a perspective direction determination module, a perspective direction matrix; determining, with an orientation matrix determination module, an orientation rotation matrix for the three-dimensional object from the source direction matrix and the perspective direction matrix; and applying, with a three-dimensional object manipulation module, the orientation matrix to the three-dimensional object responsive to the user selection, thereby to orient the three-dimensional object to the pre-defined orientation.
31. A computer storage media having computer code stored thereon for positioning a three-dimensional object to a translated position responsive to a user selection, the three-dimensional object having an associated data set defining display parameters for the three-dimensional object, the computer code comprising instructions for: determining, with a source position determination module, a source position for the three-dimensional object; determining, with a translated position determination module, a translated position for the three-dimensional object; determining, with a translation matrix determination module, a translation matrix for the three-dimensional object from the source position and the translated position; and applying, with a three-dimensional object manipulation module, the translation matrix to the three-dimensional object responsive to the user selection, thereby to position the three-dimensional object to the translated position.
32. A computer storage media having computer code stored thereon for displaying a surgical trajectory in three-dimensional medical image data, the computer code comprising instructions for generating for display, with an image generation or manipulation module, the surgical trajectory in the three-dimensional object, the surgical trajectory being defined as an object in coordinate space having a first end and a second end, and receiving from a user a selection of coordinates in the coordinate space to define the first and second ends, and for clipping the three-dimensional object for display responsive to a user's manipulation of the surgical trajectory in defining the first end.
PCT/SG2008/000125 2007-04-16 2008-04-15 Apparatus and method for manipulating a three-dimensional object/volume WO2008127202A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
SG200702797-2A SG147325A1 (en) 2007-04-16 2007-04-16 Apparatus and method for manipulating a 3d object/volume
SG200702797-2 2007-04-16

Publications (1)

Publication Number Publication Date
WO2008127202A1 true WO2008127202A1 (en) 2008-10-23

Family

ID=39864189

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/SG2008/000125 WO2008127202A1 (en) 2007-04-16 2008-04-15 Apparatus and method for manipulating a three-dimensional object/volume

Country Status (2)

Country Link
SG (1) SG147325A1 (en)
WO (1) WO2008127202A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8896631B2 (en) 2010-10-25 2014-11-25 Hewlett-Packard Development Company, L.P. Hyper parallax transformation matrix based on user eye positions
US10019131B2 (en) 2016-05-10 2018-07-10 Google Llc Two-handed object manipulations in virtual reality

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6920347B2 (en) * 2000-04-07 2005-07-19 Surgical Navigation Technologies, Inc. Trajectory storage apparatus and method for surgical navigation systems

Patent Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6920347B2 (en) * 2000-04-07 2005-07-19 Surgical Navigation Technologies, Inc. Trajectory storage apparatus and method for surgical navigation systems

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
OWEN: "3-Dimensional modelling Transformations", 22 June 1999 (1999-06-22), Retrieved from the Internet <URL:http://www.siggraph.org/education/materials/HyperGraph/modeling/mod_tran/3d.htm> *

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8896631B2 (en) 2010-10-25 2014-11-25 Hewlett-Packard Development Company, L.P. Hyper parallax transformation matrix based on user eye positions
US10019131B2 (en) 2016-05-10 2018-07-10 Google Llc Two-handed object manipulations in virtual reality
US10754497B2 (en) 2016-05-10 2020-08-25 Google Llc Two-handed object manipulations in virtual reality

Also Published As

Publication number Publication date
SG147325A1 (en) 2008-11-28

Similar Documents

Publication Publication Date Title
US7061484B2 (en) User-interface and method for curved multi-planar reformatting of three-dimensional volume data sets
US7889227B2 (en) Intuitive user interface for endoscopic view visualization
US20040246269A1 (en) System and method for managing a plurality of locations of interest in 3D data displays (&#34;Zoom Context&#34;)
EP2765776A1 (en) Graphical system with enhanced stereopsis
US20070279436A1 (en) Method and system for selective visualization and interaction with 3D image data, in a tunnel viewer
US20090079738A1 (en) System and method for locating anatomies of interest in a 3d volume
US20180310907A1 (en) Simulated Fluoroscopy Images with 3D Context
Bornik et al. A hybrid user interface for manipulation of volumetric medical data
US11922581B2 (en) Systems and methods of controlling an operating room display using an augmented reality headset
US20210353361A1 (en) Surgical planning, surgical navigation and imaging system
US20070032720A1 (en) Method and system for navigating in real time in three-dimensional medical image model
Serra et al. The Brain Bench: virtual tools for stereotactic frame neurosurgery
CN110660130A (en) Medical image-oriented mobile augmented reality system construction method
JP6112689B1 (en) Superimposed image display system
Serra et al. Interactive vessel tracing in volume data
WO2008127202A1 (en) Apparatus and method for manipulating a three-dimensional object/volume
Kratz et al. GPU-based high-quality volume rendering for virtual environments
Hinckley et al. The props-based interface for neurosurgical visualization
Hinckley et al. Three-dimensional user interface for neurosurgical visualization
Hinckley et al. New applications for the touchscreen in 2D and 3D medical imaging workstations
Fischer et al. Intuitive and lightweight user interaction for medical augmented reality
Serra et al. Interaction techniques for a virtual workspace
JP6142462B1 (en) Superimposed image display system
Behrendt et al. The Virtual Reality Flow Lens for Blood Flow Exploration.
Poston et al. Interactive tube finding on a virtual workbench

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 08741930

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 08741930

Country of ref document: EP

Kind code of ref document: A1