US20060284834A1 - Apparatus and methods for haptic rendering using a haptic camera view - Google Patents

Apparatus and methods for haptic rendering using a haptic camera view

Info

Publication number
US20060284834A1
US20060284834A1 (application US11/169,271)
Authority
US
United States
Prior art keywords: virtual, haptic, haptic interface, graphics, data
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US11/169,271
Inventor
Brandon Itkowitz
Loren Shih
Marc Midura
Joshua Handley
William Goodwin
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
3D Systems Inc
Original Assignee
SensAble Technologies Inc
Application filed by SensAble Technologies Inc filed Critical SensAble Technologies Inc
Priority to US11/169,271
Publication of US20060284834A1
Assigned to SENSABLE TECHNOLOGIES, INC. reassignment SENSABLE TECHNOLOGIES, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: HANDLEY, JOSHUA E., ITKOWITZ, BRANDON D., GOODWIN, WILLIAM ALEXANDER, SHIH, LOREN C.
Assigned to GEOMAGIC, INC. reassignment GEOMAGIC, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: SENSABLE TECHNOLOGIES, INC.
Assigned to 3D SYSTEMS, INC. reassignment 3D SYSTEMS, INC. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GEOMAGIC, INC.
Priority to US14/276,845 (US9030411B2)

Classifications

    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00 3D [Three Dimensional] image rendering › G06T15/10 Geometric effects › G06T15/20 Perspective computation
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F3/00 Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements › G06F3/01 Input arrangements or combined input and output arrangements for interaction between user and computer › G06F3/016 Input arrangements with force or tactile feedback as computer generated output to the user
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00 3D [Three Dimensional] image rendering
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T15/00 3D [Three Dimensional] image rendering › G06T15/50 Lighting effects › G06T15/80 Shading
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T19/00 Manipulating 3D models or images for computer graphics
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T19/00 Manipulating 3D models or images for computer graphics › G06T19/006 Mixed reality
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06F ELECTRIC DIGITAL DATA PROCESSING › G06F2203/00 Indexing scheme relating to G06F3/00 - G06F3/048 › G06F2203/01 Indexing scheme relating to G06F3/01 › G06F2203/012 Walk-in-place systems for allowing a user to walk in a virtual environment while constraining him to a given position in the physical environment
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2210/00 Indexing scheme for image generation or computer graphics › G06T2210/28 Force feedback
    • G PHYSICS › G06 COMPUTING; CALCULATING OR COUNTING › G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL › G06T2215/00 Indexing scheme for image rendering › G06T2215/06 Curved planar reformation of 3D line structures

Definitions

  • the present application is related to commonly-owned U.S. patent application entitled, “Apparatus and Methods for Haptic Rendering Using Data in a Graphics Pipeline,” by Itkowitz, Shih, Midura, Handley, and Goodwin, filed under Attorney Docket No. SNS-012 on even date herewith, the text of which is hereby incorporated by reference in its entirety; the present application is also related to commonly-owned international (PCT) patent application entitled, “Apparatus and Methods for Haptic Rendering Using Data in a Graphics Pipeline,” by Itkowitz, Shih, Midura, Handley, and Goodwin, filed under Attorney Docket No.
  • the invention relates generally to haptic rendering of virtual environments. More particularly, in certain embodiments, the invention relates to the haptic rendering of a virtual environment using data from the graphics pipeline of a 3D graphics application.
  • Haptic technology involves simulating virtual environments to allow user interaction through the user's sense of touch.
  • Haptic interface devices and associated computer hardware and software are used in a variety of systems to provide kinesthetic and/or tactile sensory feedback to a user in addition to conventional visual feedback, thereby affording an enhanced man/machine interface.
  • Haptic systems are used, for example, in manufactured component design, surgical technique training, industrial modeling, robotics, and personal entertainment.
  • An example haptic interface device is a six degree of freedom force reflecting device as described in co-owned U.S. Pat. No. 6,417,638, to Rodomista et al., the description of which is incorporated by reference herein in its entirety.
  • a haptic rendering process provides a computer-based kinesthetic and/or tactile description of one or more virtual objects in a virtual environment.
  • a user interacts with the virtual environment via a haptic interface device.
  • a graphical rendering process provides a graphical description of one or more virtual objects in a virtual environment.
  • a user interacts with graphical objects via a mouse, joystick, or other controller.
  • Current haptic systems process haptic rendering data separately from graphical rendering data.
  • Two developments that have improved the graphical rendering of virtual environments are 3D graphics application programming interfaces (APIs) and 3D graphics video cards.
  • a programmer may create or adapt a 3D graphics application for rendering a 3D graphics virtual environment using the specialized libraries and function calls of a 3D graphics API.
  • the programmer avoids having to write graphics rendering code that is provided in the API library.
  • graphics standards have developed such that many currently-available 3D graphics applications are compatible with currently-available 3D graphics API's, allowing a user to adapt the 3D graphics application to suit his/her purpose. Examples of such 3D graphics API's include OpenGL, DirectX, and Java 3D.
  • 3D graphics cards have also improved the graphical rendering of 3D virtual objects.
  • a 3D graphics card is a specialized type of computer hardware that speeds the graphical rendering process.
  • a 3D graphics card performs a large amount of the computation work necessary to translate 3D information into 2D images for viewing on a screen, thereby saving CPU resources.
  • haptic rendering processes are generally computation-intensive, requiring high processing speed and a low latency control loop for accurate force feedback rendering.
  • in order to realistically simulate touch-based interaction with a virtual object, a haptic rendering process must typically update force feedback calculations at a rate of about 1000 times per second. This is significantly greater than the update rate needed for realistic dynamic graphics display, which is from about 30 to about 60 times per second in certain systems.
  • current haptic systems are usually limited to generating force feedback based on single point interaction with a virtual environment. This is particularly true for haptic systems that are designed to work with widely-available desktop computers and workstations with state-of-the-art processors.
  • the invention provides systems and methods for using a “haptic camera” within a virtual environment and for using graphical data from the haptic camera to produce touch feedback.
  • the haptic camera obtains graphical data pertaining to virtual objects within the vicinity and along the trajectory of a user-controlled haptic interface device.
  • the graphical data from the camera is interpreted haptically, thereby allowing touch feedback corresponding to the virtual environment to be provided to the user.
  • haptic rendering is improved, because the view volume can be limited to a region of the virtual environment that the user will be able to touch at any given time, and further, because the method takes advantage of the processing capacity of the graphics pipeline.
  • This method also allows haptic rendering of portions of a virtual environment that cannot be seen in a 2D display of the virtual object, for example, the back side of an object, the inside of crevices and tunnels, and portions of objects that lie behind other objects.
  • a moving haptic camera offers this advantage. Graphical data from a static camera view of a virtual environment can be used for haptic rendering; however, it is generally true that only geometry visible in the view direction of the camera can be used to produce touch feedback.
  • a moving camera (and/or multiple cameras) allows graphical data to be obtained from more than one view direction, thereby allowing the production of force feedback corresponding to portions of the virtual environment that are not visible from a single static view.
  • the interaction between the user and the virtual environment is further enhanced by providing the user with a main view of the virtual environment on a 2D display while, at the same time, providing the user with haptic feedback corresponding to the 3D virtual environment.
  • the haptic feedback is updated according to the user's manipulation of a haptic interface device, allowing the user to “feel” the virtual object at any position, including regions that are not visible on the 2D display.
  • the invention provides increased haptic rendering efficiency, permitting greater haptic processing speeds for more realistic touch-based simulation.
  • the force feedback computation speed is increased from a rate of about 1000 Hz to a rate of about 10,000 Hz or more.
  • the invention allows more sophisticated haptic interaction techniques to be used with widely-available desktop computers and workstations. For example, forces can be computed based on the interaction of one or more points, lines, planes, and/or spheres with virtual objects in the virtual environment, not just based on single point interaction. More sophisticated haptic interface devices that require multi-point interaction can be used, including pinch devices, multi-finger devices, and gloves, thereby enhancing the user's haptic experience.
  • Supported devices include kinesthetic and/or tactile feedback devices. For example, in one embodiment, a user receives tactile feedback when in contact with the surface of a virtual object such that the user can sense the texture of the surface.
  • a method for haptically rendering a virtual object in a virtual environment.
  • the method includes determining a haptic interface location in a 3D virtual environment corresponding to a haptic interface device in real space.
  • a first virtual camera is positioned at the haptic interface location, and graphical data corresponding to the virtual environment is accessed from this first virtual camera.
  • the method comprises determining a position of the haptic interface location in relation to one or more geometric features of a virtual object in the virtual environment (for example, a surface, point, line, or plane of, or associated with, the virtual object) by using graphical data from the first virtual camera.
  • the method also includes determining an interaction force based at least in part on the position of the haptic interface location in relation to the geometric feature(s) of the virtual object.
  • the interaction force is delivered to a user through the haptic interface device.
  • the position of the first camera is updated as the haptic interface location changes, according to movement of the haptic interface device.
  • the invention also provides a two-pass rendering technique using two virtual cameras.
  • the invention provides methods using a first virtual camera view dedicated for use in haptically rendering a 3D virtual environment and a second virtual camera view for graphically rendering the virtual environment for display.
  • the invention includes the steps of positioning a second virtual camera at a location other than the haptic interface location and accessing graphical data from the second virtual camera corresponding to the virtual environment.
  • the second virtual camera is at a fixed location, while the first virtual camera moves, for example, according to the movement of the haptic interface location.
  • the step of determining a position of the haptic interface location using data from the first virtual camera includes determining a world-view transformation that maps coordinates corresponding to the haptic virtual environment (i.e. world coordinates) to coordinates corresponding to the first virtual camera (i.e. view coordinates).
  • the world-view transformation can be customized for translating and rotating the camera to view the scene as if attached to the position of the haptic device's proxy in the virtual environment (i.e. the haptic interface location). Additional transforms may be determined and/or applied, including a shape-world transformation, a view-clip transformation, a clip-window transformation, a view-touch transformation, and a touch-workspace transformation.
  • the invention also provides a method of determining what the view looks like from the “haptic camera.”
  • a camera eye position and a look direction are needed.
  • the step of determining a world-view transformation includes determining an eye position and a look direction.
  • the eye position is sampled from the position of the haptic interface location (i.e. the virtual proxy position).
  • the eye position is preferably updated only when the virtual proxy moves beyond a threshold distance from the current eye position.
  • a vector representing the motion of the haptic interface location is determined.
  • the look direction is determined by the motion of the proxy and optionally by the contact normal, for example, if in contact with a virtual object and constrained on the surface of the contacted object.
  • the look direction is the normalized motion vector.
  • the look direction becomes a linear combination of the normalized motion vector and the contact normal.
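  • By way of illustration only, the C++ sketch below shows one way the eye-position sampling and look-direction update described above might be implemented. The Vec3 type, the blend weight, and the sign convention for the contact-normal term are assumptions of the sketch, not details taken from the patent; the 2 mm threshold is mentioned later in the description as one possible choice.

```cpp
#include <cmath>

struct Vec3 {
    double x, y, z;
    Vec3 operator-(const Vec3& o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(const Vec3& o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(double s)      const { return {x * s, y * s, z * s}; }
    double length() const { return std::sqrt(x * x + y * y + z * z); }
    Vec3 normalized() const { double l = length(); return {x / l, y / l, z / l}; }
};

struct HapticCameraState {
    Vec3 eye;       // last sampled camera eye position (world coordinates)
    Vec3 lookDir;   // current look direction
};

// Re-sample the eye position only when the proxy has moved beyond a threshold
// distance (2 mm is one choice mentioned in the description), and derive the
// look direction from the proxy's motion vector, optionally blended with the
// contact normal when the proxy is constrained on a surface.  Units are
// assumed to be consistent (here, millimeters).
void updateHapticCamera(HapticCameraState& cam,
                        const Vec3& proxyPos,
                        const Vec3* contactNormal,  // null if not in contact
                        double thresholdMm = 2.0,
                        double normalWeight = 0.5)  // blend weight: assumption
{
    Vec3 motion = proxyPos - cam.eye;
    if (motion.length() < thresholdMm)
        return;                                     // camera stays put

    Vec3 motionDir = motion.normalized();
    if (contactNormal) {
        // Linear combination of the normalized motion vector and the contact
        // normal; the weighting (and sign) used here is illustrative.
        cam.lookDir = (motionDir * (1.0 - normalWeight) +
                       (*contactNormal) * normalWeight).normalized();
    } else {
        cam.lookDir = motionDir;
    }
    cam.eye = proxyPos;
}
```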
  • a view volume associated with the first virtual camera is sized to exclude geometric elements that lie beyond a desired distance from the haptic interface location. This involves culling the graphical data to remove geometric primitives that lie outside the view volume of the first virtual camera.
  • hardware culling is employed, where primitives are culled by graphics hardware (i.e. a graphics card).
  • culling involves the use of a spatial partition, for example, an octree, BSP tree, or other hierarchical data structure, to exclude graphical data outside the view volume. Both hardware culling and a spatial partition can be used together. For example, where the number of primitives being culled by the graphics hardware is large, the spatial partition can reduce the amount of data sent to the hardware for culling, allowing for a more efficient process.
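  • The sketch below illustrates the kind of coarse, spatial-partition culling described above: an octree is walked and only primitives whose cells overlap a box around the haptic interface location are passed on to the graphics hardware for exact culling. The octree layout and types are assumptions made for the example.

```cpp
#include <vector>

struct AABB { double min[3], max[3]; };

// True if two axis-aligned boxes overlap.
bool overlaps(const AABB& a, const AABB& b) {
    for (int i = 0; i < 3; ++i)
        if (a.max[i] < b.min[i] || a.min[i] > b.max[i]) return false;
    return true;
}

struct OctreeNode {
    AABB bounds;
    std::vector<int> primitiveIds;        // primitives stored at this node
    OctreeNode* children[8] = {nullptr};  // null children for leaf nodes
};

// Collect only the primitives whose octree cells intersect the haptic view
// volume (a box of the desired half-width centered on the haptic interface
// location).  Everything else is skipped before it is ever sent to the
// graphics hardware, which then performs its own exact culling.
void cullToHapticViewVolume(const OctreeNode* node,
                            const AABB& hapticViewVolume,
                            std::vector<int>& out)
{
    if (!node || !overlaps(node->bounds, hapticViewVolume)) return;
    out.insert(out.end(), node->primitiveIds.begin(), node->primitiveIds.end());
    for (const OctreeNode* child : node->children)
        cullToHapticViewVolume(child, hapticViewVolume, out);
}
```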
  • the types of graphical data obtained from the first virtual camera include, for example, data in a depth buffer, a feedback buffer, a color buffer, a selection buffer, an accumulation buffer, a texture map, a fat framebuffer, rasterization primitives, application programming interface input data, and/or state data.
  • a fat framebuffer is also known as and/or includes a floating point auxiliary buffer, an attribute buffer, a geometry buffer, and/or a super buffer.
  • Fat framebuffers are flexible and allow a user to store a wide variety of different types of graphical data.
  • a fat framebuffer can include, for example, vertex positions, normals, color, texture, normal maps, bump maps, and/or depth data. Fat framebuffers can be used as input in custom pixel and/or vertex shader programs that are run on graphics hardware (i.e. on the graphics card). In one embodiment, a fat framebuffer is used to capture vertex positions and normals.
  • primitives are graphically rendered to a fat framebuffer, and pixel shading and/or vertex shading is performed using data from the fat framebuffer in the haptic rendering of a virtual environment.
  • a deferred shading process is used to render graphics primitives to a fat framebuffer.
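  • As a rough sketch only, the following OpenGL 3.0+ setup creates a "fat" framebuffer with two floating-point color attachments, one for positions and one for normals, into which a deferred-shading pass could render. The function name, texture formats, and the omission of shader code and a depth attachment are choices of this sketch, not details taken from the patent.

```cpp
// Assumes an OpenGL 3.0+ context is current and a function loader header
// (e.g. GLEW or glad) has already been included; error checking omitted.
GLuint createFatFramebuffer(int width, int height, GLuint texOut[2])
{
    GLuint fbo = 0;
    glGenFramebuffers(1, &fbo);
    glBindFramebuffer(GL_FRAMEBUFFER, fbo);

    glGenTextures(2, texOut);  // texOut[0]: positions, texOut[1]: normals
    for (int i = 0; i < 2; ++i) {
        glBindTexture(GL_TEXTURE_2D, texOut[i]);
        glTexImage2D(GL_TEXTURE_2D, 0, GL_RGBA32F, width, height, 0,
                     GL_RGBA, GL_FLOAT, nullptr);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MIN_FILTER, GL_NEAREST);
        glTexParameteri(GL_TEXTURE_2D, GL_TEXTURE_MAG_FILTER, GL_NEAREST);
        glFramebufferTexture2D(GL_FRAMEBUFFER, GL_COLOR_ATTACHMENT0 + i,
                               GL_TEXTURE_2D, texOut[i], 0);
    }
    static const GLenum drawBufs[2] = { GL_COLOR_ATTACHMENT0, GL_COLOR_ATTACHMENT1 };
    glDrawBuffers(2, drawBufs);

    // A depth attachment would normally be added as well, and the bound
    // fragment shader would write interpolated positions to output 0 and
    // normals to output 1 for later read-back and haptic use.
    return fbo;
}
```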
  • determining the position of the haptic interface location using data from the first virtual camera includes performing an intersection test to determine an intersection point and intersection normal in screen space, and transforming the coordinates of the intersection point and intersection normal from screen space to object space.
  • the graphical data can be used to determine the closest geometric feature, such as a point, line or plane, to the virtual proxy via a projection test.
  • a system for haptically rendering a virtual object in a virtual environment.
  • the system comprises a graphics thread that generates a visual display of a virtual environment, a collision thread that uses input from the graphics thread to determine if a user-directed virtual proxy collides with a surface within the virtual environment, and a servo thread that generates force to be applied to a user in real space through a haptic interface device according to input from the collision thread.
  • the graphics thread refreshes the visual display at a rate within a range, for example, from about 5 Hz to about 150 Hz, or from about 30 Hz to about 60 Hz. Refresh rates above and below these levels are possible as well.
  • the collision thread performs a collision detection computation at a rate within a range, for example, from about 30 Hz to about 200 Hz, or from about 80 Hz to about 120 Hz. Computation rates above and below these levels are possible as well.
  • the servo thread refreshes the force to be applied through the haptic interface device at a rate within a range from about 1000 Hz to about 10,000 Hz. Force refresh rates above and below these levels are possible as well.
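  • A highly simplified illustration of this three-thread arrangement is sketched below in C++; the loop bodies are placeholders and the sleep-based timing merely suggests the relative rates (graphics roughly 60 Hz, collision roughly 100 Hz, servo roughly 1000 Hz).

```cpp
#include <atomic>
#include <chrono>
#include <thread>

std::atomic<bool> running{true};

void graphicsThread() {                     // roughly 30-60 Hz
    while (running) {
        // renderSceneAndCaptureBuffers();  // hypothetical placeholder
        std::this_thread::sleep_for(std::chrono::milliseconds(16));
    }
}

void collisionThread() {                    // roughly 100 Hz
    while (running) {
        // updateLocalApproximationFromBuffers();  // hypothetical placeholder
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}

void servoThread() {                        // roughly 1000 Hz or more
    while (running) {
        // sendForceToHapticDevice(computeForce());  // hypothetical placeholder
        std::this_thread::sleep_for(std::chrono::milliseconds(1));
    }
}

int main() {
    std::thread g(graphicsThread), c(collisionThread), s(servoThread);
    std::this_thread::sleep_for(std::chrono::seconds(5));
    running = false;
    g.join(); c.join(); s.join();
}
```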
  • the servo thread includes a force shader.
  • an apparatus for providing haptic feedback to a user of a 3D graphics application.
  • the apparatus comprises a user-controlled haptic interface device adapted to provide a user input to a computer and to transmit force to a user.
  • the apparatus also includes computer software that, when operating with the computer and the user input, is adapted to determine force transmitted to the user.
  • the force transmitted to the user is determined by a process that comprises determining a haptic interface location in a 3D virtual environment corresponding to a location of the haptic interface device in real space and positioning a first virtual camera substantially at the haptic interface location. Graphical data is then accessed using the first virtual camera.
  • a position of the haptic interface location in relation to a surface of a virtual object in the virtual environment is determined using the graphical data from the first virtual camera.
  • an interaction force is determined, based at least in part on the position of the haptic interface location in relation to the surface of the virtual object.
  • each individual virtual object in a scene may have its own camera; thus, the number of cameras is unlimited. This allows a user to adapt the camera view to best suit individual objects, which allows for further optimization.
  • the camera position and view frustum for objects that are graphically rendered (and/or haptically rendered) using the depth buffer can be set differently than those rendered using the feedback buffer.
  • FIG. 1 is a block diagram featuring a method of haptically rendering one or more virtual objects in a virtual environment using data in a graphics pipeline, according to an illustrative embodiment of the invention.
  • FIG. 2 is a schematic diagram illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline, the diagram showing an interaction between a 3D graphics application, a graphics application programming interface (API), a 3D graphics card, and a haptics API, according to an illustrative embodiment of the invention.
  • FIG. 3 is a schematic diagram illustrating a graphics pipeline of a 3D graphics application, according to an illustrative embodiment of the invention.
  • FIG. 4A is a schematic diagram illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline, the system including a graphics thread, a collision thread, and a servo thread, according to an illustrative embodiment of the invention.
  • FIG. 4B is a schematic diagram illustrating the system of FIG. 4A in further detail, according to an illustrative embodiment of the invention.
  • FIG. 5 is a schematic diagram illustrating a servo thread of a haptics rendering pipeline, according to an illustrative embodiment of the invention.
  • FIG. 6 is a schematic diagram illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline, the diagram showing how third-party 3D graphics application software is integrated with the system, according to an illustrative embodiment of the invention.
  • FIG. 7 is a block diagram featuring a method of delivering interaction force to a user via a haptic interface device, the force based at least in part on graphical data from a virtual camera located at a haptic interface location, according to an illustrative embodiment of the invention.
  • FIG. 8A is a screenshot of a virtual object in a virtual environment as imaged from a fixed camera view, the screenshot indicating a haptic interface location, or proxy position, representing the position of a user in the virtual environment, according to an illustrative embodiment of the invention.
  • FIG. 8B is a screenshot of the virtual object of FIG. 8A as imaged from a moving camera view located at the haptic interface location shown in FIG. 8A , where graphical data from the images of either or both of FIG. 8A and FIG. 8B is/are used to haptically render the virtual object, according to an illustrative embodiment of the invention.
  • FIG. 9 is a block diagram featuring a 3D transformation pipeline for displaying 3D model coordinates on a 2D display device and for haptic rendering via a haptic interface device, according to an illustrative embodiment of the invention.
  • FIG. 10 is a schematic diagram illustrating the specification of a viewing transformation for a haptic camera view, according to an illustrative embodiment of the invention.
  • FIG. 11 is a schematic diagram illustrating the specification of a look direction for use in determining a viewing transformation for a haptic camera view when the position of a haptic interface location is constrained on the surface of a virtual object, according to an illustrative embodiment of the invention.
  • FIG. 12 is a block diagram featuring a method for interpreting data for haptic rendering by intercepting data from a graphics pipeline via a pass-through dynamic link library (DLL), according to an illustrative embodiment of the invention.
  • FIG. 13 is a schematic diagram illustrating a system for haptically rendering a virtual environment using data intercepted from a graphics pipeline of a 3D graphics application via a pass-through dynamic link library, according to an illustrative embodiment of the invention.
  • a computer hardware apparatus may be used in carrying out any of the methods described herein.
  • the apparatus may include, for example, a general purpose computer, an embedded computer, a laptop or desktop computer, or any other type of computer that is capable of running software, issuing suitable control commands, receiving graphical user input, and recording information.
  • the computer typically includes one or more central processing units for executing the instructions contained in software code that embraces one or more of the methods described herein.
  • the software may include one or more modules recorded on machine-readable media, where the term machine-readable media encompasses software, hardwired logic, firmware, object code, and the like. Additionally, communication buses and I/O ports may be provided to link any or all of the hardware components together and permit communication with other computers and computer networks, including the internet, as desired.
  • the term “3D” is interpreted to include 4D, 5D, and higher dimensions.
  • the view volume of this “haptic camera” can be sized to exclude unnecessary regions of the virtual environment, and the graphical data can be used for haptically rendering one or more virtual objects as the user moves about the virtual environment.
  • FIG. 1 is a block diagram 100 featuring a method of haptically rendering one or more virtual objects in a virtual environment using data in a graphics pipeline of a 3D graphics application.
  • the method shown in FIG. 1 includes three main steps—accessing data in a graphics pipeline of a 3D graphics application 102 ; interpreting data for use in haptic rendering 105 ; and haptically rendering one or more virtual objects in the virtual environment 110 .
  • a graphics pipeline generally is a series of steps, or modules, that involve the processing of 3D computer graphics information for viewing on a 2D screen, while at the same time rendering an illusion of three dimensions for a user viewing the 2D screen.
  • a graphics pipeline may comprise a modeling transformation module, in which a virtual object is transformed from its own object space into a common coordinate space containing other objects, light sources, and/or one or more cameras.
  • a graphics pipeline may also include a rejection module in which objects or primitives that cannot be seen are eliminated.
  • a graphics pipeline may include an illumination module that colors objects based on the light sources in the virtual environment and the material properties of the objects.
  • Other modules of the graphics pipeline may perform steps that include, for example, transformation of coordinates from world space to view space, clipping of the scene within a three dimensional volume (a viewing frustum), projection of primitives into two dimensions, scan-conversion of primitives into pixels (rasterization), and 2D image display.
  • Information about the virtual environment is produced in the graphics pipeline of a 3D graphics application to create a 2D display of the virtual environment as viewed from a given camera view.
  • the camera view can be changed to view the same virtual environment from a myriad of vantage points.
  • the invention capitalizes on this capability by haptically rendering the virtual environment using graphical data obtained from one or more virtual cameras.
  • the invention accesses data corresponding to either or both of a primary view 115 and a haptic camera view 120 , where the primary view 115 is a view of the virtual environment from a fixed location, and the haptic camera view 120 is a view of the virtual environment from a moving location corresponding to a user-controlled haptic interface location.
  • the haptic camera view 120 allows a user to reach behind an object to feel what is not immediately visible on the screen (the primary view 115 ).
  • Information about the geometry of the virtual environment can be accessed by making the appropriate function call to the graphics API.
  • Data can be accessed from one or more data buffers—for example, a depth buffer 125 , as shown in the block diagram of FIG. 1 , and a feedback buffer 130 (or its equivalent). Use of this data for haptic rendering enables the reuse of the scene traversal and graphics API rendering state and functionality.
  • the depth buffer 125 is typically a two-dimensional image containing pixels whose intensities correspond to depth (or height) values associated with those pixels.
  • the depth buffer is used during polygon rasterization to quickly determine if a fragment is occluded by a previously rendered polygon.
  • the depth buffer is accessed by making the appropriate function call to the graphics API. This information is then interpreted in step 105 of the method of FIG. 1 for haptic use.
  • Using depth buffer data provides several advantages. For example, depth buffer data is in a form whereby it can be used to quickly compute 3D line segment intersections and inside/outside tests. Furthermore, the speed at which these depth buffer computations can be performed is substantially invariant to the density of the polygons in the virtual environment. This is because the data in the depth buffer is scalar data organized in a 2D grid having known dimensions, the result of rasterization and occlusion processing.
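  • For example, with OpenGL the depth buffer can be read back into such a 2D grid of scalar values with a single call, as in the sketch below; it assumes an active OpenGL context and appropriate headers, and omits error handling.

```cpp
#include <GL/gl.h>
#include <vector>

// Read back a rectangle of the depth buffer as floats in [0, 1],
// usable afterward as a height map for haptic intersection tests.
std::vector<float> readDepthBuffer(int x, int y, int width, int height) {
    std::vector<float> depth(static_cast<size_t>(width) * height);
    glPixelStorei(GL_PACK_ALIGNMENT, 1);
    glReadPixels(x, y, width, height,
                 GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    return depth;   // depth[row * width + col]
}
```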
  • Other data buffers in the graphics pipeline include a color buffer 135 , a stencil buffer 140 , and an accumulation buffer 145 .
  • the color buffer 135 can store data describing the color and lighting conditions of vertices.
  • the accumulation buffer 145 can be used to accumulate precise intermediate rendering data.
  • the stencil buffer 140 can be used to flag attributes for each pixel and perform logic operations as part of pixel fragment rendering. These buffers may be used, for example, to modify and/or map various haptic attributes—for example, friction, stiffness, and/or damping—to the pixel locations of the depth buffer.
  • color buffer data 135 may be used to encode surface normals for force shading.
  • Stencil buffer data 140 can indicate whether or not to allow drawing for given pixels.
  • Stencil buffer data 140 can also be incremented or decremented every time a pixel is touched, thereby counting the number of overlapping primitives for a pixel.
  • the stencil contents can be used directly or indirectly for haptic rendering. For example, they can be used directly to flag pixels with attributes for enabling and/or disabling surface materials, such as areas of friction. They can also be used indirectly for haptics by graphically rendering geometry in a special way for haptic exploration, like depth peeling or geometry capping.
  • Encoding normals in the color buffer includes setting up the lighting of the virtual environment so that normals may be mapped into values in the color buffer, wherein each pixel contains four components <r,g,b,a>.
  • a normal vector <x,y,z> can be stored, for example, in the <r,g,b> components by modifying the lighting equation to use only the diffuse term and by applying the lighting equation for six colored lights directed along the local axes of the object coordinate space. For example, the x direction light is colored red, the y direction light is colored green, and the z direction light is colored blue, so that the directional components of the pixels match their color components. Then the lighting equation is written as a summation of dot products scaled by the respective color of the light. This results in normal values which may be used, for example, for smooth force shading.
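  • The sketch below shows the resulting normal-to-color mapping computed directly on the CPU for illustration; it uses a scale-and-bias convention to preserve the sign of each normal component, which is an assumption of this sketch rather than the six-light fixed-function setup described above.

```cpp
#include <cmath>

struct RGB { float r, g, b; };

// Encode a (possibly unnormalized) surface normal into color channels so
// that the directional components of a pixel match its color components.
// The 0.5*n + 0.5 scale-and-bias used here is one common convention.
RGB encodeNormalAsColor(float nx, float ny, float nz) {
    float len = std::sqrt(nx * nx + ny * ny + nz * nz);
    nx /= len; ny /= len; nz /= len;
    return { 0.5f * nx + 0.5f, 0.5f * ny + 0.5f, 0.5f * nz + 0.5f };
}

// Recover the normal from a color read back out of the color buffer,
// e.g. for smooth force shading on the haptics side.
void decodeColorAsNormal(const RGB& c, float& nx, float& ny, float& nz) {
    nx = 2.0f * c.r - 1.0f;
    ny = 2.0f * c.g - 1.0f;
    nz = 2.0f * c.b - 1.0f;
}
```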
  • Data contained in the depth buffer 125 , feedback buffer 130 , color buffer 135 , stencil buffer 140 , and/or accumulation buffer 145 , among other data buffers, may be altered by hardware such as a graphics card.
  • a graphics card can perform some of the graphical data processing required to produce 2D screen views of 3D objects, thereby saving CPU resources.
  • Data produced from such hardware-accelerated geometry modifications 150 is used in certain embodiments of the invention.
  • Modern graphics cards have the ability to execute custom fragment and vertex shading programs, enabling a programmable graphics pipeline. It is possible to leverage the results of such geometry modifications for purposes of haptic rendering. For example, view-dependent adaptive subdivision and view-dependent tessellation can be used to produce smoother-feeling surfaces. Displacement mapping can result in the haptic rendering of surface details such as ripples, crevices, and bumps, which are generated onboard the graphics card.
  • an “adaptive viewport” is used to optimize depth buffer haptic rendering, wherein the bounds of the viewport are read-back from the graphics card. For example, the entire viewport may not be needed; only the portion of the depth buffer that contains geometry within the immediate vicinity of the haptic interface location may be needed.
  • the bounds of the viewport that are to be read-back from the graphics card are determined by projecting the haptic interface location onto the near plane and by determining a size based on a workspace to screen scale factor. In this way, it is possible to ensure that enough depth buffer information is obtained to contain a radius of workspace motion mapped to screen space.
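  • One possible, non-authoritative realization of this adaptive viewport using OpenGL/GLU calls is sketched below; the workspace-to-screen scale factor is represented here by an assumed pixels-per-millimeter parameter, and the legacy matrix-stack queries are a choice of the sketch.

```cpp
#include <GL/glu.h>   // gluProject; assumes a legacy OpenGL/GLU context
#include <algorithm>
#include <vector>

// Read back only the portion of the depth buffer around the haptic interface
// location: project the proxy onto the window, size a sub-rectangle from a
// workspace-to-screen scale factor, clamp it to the viewport, and read it.
std::vector<float> readAdaptiveViewport(double proxyX, double proxyY, double proxyZ,
                                        double workspaceRadiusMm, double pixelsPerMm,
                                        int& outX, int& outY, int& outW, int& outH)
{
    GLdouble model[16], proj[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, model);
    glGetDoublev(GL_PROJECTION_MATRIX, proj);
    glGetIntegerv(GL_VIEWPORT, viewport);

    GLdouble winX, winY, winZ;
    gluProject(proxyX, proxyY, proxyZ, model, proj, viewport, &winX, &winY, &winZ);

    int radiusPx = static_cast<int>(workspaceRadiusMm * pixelsPerMm);
    outX = std::max<int>(viewport[0], static_cast<int>(winX) - radiusPx);
    outY = std::max<int>(viewport[1], static_cast<int>(winY) - radiusPx);
    int x1 = std::min<int>(viewport[0] + viewport[2], static_cast<int>(winX) + radiusPx);
    int y1 = std::min<int>(viewport[1] + viewport[3], static_cast<int>(winY) + radiusPx);
    outW = std::max(0, x1 - outX);
    outH = std::max(0, y1 - outY);

    std::vector<float> depth(static_cast<size_t>(outW) * outH);
    if (outW > 0 && outH > 0)
        glReadPixels(outX, outY, outW, outH, GL_DEPTH_COMPONENT, GL_FLOAT, depth.data());
    return depth;
}
```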
  • Certain 3D graphics API's offer a mode of operation called feedback mode, which provides access to the feedback buffer 130 ( FIG. 1 ) containing information used by the rasterizer for scan-filling primitives to the viewport.
  • the method of FIG. 1 includes the step of accessing the feedback buffer 130 and interpreting the data from the feedback buffer for haptic use.
  • the feedback buffer 130 provides access to the primitives within a view volume.
  • the view volume may be sized to include only portions of the virtual environment of haptic interest. Therefore, haptic rendering of primitives outside the view volume need not take place, and valuable processing resources are saved.
  • the feedback buffer provides data that is more precise than depth buffer data, since primitives in the feedback buffer have only undergone a linear transformation, whereas the depth buffer represents rasterized primitives, thereby possibly introducing aliasing errors.
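  • In OpenGL, feedback mode is entered and exited with glRenderMode, as in the sketch below; the buffer size, the GL_3D vertex format, and the token handling shown are illustrative choices, and drawScene stands in for the application's normal draw call.

```cpp
#include <GL/gl.h>
#include <vector>

// Capture primitives via OpenGL feedback mode (legacy GL).  With GL_3D each
// vertex contributes x, y, z in window coordinates.
std::vector<float> captureFeedbackBuffer(void (*drawScene)(), int maxFloats = 1 << 20)
{
    std::vector<float> buffer(maxFloats);
    glFeedbackBuffer(maxFloats, GL_3D, buffer.data());
    glRenderMode(GL_FEEDBACK);              // subsequent drawing fills the buffer
    drawScene();
    GLint used = glRenderMode(GL_RENDER);   // number of values actually written
    buffer.resize(used > 0 ? used : 0);

    // Walk the token stream: GL_POLYGON_TOKEN is followed by a vertex count n
    // and then n vertices of 3 floats each (other tokens omitted for brevity).
    for (size_t i = 0; i < buffer.size();) {
        GLfloat token = buffer[i++];
        if (token == GL_POLYGON_TOKEN) {
            int n = static_cast<int>(buffer[i++]);
            i += static_cast<size_t>(n) * 3;   // skip (or collect) the vertices
        } else {
            break;
        }
    }
    return buffer;
}
```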
  • Step 105 of the method of FIG. 1 is directed to interpreting the graphical rendering data accessed in step 102 for haptic use.
  • step 105 involves performing an intersection test 160 to determine an intersection point and a normal in screen space, and transforming the intersection point coordinates and normal coordinates to object space 165 .
  • the point and normal together define a local plane tangent to the surface of the virtual object.
  • the intersection test of step 160 is essentially a pixel raycast along a line segment, where the depth buffer is treated as a height map. A line segment that is defined in object space is transformed into screen space and tested against the height map to find an intersection.
  • intersection is found by searching along the line segment (in screen space) and comparing depth values to locations along the line segment. Once a crossing has been determined, a more precise intersection can be determined by forming triangles from the local depth values. This provides an intersection point and an intersection normal, where the intersection normal is normal to a surface corresponding to the screen space height map at the intersection point. In step 165 , the intersection point and normal are transformed back into object space to be used as part of a haptic rendering method.
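  • A minimal sketch of such a depth-buffer raycast is shown below, assuming the segment endpoints have already been transformed into screen space and that depth values lie in a normalized [0, 1] range; the fixed step count is a simplification, and the triangle-based refinement is only indicated by a comment.

```cpp
#include <vector>

struct ScreenPoint { double x, y, z; };   // pixel coordinates plus depth

// March a screen-space line segment across the depth buffer treated as a
// height map and report the first crossing.
bool raycastDepthBuffer(const std::vector<float>& depth, int width, int height,
                        const ScreenPoint& a, const ScreenPoint& b,
                        ScreenPoint& hit, int steps = 256)
{
    for (int i = 0; i <= steps; ++i) {
        double t = static_cast<double>(i) / steps;
        ScreenPoint p{ a.x + t * (b.x - a.x),
                       a.y + t * (b.y - a.y),
                       a.z + t * (b.z - a.z) };
        int px = static_cast<int>(p.x), py = static_cast<int>(p.y);
        if (px < 0 || px >= width || py < 0 || py >= height) continue;
        double surfaceDepth = depth[static_cast<size_t>(py) * width + px];
        if (p.z >= surfaceDepth) {   // the segment has crossed the height map
            hit = p;
            // A more precise intersection point and normal could be computed
            // here by forming triangles from neighboring depth samples, as
            // described above, before transforming back to object space.
            return true;
        }
    }
    return false;
}
```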
  • Example haptic rendering methods are described in co-owned U.S. Pat. No. 6,191,796 to Tarr, U.S. Pat. No. 6,421,048 to Shih et al., U.S. Pat. No.
  • the intersection test of step 160 also involves transforming a line segment from object space to screen space and performing a line intersection test against candidate primitives. An intersection point and intersection normal are found along the line segment and are transformed back into object space for haptic rendering.
  • Step 110 of the method of FIG. 1 is directed to haptically rendering one or more virtual objects in the virtual environment using the interpreted data from step 105 .
  • the haptic rendering step includes determining a haptic interface location in the virtual environment corresponding to a user's position in real space (i.e. via a user's manipulation of a haptic interface device) 170 , locating one or more points on the surface of one or more virtual objects in the virtual environment (i.e. the surface point nearest the haptic interface location) 175 , and determining an interaction force 180 according to the relationship between the haptic interface location and the surface location(s).
  • step 110 may involve determining when a collision occurs between a haptic interface location (i.e. a virtual proxy) and the surface of a virtual object; for example, a collision occurs when the haptic interface location crosses through the surface of a virtual object.
  • the interaction force that is determined in step 180 may be delivered to the user through the haptic interface device.
  • the determination and delivery of a feedback force to a haptic interface device is described, for example, in co-owned U.S. Pat. Nos. 6,191,796, 6,421,048, 6,552,722, 6,417,638, and 6,671,651, the disclosures of which are incorporated by reference herein in their entirety.
  • FIG. 2 is a schematic diagram 200 illustrating, in a simplified way, a system for haptically rendering a virtual environment using data in a graphics pipeline.
  • the diagram shows an interaction between a 3D graphics application 202 , a graphics application programming interface (API) 205 , a 3D graphics card 215 , and a haptics API 210 .
  • Certain methods of the invention may be embodied in, and may be performed using, the haptics API 210 , the graphics API 205 , the 3D graphics application 202 , and/or combinations thereof.
  • a 3D graphics application 202 may be written or adapted to enable the user of the application to see a visual representation of a 3D virtual environment on a two-dimensional screen while “feeling” objects in the 3D virtual environment using a peripheral device, such as a haptic interface device.
  • the graphics application makes function calls referencing function libraries in a graphics API 205.
  • the graphics API communicates with the 3D graphics card 215 in order to graphically render a virtual environment.
  • a representation of at least a portion of the virtual environment is displayed on a display device 220 .
  • the system 200 of FIG. 2 permits a programmer to write function calls in the 3D graphics application 202 to call a haptics API 210 for rendering a haptic representation of at least a portion of the virtual environment.
  • the haptics API 210 accesses graphical rendering data from the 3D graphics pipeline by making function calls to the graphics API.
  • the graphical data may include a data buffer, such as a depth buffer or feedback buffer.
  • the system 200 interprets the graphical data to haptically render at least a portion of the virtual environment.
  • the haptic rendering process may include determining a force feedback to deliver to the user via a haptic interface device 230 .
  • a haptic device API and a haptic device driver 225 are used to determine and/or deliver the force feedback to the user via the haptic interface device 230 .
  • the haptics API 210 performs high-level haptics scene rendering, and the haptic device API 225 performs low-level force rendering.
  • the high-level haptics API 210 provides haptic rendering of shapes and constraints and the low-level haptic device API 225 queries device state, sends forces, and/or performs thread control, calibration, and error handling.
  • the 3D graphics application may make direct calls to either or both the haptics API 210 and the haptic device API 225 .
  • FIG. 3 illustrates a 3D graphics pipeline 300 , in which graphical data describing one or more 3D objects in a virtual environment is used to create a 2D representation for display on a two-dimensional screen.
  • Graphical data corresponding to the scene geometry 302 , a camera view 305 , and lighting 310 undergoes a series of transformations 315 .
  • the resultant primitives data then undergoes a rasterization process 320 , producing 2D graphical data that may be stored in 2D buffers, for example, a color buffer 330 and a depth buffer 335 .
  • the primitives data as it exists prior to rasterization can be accessed, for example, via a feedback buffer 325 .
  • Methods of the invention use the graphical data in the 3D graphics pipeline 300 , for example, the feedback buffer 325 , the depth buffer 335 , and the color buffer 330 , for haptically rendering the virtual environment, as described in more detail herein.
  • FIG. 4A is a simplified schematic diagram illustrating components of a system 400 for haptically rendering a virtual environment using data in a graphics pipeline.
  • the system 400 comprises computational elements 402 including a graphics thread 405 , a collision thread 410 , and a servo thread 415 , as well as a display device 420 and a haptic interface device 425 .
  • the graphics thread 405 is adapted to generate a visual display of a virtual environment to be displayed on the display device 420 .
  • the collision thread 410 determines if a user-directed virtual proxy (i.e. a haptic interface location) collides with a surface within the virtual environment, based on input from the graphics thread 405 .
  • the servo thread 415 determines (and may generate) a force to be applied to a user in real space via the haptic interface device 425 according to input from the collision thread 410 .
  • FIG. 4B is a schematic diagram 427 illustrating the system of FIG. 4A in further detail.
  • the graphics thread 405 is adapted to generate a visual display of a virtual environment.
  • API commands 430 are used to access graphical rendering data, including the depth buffer 437 and feedback buffer 440 . In one embodiment, this data is used for both haptic and graphical rendering.
  • the user and/or the 3D graphics software programmer may define custom shapes 435 and custom constraints 442 independent of the graphics API 430 .
  • Custom shapes 435 include, for example, NURBS shapes, SubDs, voxel-shapes, and the like.
  • Custom constraints include, for example, constraint to surfaces, lines, curves, arcs, and the like.
  • Standard force effects 447 and user-defined force effects 450 may also be assigned in the graphics thread 405 .
  • Additional software, for example third-party software, may be integrated with the graphics thread 405, for example, in a user-defined proxy module 445.
  • the graphics thread 405 refreshes the display device 420 at a rate, for example, within the range from about 10 Hz to about 150 Hz, within the range from about 20 Hz to about 110 Hz, or, preferably, within the range from about 30 Hz to about 60 Hz. Rates above and below these levels are possible as well.
  • the collision thread 410 of FIG. 4B is adapted to determine whether a user-directed virtual proxy collides with a surface within the virtual environment.
  • the collision thread comprises three modules, including a shape collision renderer 453 , a constraint collision renderer 455 , and an effect renderer 460 .
  • the shape collision renderer 453 is adapted to calculate the shapes in the virtual environment and to identify their collision with each other or with proxies.
  • the shape collision renderer 453 may use data from the depth buffer 437, the feedback buffer 440, and user-defined shape data 435.
  • the constraint collision renderer 455 may use data from the depth buffer 437 , feedback buffer 440 , and from user-defined constraints 442 .
  • the effect renderer 460 may use data from the standard force effects module 447 and from the user-defined force effects module 450 .
  • One of the functions of the effect renderer 460 is to compose the force shader 480 in the servo thread 415 , so that the force shader 480 is able to simulate force effects at the typically higher servo loop rate.
  • the effect renderer 460 can start, stop, and manage parameters for the force shader 480 .
  • the collision thread 410 may perform a collision detection computation at a rate within the range from about 10 Hz to about 200 Hz, from about 80 Hz to about 120 Hz, or, preferably, at about 100 Hz. Rates above and below these levels are possible as well.
  • the servo thread 415 generates a force to be applied to a user in real space via the haptic interface device 425 according to input from the collision thread 410 .
  • the force is calculated by using data from the shape collision renderer 453 and from the constraint collision renderer 455 . Data from these two renderers are used to calculate a local approximation, which is transmitted to the local approximation renderer 465 .
  • the local approximation renderer 465 resolves a position/orientation transform for the proxy, which is used for producing a contact or constraint force.
  • the proxy can be represented by the position of a single point, but can alternatively be chosen as having any arbitrary geometry.
  • the local approximation transmitted to the local approximation renderer 465 is a collection of geometry determined in the collision thread generally at a lower processing rate than the servo thread. This local approximation geometry may be used for several updates of the servo loop thread. The local approximation geometry generally serves as a more efficient representation for collision detection and resolution than the source geometry processed by the collision thread.
  • the proxy position information is transmitted to a proxy shader 470 and then to a proxy renderer 475 , along with the user-defined proxy information 445 from the graphics thread.
  • a force shader 480 enables modification of a calculated force vector prior to transmitting the force vector to the haptic interface device 425 .
  • rendered proxy data from the proxy renderer 475 along with force vector data from the effect renderer 460 , are used by the force shader 480 to calculate a modified force vector, which is then transmitted to the haptic interface device 425 .
  • the force shader 480 is thus able to modify the direction and magnitude of the force vector as determined by preceding modules such as the proxy renderer 475 and the effect renderer 460 .
  • the force shader 480 may also have access to data from other modules in the schematic diagram 427 of FIG. 4B , such as the local approximation renderer 465 and the proxy shader 470 .
  • the force shader 480 may be used for simulating arbitrary force effects. Examples of such force effects include inertia, viscosity, friction, attraction, repulsion, and buzzing.
  • the force shader 480 may also be used for modifying the feel of a contacted surface.
  • the force shader 480 may be used to simulate a smooth surface by modifying the force vector direction so that it is smoothly varying while contacting discontinuous surface features. As such, force discontinuities apparent when transitioning from one polygonal face to another may be minimized by the force shader 480 by aligning the force vector to an interpolated normal based on adjacent faces.
  • the force shader 480 may also be used for general conditioning or filtering of the computed force vector, such as clamping the magnitude of the force vector or increasing the magnitude of the force vector over time. In one embodiment, the force shader is used to reduce the magnitude and directional discontinuities over time, which can result from instabilities in the control system or mechanical instabilities in the haptic interface device 425 .
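  • As one hedged example of such conditioning, the force shader sketched below clamps the force magnitude and low-pass filters changes over time; the force limit and the filter constant are illustrative values, not taken from the patent.

```cpp
#include <cmath>

struct Force { double x, y, z; };

static Force scale(const Force& v, double s) { return {v.x * s, v.y * s, v.z * s}; }
static double length(const Force& v) { return std::sqrt(v.x*v.x + v.y*v.y + v.z*v.z); }

// Clamp the computed force to what the device can safely render, then blend
// with the previously commanded force to suppress sudden magnitude and
// direction discontinuities.
Force forceShader(const Force& rawForce, Force& previousForce,
                  double maxForceN = 7.0, double smoothing = 0.8)
{
    Force f = rawForce;
    double mag = length(f);
    if (mag > maxForceN)
        f = scale(f, maxForceN / mag);      // clamp magnitude

    Force out{ smoothing * previousForce.x + (1.0 - smoothing) * f.x,
               smoothing * previousForce.y + (1.0 - smoothing) * f.y,
               smoothing * previousForce.z + (1.0 - smoothing) * f.z };
    previousForce = out;
    return out;
}
```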
  • the servo thread 415 may refresh the force to be applied through the haptic interface device 425 at a rate within the range from about 500 Hz to about 15,000 Hz, from about 1000 Hz to about 10,000 Hz, or from about 2000 Hz to about 6000 Hz. Rates above and below these levels are possible as well.
  • a scheduler interface manages the high frequency for sending forces and retrieving state information from the haptic interface device 425 .
  • the scheduler allows the 3D graphics application to communicate effectively with the servo thread in a thread-safe manner and may add and delete operations to be performed in the servo thread.
  • a calibration interface allows the system to maintain an accurate estimate of the physical position of the haptic interface device 425 . Calibration procedures may be manual and/or automatic.
  • FIG. 5 is a schematic diagram 500 illustrating a servo thread of an illustrative haptics rendering pipeline.
  • Collision and constraint resolution data 502 from the virtual environment is transmitted from the collision thread to the local approximation renderer 465 .
  • the local approximation renderer 465 calculates a proxy position, which is then transmitted to a proxy shader 470 and then to impedance control 515 , producing a force.
  • the force is modified by the force shader 480 , then transmitted to the haptic interface device 425 following application of inverse kinematics 525 .
  • Forward kinematics 535 from the haptic interface device 425 is fed back to the force shader 480 and the impedance controller 515, and is transmitted to a transform shader 540, which provides feedback to the local approximation renderer 465 and proxy shader 470.
  • FIG. 6 is a schematic diagram 600 illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline of a 3D graphics application.
  • the diagram 600 shows how third-party 3D graphics application software is integrated with the system.
  • the diagram 600 illustrates the interaction between the 3D graphics application 602 , a haptics API 610 , and a haptic device API 625 .
  • the 3D graphics application 602 can make a function call to the haptics API 610.
  • the haptics API 610 then accesses data from the 3D graphics pipeline.
  • the haptics API 610 also transmits data to the haptic device API 625 , which performs low-level force rendering.
  • FIG. 7 is a block diagram 700 featuring a method of delivering interaction force to a user via a haptic interface device, where the force is based at least in part on graphical data from a virtual camera located at a haptic interface location.
  • the method includes determining a haptic interface location in a 3D virtual environment corresponding to the position of a haptic interface device in real space 702 .
  • the method further includes positioning a first virtual camera at the haptic interface location 705 .
  • the first virtual camera is usually implemented using matrix transformations that map 3D virtual objects in coordinate space into a 2D representation, so that the virtual environment, populated with the virtual objects, appears as if viewed by a camera.
  • the virtual camera view can be changed to view the same object from any of a plurality of vantage points.
  • These transformations include a modeling transformation, a viewing transformation, a projection transformation, and a display device transformation. These are discussed in further detail with respect to FIG. 9 herein below.
  • the position of the first camera is updated as the haptic interface location changes, according to the manipulation of the haptic interface device by the user.
  • the method of FIG. 7 next includes the step of accessing graphical data corresponding to the virtual environment as viewed from the first virtual camera at the haptic interface location 710 .
  • the accessed data is then used in the graphical rendering of the virtual environment, for example, according to methods described herein.
  • the method of FIG. 7 may optionally include the step of positioning a second virtual camera at a location other than the haptic interface location 715 .
  • the method would then comprise the step of accessing graphical data from the second virtual camera 720 .
  • the accessed data may be used for graphical rendering, haptic rendering, or both.
  • the second virtual camera is used for graphical rendering, while the first virtual camera is used for haptic rendering.
  • the second camera may move, or it may be static.
  • the second virtual camera is fixed while the first virtual camera is capable of moving.
  • the second virtual camera operates using matrix transformations as described with respect to step 705 .
  • the second virtual camera has associated with it a look direction and an eye position, independent of the look direction and eye position of the first virtual camera.
  • FIG. 8A is a screenshot 800 of a virtual object (a teapot) in a virtual environment as imaged from a fixed camera view (i.e. the second camera view, as described with respect to FIG. 7 ).
  • the screenshot 800 shows a haptic interface location 805 , representing the position of a user in the virtual environment.
  • a “haptic camera” (first virtual camera) is located at the haptic interface location, which moves as a user manipulates a haptic interface device in real space.
  • FIG. 8B is a screenshot 810 of the virtual object of FIG. 8A as imaged from the moving haptic camera. As can be seen from the screenshot 810 , additional detail is viewable from this vantage point.
  • the view volume of the haptic camera may be optimized so as to view only areas of the virtual environment the user will want to touch or will be able to touch at any given time.
  • the view volume of the first virtual camera, dedicated to haptic rendering, may be limited to objects within the vicinity and trajectory of the haptic interface.
  • haptic rendering will only need to be performed for this limited view volume, and not for all the geometry that is viewed from the vantage point of a graphics-dedicated second virtual camera. The method thereby increases the efficiency of the haptic rendering process.
  • the method of FIG. 7 comprises determining a position of the haptic interface location in relation to a surface of a virtual object in the virtual environment by using graphical data from either or both of the first virtual camera and the second virtual camera 725 .
  • the method also includes determining an interaction force based at least in part on the position of the haptic interface location in relation to the surface of the virtual object 730 .
  • an interaction force is delivered to a user through the haptic interface device 735 .
  • the determination and delivery of an interaction force is described, for example, in U.S. Pat. Nos. 6,191,796, 6,421,048, 6,552,722, 6,417,638, and 6,671,651, the disclosures of which are incorporated by reference herein in their entirety.
  • FIG. 9 is a schematic diagram 900 illustrating a 3D transformation pipeline.
  • 3D graphics applications generally perform a series of transformations in order to display 3D model coordinates on a 2D display device. These transformations include a shape-world transformation 902 , a world-view transformation 905 , a view-clip transformation 910 , and a clip-window transformation 915 . Additional transformations that are used to haptically render a virtual environment via a haptic interface device include a view-touch transformation 920 and a touch-workspace transformation 925 . The transformations in FIG. 9 can be repurposed for rendering a scene from a virtual haptic camera viewpoint, thereby affording improved acquisition and utilization of graphics pipeline data.
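  • For illustration, the sketch below composes such a chain of 4x4 matrices (column-major, as in OpenGL) to map shape coordinates to window coordinates; the individual matrices are assumed to be supplied by the application and the haptics layer, and the view-touch and touch-workspace transforms would extend the chain in the same way.

```cpp
#include <array>

// A minimal column-major 4x4 matrix type, just to show how the transforms of
// FIG. 9 compose; element (row, col) is stored at index col*4 + row.
using Mat4 = std::array<double, 16>;

Mat4 multiply(const Mat4& a, const Mat4& b) {
    Mat4 r{};
    for (int c = 0; c < 4; ++c)
        for (int row = 0; row < 4; ++row)
            for (int k = 0; k < 4; ++k)
                r[c * 4 + row] += a[k * 4 + row] * b[c * 4 + k];
    return r;
}

// Composite transform taking local shape coordinates all the way to window
// (pixel) coordinates.
Mat4 shapeToWindow(const Mat4& shapeWorld, const Mat4& worldView,
                   const Mat4& viewClip,   const Mat4& clipWindow)
{
    return multiply(clipWindow,
           multiply(viewClip,
           multiply(worldView, shapeWorld)));
}
```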
  • the shape-world transformation 902 of the pipeline of FIG. 9 transforms geometry describing a virtual object from its local coordinate space, or shape coordinates, into world coordinates, i.e., the main reference coordinate space for the 3D virtual environment. All objects in the virtual environment have a relationship to world coordinates, including cameras.
  • the world-view transformation 905 of the pipeline of FIG. 9 maps world coordinates to view coordinates, the local coordinates of the virtual camera.
  • FIG. 10 illustrates the relation of view coordinates (X V , Y V , Z V ), with an associated look direction and camera eye position, to world coordinates (X W , Y W , Z W ).
  • the look direction of FIG. 10 is preferably mapped to the z-axis of the world-view transform.
  • the world-view transformation can be customized for translating and rotating the virtual camera so that it can view the scene as if attached to the position of the haptic device's virtual proxy.
  • the camera eye position of the world-view transformation is sampled from the virtual proxy position.
  • the camera eye position is preferably only updated when the virtual proxy moves beyond a threshold distance from the current eye position.
  • the threshold distance is 2 mm.
  • the look direction of the world-view transformation is determined by the motion of the proxy and optionally by the contact normal, for example, if the proxy is in contact with a virtual object in the virtual environment.
  • the proxy's position can be constrained to remain on the surface of the contacted virtual object.
  • FIG. 11 illustrates the look direction 1110 when the virtual proxy is in contact with a virtual object 1101 .
  • the camera eye position is updated as soon as the proxy has moved beyond a threshold distance. This defines the motion vector 1120 of the proxy.
  • when the proxy moves in free space, the look direction is the normalized motion vector 1120.
  • when the proxy is in contact with a virtual object, the look direction is a linear combination of the normalized motion vector 1120 and the contact normal 1105, as illustrated in FIG. 11.
  • the haptic camera angle tilts to show more of what lies ahead, along the direction of motion.
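  • As a hedged sketch of the eye-position and look-direction updates described above (assumed variable names, threshold, and blend weight; not the patent's code):

```cpp
// Sketch: sample the haptic camera eye from the proxy only after it moves a
// threshold distance, and blend the look direction with the contact normal.
#include <glm/glm.hpp>

struct HapticCamera {
    glm::vec3 eye{0.0f};
    glm::vec3 lookDir{0.0f, 0.0f, -1.0f};
};

void updateHapticCamera(HapticCamera& cam,
                        const glm::vec3& proxyPos,        // virtual proxy position
                        bool inContact,
                        const glm::vec3& contactNormal,   // valid only when inContact
                        float threshold = 2.0f,           // e.g. 2 mm, in world units
                        float normalWeight = 0.5f)        // assumed blend weight
{
    glm::vec3 motion = proxyPos - cam.eye;
    if (glm::length(motion) < threshold)
        return;                                  // avoid jitter: keep the current eye

    glm::vec3 motionDir = glm::normalize(motion);
    cam.lookDir = inContact
        ? glm::normalize(glm::mix(motionDir, contactNormal, normalWeight))
        : motionDir;                             // free space: pure motion vector
    cam.eye = proxyPos;                          // sample the eye from the proxy
}
```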
  • the world-view transformation 905 of FIG. 9 can be computed by forming a composite rotation-translation matrix that transforms coordinates from world coordinates into view coordinates, mapping the look direction to an axis (preferably the z-axis) and mapping the camera eye position to the origin.
  • An up vector, such as the y-axis, may be selected to keep the view consistently oriented.
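  • For illustration, the composite rotation-translation described above corresponds to a standard look-at construction; the sketch below (again GLM-based, with assumed inputs) maps the camera eye to the view-space origin and the look direction onto the view z-axis, using the world y-axis as the up vector.

```cpp
// Sketch: build the haptic camera's world-view matrix from the sampled eye
// position and look direction. glm::lookAt maps the look direction onto the
// view-space -z axis (the OpenGL convention).
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 hapticWorldView(const glm::vec3& eye, const glm::vec3& lookDir)
{
    const glm::vec3 up(0.0f, 1.0f, 0.0f);       // keeps the view consistently oriented
    return glm::lookAt(eye, eye + lookDir, up);
}
```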
  • the view-clip transformation 910 is another of the transformations in the 3D transformation pipeline of FIG. 9 , also known as the projection transform.
  • the view-clip transformation 910 enables manipulations of the shape and size of the view volume.
  • the view volume determines which geometry is lit and rasterized for display on the 2D display device. As a result, geometry that lies outside the view volume is usually excluded from the remainder of the graphics pipeline.
  • the view volume may be sized so as to include only objects that are likely to be touched.
  • the size of the view volume is specified as a radius of motion in workspace coordinates of the haptic device which is transformed into view coordinates when composing the view-clip matrix.
  • An orthographic view volume mapping centered around the origin is used with extents determined by the motion radius.
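  • As a sketch of this sizing step (illustrative only; the radius argument is assumed to have already been transformed into view coordinates):

```cpp
// Sketch: size the haptic camera's orthographic view volume from the radius of
// motion, so that only geometry the user can currently reach is processed.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::mat4 hapticViewClip(float motionRadius)        // radius of motion, in view units
{
    return glm::ortho(-motionRadius, motionRadius,  // left, right
                      -motionRadius, motionRadius,  // bottom, top
                      -motionRadius, motionRadius); // near, far
}
```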
  • the clip-window transformation 915 converts clip coordinates into the physical coordinates of the display device so that an object in clip coordinates may be displayed on the display device.
  • the clip-window transformation 915 is specified by a 2D pixel offset and a width and height in pixels.
  • the size of the display device buffer used for haptic rendering involves a tradeoff: a larger buffer captures finer geometric detail but takes longer to read back from the graphics card. In one embodiment, a width and height of 256 by 256 pixels for the display device buffer provides a sufficient compromise. These dimensions may be optimized by considering the allowable time for pixel buffer read-back from the graphics card and the size of the smallest geometric feature in pixel coordinates.
  • the view-touch transformation 920 maps an object from view-coordinates into the touch coordinate space.
  • the view-touch transformation 920 is convenient for altering the alignment or offset of touch interactions with respect to the view. As a default, this transformation may be left as identity so that the position and alignment of touch interactions are consistent with the view position and direction.
  • the view-touch transformation 920 may be optionally modified to accommodate touch interactions with the scene in which the haptic device and display device are meant to be independent, for example, during use of a head-mounted display.
  • the touch-workspace transformation 925 maps an object in touch-coordinates into the local coordinate space of the haptic interface device.
  • the haptic workspace is the physical space reachable by the haptic device.
  • the PHANTOM® Omni™ device manufactured by SensAble Technologies, Inc., of Woburn, Mass., has a physical workspace of dimensions 160×120×70 mm.
  • the shape-world transformation 902, the world-view transformation 905, the view-clip transformation 910, the clip-window transformation 915, the view-touch transformation 920, and/or the touch-workspace transformation 925 may be structured for viewing a scene of a virtual environment from any of one or more virtual cameras.
  • these transformations may be structured for viewing a scene from a first virtual camera dedicated to haptic rendering, as well as a second virtual camera dedicated to graphical rendering.
  • the processing capability of the graphics pipeline is leveraged for both graphical and haptic rendering.
  • FIG. 12 is a block diagram 1200 featuring an alternative method for interpreting data for haptic rendering, including the step of intercepting data from a graphics pipeline via a pass-through dynamic link library (DLL).
  • a graphics API generally uses a DLL file so that a 3D graphics application may access the functions in its library.
  • a pass-through DLL may be named to match the name of the usual DLL file used by the graphics API, while the “real” graphics API DLL file is renamed.
  • function calls from the 3D graphics application will call the pass-through DLL instead of calling the graphics API DLL.
  • the pass-through DLL does not impede normal functioning of the 3D graphics application because all function calls are redirected by the pass-through DLL to the regular graphics API DLL.
  • in order for the pass-through DLL to intercept data from the 3D graphics pipeline, logic is inserted in its code to respond to particular graphics API function calls.
  • the pass-through DLL may also directly call functions of the graphics API, hence directly accessing the 3D graphics pipeline and the associated buffer data.
  • Creating a pass-through DLL may require replicating the exported function table interface of the graphics API DLL. This may be accomplished by determining the signature of every function exported by the DLL. A binary file dumper can then be used to view the symbols exported by the DLL, and the header file can be consulted to determine the number and type of the function arguments and the return type.
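  • The sketch below is a minimal Windows-flavored illustration of such a forwarded export, assuming the original OpenGL DLL has been renamed to a hypothetical opengl32_real.dll; a complete pass-through DLL would replicate every export of the graphics API DLL in the same manner.

```cpp
// Sketch: a single forwarded export of a pass-through DLL (glClear only).
// The renamed DLL name is an assumption made for illustration.
#include <windows.h>

typedef unsigned int GLbitfield;                          // matches the OpenGL type
typedef void (APIENTRY* PFNGLCLEAR)(GLbitfield mask);

static PFNGLCLEAR real_glClear = nullptr;

static void loadRealDll()
{
    if (!real_glClear) {
        HMODULE realDll = LoadLibraryA("opengl32_real.dll");  // renamed original DLL
        real_glClear = (PFNGLCLEAR)GetProcAddress(realDll, "glClear");
    }
}

extern "C" __declspec(dllexport) void APIENTRY glClear(GLbitfield mask)
{
    loadRealDll();
    // Interception point: inspect or record any state of interest here, then
    // forward the call so the 3D graphics application behaves exactly as before.
    real_glClear(mask);
}
```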
  • in step 1205 of the method of FIG. 12, a subset of the accessed data is written to a memory buffer and a subset of data is read from this memory buffer.
  • This memory buffer may be shared between the pass-through DLL and a separate haptic rendering process.
  • in step 1210, a height map is determined using the accessed data. For example, if the depth buffer is accessed from the graphics pipeline, the depth buffer itself may be treated as a height map. Such a height map may describe at least some of a surface of a virtual object in the virtual environment.
  • a mesh is generated using the height map determined in step 1210 .
  • alternatively, the haptic rendering method interprets the height field directly, as described elsewhere herein: haptic rendering of a depth buffer is performed directly in screen space and in a local fashion (i.e. via a haptic camera), so it is not necessary that the entire image be transformed and then processed to generate a mesh. In order to generate a mesh from depth buffer data, the data representing depth values and screen coordinate locations may be transformed from screen space to object space.
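  • As an illustration of this screen-space to object-space transformation (a sketch under assumed matrix and viewport inputs, not the patent's code), a single depth-buffer sample can be lifted back into object space with GLM as follows.

```cpp
// Sketch: unproject one depth-buffer sample from window coordinates back into
// the coordinate space of the supplied world-view matrix.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>

glm::vec3 depthSampleToObjectSpace(float px, float py, float depth,  // window x, y and [0,1] depth
                                   const glm::mat4& worldView,
                                   const glm::mat4& viewClip,
                                   const glm::vec4& viewport)        // x, y, width, height
{
    // glm::unProject inverts the projection, view, and viewport mappings.
    return glm::unProject(glm::vec3(px, py, depth), worldView, viewClip, viewport);
}
```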
  • FIG. 13 is a schematic diagram 1300 illustrating an alternative system for haptically rendering a virtual environment using data intercepted from a graphics pipeline of a 3D graphics application via a pass-through dynamic link library.
  • a 3D graphics application 1300 is developed using a graphics API.
  • when the 3D graphics application 1300 makes calls to the graphics API DLL file 1310, the calls are intercepted by a pass-through DLL file 1305.
  • the pass-through DLL does not impede normal functioning of the 3D graphics application because all function calls are redirected by the pass-through DLL to the regular graphics API DLL.
  • the pass-through DLL 1305 may then make function calls to the graphics API DLL 1310 , thereby accessing buffer data from the 3D graphics pipeline.
  • the graphics API DLL 1310 operates to render graphics on a display screen via a 3D graphics card 1315 .
  • the pass-through DLL 1305 may call the graphics API DLL to access the graphic rendering data from the 3D graphics pipeline and store this data in memory buffer 1320 .
  • the data may be read from the memory buffer 1320 in a haptic rendering process to provide touch feedback based on the intercepted graphical data.
  • the memory buffer 1320 may be shared with a haptic API 1325 .
  • the haptic API 1325 accesses the graphic rendering data in the memory buffer 1320 and prepares it for low-level haptic rendering by the haptic device API 1330.
  • the haptic device API 1330 then produces a force signal which a device driver uses to generate and transmit a force to a user via the haptic interface device 1335 .
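  • A minimal sketch of such a shared memory buffer on Windows appears below; the mapping name and the use of a raw float array are assumptions, process synchronization is omitted, and the same call sequence would be issued from both the pass-through DLL and the haptic rendering process.

```cpp
// Sketch: create or open a named shared-memory region through which the
// pass-through DLL and the haptic rendering process exchange graphics data.
#include <windows.h>
#include <cstring>

void* openSharedBuffer(size_t bytes)
{
    HANDLE mapping = CreateFileMappingA(INVALID_HANDLE_VALUE, nullptr,
                                        PAGE_READWRITE, 0, (DWORD)bytes,
                                        "HapticGraphicsBuffer");   // assumed name
    if (!mapping) return nullptr;
    return MapViewOfFile(mapping, FILE_MAP_ALL_ACCESS, 0, 0, bytes);
}

// Writer side (pass-through DLL): copy intercepted depth data into the buffer.
// The haptic rendering process opens the same mapping and reads it back.
void writeDepthData(void* shared, const float* depth, size_t count)
{
    std::memcpy(shared, depth, count * sizeof(float));
}
```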

Abstract

The invention provides systems and methods for using a “haptic camera” within a virtual environment and for using graphical data from the haptic camera to produce touch feedback. The haptic camera obtains graphical data pertaining to virtual objects within the vicinity and along the trajectory of a user-controlled haptic interface device. The graphical data from the camera is interpreted haptically, thereby allowing touch feedback corresponding to the virtual environment to be provided to the user.

Description

    RELATED APPLICATIONS
  • The present application is related to commonly-owned U.S. patent application entitled, “Apparatus and Methods for Haptic Rendering Using Data in a Graphics Pipeline,” by Itkowitz, Shih, Midura, Handley, and Goodwin, filed under Attorney Docket No. SNS-012 on even date herewith, the text of which is hereby incorporated by reference in its entirety; the present application is also related to commonly-owned international (PCT) patent application entitled, “Apparatus and Methods for Haptic Rendering Using Data in a Graphics Pipeline,” by Itkowitz, Shih, Midura, Handley, and Goodwin, filed under Attorney Docket No. SNS-012PC on even date herewith, the text of which is hereby incorporated by reference in its entirety; the present application claims the benefit of U.S. Provisional Patent Application No. 60/584,001, filed on Jun. 29, 2004, the entirety of which is incorporated by reference herein.
  • FIELD OF THE INVENTION
  • The invention relates generally to haptic rendering of virtual environments. More particularly, in certain embodiments, the invention relates to the haptic rendering of a virtual environment using data from the graphics pipeline of a 3D graphics application.
  • BACKGROUND OF THE INVENTION
  • Haptic technology involves simulating virtual environments to allow user interaction through the user's sense of touch. Haptic interface devices and associated computer hardware and software are used in a variety of systems to provide kinesthetic and/or tactile sensory feedback to a user in addition to conventional visual feedback, thereby affording an enhanced man/machine interface. Haptic systems are used, for example, in manufactured component design, surgical technique training, industrial modeling, robotics, and personal entertainment. An example haptic interface device is a six degree of freedom force reflecting device as described in co-owned U.S. Pat. No. 6,417,638, to Rodomista et al., the description of which is incorporated by reference herein in its entirety.
  • A haptic rendering process provides a computer-based kinesthetic and/or tactile description of one or more virtual objects in a virtual environment. A user interacts with the virtual environment via a haptic interface device. Analogously, a graphical rendering process provides a graphical description of one or more virtual objects in a virtual environment. Typically, a user interacts with graphical objects via a mouse, joystick, or other controller. Current haptic systems process haptic rendering data separately from graphical rendering data.
  • The graphical rendering of 3D virtual environments has been enhanced by the advent of 3D graphics application programming interfaces (APIs), as well as 3D graphics (video) cards. A programmer may create or adapt a 3D graphics application for rendering a 3D graphics virtual environment using the specialized libraries and function calls of a 3D graphics API. Thus, the programmer avoids having to write graphics rendering code that is provided in the API library. As a result, the task of programming a 3D graphics application is simplified. Furthermore, graphics standards have developed such that many currently-available 3D graphics applications are compatible with currently-available 3D graphics API's, allowing a user to adapt the 3D graphics application to suit his/her purpose. Examples of such 3D graphics API's include OpenGL, DirectX, and Java 3D.
  • In addition to 3D graphics API's, 3D graphics cards have also improved the graphical rendering of 3D virtual objects. A 3D graphics card is a specialized type of computer hardware that speeds the graphical rendering process. A 3D graphics card performs a large amount of the computation work necessary to translate 3D information into 2D images for viewing on a screen, thereby saving CPU resources.
  • While 3D graphics API's and graphics cards have significantly improved the graphical rendering of 3D objects, the haptic rendering of 3D objects in a virtual environment is a comparatively inefficient process. Haptic rendering is largely a separate process from graphical rendering, and currently-available 3D graphics applications are incompatible with haptic systems, since graphics applications are not designed to interpret or provide haptic information about a virtual environment.
  • Furthermore, haptic rendering processes are generally computation-intensive, requiring high processing speed and a low latency control loop for accurate force feedback rendering. For example, in order to realistically simulate touch-based interaction with a virtual object, a haptic rendering process must typically update force feedback calculations at a rate of about 1000 times per second. This is significantly greater than the update rate needed for realistic dynamic graphics display, which is from about 30 to about 60 times per second in certain systems. For this reason, current haptic systems are usually limited to generating force feedback based on single point interaction with a virtual environment. This is particularly true for haptic systems that are designed to work with widely-available desktop computers and workstations with state-of-the-art processors.
  • Thus, there is a need for increased efficiency in haptic rendering. Improvement is needed, for example, to facilitate the integration of haptics with currently-available 3D applications, to permit greater haptic processing speeds, and to enable the use of more sophisticated force feedback techniques, thereby increasing the realism of a user's interaction with a virtual environment.
  • SUMMARY OF THE INVENTION
  • The invention provides systems and methods for using a “haptic camera” within a virtual environment and for using graphical data from the haptic camera to produce touch feedback. The haptic camera obtains graphical data pertaining to virtual objects within the vicinity and along the trajectory of a user-controlled haptic interface device. The graphical data from the camera is interpreted haptically, thereby allowing touch feedback corresponding to the virtual environment to be provided to the user.
  • The efficiency of haptic rendering is improved, because the view volume can be limited to a region of the virtual environment that the user will be able to touch at any given time, and further, because the method takes advantage of the processing capacity of the graphics pipeline. This method also allows haptic rendering of portions of a virtual environment that cannot be seen in a 2D display of the virtual object, for example, the back side of an object, the inside of crevices and tunnels, and portions of objects that lie behind other objects.
  • A moving haptic camera offers this advantage. Graphical data from a static camera view of a virtual environment can be used for haptic rendering; however, it is generally true that only geometry visible in the view direction of the camera can be used to produce touch feedback. A moving camera (and/or multiple cameras) allows graphical data to be obtained from more than one view direction, thereby allowing the production of force feedback corresponding to portions of the virtual environment that are not visible from a single static view. The interaction between the user and the virtual environment is further enhanced by providing the user with a main view of the virtual environment on a 2D display while, at the same time, providing the user with haptic feedback corresponding to the 3D virtual environment. The haptic feedback is updated according to the user's manipulation of a haptic interface device, allowing the user to “feel” the virtual object at any position, including regions that are not visible on the 2D display.
  • The invention provides increased haptic rendering efficiency, permitting greater haptic processing speeds for more realistic touch-based simulation. For example, in one embodiment, the force feedback computation speed is increased from a rate of about 1000 Hz to a rate of about 10,000 Hz or more. Furthermore, the invention allows more sophisticated haptic interaction techniques to be used with widely-available desktop computers and workstations. For example, forces can be computed based on the interaction of one or more points, lines, planes, and/or spheres with virtual objects in the virtual environment, not just based on single point interaction. More sophisticated haptic interface devices that require multi-point interaction can be used, including pinch devices, multi-finger devices, and gloves, thereby enhancing the user's haptic experience. Supported devices include kinesthetic and/or tactile feedback devices. For example, in one embodiment, a user receives tactile feedback when in contact with the surface of a virtual object such that the user can sense the texture of the surface.
  • In one aspect of the invention, a method is provided for haptically rendering a virtual object in a virtual environment. The method includes determining a haptic interface location in a 3D virtual environment corresponding to a haptic interface device in real space. A first virtual camera is positioned at the haptic interface location, and graphical data corresponding to the virtual environment is accessed from this first virtual camera. Additionally, the method comprises determining a position of the haptic interface location in relation to one or more geometric features of a virtual object in the virtual environment—for example, a surface, point, line, or plane of (or associated with) the virtual object—by using graphical data from the first virtual camera. The method also includes determining an interaction force based at least in part on the position of the haptic interface location in relation to the geometric feature(s) of the virtual object. In one embodiment, the interaction force is delivered to a user through the haptic interface device. In a preferred embodiment, the position of the first camera is updated as the haptic interface location changes, according to movement of the haptic interface device.
  • The invention also provides a two-pass rendering technique using two virtual cameras. For example, the invention provides methods using a first virtual camera view dedicated for use in haptically rendering a 3D virtual environment and a second virtual camera view for graphically rendering the virtual environment for display. Accordingly, in one embodiment, the invention includes the steps of positioning a second virtual camera at a location other than the haptic interface location and accessing graphical data from the second virtual camera corresponding to the virtual environment. In one embodiment, the second virtual camera is at a fixed location, while the first virtual camera moves, for example, according to the movement of the haptic interface location.
  • Preferred methods of the invention leverage the processing capability of the graphics pipeline for haptic rendering. For example, graphical data corresponding to the view(s) from one or more virtual cameras is accessed from a graphics pipeline of a 3D graphics application. In one embodiment, the step of determining a position of the haptic interface location using data from the first virtual camera includes determining a world-view transformation that maps coordinates corresponding to the haptic virtual environment (i.e. world coordinates) to coordinates corresponding to the first virtual camera (i.e. view coordinates). The world-view transformation can be customized for translating and rotating the camera to view the scene as if attached to the position of the haptic device's proxy in the virtual environment (i.e. the haptic interface location). Additional transforms may be determined and/or applied, including a shape-world transformation, a view-clip transformation, a clip-window transformation, a view-touch transformation, and a touch-workspace transformation.
  • The invention also provides a method of determining what the view looks like from the “haptic camera.” Generally, in order to specify a 3D world-view transformation, a camera eye position and a look direction are needed. Thus, in one embodiment, the step of determining a world-view transformation includes determining an eye position and a look direction. To determine the eye position, the position of the haptic interface location (i.e. the virtual proxy position) is sampled. In order to avoid undesirable jitter, the eye position is preferably updated only when the virtual proxy moves beyond a threshold distance from the current eye position. To determine the look direction, a vector representing the motion of the haptic interface location is determined. Preferably, the look direction is determined by the motion of the proxy and optionally by the contact normal, for example, if in contact with a virtual object and constrained on the surface of the contacted object. For example, when moving in free space, the look direction is the normalized motion vector. When in contact with a virtual object, the look direction becomes a linear combination of the normalized motion vector and the contact normal.
  • In one embodiment, a view volume associated with the first virtual camera is sized to exclude geometric elements that lie beyond a desired distance from the haptic interface location. This involves culling the graphical data to remove geometric primitives that lie outside the view volume of the first virtual camera. In one embodiment, hardware culling is employed, where primitives are culled by graphics hardware (i.e. a graphics card). In another embodiment, culling involves the use of a spatial partition, for example, an octree, BSP tree, or other hierarchical data structure, to exclude graphical data outside the view volume. Both hardware culling and a spatial partition can be used together. For example, where the number of primitives being culled by the graphics hardware is large, the spatial partition can reduce the amount of data sent to the hardware for culling, allowing for a more efficient process.
  • The types of graphical data obtained from the first virtual camera include, for example, data in a depth buffer, a feedback buffer, a color buffer, a selection buffer, an accumulation buffer, a texture map, a fat framebuffer, rasterization primitives, application programming interface input data, and/or state data.
  • As the term is used herein, a fat framebuffer is also known as and/or includes a floating point auxiliary buffer, an attribute buffer, a geometry buffer, and/or a super buffer. Fat framebuffers are flexible and allow a user to store a wide variety of different types of graphical data. A fat framebuffer can include, for example, vertex positions, normals, color, texture, normal maps, bump maps, and/or depth data. Fat framebuffers can be used as input in custom pixel and/or vertex shader programs that are run on graphics hardware (i.e. on the graphics card). In one embodiment, a fat framebuffer is used to capture vertex positions and normals. For example, in one embodiment, primitives are graphically rendered to a fat framebuffer, and pixel shading and/or vertex shading is performed using data from the fat framebuffer in the haptic rendering of a virtual environment. In one embodiment, a deferred shading process is used to render graphics primitives to a fat framebuffer.
  • It is possible to use graphics hardware to graphically render virtual objects to a texture map instead of a buffer. Thus, throughout the specification, where graphical data is described as being stored in or read from a buffer, the data may alternately be stored in or read from a texture map.
  • In one embodiment, determining the position of the haptic interface location using data from the first virtual camera includes performing an intersection test to determine an intersection point and intersection normal in screen space, and transforming the coordinates of the intersection point and intersection normal from screen space to object space. Alternatively, the graphical data can be used to determine the closest geometric feature, such as a point, line or plane, to the virtual proxy via a projection test. These geometric queries are important for haptic rendering of 1D, 2D, and/or 3D contacts and/or constraints.
  • In another aspect, a system is provided for haptically rendering a virtual object in a virtual environment. The system comprises a graphics thread that generates a visual display of a virtual environment, a collision thread that uses input from the graphics thread to determine if a user-directed virtual proxy collides with a surface within the virtual environment, and a servo thread that generates force to be applied to a user in real space though a haptic interface device according to input from the collision thread.
  • In one embodiment, the graphics thread refreshes the visual display at a rate within a range, for example, from about 5 Hz to about 150 Hz, or from about 30 Hz to about 60 Hz. Refresh rates above and below these levels are possible as well. In one embodiment, the collision thread performs a collision detection computation at a rate within a range, for example, from about 30 Hz to about 200 Hz, or from about 80 Hz to about 120 Hz. Computation rates above and below these levels are possible as well. In one embodiment, the servo thread refreshes the force to be applied through the haptic interface device at a rate within a range from about 1000 Hz to about 10,000 Hz. Force refresh rates above and below these levels are possible as well. In one embodiment, the servo thread includes a force shader.
  • In yet another aspect, an apparatus is provided for providing haptic feedback to a user of a 3D graphics application. The apparatus comprises a user-controlled haptic interface device adapted to provide a user input to a computer and to transmit force to a user. The apparatus also includes computer software that, when operating with the computer and the user input, is adapted to determine force transmitted to the user. The force transmitted to the user is determined by a process that comprises determining a haptic interface location in a 3D virtual environment corresponding to a location of the haptic interface device in real space and positioning a first virtual camera substantially at the haptic interface location. Graphical data is then accessed using the first virtual camera. A position of the haptic interface location in relation to a surface of a virtual object in the virtual environment is determined using the graphical data from the first virtual camera. Finally, an interaction force is determined, based at least in part on the position of the haptic interface location in relation to the surface of the virtual object.
  • There may be any number of cameras in a given scene. For example, each individual virtual object in a scene may have its own camera; thus, the number of cameras is unlimited. This allows a user to adapt the camera view to best suit individual objects, which allows for further optimization. For example, the camera position and view frustum for objects that are graphically rendered (and/or haptically rendered) using the depth buffer can be set differently than those rendered using the feedback buffer. In addition, there can be multiple haptic devices in a given scene. Each haptic device can have a different camera for each object, since the position and motion of the haptic devices will generally be different.
  • BRIEF DESCRIPTION OF DRAWINGS
  • The objects and features of the invention can be better understood with reference to the drawings described below, and the claims. The drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating the principles of the invention. In the drawings, like numerals are used to indicate like parts throughout the various views.
  • FIG. 1 is a block diagram featuring a method of haptically rendering one or more virtual objects in a virtual environment using data in a graphics pipeline, according to an illustrative embodiment of the invention.
  • FIG. 2 is a schematic diagram illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline, the diagram showing an interaction between a 3D graphics application, a graphics application programming interface (API), a 3D graphics card, and a haptics API, according to an illustrative embodiment of the invention.
  • FIG. 3 is a schematic diagram illustrating a graphics pipeline of a 3D graphics application, according to an illustrative embodiment of the invention.
  • FIG. 4A is a schematic diagram illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline, the system including a graphics thread, a collision thread, and a servo thread, according to an illustrative embodiment of the invention.
  • FIG. 4B is a schematic diagram illustrating the system of FIG. 4A in further detail, according to an illustrative embodiment of the invention.
  • FIG. 5 is a schematic diagram illustrating a servo thread of a haptics rendering pipeline, according to an illustrative embodiment of the invention.
  • FIG. 6 is a schematic diagram illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline, the diagram showing how third-party 3D graphics application software is integrated with the system, according to an illustrative embodiment of the invention.
  • FIG. 7 is a block diagram featuring a method of delivering interaction force to a user via a haptic interface device, the force based at least in part on graphical data from a virtual camera located at a haptic interface location, according to an illustrative embodiment of the invention.
  • FIG. 8A is a screenshot of a virtual object in a virtual environment as imaged from a fixed camera view, the screenshot indicating a haptic interface location, or proxy position, representing the position of a user in the virtual environment, according to an illustrative embodiment of the invention.
  • FIG. 8B is a screenshot of the virtual object of FIG. 8A as imaged from a moving camera view located at the haptic interface location shown in FIG. 8A, where graphical data from the images of either or both of FIG. 8A and FIG. 8B is/are used to haptically render the virtual object, according to an illustrative embodiment of the invention.
  • FIG. 9 is a block diagram featuring a 3D transformation pipeline for displaying 3D model coordinates on a 2D display device and for haptic rendering via a haptic interface device, according to an illustrative embodiment of the invention.
  • FIG. 10 is a schematic diagram illustrating the specification of a viewing transformation for a haptic camera view, according to an illustrative embodiment of the invention.
  • FIG. 11 is a schematic diagram illustrating the specification of a look direction for use in determining a viewing transformation for a haptic camera view when the position of a haptic interface location is constrained on the surface of a virtual object, according to an illustrative embodiment of the invention.
  • FIG. 12 is a block diagram featuring a method for interpreting data for haptic rendering by intercepting data from a graphics pipeline via a pass-through dynamic link library (DLL), according to an illustrative embodiment of the invention.
  • FIG. 13 is a schematic diagram illustrating a system for haptically rendering a virtual environment using data intercepted from a graphics pipeline of a 3D graphics application via a pass-through dynamic link library, according to an illustrative embodiment of the invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Throughout the description, where an apparatus is described as having, including, or comprising specific components, or where systems, processes, and methods are described as having, including, or comprising specific steps, it is contemplated that, additionally, there are apparati of the present invention that consist essentially of, or consist of, the recited components, and that there are systems, processes, and methods of the present invention that consist essentially of, or consist of, the recited steps.
  • It should be understood that the order of steps or order for performing certain actions is immaterial so long as the invention remains operable. Moreover, two or more steps or actions may be conducted simultaneously.
  • A computer hardware apparatus may be used in carrying out any of the methods described herein. The apparatus may include, for example, a general purpose computer, an embedded computer, a laptop or desktop computer, or any other type of computer that is capable of running software, issuing suitable control commands, receiving graphical user input, and recording information. The computer typically includes one or more central processing units for executing the instructions contained in software code that embraces one or more of the methods described herein. The software may include one or more modules recorded on machine-readable media, where the term machine-readable media encompasses software, hardwired logic, firmware, object code, and the like. Additionally, communication buses and I/O ports may be provided to link any or all of the hardware components together and permit communication with other computers and computer networks, including the internet, as desired. As used herein, the term “3D” is interpreted to include 4D, 5D, and higher dimensions.
  • It is an object of the invention to leverage the processing power of modern 3D graphical rendering systems for use in the haptic rendering of a virtual environment containing, for example, one or more virtual objects. It is a further object of the invention to introduce a virtual camera in the virtual environment located at a haptic interface location, which can be moved by a user. The view volume of this “haptic camera” can be sized to exclude unnecessary regions of the virtual environment, and the graphical data can be used for haptically rendering one or more virtual objects as the user moves about the virtual environment.
  • FIG. 1 is a block diagram 100 featuring a method of haptically rendering one or more virtual objects in a virtual environment using data in a graphics pipeline of a 3D graphics application. The method shown in FIG. 1 includes three main steps—accessing data in a graphics pipeline of a 3D graphics application 102; interpreting data for use in haptic rendering 105; and haptically rendering one or more virtual objects in the virtual environment 110.
  • A graphics pipeline generally is a series of steps, or modules, that involve the processing of 3D computer graphics information for viewing on a 2D screen, while at the same time rendering an illusion of three dimensions for a user viewing the 2D screen. For example, a graphics pipeline may comprise a modeling transformation module, in which a virtual object is transformed from its own object space into a common coordinate space containing other objects, light sources, and/or one or more cameras. A graphics pipeline may also include a rejection module in which objects or primitives that cannot be seen are eliminated. Furthermore, a graphics pipeline may include an illumination module that colors objects based on the light sources in the virtual environment and the material properties of the objects. Other modules of the graphics pipeline may perform steps that include, for example, transformation of coordinates from world space to view space, clipping of the scene within a three dimensional volume (a viewing frustum), projection of primitives into two dimensions, scan-conversion of primitives into pixels (rasterization), and 2D image display.
  • Information about the virtual environment is produced in the graphics pipeline of a 3D graphics application to create a 2D display of the virtual environment as viewed from a given camera view. The camera view can be changed to view the same virtual environment from a myriad of vantage points. The invention capitalizes on this capability by haptically rendering the virtual environment using graphical data obtained from one or more virtual cameras. In one embodiment, the invention accesses data corresponding to either or both of a primary view 115 and a haptic camera view 120, where the primary view 115 is a view of the virtual environment from a fixed location, and the haptic camera view 120 is a view of the virtual environment from a moving location corresponding to a user-controlled haptic interface location. The haptic camera view 120 allows a user to reach behind an object to feel what is not immediately visible on the screen (the primary view 115).
  • Information about the geometry of the virtual environment can be accessed by making the appropriate function call to the graphics API. Data can be accessed from one or more data buffers—for example, a depth buffer 125, as shown in the block diagram of FIG. 1, and a feedback buffer 130 (or its equivalent). Use of this data for haptic rendering enables the reuse of the scene traversal and graphics API rendering state and functionality.
  • The depth buffer 125 is typically a two-dimensional image containing pixels whose intensities correspond to depth (or height) values associated with those pixels. The depth buffer is used during polygon rasterization to quickly determine if a fragment is occluded by a previously rendered polygon. The depth buffer is accessed by making the appropriate function call to the graphics API. This information is then interpreted in step 105 of the method of FIG. 1 for haptic use. Using depth buffer data provides several advantages. For example, depth buffer data is in a form whereby it can be used to quickly compute 3D line segment intersections and inside/outside tests. Furthermore, the speed at which these depth buffer computations can be performed is substantially invariant to the density of the polygons in the virtual environment. This is because the data in the depth buffer is scalar data organized in a 2D grid having known dimensions, the result of rasterization and occlusion processing.
  • Other data buffers in the graphics pipeline include a color buffer 135, a stencil buffer 140, and an accumulation buffer 145. The color buffer 135 can store data describing the color and lighting conditions of vertices. The accumulation buffer 145 can be used to accumulate precise intermediate rendering data. The stencil buffer 140 can be used to flag attributes for each pixel and perform logic operations as part of pixel fragment rendering. These buffers may be used, for example, to modify and/or map various haptic attributes—for example, friction, stiffness, and/or damping—to the pixel locations of the depth buffer. For example, color buffer data 135 may be used to encode surface normals for force shading. Stencil buffer data 140 can indicate whether or not to allow drawing for given pixels. Stencil buffer data 140 can also be incremented or decremented every time a pixel is touched, thereby counting the number of overlapping primitives for a pixel. The stencil contents can be used directly or indirectly for haptic rendering. For example, they can be used directly to flag pixels with attributes for enabling and/or disabling surface materials, such as areas of friction. They can also be used indirectly for haptics by graphically rendering geometry in a special way for haptic exploration, such as depth peeling or geometry capping.
  • Encoding normals in the color buffer includes setting up the lighting of the virtual environment so that normals may be mapped into values in the color buffer, wherein each pixel contains four components <r,g,b,a>. A normal vector <x,y,z> can be stored, for example, in the <r,g,b> components by modifying the lighting equation to use only the diffuse term and by applying the lighting equation for six colored lights directed along the local axes of the object coordinate space. For example, the x direction light is colored red, the y direction light is colored green, and the z direction light is colored blue, so that the directional components of the pixels match their color components. Then the lighting equation is written as a summation of dot products scaled by the respective color of the light. This results in normal values which may be used, for example, for smooth force shading.
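  • A hedged sketch of the read-back side of this technique is shown below (C++/OpenGL, not from the patent): assuming normals were encoded into the <r,g,b> components during graphical rendering with the common remapping of [-1,1] into [0,1], each pixel can be decoded back into a unit normal for force shading.

```cpp
// Sketch: read back the color buffer and decode a per-pixel surface normal,
// assuming normal components were mapped from [-1,1] into [0,1] color values.
#include <GL/gl.h>
#include <glm/glm.hpp>
#include <vector>

std::vector<unsigned char> readColorBuffer(int width, int height)
{
    std::vector<unsigned char> rgba(width * height * 4);
    glReadPixels(0, 0, width, height, GL_RGBA, GL_UNSIGNED_BYTE, rgba.data());
    return rgba;
}

glm::vec3 normalAtPixel(int x, int y, int width, const std::vector<unsigned char>& rgba)
{
    const unsigned char* p = &rgba[(y * width + x) * 4];
    glm::vec3 n(p[0] / 255.0f * 2.0f - 1.0f,   // undo the color-space mapping
                p[1] / 255.0f * 2.0f - 1.0f,
                p[2] / 255.0f * 2.0f - 1.0f);
    return glm::normalize(n);
}
```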
  • Data contained in the depth buffer 125, feedback buffer 130, color buffer 135, stencil buffer 140, and/or accumulation buffer 145, among other data buffers, may be altered by hardware such as a graphics card. A graphics card can perform some of the graphical data processing required to produce 2D screen views of 3D objects, thereby saving CPU resources. Data produced from such hardware-accelerated geometry modifications 150 is used in certain embodiments of the invention. Modern graphics cards have the ability to execute custom fragment and vertex shading programs, enabling a programmable graphics pipeline. It is possible to leverage the results of such geometry modifications for purposes of haptic rendering. For example, view-dependent adaptive subdivision and view-dependent tessellation can be used to produce smoother-feeling surfaces. Displacement mapping can result in the haptic rendering of surface details such as ripples, crevices, and bumps, which are generated onboard the graphics card.
  • In one embodiment, an “adaptive viewport” is used to optimize depth buffer haptic rendering, wherein the bounds of the viewport are read-back from the graphics card. For example, the entire viewport may not be needed; only the portion of the depth buffer that contains geometry within the immediate vicinity of the haptic interface location may be needed. In an adaptive viewport approach, the bounds of the viewport that are to be read-back from the graphics card are determined by projecting the haptic interface location onto the near plane and by determining a size based on a workspace to screen scale factor. In this way, it is possible to ensure that enough depth buffer information is obtained to contain a radius of workspace motion mapped to screen space.
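  • The bound computation might look like the following sketch (GLM-based, with an assumed workspace-to-screen scale factor supplied by the caller): the haptic interface location is projected into window coordinates and a pixel radius derived from the radius of workspace motion determines the rectangle to read back.

```cpp
// Sketch: compute an adaptive read-back rectangle centered on the projected
// haptic interface location. The scale factor is an assumed input parameter.
#include <glm/glm.hpp>
#include <glm/gtc/matrix_transform.hpp>
#include <algorithm>

struct PixelRect { int x, y, w, h; };

PixelRect adaptiveViewport(const glm::vec3& hapticPosWorld,
                           const glm::mat4& worldView, const glm::mat4& viewClip,
                           const glm::vec4& viewport,           // x, y, width, height
                           float workspaceRadius, float workspaceToScreenScale)
{
    glm::vec3 center = glm::project(hapticPosWorld, worldView, viewClip, viewport);
    int r = (int)(workspaceRadius * workspaceToScreenScale);     // radius in pixels

    PixelRect rect;
    rect.x = std::max(0, (int)center.x - r);
    rect.y = std::max(0, (int)center.y - r);
    rect.w = std::min((int)viewport.z - rect.x, 2 * r);
    rect.h = std::min((int)viewport.w - rect.y, 2 * r);
    return rect;    // bounds of the depth-buffer region to read back
}
```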
  • Certain 3D graphics API's, for example, OpenGL, offer a mode of operation called feedback mode, which provides access to the feedback buffer 130 (FIG. 1) containing information used by the rasterizer for scan-filling primitives to the viewport. In one embodiment, the method of FIG. 1 includes the step of accessing the feedback buffer 130 and interpreting the data from the feedback buffer for haptic use. The feedback buffer 130 provides access to the primitives within a view volume. The view volume may be sized to include only portions of the virtual environment of haptic interest. Therefore, haptic rendering of primitives outside the view volume need not take place, and valuable processing resources are saved.
  • It is possible to simulate non-uniform surface properties using data in the feedback buffer 130 via groups of primitives, per vertex properties, and/or via texture mapping. In certain embodiments, the feedback buffer provides data that is more precise than depth buffer data, since primitives in the feedback buffer have only undergone a linear transformation, whereas the depth buffer represents rasterized primitives, thereby possibly introducing aliasing errors.
  • Step 105 of the method of FIG. 1 is directed to interpreting the graphical rendering data accessed in step 102 for haptic use. In one embodiment, step 105 involves performing an intersection test 160 to determine an intersection point and a normal in screen space, and transforming the intersection point coordinates and normal coordinates to object space 165. The point and normal together define a local plane tangent to the surface of the virtual object. In one embodiment in which depth values from a depth buffer 125 are used, the intersection test of step 160 is essentially a pixel raycast along a line segment, where the depth buffer is treated as a height map. A line segment that is defined in object space is transformed into screen space and tested against the height map to find an intersection. An intersection is found by searching along the line segment (in screen space) and comparing depth values to locations along the line segment. Once a crossing has been determined, a more precise intersection can be determined by forming triangles from the local depth values. This provides an intersection point and an intersection normal, where the intersection normal is normal to a surface corresponding to the screen space height map at the intersection point. In step 165, the intersection point and normal are transformed back into object space to be used as part of a haptic rendering method. Example haptic rendering methods are described in co-owned U.S. Pat. No. 6,191,796 to Tarr, U.S. Pat. No. 6,421,048 to Shih et al., U.S. Pat. No. 6,552,722 to Shih et al., U.S. Pat. No. 6,417,638 to Rodomista et al., and U.S. Pat. No. 6,671,651 to Goodwin et al., the disclosures of which are incorporated by reference herein in their entirety.
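  • The following is a simplified sketch of such a pixel raycast (uniform sampling only; the triangle-based refinement described above is omitted): the segment is walked in screen space and each sample's depth is compared against the depth buffer treated as a height map.

```cpp
// Sketch: march along a screen-space line segment and report the first sample
// that falls at or below the depth-buffer height field.
#include <glm/glm.hpp>
#include <vector>

bool intersectDepthBuffer(const glm::vec3& p0, const glm::vec3& p1,  // segment in window coords
                          const std::vector<float>& depthBuf,        // [0,1] depth values
                          int width, int height, int steps,
                          glm::vec3& hit)
{
    for (int i = 0; i <= steps; ++i) {
        float t = (float)i / (float)steps;
        glm::vec3 s = glm::mix(p0, p1, t);               // sample point along the segment
        int x = (int)s.x, y = (int)s.y;
        if (x < 0 || y < 0 || x >= width || y >= height)
            continue;
        float surfaceDepth = depthBuf[y * width + x];
        if (s.z >= surfaceDepth) {                       // crossing detected
            hit = glm::vec3(s.x, s.y, surfaceDepth);     // refine locally if more precision is needed
            return true;
        }
    }
    return false;
}
```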
  • In one embodiment in which screen space rasterization primitives 130 are accessed in step 102 in the method of FIG. 1, the intersection test of step 160 also involves transforming a line segment from object space to screen space and performing a line intersection test against candidate primitives. An intersection point and intersection normal are found along the line segment and are transformed back into object space for haptic rendering.
  • Step 110 of the method of FIG. 1 is directed to haptically rendering one or more virtual objects in the virtual environment using the interpreted data from step 105. In one embodiment, the haptic rendering step includes determining a haptic interface location in the virtual environment corresponding to a user's position in real space (i.e. via a user's manipulation of a haptic interface device) 170, locating one or more points on the surface of one or more virtual objects in the virtual environment (i.e. the surface point nearest the haptic interface location) 175, and determining an interaction force 180 according to the relationship between the haptic interface location and the surface location(s). Thus, step 110 may involve determining when a collision occurs between a haptic interface location (i.e. a virtual tool location) and a virtual object. In one embodiment, a collision occurs when the haptic interface location crosses through the surface of a virtual object. The interaction force that is determined in step 180 may be delivered to the user through the haptic interface device. The determination and delivery of a feedback force to a haptic interface device is described, for example, in co-owned U.S. Pat. Nos. 6,191,796, 6,421,048, 6,552,722, 6,417,638, and 6,671,651, the disclosures of which are incorporated by reference herein in their entirety.
  • FIG. 2 is a schematic diagram 200 illustrating, in a simplified way, a system for haptically rendering a virtual environment using data in a graphics pipeline. The diagram shows an interaction between a 3D graphics application 202, a graphics application programming interface (API) 205, a 3D graphics card 215, and a haptics API 210. Certain methods of the invention may be embodied in, and may be performed using, the haptics API 210, the graphics API 205, the 3D graphics application 202, and/or combinations thereof.
  • A 3D graphics application 202 may be written or adapted to enable the user of the application to see a visual representation of a 3D virtual environment on a two-dimensional screen while “feeling” objects in the 3D virtual environment using a peripheral device, such as a haptic interface device. The graphics application makes function calls referencing function libraries in a graphics API 205. The graphics API communicates with the 3D graphics card 215 in order to graphically render a virtual environment. A representation of at least a portion of the virtual environment is displayed on a display device 220.
  • The system 200 of FIG. 2 permits a programmer to write function calls in the 3D graphics application 202 to call a haptics API 210 for rendering a haptic representation of at least a portion of the virtual environment. The haptics API 210 accesses graphical rendering data from the 3D graphics pipeline by making function calls to the graphics API. The graphical data may include a data buffer, such as a depth buffer or feedback buffer. The system 200 interprets the graphical data to haptically render at least a portion of the virtual environment. The haptic rendering process may include determining a force feedback to deliver to the user via a haptic interface device 230. A haptic device API and a haptic device driver 225 are used to determine and/or deliver the force feedback to the user via the haptic interface device 230.
  • The haptics API 210 performs high-level haptics scene rendering, and the haptic device API 225 performs low-level force rendering. For example, the high-level haptics API 210 provides haptic rendering of shapes and constraints and the low-level haptic device API 225 queries device state, sends forces, and/or performs thread control, calibration, and error handling. The 3D graphics application may make direct calls to either or both the haptics API 210 and the haptic device API 225.
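  • For orientation only, the division of labor between the high-level haptics API and the low-level device API resembles the hedged sketch below, written against an OpenHaptics-style HLAPI; the shape type, identifiers, and the drawSceneGeometry callback are assumptions rather than a definitive listing of the API.

```cpp
// Sketch (OpenHaptics-style, names assumed): the application redraws its scene
// inside a haptic frame so the high-level API can capture depth-buffer data for
// haptic rendering; low-level force delivery is handled beneath this layer.
#include <HL/hl.h>

void drawSceneGeometry();   // assumed application-provided draw callback

void renderHapticFrame(HLuint shapeId)
{
    hlBeginFrame();                                 // begin a haptic rendering frame
    hlBeginShape(HL_SHAPE_DEPTH_BUFFER, shapeId);   // capture the geometry drawn next
    drawSceneGeometry();
    hlEndShape();
    hlEndFrame();                                   // commit shapes for force rendering
}
```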
  • FIG. 3 illustrates a 3D graphics pipeline 300, in which graphical data describing one or more 3D objects in a virtual environment is used to create a 2D representation for display on a two-dimensional screen. Graphical data corresponding to the scene geometry 302, a camera view 305, and lighting 310, undergoes a series of transformations 315. The resultant primitives data then undergoes a rasterization process 320, producing 2D graphical data that may be stored in 2D buffers, for example, a color buffer 330 and a depth buffer 335. The primitives data as it exists prior to rasterization can be accessed, for example, via a feedback buffer 325. Methods of the invention use the graphical data in the 3D graphics pipeline 300, for example, the feedback buffer 325, the depth buffer 335, and the color buffer 330, for haptically rendering the virtual environment, as described in more detail herein.
  • FIG. 4A is a simplified schematic diagram illustrating components of a system 400 for haptically rendering a virtual environment using data in a graphics pipeline. The system 400 comprises computational elements 402 including a graphics thread 405, a collision thread 410, and a servo thread 415, as well as a display device 420 and a haptic interface device 425. The graphics thread 405 is adapted to generate a visual display of a virtual environment to be displayed on the display device 420. The collision thread 410 determines if a user-directed virtual proxy (i.e. a haptic interface location) collides with a surface within the virtual environment, based on input from the graphics thread 405. The servo thread 415 determines (and may generate) a force to be applied to a user in real space via the haptic interface device 425 according to input from the collision thread 410.
  • FIG. 4B is a schematic diagram 427 illustrating the system of FIG. 4A in further detail. The graphics thread 405 is adapted to generate a visual display of a virtual environment. API commands 430 are used to access graphical rendering data, including the depth buffer 437 and feedback buffer 440. In one embodiment, this data is used for both haptic and graphical rendering. Additionally, the user (and/or the 3D graphics software programmer) may define custom shapes 435 and custom constraints 442 independent of the graphics API 430. Custom shapes 435 include, for example, NURBS shapes, SubDs, voxel-shapes, and the like. Custom constraints include, for example, constraint to surfaces, lines, curves, arcs, and the like. Standard force effects 447 and user-defined force effects 450 may also be assigned in the graphics thread 405. Additional software, for example, third-party software, may be integrated with the graphics thread 405, for example, in a user-defined proxy module 445. In certain embodiments, the graphics thread 405 refreshes the display device 420 at a rate, for example, within the range from about 10 Hz to about 150 Hz, within the range from about 20 Hz to about 110 Hz, or, preferably, within the range from about 30 Hz to about 60 Hz. Rates above and below these levels are possible as well.
  • The collision thread 410 of FIG. 4B is adapted to determine whether a user-directed virtual proxy collides with a surface within the virtual environment. In one embodiment, the collision thread comprises three modules, including a shape collision renderer 453, a constraint collision renderer 455, and an effect renderer 460. The shape collision renderer 453 is adapted to calculate the shapes in the virtual environment and to identify their collision with each other or with proxies. The shape collision renderer 453 may use data from the depth buffer 437, the feedback buffer 440, and user-defined shape data 435. Similarly, the constraint collision renderer 455 may use data from the depth buffer 437, feedback buffer 440, and from user-defined constraints 442. The effect renderer 460 may use data from the standard force effects module 447 and from the user-defined force effects module 450. One of the functions of the effect renderer 460 is to compose the force shader 480 in the servo thread 415, so that the force shader 480 is able to simulate force effects at the typically higher servo loop rate. For example, the effect renderer 460 can start, stop, and manage parameters for the force shader 480. In certain embodiments, the collision thread 410 may perform a collision detection computation at a rate within the range from about 10 Hz to about 200 Hz, from about 80 Hz to about 120 Hz, or, preferably, at about 100 Hz. Rates above and below these levels are possible as well.
  • Next, the servo thread 415 generates a force to be applied to a user in real space via the haptic interface device 425 according to input from the collision thread 410. The force is calculated by using data from the shape collision renderer 453 and from the constraint collision renderer 455. Data from these two renderers are used to calculate a local approximation, which is transmitted to the local approximation renderer 465. The local approximation renderer 465 resolves a position/orientation transform for the proxy, which is used for producing a contact or constraint force. The proxy can be represented by the position of a single point, but can alternatively be chosen as having any arbitrary geometry. The local approximation transmitted to the local approximation renderer 465 is a collection of geometry determined in the collision thread generally at a lower processing rate than the servo thread. This local approximation geometry may be used for several updates of the servo loop thread. The local approximation geometry generally serves as a more efficient representation for collision detection and resolution than the source geometry processed by the collision thread. The proxy position information is transmitted to a proxy shader 470 and then to a proxy renderer 475, along with the user-defined proxy information 445 from the graphics thread.
  • In one embodiment, a force shader 480 enables modification of a calculated force vector prior to transmitting the force vector to the haptic interface device 425. For example, rendered proxy data from the proxy renderer 475, along with force vector data from the effect renderer 460, are used by the force shader 480 to calculate a modified force vector, which is then transmitted to the haptic interface device 425. The force shader 480 is thus able to modify the direction and magnitude of the force vector as determined by preceding modules such as the proxy renderer 475 and the effect renderer 460. The force shader 480 may also have access to data from other modules in the schematic diagram 427 of FIG. 4B, such as the local approximation renderer 465 and the proxy shader 470. The force shader 480 may be used for simulating arbitrary force effects. Examples of such force effects include inertia, viscosity, friction, attraction, repulsion, and buzzing.
  • The force shader 480 may also be used for modifying the feel of a contacted surface. For example, the force shader 480 may be used to simulate a smooth surface by modifying the force vector direction so that it varies smoothly even while the proxy traverses discontinuous surface features. As such, force discontinuities that would otherwise be apparent when transitioning from one polygonal face to another may be minimized by the force shader 480 by aligning the force vector to an interpolated normal based on adjacent faces. The force shader 480 may also be used for general conditioning or filtering of the computed force vector, such as clamping the magnitude of the force vector or increasing the magnitude of the force vector gradually over time. In one embodiment, the force shader is used to reduce magnitude and directional discontinuities over time, which can result from instabilities in the control system or from mechanical instabilities in the haptic interface device 425.
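A hedged sketch of such a force-shader stage follows; it realigns the computed force toward an interpolated surface normal and clamps its magnitude. The blend weight and force limit are illustrative parameters, not values specified by the patent:

```cpp
#include <cmath>

struct Force { float x, y, z; };

static float length(Force f) { return std::sqrt(f.x * f.x + f.y * f.y + f.z * f.z); }

Force forceShader(Force in, Force smoothNormal, float blend, float maxMagnitude)
{
    float mag = length(in);
    if (mag <= 0.0f) return in;

    // (a) Blend the raw force direction toward the interpolated (smoothed)
    //     normal so transitions between polygon faces feel continuous.
    Force dir = { in.x / mag, in.y / mag, in.z / mag };
    Force out = { (1.0f - blend) * dir.x + blend * smoothNormal.x,
                  (1.0f - blend) * dir.y + blend * smoothNormal.y,
                  (1.0f - blend) * dir.z + blend * smoothNormal.z };
    float outLen = length(out);
    if (outLen > 0.0f) { out.x /= outLen; out.y /= outLen; out.z /= outLen; }

    // (b) Clamp the magnitude to keep the device stable.
    float clamped = std::fmin(mag, maxMagnitude);
    out.x *= clamped; out.y *= clamped; out.z *= clamped;
    return out;
}
```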
  • The servo thread 415 may refresh the force to be applied through the haptic interface device 425 at a rate within the range from about 500 Hz to about 15,000 Hz, from about 1000 Hz to about 10,000 Hz, or from about 2000 Hz to about 6000 Hz. Rates above and below these levels are possible as well.
  • In one embodiment, a scheduler interface manages the high frequency for sending forces and retrieving state information from the haptic interface device 425. The scheduler allows the 3D graphics application to communicate effectively with the servo thread in a thread-safe manner and may add and delete operations to be performed in the servo thread. Furthermore, in one embodiment, a calibration interface allows the system to maintain an accurate estimate of the physical position of the haptic interface device 425. Calibration procedures may be manual and/or automatic.
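One plausible form for such a scheduler interface, shown purely as an assumption about the design rather than the actual API, is a mutex-protected queue of operations that the servo thread drains once per tick:

```cpp
#include <functional>
#include <mutex>
#include <queue>

class ServoScheduler {
public:
    // Called from the application / graphics thread.
    void schedule(std::function<void()> op) {
        std::lock_guard<std::mutex> lock(mutex_);
        pending_.push(std::move(op));
    }

    // Called once per servo tick, inside the servo thread, before forces are sent.
    void runPending() {
        std::queue<std::function<void()>> ops;
        {
            std::lock_guard<std::mutex> lock(mutex_);
            std::swap(ops, pending_);
        }
        while (!ops.empty()) { ops.front()(); ops.pop(); }
    }

private:
    std::mutex mutex_;
    std::queue<std::function<void()>> pending_;
};
```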
  • FIG. 5 is a schematic diagram 500 illustrating a servo thread of an illustrative haptics rendering pipeline. Collision and constraint resolution data 502 from the virtual environment is transmitted from the collision thread to the local approximation renderer 465. The local approximation renderer 465 calculates a proxy position, which is then transmitted to a proxy shader 470 and then to impedance control 515, producing a force. The force is modified by the force shader 480, then transmitted to the haptic interface device 425 following application of inverse kinematics 525. Forward kinematics 535 from the haptic interface device 425 is fed back to the force shader 480 and the impedance controller 515, and is transmitted to a transform shader 540, which provides feedback to the local approximation renderer 465 and the proxy shader 470.
  • FIG. 6 is a schematic diagram 600 illustrating a system for haptically rendering a virtual environment using data in a graphics pipeline of a 3D graphics application. The diagram 600 shows how third-party 3D graphics application software is integrated with the system. The diagram 600 illustrates the interaction between the 3D graphics application 602, a haptics API 610, and a haptic device API 625. The 3D graphics application 602 can make a function call to the haptics API 610. The haptics API 610 then accesses data from the 3D graphics pipeline. The haptics API 610 also transmits data to the haptic device API 625, which performs low-level force rendering.
  • FIG. 7 is a block diagram 700 featuring a method of delivering interaction force to a user via a haptic interface device, where the force is based at least in part on graphical data from a virtual camera located at a haptic interface location. The method includes determining a haptic interface location in a 3D virtual environment corresponding to the position of a haptic interface device in real space 702. The method further includes positioning a first virtual camera at the haptic interface location 705. The first virtual camera is usually implemented using matrix transformations that map 3D virtual objects in coordinate space into a 2D representation, so that the virtual environment, populated with the virtual objects, appears as if viewed by a camera. By modifying these transformations, the virtual camera view can be changed to view the same object from any of a plurality of vantage points. These transformations include a modeling transformation, a viewing transformation, a projection transformation, and a display device transformation. These are discussed in further detail with respect to FIG. 9 herein below. Furthermore, the position of the first camera is updated as the haptic interface location changes, according to the manipulation of the haptic interface device by the user.
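Assuming an OpenGL-style pipeline, these stacked transformations can be applied with gluProject, which composes the current modelview (modeling plus viewing), projection, and viewport (display device) transforms to map a 3D point into 2D window coordinates. The sketch below is illustrative only:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

bool projectToWindow(double objX, double objY, double objZ,
                     double& winX, double& winY, double& winZ)
{
    GLdouble modelview[16], projection[16];
    GLint viewport[4];
    glGetDoublev(GL_MODELVIEW_MATRIX, modelview);    // modeling + viewing transform
    glGetDoublev(GL_PROJECTION_MATRIX, projection);  // projection transform
    glGetIntegerv(GL_VIEWPORT, viewport);            // display-device transform

    return gluProject(objX, objY, objZ, modelview, projection, viewport,
                      &winX, &winY, &winZ) == GL_TRUE;
}
```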
  • The method of FIG. 7 next includes the step of accessing graphical data corresponding to the virtual environment as viewed from the first virtual camera at the haptic interface location 710. The accessed data is then used in the haptic rendering of the virtual environment, for example, according to methods described herein.
  • The method of FIG. 7 may optionally include the step of positioning a second virtual camera at a location other than the haptic interface location 715. The method would then comprise the step of accessing graphical data from the second virtual camera 720. The accessed data may be used for graphical rendering, haptic rendering, or both. In one embodiment, the second virtual camera is used for graphical rendering, while the first virtual camera is used for haptic rendering. The second camera may move, or it may be static. In one embodiment, the second virtual camera is fixed while the first virtual camera is capable of moving. The second virtual camera operates using matrix transformations as described with respect to step 705. The second virtual camera has associated with it a look direction and an eye position, independent of the look direction and eye position of the first virtual camera.
  • FIG. 8A is a screenshot 800 of a virtual object (a teapot) in a virtual environment as imaged from a fixed camera view (i.e., the second camera view, as described with respect to FIG. 7). The screenshot 800 shows a haptic interface location 805, representing the position of a user in the virtual environment. A “haptic camera” (first virtual camera) is located at the haptic interface location, which moves as a user manipulates a haptic interface device in real space. FIG. 8B is a screenshot 810 of the virtual object of FIG. 8A as imaged from the moving haptic camera. As can be seen from the screenshot 810, additional detail is viewable from this vantage point. It is possible to haptically render the virtual object using the graphical data from the haptic camera. Efficiency is improved by limiting the information that is haptically rendered to only those parts of the virtual environment that can be “touched” by the user at any given time. Furthermore, geometry that is not visible from the second camera view (i.e., the view dedicated to providing a graphical display of the virtual environment) can be “felt” using graphical data from the haptic camera view. For example, the user can feel behind the displayed teapot.
  • The view volume of the haptic camera may be optimized so as to view only areas of the virtual environment the user will want to touch or will be able to touch at any given time. For example, the view volume of the first virtual camera, dedicated to haptic rendering, may be limited to objects within the vicinity and trajectory of the haptic interface. As a result, haptic rendering will only need to be performed for this limited view volume, and not for all the geometry that is viewed from the vantage point of a graphics-dedicated second virtual camera. The method thereby increases the efficiency of the haptic rendering process.
  • Additionally, the method of FIG. 7 comprises determining a position of the haptic interface location in relation to a surface of a virtual object in the virtual environment by using graphical data from either or both of the first virtual camera and the second virtual camera 725. The method also includes determining an interaction force based at least in part on the position of the haptic interface location in relation to the surface of the virtual object 730. Finally, an interaction force is delivered to a user through the haptic interface device 735. The determination and delivery of an interaction force is described, for example, in U.S. Pat. Nos. 6,191,796, 6,421,048, 6,552,722, 6,417,638, and 6,671,651, the disclosures of which are incorporated by reference herein in their entirety.
  • FIG. 9 is a schematic diagram 900 illustrating a 3D transformation pipeline. 3D graphics applications generally perform a series of transformations in order to display 3D model coordinates on a 2D display device. These transformations include a shape-world transformation 902, a world-view transformation 905, a view-clip transformation 910, and a clip-window transformation 915. Additional transformations that are used to haptically render a virtual environment via a haptic interface device include a view-touch transformation 920 and a touch-workspace transformation 925. The transformations in FIG. 9 can be repurposed for rendering a scene from a virtual haptic camera viewpoint, thereby affording improved acquisition and utilization of graphics pipeline data.
  • The shape-world transformation 902 of the pipeline of FIG. 9 transforms geometry describing a virtual object from its local coordinate space, or shape coordinates, into world coordinates, i.e., the main reference coordinate space for the 3D virtual environment. All objects in the virtual environment have a relationship to world coordinates, including cameras.
  • The world-view transformation 905 of the pipeline of FIG. 9 maps world coordinates to view coordinates, the local coordinates of the virtual camera. FIG. 10 illustrates the relation of view coordinates (XV, YV, ZV), with an associated look direction and camera eye position, to world coordinates (XW, YW, ZW). The look direction of FIG. 10 is preferably mapped to the z-axis of the world-view transform. The world-view transformation can be customized for translating and rotating the virtual camera so that it can view the scene as if attached to the position of the haptic device's virtual proxy.
  • Furthermore, where the virtual camera is a haptic camera as described above, the camera eye position of the world-view transformation is sampled from the virtual proxy position. In order to avoid undesirable jitter, the camera eye position is preferably only updated when the virtual proxy moves beyond a threshold distance from the current eye position. In one embodiment, for example, the threshold distance is 2 mm.
  • The look direction of the world-view transformation is determined by the motion of the proxy and optionally by the contact normal, for example, if the proxy is in contact with a virtual object in the virtual environment. When in contact with a virtual object, the proxy's position can be constrained to remain on the surface of the contacted virtual object. FIG. 11 illustrates the look direction 1110 when the virtual proxy is in contact with a virtual object 1101. Additionally, the camera eye position is updated as soon as the proxy has moved beyond a threshold distance. This defines the motion vector 1120 of the proxy. When moving in free space, the look direction is the normalized motion vector 1120. However, when in contact with a virtual object 1101, the look direction is a linear combination of the normalized motion vector 1120 and the contact normal 1105, as illustrated in FIG. 11. For example, where the haptic interface location (proxy position) is on the surface of the virtual object, as shown in FIGS. 8A and 8B, the look direction may be computed as a linear combination of the normalized motion vector and the contact normal. Thus, the haptic camera angle tilts to show more of what lies ahead, along the direction of motion.
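The following sketch combines the eye-position threshold of the previous paragraph with the look-direction blend described here; the 2 mm threshold follows the example above, while the blend weight and the assumption that positions are expressed in millimeters are illustrative:

```cpp
#include <cmath>

struct V3 { float x, y, z; };

static V3    subV(V3 a, V3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
static float lenV(V3 a)       { return std::sqrt(a.x * a.x + a.y * a.y + a.z * a.z); }
static V3    normV(V3 a)      { float l = lenV(a); return l > 0 ? V3{a.x / l, a.y / l, a.z / l} : a; }

struct HapticCameraState { V3 eye; V3 look; };

void updateHapticCamera(HapticCameraState& cam, V3 proxyPos,
                        bool inContact, V3 contactNormal,
                        float thresholdMm = 2.0f, float normalWeight = 0.5f)
{
    V3 motion = subV(proxyPos, cam.eye);
    if (lenV(motion) < thresholdMm)
        return;                                   // too small a move: avoid jitter

    V3 motionDir = normV(motion);
    if (inContact) {
        // Linear combination of the normalized motion vector and the contact normal.
        V3 blended = { (1 - normalWeight) * motionDir.x + normalWeight * contactNormal.x,
                       (1 - normalWeight) * motionDir.y + normalWeight * contactNormal.y,
                       (1 - normalWeight) * motionDir.z + normalWeight * contactNormal.z };
        cam.look = normV(blended);
    } else {
        cam.look = motionDir;                     // free space: look along the motion vector
    }
    cam.eye = proxyPos;                           // snap the eye to the new proxy position
}
```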
  • The world-view transformation 905 of FIG. 9 can be computed by forming a composite rotation-translation matrix that transforms coordinates from world coordinates into view coordinates, mapping the look direction to an axis (preferably the z-axis), and mapping the camera eye position to the origin. An up vector, such as the y-axis, may be selected to keep the view consistently oriented.
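Under the assumption of an OpenGL pipeline, gluLookAt builds exactly this kind of composite rotation-translation (note that OpenGL maps the look direction onto the negative z-axis of view coordinates); the up vector below is an assumed choice:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

void loadHapticWorldViewTransform(const double eye[3], const double look[3])
{
    glMatrixMode(GL_MODELVIEW);
    glLoadIdentity();
    gluLookAt(eye[0], eye[1], eye[2],                              // camera eye position
              eye[0] + look[0], eye[1] + look[1], eye[2] + look[2], // a point along the look direction
              0.0, 1.0, 0.0);                                       // up vector (assumed y-axis)
}
```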
  • Another of the transformations in the 3D transformation pipeline of FIG. 9 is the view-clip transformation 910, also known as the projection transform. The view-clip transformation 910 enables manipulations of the shape and size of the view volume. The view volume determines which geometry is lit and rasterized for display on the 2D display device. As a result, geometry that lies outside the view volume is usually excluded from the remainder of the graphics pipeline.
  • When data from a virtual haptic camera is used for haptic rendering, the view volume may be sized so as to include only objects that are likely to be touched. In one embodiment, the size of the view volume is specified as a radius of motion in workspace coordinates of the haptic device which is transformed into view coordinates when composing the view-clip matrix. An orthographic view volume mapping centered around the origin is used with extents determined by the motion radius. By limiting the size of the view volume via the view-clip transformation 910, it is possible to localize the geometry that is received by the graphic pipeline when haptically rendering the scene, thereby optimizing the haptic rendering process.
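A sketch of such a view-clip setup, assuming OpenGL and an application-specific scale factor for converting the workspace motion radius into view units, might look like this:

```cpp
#include <GL/gl.h>

void loadHapticViewClipTransform(double motionRadiusWorkspace, double workspaceToViewScale)
{
    double r = motionRadiusWorkspace * workspaceToViewScale;  // motion radius in view coordinates
    glMatrixMode(GL_PROJECTION);
    glLoadIdentity();
    glOrtho(-r, r, -r, r, -r, r);   // orthographic volume centered on the origin; geometry
                                    // outside this box never reaches rasterization
    glMatrixMode(GL_MODELVIEW);
}
```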
  • Another of the transformations in the 3D transformation pipeline of FIG. 9 is the clip-window transformation 915, which converts clip coordinates into the physical coordinates of the display device so that an object in clip coordinates may be displayed on the display device. The clip-window transformation 915 is specified by a 2D pixel offset and a width and height in pixels. By using the clip-window transformation 915, it is possible to limit the number of pixels used for rasterizing the geometry in the graphics pipeline. For optimal performance, it is not necessary to rasterize the localized contents of the view volume using the entire pixel buffer dimensions. There is a tradeoff between performance and sampling error: if the pixel buffer is too big, it requires more memory and copying time, whereas if it is too small, too many details may be lost for adequately realistic haptic rendering. The size of a display device buffer may be determined in consideration of this tradeoff. In one embodiment, a width and height of 256 by 256 pixels for the display device buffer provides a sufficient compromise. Optimization of these dimensions is possible by considering the allowable time for pixel buffer read-back from the graphics card and the size of the smallest geometric feature in pixel coordinates.
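A corresponding clip-window (viewport) setup, again assuming OpenGL, simply restricts rasterization to a small fixed-size pixel buffer such as the 256 by 256 example above:

```cpp
#include <GL/gl.h>

void loadHapticClipWindowTransform(int offsetX, int offsetY)
{
    // 2D pixel offset plus width and height in pixels; only this region is rasterized.
    glViewport(offsetX, offsetY, 256, 256);
}
```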
  • The view-touch transformation 920 maps an object from view-coordinates into the touch coordinate space. The view-touch transformation 920 is convenient for altering the alignment or offset of touch interactions with respect to the view. As a default, this transformation may be left as identity so that the position and alignment of touch interactions are consistent with the view position and direction. However, the view-touch transformation 920 may be optionally modified to accommodate touch interactions with the scene in which the haptic device and display device are meant to be independent, for example, during use of a head-mounted display.
  • The touch-workspace transformation 925 maps an object in touch-coordinates into the local coordinate space of the haptic interface device. The haptic workspace is the physical space reachable by the haptic device. For example, the PHANTOM® Omni™ device, manufactured by SensAble Technologies, Inc., of Woburn, Mass., has a physical workspace of dimensions 160×120×70 mm.
  • The shape-world transformation 902, the world-view transformation 905, the view-clip transformation 910, the clip-window transformation 915, the view-touch transformation 920, and/or the touch-workspace transformation 925 may be structured for viewing a scene of a virtual environment from any of one or more virtual cameras. For example, these transformations may be structured for viewing a scene from a first virtual camera dedicated to haptic rendering, as well as a second virtual camera dedicated to graphical rendering. The processing capability of the graphics pipeline is leveraged for both graphical and haptic rendering.
  • FIG. 12 is a block diagram 1200 featuring an alternative method for interpreting data for haptic rendering, including the step of intercepting data from a graphics pipeline via a pass-through dynamic link library (DLL). In step 1202, data is intercepted from the graphics pipeline of a 3D graphics application using a pass-through DLL. A graphics API generally uses a DLL file so that a 3D graphics application may access the functions in its library. A pass-through DLL may be named to match the name of the usual DLL file used by the graphics API, while the “real” graphics API DLL file is renamed. As a result, function calls from the 3D graphics application will call the pass-through DLL instead of calling the graphics API DLL. The pass-through DLL does not impede normal functioning of the 3D graphics application because all function calls are redirected by the pass-through DLL to the regular graphics API DLL.
  • In order for the pass-through DLL to intercept data from the 3D graphics pipeline, logic is inserted in its code to respond to particular graphics API function calls. The pass-through DLL may also directly call functions of the graphics API, thereby directly accessing the 3D graphics pipeline and the associated buffer data. Creating a pass-through DLL may require replicating the exported function table interface of the graphics API DLL. This may be accomplished by determining the signature of every function exported by the DLL: a binary file dumper can be used to view the symbols exported by the DLL, and the header file can be consulted to determine the number and types of the function arguments and the return type.
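As an illustration only, a Windows pass-through DLL entry point might look like the following; the renamed DLL file name is an assumption, and a complete implementation would replicate every exported function of the graphics API DLL in the same way:

```cpp
#include <windows.h>

// Local typedefs matching OpenGL's (avoids pulling in gl.h, whose own
// declaration of glDrawElements would clash with the export below).
typedef unsigned int GLenum;
typedef int          GLsizei;

typedef void (APIENTRY *PFNGLDRAWELEMENTS)(GLenum, GLsizei, GLenum, const void*);

static PFNGLDRAWELEMENTS realDrawElements = nullptr;

static void loadRealFunction()
{
    if (realDrawElements) return;
    // "opengl32_real.dll" is an assumed name for the renamed original DLL.
    HMODULE real = LoadLibraryA("opengl32_real.dll");
    if (real)
        realDrawElements = (PFNGLDRAWELEMENTS)GetProcAddress(real, "glDrawElements");
}

extern "C" __declspec(dllexport) void APIENTRY
glDrawElements(GLenum mode, GLsizei count, GLenum type, const void* indices)
{
    loadRealFunction();
    // Interception point: record whatever the haptic renderer needs (for
    // example, primitive counts or buffer snapshots) into shared memory here.
    if (realDrawElements)
        realDrawElements(mode, count, type, indices);    // forward to the real DLL
}
```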
  • In step 1205 of the method of FIG. 12, a subset of the accessed data is written to a memory buffer, and a subset of the data is read from this memory buffer. This memory buffer may be shared between the pass-through DLL and a separate haptic rendering process.
  • In optional step 1210 of the method of FIG. 12, a height map is determined using the accessed data. For example, if the depth buffer is accessed in step 1202, the depth buffer itself may be treated as a height map. Such a height map may describe at least some of a surface of a virtual object in the virtual environment. In optional step 1215, a mesh is generated using the height map determined in step 1210. However, in a preferred embodiment, the haptic rendering method interprets the height field directly, as described elsewhere herein. Haptic rendering of a depth buffer is performed directly in screen space and in a local fashion (i.e., via a haptic camera), so it is not necessary that the entire image be transformed and then processed to generate a mesh. In order to generate a mesh from depth buffer data, the data representing depth values and screen coordinate locations may be transformed from screen space to object space.
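Assuming OpenGL, the screen-space-to-object-space step can be performed with gluUnProject, which inverts the composed modelview, projection, and viewport transforms for a given pixel and its stored depth value; the sketch below is illustrative:

```cpp
#include <GL/gl.h>
#include <GL/glu.h>

bool depthSampleToObjectSpace(int px, int py, float depth,
                              const GLdouble modelview[16],
                              const GLdouble projection[16],
                              const GLint viewport[4],
                              double& objX, double& objY, double& objZ)
{
    // px, py are window coordinates (y counts up from the bottom of the
    // viewport in OpenGL conventions); depth is the normalized z from the
    // depth buffer for that pixel.
    return gluUnProject(static_cast<double>(px), static_cast<double>(py),
                        static_cast<double>(depth),
                        modelview, projection, viewport,
                        &objX, &objY, &objZ) == GL_TRUE;
}
```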
  • FIG. 13 is a schematic diagram 1300 illustrating an alternative system for haptically rendering a virtual environment using data intercepted from a graphics pipeline of a 3D graphics application via a pass-through dynamic link library. In one embodiment, a 3D graphics application is developed using a graphics API. When the 3D graphics application makes calls to the graphics API DLL file 1310, the calls are intercepted by a pass-through DLL file 1305. The pass-through DLL does not impede normal functioning of the 3D graphics application because all function calls are redirected by the pass-through DLL to the regular graphics API DLL.
  • The pass-through DLL 1305 may then make function calls to the graphics API DLL 1310, thereby accessing buffer data from the 3D graphics pipeline. The graphics API DLL 1310 operates to render graphics on a display screen via a 3D graphics card 1315. However, the pass-through DLL 1305 may call the graphics API DLL to access the graphic rendering data from the 3D graphics pipeline and store this data in memory buffer 1320. The data may be read from the memory buffer 1320 in a haptic rendering process to provide touch feedback based on the intercepted graphical data.
  • Thus, the memory buffer 1320 may be shared with a haptic API 1325. For example, the haptic API 1325 accesses the graphic rendering data in the memory buffer 1320 and prepares it for low level haptic rendering by the haptic device API 1330. The haptic device API 1330 then produces a force signal which a device driver uses to generate and transmit a force to a user via the haptic interface device 1335.
  • EQUIVALENTS
  • While the invention has been particularly shown and described with reference to specific preferred embodiments, it should be understood by those skilled in the art that various changes in form and detail may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (36)

1. A method for haptically rendering a virtual object in a virtual environment, the method comprising the steps of:
(a) determining a haptic interface location in a 3D virtual environment corresponding to a location of a haptic interface device in real space;
(b) positioning a first virtual camera substantially at the haptic interface location;
(c) accessing graphical data from the first virtual camera corresponding to the virtual environment;
(d) determining a position of the haptic interface location in relation to at least one geometric feature of a virtual object in the virtual environment using the graphical data from the first virtual camera; and
(e) determining an interaction force based at least in part on the position of the haptic interface location in relation to the at least one geometric feature of the virtual object.
2. The method of claim 1, wherein the at least one geometric feature of the virtual object comprises at least one of a surface, point, line, or plane.
3. The method of claim 1, further comprising the step of delivering the interaction force to a user through the haptic interface device.
4. The method of claim 1, wherein the position of the first camera is updated as the haptic interface location changes according to movement of the haptic interface device.
5. The method of claim 1, further comprising the steps of:
(f) positioning a second virtual camera at a location other than the haptic interface location; and
(g) accessing graphical data from the second virtual camera corresponding to the virtual environment.
6. The method of claim 5, wherein the second virtual camera is fixed and the first virtual camera moves.
7. The method of claim 1, wherein step (c) comprises accessing graphical data from a graphics pipeline of a 3D graphics application.
8. The method of claim 1, wherein step (d) comprises determining a world-view transform that maps world coordinates corresponding to the haptic virtual environment to view coordinates corresponding to the first virtual camera.
9. The method of claim 8, wherein the step of determining the world-view transform comprises determining an eye position and a look direction.
10. The method of claim 9, wherein determining the eye position comprises sampling the position of the haptic interface location and wherein determining the look direction comprises determining a vector representing a motion of the haptic interface location.
11. The method of claim 8, wherein step (d) further comprises at least one of the following:
determining a shape-world transformation;
determining a world-view transformation;
determining a view-clip transformation;
determining a clip-window transformation;
determining a view-touch transformation; and
determining a touch-workspace transformation.
12. The method of claim 1, wherein a view volume associated with the first virtual camera is sized to exclude geometric elements that lie beyond a desired distance from the haptic interface location.
13. The method of claim 1, further comprising the step of culling at least a portion of the graphical data from the first virtual camera to exclude data corresponding to geometric primitives that lie outside a view volume associated with the first camera.
14. The method of claim 13, wherein the culling step is performed using graphics hardware.
15. The method of claim 13, wherein the culling step is performed using a spatial partition.
16. The method of claim 15, wherein the spatial partition comprises a hierarchical data structure.
17. The method of claim 16, wherein the hierarchical data structure comprises at least one of an octree data structure and a BSP tree data structure.
18. The method of claim 13, wherein the culling step is performed using graphics hardware and a spatial partition.
19. The method of claim 1, wherein the graphical data from the first virtual camera comprises at least one of the following:
at least a portion of a depth buffer;
at least a portion of a feedback buffer;
at least a portion of a color buffer;
at least a portion of a selection buffer;
at least a portion of an accumulation buffer;
at least a portion of a texture map;
at least a portion of a fat framebuffer;
data from a pixel shading program;
data from a vertex shading program;
rasterization primitives;
application programming interface input data; and
state data.
20. The method of claim 1, wherein the graphical data from the first virtual camera comprises at least a portion of a fat framebuffer.
21. The method of claim 20, wherein the fat framebuffer comprises at least one member of the group consisting of: vertex positions; normals; color; texture; normal maps; bump maps; and depth data.
22. The method of claim 20, wherein step (c) comprises performing at least one of a pixel shading and a vertex shading.
23. The method of claim 22, wherein the shading is performed using graphics hardware.
24. The method of claim 1, wherein the graphical data from the first virtual camera comprises at least a portion of a depth buffer.
25. The method of claim 1, wherein the graphical data from the first virtual camera comprises rasterization primitives.
26. The method of claim 1, wherein step (d) comprises:
performing an intersection test to determine at least one intersection point and at least one intersection normal in screen space; and
transforming coordinates of the at least one intersection point and the at least one intersection normal from screen space to object space.
27. The method of claim 26, wherein step (d) comprises defining for each intersection point a plane tangent to the surface of the virtual object at the intersection point.
28. The method of claim 26, wherein step (d) comprises performing a projection test to determine a geometric feature nearest the haptic interface location.
29. A system for haptically rendering a virtual object in a virtual environment, the system comprising:
a graphics thread that generates a visual display of a virtual environment;
a collision thread that determines if a user-directed virtual proxy collides with at least one geometric feature within the virtual environment, wherein the collision thread uses input from the graphics thread; and
a servo thread that generates force to be applied to a user in real space through a haptic interface device according to input from the collision thread, wherein the servo thread is in communication with the haptic interface device.
30. The system of claim 29, wherein the graphics thread refreshes the visual display at a rate within a range from about 5 Hz to about 150 Hz.
31. The system of claim 29, wherein the graphics thread refreshes the visual display at a rate within a range from about 30 Hz to about 60 Hz.
32. The system of claim 29, wherein the collision thread performs a collision detection computation at a rate within a range from about 30 Hz to about 200 Hz.
33. The system of claim 29, wherein the collision thread performs a collision detection computation at a rate within a range from about 80 Hz to about 120 Hz.
34. The system of claim 29, wherein the servo thread refreshes the force to be applied through the haptic interface device at a rate within a range from about 1000 Hz to about 10,000 Hz.
35. The system of claim 29, wherein the servo thread comprises at least one of a force shader and a proxy shader.
36. An apparatus for providing haptic feedback to a user of a 3D graphics application, the apparatus comprising:
a user-controlled haptic interface device adapted to provide a user input to a computer and to transmit force to a user; and
computer software that, when operating with the computer and the user input, is adapted to determine force transmitted to the user by:
(a) determining a haptic interface location in a 3D virtual environment corresponding to a location of the haptic interface device in real space;
(b) positioning a first virtual camera substantially at the haptic interface location;
(c) accessing graphical data from the first virtual camera corresponding to the virtual environment;
(d) determining a position of the haptic interface location in relation to at least one geometric feature of a virtual object in the virtual environment using the graphical data from the first virtual camera; and
(e) determining an interaction force based at least in part on the position of the haptic interface location in relation to the at least one geometric feature of the virtual object.
US11/169,271 2004-06-29 2005-06-28 Apparatus and methods for haptic rendering using a haptic camera view Abandoned US20060284834A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US11/169,271 US20060284834A1 (en) 2004-06-29 2005-06-28 Apparatus and methods for haptic rendering using a haptic camera view
US14/276,845 US9030411B2 (en) 2004-06-29 2014-05-13 Apparatus and methods for haptic rendering using a haptic camera view

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US58400104P 2004-06-29 2004-06-29
US11/169,271 US20060284834A1 (en) 2004-06-29 2005-06-28 Apparatus and methods for haptic rendering using a haptic camera view

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US14/276,845 Continuation US9030411B2 (en) 2004-06-29 2014-05-13 Apparatus and methods for haptic rendering using a haptic camera view

Publications (1)

Publication Number Publication Date
US20060284834A1 true US20060284834A1 (en) 2006-12-21

Family

ID=34982154

Family Applications (3)

Application Number Title Priority Date Filing Date
US11/169,175 Active 2028-04-02 US7990374B2 (en) 2004-06-29 2005-06-28 Apparatus and methods for haptic rendering using data in a graphics pipeline
US11/169,271 Abandoned US20060284834A1 (en) 2004-06-29 2005-06-28 Apparatus and methods for haptic rendering using a haptic camera view
US14/276,845 Active US9030411B2 (en) 2004-06-29 2014-05-13 Apparatus and methods for haptic rendering using a haptic camera view

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US11/169,175 Active 2028-04-02 US7990374B2 (en) 2004-06-29 2005-06-28 Apparatus and methods for haptic rendering using data in a graphics pipeline

Family Applications After (1)

Application Number Title Priority Date Filing Date
US14/276,845 Active US9030411B2 (en) 2004-06-29 2014-05-13 Apparatus and methods for haptic rendering using a haptic camera view

Country Status (2)

Country Link
US (3) US7990374B2 (en)
WO (1) WO2006004894A2 (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090251421A1 (en) * 2008-04-08 2009-10-08 Sony Ericsson Mobile Communications Ab Method and apparatus for tactile perception of digital images
US20090282331A1 (en) * 2008-05-08 2009-11-12 Kenichiro Nagasaka Information input/output device, information input/output method and computer program
US20100064357A1 (en) * 2008-09-09 2010-03-11 Kerstin Baird Business Processing System Combining Human Workflow, Distributed Events, And Automated Processes
US20100077411A1 (en) * 2008-09-22 2010-03-25 Alyson Ann Comer Routing function calls to specific-function dynamic link libraries in a general-function environment
WO2012037157A2 (en) * 2010-09-13 2012-03-22 Alt Software (Us) Llc System and method for displaying data having spatial coordinates
US20120075288A1 (en) * 2010-09-24 2012-03-29 Samsung Electronics Co., Ltd. Apparatus and method for back-face culling using frame coherence
US20130050062A1 (en) * 2010-05-07 2013-02-28 Gwangju Institute Of Science And Technology Apparatus and method for implementing haptic-based networked virtual environment which supports high-resolution tiled display
US8390623B1 (en) * 2008-04-14 2013-03-05 Google Inc. Proxy based approach for generation of level of detail
US20140019940A1 (en) * 2012-07-16 2014-01-16 Microsoft Corporation Tool-Based Testing For Composited Systems
US8849015B2 (en) 2010-10-12 2014-09-30 3D Systems, Inc. System and apparatus for haptically enabled three-dimensional scanning
US9667870B2 (en) 2013-01-07 2017-05-30 Samsung Electronics Co., Ltd Method for controlling camera operation based on haptic function and terminal supporting the same

Families Citing this family (70)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8352400B2 (en) 1991-12-23 2013-01-08 Hoffberg Steven M Adaptive pattern recognition based controller apparatus and method and human-factored interface therefore
US7966078B2 (en) 1999-02-01 2011-06-21 Steven Hoffberg Network media appliance system and method
US8010180B2 (en) 2002-03-06 2011-08-30 Mako Surgical Corp. Haptic guidance system and method
US8996169B2 (en) 2011-12-29 2015-03-31 Mako Surgical Corp. Neural monitor-based dynamic haptics
US7831292B2 (en) * 2002-03-06 2010-11-09 Mako Surgical Corp. Guidance system and method for surgical procedures with improved feedback
AU2003218010A1 (en) * 2002-03-06 2003-09-22 Z-Kat, Inc. System and method for using a haptic device in combination with a computer-assisted surgery system
US11202676B2 (en) 2002-03-06 2021-12-21 Mako Surgical Corp. Neural monitor-based dynamic haptics
US7990374B2 (en) 2004-06-29 2011-08-02 Sensable Technologies, Inc. Apparatus and methods for haptic rendering using data in a graphics pipeline
CA2594678A1 (en) * 2005-01-21 2006-07-27 Handshake Vr Inc. Haptic-visual scene development and deployment
US20060250421A1 (en) * 2005-03-31 2006-11-09 Ugs Corp. System and Method to Determine a Visibility Solution of a Model
EP2023843B1 (en) 2006-05-19 2016-03-09 Mako Surgical Corp. System for verifying calibration of a surgical device
US8134566B1 (en) * 2006-07-28 2012-03-13 Nvidia Corporation Unified assembly instruction set for graphics processing
US8698735B2 (en) * 2006-09-15 2014-04-15 Lucasfilm Entertainment Company Ltd. Constrained virtual camera control
US20080163118A1 (en) * 2006-12-29 2008-07-03 Jason Wolf Representation of file relationships
DE102007021348A1 (en) * 2007-05-06 2008-11-20 Universitätsklinikum Hamburg-Eppendorf (UKE) A method for simulating the feel of an interaction of a guided object with a virtual three-dimensional object
CN100588186C (en) * 2007-06-19 2010-02-03 腾讯科技(深圳)有限公司 Method and device for realizing 3D panel at instant messaging software client end
US8422801B2 (en) 2007-12-20 2013-04-16 Koninklijke Philips Electronics N.V. Image encoding method for stereoscopic rendering
US8169436B2 (en) 2008-01-27 2012-05-01 Citrix Systems, Inc. Methods and systems for remoting three dimensional graphics
US8786596B2 (en) * 2008-07-23 2014-07-22 Disney Enterprises, Inc. View point representation for 3-D scenes
JP5112229B2 (en) * 2008-09-05 2013-01-09 株式会社エヌ・ティ・ティ・ドコモ Distribution device, terminal device, system and method
JP5080406B2 (en) * 2008-09-05 2012-11-21 株式会社エヌ・ティ・ティ・ドコモ Distribution device, terminal device, system and method
US20110063306A1 (en) * 2009-09-16 2011-03-17 Nvidia Corporation CO-PROCESSING TECHNIQUES ON HETEROGENEOUS GPUs INCLUDING IDENTIFYING ONE GPU AS A NON-GRAPHICS DEVICE
KR20110063297A (en) * 2009-12-02 2011-06-10 삼성전자주식회사 Mobile device and control method thereof
JP6035148B2 (en) * 2009-12-08 2016-11-30 コーニンクレッカ フィリップス エヌ ヴェKoninklijke Philips N.V. Ablation treatment plan and device
US9830889B2 (en) 2009-12-31 2017-11-28 Nvidia Corporation Methods and system for artifically and dynamically limiting the display resolution of an application
US20110221758A1 (en) * 2010-03-11 2011-09-15 Robert Livingston Apparatus and Method for Manipulating Images through a Computer
US9119655B2 (en) 2012-08-03 2015-09-01 Stryker Corporation Surgical manipulator capable of controlling a surgical instrument in multiple modes
US9921712B2 (en) 2010-12-29 2018-03-20 Mako Surgical Corp. System and method for providing substantially stable control of a surgical tool
FR2974217A1 (en) * 2011-04-12 2012-10-19 Thomson Licensing METHOD FOR ESTIMATING INFORMATION REPRESENTATIVE OF A HEIGHT
US9472163B2 (en) * 2012-02-17 2016-10-18 Monotype Imaging Inc. Adjusting content rendering for environmental conditions
EP4316409A2 (en) 2012-08-03 2024-02-07 Stryker Corporation Systems for robotic surgery
US9226796B2 (en) 2012-08-03 2016-01-05 Stryker Corporation Method for detecting a disturbance as an energy applicator of a surgical instrument traverses a cutting path
US9820818B2 (en) 2012-08-03 2017-11-21 Stryker Corporation System and method for controlling a surgical manipulator based on implant parameters
US9046925B2 (en) * 2012-09-11 2015-06-02 Dell Products L.P. Method for using the GPU to create haptic friction maps
US8917281B2 (en) * 2012-11-05 2014-12-23 Rightware Oy Image rendering method and system
FR3000242A1 (en) 2012-12-21 2014-06-27 France Telecom METHOD FOR MANAGING A GEOGRAPHIC INFORMATION SYSTEM SUITABLE FOR USE WITH AT LEAST ONE POINTING DEVICE, WITH CREATION OF ASSOCIATIONS BETWEEN DIGITAL OBJECTS
FR3000241A1 (en) * 2012-12-21 2014-06-27 France Telecom METHOD FOR MANAGING A GEOGRAPHIC INFORMATION SYSTEM ADAPTED TO BE USED WITH AT LEAST ONE POINTING DEVICE, WITH THE CREATION OF PURELY VIRTUAL DIGITAL OBJECTS.
AU2014248758B2 (en) 2013-03-13 2018-04-12 Stryker Corporation System for establishing virtual constraint boundaries
EP3459468B1 (en) 2013-03-13 2020-07-15 Stryker Corporation Method and system for arranging objects in an operating room
US9773341B2 (en) * 2013-03-14 2017-09-26 Nvidia Corporation Rendering cover geometry without internal edges
EP3211511A1 (en) * 2013-03-15 2017-08-30 Immersion Corporation Programmable haptic peripheral
US11379040B2 (en) 2013-03-20 2022-07-05 Nokia Technologies Oy Touch display device with tactile feedback
EP2801954A1 (en) * 2013-05-07 2014-11-12 Thomson Licensing Method and device for visualizing contact(s) between objects of a virtual scene
US9245376B2 (en) * 2013-05-14 2016-01-26 Roblox Corporation Lighting management in virtual worlds
JP2015015563A (en) * 2013-07-04 2015-01-22 セイコーエプソン株式会社 Image display device
KR20150008733A (en) * 2013-07-15 2015-01-23 엘지전자 주식회사 Glass type portable device and information projecting side searching method thereof
US20150177947A1 (en) * 2013-12-20 2015-06-25 Motorola Mobility Llc Enhanced User Interface Systems and Methods for Electronic Devices
KR102082132B1 (en) * 2014-01-28 2020-02-28 한국전자통신연구원 Device and Method for new 3D Video Representation from 2D Video
US9690370B2 (en) 2014-05-05 2017-06-27 Immersion Corporation Systems and methods for viewport-based augmented reality haptic effects
US11132983B2 (en) 2014-08-20 2021-09-28 Steven Heckenlively Music yielder with conformance to requisites
US9478109B2 (en) * 2014-12-29 2016-10-25 Immersion Corporation Virtual sensor in a virtual environment
US9737987B1 (en) 2015-11-20 2017-08-22 X Development Llc Visual cards for describing and loading operational modes to motorized interface element
EP3777749A3 (en) 2015-12-31 2021-07-21 Stryker Corporation System and method for preparing surgery on a patient at a target site defined by a virtual object
KR102462941B1 (en) * 2016-01-26 2022-11-03 삼성디스플레이 주식회사 Display device
US20180063205A1 (en) * 2016-08-30 2018-03-01 Augre Mixed Reality Technologies, Llc Mixed reality collaboration
US11202682B2 (en) 2016-12-16 2021-12-21 Mako Surgical Corp. Techniques for modifying tool operation in a surgical robotic system based on comparing actual and commanded states of the tool relative to a surgical site
US10347037B2 (en) * 2017-05-31 2019-07-09 Verizon Patent And Licensing Inc. Methods and systems for generating and providing virtual reality data that accounts for level of detail
US10586377B2 (en) * 2017-05-31 2020-03-10 Verizon Patent And Licensing Inc. Methods and systems for generating virtual reality data that accounts for level of detail
US10311630B2 (en) 2017-05-31 2019-06-04 Verizon Patent And Licensing Inc. Methods and systems for rendering frames of a virtual scene from different vantage points based on a virtual entity description frame of the virtual scene
US10194078B2 (en) 2017-06-09 2019-01-29 Immersion Corporation Haptic enabled device with multi-image capturing abilities
KR102364678B1 (en) 2017-06-20 2022-02-18 엘지전자 주식회사 Mobile terminal
US10559126B2 (en) * 2017-10-13 2020-02-11 Samsung Electronics Co., Ltd. 6DoF media consumption architecture using 2D video decoder
US10572016B2 (en) * 2018-03-06 2020-02-25 Microsoft Technology Licensing, Llc Spatialized haptic device force feedback
CN108519814B (en) * 2018-03-21 2020-06-02 北京科技大学 Man-machine interaction operating system
GB2578454A (en) * 2018-10-28 2020-05-13 Cambridge Mechatronics Ltd Haptic feedback generation
US10775894B2 (en) 2018-11-02 2020-09-15 Immersion Corporation Systems and methods for providing customizable haptic playback
US10909659B2 (en) * 2018-12-12 2021-02-02 Apical Limited Super-resolution image processing using a machine learning system
US11698680B2 (en) * 2020-06-23 2023-07-11 Immersion Corporation Methods and systems for decoding and rendering a haptic effect associated with a 3D environment
CN112206526A (en) * 2020-10-19 2021-01-12 珠海金山网络游戏科技有限公司 Role movement control method and device
CN114265503B (en) * 2021-12-22 2023-10-13 吉林大学 Texture rendering method applied to pen-type vibration touch feedback device

Citations (56)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3846826A (en) * 1971-08-12 1974-11-05 R Mueller Direct television drawing and image manipulating system
US4868761A (en) * 1985-03-13 1989-09-19 Toshiba Kikai Kabushiki Kaisha Method for evaluating free surface and NC system thereof
US4868766A (en) * 1986-04-02 1989-09-19 Oce-Nederland B.V. Method of generating and processing models of two-dimensional or three-dimensional objects in a computer and reproducing the models on a display
US4901253A (en) * 1987-02-23 1990-02-13 Mitutoyo Corporation Coordinate measuring instrument and method of generating pattern data concerning shape of work to be measured
US5027292A (en) * 1989-04-19 1991-06-25 International Business Machines Corporation Multiple depth buffers for graphics and solid modelling
US5265197A (en) * 1988-12-23 1993-11-23 Kabushiki Kaisha Toshiba Geometric modeling apparatus
US5273038A (en) * 1990-07-09 1993-12-28 Beavin William C Computer simulation of live organ
US5304884A (en) * 1988-01-19 1994-04-19 Olympus Optical Company Limited Molded armature
US5321622A (en) * 1988-04-18 1994-06-14 3D Systems, Inc. Boolean layer comparison slice
US5388199A (en) * 1986-04-25 1995-02-07 Toshiba Kikai Kabushiki Kaisha Interactive graphic input system
US5428715A (en) * 1991-03-14 1995-06-27 Mitsubishi Denki Kabushiki Kaisha Three-dimensional figure data generator device
US5455902A (en) * 1990-12-21 1995-10-03 Eastman Kodak Company Method and apparatus for performing real-time computer animation
US5479593A (en) * 1993-06-21 1995-12-26 Electronic Data Systems Corporation System and method for improved solving of equations employed during parametric geometric modeling
US5487012A (en) * 1990-12-21 1996-01-23 Topholm & Westermann Aps Method of preparing an otoplasty or adaptive earpiece individually matched to the shape of an auditory canal
US5497452A (en) * 1991-03-14 1996-03-05 International Business Machines Corporation Method and apparatus for generating a geometric model
US5515078A (en) * 1992-06-12 1996-05-07 The Computer Museum, Inc. Virtual-reality positional input and display system
US5561747A (en) * 1992-02-03 1996-10-01 Computervision Corporation Boundary evaluation in non-manifold environment
US5561748A (en) * 1990-11-26 1996-10-01 International Business Machines Corporation Method and apparatus for creating solid models from two-dimensional drawings on a graphics display
US5576727A (en) * 1993-07-16 1996-11-19 Immersion Human Interface Corporation Electromechanical human-computer interface with force feedback
US5623582A (en) * 1994-07-14 1997-04-22 Immersion Human Interface Corporation Computer interface or control input device for laparoscopic surgical instrument and other elongated mechanical objects
US5625576A (en) * 1993-10-01 1997-04-29 Massachusetts Institute Of Technology Force reflecting haptic interface
US5629594A (en) * 1992-12-02 1997-05-13 Cybernet Systems Corporation Force feedback system
US5633951A (en) * 1992-12-18 1997-05-27 North America Philips Corporation Registration of volumetric images which are relatively elastically deformed by matching surfaces
US5649076A (en) * 1993-08-06 1997-07-15 Toyota Jidosha Kabushiki Kaisha Method of generating or modifying solid model of an object according to cross-sectional shapes and a predetermined relationship and apparatus suitable for practicing the method
US5691898A (en) * 1995-09-27 1997-11-25 Immersion Human Interface Corp. Safe and low cost computer peripherals with force feedback for consumer applications
US5704791A (en) * 1995-03-29 1998-01-06 Gillio; Robert G. Virtual surgery system instrument
US5721566A (en) * 1995-01-18 1998-02-24 Immersion Human Interface Corp. Method and apparatus for providing damping force feedback
US5751289A (en) * 1992-10-01 1998-05-12 University Corporation For Atmospheric Research Virtual reality imaging system with image replay
US5766016A (en) * 1994-11-14 1998-06-16 Georgia Tech Research Corporation Surgical simulator and method for simulating surgical procedure
US5769640A (en) * 1992-12-02 1998-06-23 Cybernet Systems Corporation Method and system for simulating medical procedures including virtual reality and control method and system for use therein
US5808616A (en) * 1993-08-25 1998-09-15 Canon Kabushiki Kaisha Shape modeling method and apparatus utilizing ordered parts lists for designating a part to be edited in a view
US5815154A (en) * 1995-12-20 1998-09-29 Solidworks Corporation Graphical browser system for displaying and manipulating a computer model
US5999187A (en) * 1996-06-28 1999-12-07 Resolution Technologies, Inc. Fly-through computer aided design method and apparatus
US6046726A (en) * 1994-09-07 2000-04-04 U.S. Philips Corporation Virtual workspace with user-programmable tactile feedback
US6111577A (en) * 1996-04-04 2000-08-29 Massachusetts Institute Of Technology Method and apparatus for determining forces to be applied to a user through a haptic interface
US6115046A (en) * 1990-11-26 2000-09-05 International Business Machines Corporation Method and apparatus for generating three dimensional drawing on a graphics display
US6120171A (en) * 1996-06-14 2000-09-19 Mohammad Salim Shaikh Fully integrated machinable profile based parametric solid modeler
US6131097A (en) * 1992-12-02 2000-10-10 Immersion Corporation Haptic authoring
US6191796B1 (en) * 1998-01-21 2001-02-20 Sensable Technologies, Inc. Method and apparatus for generating and interfacing with rigid and deformable surfaces in a haptic virtual reality environment
US6308144B1 (en) * 1996-09-26 2001-10-23 Computervision Corporation Method and apparatus for providing three-dimensional model associativity
US6417638B1 (en) * 1998-07-17 2002-07-09 Sensable Technologies, Inc. Force reflecting haptic interface
US6448977B1 (en) * 1997-11-14 2002-09-10 Immersion Corporation Textures and other spatial sensations for a relative haptic interface device
US20020130820A1 (en) * 1998-04-20 2002-09-19 Alan Sullivan Multi-planar volumetric display system and method of operation
US6570564B1 (en) * 1999-09-24 2003-05-27 Sun Microsystems, Inc. Method and apparatus for rapid processing of scene-based programs
US6628280B2 (en) * 2001-03-16 2003-09-30 Mitsubishi Electric Research Laboratories, Inc. Method for selectively regenerating an adaptively sampled distance field
US6704694B1 (en) * 1998-10-16 2004-03-09 Massachusetts Institute Of Technology Ray based interaction system
US6703550B2 (en) * 2001-10-10 2004-03-09 Immersion Corporation Sound data output and manipulation using haptic feedback
US6773408B1 (en) * 1997-05-23 2004-08-10 Transurgical, Inc. MRI-guided therapeutic unit and methods
US6792398B1 (en) * 1998-07-17 2004-09-14 Sensable Technologies, Inc. Systems and methods for creating virtual objects in a sketch mode in a haptic virtual reality environment
US6803928B2 (en) * 2000-06-06 2004-10-12 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Extended virtual table: an optical extension for table-like projection systems
US6809738B2 (en) * 2001-12-21 2004-10-26 Vrcontext S.A. Performing memory management operations to provide displays of complex virtual environments
US6822635B2 (en) * 2000-01-19 2004-11-23 Immersion Corporation Haptic interface for laptop computers and other portable devices
US20050243086A1 (en) * 2004-05-03 2005-11-03 Microsoft Corporation Integration of three dimensional scene hierarchy into two dimensional compositing system
US7050955B1 (en) * 1999-10-01 2006-05-23 Immersion Corporation System, method and data structure for simulated interaction with graphical objects
US20060202953A1 (en) * 1997-08-22 2006-09-14 Pryor Timothy R Novel man machine interfaces and applications
US7432910B2 (en) * 1998-06-23 2008-10-07 Immersion Corporation Haptic interface device and actuator assembly providing linear haptic sensations

Family Cites Families (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4628136A (en) 1985-12-17 1986-12-09 Lummus Crest, Inc. Dehydrogenation process for production of styrene from ethylbenzene comprising low temperature heat recovery and modification of the ethylbenzene-steam feed therewith
JP2655597B2 (en) 1986-06-30 1997-09-24 日本電気株式会社 Loopback identification method for digital circuits
JPS63149416A (en) 1986-12-11 1988-06-22 Seiko Electronic Components Ltd Sliding bearing construction
JPS63177497A (en) 1987-01-16 1988-07-21 大日本印刷株式会社 Molded product with printed circuit and manufacture of the same
CA2000818C (en) 1988-10-19 1994-02-01 Akira Tsuchihashi Master slave manipulator system
JPH03137722A (en) 1989-10-24 1991-06-12 Canon Inc Two-dimensional memory device
JP2527854B2 (en) 1991-06-10 1996-08-28 富士通株式会社 Variable drag device and key switch device
US7225404B1 (en) 1996-04-04 2007-05-29 Massachusetts Institute Of Technology Method and apparatus for determining forces to be applied to a user through a haptic interface
US6552722B1 (en) 1998-07-17 2003-04-22 Sensable Technologies, Inc. Systems and methods for sculpting virtual objects in a haptic virtual reality environment
US6985133B1 (en) 1998-07-17 2006-01-10 Sensable Technologies, Inc. Force reflecting haptic interface
US6867770B2 (en) 2000-12-14 2005-03-15 Sensable Technologies, Inc. Systems and methods for voxel warping
US6958752B2 (en) 2001-01-08 2005-10-25 Sensable Technologies, Inc. Systems and methods for three-dimensional modeling
US6671651B2 (en) 2002-04-26 2003-12-30 Sensable Technologies, Inc. 3-D selection and manipulation with a multiple dimension haptic interface
US7962400B2 (en) 2003-04-02 2011-06-14 Cfph, Llc System and method for wagering based on the movement of financial markets
US7411576B2 (en) 2003-10-30 2008-08-12 Sensable Technologies, Inc. Force reflecting haptic interface
US7095418B2 (en) 2003-10-30 2006-08-22 Sensable Technologies, Inc. Apparatus and methods for texture mapping
US7382378B2 (en) 2003-10-30 2008-06-03 Sensable Technologies, Inc. Apparatus and methods for stenciling an image
US7889209B2 (en) 2003-12-10 2011-02-15 Sensable Technologies, Inc. Apparatus and methods for wrapping texture onto the surface of a virtual object
US7626589B2 (en) 2003-12-10 2009-12-01 Sensable Technologies, Inc. Haptic graphical user interface for adjusting mapped texture
US7149596B2 (en) 2004-01-13 2006-12-12 Sensable Technologies, Inc. Apparatus and methods for modifying a model of an object to enforce compliance with a manufacturing constraint
US7990374B2 (en) 2004-06-29 2011-08-02 Sensable Technologies, Inc. Apparatus and methods for haptic rendering using data in a graphics pipeline

Patent Citations (60)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3846826A (en) * 1971-08-12 1974-11-05 R Mueller Direct television drawing and image manipulating system
US4868761A (en) * 1985-03-13 1989-09-19 Toshiba Kikai Kabushiki Kaisha Method for evaluating free surface and NC system thereof
US4868766A (en) * 1986-04-02 1989-09-19 Oce-Nederland B.V. Method of generating and processing models of two-dimensional or three-dimensional objects in a computer and reproducing the models on a display
US5388199A (en) * 1986-04-25 1995-02-07 Toshiba Kikai Kabushiki Kaisha Interactive graphic input system
US4901253A (en) * 1987-02-23 1990-02-13 Mitutoyo Corporation Coordinate measuring instrument and method of generating pattern data concerning shape of work to be measured
US5304884A (en) * 1988-01-19 1994-04-19 Olympus Optical Company Limited Molded armature
US5321622A (en) * 1988-04-18 1994-06-14 3D Systems, Inc. Boolean layer comparison slice
US5481470A (en) * 1988-04-18 1996-01-02 3D Systems, Inc. Boolean layer comparison slice
US5265197A (en) * 1988-12-23 1993-11-23 Kabushiki Kaisha Toshiba Geometric modeling apparatus
US5027292A (en) * 1989-04-19 1991-06-25 International Business Machines Corporation Multiple depth buffers for graphics and solid modelling
US5273038A (en) * 1990-07-09 1993-12-28 Beavin William C Computer simulation of live organ
US5561748A (en) * 1990-11-26 1996-10-01 International Business Machines Corporation Method and apparatus for creating solid models from two-dimensional drawings on a graphics display
US6115046A (en) * 1990-11-26 2000-09-05 International Business Machines Corporation Method and apparatus for generating three dimensional drawing on a graphics display
US5455902A (en) * 1990-12-21 1995-10-03 Eastman Kodak Company Method and apparatus for performing real-time computer animation
US5487012A (en) * 1990-12-21 1996-01-23 Topholm & Westermann Aps Method of preparing an otoplasty or adaptive earpiece individually matched to the shape of an auditory canal
US5428715A (en) * 1991-03-14 1995-06-27 Mitsubishi Denki Kabushiki Kaisha Three-dimensional figure data generator device
US5497452A (en) * 1991-03-14 1996-03-05 International Business Machines Corporation Method and apparatus for generating a geometric model
US5561747A (en) * 1992-02-03 1996-10-01 Computervision Corporation Boundary evaluation in non-manifold environment
US5515078A (en) * 1992-06-12 1996-05-07 The Computer Museum, Inc. Virtual-reality positional input and display system
US5751289A (en) * 1992-10-01 1998-05-12 University Corporation For Atmospheric Research Virtual reality imaging system with image replay
US5769640A (en) * 1992-12-02 1998-06-23 Cybernet Systems Corporation Method and system for simulating medical procedures including virtual reality and control method and system for use therein
US5629594A (en) * 1992-12-02 1997-05-13 Cybernet Systems Corporation Force feedback system
US6131097A (en) * 1992-12-02 2000-10-10 Immersion Corporation Haptic authoring
US5844392A (en) * 1992-12-02 1998-12-01 Cybernet Systems Corporation Haptic browsing
US5633951A (en) * 1992-12-18 1997-05-27 North America Philips Corporation Registration of volumetric images which are relatively elastically deformed by matching surfaces
US5479593A (en) * 1993-06-21 1995-12-26 Electronic Data Systems Corporation System and method for improved solving of equations employed during parametric geometric modeling
US5701140A (en) * 1993-07-16 1997-12-23 Immersion Human Interface Corp. Method and apparatus for providing a cursor control interface with force feedback
US5576727A (en) * 1993-07-16 1996-11-19 Immersion Human Interface Corporation Electromechanical human-computer interface with force feedback
US5649076A (en) * 1993-08-06 1997-07-15 Toyota Jidosha Kabushiki Kaisha Method of generating or modifying solid model of an object according to cross-sectional shapes and a predetermined relationship and apparatus suitable for practicing the method
US5808616A (en) * 1993-08-25 1998-09-15 Canon Kabushiki Kaisha Shape modeling method and apparatus utilizing ordered parts lists for designating a part to be edited in a view
US5625576A (en) * 1993-10-01 1997-04-29 Massachusetts Institute Of Technology Force reflecting haptic interface
US5623582A (en) * 1994-07-14 1997-04-22 Immersion Human Interface Corporation Computer interface or control input device for laparoscopic surgical instrument and other elongated mechanical objects
US6046726A (en) * 1994-09-07 2000-04-04 U.S. Philips Corporation Virtual workspace with user-programmable tactile feedback
US5766016A (en) * 1994-11-14 1998-06-16 Georgia Tech Research Corporation Surgical simulator and method for simulating surgical procedure
US5721566A (en) * 1995-01-18 1998-02-24 Immersion Human Interface Corp. Method and apparatus for providing damping force feedback
US5704791A (en) * 1995-03-29 1998-01-06 Gillio; Robert G. Virtual surgery system instrument
US5691898A (en) * 1995-09-27 1997-11-25 Immersion Human Interface Corp. Safe and low cost computer peripherals with force feedback for consumer applications
US5815154A (en) * 1995-12-20 1998-09-29 Solidworks Corporation Graphical browser system for displaying and manipulating a computer model
US6111577A (en) * 1996-04-04 2000-08-29 Massachusetts Institute Of Technology Method and apparatus for determining forces to be applied to a user through a haptic interface
US6120171A (en) * 1996-06-14 2000-09-19 Mohammad Salim Shaikh Fully integrated machinable profile based parametric solid modeler
US5999187A (en) * 1996-06-28 1999-12-07 Resolution Technologies, Inc. Fly-through computer aided design method and apparatus
US6308144B1 (en) * 1996-09-26 2001-10-23 Computervision Corporation Method and apparatus for providing three-dimensional model associativity
US6773408B1 (en) * 1997-05-23 2004-08-10 Transurgical, Inc. MRI-guided therapeutic unit and methods
US20060202953A1 (en) * 1997-08-22 2006-09-14 Pryor Timothy R Novel man machine interfaces and applications
US6448977B1 (en) * 1997-11-14 2002-09-10 Immersion Corporation Textures and other spatial sensations for a relative haptic interface device
US6191796B1 (en) * 1998-01-21 2001-02-20 Sensable Technologies, Inc. Method and apparatus for generating and interfacing with rigid and deformable surfaces in a haptic virtual reality environment
US20020130820A1 (en) * 1998-04-20 2002-09-19 Alan Sullivan Multi-planar volumetric display system and method of operation
US7432910B2 (en) * 1998-06-23 2008-10-07 Immersion Corporation Haptic interface device and actuator assembly providing linear haptic sensations
US6417638B1 (en) * 1998-07-17 2002-07-09 Sensable Technologies, Inc. Force reflecting haptic interface
US6792398B1 (en) * 1998-07-17 2004-09-14 Sensable Technologies, Inc. Systems and methods for creating virtual objects in a sketch mode in a haptic virtual reality environment
US6704694B1 (en) * 1998-10-16 2004-03-09 Massachusetts Institute Of Technology Ray based interaction system
US6570564B1 (en) * 1999-09-24 2003-05-27 Sun Microsystems, Inc. Method and apparatus for rapid processing of scene-based programs
US7050955B1 (en) * 1999-10-01 2006-05-23 Immersion Corporation System, method and data structure for simulated interaction with graphical objects
US6822635B2 (en) * 2000-01-19 2004-11-23 Immersion Corporation Haptic interface for laptop computers and other portable devices
US6803928B2 (en) * 2000-06-06 2004-10-12 Fraunhofer-Gesellschaft Zur Foerderung Der Angewandten Forschung E.V. Extended virtual table: an optical extension for table-like projection systems
US6628280B2 (en) * 2001-03-16 2003-09-30 Mitsubishi Electric Research Laboratories, Inc. Method for selectively regenerating an adaptively sampled distance field
US7208671B2 (en) * 2001-10-10 2007-04-24 Immersion Corporation Sound data output and manipulation using haptic feedback
US6703550B2 (en) * 2001-10-10 2004-03-09 Immersion Corporation Sound data output and manipulation using haptic feedback
US6809738B2 (en) * 2001-12-21 2004-10-26 Vrcontext S.A. Performing memory management operations to provide displays of complex virtual environments
US20050243086A1 (en) * 2004-05-03 2005-11-03 Microsoft Corporation Integration of three dimensional scene hierarchy into two dimensional compositing system

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090251421A1 (en) * 2008-04-08 2009-10-08 Sony Ericsson Mobile Communications Ab Method and apparatus for tactile perception of digital images
WO2009126176A1 (en) * 2008-04-08 2009-10-15 Sony Ericsson Mobile Communications Ab Method and apparatus for tactile perception of digital images
US8390623B1 (en) * 2008-04-14 2013-03-05 Google Inc. Proxy based approach for generation of level of detail
US20090282331A1 (en) * 2008-05-08 2009-11-12 Kenichiro Nagasaka Information input/output device, information input/output method and computer program
US8648797B2 (en) * 2008-05-08 2014-02-11 Sony Corporation Information input/output device, information input/output method and computer program
US20100064357A1 (en) * 2008-09-09 2010-03-11 Kerstin Baird Business Processing System Combining Human Workflow, Distributed Events, And Automated Processes
US20100077411A1 (en) * 2008-09-22 2010-03-25 Alyson Ann Comer Routing function calls to specific-function dynamic link libraries in a general-function environment
US9098316B2 (en) * 2008-09-22 2015-08-04 International Business Machines Corporation Routing function calls to specific-function dynamic link libraries in a general-function environment
US20130050062A1 (en) * 2010-05-07 2013-02-28 Gwangju Institute Of Science And Technology Apparatus and method for implementing haptic-based networked virtual environment which supports high-resolution tiled display
US9041621B2 (en) * 2010-05-07 2015-05-26 Gwangju Institute Of Science And Technology Apparatus and method for implementing haptic-based networked virtual environment which supports high-resolution tiled display
WO2012037157A3 (en) * 2010-09-13 2012-05-24 Alt Software (Us) Llc System and method for displaying data having spatial coordinates
WO2012037157A2 (en) * 2010-09-13 2012-03-22 Alt Software (Us) Llc System and method for displaying data having spatial coordinates
US20120075288A1 (en) * 2010-09-24 2012-03-29 Samsung Electronics Co., Ltd. Apparatus and method for back-face culling using frame coherence
US8849015B2 (en) 2010-10-12 2014-09-30 3D Systems, Inc. System and apparatus for haptically enabled three-dimensional scanning
US20140019940A1 (en) * 2012-07-16 2014-01-16 Microsoft Corporation Tool-Based Testing For Composited Systems
US9069905B2 (en) * 2012-07-16 2015-06-30 Microsoft Technology Licensing, Llc Tool-based testing for composited systems
US9667870B2 (en) 2013-01-07 2017-05-30 Samsung Electronics Co., Ltd Method for controlling camera operation based on haptic function and terminal supporting the same

Also Published As

Publication number Publication date
WO2006004894A2 (en) 2006-01-12
US9030411B2 (en) 2015-05-12
US7990374B2 (en) 2011-08-02
WO2006004894A3 (en) 2006-05-18
US20140333625A1 (en) 2014-11-13
US20060109266A1 (en) 2006-05-25

Similar Documents

Publication Publication Date Title
US9030411B2 (en) Apparatus and methods for haptic rendering using a haptic camera view
US10417812B2 (en) Systems and methods for data visualization using three-dimensional displays
EP3368999B1 (en) Foveated geometry tessellation
US7170510B2 (en) Method and apparatus for indicating a usage context of a computational resource through visual effects
Knott CInDeR: collision and interference detection in real time using graphics hardware
EP2939208B1 (en) Sprite graphics rendering system
EP3008701B1 (en) Using compute shaders as front end for vertex shaders
US8154544B1 (en) User specified contact deformations for computer graphics
US7292242B1 (en) Clipping with addition of vertices to existing primitives
US20100289804A1 (en) System, mechanism, and apparatus for a customizable and extensible distributed rendering api
WO2006122212A2 (en) Statistical rendering acceleration
JP2012190428A (en) Stereoscopic image visual effect processing method
CN107784622A (en) Graphic system and graphics processor
US6831642B2 (en) Method and system for forming an object proxy
US20040012602A1 (en) System and method for image-based rendering with object proxies
WO1998043208A2 (en) Method and apparatus for graphics processing
Kessler Virtual environment models
JP2001273523A (en) Device and method for reducing three-dimensional data
Fuhrmann et al. Distributed Software-Based Volume Visualization in a Virtual Environment.
Juarez-Comboni et al. A multi-pass multi-stage multi-GPU collision detection algorithm
Sourin Let’s Draw
Yuan et al. P-buffer: a hidden-line algorithm in image-space
KR20190129602A (en) Mobile Viewing System
Dunn et al. 3-D Graphics: Real-Time Graphics Pipeline
Sheppard Real-time rendering of fur

Legal Events

Date Code Title Description
AS Assignment

Owner name: SENSABLE TECHNOLOGIES, INC., MASSACHUSETTS

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:ITKOWITZ, BRANDON D.;SHIH, LOREN C.;HANDLEY, JOSHUA E.;AND OTHERS;SIGNING DATES FROM 20100204 TO 20100209;REEL/FRAME:027758/0936

AS Assignment

Owner name: GEOMAGIC, INC., NORTH CAROLINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:SENSABLE TECHNOLOGIES, INC.;REEL/FRAME:029020/0254

Effective date: 20120411

AS Assignment

Owner name: 3D SYSTEMS, INC., SOUTH CAROLINA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GEOMAGIC, INC.;REEL/FRAME:029971/0482

Effective date: 20130308

STCB Information on status: application discontinuation

Free format text: ABANDONED -- AFTER EXAMINER'S ANSWER OR BOARD OF APPEALS DECISION