WO1994016406A1 - Improved panoramic image based virtual reality/telepresence audio-visual system and method - Google Patents

Improved panoramic image based virtual reality/telepresence audio-visual system and method

Info

Publication number
WO1994016406A1
WO1994016406A1 (PCT/US1994/000289)
Authority
WO
WIPO (PCT)
Prior art keywords
audio
display
computer
viewer
visual
Prior art date
Application number
PCT/US1994/000289
Other languages
French (fr)
Inventor
Kurtis J. Ritchey
Original Assignee
Ritchey Kurtis J
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ritchey Kurtis J filed Critical Ritchey Kurtis J
Publication of WO1994016406A1 publication Critical patent/WO1994016406A1/en

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/194Transmission of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/243Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/275Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals
    • H04N13/279Image signal generators from 3D object models, e.g. computer-generated stereoscopic image signals the virtual viewpoint locations being selected by the viewers or determined by tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/296Synchronisation thereof; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/344Displays for viewing with the aid of special glasses or head-mounted displays [HMD] with head-mounted left-right displays
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/10Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals
    • A63F2300/1087Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera
    • A63F2300/1093Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game characterized by input arrangements for converting player-generated signals into game device control signals comprising photodetecting means, e.g. a camera using visible light
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/66Methods for processing data by generating or executing the game program for rendering three dimensional images
    • A63F2300/6661Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera
    • A63F2300/6676Methods for processing data by generating or executing the game program for rendering three dimensional images for changing the position of the virtual camera by dedicated player input
    • AHUMAN NECESSITIES
    • A63SPORTS; GAMES; AMUSEMENTS
    • A63FCARD, BOARD, OR ROULETTE GAMES; INDOOR GAMES USING SMALL MOVING PLAYING BODIES; VIDEO GAMES; GAMES NOT OTHERWISE PROVIDED FOR
    • A63F2300/00Features of games using an electronically generated display having two or more dimensions, e.g. on a television screen, showing representations related to the game
    • A63F2300/60Methods for processing data by generating or executing the game program
    • A63F2300/69Involving elements of the real world in the game world, e.g. measurement in live races, real video
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • G06T2207/10012Stereo images
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/15Processing image signals for colour aspects of image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/156Mixing image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals
    • H04N13/161Encoding, multiplexing or demultiplexing different image signal components
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/189Recording image signals; Reproducing recorded image signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/257Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/286Image signal generators having separate monoscopic and stereoscopic modes
    • H04N13/289Switching between monoscopic and stereoscopic modes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/302Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/305Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/324Colour aspects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/332Displays for viewing with the aid of special glasses or head-mounted displays [HMD]
    • H04N13/337Displays for viewing with the aid of special glasses or head-mounted displays [HMD] using polarisation multiplexing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/361Reproducing mixed stereoscopic images; Reproducing mixed monoscopic and stereoscopic images, e.g. a stereoscopic image overlay window on a monoscopic image background
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/363Image reproducers using image projection screens
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/30Image reproducers
    • H04N13/366Image reproducers using viewer tracking
    • H04N13/383Image reproducers using viewer tracking for tracking with gaze detection, i.e. detecting the lines of sight of the viewer's eyes
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N2013/0074Stereoscopic image analysis
    • H04N2013/0088Synthesising a monoscopic image signal from stereoscopic images, e.g. synthesising a panoramic or high resolution monoscopic image

Definitions

  • This invention relates generally to panoramic display methods and more particularly to the sensor fusion of data from the panoramic arrangement of three-dimensional imaging sensors and surface contour sensors to form virtual objects and scenes, the processing of the virtual objects and scenes based on a viewer operating interactive computer input devices to effect the manipulation of the virtual objects and scenes defined in the computer, and the display of the effected virtual objects and scenes on a panoramic display unit to the extent that the viewer perceives that the virtual objects and scenes completely surround the viewer.
  • multi-lens camera system with spherical field-of-view (FOV) coverage: as shown in FIG. 2, objective lenses of the '794 camera system face outward with adjacent or overlapping FOV coverage.
  • the imagery from the camera is surface mapped onto the interior of a three-dimensional (3-D) shape defined in a special effects processor of a computer.
  • the input source is at least one computer graphics system that generates three-dimensional graphics of spherical FOV coverage.
  • the viewer operates interactive input devices associated with the computer to manipulate the texture mapped virtual images.
  • the virtual environment is instantaneously affected before the viewer and displayed on either a head-mounted display assembly or on contiguous display units positioned beneath, to the sides, and above the viewer.
  • Imagery from the camera is surface mapped onto the surface of a three-dimensional shape defined in a computer.
  • the shape is input by a panoramic 3-D digitizer device.
  • Audio data is input by a panoramic 3-D audio system.
  • Audio attributes are assigned to subjects in the model. Shape, imagery, and audio sensors may be combined to form one sensor array. Sensors are positioned adjacent to one another to facilitate adjacent or overlapping coverage of a subject. Preferably corresponding panoramic shape, imagery, and audio signatures of a subject(s) are collected simultaneously. In this manner action of a 3-D subject is recorded from substantially all aspects at a single moment in time.
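To make the idea of simultaneously collected shape, imagery, and audio signatures more concrete, here is a minimal illustrative sketch (Python) of how corresponding samples might be grouped under a single capture time. The class and field names are hypothetical and are not taken from the patent.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

# Hypothetical data structures; the patent does not prescribe a storage format.

@dataclass
class SensorSample:
    """One simultaneous capture from a single array 36 (camera, radar, microphone)."""
    timestamp: float                          # capture time in seconds
    image: List[List[Tuple[int, int, int]]]   # 2-D grid of RGB pixels
    ranges: List[List[float]]                 # 2-D grid of range (shape) samples, metres
    audio: List[float]                        # audio samples for this frame interval

@dataclass
class PanoramicCapture:
    """Samples from all arrays 36a-36f covering the subject at one moment."""
    timestamp: float
    samples: List[SensorSample] = field(default_factory=list)

    def add(self, sample: SensorSample) -> None:
        # Shape, imagery, and audio signatures should correspond to a single
        # instant, so every sample must carry (nearly) the same timestamp.
        assert abs(sample.timestamp - self.timestamp) < 1e-3
        self.samples.append(sample)
```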
  • the participant operates interactive input devices associated with the computer to manipulate the virtual object.
  • the participant observes the model on a head mounted display system.
  • the participant is surrounded by contiguous audio-visual display units. In the latter example, each display unit displays a segment of the model.
  • an objective of this invention is to provide a positionable multi-lens camera system for recording contiguous image segments of an object, being, adjacent surrounding scene, or any combination of these types of subjects; a signal
  • processing means comprising first computerized fusion
  • portions of a being, object, or scene comprising a panoramic computer generated model where various 3-D digitizer systems may be incorporated for entering 3-D shape and contour data into an image processing computer; a third processing means to manipulate the geometry of subjects comprising the virtual model; a fourth processing means for sampling out given fields of regard of the virtual model for presentation and
  • signal processing means includes an expert system for determining the actions of subjects of the computer generated model; where the signal processing means includes image segment circuit means for distributing, processing, and display of the model; where the system includes a 3-D graphics computer system for the generation, alteration, and display of images; and a system and method for image based recording of 3-D data which may be processed for display on various 3-D display systems, including head mounted display systems, and room display systems with stereographic, autostereoscopic, or holographic display systems.
  • interactive input devices operable by a viewer to cause the generation, alteration, and display of 3-D images on said display assembly means; to provide associated 3-D audio systems; to provide alternative viewer interactive and feedback devices to operate the interactive input devices and associated
  • processing means such that the resultant virtual environment is simultaneously effected before the viewer's eyes; to provide an associated telecommunications system; and to provide a system for incorporation with a host vehicle, teleoperated vehicle, or robot.
  • FIG. 1 is a flowchart to which reference will be made in generally explaining the overall operation of the recording, processing, and audio-visual system 1 according to the present invention.
  • FIG. 2 is a perspective view of a cameraman carrying a panoramic camcorder system of spherical coverage described in prior art.
  • FIG. 3 is a greatly enlarged fragmentary sectional view of one of the camera arrangements for optically recording image segments representing sides of a three-dimensional subject into a single frame according to the present invention.
  • FIG. 4 is a perspective view of a sensor array for recording acoustical, visual, and shape data for input according to the present invention.
  • FIG. 5 is a side sectional view of the sensor array shown in FIG. 4.
  • FIG. 6 is a diagrammatic representation of an inward looking three-dimensional input source incorporating the sensor array shown in FIGS. 4 and 5.
  • FIG. 7 is a diagrammatic representation of an inward and outward looking panoramic three-dimensional input source assembly incorporating the sensor array shown in FIGS. 6 and 7.
  • FIGS. 8A-8D are diagrammatic representations of video frames of three-dimensional coverage of beings and objects to be modeled in 3-D in the present invention.
  • FIGS. 9A-9B are diagrammatic representations of video frames of three dimensional coverage of beings, objects, and
  • FIG. 10 is a diagrammatic representation of an HDTV frame which includes both foreground and background imagery
  • FIG. 11 is a fragmentary view onto the top of the virtual world model in which recorded three-dimensional beings, objects, and/or scenes are incorporated according to the present invention.
  • FIG. 12 is a fragmentary view onto the side of the virtual model shown in FIG. 11.
  • FIG. 13 is a diagrammatic illustration showing how imagery is texture mapped onto a three-dimensional wireframe model to form a three-dimensional virtual model generated and processed for presentation by audio and video computer signal processing means of system 1.
  • FIG. 14 is a perspective, partially diagrammatic view showing an image based virtual model generated for audio-visual presentation by the visual signal processing means of system 1.
  • FIG. 15 is a block diagram of an image formatting system for recording, processing, and display of an image of three- dimensional coverage which embodies the present invention.
  • FIG. 16 is a block diagram of a second embodiment of system.
  • FIG. 17 is a block diagram of a third embodiment of the system.
  • FIG. 18 is a block diagram illustrating the incorporation of a three-dimensional display system according to the present invention.
  • FIG. 19 is a perspective, partially diagrammatic view
  • FIG. 20 is a block diagram of an embodiment of the present invention including a telecommunications system.
  • FIG. 21 is a perspective, partially diagrammatic view illustrating the telecommunications embodiment according to FIG. 20.
  • FIG. 22 is block diagram illustrating an embodiment of the present invention wherein a host vehicle control system with a panoramic sensor, processing, and display system provides telepresence to a viewer/operator for control of the host vehicle.
  • FIG. 23 is a sectional view of a host vehicle incorporating the present invention shown in FIG. 22.
  • FIG. 24 is a block diagram illustrating an embodiment of the present invention wherein a remote control system for a remotely piloted vehicle with a panoramic sensor system transmits a three-dimensional panoramic scene to a control station for processing and spherical coverage viewing in order to assist the controller in piloting the teleoperated vehicle.
  • FIG. 25 is a perspective, partially diagrammatic view illustrating the remote control three-dimensional viewing system described in FIG. 24.
  • HMD Head-mounted display
  • First processing means; fusion processor to wed shape and image segments.
  • Second processing means; fusion of model segments.
  • Fourth processing means; image processing for display and distribution.
  • Transmitter for transmitting an over-the-air stereo audio signal.
  • Display unit generally; may include audio system.
  • Image control unit (including chassis, processors, etc.); may include audio means.
  • Non-contact position and orientation sensor system, i.e. radar or LADAR; may include camera system.
  • Processing means for holographic TV
  • 221 Processing and distribution system for image segment circuit means
  • Hemispherical scan of LADAR system may include
  • Near field of view of LADAR system may include
  • VRT control station for remotely piloted vehicle
  • Video compression and data system including
  • Video decompression and data system including
  • the reference 1 generally designates a system and method of rendering and interacting with a
  • the system 1 generally includes a panoramic input source means 2, panoramic signal processing means 3, and panoramic audio-visual presentation assembly means 4 connected generally by suitable electrical interface means 5.
  • Electrical interface means 5 including the
  • Input means 2 generally consists of a panoramic 3-D camera system 6, a panoramic 3-D digitizing system 7, and a panoramic 3-D audio recording system 8.
  • Input means 6, 7 , and 8 include a plurality of respective sensors that are positioned to record geographic and geometric subjects.
  • a subject 13 may comprise three-dimensional beings or things in the real world.
  • the world model 14 comprises a model 14a that includes shape and imagery, and an audio model 14b that includes accoustical recordings.
  • the audio model corresponds to the shape and imagery model.
  • all sides of the subject are recorded simultaneously by the input means 6, 7, and 8.
  • Signal processing means 3 preferably includes a first computer processing means 15 for sensor fusion of the
  • the first processing means operates on the signals 5a and 5b to combine shape and surface data of corresponding segments of the 3-D subject.
  • the resulting 3-D model segments 26a are portions of the computer generated world model 14a.
  • Signal processing means 3 preferably also includes a second computer processing means 16 for fusion of imaging and shape data segments 26a derived by first apparatus 15 to form a
  • Signal processing means 3 also includes a third computer processing means 17 for manipulating the computer generated model 14a.
  • the third processing means is typically operated to perform interactive 3-D visual simulation and teleoperated applications.
  • Signal processing means 3 also includes a fourth computer processing means 18 to sample out and transmit image scene 71 segments of the world model 14a to each respective display unit of the audio-visual assembly means 4.
  • Means 3 includes processing means for interface with input sources 2,
  • Signal processing means 15, 16, 17, 18, and 23 include a central processing unit, terminal bus, communication ports, memory, and the like typical to a conventional computer(s). Operating system software, board level software, processing data, generated images and the like are stored in mass storage devices 25 which may include disk drives, optical disk drives, and so forth. All signal processing means 15, 16, 17, 18, and 23 may be incorporated into a single computer 9, or a
  • means 3 may include a computer graphics system 19, a telecommunications system 20, a vehicle control system 21, or artificial intelligence system 22 to perform special processing functions. Special
  • processing systems 19, 20, 21, and 22 may be integral or networked to computer 9.
  • Audio sensors 30 are faced inward about a subject or outward to record signals representing audio segments 26b of a
  • the image, shape, and audio sensors 28, 29, and 30 respectively, are positioned adjacent to one another and record a continuous corresponding subject 13.
  • the audio processing system 23 receives recorded audio signals 5c from the panoramic 3-D audio input system 8.
  • the audio signals 5c as assigned to modeled subject 14a comprise an accoustical world model 14b.
  • the audio model 14b is continuously updated by the computer 23 based on data received from the interactive input system 10.
  • Computer 9 communicates changes to the world model 14 via digital data link
  • Audio means 23 includes processing means and software means for the generation of 3-D audio output in response to changes and actions of subjects modeled in the computer generated model 14a.
  • the output audio signals are transmitted to speakers positioned about the participant by way of the panoramic audio-visual assembly means 4.
  • the preferred embodiment of the system 1 may generally comprise two alternative panoramic audio-visual assembly means 4: A headmounted display (HMD) assembly 11, or a large display assembly 12.
  • the large display assembly 12 may incorporate conventional 31, stereographic 32,
  • Specific processing means 18 compatible with a given display unit's 31, 32, 33, or 34 format operate on the virtual model 14a.
  • the processing means 18 then outputs a signal
  • Display units 31, 32, 33, or 34 are placed contiguous to one another in a communicating
  • the model 14 presented to the participant may be derived from prerecorded data stored in a mass storage device 25.
  • live feeds from input sources 2 at a remote location are processed in near real time and the participant can interact with the remote location by using teleoperated devices. In these manners the viewer is immersed in a highly interactive and realistic computer simulation.
  • a panoramic sensor array comprising a plurality of shape, visual, and aural sensors are positioned to record a three-dimensional subject in a substantially continuous panoramic fashion.
  • Each sensor 27 outputs a corresponding signal specific to that sensor's field of coverage.
  • Signals representing visual and shape data are transmitted from input sources 6 and 7 to the signal processing means 3.
  • a first computer processing means 15 fuses the shape and visual signals to form model segments 26a.
  • the pool of model segments are then transmitted to a second processing means 16 that fuses or matches adjacent and corresponding model segments to one another.
  • the matching of intersections of the pool of model segments yields a panoramic three-dimensional model 14a.
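The two-stage fusion just described (first processing means 15 wedding shape and image data into model segments, then second processing means 16 matching adjacent segments into one panoramic model) might be organized roughly as below. This is a simplified sketch under assumed data representations, not the patent's algorithm; a real system would project range samples through a calibrated sensor model.

```python
from typing import Dict, List, Tuple

Vertex = Tuple[float, float, float]
Color = Tuple[int, int, int]

def fuse_segment(ranges: List[List[float]],
                 image: List[List[Color]]) -> Dict[Vertex, Color]:
    """First processing means 15 (sketch): pair each range sample with the colour
    recorded by the co-located image sensor, yielding coloured 3-D points."""
    segment: Dict[Vertex, Color] = {}
    for row, (range_row, pixel_row) in enumerate(zip(ranges, image)):
        for col, (distance, rgb) in enumerate(zip(range_row, pixel_row)):
            # Grid indices plus range stand in for a projected 3-D position here.
            segment[(float(col), float(row), distance)] = rgb
    return segment

def stitch_segments(segments: List[Dict[Vertex, Color]]) -> Dict[Vertex, Color]:
    """Second processing means 16 (sketch): merge overlapping segments into one
    panoramic model; later segments only add points the earlier ones lack."""
    model: Dict[Vertex, Color] = {}
    for segment in segments:
        for vertex, rgb in segment.items():
            model.setdefault(vertex, rgb)
    return model
```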
  • the model 14a is rendered such that three-dimensional subjects in the foreground are of high resolution and three-dimensional subjects in the background are of less resolution.
  • the background scene lies approximately ten feet beyond the boundary of the furthest distance the participant would venture into the virtual model. This is because beyond ten feet perspective is not significantly perceptible to the average human. And beyond this viewing distance the
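A simple way to express the foreground/background resolution rule described above is a distance-based level-of-detail choice; the texture resolutions and the ten-foot threshold below are illustrative placeholders.

```python
def choose_resolution(distance_ft: float, near_res: int = 1024,
                      far_res: int = 256, threshold_ft: float = 10.0) -> int:
    """Full texture resolution inside the participant's reachable volume,
    reduced resolution for subjects beyond roughly ten feet, where perspective
    cues are no longer significant to the average viewer."""
    return near_res if distance_ft <= threshold_ft else far_res
```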
  • a third processing means 17 receives the fused model of panoramic coverage. The third means manipulates the geometry of the model 14a based on viewer interaction.
  • a fourth processing means 18 samples out portions of the model and transmits signals representing scenes 71 of a given field of view to predetermined display units of the display assembly 11 or 12. The dimensions and detail of the virtual model 14 may be increased by moving the sensors to different locations
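The fourth processing means could, in principle, assign each display unit of assembly 11 or 12 a fixed viewing direction and sample the corresponding window of the world model 14a. The direction table and function names in this sketch are assumptions for illustration only.

```python
import math
from typing import Dict, Tuple

# Hypothetical mapping of display units to (yaw, pitch) view directions, degrees.
DISPLAY_DIRECTIONS: Dict[str, Tuple[float, float]] = {
    "front": (0.0, 0.0), "right": (90.0, 0.0), "back": (180.0, 0.0),
    "left": (270.0, 0.0), "top": (0.0, 90.0), "bottom": (0.0, -90.0),
}

def view_vector(yaw_deg: float, pitch_deg: float) -> Tuple[float, float, float]:
    """Unit vector pointing along a display unit's centre of view."""
    yaw, pitch = math.radians(yaw_deg), math.radians(pitch_deg)
    return (math.cos(pitch) * math.cos(yaw),
            math.cos(pitch) * math.sin(yaw),
            math.sin(pitch))

def scene_direction(unit: str) -> Tuple[float, float, float]:
    """Direction from which the renderer samples the world model 14a for one unit."""
    yaw, pitch = DISPLAY_DIRECTIONS[unit]
    return view_vector(yaw, pitch)
```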
  • audio sensors 30 transmit audio signals to an audio processing system 23.
  • processing system is operated to assign audio signals to visual subjects positioned and comprising the panoramic computer generated model.
  • An interactive input system 10 monitors the viewers head position.
  • Position data is transmitted to the visual and audio simulation processing system 17 and 23 respectively.
  • the position and orientation data from system 10 is processed by the visual and audio simulation processing means to update the model 14 after each of the participant's actions. Updating the model typically involves the participant moving a virtual object in the model with his hand, or changing the viewpoint of the displayed scene based upon a change in the participant's head position. Positional changes of objects, subjects, and scenes are continuously updated and stored in the memory of the computer 9.
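A single update cycle of the kind described, in which head and hand data from the interactive input system 10 move a grabbed virtual object and re-aim the displayed viewpoint, might look like this hypothetical sketch.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Pose:
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    yaw: float = 0.0
    pitch: float = 0.0
    roll: float = 0.0

@dataclass
class VirtualObject:
    name: str
    pose: Pose

@dataclass
class Scene:
    viewpoint: Pose

def update_model(scene: Scene, head: Pose, hand: Pose,
                 grabbed: Optional[VirtualObject]) -> None:
    """One simulation tick: a grabbed virtual object follows the participant's
    hand, and the rendered viewpoint follows the participant's head."""
    if grabbed is not None:
        grabbed.pose = hand
    scene.viewpoint = head

# Example tick with data that would normally come from the tracking sensors:
scene = Scene(viewpoint=Pose())
cup = VirtualObject("cup", Pose(1.0, 0.0, 1.0))
update_model(scene, head=Pose(yaw=15.0), hand=Pose(0.5, 0.2, 1.1), grabbed=cup)
```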
  • Imagery and audio signals are transmitted from the visual 15-18 and audio 23 processing means to the audio-visual assembly means 11 or 12.
  • the processing means has appropriate output processors, conductors, and interface connections to transmit the visual and audio signals to the visual display units 31, 32, 33, or 34 and audio speakers 35 of the display assemblies 11 or 12.
  • the visual model 14a and aural model 14b are updated and displayed instantaneously before the viewer's eyes.
  • input means comprises a 3-D camera system 6, 3-D digitizing system 7, and 3-D audio system 8.
  • FIG. 2 illustrates a panoramic camera system 6 of prior art in which a plurality of image sensors 28a-28f and audio sensors (not shown) are faced outward about a point or area to record a contiguous surrounding visual subject scene 13c.
  • FIG. 3 illustrates a panoramic camera system in which image sensors 28a-28f are positionable and may be faced inward to record representations of each side of a subject 13.
  • FIGS. 4 and 5 illustrate a sensor array 36 including a visual system comprising a small conventional camera 37, a 3-D digitizing system comprising a small conventional radar 38, and an acoustical system including a microphone 39.
  • the microphone, radar, and camera of each array have overlapping field-of-regard coverage 41.
  • the overlapping coverage enables each of the array's sensors to record an acoustical, shape, and image signature of a given side of a subject 13.
  • FIG. 7 illustrates that sensor arrays may be faced both inward and outward to record a subject.
  • Arrays are positioned adjacent to one another to form a panoramic array assembly 44.
  • Sensors of the adjacent arrays 36a-36f of the assembly are positioned to have adjacent field-of-regard coverage 42.
  • the array assembly has a substantially panoramic 3-D spherical field-of-regard coverage about a point.
  • a plurality of array assemblies 44a-44f may be arranged in the real world to simultaneously record a subject 13 environment from various points-of-regard. In this manner, virtually all sides of a subject surrounded by the array assemblies are recorded and background scenes surrounding the subject are also
  • a single assembly 44 may be moved through space in the real world to record a subject 13 environment from various points of regard at different times.
  • the array 36 or array assembly 44 may be constructed in a portable fashion such that the array or array assembly are carried through a real world environment by a living being or vehicle.
  • Each array of the assembly transmits its respective acoustic, shape, and imagery signatures to the processing means 3.
  • Processing means operates on the
  • Array 36 and array assembly 44 may be fastened together and supported by conventional means such as screws 45 and support armature 46.
  • sensors may be distributed over a vehicle such that the inner or outer skin of the vehicle becomes a housing for the sensors. The sensors can be placed on remote or host, piloted or unpiloted vehicles.
  • a panoramic 3-D digitizing system 7 comprises one type of input source 2 and is operated to input 3-D data representing a subject 13.
  • the system 7 is operated to record the shape of a 3-D subject.
  • System 7 may comprise a 3-D light pen, optical scanner, image recognition system, sonar, ultrasonic, laser scanner, radar, laser radar (LADAR) system or systems.
  • mathematical formula defining the shape of the subject may be entered by an operator via a keyboard.
  • the 3-D data is transmitted from the system 7 to a computer processing system 9 where it is operated upon.
  • the resulting 3-D data representing the subject is called a wireframe 55.
  • the wireframe is a 3-D line and point computer generated rendering of a subject. The intersection of the lines form polygons that define the surfaces of the subject.
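As an illustration of the wireframe 55 just described, a minimal representation is a vertex list plus polygons that index into it; the triangular facets below are an assumption, since the text does not restrict the polygon type.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class Wireframe:
    """Line-and-point rendering of a subject: vertices plus polygons whose
    corners index into the vertex list; the polygons define the surfaces."""
    vertices: List[Tuple[float, float, float]] = field(default_factory=list)
    polygons: List[Tuple[int, int, int]] = field(default_factory=list)

    def add_vertex(self, x: float, y: float, z: float) -> int:
        self.vertices.append((x, y, z))
        return len(self.vertices) - 1

# A single triangular facet digitized from a subject's surface:
mesh = Wireframe()
a = mesh.add_vertex(0.0, 0.0, 0.0)
b = mesh.add_vertex(1.0, 0.0, 0.0)
c = mesh.add_vertex(0.0, 1.0, 0.0)
mesh.polygons.append((a, b, c))
```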
  • a 3-D shape input system including a stylus and model table arrangement of the type described in U.S. Pat. 4,514,818 by Walker available from Quantel Limited, UK, or the 3SPACE TM Digitizer available from Polhemus of Colchester, VT may provide the shape data in system 1.
  • the Cyberware digitizer incorporates sensing and illumination elements to record a three-dimensional subject's shape and color. Seconds later, a graphics workstation displays the object as a detailed, full color,
  • a radar and camera system described in U.S. Pat. 5,005,147 by Krishen et al. may be incorporated to provide shape and imagery data in system 1.
  • a laser-radar (LADAR) system including a video camera, available from Autonomous Technologies Corp. of Orlando, FL, may be incorporated to provide shape and imagery data in the system 1.
  • a 3-D camera system 6 comprises a plurality of objective lenses typically faced inward about a being or object, and outward to record a scene.
  • the objective lenses 48a-48f of the camera have overlapping or adjacent field of view coverage.
  • Any conventional TV camera or non-standard camera may be utilized in the present system 1 that is compatible with signal processing means 3.
  • the electrical section of the camera is structured to convert the visual images received by the image processor into electrical video signals 5a such that the information is in a format that is compatible with standard video processing equipment.
  • Any conventional or unconventional video camera 37 may be adapted to accept the images from the disclosed optical systems in FIGS. 1 through 7.
  • the image processor of the camera is structured to convert the visual images received into
  • the processed camera signals are typically standard synchronized coded signals utilized in the United States for video transmission.
  • the signal processor 3 may be modified to convert each received electrical video signal 5a from the image processor means into a standard or non-standard synchronized coded signal of any given country or format for transmission and processing as desired, such as NTSC, PAL, SECAM, IDTV, HDTV, or the like.
  • images may be combined by either electronic means or by optical means.
  • image chrominance, luminance, hue, and intensity may be controlled electronically
  • the plurality of images are compressed into a single frame by processing means 3.
  • the images are optically integrated into a single frame.
  • array 36 or array assembly 44 of the system 1 .
  • FIGS. 2-7 Although simple optical systems are depicted in FIGS. 2-7, it should be clear to one skilled in the art that more complex optical arrangements can be employed.
  • Other optical elements and electro-optical components that may be included are automatic shutters, automatic focusing devices, optical filters and coatings, image intensifiers, correcting and inverting lenses, lens adapters, sensitive recording surfaces and media of various types and formats, wavelengths, and resolutions, and so forth. These various optical arrangements are given to the designer to accomplish a given task.
  • Standard video compression devices can be incorporated into the camera to compress the signal to aid in the storage, transmission, and processing of each image.
  • Image sensors associated with moving target indicators (MTI), pattern recognition systems, and so forth may be integrated with the optical systems of system 1 .
  • Conventional over-the-air video transmitters can be incorporated to transmit each image to a remote video receiver for processing and display.
  • Fig. 2 illustrates a prior art camera used for recording a scene of spherical field of view coverage.
  • FIG. 3 illustrates a positionable 3-D camera system for recording all sides of subjects to be modeled in the computer generated virtual environment. Images are transmitted from the objective lenses, through fiber optic image conduits, in focus to a light receiving surface 49 of a camera 37. As shown in FIGS. 8a through 20, in this way all sides of a subject or subjects are recorded in a single frame 61.
  • optical elements as shown in FIG. 3, are arranged to record images of various sides of the subject into a single frame. Optical elements such as mirrors, prisms, or coherent fiber-optic image
conduits transmit images from one or more objective lenses to the light sensitive recording surface of the camera or cameras.
  • the optical fiber bundles are of a type that allows them to be twisted in route such that the desired image orientation at the exit end of the bundles is facilitated.
  • Image bundles have a rectangular cross section to facilitate the format of the image with the camera's format. Cross sectional resolution of 60 to 100 fibers per millimeter and low light loss
  • FIG. 9b is a frame in which a
  • FIG. 10 shows that imagery of all beings, objects, and background scenery comprising model 14a may be combined in a single frame.
  • a high resolution sensor, such as an HDTV or IDTV recording system, is
  • High resolution recording allows for images to be enlarged for later viewing and still retain acceptable detail.
  • the panoramic camera system may comprise a plurality of cameras. When a plurality of cameras are incorporated, images are recorded by
  • a video multiplexer/demultiplexer system of a type generally incorporated into the present system 1 is available from Colorado Video Inc. as the Video Multiplexer 496A/B and the Video Demultiplexer 497A/B. While digital compression, spatial compression, or spatial multiplexing the images is not required, it will be appreciated by those skilled in the art that compressing the image in one of these manners greatly assists in the transmission, processing, and storage of the vast amount of imagery necessary to build a panoramic model.
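Spatial multiplexing of several camera outputs into a single frame, as mentioned above, can be pictured as tiling the sub-images into one larger raster. The layout below is an illustrative assumption and does not describe the Colorado Video multiplexer's internal behaviour.

```python
from typing import List

Frame = List[List[int]]  # simple grayscale raster: rows of pixel values

def multiplex(frames: List[Frame], columns: int = 3) -> Frame:
    """Tile equally sized sub-frames left-to-right, top-to-bottom into one frame."""
    h, w = len(frames[0]), len(frames[0][0])
    rows = -(-len(frames) // columns)              # ceiling division
    out: Frame = [[0] * (w * columns) for _ in range(h * rows)]
    for index, sub in enumerate(frames):
        r0, c0 = (index // columns) * h, (index % columns) * w
        for r in range(h):
            out[r0 + r][c0:c0 + w] = sub[r]
    return out

# Six camera views (e.g. from lenses 48a-48f) packed into a 3 x 2 mosaic:
views = [[[i] * 4 for _ in range(3)] for i in range(6)]  # tiny 4x3 test images
mosaic = multiplex(views)
```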
  • the 3-D audio input system 8 preferably is in communicating relationship with a high speed digital audio signal computer processing system 23 that delivers high quality three
  • microphones 39a-39f (not shown) are distributed to face outward from the lens housing to record a spherical
  • microphones 39a-39f are faced inward about a subject to record a contiguous acoustical field of regard coverage emanating from the subject 13. As illustrated in FIGS. 4 and 5, microphones may be integrated with the array 36.
  • the microphone 39 of each array preferably has audio
  • FIGS. 6 and 7 show that arrays may be placed beside one another to achieve continuous
  • Audio signals 5c from audio input sources 6c-6f are transmitted to the computer 23.
  • the computer 23 may consist of a conventional personal computer or computer workstation 9 that includes printed circuit boards designed to sample the input audio signals 5c.
  • Types of workstations and associated printed circuit boards generally utilized for 3-D audio input, processing, and output are manufactured by Division Inc. of Redwood City, CA as the Accoustetron complete studio quality audio workstation and the Convolvotron and Beachtron audio processor printed circuit boards.
  • the audio system 23 uses the virtual 3-D audio client/server protocol standard (VAP) as an operating system software interface with audio input devices such as compact disc or magnetic tape machines, and for communicating with means 17 of computer 9.
  • the boards occupy ISA-compatible personal computer slots.
  • acoustical data from audio input sources 8a-8f are typically stored on digital or analog sources such as compact disc or magnetic tape machines, or may be digitized and stored in computer memory 25b and referenced to beings, objects, and scenes that form the visual model 14a comprising the computer generated environment 14.
  • the computer 23 samples and affects recorded audio data comprising the audio model 14b based on control data including audio source position and orientation, participant position and orientation, environment reflection and transmission attributes, and audio source and participant velocities in the virtual environment 14.
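The control data listed above (source and participant positions, orientations, velocities, and environment attributes) drives the spatialization of each audio source. The sketch below uses only a crude inverse-distance gain and a cosine pan law; it is a stand-in for illustration and not the HRTF convolution performed by hardware such as the Convolvotron.

```python
import math
from typing import Tuple

def spatialize(source_pos: Tuple[float, float],
               listener_pos: Tuple[float, float],
               listener_yaw_deg: float,
               sample: float) -> Tuple[float, float]:
    """Return (left, right) output levels for one mono sample using a simple
    inverse-distance gain and a pan derived from the source bearing relative
    to the listener's facing direction."""
    dx = source_pos[0] - listener_pos[0]
    dy = source_pos[1] - listener_pos[1]
    distance = max(math.hypot(dx, dy), 0.1)       # avoid division by zero
    gain = 1.0 / distance                          # crude distance attenuation
    bearing = math.atan2(dy, dx) - math.radians(listener_yaw_deg)
    pan = math.sin(bearing)                        # -1 and +1 are the two ears
    left = sample * gain * math.sqrt((1.0 - pan) / 2.0)
    right = sample * gain * math.sqrt((1.0 + pan) / 2.0)
    return left, right
```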
  • the host computer processing means 17 is programmed with software such that control data is
  • Control data is operated upon by computer 23 as a local geometric transform and HRTF tables and gains and input into a high speed digital audio signal processing system (e.g. Convolvotron or Beachtron).
  • the input data is operated upon by the board level system of means 23 to affect input audio sources
  • Input audio sources 8 are converted from an analog to digital audio signal 5c.
  • the board level system then outputs the affected audio to a digital to analog converter.
  • the audio system 23 outputs two independent 16-bit digital/analog converters synchronized to the analog to digital converters driving conventional stereo output.
  • the audio is typically transmitted over conductors to the stereo microphones on the participant's 24 head. In this manner the perceived locations of sound sources in the
  • audio computer 23 may comprise a personal computer or workstation that access analog or digitally recorded sound data that is input or stored directly in computer 23 or stored on disk, digital-audio tape, or compact disc.
  • Computer 23 uses the Music Instrument Digital Interface (MIDI) audio system to program, process, and output MIDI formatted data streams which in turn are interpreted by MIDI devices such as synthesizers, drum machines, and
  • the system outputs a stereo signal to the participant 24 who typically wears stereo headphones 64.
  • the audio system 23 may be of a type generally available from Silicon Graphics Inc., CA as the IRIS Indigo or Personal IRIS 4D/35 workstation, which include DAT-quality digital-audio subsystem, and configured with Audio Library software; and available from VPL Research Inc. CA as the AudioSphere TM system for use with computer generated virtual environments.
  • the MIDI audio system may be designed integral to computer 9 (e.g. available on all 1993 and future Silicon Graphics Inc. platforms).
  • the stereo audio signals 5c can be any suitable stereo audio signals 5c.
  • the transceiver 62 transmits over-the-air stereo radio signals output by the computer system 23, tunable between 88-108 MHz, to the receiver 63 of an FM radio with stereo audio headphones 64 worn by the participant.
  • the audio signals 5c from audio input sources can be transmitted by conductors directly to speakers 35 distributed around the participant.
  • the speakers are
  • a graphics computer 19 is operated as an input source 2 (not shown) to create a 3-D world model 14a.
  • the computer system includes a digital computer including a central processing unit, memory, communications ports, and the like. Operating system software, graphics software,
  • processing data, generated images and the like are stored in mass storage devices which may include magnetic disk drives, optical disk drives, and so forth.
  • Commands to operate the computer system and graphics generation commands are entered by means of the viewer interaction devices which may include a keyboard and graphics input device.
  • the graphics input device 66 may consist of one or more of a joystick, a trackball, a "mouse", a digitizer pad, a position sensing or pointing system, a non-contact viewer position and recognition sensing system, a voice recognition system, or other such devices.
  • the graphic input device 66 may be operated as an input source 2 or as part of the participant's interactive input system 10.
  • the computer graphics system 19 includes a bit mapped video display generator wherein each pixel element or pixel is accessible for generating high resolution images.
  • the video display generator is connected to an input port by suitable conductor lines.
  • the computer generated images are then further processed by the signal processing means 3 for
  • the digital computer may be any type of computer system which has the required processing power and speed such as the type which are employed in 3-D computer graphic
  • the computer system may function as a simulator controller if the display means of the present invention are used as simulators or as a game
  • the computer system may also be used to create special visual effects by combining artificial and animated scenes with live camera recorded scenes.
  • a graphics computer system of a type utilized herein is generally of the type manufactured by USA Quantel Inc.,
  • conventional videotape 67 and videodisc 68 players may transmit input signals 5c, representing prerecorded image and audio, to the signal processing means 3.
  • each frame may consist of images of one, several, or all subjects to be modeled in the virtual environment.
  • a single, several, or all the audio tracks may be recorded onto a single recording medium.
  • a computer mass storage 25 database may serve as an input source.
  • shape, imagery, and audio information may be encoded onto tape, or a magnetic or optical diskette or platter in an all digital format.
  • Processing means 3 of system 1 at least includes a host computer system 9 and interactive input system 10.
  • host computer 9 preferably comprises a digital computer system with high level image generation and graphics capabilities.
  • compatible with the present invention is generally of the type manufactured by Silicon Graphics of Mountain View, CA as the SkyWriter TM computer system.
  • the high level 3-D image generation and 3-D graphics capabilities of computer system 9 typically consist of a type of digital computer subsystem which has the capability to texture-map at least an NTSC video feed onto a three-dimensional wireframe 55.
  • the host computer 9 may include single or dual pipeline subsystem configurations.
  • the computer 9 may include all signal processing means 3
  • the high level image generation subsystem generally includes processing means 17 for manipulation of the geometric model 14, and processing means 18 for output of designated portions of the processed model 14 for display.
  • Computer 9 includes a digital computer including at least one central processing unit, system bus, memory, communications ports, and the like.
  • Computer 9 may be configured to drive one to twelve analog or digital output channels. Operating system software processing data, generated images, and the like are stored in mass storage devices which may include magnetic disk drives, optical disk drives, and so forth. Commands to operate the computer system 9 are entered by means of the participant interaction devices 10.
  • the computer 9 may be configured to receive a single or plurality of inputs from a single or plurality of interactive input devices associated with the system 10 via host 9 communication ports.
  • High-level 3-D image generation and 3-D graphics capabilities integrated with host computer 9 are generally of a type described in U.S. Pat. 4,827,445 by Fuchs; or manufactured by Silicon Graphics Inc., Mountain View, CA as the RealityEngine TM Host Integrated Computer Image Generation System with VideoLab TM or VideoLab/2 TM input/output option, and with
  • SkyWriter TM with RealityEngine TM incorporates the IRIS Performer TM software environment to provide the performance and functional requirements for image generation applications.
  • the signal 5a is typically captured from an image input system 6 by a conventional frame grabber, preferably at 30 frames per second, and converted from an analog to a digital signal by a conventional analog to digital converter 69.
  • the computer may receive and operate on a multiplexed or compressed video signal.
  • the converted signal representing the image is then transmitted to the computer 9 for texture mapping on a 3-D wireframe 55
  • Pixels on the two dimensional frame 61 are assigned three-dimensional coordinates
  • the image segments are manipulated by the computer to generate the effect to the participant that he or she is within the computer generated environment.
  • the computer performs programmed mathematical operations on the input imagery.
  • a mass storage device 25 may be a separate conventional magnetic disk device or can be an integral part of the host graphics computer 9, i.e., the main memory of computer 9.
  • the mass storage device is used to store data representing beings, objects, and scenes representing subjects 13 in the world model 14.
  • the mass storage device contains a previously generated data base comprising a digital representation of the world model.
  • FIGS. 11 and 12 illustrate a top plan and side view, respectively, that can be termed the world model 14 or simulated environment constructed to illustrate the present invention.
  • the subjects rendered in the model 14a are
  • each vertex data word suitably comprises a plurality of fields representing the coordinates of the vertex in a chosen coordinate system (x, y, z), the intrinsic color intensities present at the vertex (R, G, B), and a vector indicating the unit normal to the polygon surface at the vertex. In this manner the intensity and depth value of each pixel comprising an object is stored in computer memory.
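Packed into a fixed binary layout, a vertex data word of the kind described might look like the following; the field order and 32-bit float encoding are assumptions made for illustration.

```python
import struct
from typing import Tuple

# Hypothetical layout: 3 floats position, 3 floats colour, 3 floats unit normal.
VERTEX_FORMAT = "<9f"   # little-endian, nine 32-bit floats

def pack_vertex(xyz: Tuple[float, float, float],
                rgb: Tuple[float, float, float],
                normal: Tuple[float, float, float]) -> bytes:
    """Encode one vertex data word: coordinates, intrinsic colour intensities,
    and the unit normal to the polygon surface at the vertex."""
    return struct.pack(VERTEX_FORMAT, *xyz, *rgb, *normal)

def unpack_vertex(word: bytes):
    values = struct.unpack(VERTEX_FORMAT, word)
    return values[0:3], values[3:6], values[6:9]

word = pack_vertex((1.0, 2.0, 0.5), (0.8, 0.6, 0.4), (0.0, 0.0, 1.0))
xyz, rgb, normal = unpack_vertex(word)
```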
  • a single or plurality of live video feeds may be transmitted to computer 9 and processed in near real time.
  • look up tables instruct the computer to sample predetermined portions of the image and texture map them onto predetermined portions of the 3-D wireframe model defined in the computer.
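Such a look-up table can be as simple as a mapping from rectangular regions of the incoming frame 61 to named areas of the wireframe model 55; the region coordinates and face names below are purely illustrative.

```python
from typing import Dict, List, Tuple

# (left, top, right, bottom) pixel regions of the composite frame 61, keyed by
# the wireframe face each region should be texture mapped onto. Example values.
TEXTURE_LOOKUP: Dict[str, Tuple[int, int, int, int]] = {
    "subject_front": (0, 0, 320, 240),
    "subject_left":  (320, 0, 640, 240),
    "background":    (0, 240, 640, 480),
}

def sample_regions(frame: List[List[int]]) -> Dict[str, List[List[int]]]:
    """Cut the predetermined portions out of one frame for texture mapping."""
    crops: Dict[str, List[List[int]]] = {}
    for face, (left, top, right, bottom) in TEXTURE_LOOKUP.items():
        crops[face] = [row[left:right] for row in frame[top:bottom]]
    return crops
```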
  • Video compression systems and video multiplexer/demultiplexer systems may be operated to
  • a television production unit with image compression functions may be used to compress a
  • the preassigned area on the model 55 on which the image segment is positioned and oriented for texture mapping may be stationary or have motion.
  • the model may correspond to a person's head or body or an object in the real world. In such an instance, the being or object is tracked by position sensors located on the real object. In this manner the computer can keep track of the position and orientation of the corresponding model in the computer 9 and texture map each pixel image in its proper position on the model of the object or being.
  • the ability to sample a live video feed is a requirement in telepresence applications of system 1.
  • Real-time texture mapping from a live video feed is especially useful in an image based virtual teleconferencing system like that described in FIG 20 and FIG. 21.
  • a television production system that can function as host computer 9 in system 1 especially designed for creating the effect of texture mapping live video on a 3-D shape is
  • assemblies 12 may be incorporated or interconnected. Or additionally, participants may use plural computers (9a to the nth) to operate on the same environment; e.g.
  • an integrated part of the host computer 9 is a graphics computer system 19.
  • graphics computer system is in direct communicating
  • the keyboard, touch tablet, and host computer 9 are operated to control the processors and memory and other devices which comprise the graphics computer system.
  • Various input sources may be routed to the graphics computer system for rendering.
  • the computer graphics system comprising a digital computer may be operated to create or effect the recorded video images either before or after the images have been texture-mapped onto a wireframe.
  • the participant 24 affects the video images frame by frame.
  • Such a graphics system is used in the present invention to affect the images presented in either a 2-D or 3-D format.
  • the data to be affected is derived from a video input source 6 or storage device 25.
  • the components in a typical vector or raster electronic graphics system include a touch tablet, a computer, framestore, and a display.
  • the system 1 may include an expert system 22 with a complementary data base.
  • the expert system is in direct communicating relationship to computer 9.
  • the expert system may be housed in a separate chassis and communicate through conventional conductors and input/output ports with computer 9. Or alternatively, the expert system may be integral to computer 9.
  • the knowledge system is provided to respond to participant 24 requests or actions, and each request or action has a record including a plurality of parameters and values for those parameters.
  • the expert system is provided to process the record of a specific request or action to answer or respond to that request or action, and the complementary database stores a plurality of records of requests or actions having known answers or responses, and any request from a participant 24 is
  • responses transmitted from the expert system are interpreted by computer 9 such that the model 14 is updated.
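Matching a participant's request or action record against stored records with known responses, as described, could be sketched as a nearest-record search over shared parameters; the scoring rule and the example records below are assumptions.

```python
from typing import Dict, List, Optional, Tuple

Record = Dict[str, float]   # parameter name -> value

def best_response(request: Record,
                  known: List[Tuple[Record, str]]) -> Optional[str]:
    """Return the stored response whose record most closely matches the request,
    scored by the summed absolute difference over shared parameters."""
    best: Optional[str] = None
    best_score = float("inf")
    for record, response in known:
        shared = set(request) & set(record)
        if not shared:
            continue
        score = sum(abs(request[p] - record[p]) for p in shared)
        if score < best_score:
            best_score, best = score, response
    return best

# Example: a gesture toward a virtual door matched to a known action.
database = [({"gesture": 1.0, "target": 3.0}, "open_door"),
            ({"gesture": 2.0, "target": 7.0}, "pick_up_object")]
print(best_response({"gesture": 1.0, "target": 3.2}, database))
```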
  • responses transmitted from the expert system may be received by other peripheral devices 228 such as motion simulators, participant interactive tactile and force feedback devices, teleoperated vehicles, or any computer actuated device.
  • Responses in the form of data are operated upon by the peripheral device to affect control surfaces, motors, and the like. In this manner subject beings modeled in the computer generated model may interact with subjects in the real world environment.
  • a motion simulator of a type compatible with system 1 device 228 generally responsive to participant actions and expert system data output is manufactured by
  • a tactile and force feedback device that is generally of a type compatible with system 1 device 228 is available from VPL Research Inc. of Redwood, CA as the
  • An important part of signal processing means 3 is the interactive input system 10 operated by actions of the
  • Interactive input devices e.g. data gloves with position, orientation, and flexion sensors, and HMD with position, orientation, and eye tracking sensors
  • Data from device sensors are typically translated to machine language compatible with software programming of the host computer by an interactive input system's electronics unit 78.
  • the electronics unit 78 may be mounted adjacent to the device, in a separate chassis, or in some configurations comprise a printed circuit board that is plugged into the system bus of computer 9.
  • an interactive electronics unit 78 housed in a single chassis may provide an interface between left and right data gloves, HMD, and voice recognition systems and computer 9.
  • the electronics unit 78 receives signals from the sensors over conductive cable and translates the signals into machine language compatible with the host computer 9.
  • the electronics unit 78 contains
  • An electronics unit 78 arrangement for interfacing various interactive devices to a host computer 9 workstation in system 1 is of a type generally utilized by NASA Ames Research Center, Moffett Field, CA, and operated as part of the Virtual Workstation and Virtual Visual Environment Display (VIVED) project.
  • the preferred embodiment of the system 1 generally comprises two display means: a head-mounted display (HMD) assembly 11 and a large display assembly 12. Data from the input devices is operated upon by the processing means of computer 9 to affect the virtual environment. Interactive input systems 10 and associated devices compatible with the present system 1
  • Position sensing systems worn by a viewer or mounted on an object of a type particularly
  • VPL software compatible with VPL interactive input system 10, including associated electronic units, interface circuitry, devices, and with computer 9 (e.g. Skywritter TM workstation with RealityEngine TM image generation subsystem), is provided for incorporation with system 1.
  • the VPL input device 10 contains a microprocessor that manages the real time tasks of data acquisition and communication through the various RS232C, RS422, user port and optional IEEE 488 port.
  • the microprocessor also controls a 3Space Isotrack TM position and orientation system incorporated in the control unit 10.
  • Host computer 9 is programmed with software that allows a library of gestures, solid model simulations of complex objects, and the manipulation of said objects in real time.
  • non-contact position sensor systems 97 such as electro-optical systems, described in U.S. Pat. 4,843,568 by Krueger et al. and U.S. Pat. 4,956,794 by Zeevi et al., are compatible and may be incorporated with the present system 1.
  • the radar in U.S. Pat. 5,005,147 by Krishen et al. and the LADAR available from Autonomous Technologies Corp. of Orlando, FL may also be positioned above the viewing space to sense the position and orientation of participants and objects in the viewing space 58. Data from the radar or LADAR is processed by suitable computer processing means to determine the
  • the position data is then processed by computer 9 to affect the model 14 of system 1.
  • data from the same sensors that track the position and orientation of a target subject 13 e.g. LADAR or radar system with a video camera
  • this data may be operated upon by the computer to reconstruct the subject as a model 14a.
  • input system 2 and position sensing system 10 constitute the same system.
  • the combined system 2 and 10 is placed about the viewer as in FIG. 2.
  • a voice recognition system 227 may operate to translate audible sounds made by the participant 24 into machine language to control processing functions of the computer 9 .
  • the voice recognition system includes a microphone 39 worn by the participant that
  • a board level voice recognition system of a type utilized in system 1 is available from Speech Systems, Batavia, IL, as the Electronic Audio Recognition System.
  • the boards may be mounted in the rack of the host computer 9, or in a separate chassis in communicating relationship with computer 9.
  • the voice recognition system 227 operates with computer 9 to convert the voice signals into machine language to affect the model 14 of system 1.
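As an illustrative sketch only (the command vocabulary and the Model class below are hypothetical and not taken from the specification), recognized utterances can be mapped onto actions against the world model 14 roughly as follows:

    # Hypothetical mapping from recognized utterances to world-model actions.
    class Model:                      # stand-in for world model 14
        def __init__(self):
            self.yaw, self.size = 0.0, 1.0
        def rotate(self, degrees):
            self.yaw = (self.yaw + degrees) % 360
        def scale(self, factor):
            self.size *= factor

    COMMANDS = {
        "rotate left":  lambda m: m.rotate(-15),
        "rotate right": lambda m: m.rotate(+15),
        "zoom in":      lambda m: m.scale(1.25),
    }

    def on_recognized(text, model):
        """Called with the text output of the voice recognition system 227."""
        action = COMMANDS.get(text.strip().lower())
        if action:
            action(model)             # computer 9 then updates the displays

    model = Model()
    on_recognized("rotate left", model)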
  • Data received over conductors 82 is used to modify standard computer graphics algorithms of the computer 9. This preferably comprises a laser-disc based or other fast-access digital storage graphic imagery medium. Typically, the viewer of either the HMD assembly or large display assembly may interact with the virtual model 14 by issuing suitable commands to the computer 9 by manipulative means or sensors attached to his
  • the spatial coordinates of the virtual model can of course be changed to give the impression of movement relative to the viewer.
  • Display units 70 may consist of conventional 31, stereographic 32, autostereographic 33, or holographic 34 television or projection display units. Typically, a single or two views are sampled by the computer for display on the HMD 11.
  • computer 9 includes a video processing means 18, such as a "VideoSplitter/2" TM printed circuit board, that is programmed to select six independent and adjacent fields of view of the world model 14a for display on the large display assembly 12.
  • the video processing means is in communicating relationship with the display processor and raster processor printed circuit boards.
  • the raster processor comprises a board level unit that includes VLSI processors and graphics system memory.
  • the data received by the raster processor from the output bus of the geometry processor 17 printed circuit board and data management system of computer 9 is scan-converted into pixel data, then processed into the frame buffer. Data in the frame buffer is then transmitted into the display processor of computer 9 .
  • Image memory is interleaved among the parallel processors such that adjacent pixels are always being processed by different processors.
  • the raster processor contains the frame buffer memory, texture memory, and all the processing hardware responsible for color allocation, subpixel anti-aliasing, fogging, lighting, and hidden surface removal.
  • the display processor processes data from the digital frame buffer and processes the pixels through digital-to-analog converters (DACs) to generate an analog pixel stream which may then be transmitted over coaxial cable to display units 31, 32, or 33 as component video.
  • the display processor supports programmable pixel timings to allow the system to drive displays with resolutions, refresh rates, and interlace/non-interlace characteristics different from those of the standard computer 9 display monitor.
  • the display processor has a programmable pixel clock with a table of available video formats (such as 1280 x 1024 at 60 Hz non-interlaced (NI), VGA (640 x 497 at 60 Hz NI), NTSC, PAL, and HDTV). All printed circuit boards comprising the fourth processing means 18 may be held in the chassis of the computer 9 and are generally of a type such as the RealityEngine TM host integrated computer system, including the IRIS Performer TM software environment available from Silicon Graphics, Inc. of Mountain View, CA.
  • each of the six fields of view sampled by the video processing means 18 corresponds to imagery representing a 90 degree square field of regard.
  • Corresponding square fields 71a-71f of regard are displayed adjacent to one another and form a cube such that imagery representing a continuous 360 degree field of view scene is displayed about the participant.
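A minimal illustrative reconstruction (the function names and matrix conventions are assumptions, not taken from the specification) of how six 90 degree views tile a cube around the participant:

    # Sketch: the six 90-degree view directions that tile a cube around the
    # participant, so adjacent fields 71a-71f join into a 360-degree scene.
    import numpy as np

    def look_at(forward, up):
        """Build a 3x3 view rotation from forward and up vectors."""
        f = forward / np.linalg.norm(forward)
        r = np.cross(f, up)
        r = r / np.linalg.norm(r)
        u = np.cross(r, f)
        return np.stack([r, u, -f])            # rows: right, up, -forward

    CUBE_FACES = {                             # one per side of assembly 12
        "front": ([0, 0, -1], [0, 1, 0]),
        "back":  ([0, 0, 1],  [0, 1, 0]),
        "left":  ([-1, 0, 0], [0, 1, 0]),
        "right": ([1, 0, 0],  [0, 1, 0]),
        "up":    ([0, 1, 0],  [0, 0, 1]),
        "down":  ([0, -1, 0], [0, 0, -1]),
    }

    views = {name: look_at(np.array(f, float), np.array(u, float))
             for name, (f, u) in CUBE_FACES.items()}

    def projection_90(near=0.1, far=100.0):
        """90-degree square frustum: adjacent faces meet exactly at their edges."""
        p = np.zeros((4, 4))
        p[0, 0] = p[1, 1] = 1.0                # tan(45 deg) = 1
        p[2, 2] = (far + near) / (near - far)
        p[2, 3] = 2 * far * near / (near - far)
        p[3, 2] = -1.0
        return p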
  • Video signals representing each of the six fields of view are transmitted over a respective channel of the video processor 18 to an appropriate display unit 70a-70f or corresponding image segment circuit means 72 for additional processing.
  • FIG. 17 illustrates an embodiment of system 1 in which image segment circuit means 21 includes image control units 73a-73f which operate to distribute the images comprising spherical coverage to the large display assembly 12.
  • the function of the digital processing circuitry means is to accept an
  • computer 9 transmits each of the six fields of view over a respective channel of the VideoSplitter TM to a corresponding image control unit
  • the electronic video control system 72a-72f accepts the incoming video signal and partitions and processes it for display on the display units 70a1-70f9. This partitioning is referred to as segmenting the image.
  • Each respective image control unit processes its respective image into image segments 71a1-71f9.
  • Each segment is transmitted from the image control unit to predetermined adjacent display units such that a continuous panoramic scene is displayed about the participant on the display units of the large display assembly.
  • each image controller 73a-73f is a central processing unit (CPU) which executes software commands to effect the processing of the images in each framestore memory of
  • each image controller is connected by an internal bus to each respective framestore card.
  • the software commands read by the CPU may be
  • the microcomputer or computer workstation preferably includes a keyboard for an operator to enter software commands to the image controllers to effect image display.
  • Software commands consist of time code and program code for control of the displayed picture.
  • the microcomputer or computer workstation can also be operated to input software commands that specify the picture aspect ratio to be scanned for display. In this manner the signals representing model 14a may be segmented by the image
  • each framestore card's picture segment is then transmitted to a corresponding display unit 70.
  • the number of picture segments that the image controller can segment the composite picture into varies and determines the maximum number of display units that can be accommodated.
  • image segments are pre-formatted by the camera system 6 or computer 9 processor 18 to correspond to the picture segmentation accomplished by the image controller.
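A minimal sketch of the segmentation step, assuming a 3 x 3 wall of display units per side (the grid size and function names are illustrative only):

    # Illustrative segmentation performed by each image control unit 73:
    # one incoming frame is cut into a grid of sub-images, one per display
    # unit of that wall (3 x 3 here, matching the 70a1-70a9 style numbering).
    import numpy as np

    def segment_frame(frame, rows=3, cols=3):
        """Split an H x W x 3 frame into rows*cols tiles, row-major."""
        h, w = frame.shape[0] // rows, frame.shape[1] // cols
        return {(r, c): frame[r*h:(r+1)*h, c*w:(c+1)*w]
                for r in range(rows) for c in range(cols)}

    frame = np.zeros((1024, 1024, 3), dtype=np.uint8)   # one 90-degree view
    tiles = segment_frame(frame)
    # tiles[(0, 0)] would be routed to display unit "a1", tiles[(0, 1)] to
    # "a2", and so on, so the wall shows one continuous picture.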
  • Image segment circuit means 72, including associated cables, display units, audio system, and so forth, of the type shown in FIG. 17 of the system 1, is marketed by DELCOM USA, Philadelphia, PA and includes image controller units 73a-73f sold as the Station Pro and Universal; and systems from North American Philips Inc., of New York, NY under the name VIDIWALL TM.
  • the perspective of the beings 14a, objects 14b, and scenes 14c displayed to the participant 24 is calculated to provide a correct perspective appearance to the participant.
  • off-axis perspective projection is calculated based on the position and
  • points are sheared in a direction parallel to the projection plane, by an amount proportional to the point's distance from the
  • perspective is calculated by the computer means 18 based on the participant's position anywhere in the viewing space. Not only is the perspective distorted based upon the participant's location in the viewing space relative to the virtual model 14a, but imagery is also distorted to compensate for the angle from which the viewer observes the imagery on each display unit 70 to the nth. For both reasons, perspective may be grossly distorted to make the subject or scene appear natural from a participant-centered perspective.
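The shear can be sketched as below; the exact formula is an assumption, since the specification only states the principle (shear parallel to the projection plane, proportional to distance from it):

    # Hedged sketch of the off-axis shear (assumed form): a point is sheared
    # parallel to the projection plane by an amount proportional to its
    # distance from that plane, so the image stays correct for a participant
    # whose eye is not centered on the display surface.
    def shear_point(point, eye, screen_z=0.0):
        """point, eye: (x, y, z) with the projection plane at z = screen_z."""
        x, y, z = point
        ex, ey, ez = eye
        d = z - screen_z                       # distance from projection plane
        k = d / (ez - screen_z)                # proportionality factor
        return (x - k * ex, y - k * ey, z)     # shear parallel to the plane

    # Example: a point behind the screen, viewed from an eye offset to the
    # right, is shifted so its projected image lines up for that eye.
    print(shear_point((0.0, 0.0, -1.0), (0.3, 0.0, 2.0)))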
  • movement of the virtual model 14 may be made relative.
  • a slight movement of the viewers head may be translated by the computer to move beings,
  • input means 2 transmits data to either computer 9a and/or 9b .
  • Computer 9a transmits signals to a HMD assembly 11.
  • Computer 9b transmits signals to the large display assembly 12 .
  • Participant 24a and 24b each operate interactive devices associated with systems 10a and 10b, respectively, to transmit position data to their respective computers 9a and 9b. Position and orientation data from either participant may be transmitted by conventional interface means between
  • computers 9a and 9b or interactive input systems 10a or 10b to computer 9a and 9b in order to update the computer
  • both participants may operate their respective computer independently and not be networked together in any manner.
  • a single computer 9 is operated to render a computer generated model for both the viewer of the HMD assembly 11 and large display assembly 12.
  • Each participant 24a and 24b operates interactive input system 10a and 10b, respectively, to effect the model
  • the 3-D digitizing system 7 comprises a systems electronic unit, keypad, footswitch, stylus, and model table.
  • the system electronics unit contains the hardware and software necessary to generate magnetic fields, compute the position and orientation data, and interface with the host computer 9 via an RS-232C port.
  • the keypad is a hand-held, multikey alphanumeric terminal with display that is used to command data transmission, label data points, transmit
  • the foot-operated switch is used to transmit data points from the digitizer to the host computer 9 .
  • the stylus is a hand-held stylus that houses a magnetic field sensor and is used to designate the point to be digitized. The operator places the stylus tip on the point to be digitized and
  • the model table is a free-standing 40" high model table used as the digitizing surface.
  • the table houses the electronics unit and the magnetic field source (transmitter).
  • the volume of space above the table is specifically calibrated at time of manufacture and this data resides in the electronics unit's memory.
  • a type of 3-D digitizing system utilized is
  • FIG. 13 illustrates a subject vase 13 to be modeled in the computer 9.
  • the computer is configured to receive input imagery 56 and shape data 55 from the input sources 2.
  • Subject imagery 56 of the vase recorded by the panoramic camera system 6 and vase shape data derived from operating the digitizer system 7 are combined by operating the computer 9.
  • the computer is operated to texture map image segments of the recorded image of the vase onto 3-D wireframe 55 of the vase previously constructed by operating the 3-D digitizing system 7.
  • each rendered object (e.g. a vase), being (e.g. a person), or scene (e.g. temple and seascape) is placed in the computer's memory 25a as rendered models 14a.
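A hedged sketch of the texture-mapping correspondence, assuming a simple cylindrical mapping for the vase (the real correspondence would follow whatever geometry the camera system 6 and digitizer 7 dictate):

    # Illustrative only: assigning texture coordinates so recorded image
    # segments 56 wrap onto the digitized wireframe 55 of the vase.
    import math

    def cylindrical_uv(vertex, y_min, y_max):
        """Map a wireframe vertex (x, y, z) to (u, v) in [0, 1]."""
        x, y, z = vertex
        u = (math.atan2(z, x) + math.pi) / (2 * math.pi)    # around the vase
        v = (y - y_min) / (y_max - y_min)                    # base to rim
        return u, v

    wireframe = [(1.0, 0.0, 0.0), (0.0, 0.5, 1.0), (-1.0, 1.0, 0.0)]
    ys = [v[1] for v in wireframe]
    uvs = [cylindrical_uv(v, min(ys), max(ys)) for v in wireframe]
    # Each (u, v) indexes into the recorded image of the vase, so the
    # renderer paints the recorded surface onto the model 14a.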
  • a type of video input and output sub-system for combining 3-D graphics with digital video as described above is manufactured by Silicon Graphics of Mountain View, CA under the name VideoLab2 TM.
  • the system allows video to be digitized directly to the screen at real-time rates of 25 or 30 frames per second or stored on disk in memory of mass storage 25a.
  • A sample acoustical signature of a vase is diagrammatically illustrated.
  • the acoustical signature of the vase is derived by manipulating the vase in the real world and recording the sounds the vase makes. For instance the vase may be handled, scratched, hit, or even broken in order to record the various acoustical signatures.
  • Those signatures form the audio world model 14b.
  • Acoustical signatures are linked to subjects in the visual world model 14a.
  • the acoustical signatures recorded by microphones 39a-39f of the audio input system 8 are converted from an analog signal to a digital audio format and may be stored in the audio processing means mass storage 25b.
  • audio storage 25b may consist of acoustical signatures stored on conventional audio tape or disc and accessed from corresponding audio tape and disc players.
  • Mass storage 25b is connected by suitable conductors and in communicating relationship with audio computer processing system 23.
  • When an action, such as a person breaking the vase, occurs in the world model 14a, a corresponding audio representation is pre-programmed to occur in the acoustical world model 14b by the audio computer processing system 23.
  • the stereo audio signal is typically converted from digital to stereo analog signals and read out to the stereo headphones in the participant's HMD 11 or to speaker systems associated with each side of the large display assembly 12.
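The linkage between the visual/shape model 14a and the acoustical model 14b can be sketched as follows (file names and the event vocabulary are illustrative assumptions):

    # Minimal sketch: acoustical signatures recorded for a subject are stored
    # per event; when that event occurs in model 14a, the matching signature
    # from model 14b is routed to the headphones 64 or assembly speakers.
    SIGNATURES = {
        ("vase", "hit"):     "vase_hit.wav",
        ("vase", "scratch"): "vase_scratch.wav",
        ("vase", "break"):   "vase_break.wav",
    }

    def on_model_event(subject, event, audio_out):
        """Called by the world-model update loop when an action occurs."""
        clip = SIGNATURES.get((subject, event))
        if clip is not None:
            audio_out.play(clip)

    class PrintAudio:                 # stand-in for the audio system 23
        def play(self, clip):
            print("playing", clip)

    on_model_event("vase", "break", PrintAudio())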
  • FIG. 14 diagrammatically illustrates a computer generated world model 14 which the viewer can enter by the use of interactive input system 10 and audio-visual means 4.
  • the system comprises imagery, shape, and audio signatures recorded from objects in the real world and then translated and
  • the computer 9 determines the current field of view for the participant 24 and his or her line of sight.
  • the images transmitted to the display units 70 to the nth are of a virtual model 14 having predetermined spatial coordinates within the display assemblies.
  • the six display units display a virtual model corresponding with what would be seen by the viewer if the virtual model were a real object from the viewer's current standpoint.
  • the location and angular position of the participant's 24 line of sight are calculated by computer 9 .
  • Three-dimensional algorithms are operated upon so that the observer is effectively enclosed within the model 14a.
  • the model is generated by rendering data word or text files of coordinates comprising the vertices of the surface faces which form the model 14a.
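A minimal sketch of reading such a text file of vertices and faces; the record format shown ("v x y z" and "f i j k" lines) is assumed for illustration and is not the specification's format:

    # Illustrative parser for a plain text wireframe description.
    def parse_wireframe(lines):
        vertices, faces = [], []
        for line in lines:
            parts = line.split()
            if not parts:
                continue
            if parts[0] == "v":                       # vertex: x y z
                vertices.append(tuple(float(p) for p in parts[1:4]))
            elif parts[0] == "f":                     # face: vertex indices
                faces.append(tuple(int(p) for p in parts[1:]))
        return vertices, faces

    sample = ["v 0 0 0", "v 1 0 0", "v 0 1 0", "f 1 2 3"]
    vertices, faces = parse_wireframe(sample)
    # vertices and faces can then be handed to the geometry processor 17
    # for texture mapping and display.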
  • computer 9 transmits processed signals to the HMD assembly 11 which the viewer wears.
  • An important part of signal processing means 3 is at least one interactive input device 10, which continuously monitors the participant.
  • the position system 10 comprises a head sensor 76a, a right data glove sensor 76b worn on the participant's right hand Hr, a left data glove sensor 76c worn on the participant's left hand H1, a source unit 77, a systems electronics unit 78, and conductors 83, and 24 that
  • the source unit 77 acts as a fixed source module and is energized by a suitable power source and transmits a low frequency electrical field.
  • the sensors are small and modular.
  • the head sensor 76a is mounted on a band worn on the viewer's head, and the glove sensors 76b and 76c are mounted integral to the right and left glove, respectively.
  • the sensors sample the field generated by the source.
  • the sensor and the transmitter are both connected to an electronics decode circuit 78 which decodes the signals from the sensors and computes each sensor's position and orientation in angular coordinates.
  • the device thus provides data as to both the position and orientation of the viewer's head, left and right hand.
  • the information is passed via conductor 82 to the computer 9.
  • a type of system generally compatable with the present invention is manufactured by
  • there may be built into the head band two eyeball movement tracking systems, or eye sensors, which monitor movements of the wearer's eyeballs and transmit signals representing these movements to computer 9 via
  • a type of eye tracking system generally compatible with the present invention is manufactured by NAC of England, UK as the Eye Mark Recorder, Model V (EMR-V).
  • the participant wears a head band which holds a position sensor 76a.
  • the position sensor system 10a mounted on participant 24b in assembly 12 operates in the same manner as the position sensing system 10b operated by participant 24b wearing the HMD assembly 11.
  • computer 9 includes computer image processing means 18 for sampling out up to eight independent views within the model 14 for video output. Typically, each output signal is transmitted to a display unit 70a-70h.
  • the circuit means 18 samples out two independent views 70g and 70h for display on the left and right displays of the viewer's HMD 11, and samples out six independent contiguous views 70a-70f for display on each of the six sides of the large display assembly 12. It should be noted that by decreasing the number of displays the computer 9 and associated circuit means 18 is able to increase the display resolution because more processing memory can be generated for the remaining displays.
  • a type of image segment circuit means 18 compatible with and integral with the present invention is generally of a type manufactured by
  • Fig. 17 is a more automated embodiment of the system 1 in which a 3-D subject is rendered by a first processing means 15 for fusing the image signals 5a and microwave signals 5b from the subject 13. The fusion of the image and microwave signals by means 15 results in 3-D model segments 26a.
  • a panoramic 3-D camera system 6 and panoramic 3-D digitizing system 7 comprises the panoramic input system 2.
  • Arrays 40 and array assemblies 44 similar to those shown in FIGS. 4 through 7 are preferably incorporated to position and hold sensors in place.
  • a plurality of image and microwave sensors are positioned inward about a subject 13 to form adjacent coverage 42.
  • a plurality of image and microwave sensors are positioned in an outward facing manner to achieve adjacent coverage 42 of a subject.
  • a target 3-D subject exists in space in the real world.
  • the outputs of image and microwave sensors are combined by image processors 15 and 16 of computer 9.
  • the image sensor is composed of one or more cameras 37 which consist of an aperture and a light sensitive element such as a charge-coupled device (CCD) array 53.
  • the light sensitive element converts the sensed image into an analog video signal 5a .
  • the analog video signal is transferred to the low-level image processor of means 15 via a coaxial cable.
  • the low-level image processor is an electronic circuit that may be implemented as a general purpose computer with
  • the low-level image processor collects the analog video for each frame and converts it to a digital form.
  • This digital image is an array of numbers stored in the memory of the low-level image processor, each of which represents the light intensity at a point on the sensing element of the camera.
  • the low-level image processor may also perform certain filtering operations on the digital image such as deblurring, histogram equalization, and edge enhancement. These operations are well-known to those skilled in the art.
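Purely as an illustration of this enhancement stage (using only numpy; the actual filters, their parameters, and their order are implementation choices not given in the specification):

    # Illustrative enhancement: histogram equalization and a simple
    # Laplacian-style edge enhancement applied to the digitized frame.
    import numpy as np

    def equalize(img):
        """Histogram-equalize an 8-bit grayscale image."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = hist.cumsum()
        cdf = (cdf - cdf.min()) * 255 / (cdf.max() - cdf.min())
        return cdf.astype(np.uint8)[img]

    def edge_enhance(img):
        """Subtract a Laplacian high-pass term to sharpen edges."""
        f = img.astype(float)
        lap = (-4 * f
               + np.roll(f, 1, 0) + np.roll(f, -1, 0)
               + np.roll(f, 1, 1) + np.roll(f, -1, 1))
        return np.clip(f - 0.5 * lap, 0, 255).astype(np.uint8)

    frame = (np.random.rand(480, 640) * 255).astype(np.uint8)   # digitized frame
    enhanced = edge_enhance(equalize(frame))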
  • the digital images, thus enhanced, are transferred via a digital data bus to the high-level image processor of means 15.
  • the high-level image processor may also be either a hard-wired circuit or a general purpose computer.
  • the high level image processor takes the enhanced digital image and attempts to extract shape information from it by various means including shape-from-shading, or in the case where multiple cameras are used, stereopsis or photometric stereo. Again these are well-known operations to those skilled in the art.
  • the multiple images are combined at the high-level image processor.
  • the images may be combined optically by the use of a panoramic camera system shown in FIG. 3 , and previously discussed herein.
  • the image processor produces an incomplete surface model of the unknown object.
  • the surface model is a list of digitally stored data, each of which consists of three numbers that are the x, y, and z locations of a point on the object's surface.
  • the incompleteness of the surface model may result from regions on the object's surface that are, for some reason, not understood by the high-level image processor.
  • the incomplete surface model is passed on to the initializer included as part of processing means 15 by a digital data bus.
  • the initializer is a general-purpose computer or digital circuit that "fills in" the portions of the surface model left incomplete by the high-level image processor.
  • the unknown areas of the surface model are computed by surface functions such as B-splines that depend on some numerical parameter p.
  • the surface functions are represented digitally in a form that both the initializer and the computer understand.
  • the surface functions along with the incomplete surface model are passed on to the computer by a digital data bus.
  • the computer will determine the correct value of the parameter p in the manner hereafter described.
  • a radar cross-section (RCS) of the unknown subject 13 is being measured by the radar system 38.
  • the radar 38 functions as a 3-D shape input system 7.
  • the radar 38 consists of a radar processor, antennas 85, and waveguides 86.
  • the radar processor is a widely available device, and all of the
  • the radar processor generates a microwave signal that is transmitted along the waveguide and radiated by the transmitting antenna.
  • the electromagnetic field is diffracted by the object and collected by the receiving antenna.
  • the diffracted signal is transmitted back to the radar processor by a waveguide.
  • the radar processor of system 7 computes the RCS of the unknown subject 13.
  • the RCS is represented digitally by the radar processor, and
  • the computer 15 performs the comparisons and iterations using two pieces of widely available software.
  • the first is the MINPACK package of non-linear minimization programs published by Argonne National Laboratory, and the second is the Numerical Electromagnetic Code (NEC) available from Ohio State University.
  • the NEC code generates theoretical RCS values for comparison with the measured RCS.
  • Using the correct value of p, along with the incomplete surface model and surface functions from the initializer, the computer 15 generates a complete surface model segment for each adjacent field of regard 42 of the overlapping 41 radar and image sensors.
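A hedged sketch of this comparison and iteration loop, with a stand-in predicted_rcs function in place of the electromagnetic code (e.g. NEC) and synthetic values in place of the measured RCS:

    # Illustrative parameter fit: adjust the surface-function parameter p
    # until the RCS predicted for the candidate surface matches the RCS
    # measured by radar 38.
    import numpy as np
    from scipy.optimize import least_squares    # wraps MINPACK routines

    angles = np.linspace(0.0, np.pi, 19)             # measurement aspect angles
    measured_rcs = 2.0 + 0.5 * np.cos(3 * angles)    # placeholder radar data

    def predicted_rcs(p, theta):
        """Stand-in for the theoretical RCS of the surface with parameter p."""
        return 2.0 + p * np.cos(3 * theta)

    def residual(p):
        return predicted_rcs(p[0], angles) - measured_rcs

    fit = least_squares(residual, x0=[0.1])
    p_correct = fit.x[0]          # used to complete the surface model segment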
  • a plurality of means 15 operate in parallel to derive segments 26 representing all sides of a 3-D subject 13 simultaneously. As with all computer programs, they must be written for the particular hardware and carefully debugged.
  • the output from each of the sensor fusion systems is transmitted to a solid-modeling computer system which forms the second processing means 16 of the system 1 shown in FIG. 17.
  • the computer system includes a high speed central processing unit (CPU), terminals, storage devices (for instance magnetic tape or disk drives), solid-state memory, communication interfaces such as a network interface, and a high resolution video display and printer plotter.
  • Mass storage of means 16 contains a database for storing solid models representing segments 26 of subjects 13 such as objects, beings, and scenes.
  • Solid models may be represented in the database in a number of different ways, such as in the form of boundary representations, and faceted representations.
  • solids are represented so that it is possible to extract information about surface patches whose disjoint union comprises the bounding surfaces of the solid.
  • the system operates using topology directed subdivision for determination of surface intersections. This is accomplished by the steps of obtaining a pair of surfaces from a main pool of surface representations and performing a mutual point exclusion test to determine if the surfaces may have an intersection.
  • Data defining the model segments 26 obtained by the first processing means 15 are transmitted to the second processing means 16 mass storage. Included in this data is information on the surface representations of each model segment which is operated upon by the second processing means in determining surface matching and intersection. For those pairs of surfaces possibly having an intersection, the
  • transversality of the surface is checked. If transversal, the intersection set is computed. For those pairs which are not transversal, recursive subdivision is performed until
  • a parallel processing system including a master
  • In FIG. 17 a position sensing system 10 similar to that described for use in FIGS. 2 and 3 is operated to monitor the participant's position and orientation.
  • various viewer interactive input systems, such as those previously described, may be incorporated to monitor the position and orientation of the viewer.
  • audio signals 5C from the microphones 39a-39f are transmitted to the 3-D audio system 23.
  • the audio system 23 outputs a stereo signal 5c corresponding to the subjects in the model 14a and the position and orientation of the
  • the stereo signal 79 is transmitted by a conventional stereo audio transmitter 62 to an audio stereo receiver 63 associated with stereo headphones 64 worn by the participant 24.
  • signal 79 is transmitted
  • Such a stereo transmission/receiver headphone arrangement is of a conventional nature used commonly by those in the broadcast industry.
  • the generation of the panoramic computer generated model 14 by computer 9 is the same as in FIG. 16 for the large display assembly.
  • the fourth processing means 18 for processing the image for display and distribution also includes image segment circuit means 72 to partition each signal 80a-80f into sub-segments for display on an array of display units 70a1-70f9 located on each side of the viewer.
  • Image control units 73a-73f, whose detailed operation has been previously described, are included as part of means 72 to process the images for distribution to the display units.
d) THREE-DIMENSIONAL DISPLAY PROCESSING
  • input sources 2 and signal processing means 3 allow a three-dimensional model 14a to be rendered within the computer 9 in real-time for simulation. Additionally, the model has increased realism because the model is composed directly from shape and imagery recorded from the real world as opposed to being created in a traditional manner incorporating computer graphics. Most importantly, computer 9 is programmable such that multiple viewpoints of the three-dimensional model 14 may be processed out to facilitate dynamic three-dimensional display for either stereographic, autostereographic or holographic display system applications 91. Computer 9 includes the necessary firmware and software for processing and distributing 3-D images to 3-D display units 32, 33, or 34.
  • a single computer is preferably used to process an adjacent segment 26a of the model 14a for each 3-D display unit 32, 33, or 34. This is recommended because of the computationally intensive nature of rendering multiple viewpoints of the model.
  • the complete world model 14a may be stored in the memory of each computer 9a to the 9nth.
  • each computer 9 receives only a segment 26a of the model 14a from a master computer 9 or data base 25a.
  • a participant interactive input system 10 transmits viewer position and orientation data to each computer 9a via conventional
  • Each computer is synchronized with each of the other computers by a master clock 88 transmitting a timing signal to each computer 9a to the 9nth.
  • the timing signal may be derived from the internal clock of any computer 9.
  • the output signals from each computer 9a1-9f9 are transmitted to each respective 3-D display unit 90a1-90f9 of the display 3-D assembly.
  • the display units display adjacent portions of the model 26a1 of the world model 14a such that a substantially continuous stereographic, autostereographic, or holographic scene 71 is rendered before the participants eyes.
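A minimal sketch of this lock-step arrangement (class and variable names are illustrative): the master clock 88 issues a frame tick, and every per-display renderer draws its own segment 26a for that tick using the same viewer pose, so all 3-D display units stay in step.

    import time

    class SegmentRenderer:
        """Stand-in for one computer 9a1-9nth driving one 3-D display unit."""
        def __init__(self, name, segment):
            self.name, self.segment = name, segment

        def render(self, frame_no, viewer_pose):
            # placeholder for the per-unit stereographic/holographic draw
            return f"{self.name}: frame {frame_no}, pose {viewer_pose}"

    renderers = [SegmentRenderer(f"unit-{i}", segment=i) for i in range(6)]

    def run(frames=3, rate_hz=60):
        for frame_no in range(frames):
            viewer_pose = (0.0, 1.6, 0.0)            # from input system 10
            for r in renderers:                      # one computer per unit
                print(r.render(frame_no, viewer_pose))
            time.sleep(1.0 / rate_hz)                # master clock 88 tick

    run()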
  • stereoscopic methods and display systems 32 compatible with system 1 which require special eye glasses; commonly, two overlapping pictures for each screen can be read out for 3-D stereoscopic effects, such as with differing filtration or polarization between the projection modules or, for a 3-D effect without glasses, showing one version of the picture slightly smaller and out of focus relative to the other.
  • the two differing points of view are projected in a super-imposed fashion on each side of the large display assembly 12.
  • the images are scanned out of computer 9 as separate channels by using a multi-channel video output processor 18 generally of a type available from Silicon Graphics.
  • the multiplexed image is applied to a micro-polarizer array.
  • the multiplexed image is demultiplexed by the viewer wearing polarized eye glasses.
  • An advantage of Faris's system is that a single picture frame contains both left and right eye information so that conventional display units may be utilized, yet yield a stereoscopic image to the viewer.
  • an autostereoscopic image requiring no eye glasses may be rendered by scanning out multiple adjacent viewpoint images of the model from computer 9 and applying those images to processing means 119 and display means 33 according to U.S. Pat. 4,717,949 by Eichenlaub or an
  • the images may be interdigitated by computer 9. Alternatively, the images may be read out from computer 9 on separate channels and interdigitated by a signal mixer.
  • an autostereoscopic system includes a system in which multiple adjacent viewpoint images of the model 26a of the world model 14a are applied to processing means 119 and display means 33 according to U.S. Pat. 5,014,126 by Pritchard et al.
  • the scanning path and recording rates are selected to produce motion within visio-psychological memory rate range when displayed using standard display units 31.
  • Computer code is written to define the multiple adjacent viewpoint images of the model 14 and the number of frames per second that the computer 9 transmits to the display unit.
  • each computer 9al-9nth is programmed and operated to construct a series of predistorted views of a model segment 26a of the world model 14a from image data stored in a computer memory by ray-tracing techniques. Each perspective view can then be projected with laser light onto a piece of high resolution film from the angle
  • the hologram is
  • Each adjacent holographic display unit 34al-34f9 displays a holographic image of an adjacent corresponding portion 26a of the world model 14a.
  • a holographic stereogram approach may be desirable when presenting 3-D images.
  • a stereogram consists of a series of two-dimensional object views differing in horizontal point-of-view. These views are presented to the participant in the correct horizontal location, resulting in the depth cues of stereopsis and
  • the 2-D perspective views are generally imaged at a particular depth position, and are multiplexed by horizontal angle of view.
  • a given holo-line in this case contains a holographic pattern that diffracts light to each of the horizontal locations on the image plane.
  • the intensity for a particular horizontal viewing angle should be the image intensity for the correct perspective view. This is accomplished by making the amplitude of the fringe
  • the precomputed tables can be indexed by image x-position and view-angle (rather than by x position and z position).
  • Summation is performed as each of the perspective views of each segment 26a is read into each computer 9al-9nth based on the viewpoint of the participant.
  • the participant's viewpoint is sensed by any previously mentioned position sensing system, such as a LADAR 97, that comprises the interactive input system 10.
  • the changes in x can be indexed by look up tables of the computer 9.
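A hedged sketch of the summation using precomputed tables indexed by x position and view angle (table sizes and contents below are placeholders, not values from the specification):

    # Illustrative bipolar-intensity summation for one hologram line: at every
    # x position, an elemental fringe pattern looked up by (x position, view
    # angle) is weighted by the image intensity of the correct perspective view.
    import numpy as np

    N_X, N_VIEWS, FRINGE_LEN = 256, 8, 32
    fringe_table = np.random.randn(N_X, N_VIEWS, FRINGE_LEN)   # precomputed fringes
    views = np.random.rand(N_VIEWS, N_X)    # perspective views of segment 26a

    def hologram_line():
        line = np.zeros(N_X * FRINGE_LEN)
        for x in range(N_X):
            for v in range(N_VIEWS):
                # amplitude set by the image intensity for that view at that x
                line[x*FRINGE_LEN:(x+1)*FRINGE_LEN] += views[v, x] * fringe_table[x, v]
        return line

    holo = hologram_line()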
  • a simple drawing program has been written by MIT in which the user can move a 3-D cursor to draw a 3-D image that can be manipulated.
  • Stereogram computer graphic holograms such as those computed and displayed on the MIT real-time holographic display system, and which produce realistic images, may be computed utilizing sophisticated lighting and shading models and exhibiting occlusion and specular reflections in system 1 .
  • Each computer 9a1-9f9 may comprise the following system and methods demonstrated by the MIT Media Laboratory. The methods of bipolar intensity summation and precomputed elemental fringe patterns are used in the hologram computation for holographic real-time display.
  • a type of computer 9a1-9f9 for holographic processing 220 is generally of a type known as the Connection Machine Model 2 (CM2) manufactured by Thinking Machines, Inc., Cambridge, MA.
  • Each computer 9al-9f9 employs a data-parallel approach in order to perform real-time
  • each x location on the hologram is assigned to one of 32k virtual processors.
  • the 16k physical processors are internally programmed to imitate 32k "virtual" processors.
  • a Sun 4 workstation is used as a front-end for the CM2, and the parallel data programming language C Paris is used to implement
  • each image segment 26a1-26a9 may be computed for each associated holographic display unit 34a1-34f9.
  • the signals represent the optical fringe patterns that would be present in a 3-D hologram composed of 192 horizontal lines. Each line is composed of 16 other lines and has 32,000 picture elements. To simplify the computing task, some information is omitted from the hologram.
  • the light signals are converted into a radio-frequency signal,
  • and a tellurium-dioxide acousto-optical crystal is driven with this holographic information.
  • the mirror spins in the opposite direction and at the same speed as the sound waves.
  • each 3-D display unit 90a1-90f9 is faced inward toward the participant such that a panoramic scene of spherical coverage is
  • the holographic stereogram mimics the visual properties of a true hologram even though it lacks the information content and interferometric accuracy of a true hologram.
  • holographic processors and larger holographic display units similar to that described above may be incorporated in the present system 1. It is further foreseen within the scope of the system 1 that similar and other holographic processors and display units may operate on the same basic image based virtual model for holographic image generation, and use the basic assembly 4 arrangement for holographic display. It is also foreseen that projection holographic display units may be situated about the viewing space 58 to project 3-D images into the viewing space to add increased realism to the images viewed by the
  • FIGS. 20 and 21 illustrate an embodiment in which system 1 is used in a telecommunications application.
  • a virtual reality/telepresence teleconferencing system 20 includes computer 9, interactive input system 10, source 2 information derived from an input means consisting of a panoramic camera system 6, 3-D panoramic digitizer system 7, and 3-D panoramic audio system 8, a head mounted display assembly 11 and/or large display assembly 12, and telecommunications peripheral apparatus 92-96 to allow interconnection of two or more terminals to engage in teleconferencing via digital data networks 98.
  • the digital data telecommunications network may be part of a local area network (LAN) or wide area network (WAN) known to those in the telecommunications industry.
  • any suitable telephone, cable, satellite, or wireless network 98 compatible with the present invention may select, switch, transmit, and route data of system 1 between remote locations.
  • a frame grabber interfaces each camera of input system 6 with its computer 9 .
  • Signals output from the computer 9 are encoded and compressed before being input to a telephone line via a
  • a decoder 96 is connected between the modulator/demodulator 94 and the computer for decoding compressed video signals received by the modulator/demodulator means from the telephone line 99 so that the signals may be displayed on a video display connected to the computer.
  • the telecommunications system 20 is configured to send and switch from video to high-resolution graphics, or voice modes without breaking connection.
  • the participant may choose to transmit imagery data and/or shape data, or can update a virtual environment by simply passing data
  • a plurality of computer workstations 9 and telecommunications apparatus 92-96 may operate in parallel to transmit data over a plurality of telephone lines to facilitate the
  • high bandwidth imagery of each side of the large display assembly 12 may be
  • the telecommunications system 20 can be added as an internal peripheral to a computer, or may be added as a stand alone device which connects to the computer 9 via a serial or parallel interface. Additionally, the telecommunications system may be used in a one, two, or many-way ("broadcast") mode.
  • U.S. Pat. 5,062,136 by Gattis et al. is generally of the type incorporated and compatible with the present system 1.
  • input means 2 comprises a panoramic camera system 6a-6f, panoramic
  • Each LADAR 7a-7f of the system includes a registered visible video channel.
  • the LADAR system searches, acquires, detects, and tracks a target subject 13 in space.
  • the LADAR initially searches a wide field of view 223.
  • the LADAR includes focusable optics that subsequently may focus on a subject in a narrow field of view 224.
  • each LADAR video processor 15a-15f associated with each LADAR system 7a-7f switches from a search mode to
  • a vision processor 15a-15f e.g. a SGI Computer Workstation or
  • Macintosh PC of each LADAR system includes an object
  • Each input source, computer fusion processor, and display unit operates on an image with a 90 degree field of view. All systems may be synchronized by the master clock integral to any one of the computers 15a-15f. This is typically done by conventional software program to synchronize the signals of the machines 15a-15f and the use of common wiring network topologies such as an Ethernet, Token Ring, or Arcnet to interconnect machines 15a-15f.
  • FIG. 23 illustrates the arrangement as configured on a module of a space station. It should be understood that various system 1 arrangements can be placed on any suitable vehicle platform.
  • the participant operates interactive control devices 103 to pilot the host vehicle 102 in response to the audio and visual scenery displayed around the participant. Additionally, the system 15 may include object recognition processors.
  • a LADAR 7a and camera 6a of a type compatible for incorporation with the present invention is manufactured by Autonomous Systems Inc. of Orlando, FL.
  • In FIG. 25 a 3-D representation of the scene is recorded by a sensor array 36a-36f such as that described in FIGS. 6 and 7.
  • the sensor array housing 40 is incorporated into the outer skin of a remotely piloted or teleoperated vehicle 108.
  • the vehicle 108 incorporates a video data compression system 226 to transmit visual information sensed about the vehicle over the air to an operator at a control station 225.
  • An over the air radio frequency digital communications system 226 transmits 1000 to one compressed full color signals at a 60 hertz data transmission rate.
  • Imagery data from a panoramic camera system 6 is transmitted to an output
  • the frame buffer reads out the digital data stream to the radio frequency transceiver 109b for
  • Transceiver 109a is located at the control station 225.
  • the transceiver 109a receives the over the air signal and reads the signal into the input communications buffer.
  • the input communications buffer reads out a digital data stream to a data signal decompression (or data expander) device.
  • the decompressed signal is read to signal processing unit 3 for processing and distribution.
  • the processed image is transmitted from processor 3 to display units 11 or 12. Shape data and audio data may also be
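An end-to-end sketch of this downlink, with zlib standing in for the onboard compressor 226 and plain function calls standing in for the RF transceivers 109a/109b (an illustration of the data flow, not the actual codec):

    import zlib
    import numpy as np

    def vehicle_side(frame):
        """Onboard the vehicle 108: compress the panoramic frame."""
        return zlib.compress(frame.tobytes())           # -> output buffer / RF

    def control_station_side(packet, shape, dtype=np.uint8):
        """At control station 225: decompress and hand to processing unit 3."""
        raw = zlib.decompress(packet)
        return np.frombuffer(raw, dtype=dtype).reshape(shape)

    frame = np.zeros((480, 640, 3), dtype=np.uint8)     # one sensor-array view
    packet = vehicle_side(frame)
    displayed = control_station_side(packet, frame.shape)
    assert (displayed == frame).all()                   # ready for display 11 or 12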
  • a teleoperated vehicle 226 data transmission and control system compatible with the present system 1 is manufactured by
  • a panoramic camera system 6 like that in FIGS. 2-7 replaces the camera system of the Transition Research Corp. camera arrangement.
  • a single or plurality of channels may comprise the system 21.
  • the participant 24 of the control station 225 interacts with the real world remote environment by viewing the displayed scene and operating devices of the interactive input system 10.
  • the control signals are transmitted from the system 10 via transceiver 109a to transceiver 109b to vehicle control systems 112.
  • the control system includes data processing means that operates on the transmitted signal to actuate control surfaces and motors 113, and manipulators onboard the teleoperated vehicle. In this manner the participant remotely controls the teleoperated vehicle 108.
  • the sensor array 36 may be mounted onboard unpiloted vehicles such as robots.
  • the sensors of each sensor array would be faced outward from the robot.
  • the sensors of each sensor array would be in
  • Sensed data would be fused and operated upon by the robot to assist the robot in negotiating and interacting within its environment.
  • the processing means 18 of computer 9 generates signals 80 transmitted to the HMD 11 via conductor lines and these are converted into two images on respective high-resolution, miniature display units 70a-70b housed within the HMD assembly.
  • the display units are mounted on opposite sides of the HMD assembly in direct communication with the respective left and right eyes of the viewer wearing the HMD assembly. HMD assemblies of the type
  • a large display assembly 12 is provided to receive and display the virtual model to a viewer.
  • the large display assembly is configured in a polyhedral arrangement.
  • the display assembly comprises a structural assembly that encloses the participant's head, upper body, or entire body.
  • the assembly is designed to facilitate a single or plurality of viewers.
  • the floor 101 and its associated display units beneath, to the sides and over the viewer are integrated so the participant is presented with a substantially continuous scene for viewing.
  • the structural framework, supports, and associated fasteners 107 are integrated with the display assembly such that they are hidden from the participant and hold the assembly together.
  • the floor 101 on which the viewer is situated is preferably of a rigid
  • the viewing side of the display systems or optical enlarging assemblies is constructed of materials that support the participant.
  • the material on which the viewer is situated is preferably formed of a transparent rigid glass, plastic, or glass-plastic laminate.
  • the viewing surface 81 of the display units may be curved or flat and face inward toward the center of the viewing space.
  • the display units 70 typically comprise image projection units and associated rear projection screens, cathode ray tube display units, or flat panel displays.
  • Single display units 70a-70f may comprise a side of the viewing space, or a plurality of display units 70a1-70f9 may make up a side of the viewing space 58.
  • As shown in FIG. 19, stereographic display units 32, autostereoscopic display units 33, or holographic display units 34, and associated screens, audio components, and entry and exit ways may be supported in a similar manner as conventional display units and screens as described in U.S. Pat. 5,130,794 by the present inventor.
  • the entire display assembly 11 or 12 may be located on any suitable vehicle.
  • a display system for virtual interaction with said recorded images comprising:
  • (a) input means including:

Abstract

The system includes a panoramic input device (6, 7, 8), a computer processing system (15-18), and panoramic audio-visual presentation device (11, 12). The panoramic input device may include a sensor assembly (36) including a plurality of positionable radar (29, 38), camera (28, 37), and acoustical sensors (30, 39) for recording signatures from all sides of a subject simultaneously. The computer processing system processes signals from the input device to generate, update and display a virtual model. An interactive input device (10) is provided for allowing interaction with the virtual model. The panoramic audio-visual presentation device may be either a head-mounted display (11) or a closed structure (42) having display units mounted in all viewable directions surrounding a participant. The display units may be conventional (31), stereoscopic (32), autostereographic (33), or holographic (34) display systems.

Description

IMPROVED PANORAMIC IMAGE BASED VIRTUAL REALITY/
TELEPRESENCE AUDIO-VISUAL SYSTEM AND METHOD
BACKGROUND OF THE INVENTION
1. Field of the Invention
This invention relates generally to panoramic display methods and more particularly to the sensor fusion of data from the panoramic arrangement of three-dimensional imaging sensors and surface contour sensors to form virtual objects and scenes, the processing of the virtual objects and scenes based on a viewer operating interactive computer input devices to effect the manipulation of the virtual objects and scenes defined in the computer, and the display of the effected virtual objects and scenes on a panoramic display unit to the extent that the viewer perceives that the virtual objects and scenes completely surround the viewer.
2. Description of the Related Art
My previous U.S. Pat. No. 5,130,794 describes a panoramic image based virtual reality system that incorporates a
multi-lens camera system with spherical field-of-view (FOV) coverage. As shown in FIG. 2, objective lenses of the '794 camera system face outward with adjacent or overlapping FOV coverage. The imagery from the camera is surface mapped onto the interior of a three-dimensional (3-D) shape defined in a special effects processor of a computer. Alternatively, the input source is at least one computer graphics system that generates three-dimensional graphics of spherical FOV coverage. The viewer operates interactive input devices associated with the computer to manipulate the texture mapped virtual images. The virtual environment is instantaneously affected before the viewer and displayed on either a
head-mounted display assembly or on contiguous display units positioned beneath, to the sides, and above the viewer.
Limitations of the panoramic video camera system in '794 are that the panoramic camera does not record a non-spherical field of view (FOV) and does not incorporate a non-contact shape sensor.
An improvement over the existing system is proposed in my Disclosure Document No. 197612, specifically Fig. 15, filed with the U.S. Patent and Trademark Office in Feb. 1986, and in my recent paper entitled "Image Based Panoramic Virtual
Reality System", presented at the SPIE/IS&T Symposium on
Electronic Imaging: Science & Technology 92; Visualization, Holography, and Stereographics; Visual Data Interpretation, Paper No. 1168-02, on Feb. 9, 1992.
In these documents a multi-lens camera system with
positionable taking lenses is described. Taking lenses of the camera are faced inward or outward to record imagery of a subject in a continuous simultaneous manner. By combining panoramic visual field of view sensor data with associated shape sensor data a realistic panoramic image based
three-dimensional computer generated model is rendered.
Imagery from the camera is surface mapped onto the surface of a three-dimensional shape defined in a computer. The shape is input by a panoramic 3-D digitizer device. Audio data is input by a panoramic 3-D audio system. Audio attributes are assigned to subjects in the model. Shape, imagery, and audio sensors may be combined to form one sensor array. Sensors are positioned adjacent to one another to facilitate adjacent or overlapping coverage of a subject. Preferably corresponding panoramic shape, imagery, and audio signatures of a subject(s) are collected simultaneously. In this manner action of a 3-D subject is recorded from substantially all aspects at a single moment in time. The participant operates interactive input devices associated with the computer to manipulate the virtual object. In one example, the participant observes the model on a head mounted display system. In another example, the participant is surrounded by contiguous audio-visual display units. In the latter example, each display unit displays a segment of the model.
It is therefore the objective of this invention to provide a more versatile image based panoramic virtual reality and telepresence system and method. Still another objective is to produce systems and methods for recording, formatting,
processing, displaying, and interacting with data representing 3-D beings, objects, and scenes. More specifically, an objective of this invention is to provide a positionable multi-lens camera system for recording contiguous image segments of an object, being, adjacent surrounding scene, or any combination of these types of subjects; a signal
processing means comprising first computerized fusion
processing system for integrating the positional camera system with corresponding digitized shape and contour data; a second computerized fusion processing system for integrating first fused data with other fused data representing adjacent
portions of a being, object, or scene comprising a panoramic computer generated model; where various 3-D digitizer systems may be incorporated for entering 3-D shape and contour data into an image processing computer; a third processing means to manipulate the geometry of subjects comprising the virtual model; a fourth processing means for sampling out given fields of regard of the virtual model for presentation and
distribution to display units and audio speakers; where signal processing means includes an expert system for determining the actions of subjects of the computer generated model; where the signal processing means includes image segment circuit means for distributing, processing, and display of the model; where the system includes a 3-D graphics computer system for the generation, alteration, and display of images; and a system and method for image based recording of 3-D data which may be processed for display on various 3-D display systems to include head mounted display systems, and room display systems with stereographic, autostereoscopic, or holographic display systems.
It is also an objective of this invention to provide
interactive input devices operable by a viewer to cause the generation, alteration, display of 3-D images on said display assembly means; to provide associated 3-D audio systems; to provide alternative viewer interactive and feedback devices to operate the interactive input devices and associated
processing means such that the resultant virtual environment is simultaneously effected before the viewer's eyes; to provide an associated telecommunications system; and to provide a system for incorporation with a host vehicle, teleoperated vehicle, or robot.
BRIEF DESCRIPTION OF THE DRAWINGS
FIG. 1 is a flowchart to which reference will be made in generally explaining the overall operation of the recording, processing, and audio-visual system 1 according to the present invention.
FIG. 2 is a perspective view of a cameraman carrying a panoramic camcorder system of spherical coverage described in prior art.
FIG. 3 is a greatly enlarged fragmentary sectional view of one of the camera arrangements for optically recording image segments representing sides of a three-dimensional subject into a single frame according to the present invention.
FIG. 4 is a perspective view of a sensor array for recording acoustical, visual, and shape data for input according to the present invention.
FIG. 5 is a side sectional view of the sensor array shown in
FIG. 4 .
FIG. 6 is a diagrammatic representation of an inward looking three-dimensional input source incorporating the sensor array shown in FIGS. 4 and 5.
FIG. 7 is a diagrammatic representation of an inward and outward looking panoramic three-dimensional input source assembly incorporating the sensor array shown in FIGS. 6 and 7.
FIGS. 8A-8D are diagrammatic representations of video frames of three-dimensional coverage of beings and objects to be modeled in 3-D in the present invention.
FIGS. 9A-9B are diagrammatic representations of video frames of three dimensional coverage of beings, objects, and
background scenes, respectively, to be modeled in 3-D in the present invention.
FIG. 10 is a diagrammatic representation of an HDTV frame which includes both foreground and background imagery
necessary to model a virtual environment.
FIG. 11 is a fragmentary view onto the top of the virtual world model in which recorded three-dimensional beings, objects, and/or scenes are incorporated according to the present invention.
FIG. 12 is a fragmentary view onto the side of the virtual model shown in FIG. 11.
FIG. 13 is a diagrammatic illustration showing how imagery is texture mapped onto a three-dimensional wireframe model to form a three-dimensional virtual model generated and processed for presentation by audio and video computer signal processing means of system 1.
FIG. 14 is a perspective, partially diagrammatic view showing an image based virtual model generated for audio-visual presentation by the visual signal processing means of system 1.
FIG. 15 is a block diagram of an image formatting system for recording, processing, and display of an image of three- dimensional coverage which embodies the present invention.
FIG. 16 is a block diagram of a second embodiment of system.
FIG. 17 is a block diagram of a third embodiment of the system.
FIG. 18 is a block diagram illustrating the incorporation of a three-dimensional display system according to the present invention.
FIG. 19 is a perspective, partially diagrammatic view
illustrating the three-dimensional viewing system described in FIG. 18.
FIG. 20 is a block diagram of an embodiment of the present invention including a telecommunications system.
FIG. 21 is a perspective, partially diagrammatic view illustrating the telecommunications embodiment according to FIG. 20.
FIG. 22 is a block diagram illustrating an embodiment of the present invention wherein a host vehicle control system with a panoramic sensor, processing, and display system provides telepresence to a viewer/operator for control of the host vehicle.
FIG. 23 is a sectional view of a host vehicle incorporating the present invention shown in FIG. 22.
FIG. 24 is a block diagram illustrating an embodiment of the present invention wherein a remote control system for a remotely piloted vehicle with a panoramic sensor system transmits a three-dimensional panoramic scene to a control station for processing and spherical coverage viewing in order to assist the controller in piloting the teleoperated vehicle.
FIG. 25 is a perspective, partially diagrammatic view illustrating the remote control three-dimensional viewing system described in FIG. 24.
LISTED PARTS IN DRAWINGS
1: Improved panoramic image based virtual reality/telepresence audio-visual system and method
2: Panoramic 3-D input source means
3: Panoramic 3-D signal processing means
4: Panoramic audio-visual presentation means
5: Suitable electrical interface means and associated signal (general)
(5a: video signal means)
(5b: digital shape signal means)
(5c: audio signal means)
6: Panoramic 3-D camera system
7: Panoramic 3-D digitizing system
8: Panoramic 3-D audio recording system
9: Host computer system
10: Interactive input system
11: Head-mounted display (HMD) system
12: Large display assembly
13: Subject
(13a: being)
(13b: object)
(13c: scene)
(-a: side a)
(-b: side b)
(-c: side c), etc.
14: Computer generated virtual world model
(14a: visual and shape model)
(14b: audio model)
(-a: modeled being)
(-b: modeled object)
(-c: modeled scene)
15: First processing means; fusion processor to wed shape and image segments.
16: Second processing means; fusion of model segments.
17: Third processing means; host simulation computer for manipulating world model; geometry processor.
18: Forth processing means; image processing for display and distribution.
19: Computer graphics system
20: VRT telecommunications system
21: VRT vehicle control system
22: Artificial intelligence system
23: Audio processing system
24: Participant (viewer/operator)
(24a: first participant)
(24b: second participant)
25: Mass storage device
(25a: visual and shape mass storage)
(25b: audio data mass storage)
26: Panoramic model segments
(26a: visual and shape model segment)
(26b: audio model segment)
27: Sensor(s)
28: Image sensor(s)
29: Shape sensor(s)
30: Audio (acoustical) sensor(s)
31: Conventional display unit(s)
32: Stereographic display unit(s)
33: Autostereoscopic display unit(s)
34: Holographic display unit(s)
35: Audio speaker(s)
36: Sensor array
37: Camera
38: Radar
39: Microphone
40: Array housing
41: Overlapping field of regard coverage of sensors.
42: Edge of adjacent field of regard coverage of sensors.
43: Rigid transparent support
44: Array assembly
45: Screw
46: Support armature
47: Panoramic optical assembly arrangement
48: Objective lens
49: Light sensitive surface of the camera
50: Fiber-optic image conduit (bundle)
51: Focusing lens
52: Camera housing
53: Charge Coupled Device (CCD)
54: Sheathing of image conduit
55: Shape data (wireframe) representing subject model
56: Image data representing a subject
(56a: being)
(56b: object)
(56c: scene)
57: Audio data representing a subject
(57a: being)
(57b: object)
(57c: scene)
58: Viewing space
59: Head position of participant
60: Hand location of participant
61: Sample frame of panoramic camera
62: Transmitter; for transmitting an over-the-air stereo audio signal.
63: Receiver; for receiving an over-the-air stereo audio signal.
64: Stereo audio headphones
65: Structural supports of the large display assembly
66: Graphics input system
67: Videotape player
68: Videodisc player
69: Video analog-to-digital converter
70: Display unit; generally; may include audio system.
71: Displayed scene
72: Image segment circuit means
73: Image control unit (including chassis, processors, etc.); may include audio means.
74: Polygonal surfaces of model 14a
75: Head position of viewer
76: Position sensing system sensor
77: Position sensing system source
78: Position sensing system electronics unit
79: Audio signal to means 4
80: Video signal to means 4
81: Display unit viewing surface
82: Position and orientation data and associated conductor from interactive input system 10
83: Source conductor line
84: Sensor conductor line
85: Radar antenna
86: Radar waveguide
87: Radar transmitter/receiver
88: Master clock
89: Conventional signal router/switcher
90: 3-D display unit; generally.
91: 3-D display system embodiment of system 1.
92: Encoder/compressor
93: Encryptor
94: Modem
95: Decryptor
96: Decoder /expander
97: Non-contact position and orientation sensor system (i.e.
Radar or LADAR); may include camera system.
98: Digital data network
99: Telephone line
100: Edge of projected image
101: Floor of large assembly
102: Host vehicle
103: Host vehicle controls
104: Host vehicle control surfaces and motors.
105: Rear projection screen
106: Entry/exit assemblies for assembly 12
107: Structural support, framework, and fasteners for large assembly 12.
108: Remotely piloted vehicle
109: Transceiver; for sending and receiving radio frequency (RF) over-the-air digital data.
110: Over-the-air RF digital data link
111: Participant support means
112: Remote vehicle control system
113: Remote vehicle control surfaces and motors
114: Remote vehicle manipulators
115: Timing signal conductor
116: Model signal conductor
117: Processing means for conventional TV
118: Processing means for stereo display TV
119: Processing means for autostereoscopic TV
220: Processing means for holographic TV
221: Processing and distribution system for image segment circuit means
222: Audio-visual units of image segment circuit means
223: Hemispherical scan of LADAR system; may include integral registered camera system.
224: Near field of view of LADAR system; may include
integral registered camera system.
225: VRT control station for remotely piloted vehicle
226: Video compression and data system (including
communications buffer)
227: Video decompression and data system (including
communications buffer)
228: Peripheral devices
DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS
As required, detailed embodiments of the present invention are disclosed herein; however, it is to be understood that the disclosed embodiments are merely exemplary of the invention which may be embodied in various forms. Therefore, specific structural and functional details disclosed herein are not to be interpreted as limiting, but merely as a basis for teaching one skilled in the art to variously employ the present
invention in virtually any appropriately detailed structure.
For clarity of description, a preliminary summary of the major features of the recording, processing, and display portions of a preferred embodiment of the system is now provided, after which individual portions of the system will be described in detail.
Referring to the drawings in more detail.
As shown in FIG. 1, the reference 1 generally designates a system and method of rendering and interacting with a three-dimensional (3-D) computer generated model that comprises the virtual reality/telepresence system 1 presented to a participant 24. The system 1 generally includes a panoramic input source means 2, panoramic signal processing means 3, and panoramic audio-visual presentation assembly means 4 connected generally by suitable electrical interface means 5. Electrical interface means 5, including the appropriate conductor, input/output port or jack interconnections, and associated signal, is indicated by lines and arrows in the drawings. Optional or alternative interface means and devices are indicated by dashed lines in the drawings. Input means 2 generally consists of a panoramic 3-D camera system 6, a panoramic 3-D digitizing system 7, and a panoramic 3-D audio recording system 8. Input means 6, 7, and 8 include a plurality of respective sensors that are positioned to record geographic and geometric subjects. A subject 13 may comprise three-dimensional beings or things in the real world. The world model 14 comprises a visual and shape model 14a that includes shape and imagery, and an audio model 14b that includes acoustical recordings. The audio model corresponds to the shape and imagery model. Preferably, all sides of the subject are recorded simultaneously by the input means 6, 7, and 8.
Signal processing means 3 preferably includes a first computer processing means 15 for sensor fusion of the resulting imagery signal 5a and shape data signal 5b. The first processing means operates on the signals 5a and 5b to combine shape and surface data of corresponding segments of the 3-D subject. The resulting 3-D model segments 26a are portions of the computer generated world model 14a. Signal processing means 3 preferably also includes a second computer processing means 16 for fusion of the imaging and shape data segments 26a derived by first apparatus 15 to form a continuous panoramic world model 14. Signal processing means 3 also includes a third computer processing means 17 for manipulating the computer generated model 14a. The third processing means is typically operated to perform interactive 3-D visual simulation and teleoperated applications. Signal processing means 3 also includes a fourth computer processing means 18 to sample out and transmit image scene 71 segments of the world model 14a to each respective display unit of the audio-visual assembly means 4. Means 3 includes processing means for interface with input sources 2, peripheral computer data entry and manipulation apparatus referred to as an interactive input system 10, and assembly 4. Signal processing means 15, 16, 17, 18, and 23 include a central processing unit, terminal bus, communication ports, memory, and the like typical to a conventional computer(s). Operating system software, board level software, processing data, generated images, and the like are stored in mass storage devices 25 which may include disk drives, optical disk drives, and so forth. All signal processing means 15, 16, 17, 18, and 23 may be incorporated into a single computer 9 or a plurality of networked computers (9 to the nth) housed in a single or separate chassis. Additionally, means 3 may include a computer graphics system 19, a telecommunications system 20, a vehicle control system 21, or an artificial intelligence system 22 to perform special processing functions. Special processing systems 19, 20, 21, and 22 may be integral or networked to computer 9.
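By way of illustration only, the flow of data through the first, second, third, and fourth processing means 15, 16, 17, and 18 just described may be sketched as follows. The sketch is written in Python purely for clarity; every function and variable name is hypothetical, and a practical system would run these stages on dedicated image generation hardware rather than in software of this kind.

    # Illustrative sketch of the four-stage pipeline of means 15-18.
    # All names are hypothetical stand-ins; stitching and rendering
    # details are omitted.

    def fuse_sensor_data(image_segments, shape_segments):
        # First means 15: wed each image segment to its shape data.
        return [{"shape": s, "texture": i}
                for i, s in zip(image_segments, shape_segments)]

    def fuse_model_segments(model_segments):
        # Second means 16: match adjacent segments into one panoramic model 14a.
        return {"segments": model_segments}

    def manipulate_model(world_model, participant_input):
        # Third means 17: update the model from interactive input system 10.
        world_model["viewpoint"] = participant_input.get("head_position", (0.0, 0.0, 0.0))
        return world_model

    def sample_views(world_model, display_units):
        # Fourth means 18: sample one scene 71 per display unit of assembly 4.
        return {unit: ("scene toward", unit, world_model["viewpoint"])
                for unit in display_units}

    segments = fuse_sensor_data(["image_a", "image_b"], ["shape_a", "shape_b"])
    model = fuse_model_segments(segments)
    model = manipulate_model(model, {"head_position": (1.0, 1.7, 0.0)})
    frames = sample_views(model, ["display_70a", "display_70b"])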
Audio sensors 30 are faced inward about a subject or outward to record signals representing audio segments 26b of a surrounding subject. Preferably, the image, shape, and audio sensors 28, 29, and 30, respectively, are positioned adjacent to one another and record a continuous corresponding subject 13. The audio processing system 23 receives recorded audio signals 5c from the panoramic 3-D audio input system 8. The audio signals 5c as assigned to the modeled subject 14a comprise an acoustical world model 14b. The audio model 14b is continuously updated by the computer 23 based on data received from the interactive input system 10. Computer 9 communicates changes to the world model 14 via a digital data link interconnected to computer 23. Audio means 23 includes processing means and software means for the generation of 3-D audio output in response to changes and actions of subjects modeled in the computer generated model 14a. The output audio signals are transmitted to speakers positioned about the participant by way of the panoramic audio-visual assembly means 4.
The preferred embodiment of the system 1 may generally comprise two alternative panoramic audio-visual assembly means 4: a head-mounted display (HMD) assembly 11, or a large display assembly 12. The large display assembly 12 may incorporate conventional 31, stereographic 32, autostereoscopic 33, or holographic 34 display units.
Specific processing means 18 compatible with a given display unit's 31, 32, 33, or 34 format operate on the virtual model 14a. The processing means 18 then outputs a signal
representing a model segment 26a to a predetermined display unit 31, 32, 33 or 34. Display units 31, 32, 33, or 34 are placed contiguous to one another in a communicating
relationship to the participant such that a continuous scene is presented to the participant. In this manner the same basic imagery, shape, and audio data is rendered into a model 14a that may be operated upon for presentation on
conventional, stereographic, autostereoscopic, or holographic display systems.
The model 14 presented to the participant may be derived from prerecorded data stored in a mass storage device 25.
Alternatively, live feeds from input sources 2 at a remote location are processed in near real time, and the participant can interact with the remote location by using teleoperated devices. In these manners the viewer is immersed in a highly interactive and realistic computer simulation.
Still referring to FIG. 1, in operation a panoramic sensor array comprising a plurality of shape, visual, and aural sensors is positioned to record a three-dimensional subject in a substantially continuous panoramic fashion. Each sensor 27 outputs a corresponding signal specific to that sensor's field of coverage. Signals representing visual and shape data are transmitted from input sources 6 and 7 to the signal processing means 3. A first computer processing means 15 fuses the shape and visual signals to form model segments 26a. The pool of model segments is then transmitted to a second processing means 16 that fuses or matches adjacent and corresponding model segments to one another. The matching of intersections of the pool of model segments yields a panoramic three-dimensional model 14a. Typically the model 14a is rendered such that three-dimensional subjects in the foreground are of high resolution and three-dimensional subjects in the background are of less resolution. Preferably, the background scene lies approximately ten feet beyond the boundary of the furthest distance the participant would venture into the virtual model. This is because beyond ten feet perspective is not significantly perceptible to the average human. Beyond this viewing distance the background scene of the model 14a does not need to be rendered in a 3-D manner because the viewer cannot perceive parallax and hence the realism is not increased. A third processing means 17 receives the fused model of panoramic coverage. The third means manipulates the geometry of the model 14a based on viewer interaction. A fourth processing means 18 samples out portions of the model and transmits signals representing scenes 71 of a given field of view to predetermined display units of the display assembly 11 or 12. The dimensions and detail of the virtual model 14 may be increased by moving the sensors to different locations throughout the real world environment in order to increase the resolution of the recorded subjects and to increase the pool of perspective views of subjects throughout the recorded environment. These sensor recordings are then processed and added to the existing data base and existing model in the same manner as prior subjects modeled for inclusion in the computer generated environment. Simultaneous with visual input, processing, and display, audio sensors 30 transmit audio signals to an audio processing system 23. The audio processing system is operated to assign audio signals to visual subjects positioned in and comprising the panoramic computer generated model.
An interactive input system 10, such as a position sensing system, monitors the viewer's head position. Position data is transmitted to the visual and audio simulation processing systems 17 and 23, respectively. The position and orientation data from system 10 is processed by the visual and audio simulation processing means to update the model 14 after each of the participant's actions. Updating the model typically involves the participant moving a virtual object in the model with his hand, or changing the viewpoint of the displayed scene based upon a change in the participant's head position. Positional changes of objects, subjects, and scenes are continuously updated and stored in the memory of the computer 9.
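As a purely illustrative aid, one update cycle of the kind described above may be sketched as follows; the tracker fields, object names, and functions are hypothetical and are not part of the disclosed apparatus.

    # Illustrative update cycle: head position drives the viewpoint and a
    # grabbed virtual object follows the hand. Hypothetical names only.

    def read_tracker_sample():
        # A real system would poll magnetic or ultrasonic sensors of system 10.
        return {"head_position": (0.2, 1.6, -0.4),
                "head_orientation_deg": (0.0, 35.0, 0.0),
                "hand_position": (0.5, 1.2, -0.6)}

    def update_model(world_model, sample, grabbed_object=None):
        # The viewpoint follows the participant's head; a grabbed object
        # follows the participant's hand. Changes persist in the stored model.
        world_model["viewpoint"] = sample["head_position"]
        world_model["view_heading_deg"] = sample["head_orientation_deg"]
        if grabbed_object is not None:
            world_model["objects"][grabbed_object] = sample["hand_position"]
        return world_model

    world_model = {"objects": {"object_13b": (0.0, 1.0, -1.0)}}
    world_model = update_model(world_model, read_tracker_sample(),
                               grabbed_object="object_13b")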
Imagery and audio signals are transmitted from the visual 15-18 and audio 23 processing means to the audio-visual assembly means 11 or 12. The processing means has appropriate output processors, conductors, and interface connections to transmit the visual and audio signals to the visual display units 31, 32, 33, or 34 and audio speakers 35 of the display assemblies 11 or 12. The visual model 14a and aural model 14b are updated and displayed instantaneously before the viewer's eyes.
INPUT MEANS
Referring to FIG. 1 in more detail, input means comprises a 3-D camera system 6, a 3-D digitizing system 7, and a 3-D audio system 8. Preferably, at least one image sensor 28 of each image system, at least one shape sensor 29 of each 3-D digitizing system, and at least one acoustical sensor 30 of at least one audio system are positioned adjacent to one another and record a continuous corresponding segment of the subject 13. FIG. 2 illustrates a panoramic camera system 6 of prior art in which a plurality of image sensors 28a-28f and audio sensors (not shown) are faced outward about a point or area to record a contiguous surrounding visual subject scene 13c. FIG. 3 illustrates a panoramic camera system in which image sensors 28a-28f are positionable and may be faced inward to record representations of each side of a subject 13.
FIGS. 4 and 5 illustrate a sensor array 36 including a visual system comprising a small conventional camera 37, a 3-D digitizing system comprising a small conventional radar 38, and an acoustical system including a microphone 39. The microphone, radar, and camera of each array have overlapping field-of-regard coverage 41. The overlapping coverage enables each array's sensors to record an acoustical, shape, and image signature of a given side of a subject 13. FIG. 6 illustrates a plurality of arrays 36a-36f faced inward about a 3-D subject. Each array has adjacent field-of-regard coverage 42 of the subject such that each side of the 3-D subject is recorded. Acoustical, shape, and image signatures from each of the arrays are transmitted to signal processing means 3. FIG. 7 illustrates that sensor arrays may be faced both inward and outward to record a subject. Arrays are positioned adjacent to one another to form a panoramic array assembly 44. Sensors of the adjacent arrays 36a-36f of the assembly are positioned to have adjacent field-of-regard coverage 42. The array assembly has a substantially panoramic 3-D spherical field-of-regard coverage about a point. A plurality of array assemblies 44a-44f may be arranged in the real world to simultaneously record a subject 13 environment from various points of regard. In this manner, virtually all sides of a subject surrounded by the array assemblies are recorded, and background scenes surrounding the subject are also simultaneously recorded. Alternatively, a single assembly 44 may be moved thru space in the real world and record a subject 13 environment from various points of regard at different times. The array 36 or array assembly 44 may be constructed in a portable fashion such that the array or array assembly is carried through a real world environment by a living being or vehicle. Each array of the assembly transmits its respective acoustic, shape, and imagery signatures to the processing means 3. Processing means operates on the signature data to render the virtual world model 14. Array 36 and array assembly 44 may be fastened together and supported by conventional means such as screws 45 and support armature 46. Furthermore, sensors may be distributed over a vehicle such that the inner or outer skin of the vehicle becomes a housing for the sensors. The sensors can be placed on remote or host, piloted or unpiloted vehicles.
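For illustration only, the grouping of a camera 37, radar 38, and microphone 39 into each array 36, and of six such arrays into an assembly 44 with adjacent coverage, may be sketched as a simple data structure; the Python names below are hypothetical.

    # Illustrative grouping of the sensors of one array 36 and of an
    # assembly 44 of six arrays 36a-36f with adjacent fields of regard.
    from dataclasses import dataclass

    @dataclass
    class SensorArray:                 # one array 36
        camera_id: str                 # image sensor 28
        radar_id: str                  # shape sensor 29
        microphone_id: str             # acoustical sensor 30
        field_of_regard_deg: float     # shared coverage 41

    # Assembly 44: six arrays facing the six sides of a subject 13.
    assembly_44 = [SensorArray(f"camera_{s}", f"radar_{s}", f"microphone_{s}", 90.0)
                   for s in ("a", "b", "c", "d", "e", "f")]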
1) THREE-DIMENSIONAL PANORAMIC SHAPE INPUT
A panoramic 3-D digitizing system 7 comprises one type of input source 2 and is operated to input 3-D data representing a subject 13. The system 7 is operated to record the shape of a 3-D subject. System 7 may comprise a 3-D light pen, optical scanner, image recognition system, sonar, ultrasonic, laser scanner, radar, laser radar (LADAR) system or systems. Additionally, mathematical formulae defining the shape of the subject may be entered by an operator via a keyboard. The 3-D data is transmitted from the system 7 to a computer processing system 9 where it is operated upon. As shown in FIG. 13, the resulting 3-D data representing the subject is called a wireframe 55. The wireframe is a 3-D line and point computer generated rendering of a subject. The intersections of the lines form polygons that define the surfaces of the subject. A 3-D shape input system including a stylus and model table arrangement of the type described in U.S. Pat. 4,514,818 by Walker, available from Quantel Limited, UK, or the 3SPACE TM Digitizer available from Polhemus of Colchester, VT, may provide the shape data in system 1. Alternatively, a three-dimensional input system of a type described in U.S. Pat. 4,737,032 and 4,705,401 by Addleman and available from Cyberware Laboratory, Inc. as the Rapid 3D Color Digitizer Model 3030 and associated products may provide the shape data in system 1. The Cyberware digitizer incorporates sensing and illumination elements to record a three-dimensional subject's shape and color. Seconds later, a graphics workstation displays the object as a detailed, full color, three-dimensional model. Alternatively, a radar and camera system described in U.S. Pat. 5,005,147 by Krishen et al. may be incorporated to provide shape and imagery data in system 1. Still alternatively, a laser-radar (LADAR) system, including a video camera, available from Autonomous Technologies Corp. of Orlando, FL, may be incorporated to provide shape and imagery data in the system 1.
2) THREE-DIMENSIONAL PANORAMIC CAMERA INPUT
Preferably, a 3-D camera system 6 comprises a plurality of objective lenses typically faced inward about a being or object, and outward to record a scene. Preferably the objective lenses 48a-48f of the camera have overlapping or adjacent field of view coverage. Any conventional TV camera or non-standard camera may be utilized in the present system 1 that is compatible with signal processing means 3. The electrical section of the camera is structured to convert the visual images received by the image processor into electrical video signals 5a such that the information is in a format that is compatible with standard video processing equipment. Any conventional or unconventional video camera 37 may be adapted to accept the images from the disclosed optical systems in FIG. 1 thru FIG. 7. The image processor of the camera is structured to convert the visual images received into electrical video signals. Preferably, the processed camera signals are standard synchronized coded signals utilized in the United States for video transmission. The signal processor 3 may be modified to convert each received electrical video signal 5a from the image processor means into a standard or non-standard synchronized coded signal of any given country or format for transmission and processing as desired, such as NTSC, PAL, SECAM, IDTV, HDTV, or the like.
In both the spherical field of view optical assembly of FIG. 2, and the positionable field of view camera arrangement of FIG. 3, images may be combined by either electronic means or by optical means. Similarly, image chrominance, luminance, hue, and intensity may be controlled electronically, optically, or electro-optically by the camera or later by the signal processing means. Typically, when a plurality of cameras 6a-6f are incorporated, the plurality of images are compressed into a single frame by processing means 3. When a single camera 6 is incorporated, the images are optically integrated into a single frame.
Any of these arrangements may be incorporated with array 36, or array assembly 44, of the system 1.
Although simple optical systems are depicted in FIGS. 2-7, it should be clear to one skilled in the art that more complex optical arrangements can be employed. Other optical elements and electro-optical components that may be included are automatic shutters, automatic focusing devices, optical filters and coatings, image intensifiers, correcting and inverting lenses, lens adapters, sensitive recording surfaces and media of various types, formats, wavelengths, and resolutions, and so forth. These various optical arrangements are available to the designer to accomplish a given task. Standard video compression devices can be incorporated into the camera to compress the signal to aid in the storage, transmission, and processing of each image. Image sensors associated with moving target indicators (MTI), pattern recognition systems, and so forth, may be integrated with the optical systems of system 1. Conventional over-the-air video transmitters can be incorporated to transmit each image to a remote video receiver for processing and display.
FIG. 2 illustrates a prior art camera used for recording a scene of spherical field of view coverage. A type of spherical camera utilized is generally of the type described in U.S. Patent 5,130,794 by the present inventor. As shown in FIG. 9A, in this way a spherical field of view scene about a point is recorded on a single frame. FIG. 3 illustrates a positionable 3-D camera system for recording all sides of subjects to be modeled in the computer generated virtual environment. Images are transmitted from the objective lenses, thru fiber optic image conduits, in focus to a light receiving surface 49 of a camera 37. As shown in FIGS. 8A thru 10, in this way all sides of a subject or subjects are recorded in a single frame 61. Typically, optical elements, as shown in FIG. 3, are arranged to record images of various sides of the subject into a single frame. Optical elements such as mirrors, prisms, or coherent fiber-optic image conduits transmit images from one or more objective lenses to the light sensitive recording surface of the camera or cameras. The optical fiber bundles are of a type that allows them to be twisted en route such that the desired image orientation at the exit end of the bundles is facilitated. Image bundles have a rectangular cross section to facilitate matching the format of the image with the camera's format. A cross sectional resolution of 60 to 100 fibers per millimeter and low light loss transmission construction are incorporated to maintain image quality. Optical fibers are gathered at their exit end such that the image fibers' optical axes are parallel to one another and perpendicular to the light sensitive recording surface of the camera. The image focused on the exit end of the conduit is optically transmitted in focus to the light sensitive surface of the camera. FIG. 9B shows a frame in which a plurality of subjects' sides have been recorded by a camera of a type generally illustrated in FIG. 3.
Alternatively, FIG. 10 shows that imagery of all beings, objects, and background scenery comprising model 14a may be combined in a single frame. Preferably, a high resolution sensor, such as an HDTV or IDTV recording system, is incorporated. High resolution recording allows images to be enlarged for later viewing while still retaining acceptable detail.
Alternatively, as shown in FIG. 5, the panoramic camera system may comprise a plurality of cameras. When a plurality of cameras are incorporated, images are recorded by positioning a plurality of image sensors toward and about a subject such that substantially continuous coverage is achieved by the sensors. Typically, conventional charge coupled devices (CCD) 53 are positioned directly behind objective lenses or fiber optic image conduits to record image segments of beings, objects, and scenes. Alternatively, image conduits 50 may be routed to a plurality of cameras. The images are then transmitted to television production equipment, or to computer 9 over a conductor, for processing. U.S. Pat. 5,130,794 by the present inventor and U.S. Pat. 5,023,725 by McCutcheon disclose optical and electronic methods and means of recording and compressing a plurality of images into a single frame generally applicable to incorporation in the present invention 1. Alternatively, instead of spatially compressing or spatially multiplexing a plurality of images into a single frame, images may also be time multiplexed. In such an arrangement, alternating images are electronically sampled into a single channel. The images are later demultiplexed by a video demultiplexer device and processed by the computer 9. A video multiplexer/demultiplexer system of a type generally incorporated into the present system 1 is available from Colorado Video Inc. as the Video Multiplexer 496A/B and the Video Demultiplexer 497A/B. While digital compression, spatial compression, or spatial multiplexing of the images is not required, it will be appreciated by those skilled in the art that compressing the images in one of these manners greatly assists in the transmission, processing, and storage of the vast amount of imagery necessary to build a panoramic model.
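The spatial multiplexing alternative just described may be pictured, for illustration only, as tiling several camera images into one frame and later extracting a chosen segment. The grid layout and names below are assumptions made for the sketch and do not describe the format of the cited Colorado Video hardware.

    # Illustrative spatial multiplexing: tile six camera images into a single
    # frame as a 2 x 3 grid, then recover one segment. Requires NumPy.
    import numpy as np

    def multiplex(images, rows=2, cols=3):
        # Tile equally sized H x W x 3 images into a single frame.
        h, w = images[0].shape[:2]
        frame = np.zeros((rows * h, cols * w, 3), dtype=images[0].dtype)
        for k, img in enumerate(images):
            r, c = divmod(k, cols)
            frame[r * h:(r + 1) * h, c * w:(c + 1) * w] = img
        return frame

    def demultiplex(frame, index, rows=2, cols=3):
        # Extract image number 'index' from the tiled frame.
        h, w = frame.shape[0] // rows, frame.shape[1] // cols
        r, c = divmod(index, cols)
        return frame[r * h:(r + 1) * h, c * w:(c + 1) * w]

    cameras = [np.full((240, 320, 3), k, dtype=np.uint8) for k in range(6)]
    single_frame = multiplex(cameras)
    side_c = demultiplex(single_frame, 2)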
3) THREE-DIMENSIONAL AUDIO INPUT
The 3-D audio input system 8 preferably is in communicating relationship with a high speed digital audio signal computer processing system 23 that delivers high quality three-dimensional sound over conventional headphones. In FIG. 2, microphones 39a-39f (not shown) are distributed to face outward from the lens housing to record a spherical acoustical field of regard coverage about a location. In FIG. 3, microphones 39a-39f (not shown) are faced inward about a subject to record a contiguous acoustical field of regard coverage emanating from the subject 13. As illustrated in FIGS. 4 and 5, microphones may be integrated with the array 36. The microphone 39 of each array preferably has audio coverage 41 corresponding to the shape and optical sensor coverage 41 of the same array. FIGS. 6 and 7 show that arrays may be placed beside one another to achieve continuous adjacent panoramic audio coverage 42 of a subject. Audio signals 5c from audio input sources 8a-8f are transmitted to the computer 23. The computer 23 may consist of a conventional personal computer or computer workstation 9 and includes printed circuit boards designed to sample the input audio signals 5c. Types of workstations and associated printed circuit boards generally utilized for 3-D audio input, processing, and output are manufactured by Division Inc. of Redwood City, CA as the Accoustetron complete studio quality audio workstation and the Convolvotron and Beachtron audio processor printed circuit boards. The audio system 23 uses the virtual 3-D audio client/server protocol standard (VAP) as an operating system software interface with audio input devices such as compact disc or magnetic tape machines, and for communicating with means 17 of computer 9. The boards occupy ISA-compatible personal computer slots. Input acoustical data from audio input sources 8a-8f are typically stored on digital or analog sources such as compact disc or magnetic tape machines, or may be digitized and stored in computer memory 25b and referenced to beings, objects, and scenes that form the visual model 14a comprising the computer generated environment 14.
The computer 23 samples and affects recorded audio data comprising the audio model 14b based on control data including audio source position and orientation, participant position and orientation, environment reflection and transmission attributes, and audio source and participant velocities in the virtual environment 14. The host computer processing means 17 is programmed with software such that control data is automatically transmitted over a standard RS-232C output of means 17 to audio means 23. Control data is operated upon by computer 23 as a local geometric transform and HRTF tables and gains and input into high speed digital audio signal processing system (e.g. Convolvotron or Beachtron) printed circuit boards. The input data is operated upon by the board level system of means 23 to affect input audio sources corresponding to each board level system. Input audio sources 8 are converted from an analog to a digital audio signal 5c. The board level system then outputs the affected audio to a digital-to-analog converter. The audio system 23 outputs through two independent 16-bit digital-to-analog converters, synchronized to the analog-to-digital converters, driving a conventional stereo output. The audio is typically transmitted over conductors to the stereo headphones on the participant's 24 head. In this manner the perceived locations of sound sources in the environment can remain independent of the orientation of the user.
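For illustration only, the control data path just described, in which listener and source poses sent from means 17 drive per-source processing in audio computer 23, may be sketched as follows. The inverse-distance gain rule and all names are assumptions for the sketch and do not describe the interface of the cited board level products.

    # Illustrative control data for audio computer 23: a per-source gain is
    # derived from listener and source positions before HRTF processing
    # (not shown). Hypothetical names and attenuation rule.
    import math

    def source_gain(source_pos, listener_pos, reference_m=1.0):
        # Simple inverse-distance attenuation as a stand-in for the real model.
        d = math.dist(source_pos, listener_pos)
        return min(1.0, reference_m / max(d, 1e-6))

    control_data = {
        "listener": {"position": (0.0, 1.7, 0.0), "heading_deg": 90.0},
        "sources": {"being_13a": (2.0, 1.5, 1.0), "object_13b": (-4.0, 0.5, 3.0)},
    }

    gains = {name: source_gain(pos, control_data["listener"]["position"])
             for name, pos in control_data["sources"].items()}
    # The gains feed the per-source processing stage ahead of the stereo output.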
Alternatively, audio computer 23 may comprise a personal computer or workstation that accesses analog or digitally recorded sound data that is input or stored directly in computer 23 or stored on disk, digital-audio tape, or compact disc. Computer 23 uses the Musical Instrument Digital Interface (MIDI) audio system to program, process, and output MIDI formatted data streams which in turn are interpreted by MIDI devices such as synthesizers, drum machines, and sequencers distributed across the MIDI network. The system outputs a stereo signal to the participant 24, who typically wears stereo headphones 64. The audio system 23 may be of a type generally available from Silicon Graphics Inc., CA as the IRIS Indigo or Personal IRIS 4D/35 workstation, which includes a DAT-quality digital-audio subsystem and is configured with Audio Library software; and available from VPL Research Inc., CA as the AudioSphere TM system for use with computer generated virtual environments. Optionally, the MIDI audio system may be designed integral to computer 9 (e.g. available on all 1993 and future Silicon Graphics Inc. platforms).
Alternatively, the stereo audio signals 5c can be transmitted over-the-air to a receiver 63 on the headphones 64 by an infrared or radio frequency device. An over-the-air audio system of a type generally incorporated into the present system is available from Radio Shack Inc. as the Wireless FM Microphone with transceiver and the Radio Shack Inc. Stereo FM Radio Headset. The transceiver 62 transmits an over-the-air stereo radio signal output by the computer system 23, tunable between 88-108 MHz, to the receiver 63 of an FM radio with stereo audio headphones 64 worn by the participant.
Alternatively, the audio signals 5c from audio input sources can be transmitted by conductors directly to speakers 35 distributed around the participant. The speakers are
supported by the structure 65 of the large display assembly in a manner consistent with U.S. Pat. 5,130,794 by the present inventor or that described in U.S. Pat. 4,868,682 by Shimizu et al.
4) GRAPHIC COMPUTER AS INPUT SOURCE
Still alternatively, a graphics computer 19 is operated as an input source 2 (not shown) to create a 3-D world model 14a. The computer system includes a digital computer including a central processing unit, memory, communications ports, and the like. Operating system software, graphics software, processing data, generated images, and the like are stored in mass storage devices which may include magnetic disk drives, optical disk drives, and so forth. Commands to operate the computer system and graphics generation commands are entered by means of the viewer interaction devices, which may include a keyboard and graphics input device. The graphics input device 66 may consist of one or more of a joystick, a trackball, a "mouse", a digitizer pad, a position sensing or pointing system, a non-contact viewer position and recognition sensing system, a voice recognition system, or other such devices. The graphic input device 66 may be operated as an input source 2 or as part of the participant's interactive input system 10. The computer graphics system 19 includes a bit mapped video display generator wherein each pixel element or pixel is accessible for generating high resolution images. The video display generator is connected to an input port by suitable conductor lines. The computer generated images are then further processed by the signal processing means 3 for display. The digital computer may be any type of computer system which has the required processing power and speed, such as the types which are employed in 3-D computer graphic animation and paint applications. The computer system may function as a simulator controller if the display means of the present invention are used as simulators, or as a game controller if the systems are employed as arcade games. The computer system may also be used to create special visual effects by combining artificial and animated scenes with live camera recorded scenes.
A graphics computer system of a type utilized herein is generally of the type manufactured by USA Quantel Inc.,
Stamford, CT as the Quantel Graphics "Paintbox" TM System, or by Alias Research Inc., Toronto, Ontario, Canada as Alias PowerAnimator TM animation software for operation on Silicon Graphics Inc. workstations. It is foreseen that the graphics computer system 19 and image processing system 17 may occupy the same computer 9.
Additionally, conventional videotape 67 and videodisc 68 players may transmit input signals 5c representing prerecorded image and audio signals to the signal processing means 3. As illustrated in FIGS. 10 thru 12, each frame may consist of images of one, several, or all subjects to be modeled in the virtual environment. Likewise, a single, several, or all the audio tracks may be recorded onto a single recording medium.
Additionally, a computer mass storage 25 database may serve as an input source. In such an instance, shape, imagery, and audio information may be encoded onto tape, or a magnetic or optical diskette or platter in an all digital format.
PROCESSING MEANS
Processing means 3 of system 1 at least includes a host computer system 9 and interactive input system 10.
1) HOST COMPUTER
Referring to FIGS. 15 and 16, host computer 9 preferably comprises a digital computer system with high level image generation and graphics capabilities. A host computer 9 compatible with the present invention is generally of the type manufactured by Silicon Graphics of Mountain View, CA as the SkyWriter TM computer system.
The high level 3-D image generation and 3-D graphics capabilities of computer system 9 typically consist of a type of digital computer subsystem which has the capability to texture-map at least an NTSC video feed onto a three-dimensional wireframe 55. The host computer 9 may include single or dual pipeline subsystem configurations. The computer 9 may include all signal processing means 3 comprising means 15, 16, 17, 18, 19, 10, and 23. The high level image generation subsystem generally includes processing means 17 for manipulation of the geometric model 14, and processing means 18 for output of designated portions of the processed model 14 for display. Computer 9 includes a digital computer including at least one central processing unit, system bus, memory, communications ports, and the like. Computer 9 may be configured to drive one to twelve analog or digital output channels. Operating system software, processing data, generated images, and the like are stored in mass storage devices which may include magnetic disk drives, optical disk drives, and so forth. Commands to operate the computer system 9 are entered by means of the participant interaction devices 10. The computer 9 may be configured to receive a single or plurality of inputs from a single or plurality of interactive input devices associated with the system 10 via host 9 communication ports. High-level 3-D image generation and 3-D graphics capabilities integrated with host computer 9 are generally of a type described in U.S. Pat. 4,827,445 by Fuchs; or manufactured by Silicon Graphics Inc., Mountain View, CA as the RealityEngine TM Host Integrated Computer Image Generation System with VideoLab TM or VideoLab/2 TM input/output option, and with VideoSplitter/2 TM option. SkyWriter TM with RealityEngine TM incorporates the IRIS Performer TM software environment to provide the performance and functional requirements for image generation applications.
The signal 5a is typically captured from an image input system 6 by a conventional frame grabber, preferably at 30 frames per second, and converted from an analog to a digital signal by a conventional analog-to-digital converter 69. Alternatively, the computer may receive and operate on a multiplexed or compressed video signal. The converted signal representing the image is then transmitted to the computer 9 for texture mapping onto a 3-D wireframe 55 representation defined in the host computer. Areas on the two dimensional frame 61 are referenced by look-up tables stored in memory of the computer 9. Pixels on the two dimensional frame 61 are assigned three-dimensional coordinates corresponding to a predefined wireframe model 55 stored in computer memory 25a. The image segments are manipulated by the computer to generate the effect to the participant that he or she is within the computer generated environment. To accomplish this effect in the HMD assembly or large display assembly, the computer performs programmed mathematical operations on the input imagery.
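For illustration only, the look-up table step just described, which routes regions of the incoming two dimensional frame 61 to faces of the predefined wireframe 55, may be sketched as follows; the rectangle format and all names are hypothetical.

    # Illustrative look-up table: each entry maps a rectangle of frame 61
    # to a named polygon of wireframe model 55. Hypothetical names; assumes
    # a NumPy-style frame indexed as frame[row, column].
    import numpy as np

    LOOKUP_TABLE = {
        # polygon id : (x, y, width, height) region of the video frame
        "polygon_front": (0,   0,   320, 240),
        "polygon_left":  (320, 0,   320, 240),
        "polygon_top":   (0,   240, 320, 240),
    }

    def texture_regions(frame, table=LOOKUP_TABLE):
        # Return the pixel block destined for each wireframe polygon.
        return {polygon: frame[y:y + h, x:x + w]
                for polygon, (x, y, w, h) in table.items()}

    frame_61 = np.zeros((480, 640, 3), dtype=np.uint8)   # one captured frame
    blocks = texture_regions(frame_61)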
A mass storage device 25 may be a separate conventional magnetic disk device or can be an integral part of the host graphics computer 9, i.e., the main memory of computer 9. The mass storage device is used to store data representing beings, objects, and scenes representing subjects 13 in the world model 14. The mass storage device contains a previously generated data base comprising a digital representation of the world model. FIGS. 11 and 12 illustrate a top plan and side view, respectively, of what can be termed the world model 14 or simulated environment constructed to illustrate the present invention. The subjects rendered in the model 14a are nominally divided into a plurality of convex polygonal surfaces 74 having arbitrary numbers of sides. The respective polygons are represented in the data base as a sequence of data words, each corresponding to a vertex of the polygon. By convention, the vertices of the polygon are sequenced in, for example, a counter-clockwise direction within the data group. A one bit flag in the corresponding data word is utilized to indicate the last vertex. Each vertex data word suitably comprises a plurality of fields representing the coordinates of the vertex in a chosen coordinate system (x, y, z), the intrinsic color intensities present at the vertex (R, G, B), and a vector indicating the unit normal to the polygon surface at the vertex. In this manner the intensity and depth value of each pixel comprising an object is stored in computer memory.
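A minimal illustrative sketch of the vertex data word layout just described, with hypothetical field names, is given below; it is offered only as an aid to understanding the data base organization.

    # Illustrative vertex data word: position, intrinsic color, unit normal,
    # and a one-bit last-vertex flag, with the vertices of each polygon 74
    # listed counter-clockwise. Field names are hypothetical.
    from dataclasses import dataclass
    from typing import Tuple

    @dataclass
    class VertexWord:
        xyz: Tuple[float, float, float]       # coordinates in the chosen system
        rgb: Tuple[int, int, int]             # intrinsic color intensities
        normal: Tuple[float, float, float]    # unit normal to the polygon surface
        last_vertex: bool = False             # one-bit flag marking the final vertex

    # One triangular polygon of model 14a, vertices in counter-clockwise order.
    polygon = [
        VertexWord((0.0, 0.0, 0.0), (255, 0, 0), (0.0, 0.0, 1.0)),
        VertexWord((1.0, 0.0, 0.0), (0, 255, 0), (0.0, 0.0, 1.0)),
        VertexWord((0.0, 1.0, 0.0), (0, 0, 255), (0.0, 0.0, 1.0), last_vertex=True),
    ]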
Alternatively, a single or plurality of live video feeds may be transmitted to computer 9 and processed in near real time. In such an instance, look up tables instruct the computer to sample predetermined portions of the image and texture map them onto predetermined portions of the 3-D wireframe model defined in the computer. Video compression systems and video multiplexer/demultiplexer systems may be operated to facilitate the reading in of images from a plurality of video sources. Or alternatively, a television production unit with image compression functions may be used to compress a plurality of sources into a single frame. A production system of a type and function compatible with the present system is described in U.S. Pat. 5,130,794 by the present inventor. The preassigned area on the model 55 on which the image segment is positioned and oriented for texture mapping may be stationary or have motion. The model may correspond to a person's head or body or an object in the real world. In such an instance, the being or object is tracked by position sensors located on the real object. In this manner the computer can keep track of the position and orientation of the corresponding model in the computer 9 and texture map each pixel image in its proper position on the model of the object or being. The ability to sample a live video feed is a requirement in telepresence applications of system 1. Real-time texture mapping from a live video feed is especially useful in an image based virtual teleconferencing system like that described in FIG. 20 and FIG. 21. A television production system that can function as host computer 9 in system 1, especially designed for creating the effect of texture mapping live video on a 3-D shape, is generally of a type cited in U.S. Pat. 4,951,040 by McNeil et al., U.S. Pat. 4,334,245 by Michael, and U.S. Pat. 4,360,831 by Kellar, and corresponding products of a type generally available from Quantel Inc. of Darien, CT as the Digital Production Center with MIRAGE TM or ENCORE TM, a 3-D image processing system. It will also be appreciated that only one participant may be receiving images from computer 9, and that either a single or a plurality of HMD assemblies 11 or large display assemblies 12 may be incorporated or interconnected. Or additionally, participants may use plural computers (9a to the nth) to operate on the same environment, e.g. telecommunications. In such an instance, only position and orientation data would be transmitted to a remote site to update the virtual model 14 of the remote computer 9. Viewers may interact with one another's virtual model instead of or as well as with virtual models generated by the computer 9.
As shown in FIG. 1, preferably an integrated part of the host computer 9 is a graphics computer system 19. The
graphics computer system is in direct communicating
relationship with the computer 9. The keyboard, touch tablet, and host computer 9 are operated to control the processors and memory and other devices which comprise the graphics computer system. Various input sources may be routed to the graphics computer system for rendering. Once the graphics computer system has been operated to create or affect an existing picture, the picture is stored in mass storage or bused as a picture signal to the host computer 9, image segment circuitry means, or directly to a display unit. The computer graphics system comprising a digital computer may be operated to create or affect the recorded video images either before or after the images have been texture-mapped onto a wireframe. Typically the participant 24 affects the video images frame by frame. Such a graphics system is used in the present invention to affect the images presented in either a 2-D or 3-D format. The data to be affected is derived from a video input source 6 or storage device 25. The components in a typical vector or raster electronic graphics system include a touch tablet, a computer, framestore, and a display.
As shown in FIG. 1, it is foreseen that the system 1 may include an expert system 22 with a complementary data base. The expert system is in direct communicating relationship to computer 9. The expert system may be housed in a separate chassis and communicate thru conventional conductors and input/output ports with computer 9. Or alternatively, the expert system may be integral to computer 9. The knowledge system is provided to respond to participant 24 requests or actions, and each request or action has a record including a plurality of parameters and values for those parameters. The expert system is provided to process the record of a specific request or action to answer or respond to that request or action, and the complementary database stores a plurality of records of requests or actions having known answers or responses. Any request from a participant 24 is preprocessed by searching the complementary database for a record identical to the record of the request or action. If an identical record is found, the known answer or response to the request having that identical record is given to the participant to answer his or her request; however, if no identical record is found in the complementary database, the expert system is invoked to answer or respond to the request or action. Expert systems of a type generally compatible with system 1 are described in U.S. Pat. 4,884,218 by Agnew et al. and U.S. Pat. 4,884,217 by Skeirik et al. Answers or responses transmitted from the expert system are interpreted by computer 9 such that the model 14 is updated.
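For illustration only, the order of request handling just described, in which the complementary database is searched for an identical record before the expert system is invoked, may be sketched as follows; the record format and all names are hypothetical.

    # Illustrative request handling: consult the complementary database of
    # previously answered requests before invoking expert system 22.

    complementary_db = {
        # record of a request (frozen parameter/value pairs) -> known response
        frozenset({("action", "open"), ("object", "door")}): "door opens inward",
    }

    def invoke_expert_system(record):
        # Stand-in for the rule-based inference of expert system 22.
        return "inferred response for " + str(sorted(record))

    def respond(request_params):
        record = frozenset(request_params.items())
        if record in complementary_db:             # identical record found
            return complementary_db[record]
        return invoke_expert_system(record)        # otherwise invoke expert system

    answer = respond({"action": "open", "object": "door"})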
Additionally, responses transmitted from the expert system may be received by other peripheral devices 228 such as motion simulators, participant interactive tactile and force feedback devices, teleoperated vehicles, or any computer actuated device. Responses in the form of data are operated upon by the peripheral device to affect control surfaces, motors, and the like. In this manner subject beings modeled in the computer generated model may interact with subjects in the real world environment. A motion simulator of a type compatible with system 1 as device 228, generally responsive to participant actions and expert system data output, is manufactured by McFadden Systems, Inc. of Santa Fe Springs, CA as the Model 611A Motion System. A tactile and force feedback device that is generally of a type compatible with system 1 as device 228 is available from VPL Research Inc. of Redwood City, CA as the DataGlove TM models THX, TSK, and FBX TM's.
2) INTERACTIVE INPUT DEVICE SIGNAL PROCESSING
An important part of signal processing means 3 is the interactive input system 10 operated by actions of the
participant 24 to manipulate the virtual beings, objects, and scenes within the virtual environment. Interactive input devices (e.g. data gloves with position, orientation, and flexion sensors, and HMD with position, orientation, and eye tracking sensors) are typically connected by RS-232
input/output ports or the like to processing means of computer 9.
Data from device sensors are typically translated to machine language compatible with software programming of the host computer by an interactive input system's electronics unit 78. The electronics unit 78 may be mounted adjacent to the device, in a separate chassis, or in some configurations comprise a printed circuit board that is plugged into the system bus of computer 9. Alternatively, an interactive electronics unit 78 housed in a single chassis may provide an interface between left and right data gloves, HMD, and voice recognition systems and computer 9. The electronics unit 78 receives signals from the sensors over conductive cable and translates the signals into machine language compatible with the host computer 9. The electronics unit 78 contains interface circuitry to translate the data. The translated signals are transmitted from the electronics unit to the computer 9 over suitable conductors. An electronics unit 78 arrangement for interfacing various interactive devices to a host computer 9 workstation in system 1 is of a type generally utilized by NASA Ames Research Center, Moffett Field, CA, and operated as part of the Virtual Workstation and Virtual Visual Environment Display (VIVED) project.
The preferred embodiment of the system 1 generally comprises two display means: a head-mounted display (HMD) assembly 11 and a large display assembly 12. Data from the input devices is operated upon by the processing means of computer 9 to affect the virtual environment. Interactive input systems 10 and associated devices compatible with the present system 1 include spaceballs, position sensor systems, pointing systems, datagloves, datasuits, voice recognition systems, and other such input devices. Electro-optical, ultrasonic, and magnetic position sensing systems for tracking an object or being are also known to those skilled in the art and may be incorporated with the present system 1. Position sensing systems worn by a viewer or mounted on an object of a type particularly compatible with and that may be incorporated into the present system 1 include those manufactured by Ascension Technology Corporation of Burlington, VT as A Flock of Birds TM, and Polhemus of Colchester, VT as 3Ball TM, 3SPACE TM Tracker, and 3SPACE TM ISOTRAK TM. U.S. Pat. 4,542,291 and 4,988,981 by Zimmerman et al. describe an interactive input device and method compatible for incorporation with the present system 1. Products consistent with Zimmerman's invention are available from VPL Research Inc. of Redwood City, CA, as the DataGlove TM, DataVest TM, and DataSuit TM product lines. Associated VPL software compatible with the VPL interactive input system 10, including associated electronic units, interface circuitry, devices, and with computer 9 (e.g. SkyWriter TM workstation with RealityEngine TM image generation subsystem) is provided for incorporation with system 1. The VPL input device 10 contains a microprocessor that manages the real time tasks of data acquisition and communication through the various RS232C, RS422, user port and optional IEEE 488 port. The
microprocessor also controls a 3Space Isotrack TM position and orientation system incorporated in the control unit 10. Host computer 9 is programmed with software that allows a library of gestures, solid model simulations of complex objects, and the manipulation of said objects in real time.
Additionally, non-contact position sensor systems 97, such as the electro-optical systems described in U.S. Pat. 4,843,568 by Krueger et al. and U.S. Pat. 4,956,794 by Zeevi et al., are compatible and may be incorporated with the present system 1. The radar in U.S. Pat. 5,005,147 by Krishen et al. and the LADAR available from Autonomous Technologies Corp. of Orlando, FL may also be positioned above the viewing space to sense the position and orientation of participants and objects in the viewing space 58. Data from the radar or LADAR is processed by suitable computer processing means to determine the position and orientation of the viewer in the viewing space. The position data is then processed by computer 9 to affect the model 14 of system 1. Additionally, data from the same sensors that track the position and orientation of a target subject 13 (e.g. a LADAR or radar system with a video camera) may be operated upon by computer 9 to reconstruct the subject as a model 14a. In such an instance input system 2 and position sensing system 10 constitute the same system. The combined system 2 and 10 is placed about the viewer as in FIG. 2.
Furthermore, as shown in FIG. 17, a voice recognition system 227 may operate to translate audible sounds made by the participant 24 into machine language to control processing functions of the computer 9. The voice recognition system includes a microphone 39 worn by the participant that transmits a signal representing audible sounds over an associated conductor with an input jack arranged in receiving relationship to the voice recognition printed circuit board. Alternatively, a conventional over-the-air radio frequency transmitter and associated electronics are located in communicating relationship with the microphone. In this manner the participant's voice signals are transmitted over-the-air to a corresponding receiver in communicating relationship to the voice recognition system. A board level voice recognition system of a type utilized in system 1 is available from Speech Systems, Batavia, IL, as the Electronic Audio Recognition System. The boards may be mounted in the rack of the host computer 9, or in a separate chassis in communicating relationship with computer 9. The voice recognition system 227 operates with computer 9 to convert the voice signals into machine language to affect the model 14 of system 1. The monitored information from input system 10 via conductors 82 is used to modify standard computer graphics algorithms of the computer 9. This preferably comprises a laser-disc based or other fast-access digital storage graphic imagery medium. Typically, the viewer of either the HMD assembly or large display assembly may interact with the virtual model 14 by issuing suitable commands to the computer 9 by manipulative means or sensors attached to his fingers (e.g. DataGlove TM). Based on such manipulation, the spatial coordinates of the virtual model can of course be changed to give the impression of movement relative to the viewer.
3) IMAGE PROCESSING FOR DISTRIBUTION AND DISPLAY
Once the computer 9 has updated the world model 14, a given field or fields of view are selected in the computer and transmitted to each display unit 70 or units (70 to the nth), and to each audio speaker 35, for presentation to a participant. Display units 70 may consist of conventional 31, stereographic 32, autostereographic 33, or holographic 34 television or projection display units. Typically, one or two views are sampled by the computer for display on the HMD 11.
Typically six adjacent and independent views are sampled by the computer 9 for display on the large presentation assembly 12.
In FIGS. 15 through 18 computer 9 includes a video processing means 18, such as a "VideoSplitter/2" TM printed circuit board, that is programmed to select six independent and adjacent fields of view of the world model 14a for display on the large display assembly 12. The video processing means is in communicating relationship with the display processor and raster processor printed circuit boards. The raster processor comprises a board level unit that includes VLSI processors and graphics system memory. The data received by the raster processor from the output bus of the geometry processor 17 printed circuit board and data management system of computer 9 is scan-converted into pixel data, then processed into the frame buffer. Data in the frame buffer is then transmitted into the display processor of computer 9. Image memory is interleaved among the parallel processors such that adjacent pixels are always being processed by different processors. The raster processor contains the frame buffer memory, texture memory, and all the processing hardware responsible for color allocation, subpixel anti-aliasing, fogging, lighting, and hidden surface removal. The display processor processes data from the digital frame buffer and processes the pixels through digital-to-analog converters (DACs) to generate an analog pixel stream which may then be transmitted over coaxial cable to display units 31, 32, or 33 as component video. The display processor supports programmable pixel timings to allow the system to drive displays with resolutions, refresh rates, and interlace/non-interlace characteristics different from those of the standard computer 9 display monitor. The display processor has a programmable pixel clock with a table of available video formats (such as 1280 x 1024 at 60 Hz non-interlaced (NI), VGA (640 x 497 at 60 Hz NI), NTSC, PAL, and HDTV). All printed circuit boards comprising the fourth processing means 18 may be held in the chassis of the computer 9 and are generally of a type such as the RealityEngine TM host integrated computer system, including the IRIS Performer TM software environment, available from Silicon Graphics, Inc. of Mountain View, CA.
Each of the six fields of view selected by computer 9 corresponds to imagery representing a 90 degree square field of regard. Corresponding square fields of regard 71a-71f are displayed adjacent to one another and form a cube such that imagery representing a continuous 360 degree field of view scene is displayed about the participant. Video signals representing each of the six fields of view are transmitted over a respective channel of the video processor 18 to an appropriate display unit 70a-70f or corresponding image segment circuit means 72 for additional processing.
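As an illustrative aside (an assumption of this sketch, not a limitation of the system), the six adjacent 90 degree fields of regard can be thought of as view frusta looking along the +X, -X, +Y, -Y, +Z and -Z axes from the participant's viewpoint, each with a square 90 degree field of view so that the six faces tile a cube. A minimal Python sketch:

    import numpy as np

    CUBE_FACE_DIRECTIONS = {
        "71a": (1, 0, 0), "71b": (-1, 0, 0),
        "71c": (0, 1, 0), "71d": (0, -1, 0),
        "71e": (0, 0, 1), "71f": (0, 0, -1),
    }

    def face_projection(near=0.1, far=100.0):
        # Symmetric perspective projection for a square 90 degree field of view;
        # cot(90deg / 2) == 1, so the focal terms are unity.
        f = 1.0
        return np.array([
            [f, 0.0, 0.0, 0.0],
            [0.0, f, 0.0, 0.0],
            [0.0, 0.0, (far + near) / (near - far), 2.0 * far * near / (near - far)],
            [0.0, 0.0, -1.0, 0.0],
        ])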
FIG. 17 illustrates an embodiment of system 1 in which image segment circuit means 21 includes image control units 73a-73f which operate to distribute the images comprising spherical coverage to the large display assembly 12. The function of the digital processing circuitry means is to accept an
incoming television signal and to display it over an array of display units 11 so that the resulting image appears as it would on a single very large screen TV that completely and continuously surrounds the viewer.
In FIG. 17 computer 9 transmits each of the six fields of view over a respective channel of the "VideoSplitter" TM to a corresponding image control unit 73a-73f. The electronic video control system 72a-72f accepts the incoming video signal and partitions and processes it for display on the display units 70a1-70f9. This partitioning is referred to as segmenting the image. Each respective image control unit processes its respective image into image segments 71a1-71f9. Each segment is transmitted from the image control unit to predetermined adjacent display units such that a continuous panoramic scene is displayed about the participant on the display units of the large display assembly.
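Purely as an illustration (a software sketch, whereas the image control units perform this step in dedicated hardware), partitioning one face image into a 3 x 3 grid of segments might be expressed as follows; the 3 x 3 grid and NumPy frame format are assumptions of the sketch.

    import numpy as np

    def segment_image(frame, rows=3, cols=3):
        # Return a dict mapping (row, col) to the pixel block destined for the
        # corresponding display unit of that side of the assembly.
        h, w = frame.shape[:2]
        tile_h, tile_w = h // rows, w // cols
        return {
            (r, c): frame[r * tile_h:(r + 1) * tile_h, c * tile_w:(c + 1) * tile_w]
            for r in range(rows) for c in range(cols)
        }

    segments = segment_image(np.zeros((1024, 1024, 3), dtype=np.uint8))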
Within each image controller 73a-73f is a central processing unit (CPU) which executes software commands to effect the processing of the images in each framestore memory of the framestore cards. The CPUs of each image controller are connected by an internal bus to each respective framestore card. The software commands read by the CPU may be pre-recorded onto optical disk, videotape, or the image controller by use of a conventional microcomputer or computer workstation. The microcomputer or computer workstation preferably includes a keyboard for an operator to enter software commands to the image controllers to effect image display. Software commands consist of time code and program code for control of the displayed picture.
The microcomputer or computer workstation can also be operated to input software commands that specify the picture aspect ratio to be scanned for display. In this manner the signals representing model 14a may be segmented by the image controller into image segments. The video signals representing each framestore card's picture segment are then transmitted to a corresponding display unit 70. The number of picture segments that the image controller can segment the composite picture into varies and determines the maximum number of display units that can be accommodated.
Preferably, image segments are pre-formatted by the camera system 6 or computer 9 processor 18 to correspond to the picture segmentation accomplished by the image controller.
Image segment circuit means 72, including associated cables, display units, audio system, and so forth of the type generally utilized in FIG. 17 of the system 1, is marketed by DELCOM USA, Philadelphia, PA and includes image controller units 73a-73f sold as the Station Pro and Universal; and systems from North American Philips Inc., of New York, NY under the name VIDIWALL TM.
Other multiple display unit wall arrays that may be incorporated to form assembly 22 within the present system 1 include those described in U.S. Pat. 5,010,413 by Bahr, U.S. Pat. 4,890,314 by Judd et al., U.S. Pat. 4,974,073 by Inova, U.S. Pat. 5,016,109 by Gaylord, and U.S. Pat. 4,734,779 by Levi et al., and the MediaWall TM available from RGB Spectrum of Alameda, CA, the Vidiwall TM available from Philips of Holland, the TeleWall Delcom 256 model available from Nurnberger Medientecnick GmbH of Germany, and the VideoWall TM available from Electronic of Minneapolis, MN. Large display assemblies compatible with the present invention are described in detail in U.S. Pat. 5,130,794 by the present inventor.
As disclosed by Hooks, U.S. Pat. 4,463,380, and in numerous other prior art, the perspective of the beings 14a, objects 14b, and scenes 14c displayed to the participant 24 is calculated to provide a correct perspective appearance to the participant. To achieve a participant-centered perspective, in either the HMD assembly 11 or large display assembly 12, an off-axis perspective projection is calculated. Off-axis perspective is calculated based on the position and orientation of the participant's head and eyes 75. The simplest derivation alters a standard on-axis perspective projection by performing two mathematical transformations of model 14a coordinates that are the equivalent of translation, contraction, and expansion with respect to the fixed origin and fixed coordinate system. First, points are sheared in a direction parallel to the projection plane, by an amount proportional to the point's distance from the projection plane. Then, points are scaled along the axis perpendicular to the projection plane. In this fashion, perspective is calculated by the computer means 18 based on the participant's position anywhere in the viewing space. Not only is the perspective distorted based upon the participant's location in the viewing space relative to the virtual model 14a, but imagery is also distorted to compensate for the angle from which the viewer observes the imagery on each display unit 70 to the nth. For both reasons, perspective may be grossly distorted to make the subject or scene appear natural from a participant-centered perspective.
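For illustration only, the shear-then-scale step described above might be sketched as follows in Python; the choice of the projection plane z = 0, the sign conventions, and the eye coordinates are assumptions of the sketch, and the resulting matrix would be applied to model coordinates ahead of a standard on-axis projection.

    import numpy as np

    def off_axis_matrix(eye):
        # Shear parallel to the projection plane by an amount proportional to a
        # point's distance from the plane, then scale along the perpendicular axis.
        ex, ey, ez = eye
        shear = np.eye(4)
        shear[0, 2] = -ex / ez
        shear[1, 2] = -ey / ez
        scale = np.diag([1.0, 1.0, 1.0 / ez, 1.0])
        return scale @ shear

    M = off_axis_matrix((0.2, -0.1, 1.5))  # participant 20 cm right, 10 cm low, 1.5 m from the plane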
Furthermore, as disclosed by Waldern, U.S. Pat. 4,984,179, and in numerous other prior art, movement of the virtual model 14 may be made relative. A slight movement of the viewer's head may be translated by the computer to move beings, objects, or the scene represented as the virtual model anywhere from only slightly to dramatically, based on algorithms preprogrammed into the computer 9.
4) PROCESSING SYSTEM EMBODIMENTS
In FIG. 15 input means 2 transmits data to either computer 9a and/or 9b. Computer 9a transmits signals to a HMD assembly 11. Computer 9b transmits signals to the large display assembly 12. Participants 24a and 24b each operate interactive devices associated with systems 10a and 10b, respectively, to transmit position data to their respective computers 9a and 9b. Position and orientation data from either participant may be transmitted by conventional interface means between computers 9a and 9b, or from interactive input systems 10a or 10b to computers 9a and 9b, in order to update the computer generated model 14. Alternatively, both participants may operate their respective computers independently and not be networked together in any manner.
Alternatively, as shown in FIG. 16, a single computer 9 is operated to render a computer generated model for both the viewer of the HMD assembly 11 and the viewer of the large display assembly 12. Participants 24a and 24b operate interactive input systems 10a and 10b, respectively, to affect the model 14.
Assume, for example, that the 3-D digitizing system 7 comprises a systems electronics unit, keypad, footswitch, stylus, and model table. The systems electronics unit contains the hardware and software necessary to generate magnetic fields, compute the position and orientation data, and interface with the host computer 9 via an RS-232C port. The keypad is a hand-held, multikey alphanumeric terminal with display that is used to command data transmission, label data points, transmit software commands, and receive and display host computer 9 messages. The foot-operated switch is used to transmit data points from the digitizer to the host computer 9. The stylus is a hand-held stylus that houses a magnetic field sensor and is used to designate the point to be digitized. The operator places the stylus tip on the point to be digitized and depresses the foot switch or keypad to collect the x, y, z coordinates of an object or being, or to define the background of the scene that constitutes the subject. The points are then selectively connected by lines within the computer 9 to construct 3-D wireframes. The 3-D wireframes 55 form a data base which is stored in the memory 25a of the computer 9. The model table is a free-standing 40" high table used as the digitizing surface. The table houses the electronics unit and the magnetic field source (transmitter). The volume of space above the table is specifically calibrated at time of manufacture and these data reside in the electronic unit's memory. A 3-D digitizing system utilized is generally of the type manufactured by Polhemus Inc. of Colchester, Vermont as the 3SPACE Digitizer.
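As a hypothetical software analogue of this procedure (the digitizer itself delivers the x, y, z triplets; the list-and-edge representation is an assumption of the sketch), the collected points and operator-selected connections could be held as follows:

    points = []  # digitized (x, y, z) stylus positions
    edges = []   # pairs of point indices joined by wireframe lines

    def record_point(x, y, z):
        # Called when the foot switch or keypad is depressed; returns the point index.
        points.append((x, y, z))
        return len(points) - 1

    def connect(i, j):
        # Operator selects two digitized points to join with a wireframe edge.
        edges.append((i, j))

    rim = record_point(0.0, 5.0, 30.0)
    base = record_point(0.0, 5.0, 0.0)
    connect(rim, base)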
FIG. 13 illustrates a subject vase 13 to be modeled in the computer 9. The computer is configured to receive input imagery 56 and shape data 55 from the input sources 2. Subject imagery 56 of the vase recorded by the panoramic camera system 6 and vase shape data derived from operating the digitizer system 7 are combined by operating the computer 9. As shown in FIGS. 8b, 9b, and 10, the computer is operated to texture map image segments of the recorded image of the vase onto the 3-D wireframe 55 of the vase previously constructed by operating the 3-D digitizing system 7. Once the texture mapping rendering is accomplished, each rendered object (e.g. a vase), being (e.g. a person), or scene (e.g. temple and seascape) is placed in the computer's memory 25a as a rendered model 14a. A type of video input and output sub-system for combining 3-D graphics with digital video as described above is manufactured by Silicon Graphics of Mountain View, CA under the name VideoLab2 TM. The system allows video to be digitized directly to the screen at real-time rates of 25 or 30 frames per second, or stored on disk in mass storage 25a.
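By way of a hedged example (the cylindrical mapping and vertex format are assumptions; the cited SGI pipeline performs the equivalent assignment in its own manner), texture coordinates that wrap the recorded vase imagery around the wireframe might be assigned as follows:

    import math

    def cylindrical_uv(vertex, height):
        # Map a wireframe vertex (x, y, z) to (u, v) coordinates in the recorded image:
        # u is the angle around the vase axis, v the fraction of the vase height.
        x, y, z = vertex
        u = (math.atan2(y, x) + math.pi) / (2.0 * math.pi)
        v = z / height
        return u, v

    uvs = [cylindrical_uv(v, height=30.0) for v in [(5.0, 0.0, 10.0), (0.0, 5.0, 20.0)]]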
Concurrent with shape and image processing, the acoustical signatures 5c of a subject 13 are recorded. FIG. 13 diagrammatically illustrates a sample acoustical signature of a vase. The acoustical signature of the vase is derived by manipulating the vase in the real world and recording the sounds the vase makes. For instance, the vase may be handled, scratched, hit, or even broken in order to render sounds with various acoustical signatures. Those signatures form the audio world model 14b. Acoustical signatures are linked to subjects in the visual world model 14a. The acoustical signatures recorded by microphones 39a-39f of the audio input system 8 are converted from an analog signal to a digital audio format and may be stored in the audio processing means mass storage 25b. Alternatively, audio storage 25b may consist of acoustical signatures stored on conventional audio tape or disc and accessed from corresponding audio tape and disc players. Mass storage 25b is connected by suitable conductors and is in communicating relationship with the audio computer processing system 23. When an action, such as a person breaking the vase, occurs in the world model 14a, a corresponding audio representation is pre-programmed to occur in the acoustical world model 14b. The stereo audio signal is typically converted from digital to stereo analog signals and read out to the stereo headphones in the participant's HMD 11 or to speaker systems associated with each side of the large display assembly 12.
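A minimal sketch of this linkage, with hypothetical clip names and a stand-in playback function, might look as follows; the dictionary keyed by subject and action is an assumption of the sketch rather than the stored format of mass storage 25b.

    acoustic_signatures = {
        ("vase", "scratch"): "vase_scratch.wav",
        ("vase", "hit"):     "vase_hit.wav",
        ("vase", "break"):   "vase_break.wav",
    }

    def on_model_event(subject, action, play):
        # Look up the stored signature for this subject/action and play it in stereo.
        clip = acoustic_signatures.get((subject, action))
        if clip is not None:
            play(clip)

    on_model_event("vase", "break", play=print)  # print stands in for the audio output system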
FIG. 14 diagrammatically illustrates a computer generated world model 14 which the viewer can enter by the use of interactive input system 10 and audio-visual means 4. The system comprises imagery, shape, and audio signatures recorded from objects in the real world and then translated and represented in the computer generated world. By combining position and orientation data received via conductors 75 (for the head sensor) and E1 and Er (for the left and right eye sensors), the computer 9 determines the current field of view for the participant 24 and his or her line of sight. The images transmitted to the display units 70 to the nth are of a virtual model 14 having predetermined spatial coordinates within the display assemblies. With the movement of each position sensor relative to the wearer, and with each variation of the direction in which the sensor wearer is looking, data is transmitted to the computer 9 via conductors 82a and 82b, whereby two images 80g and 80h are received over conductors for display on the HMD 11 and six images 80a-80f are transmitted over conductors for display on the large display assembly 12. Only the portion of the texture mapped imagery comprising the 3-D model 14 within the helmet wearer's 24 field of view is transmitted to the HMD 11. Two images are calculated so as to present a stereoscopic view of the virtual model 14a corresponding with what would be seen if the virtual model were a real object from the helmet wearer's current standpoint. With respect to the large display assembly 22, the participant observes imagery within his or her actual field of view as displayed by the six display units 81a-81f. The six display units display a virtual model corresponding with what would be seen by the viewer if the virtual model were a real object from the viewer's current standpoint.
Still referring to FIG. 14, the location and angular position of the participant's 24 line of sight are calculated by computer 9. Three-dimensional algorithms are operated upon so that the observer is effectively enclosed within the model 14a. Typically the model is generated by rendering data word or text files of coordinates comprising the vertices of the surface faces which form the model 14a. With the participant at a known position relative to both the model 14a-b (the vase) and the center of the three-dimensional world 14 which is to be occupied by the participant, the angular orientation of the participant can be derived using basic trigonometry. The projection of the model is made by computing the perspective position of a point and subsequently drawing it on a display unit.
Referring again to FIGS. 15 and 16, computer 9 transmits processed signals to the HMD assembly 11 which the viewer wears. An important part of signal processing means 3 is at least one interactive input device 10. Device 10 continuously monitors the participant. The position system 10 comprises a head sensor 76a, a right data glove sensor 76b worn on the participant's right hand Hr, a left data glove sensor 76c worn on the participant's left hand H1, a source unit 77, a systems electronics unit 78, and conductors 83 and 24 that interconnect the system. The source acts as a fixed source module, is energized by a suitable power source, and transmits a low frequency electrical field. The sensors are small and modular. The head sensor 76a is mounted on a band worn on the viewer's head, and the glove sensors 76b and 76c are mounted integral to the right and left glove, respectively. The sensors sample the field generated by the source. The sensors and the transmitter are both connected to an electronics decode circuit 78 which decodes the signals from the sensors and computes each sensor's position and orientation in angular coordinates. The device thus provides data as to both the position and orientation of the viewer's head and left and right hands. The information is passed via conductor 82 to the computer 9. A type of system generally compatible with the present invention is manufactured by Polhemus of Colchester, VT, as the 3SPACE TM Tracker and Digitizer. Additionally, there may be built into the head band two eyeball movement tracking systems, or eye sensors, which monitor movements of the wearer's eyeballs and transmit signals representing these movements to computer 9 via conductors. A type of eye tracking system generally compatible with the present invention is manufactured by NAC of England, UK as the Eye Mark Recorder, Model V (EMR-V).
A slightly different position sensing arrangement is incorporated for the large display assembly 12 in that the participant wears a head band which holds a position sensor 76a. Other than that, the position sensor system 10a mounted on participant 24b in assembly 12 operates in the same manner as the position sensing system 10b operated by participant 24a wearing the HMD assembly 11.
In FIG. 16 computer 9 includes computer image processing means 18 for sampling out up to eight independent views within the model 14 for video output. Typically, each output signal is transmitted to a display unit 70a-70h. The circuit means 18 samples out two independent views 70g and 70h for display on the left and right displays of the viewer's HMD 11, and samples out six independent contiguous views 70a-70f for display on each of the six sides of the large display assembly 12. It should be noted that by decreasing the number of displays the computer 9 and associated circuit means 18 are able to increase the display resolution because more processing memory can be allocated to the remaining displays. A type of image segment circuit means 18 compatible with and integral with the present invention is generally of a type manufactured by Silicon Graphics Inc. of Mountain View, CA under the name VideoSplitter/2 TM.
FIG. 17 is a more automated embodiment of the system 1 in which a 3-D subject is rendered by a first processing means 15 for fusing the image signals 5a and microwave signals 5b from the subject 13. The fusion of the image and microwave signals by means 15 results in 3-D model segments 26a. A panoramic 3-D camera system 6 and panoramic 3-D digitizing system 7 comprise the panoramic input system 2. Arrays 40 and array assemblies 44 similar to those shown in FIGS. 4 through 7 are preferably incorporated to position and hold the sensors in place. A plurality of image and microwave sensors are positioned inward about a subject 13 to form adjacent coverage 42.
Likewise, a plurality of image and microwave sensors are positioned in an outward facing manner to achieve adjacent coverage 42 of a subject. A target 3-D subject exists in space in the real world. The outputs of image and microwave sensors are combined by image processors 15 and 16 of computer 9. The image sensor is composed of one or more cameras 37 which consist of an aperture and a light sensitive element such as a charge-coupled device (CCD) array 53. The light sensitive element converts the sensed image into an analog video signal 5a . The analog video signal is transferred to the low-level image processor of means 15 via a coaxial cable.
The low-level image processor is an electronic circuit that may be implemented as a general purpose computer with
specialized programming or as a series of specialized circuit boards with a fixed function. The low-level image processor collects the analog video for each frame and converts it to a digital form. This digital image is an array of numbers stored in the memory of the low-level image processor, each of which represents the light intensity at a point on the sensing element of the camera. The low-level image processor may also perform certain filtering operations on the digital image such as deblurring, histogram equalization, and edge enhancement. These operations are well-known to those skilled in the art.
The digital images, thus enhanced, are transferred via a digital data bus to the high-level image processor of means 15. The high-level image processor may also be either a hard-wired circuit or a general purpose computer. The high-level image processor takes the enhanced digital image and attempts to extract shape information from it by various means including shape-from-shading or, in the case where multiple cameras are used, stereopsis or photometric stereo. Again these are well-known operations to those skilled in the art. When multiple cameras are used in the image acquisition and processing system, the multiple images are combined at the high-level image processor. Alternatively, the images may be combined optically by the use of a panoramic camera system shown in FIG. 3 and previously discussed herein. The image processor produces an incomplete surface model of the unknown object. The surface model is a list of digitally stored data, each of which consists of three numbers that are the x, y, and z locations of a point on the object's surface. The incompleteness of the surface model may result from regions on the object's surface that are, for some reason, not understood by the high-level image processor. The incomplete surface model is passed on to the initializer included as part of processing means 15 by a digital data bus.
The initializer is a general-purpose computer or digital circuit that "fills in" the portions of the surface model left incomplete by the high-level image processor. The unknown areas of the surface model are computed by surface functions such as B-splines that depend on some numerical parameter p. The surface functions are represented digitally in a form that both the initializer and the computer understand. The surface functions along with the complete surface model are passed on to the computer by a digital data bus. The computer will determine the correct value of the parameter p in the manner hereafter described.
Concurrently with collection of the images by the cameras, a radar cross-section (RCS) of the unknown subject 13 is measured by the radar system 38. The radar 38 functions as a 3-D shape input system 7. The radar 38 consists of a radar processor, antennas 85, and waveguides 86. The radar processor is a widely available device, and all of the functions described here can be performed by a unit such as the Hewlett-Packard 8510 Network Analyzer. The method by which they may be performed is described in Hewlett-Packard's product number #8510-2. The radar processor generates a microwave signal that is transmitted along the waveguide and radiated by the transmitting antenna. The electromagnetic field is diffracted by the object and collected by the
receiving antenna. The diffracted signal is transmitted back to the radar processor by a waveguide. The radar processor of system 7 computes the RCS of the unknown subject 13. The RCS is represented digitally by the radar processor, and
transferred to the computer processing means 15 by a digital data bus 5b.
The computer 15 performs the comparisons and iterations using two pieces of widely available software. The first is the MINPACK package of non-linear minimization programs published by Argonne National Laboratory, and the second is the Numerical Electromagnetic Code (NEC) available from Ohio State University. The NEC code generates theoretical approximations of the RCS of the object using the surface model produced by the initializer. The MINPACK program "lmdif" uses these approximate RCSs to compute the correct value of the parameter p by an iterative scheme known as "nonlinear least squares". This method allows computation of the correct value of p by minimizing the differences between the observed RCS acquired by the radar system and the theoretical RCS computed by the NEC code from the incomplete surface model. Using the correct value of p, along with the incomplete surface model and surface functions from the initializer, the computer 15 generates a complete surface model segment for each adjacent field of regard 42 of the overlapping 41 radar and image sensors. Preferably, a plurality of means 15 operate in parallel to derive segments 26 representing all sides of a 3-D subject 13 simultaneously. As with all computer programs, they must be written for the particular hardware and carefully debugged. U.S. Pat. 5,005,147 by Krishen et al. discloses a "Method and Apparatus for Sensor Fusion" generally of a type that is compatible with the present invention, herein referred to as the sensor fusion system that forms the first processing means 15.
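As a hedged illustration of the fitting step, scipy.optimize.least_squares (which wraps the same MINPACK Levenberg-Marquardt algorithm as lmdif) can stand in for the cited routine; theoretical_rcs() below is a placeholder for the NEC-code prediction, and the two-parameter model and sample data are assumptions of the sketch.

    import numpy as np
    from scipy.optimize import least_squares

    def theoretical_rcs(p, angles):
        # Placeholder for the NEC computation of the RCS from the parameterized surface model.
        return p[0] + p[1] * np.cos(angles)

    def fit_surface_parameter(measured_rcs, angles, p0):
        # Minimize the difference between the measured and theoretical RCS.
        residuals = lambda p: theoretical_rcs(p, angles) - measured_rcs
        return least_squares(residuals, p0, method="lm").x

    angles = np.linspace(0.0, np.pi, 32)
    measured = 1.0 + 0.5 * np.cos(angles)
    p_fit = fit_surface_parameter(measured, angles, p0=np.array([0.0, 0.0]))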
The output from each of the sensor fusion systems is transmitted to a solid-modeling computer system which forms the second processing means 16 of the system 1 shown in FIG. 17. The computer system includes a high speed central
processing unit (CPU), one or more terminals, mass storage, for instance magnetic tape or disk drives, a solid-state memory, one or more communication interfaces (such as a network interface), a high resolution video display and a printer plotter.
Mass storage of means 16 contains a database for storing solid models representing segments 26 of subjects 13 such as objects, beings, and scenes. Solid models may be represented in the database in a number of different ways, such as in the form of boundary representations and faceted representations. Preferably, solids are represented so that it is possible to extract information about surface patches whose disjoint union comprises the bounding surfaces of the solid.
The system operates using topology directed subdivision for determination of surface intersections. This is accomplished by the steps of obtaining a pair of surfaces from a main pool of surface representations and performing a mutual point exclusion test to determine if the surfaces may have an intersection. Data defining the model segments 26 obtained by the first processing means 15 are transmitted to the second processing means 16 mass storage. Included in this data is information on the surface representations of each model segment, which is operated upon by the second processing means in determining surface matching and intersection. For those pairs of surfaces possibly having an intersection, the transversality of the surfaces is checked. If transversal, the intersection set is computed. For those pairs which are not transversal, recursive subdivision is performed until transversality is established or until a flatness criterion is met. A parallel processing system including a master processor and a plurality of slave processors performs the subdivision operation on the surfaces in a parallel fashion. The bounded surfaces are then processed as completely rendered 3-D objects, beings, and scenes. The resultant data defining and representing the 3-D objects, beings, and scenes is read over a conductor into the memory 25a of the third processing means 17. U.S. Pat. 5,014,230 by Sinha et al. discloses a "Solid-Modeling System Using Topology Directed Subdivision for Surface Intersections" generally of a type compatible with the present system 1 and referred to herein as the solid-modeling computer system that comprises the second processing means 16.
In FIG. 17 a position sensing system 10 similar to that described for use in FIGS. 2 and 3 is operated to monitor the participant's position and orientation. However, it is known to those skilled in the art that various viewer interactive input systems, such as those previously described, may be incorporated to monitor the position and orientation of the viewer.
In FIG. 17 audio signals 5c from the microphones 39a-39f are transmitted to the 3-D audio system 23. The audio system 23 outputs a stereo signal 5c corresponding to the subjects in the model 14a and the position and orientation of the participant 24. The stereo signal 79 is transmitted by a conventional stereo audio transmitter 62 to an audio stereo receiver 63 associated with stereo headphones 64 worn by the participant 24. Preferably signal 79 is transmitted over-the-air so that no conductor cables or wires encumber the participant 24. Such a stereo transmission/receiver headphone arrangement is of a conventional nature used commonly by those in the broadcast industry.
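As an illustrative sketch only (constant-power panning and inverse-distance attenuation are assumptions, not the disclosed audio processing), left and right gains for the stereo signal could be derived from the source position in the model and the participant's head position and heading as follows:

    import math

    def stereo_gains(source_xy, listener_xy, listener_heading_rad):
        dx = source_xy[0] - listener_xy[0]
        dy = source_xy[1] - listener_xy[1]
        distance = max(math.hypot(dx, dy), 0.1)              # avoid a divide-by-zero up close
        azimuth = math.atan2(dy, dx) - listener_heading_rad  # positive azimuth = source to the left
        pan = max(-1.0, min(1.0, math.sin(azimuth)))         # +1 = hard left, -1 = hard right
        left = math.sin((pan + 1.0) * math.pi / 4.0) / distance
        right = math.cos((pan + 1.0) * math.pi / 4.0) / distance
        return left, right

    print(stereo_gains((2.0, 1.0), (0.0, 0.0), 0.0))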
In FIG. 17 storage and geometric manipulation of the panoramic computer generated model 14 by computer 9 is the same as in FIG. 16 for the large display assembly. However, the fourth processing means 18 for processing the image for display and distribution also includes image segment circuit means 72 to partition each signal 80a-80f into sub-segments for display on an array of display units 70a1-70f9 located on each side of the viewer. Image control units 73a-73f, whose detailed operation has been previously described, are included as part of means 72 to process the images for distribution to the display units.
d) THREE-DIMENSIONAL DISPLAY PROCESSING
Referring to FIGS. 18 and 19, it will be appreciated that some embodiments of input sources 2 and signal processing means 3 allow a three-dimensional model 14a to be rendered within the computer 9 that is able to be rendered in real-time for simulation. Additionally, the model has increased realism because the model is composed directly from shape and imagery recorded from the real world as opposed to created in a traditional manner incorporating computer graphics. Most importantly, computer 9 is programmable such that multiple viewpoints of the three-dimensional model 14 may be processed out to facilitate dynamic three-dimensional display for either stereographic, autostereographic or holographic display system applications 91. Computer 9 includes the necessary firmware and software for processing and distributing 3-D images to 3-D display units 32, 33, or 34.
In such applications, a single computer is preferably used to process an adjacent segment 26a of the model 14a for each 3-D display unit 32, 33, or 34. This is recommended because of the computationally intensive nature of rendering multiple viewpoints of the model. The complete world model 14a may be stored in the memory of each computer 9a to the 9nth.
Alternatively, the system may be configured such that each computer 9 receives only a segment 26a of the model 14a from a master computer 9 or data base 25a. Preferably a participant interactive input system 10 transmits viewer position and orientation data to each computer 9a via conventional
interfaces 82. Each computer is synchronized with each of the other computers by a master clock 88 transmitting a timing signal to each computer 9a to the 9nth. The timing signal may be derived from the internal clock of any computer 9. The output signals from each computer 9a1-9f9 are transmitted to each respective 3-D display unit 90a1-90f9 of the 3-D display assembly. The display units display adjacent portions 26a1 of the world model 14a such that a substantially continuous stereographic, autostereographic, or holographic scene 71 is rendered before the participant's eyes. With respect to stereoscopic methods and display systems 32 compatible with system 1 which require special eye glasses: commonly, two overlapping pictures for each screen can be read out for 3-D stereoscopic effects, such as with differing filtration or polarization between the projection modules or, for a 3-D effect without glasses, showing one version of the picture slightly smaller and out of focus relative to the other. The two differing points of view are projected in a super-imposed fashion on each side of the large display assembly 12. The images are scanned out of computer 9 as separate channels by using a multi-channel video output processor 18 generally of a type available from Silicon Graphics Inc. of Mountain View, CA as the VideoSplitter/2. Spatial separation is achieved because each of the images processed out of computer 9 is taken from a different point of view. Polarizing film is placed in front of the projectors 70a and 70b, one horizontal and the other vertical, and the polarized image is viewed with polarized eye glasses of the same orientation, resulting in each eye seeing the proper view as with a stereo viewfinder. The number of projection units would remain the same as the number of points of view. The distance between the points of view of the images scanned out of the computer 9 by the multi-channel processor would be approximately 6 cm, equivalent to the average distance between the pupils of human eyes.
Alternatively, a multi-mode stereoscopic imaging arrangement according to U.S. Pat. Applications SN 7/612,494, SN 7/536,419, SN 7/561,090, and SN 7/561,141 by Faris is compatible and may be integrated with system 1. Multiple adjacent viewpoint images of the model, preferably of the left and right eye, are spatially multiplexed. The multiplexed image is applied to a micro-polarizer array. The multiplexed image is demultiplexed by the viewer wearing polarized eye glasses. An advantage of Faris's system is that a single picture frame contains both left and right eye information, so that conventional display units are utilized but yield a stereoscopic image to the viewer. Alternatively, an autostereoscopic image requiring no eye glasses may be rendered by scanning out multiple adjacent viewpoint images of the model from computer 9 and applying those images to processing means 119 and display means 33 according to U.S. Pat. 4,717,949 by Eichenlaub, or to an
autostereoscopic full-color 3-D TV display system 33
incorporating a HDTV LCD color video projector and bi-plano 3-D screen with diffusion plate and lenticular screen
available from NHK Science and Technical Research Laboratories of Japan. Both Eichenlaub and NHK employ multiple
interdigitated columns of left and right eye images in a corduroy-type manner. The images may be interdigitated by computer 9. Alternatively, the images may be read out from computer 9 on separate channels and interdigitated by a signal mixer.
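A minimal sketch of the interdigitation step as computer 9 might perform it (the column parity and NumPy image format are assumptions of the sketch):

    import numpy as np

    def interdigitate(left, right):
        # Combine two equal-sized views: even pixel columns from the left-eye view,
        # odd columns from the right-eye view, in the corduroy manner described above.
        out = np.empty_like(left)
        out[:, 0::2] = left[:, 0::2]
        out[:, 1::2] = right[:, 1::2]
        return out

    combined = interdigitate(np.zeros((480, 640, 3), np.uint8),
                             np.ones((480, 640, 3), np.uint8))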
Alternatively, an autostereoscopic system includes a system in which multiple adjacent viewpoint images of the model 26a of the world model 14a are applied to processing means 119 and display means 33 according to U.S. Pat. 5,014,126 by Pritchard et al. The scanning path and recording rates are selected to produce motion within the visio-psychological memory rate range when displayed using standard display units 31. Computer code is written to define the multiple adjacent viewpoint images of the model 14 and the number of frames per second that the computer 9 transmits to the display unit.
Finally, holographic systems compatible with system 1 include U.S. Pat. 4,834,476 by Benton, which discloses a method and devices 33 for displaying and sampling a series of perspective views of the model corresponding to the participant's 24 position that can be computed by conventional computer graphic techniques. Each computer 9a1-9nth is programmed and operated to construct a series of predistorted views of a model segment 26a of the world model 14a from image data stored in a computer memory by ray-tracing techniques. Each perspective view can then be projected with laser light onto a piece of high resolution film from the angle corresponding to its computed viewpoint, overlapped by a coherent "reference" beam to produce a holographic exposure that records the direction of the image light. After all the views have been recorded in this way, the hologram is processed and then illuminated so that each view is sent back out in the direction it was projected from, that is, toward its intended viewing location, so that a participant moving from side to side sees a progression of views as though he or she were moving around an actual object. Each adjacent holographic display unit 34a1-34f9 displays a holographic image of an adjacent corresponding portion 26a of the world model 14a.
For some applications, a holographic stereogram approach may be desirable when presenting 3-D images. In general, a stereogram consists of a series of two-dimensional object views differing in horizontal point-of-view. These views are presented to the participant in the correct horizontal location, resulting in the depth cues of stereopsis and (horizontal) motion parallax. The 2-D perspective views are generally imaged at a particular depth position, and are multiplexed by horizontal angle of view. A given holo-line in this case contains a holographic pattern that diffracts light to each of the horizontal locations on the image plane. The intensity for a particular horizontal viewing angle should be the image intensity for the correct perspective view. This is accomplished by making the amplitude of the fringe contribution a step-wise function of the intensity of each image point from each of the views. To facilitate rapid computation of stereogram-type computer graphic holograms, the precomputed tables can be indexed by image x-position and view-angle (rather than by x position and z position).
Summation is performed as each of the perspective views of each segment 26a is read into each computer 9a1-9nth based on the viewpoint of the participant. The participant's viewpoint is sensed by any previously mentioned position sensing system, such as a LADAR 97, that comprises the interactive input system 10. Furthermore, the changes in x can be indexed by look-up tables of the computer 9. Alternatively, a simple drawing program has been written by MIT in which the user can move a 3-D cursor to draw a 3-D image that can be manipulated. Stereogram computer graphic holograms, such as those computed and displayed on the MIT real-time holographic display system, which produce realistic images, may be computed in system 1 utilizing sophisticated lighting and shading models and exhibiting occlusion and specular reflections.
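Purely as a schematic sketch of the summation (the array sizes, random placeholder data, and weighting are assumptions; the cited bipolar-intensity method is not reproduced here), one holo-line might be accumulated as follows:

    import numpy as np

    n_x, n_views = 1024, 32
    fringe_table = np.random.rand(n_views, n_x)      # precomputed elemental fringes, indexed by view angle and x
    view_intensities = np.random.rand(n_views, n_x)  # perspective-view image intensities for one segment

    def compute_holo_line(fringes, views):
        # Each view's image intensity weights its elemental fringe contribution.
        return (fringes * views).sum(axis=0)

    holo_line = compute_holo_line(fringe_table, view_intensities)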
Each computer 9a1-9f9 may comprise the following system and methods demonstrated by the MIT Media Laboratory. The methods of bipolar intensity summation and precomputed elemental fringe patterns are used in the hologram computation for holographic real-time display. A type of computer 9a1-9f9 for holographic processing 220 is generally of a type known as the Connection Machine Model 2 (CM2) manufactured by Thinking Machines, Inc., Cambridge, MA. Each computer 9a1-9f9 employs a data-parallel approach in order to perform real-time computer generated hologram computation. This means that each x location on the hologram is assigned to one of 32k virtual processors. (The 16k physical processors are internally programmed to imitate 32k "virtual" processors.) A Sun 4 workstation is used as a front-end for the CM2, and the parallel data programming language C Paris is used to implement the holographic computation. In this manner, each image segment 26a1-26a9 may be computed for each associated holographic display unit 34a1-34f9.
Once each computer 9a1-9f9 has rendered holographic signals for output, the signals are transmitted from each supercomputer. The signals represent the optical fringe patterns that would be present in a 3-D hologram composed of 192 horizontal lines. Each line is composed of 16 other lines and has 32,000 picture elements. To simplify the computing task, some information is omitted from the hologram. The light signals are converted into a radio-frequency signal, amplified, and sent to three transducers attached to a tellurium-dioxide acousto-optical crystal. Here the signals are converted into sound waves that travel through the crystal at about twice the speed of sound, altering the index of refraction as they change pitch and advance. A laser beam passing through the crystal is diffracted just as it would be passing through a hologram. A spinning mirror--18 facets on the edge of a thick brass polygon--"freezes" this holographic information. The mirror spins in the opposite direction and at the same speed as the sound waves. When a circuit counts 16 lines, it sends a signal to a vertical scanner, an electromechanically driven mirror that moves down one step for the next scan line. A lens focused on the spinning polygon magnifies the 192 scan lines into a visible image. Simple images, such as a wire-frame cube, can be updated by the computer in about 0.3 second, so animation is possible, showing the 3-D cube spinning against a black background. A series of animated images that is precomputed may be stored on magnetic discs, then rapidly downloaded for smooth, flicker-free animation. The front viewing face of each 3-D display unit 90a1-90f9 is faced inward toward the participant such that a panoramic scene of spherical coverage is presented to the participant.
If the images are accurately computed and registered, the resulting image looks like a solid three-dimensional subject. Such a composite or synthetic hologram is termed a "holographic stereogram." It mimics the visual properties of a true hologram even though it lacks the information content and interferometric accuracy of a true hologram.
It is foreseen that other types and improved holographic processors and larger holographic display units similar to those described above may be incorporated in the present system 1. It is further foreseen within the scope of the system 1 that similar and other holographic processors and display units may operate on the same basic image based virtual model for holographic image generation, and use the basic assembly 4 arrangement for holographic display. It is also foreseen that projection holographic display units may be situated about the viewing space 58 to project 3-D images into the viewing space to add increased realism to the images viewed by the participant.
4) TELECOMMUNICATIONS EMBODIMENT
FIGS. 20 and 21 illustrate an embodiment in which system 1 is used in a telecommunications application. A virtual reality/telepresence teleconferencing system 20 includes computer 9, interactive input system 10, source 2 information derived from an input means consisting of a panoramic camera system 6, 3-D panoramic digitizer system 7, and 3-D panoramic audio system 8, a head mounted display assembly 11 and/or large display assembly 12, and telecommunications peripheral apparatus 92-96 to allow interconnection of two or more terminals to engage in teleconferencing via digital data networks 98. The digital data telecommunications network may be part of a local area network (LAN) or wide area network (WAN) known to those in the telecommunications industry.
Additionally, it is also known to those in the telecommunications industry that any suitable telephone, cable, satellite, or wireless network 98 compatible with the present invention may select, switch, transmit, and route data of system 1 between remote locations. Typically a frame grabber interfaces each camera of input system 6 with its computer 9. Signals output from the computer 9 are encoded and compressed before being input to a telephone line via a modulator/demodulator 94. A decoder 96 is connected between the modulator/demodulator 94 and the computer for decoding compressed video signals received by the modulator/demodulator means from the telephone line 99 so that the signals may be displayed on a video display connected to the computer.
The telecommunications system 20 is configured to send and switch from video to high-resolution graphics or voice modes without breaking connection. For example the participant may choose to transmit imagery data and/or shape data, or can update a virtual environment by simply passing data representing position coordinates to update the position of actual live 13 or prerecorded modeled 14 beings, objects, or scenery modeled or stored at remote locations. Alternatively, a plurality of computer workstations 9 and telecommunications apparatus 92-96 may operate in parallel to transmit data over a plurality of telephone lines to facilitate the
transmission of large volumes of information. For instance, high bandwidth imagery of each side of the large display assembly 12 may be
transmitted over a separate telephone line 99 to each
corresponding display unit 70a-70f.
The telecommunications system 20 can be added as an internal peripheral to a computer, or may be added as a stand-alone device which connects to the computer 9 via a serial or parallel interface. Additionally, the telecommunications system may be used in a one-way, two-way, or many-way ("broadcast") mode. U.S. Pat. 5,062,136 by Gattis et al. is generally of the type incorporated and compatible with the present system 1.
5) VEHICLE EMBODIMENTS
It is foreseen that display system 1 may be used with various kinds of vehicles. In FIGS. 22 and 23 input means 2 comprises a panoramic camera system 6a-6f, a panoramic laser-radar (LADAR) system 7a-7f, and a panoramic audio system 8a-8f. Each LADAR 7a-7f of the system includes a registered visible video channel. The LADAR system searches, acquires, detects, and tracks a target subject 13 in space. The LADAR initially searches a wide field of view 223. The LADAR includes focusable optics that subsequently may focus on a subject in a narrow field of view 224. Once the subject 13 is resolved, each LADAR video processor 15a-15f associated with each LADAR system 7a-7f switches from a search mode to determining the subject 13 orientation and position. A vision processor 15a-15f (e.g. a SGI computer workstation or Macintosh PC) of each LADAR system includes an object recognition capability which correlates, identifies, and registers subjects detected by each LADAR's laser ranging system. Imagery from each LADAR is fused with the camera imagery 6a-6f by a computer 15a-15f. The fused data from each computer 15a-15f is transmitted to an associated display unit 70a-70f of assembly means 4. The scenes are displayed about the viewer/operator 24. Each input source, computer fusion processor, and display unit operates on an image with a 90 degree field of view. All systems may be synchronized by the master clock integral to any one of the computers 15a-15f. This is typically done by a conventional software program to synchronize the signals of the machines 15a-15f and the use of common wiring network topologies such as an Ethernet, Token Ring, or ARCNET to interconnect machines 15a-15f.
FIG. 23 illustrates the arrangement as configured on a module of a space station. It should be understood that various system 1 arrangements can be placed on any suitable vehicle platform. The participant operates interactive control devices 103 to pilot the host vehicle 102 in response to the audio and visual scenery displayed around the participant. Additionally, the system 15 may include object recognition processors. A LADAR 7a and camera 6a of a type compatible for incorporation with the present invention are manufactured by Autonomous Systems Inc. of Orlando, FL.
Similarly it is foreseen that the system could be used to remotely pilot a vehicle, such as described in FIGS. 24 and 25. In FIG. 25 a 3-D representation of the scene is recorded by a sensor array 36a-36f such as that described in FIGS. 6 and 7. The sensor array housing 40 is incorporated into the outer skin of a remotely piloted or teleoperated vehicle 108.
In FIG. 25 the vehicle 108 incorporates a video data compression system 226 to transmit visual information sensed about the vehicle over the air to a control station 225. An over-the-air radio frequency digital communications system 226 transmits 1000-to-one compressed full color signals at a 60 hertz data transmission rate. Imagery data from a panoramic camera system 6 is transmitted to an output communications buffer. The frame buffer reads out the digital data stream to the radio frequency transceiver 109b for transmission to the transceiver 109a. Transceiver 109a is located at the control station 225. The transceiver 109a receives the over-the-air signal and reads the signal into the input communications buffer. The input communications buffer reads out a digital data stream to a data signal decompression (or data expander) device. The decompressed signal is read to signal processing unit 3 for processing and distribution. The processed image is transmitted from processor 3 to display units 11 or 12. Shape data and audio data may also be transmitted over the digital over-the-air data link. A teleoperated vehicle 226 data transmission and control system compatible with the present system 1 is manufactured by Transition Research Corporation of Warren, MI. The panoramic camera system 6 like that in FIGS. 2-7 replaces the camera system of the Transition Research Corp. camera arrangement. A single channel or a plurality of channels may comprise the system 21.
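For rough orientation only (the frame size and bit depth below are assumptions chosen for the arithmetic, not figures from the disclosure), the 1000-to-one compression figure translates into channel bandwidth at a 60 hertz transmission rate roughly as follows:

    width, height, bits_per_pixel, frame_rate = 640, 480, 24, 60
    raw_bits_per_second = width * height * bits_per_pixel * frame_rate   # about 442 Mbit/s uncompressed
    compressed_bits_per_second = raw_bits_per_second / 1000              # about 0.44 Mbit/s on the air link
    print(f"raw: {raw_bits_per_second / 1e6:.0f} Mbit/s, "
          f"compressed: {compressed_bits_per_second / 1e6:.2f} Mbit/s")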
The participant 24 at the control station 225 interacts with the real world remote environment by viewing the displayed scene and operating devices of the interactive input system 10. The control signals are transmitted from the system 10 from transceiver 109a to transceiver 109b to vehicle control systems 112. The control system includes data processing means that operates on the transmitted signal to actuate control surfaces and motors 113, and manipulators onboard the teleoperated vehicle. In this manner the participant remotely controls the teleoperated vehicle 108.
It is further foreseen that the sensor array 36 may be mounted onboard unpiloted vehicles such as robots. The sensors of each sensor array would be faced outward from the robot. The sensors of each sensor array would be in communicating relationship with the robot's computer processing means. Sensed data would be fused and operated upon by the robot to assist the robot in negotiating and interacting within its environment.
DISPLAY ASSEMBLY MEANS
1) HEAD MOUNTED DISPLAY ASSEMBLY
The processing means 18 of computer 9 generates signals 80 transmitted to the HMD 11 via conductor lines, and these are converted into two images on respective high-resolution, miniature display units 70a-70b housed within the HMD assembly. The display units are mounted on opposite sides of the HMD assembly in direct communication with the respective left and right eyes of the viewer wearing the HMD assembly. HMD assemblies of the type generally compatible with the present invention are manufactured by VPL Research Inc. of Redwood City, CA as the EyePhone TM HRX System; by LEEP Technologies Inc., Waltham, MA as the Cyberspace 2; by Sony Corporation, Inc. of Japan as the VisorTron TM; and that cited in U.S. Pat. 5,034,809 by Katoh and assigned to Palca, Inc., Tokyo, Japan.
2) LARGE DISPLAY ASSEMBLY
Alternatively, a large display assembly 12 is provided to receive and display the virtual model to a viewer. The large display assembly is configured in a polyhedral arrangement. The display assembly comprises a structural assembly that encloses the participant's head, upper body, or entire body. The assembly is designed to facilitate a single viewer or a plurality of viewers. The floor 101 and its associated display units beneath, to the sides of, and over the viewer are integrated so the participant is presented with a substantially continuous scene for viewing. The structural framework, supports, and associated fasteners 107 are integrated with the display assembly such that they are hidden from the participant and hold the assembly together. Display systems and optical enlargement means mounted on spring-hinged doors, latches, or rollers allow the entry and exit assemblies 106 to move back and forth in an open and closed position. The floor 101 on which the viewer is situated is preferably of a rigid transparent material through which the viewer sees the viewing side of the display systems or optical enlarging assemblies. Alternatively, the viewing side of the display systems or optical enlarging assemblies is constructed of materials that support the participant. The material on which the viewer is situated is preferably formed of a transparent rigid glass, plastic, or glass-plastic laminate.
The viewing surface 81 of the display units may be curved or flat and faces inward toward the center of the viewing space. The display units 70 typically comprise image projection units and associated rear projection screens, cathode ray tube display units, or flat panel displays. A single display unit 70a-70f may comprise a side of the viewing space, or a plurality of display units 70a1-70f9 may make up a side of the viewing space 58.
As shown in FIG. 19, stereographic display units 32, autostereoscopic display units 33, or holographic display units 34, and associated screens, audio components, and entry and exit ways, may be supported in a similar manner as the conventional display units and screens described in U.S. Pat. 5,130,794 by the present inventor.
As shown in FIG. 22 the entire display assembly 11 or 12 may be located on any suitable vehicle.
It is to be understood that while certain forms of the present invention have been illustrated and described herein, it is not to be limited to the specific forms or arrangement of parts described and shown.
What is claimed and desired to be secured by Letters Patent is as follows:
1. A display system for virtual interaction with said recorded images comprising:
(a) input means including:
(1) a plurality of positionable sensor means of
mutually angular relation to enable substantially continuous coverage by said sensor means of a given three-dimensional subject;
(2) sensor recorder means communicating with said
sensor means and operative to store and generate said sensor signals representing a said subject; and
(b) signal processing means communicating with said
sensitive means and said recorder means, receiving said image signals from at least one of said recorder means, and operable to texture map virtual images represented by said image signals onto a three-dimensional form;
(c) a panoramic audio-visual display assembly means
communicating with said signal processing means and enabling the display to a participant of the texture mapped virtual images; and
(d) participant control means communicating with said
image processing means and enabling interactive
manipulation of said texture mapped virtual images.
2. A system according to claim 1 wherein:
(a) the plurality of sensor means are positioned
in an inwardly facing mutually angular relation to enable substantially continuous coverage by said sensor means of a given subject.
3. A system according to claim 1 wherein:
(a) the plurality of sensor means are positioned
on a housing means in outwardly facing, mutually angular relation to enable substantially spherical coverage by said sensor means.
4. A system according to claim 1 where said input means comprises:
(a) a plurality of positionable objective lens means
of mutually angular relation to enable substantially continuous coverage by said lens means of a given subject;
(b) light sensitive means optically communicating with
said lens means and receiving respective optical images therefrom and generating image signals
representing said optical images; and
(c) image recorder means communicating with said light
sensitive means and operative to store said image signals.
5. A system according to claim 1 where said input means comprises:
(a) a plurality of microphones positioned in mutually angular relation to enable continuous coverage by said microphone means; and
(b) audio recorder means communicating with said microphones and operative to store and generate audio signals.
6. A system according to claim 1 where said input means includes:
(a) a plurality of three-dimensional digitizing sensor means positioned in mutually angular relation to enable continuous coverage by said digitizer means; and
(b) three-dimensional digitizer recording means
communicating with said digitizing sensor to store and generate signals representing a three-dimensional representation of said subject.
7. A method for recording a panoramic computer generated virtual environment comprising the following actions:
(a) digitizing the shape of all beings, objects, and
scenes to be represented as part of the virtual environment;
(b) imaging all beings, objects, and scenes to be
represented as part of the virtual environment; and
(c) recording audio representations emanating from all
beings, objects, and scenes.
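Claim 7 records three parallel representations of every being, object, and scene in the environment: digitized shape, imagery, and audio. A minimal sketch of assembling those recordings into one environment description is given below; the function and file names are hypothetical.

```python
# Sketch of the claim 7 recording actions: for each being, object, and scene
# in the virtual environment, capture a digitized shape, imagery, and audio.
def record_environment(subjects):
    environment = {}
    for name in subjects:
        environment[name] = {
            "shape": f"{name}_mesh.obj",   # action (a): digitized shape
            "imagery": f"{name}_tex.png",  # action (b): imaging
            "audio": f"{name}_snd.wav",    # action (c): audio emanating from it
        }
    return environment

env = record_environment(["being_1", "object_1", "background_scene"])
print(sorted(env))
```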
8. A system according to claim 1 wherein signal processing means includes:
(a) an apparatus for adapting video and digital signals to a computer in a telecommunications system to allow a plurality of computer terminals to engage in
teleconferencing via a digital data network.
9. A method for rendering a three-dimensional computer generated model comprising the following steps:
(a) recording substantially all aspects of a subject's shape, visual, and audio signature by recording segments representing each aspect of a subject's shape, visual, and audio signature;
(b) constructing a three-dimensional model of each segment by fusing corresponding shape and visual signatures together and assigning corresponding audio signatures to each fused shape and visual signature segment;
(c) constructing a panoramic three-dimensional model by fusing together said corresponding fused shape and visual signature segments, and assigning said
corresponding audio signatures to fused shape and visual signature segments;
(d) processing said panoramic three-dimensional model
representing the shape, imagery, and audio signature of said subject by use of a digital computer such that visual and audio signals are output to a display means;
(e) communicating said panoramic audio and visual model to the participant by an audio system and display units comprising a panoramic audio-visual display assembly means; and
(f) participant manipulation of participant interactive input devices communicating with said computer processing means to affect the audio and visual model.
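Claim 9 fuses per-segment shape, image, and audio signatures into segment models and then merges them into a single panoramic model. The sketch below illustrates only steps (b) and (c) under the assumption that each segment is a simple record of the three signatures; all names are invented for illustration.

```python
# Sketch of the claim 9 steps: (b) fuse each segment's shape and visual
# signatures and attach its audio signature, then (c) merge the fused
# segments into one panoramic model ready for processing and display.
def fuse_segment(segment):
    return {
        "geometry": segment["shape"],   # shape + visual fusion
        "texture": segment["visual"],
        "audio": segment["audio"],      # audio signature assigned to the segment
    }

def build_panoramic_model(segments):
    fused = [fuse_segment(s) for s in segments]
    return {
        "geometry": [f["geometry"] for f in fused],
        "texture": [f["texture"] for f in fused],
        "audio": [f["audio"] for f in fused],
    }

segments = [
    {"shape": "mesh_front", "visual": "img_front", "audio": "aud_front"},
    {"shape": "mesh_back",  "visual": "img_back",  "audio": "aud_back"},
]
model = build_panoramic_model(segments)   # steps (d)-(f) would process and display it
print(model["geometry"])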
10. A non-contact sensor array for recording a
three-dimensional signature comprising:
(a) a visual input sensor which records a video signal at least representing a field of coverage similar to that of an associated audio sensor and shape sensor, and conductor and interface means for transmitting said visual signal to an associated processing or display means;
(b) an audio input sensor which records an aural signal at least representing a field of coverage similar to that of said associated visual and shape sensor, and
conductor and interface means for transmitting said audio signal to an associated audio processor or speaker system;
(c) a shape input sensor which records a shape signature at least representing a field of coverage similar to that of said visual and audio sensor, and conductor and interface means for transmitting said signature to an associated processing or display means; and
(d) housing means to hold said visual, audio, and shape sensors in place and in a mutually communicating relationship with said subject, and in a communicating relationship with said processing or presentation means.
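Claim 10 groups a visual sensor, an audio sensor, and a shape sensor with a common field of coverage into one housed array. A minimal data-structure sketch follows; the field names and the ninety-degree coverage figure are assumptions made for illustration only.

```python
# Sketch of the claim 10 non-contact sensor array: co-located visual, audio,
# and shape sensors that share roughly the same field of coverage, held in a
# common housing.
from dataclasses import dataclass

@dataclass
class SensorArrayUnit:
    field_of_view_deg: float   # coverage shared by the three sensors
    camera_id: str             # (a) visual input sensor
    microphone_id: str         # (b) audio input sensor
    range_sensor_id: str       # (c) shape (range) input sensor

# Several such arrays facing outward can approximate spherical coverage.
outward_arrays = [
    SensorArrayUnit(90.0, f"cam{i}", f"mic{i}", f"range{i}") for i in range(6)
]
print(len(outward_arrays), "arrays mounted on the housing")
```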
11. A non-contact sensor array according to claim 8
including:
(a) a plurality of arrays oriented in an outwardly facing mutually angular relation to enable substantially spherical coverage of a three-dimensional subject.
12. A non-contact sensor array according to claim 8
including:
(a) a plurality of arrays oriented in an inwardly facing mutually angular relationship to enable substantially continuous coverage of a three-dimensional subject.
13. A system according to claim 1 wherein:
(a) a digital computer including a pixel programmable
display generator;
(b) said video display generator being operatively
connected to said display assembly means; and
(c) graphics input means operatively connected to said
computer to be operated by a viewer to cause the
generation, alteration, and display of images by
display assembly means.
14. A system according to claim 1 wherein signal
processing means includes:
(a) an expert system and complementary data base.
15. A system according to claim 1 wherein signal processing means includes:
(a) an expert system consisting of retrieval rules, which each associate one of several attributes to an object in accordance with the values of inputs;
(b) analysis rules, which selectively associate an attribute with an object, and which are somewhat analogous to the natural-language inference rules which would be used in communications between domain experts; and
(c) action rules, which selectively carry out the output and control actuation options, based on the attributes associated with objects by the other rules.
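Claim 15 splits the expert system into retrieval rules, analysis rules, and action rules. A toy rule-engine sketch under those three headings follows; the rule content itself is invented for illustration and is not taken from the specification.

```python
# Toy sketch of the three rule classes in claim 15; the rule content is
# invented for illustration and is not taken from the specification.
def retrieval_rules(inputs):
    # Associate an attribute with an object from the values of the inputs.
    return {"distance": "near" if inputs["range_m"] < 2.0 else "far"}

def analysis_rules(attributes):
    # Selectively derive a higher-level attribute, loosely like the
    # natural-language inference rules a domain expert would state.
    attributes["threatening"] = attributes["distance"] == "near"
    return attributes

def action_rules(attributes):
    # Carry out output and control actuation options based on the attributes.
    return "play_warning_audio" if attributes["threatening"] else "idle"

inputs = {"range_m": 1.2}
print(action_rules(analysis_rules(retrieval_rules(inputs))))  # play_warning_audio
```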
16. A system according to claim 1 wherein signal processing means includes:
(a) a database within the host computer that assigns
attributes to subjects modeled in the host computer, where the attributes are prompted by actions of the viewer operating at least one interactive input device.
17. A system according to claim 1 wherein signal processing means comprises:
(a) a plurality of host computers which each process a
segment of the panoramic subject for display.
18. A system according to claim 1 wherein signal processing means and display unit means includes:
(a) processing means to generate two independent views
taken from viewpoints a little distance apart of a portion of said images mapped onto said
three-dimensional form and communicate those views as stereoscopic video signals; and
(b) a plurality of display units of said assembly means
operable to display said signals on each respective display unit as a stereoscopic image to said
participant.
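Claim 18 generates two views of the texture-mapped model from viewpoints a small distance apart and sends them to the display units as a stereoscopic pair. The sketch below illustrates only the eye-offset computation; the interocular distance and function names are assumptions for illustration.

```python
# Sketch of generating a stereoscopic pair as in claim 18: two views of the
# texture-mapped model rendered from viewpoints offset by half the
# interocular distance to each side. Numbers are illustrative.
def stereo_viewpoints(center, interocular=0.065):
    x, y, z = center
    half = interocular / 2.0
    return (x - half, y, z), (x + half, y, z)

def render(viewpoint, model="textured 3-D form"):
    # Placeholder for rendering the model from a given viewpoint.
    return f"view of {model} from {viewpoint}"

left, right = stereo_viewpoints((0.0, 1.6, 0.0))
stereo_pair = (render(left), render(right))  # sent to the respective display units
print(stereo_pair[0])
print(stereo_pair[1])
```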
19. A system according to claim 1 wherein signal
processing means and audio-visual display assembly means includes:
(a) an autostereoscopic audio-visual display system in which the participant requires no eye glasses.
20. A panoramic autostereoscopic audio-visual display assembly comprising:
(a) structural support means for securing said
autostereoscopic display units about the viewer,
supporting the planar floor on which the viewer is situated, and supporting entry and exit portion of said assembly;
(b) said plurality of autostereoscopic display units
supported by said structural support means such that said display units face inward to the viewer, said display units operable to display autostereoscopic images, said display units arranged to form sides tangent to one another about a viewer such that a viewer views a respective portion of said composite image in substantially any viewable direction
surrounding the viewer;
(c) said rigid planar floor positioned close to the lower side of said assembly, said planar floor adaptive to viewing said autostereoscopic images emanating from said display units by said viewer, and said planar floor supporting said viewer; and
(d) said entry and exit portion of said assembly
positioned laterally of said viewer.
21. A system according to claim 1 wherein signal processing means and audio-visual display assembly means include:
(a) a holographic display system.
22. A panoramic holographic audio-visual display assembly comprising :
(a) structural support means for securing holographic
display units about the viewer, supporting the planar floor on which the viewer is situated, and supporting entry and exit portion of said assembly;
(b) said plurality of holographic display units
supported by said structural support means such that said display units face inward to the viewer, said display units operable to display holographic images, said display units arranged to form sides tangent to one another about a viewer such that a viewer views a respective portion of said composite image in
substantially any viewable direction surrounding the viewer;
(c) said rigid planar floor positioned close to the lower side of said assembly, said planar floor adaptive to viewing said holographic images emanating from said display units by said viewer, and said planar floor
supporting said viewer; and
(d) said entry and exit portion of said assembly positioned laterally of said viewer.
23. A system according to claim 1 wherein signal processing means includes:
(a) a voice recognition system.
24. A system according to claim 1 wherein signal processing means includes:
(a) a position sensing system.
25. A system according to claim 1 wherein signal processing means includes :
(a) image segment circuit means for partitioning an image output by said computer processing means.
26. A system according to claim 1 wherein the audio-visual display assembly includes:
(a) a plurality of flat panel display systems.
27. A system according to claim 1 wherein the audio-visual display assembly includes:
(a) a plurality of cathode ray tubes.
28. A system according to claim 1 wherein the audio-visual display assembly includes:
(a) a plurality of video projection display units and associated projection screen.
29. A system according to claim 1 wherein the panoramic audio-visual display assembly includes:
(a) a head mounted display assembly.
30. A system according to claim 1 wherein the panoramic audio-visual display assembly comprises:
(a) structural support means for securing display units
about the viewer, supporting the planar floor on which the viewer is situated, and supporting entry and exit portion of said assembly;
(b) said plurality of display units supported by said
structural support means such that said display units face inward to the viewer, said display units operable to display images;
(c) said display units arranged to form sides tangent to one another about a viewer such that a viewer views a respective portion of said composite image in substantially any viewable direction surrounding the viewer;
(d) said rigid planar floor positioned close to the lower side of said assembly, said planar floor adaptive to viewing said images emanating from said display units by said viewer, and said planar floor supporting said viewer; and
(e) said entry and exit portion of said assembly
positioned laterally of said viewer.
31. A system according to claim 1 where said audio-visual display assembly comprises:
(a) a head mounted display assembly receiving a processed video signal which has been processed by said
processing means for display on said head mounted display assembly; and
(b) a second audio-visual display assembly including a plurality of display units receiving respective processed video signals representing image segments of said composite image from said segment circuit means; and
(c) each said assembly means being arranged such that a viewer views a respective portion of said scene of substantially spherical coverage in any viewable direction.
32. A system according to claim 1 wherein participant control means includes:
(a) a non-contact participant position sensor system.
33. A system according to claim 1 wherein the
signal processing means includes:
(a) a target recognition system.
34. A system according to claim 1 wherein the signal processing means includes:
(a) a synthetically generated three-dimensional auditory display system.
35. A system according to claim 1 wherein the signal processing means includes:
(a) a video teleconferencing system.
36. A system according to claim 10 wherein said housing means comprises:
(a) the exterior surface of a host vehicle.
37. A system according to claim 10 wherein said housing means comprises:
(a) the exterior surface of a teleoperated vehicle.
38. A system according to claim 10 wherein said housing means comprises:
(a) the exterior surface of a robotic vehicle.
36. A system according to claim 1 wherein the system
includes:
(a) a conventional stereo over-the-air audio transmitter that transmits a stereo audio signal to a conventional stereo over-the-air receiver of conventional stereo headphones worn by a participant.
37. A system according to claim 1 wherein the third
processing means comprising the host computer for manipulation of the world model includes:
(a) associated software for assigning action to subjects in computer generated world model based upon actions by another subject in the computer generated world model.
38. A system according to claim 1 wherein the fourth
processing means of the system includes:
(a) a processing means for sampling out a plurality of
independent views of the computer generated world model and transmitting said views as separate video signals to a next device for processing or display.
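The fourth processing means described here samples out several independent views of the computer-generated world model and transmits each as a separate video signal for the next device or display. A sketch of carving azimuth windows out of a full panorama follows; the window sizes and names are assumptions made for illustration.

```python
# Sketch of sampling several independent views out of one panoramic world
# model: each display is assigned its own azimuth window of the full 360
# degrees and would receive its own video signal.
def sample_views(num_displays, horizontal_fov=90.0):
    views = []
    for i in range(num_displays):
        pan = (360.0 / num_displays) * i  # centre azimuth of this view
        views.append({"display": i, "pan_deg": pan, "fov_deg": horizontal_fov})
    return views

for view in sample_views(4):
    # Each entry stands in for a separate video signal sent to the next device.
    print(view)
```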
39. A system according to claim 1 wherein the signal processing means includes:
(a) a television production system.
40. A system according to claim 1 wherein the signal processing means comprises:
(a) a digital computer system.
41. A system according to claim 1 wherein the signal processing means includes:
(a) a knowledge based artificial intelligence system; and
(b) a peripheral device in communicating relationship with said knowledge based system such that the peripheral device operates and performs functions in the real world based on changes and actions of subjects in the virtual world; the subjects' perceptions being formed by a subject being's knowledge based system's perception of the real world sensor data modeled in the virtual model.
42. A system according to claim 1 wherein the signal processing means includes:
(a) a participant mechanical feedback device, actuated by participant actions and actions by subjects programmed into the virtual model.
43. A system according to claim 1 including:
(a) a motion simulation system.
44. A method for simultaneously generating a three- dimensional computer generated model and tracking the position and orientation of a real world subject including:
(a) positioning shape, image, and acoustical sensors to
record respective sensor representation of all sides of a given subject; and
(b) processing said same sensor data of said subject first to derive the position and orientation of the subject and secondly to render a three-dimensional computer generated model of said same subject.
PCT/US1994/000289 1993-01-11 1994-01-07 Improved panoramic image based virtual reality/telepresence audio-visual system and method WO1994016406A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US08/002,582 1993-01-11
US08/002,582 US5495576A (en) 1993-01-11 1993-01-11 Panoramic image based virtual reality/telepresence audio-visual system and method

Publications (1)

Publication Number Publication Date
WO1994016406A1 true WO1994016406A1 (en) 1994-07-21

Family

ID=21701463

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US1994/000289 WO1994016406A1 (en) 1993-01-11 1994-01-07 Improved panoramic image based virtual reality/telepresence audio-visual system and method

Country Status (2)

Country Link
US (1) US5495576A (en)
WO (1) WO1994016406A1 (en)

US9369679B2 (en) * 2006-11-07 2016-06-14 The Board Of Trustees Of The Leland Stanford Junior University System and process for projecting location-referenced panoramic images into a 3-D environment model and rendering panoramic images from arbitrary viewpoints within the 3-D environment model
DE102006052779A1 (en) * 2006-11-09 2008-05-15 Bayerische Motoren Werke Ag Method for generating an overall image of the surroundings of a motor vehicle
US20080110115A1 (en) * 2006-11-13 2008-05-15 French Barry J Exercise facility and method
US20080083031A1 (en) * 2006-12-20 2008-04-03 Microsoft Corporation Secure service computation
US8212805B1 (en) 2007-01-05 2012-07-03 Kenneth Banschick System and method for parametric display of modular aesthetic designs
US9530142B2 (en) * 2007-02-13 2016-12-27 Claudia Juliana Minsky Method and system for creating a multifunctional collage useable for client/server communication
US8005238B2 (en) 2007-03-22 2011-08-23 Microsoft Corporation Robust adaptive beamforming with enhanced noise suppression
US8669845B1 (en) 2007-03-30 2014-03-11 Vail Resorts, Inc. RFID skier monitoring systems and methods
US8005237B2 (en) 2007-05-17 2011-08-23 Microsoft Corp. Sensor array beamformer post-processor
US8570373B2 (en) 2007-06-08 2013-10-29 Cisco Technology, Inc. Tracking an object utilizing location information associated with a wireless device
US8629976B2 (en) * 2007-10-02 2014-01-14 Microsoft Corporation Methods and systems for hierarchical de-aliasing time-of-flight (TOF) systems
EP2223526B1 (en) * 2007-11-16 2015-01-28 Scallop Imaging, LLC Systems and methods of creating a virtual window
US8791984B2 (en) * 2007-11-16 2014-07-29 Scallop Imaging, Llc Digital security camera
US20090290033A1 (en) * 2007-11-16 2009-11-26 Tenebraex Corporation Systems and methods of creating a virtual window
US20090166684A1 (en) * 2007-12-26 2009-07-02 3Dv Systems Ltd. Photogate cmos pixel for 3d cameras having reduced intra-pixel cross talk
US9035876B2 (en) 2008-01-14 2015-05-19 Apple Inc. Three-dimensional user interface session control
US8933876B2 (en) 2010-12-13 2015-01-13 Apple Inc. Three dimensional user interface session control
US8452052B2 (en) * 2008-01-21 2013-05-28 The Boeing Company Modeling motion capture volumes with distance fields
AU2009210672B2 (en) 2008-02-08 2013-09-19 Google Llc Panoramic camera with multiple image sensors using timed shutters
US8355041B2 (en) 2008-02-14 2013-01-15 Cisco Technology, Inc. Telepresence system for 360 degree video conferencing
US8797377B2 (en) * 2008-02-14 2014-08-05 Cisco Technology, Inc. Method and system for videoconference configuration
US8319819B2 (en) 2008-03-26 2012-11-27 Cisco Technology, Inc. Virtual round-table videoconference
US8390667B2 (en) * 2008-04-15 2013-03-05 Cisco Technology, Inc. Pop-up PIP for people not in picture
US8320613B2 (en) * 2008-06-04 2012-11-27 Lockheed Martin Corporation Detecting and tracking targets in images based on estimated target geometry
US8385557B2 (en) 2008-06-19 2013-02-26 Microsoft Corporation Multichannel acoustic echo reduction
US8325909B2 (en) 2008-06-25 2012-12-04 Microsoft Corporation Acoustic echo suppression
US8203699B2 (en) 2008-06-30 2012-06-19 Microsoft Corporation System architecture design for time-of-flight system having reduced differential pixel size, and time-of-flight systems so designed
US8625899B2 (en) * 2008-07-10 2014-01-07 Samsung Electronics Co., Ltd. Method for recognizing and translating characters in camera-based image
KR101445185B1 (en) * 2008-07-10 2014-09-30 Samsung Electronics Co., Ltd. Flexible image photographing apparatus with a plurality of image forming units and method for manufacturing the same
US8305425B2 (en) * 2008-08-22 2012-11-06 Promos Technologies, Inc. Solid-state panoramic image capture apparatus
US8694658B2 (en) * 2008-09-19 2014-04-08 Cisco Technology, Inc. System and method for enabling communication sessions in a network environment
US8250143B2 (en) 2008-12-10 2012-08-21 International Business Machines Corporation Network driven actuator mapping agent and bus and method of use
HK1125531A2 (en) * 2008-12-24 2009-08-07 Leung Shiu Ming Action simulation device and method
US8681321B2 (en) 2009-01-04 2014-03-25 Microsoft International Holdings B.V. Gated 3D camera
US8448094B2 (en) * 2009-01-30 2013-05-21 Microsoft Corporation Mapping a natural input device to a legacy system
US8682028B2 (en) * 2009-01-30 2014-03-25 Microsoft Corporation Visual target tracking
US8267781B2 (en) * 2009-01-30 2012-09-18 Microsoft Corporation Visual target tracking
US8577085B2 (en) * 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
US8565477B2 (en) * 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US8565476B2 (en) * 2009-01-30 2013-10-22 Microsoft Corporation Visual target tracking
US8588465B2 (en) 2009-01-30 2013-11-19 Microsoft Corporation Visual target tracking
US8295546B2 (en) 2009-01-30 2012-10-23 Microsoft Corporation Pose tracking pipeline
US8294767B2 (en) 2009-01-30 2012-10-23 Microsoft Corporation Body scan
US8577084B2 (en) * 2009-01-30 2013-11-05 Microsoft Corporation Visual target tracking
US20100199228A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Gesture Keyboarding
US7996793B2 (en) 2009-01-30 2011-08-09 Microsoft Corporation Gesture recognizer system architecture
US20100199231A1 (en) * 2009-01-30 2010-08-05 Microsoft Corporation Predictive determination
US8487938B2 (en) * 2009-01-30 2013-07-16 Microsoft Corporation Standard Gestures
US8798965B2 (en) 2009-02-06 2014-08-05 The Hong Kong University Of Science And Technology Generating three-dimensional models from images
US9098926B2 (en) * 2009-02-06 2015-08-04 The Hong Kong University Of Science And Technology Generating three-dimensional façade models from images
US8659637B2 (en) * 2009-03-09 2014-02-25 Cisco Technology, Inc. System and method for providing three dimensional video conferencing in a network environment
US8477175B2 (en) 2009-03-09 2013-07-02 Cisco Technology, Inc. System and method for providing three dimensional imaging in a network environment
US20100235786A1 (en) * 2009-03-13 2010-09-16 Primesense Ltd. Enhanced 3d interfacing for remote devices
US8773355B2 (en) * 2009-03-16 2014-07-08 Microsoft Corporation Adaptive cursor sizing
US8217993B2 (en) * 2009-03-20 2012-07-10 Cranial Technologies, Inc. Three-dimensional image capture system for subjects
US9256282B2 (en) * 2009-03-20 2016-02-09 Microsoft Technology Licensing, Llc Virtual object manipulation
US8988437B2 (en) 2009-03-20 2015-03-24 Microsoft Technology Licensing, Llc Chaining animations
US9313376B1 (en) 2009-04-01 2016-04-12 Microsoft Technology Licensing, Llc Dynamic depth power equalization
US8649554B2 (en) 2009-05-01 2014-02-11 Microsoft Corporation Method to control perspective for a camera-controlled computer
US8181123B2 (en) 2009-05-01 2012-05-15 Microsoft Corporation Managing virtual port associations to users in a gesture-based computing environment
US9015638B2 (en) 2009-05-01 2015-04-21 Microsoft Technology Licensing, Llc Binding users to a gesture based system and providing feedback to the users
US9498718B2 (en) 2009-05-01 2016-11-22 Microsoft Technology Licensing, Llc Altering a view perspective within a display environment
US8340432B2 (en) * 2009-05-01 2012-12-25 Microsoft Corporation Systems and methods for detecting a tilt angle from a depth image
US9898675B2 (en) * 2009-05-01 2018-02-20 Microsoft Technology Licensing, Llc User movement tracking feedback to improve tracking
US20100277470A1 (en) * 2009-05-01 2010-11-04 Microsoft Corporation Systems And Methods For Applying Model Tracking To Motion Capture
US9377857B2 (en) 2009-05-01 2016-06-28 Microsoft Technology Licensing, Llc Show body position
US8638985B2 (en) * 2009-05-01 2014-01-28 Microsoft Corporation Human body pose estimation
US8660303B2 (en) * 2009-05-01 2014-02-25 Microsoft Corporation Detection of body and props
US8942428B2 (en) * 2009-05-01 2015-01-27 Microsoft Corporation Isolate extraneous motions
US8503720B2 (en) 2009-05-01 2013-08-06 Microsoft Corporation Human body pose estimation
US8253746B2 (en) 2009-05-01 2012-08-28 Microsoft Corporation Determine intended motions
US20100283829A1 (en) * 2009-05-11 2010-11-11 Cisco Technology, Inc. System and method for translating communications between participants in a conferencing environment
RU2518218C2 (en) * 2009-05-12 2014-06-10 Huawei Device Co., Ltd. Telepresence system, telepresence method and video collection device
US20100295771A1 (en) * 2009-05-20 2010-11-25 Microsoft Corporation Control of display objects
US20110078052A1 (en) * 2009-05-28 2011-03-31 Yunus Ciptawilangga Virtual reality ecommerce with linked user and avatar benefits
US20100306121A1 (en) * 2009-05-28 2010-12-02 Yunus Ciptawilangga Selling and delivering real goods and services within a virtual reality world
US20100306120A1 (en) * 2009-05-28 2010-12-02 Yunus Ciptawilangga Online merchandising and ecommerce with virtual reality simulation of an actual retail location
US20100306084A1 (en) * 2009-05-28 2010-12-02 Yunus Ciptawilangga Need-based online virtual reality ecommerce system
US8625837B2 (en) * 2009-05-29 2014-01-07 Microsoft Corporation Protocol and format for communicating an image from a camera to a computing environment
US20100302138A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Methods and systems for defining or modifying a visual representation
US8542252B2 (en) * 2009-05-29 2013-09-24 Microsoft Corporation Target digitization, extraction, and tracking
US8379101B2 (en) 2009-05-29 2013-02-19 Microsoft Corporation Environment and/or target segmentation
US8320619B2 (en) 2009-05-29 2012-11-27 Microsoft Corporation Systems and methods for tracking a model
US8744121B2 (en) 2009-05-29 2014-06-03 Microsoft Corporation Device for identifying and tracking multiple humans over time
US8693724B2 (en) 2009-05-29 2014-04-08 Microsoft Corporation Method and system implementing user-centric gesture control
US8509479B2 (en) * 2009-05-29 2013-08-13 Microsoft Corporation Virtual object
US9383823B2 (en) 2009-05-29 2016-07-05 Microsoft Technology Licensing, Llc Combining gestures beyond skeletal
US9182814B2 (en) 2009-05-29 2015-11-10 Microsoft Technology Licensing, Llc Systems and methods for estimating a non-visible or occluded body part
US20100306716A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Extending standard gestures
US9400559B2 (en) * 2009-05-29 2016-07-26 Microsoft Technology Licensing, Llc Gesture shortcuts
US8418085B2 (en) * 2009-05-29 2013-04-09 Microsoft Corporation Gesture coach
US8659639B2 (en) 2009-05-29 2014-02-25 Cisco Technology, Inc. System and method for extending communications between participants in a conferencing environment
US20100302365A1 (en) * 2009-05-29 2010-12-02 Microsoft Corporation Depth Image Noise Reduction
US8856691B2 (en) * 2009-05-29 2014-10-07 Microsoft Corporation Gesture tool
US8487871B2 (en) * 2009-06-01 2013-07-16 Microsoft Corporation Virtual desktop coordinate transformation
US20100309290A1 (en) * 2009-06-08 2010-12-09 Stephen Brooks Myers System for capture and display of stereoscopic content
US8390680B2 (en) * 2009-07-09 2013-03-05 Microsoft Corporation Visual representation expression based on player expression
US9159151B2 (en) * 2009-07-13 2015-10-13 Microsoft Technology Licensing, Llc Bringing a visual representation to life via learned input from the user
US9082297B2 (en) * 2009-08-11 2015-07-14 Cisco Technology, Inc. System and method for verifying parameters in an audiovisual environment
US8264536B2 (en) * 2009-08-25 2012-09-11 Microsoft Corporation Depth-sensitive imaging via polarization-state mapping
US9141193B2 (en) * 2009-08-31 2015-09-22 Microsoft Technology Licensing, Llc Techniques for using human gestures to control gesture unaware programs
US8330134B2 (en) 2009-09-14 2012-12-11 Microsoft Corporation Optical fault monitoring
US8508919B2 (en) * 2009-09-14 2013-08-13 Microsoft Corporation Separation of electrical and optical components
US8976986B2 (en) * 2009-09-21 2015-03-10 Microsoft Technology Licensing, Llc Volume adjustment based on listener position
US8760571B2 (en) * 2009-09-21 2014-06-24 Microsoft Corporation Alignment of lens and image sensor
US8428340B2 (en) * 2009-09-21 2013-04-23 Microsoft Corporation Screen space plane identification
WO2011037964A1 (en) * 2009-09-22 2011-03-31 Tenebraex Corporation Systems and methods for correcting images in a multi-sensor system
US9014546B2 (en) 2009-09-23 2015-04-21 Rovi Guides, Inc. Systems and methods for automatically detecting users within detection regions of media devices
US8452087B2 (en) * 2009-09-30 2013-05-28 Microsoft Corporation Image selection techniques
US8723118B2 (en) * 2009-10-01 2014-05-13 Microsoft Corporation Imager for constructing color and depth images
US20110083108A1 (en) * 2009-10-05 2011-04-07 Microsoft Corporation Providing user interface feedback regarding cursor position on a display screen
US7961910B2 (en) 2009-10-07 2011-06-14 Microsoft Corporation Systems and methods for tracking a model
US8867820B2 (en) 2009-10-07 2014-10-21 Microsoft Corporation Systems and methods for removing a background of an image
US8564534B2 (en) 2009-10-07 2013-10-22 Microsoft Corporation Human tracking system
US8963829B2 (en) 2009-10-07 2015-02-24 Microsoft Corporation Methods and systems for determining and tracking extremities of a target
US9400548B2 (en) * 2009-10-19 2016-07-26 Microsoft Technology Licensing, Llc Gesture personalization and profile roaming
US20110099476A1 (en) * 2009-10-23 2011-04-28 Microsoft Corporation Decorating a display environment
US8988432B2 (en) * 2009-11-05 2015-03-24 Microsoft Technology Licensing, Llc Systems and methods for processing an image for target tracking
US20110109617A1 (en) * 2009-11-12 2011-05-12 Microsoft Corporation Visualizing Depth
US8843857B2 (en) 2009-11-19 2014-09-23 Microsoft Corporation Distance scalable no touch computing
US9110365B2 (en) * 2009-11-19 2015-08-18 Olympus Corporation Imaging apparatus
US9244533B2 (en) * 2009-12-17 2016-01-26 Microsoft Technology Licensing, Llc Camera navigation for presentations
US20110151974A1 (en) * 2009-12-18 2011-06-23 Microsoft Corporation Gesture style recognition and reward
US20110150271A1 (en) 2009-12-18 2011-06-23 Microsoft Corporation Motion detection using depth images
US8320621B2 (en) 2009-12-21 2012-11-27 Microsoft Corporation Depth projector system with integrated VCSEL array
US20110164032A1 (en) * 2010-01-07 2011-07-07 Prime Sense Ltd. Three-Dimensional User Interface
US9019201B2 (en) * 2010-01-08 2015-04-28 Microsoft Technology Licensing, Llc Evolving universal gesture sets
US9268404B2 (en) * 2010-01-08 2016-02-23 Microsoft Technology Licensing, Llc Application gesture interpretation
US8631355B2 (en) * 2010-01-08 2014-01-14 Microsoft Corporation Assigning gesture dictionaries
US8334842B2 (en) 2010-01-15 2012-12-18 Microsoft Corporation Recognizing user intent in motion capture system
US8933884B2 (en) * 2010-01-15 2015-01-13 Microsoft Corporation Tracking groups of users in motion capture system
US8676581B2 (en) 2010-01-22 2014-03-18 Microsoft Corporation Speech recognition analysis via identification information
US8864581B2 (en) 2010-01-29 2014-10-21 Microsoft Corporation Visual based identity tracking
US8891067B2 (en) * 2010-02-01 2014-11-18 Microsoft Corporation Multiple synchronized optical sources for time-of-flight range finding systems
US8687044B2 (en) * 2010-02-02 2014-04-01 Microsoft Corporation Depth camera compatibility
US8619122B2 (en) * 2010-02-02 2013-12-31 Microsoft Corporation Depth camera compatibility
US8717469B2 (en) * 2010-02-03 2014-05-06 Microsoft Corporation Fast gating photosurface
US8659658B2 (en) * 2010-02-09 2014-02-25 Microsoft Corporation Physical interaction zone for gesture-based user interfaces
US8499257B2 (en) * 2010-02-09 2013-07-30 Microsoft Corporation Handles interactions for human—computer interface
US20110199302A1 (en) * 2010-02-16 2011-08-18 Microsoft Corporation Capturing screen objects using a collision volume
US8633890B2 (en) * 2010-02-16 2014-01-21 Microsoft Corporation Gesture detection based on joint skipping
US20110202845A1 (en) * 2010-02-17 2011-08-18 Anthony Jon Mountjoy System and method for generating and distributing three dimensional interactive content
US8928579B2 (en) * 2010-02-22 2015-01-06 Andrew David Wilson Interacting with an omni-directionally projected display
US8411948B2 (en) 2010-03-05 2013-04-02 Microsoft Corporation Up-sampling binary images for segmentation
US8655069B2 (en) * 2010-03-05 2014-02-18 Microsoft Corporation Updating image segmentation following user input
US8422769B2 (en) * 2010-03-05 2013-04-16 Microsoft Corporation Image segmentation using reduced foreground training data
US20110221755A1 (en) * 2010-03-12 2011-09-15 Kevin Geisner Bionic motion
US20110223995A1 (en) 2010-03-12 2011-09-15 Kevin Geisner Interacting with a computer based application
US8279418B2 (en) * 2010-03-17 2012-10-02 Microsoft Corporation Raster scanning for depth detection
US8189037B2 (en) * 2010-03-17 2012-05-29 Seiko Epson Corporation Various configurations of the viewing window based 3D display system
US9225916B2 (en) * 2010-03-18 2015-12-29 Cisco Technology, Inc. System and method for enhancing video images in a conferencing environment
US8213680B2 (en) * 2010-03-19 2012-07-03 Microsoft Corporation Proxy training data for human body tracking
USD626103S1 (en) 2010-03-21 2010-10-26 Cisco Technology, Inc. Video unit with integrated features
USD626102S1 (en) 2010-03-21 2010-10-26 Cisco Technology, Inc. Video unit with integrated features
US8514269B2 (en) * 2010-03-26 2013-08-20 Microsoft Corporation De-aliasing depth images
US20110234481A1 (en) * 2010-03-26 2011-09-29 Sagi Katz Enhancing presentations using depth sensing cameras
US8523667B2 (en) * 2010-03-29 2013-09-03 Microsoft Corporation Parental control settings based on body dimensions
US8605763B2 (en) 2010-03-31 2013-12-10 Microsoft Corporation Temperature measurement and control for laser and light-emitting diodes
US9098873B2 (en) 2010-04-01 2015-08-04 Microsoft Technology Licensing, Llc Motion-based interactive shopping environment
US9646340B2 (en) 2010-04-01 2017-05-09 Microsoft Technology Licensing, Llc Avatar-based virtual dressing room
US8351651B2 (en) 2010-04-26 2013-01-08 Microsoft Corporation Hand-location post-process refinement in a tracking system
US8379919B2 (en) 2010-04-29 2013-02-19 Microsoft Corporation Multiple centroid condensation of probability distribution clouds
US8284847B2 (en) 2010-05-03 2012-10-09 Microsoft Corporation Detecting motion for a multifunction sensor device
US8498481B2 (en) 2010-05-07 2013-07-30 Microsoft Corporation Image segmentation using star-convexity constraints
US8885890B2 (en) 2010-05-07 2014-11-11 Microsoft Corporation Depth map confidence filtering
US9313452B2 (en) 2010-05-17 2016-04-12 Cisco Technology, Inc. System and method for providing retracting optics in a video conferencing environment
US8457353B2 (en) 2010-05-18 2013-06-04 Microsoft Corporation Gestures and gesture modifiers for manipulating a user-interface
US9183560B2 (en) 2010-05-28 2015-11-10 Daniel H. Abelow Reality alternate
US9122707B2 (en) * 2010-05-28 2015-09-01 Nokia Technologies Oy Method and apparatus for providing a localized virtual reality environment
US8803888B2 (en) 2010-06-02 2014-08-12 Microsoft Corporation Recognition system for sharing information
US9008355B2 (en) 2010-06-04 2015-04-14 Microsoft Technology Licensing, Llc Automatic depth camera aiming
US8751215B2 (en) 2010-06-04 2014-06-10 Microsoft Corporation Machine based sign language interpreter
US9557574B2 (en) 2010-06-08 2017-01-31 Microsoft Technology Licensing, Llc Depth illumination and detection optics
US8330822B2 (en) 2010-06-09 2012-12-11 Microsoft Corporation Thermally-tuned depth camera light source
US9384329B2 (en) 2010-06-11 2016-07-05 Microsoft Technology Licensing, Llc Caloric burn determination from body movement
US8675981B2 (en) 2010-06-11 2014-03-18 Microsoft Corporation Multi-modal gender recognition including depth data
US8749557B2 (en) 2010-06-11 2014-06-10 Microsoft Corporation Interacting with user interface via avatar
US8982151B2 (en) 2010-06-14 2015-03-17 Microsoft Technology Licensing, Llc Independently processing planes of display data
US8558873B2 (en) 2010-06-16 2013-10-15 Microsoft Corporation Use of wavefront coding to create a depth image
US8670029B2 (en) 2010-06-16 2014-03-11 Microsoft Corporation Depth camera illuminator with superluminescent light-emitting diode
US8296151B2 (en) 2010-06-18 2012-10-23 Microsoft Corporation Compound gesture-speech commands
US8381108B2 (en) 2010-06-21 2013-02-19 Microsoft Corporation Natural user input for driving interactive stories
US8416187B2 (en) 2010-06-22 2013-04-09 Microsoft Corporation Item navigation using motion-capture data
US9053562B1 (en) 2010-06-24 2015-06-09 Gregory S. Rabin Two dimensional to three dimensional moving image converter
US9789392B1 (en) * 2010-07-09 2017-10-17 Open Invention Network Llc Action or position triggers in a game play mode
US9201501B2 (en) 2010-07-20 2015-12-01 Apple Inc. Adaptive projector
CN102959616B (en) 2010-07-20 2015-06-10 Apple Inc. Interactive reality augmentation for natural interaction
US20150375109A1 (en) * 2010-07-26 2015-12-31 Matthew E. Ward Method of Integrating Ad Hoc Camera Networks in Interactive Mesh Systems
US9075434B2 (en) 2010-08-20 2015-07-07 Microsoft Technology Licensing, Llc Translating user motion into multiple object responses
US8613666B2 (en) 2010-08-31 2013-12-24 Microsoft Corporation User selection and navigation based on looped motions
US8704879B1 (en) 2010-08-31 2014-04-22 Nintendo Co., Ltd. Eye tracking enabling 3D viewing on conventional 2D display
US8896655B2 (en) 2010-08-31 2014-11-25 Cisco Technology, Inc. System and method for providing depth adaptive video conferencing
US20120058824A1 (en) 2010-09-07 2012-03-08 Microsoft Corporation Scalable real-time motion recognition
US8437506B2 (en) 2010-09-07 2013-05-07 Microsoft Corporation System for fast, probabilistic skeletal tracking
US8599934B2 (en) 2010-09-08 2013-12-03 Cisco Technology, Inc. System and method for skip coding during video conferencing in a network environment
US8988508B2 (en) 2010-09-24 2015-03-24 Microsoft Technology Licensing, Llc. Wide angle field of view active illumination imaging system
US8959013B2 (en) 2010-09-27 2015-02-17 Apple Inc. Virtual keyboard for a non-tactile three dimensional user interface
US8681255B2 (en) 2010-09-28 2014-03-25 Microsoft Corporation Integrated low power depth camera and projection device
US8548270B2 (en) 2010-10-04 2013-10-01 Microsoft Corporation Time-of-flight depth imaging
US9484065B2 (en) 2010-10-15 2016-11-01 Microsoft Technology Licensing, Llc Intelligent determination of replays based on event identification
US8599865B2 (en) 2010-10-26 2013-12-03 Cisco Technology, Inc. System and method for provisioning flows in a mobile network environment
US8592739B2 (en) 2010-11-02 2013-11-26 Microsoft Corporation Detection of configuration changes of an optical element in an illumination system
US8866889B2 (en) 2010-11-03 2014-10-21 Microsoft Corporation In-home depth camera calibration
US8699457B2 (en) 2010-11-03 2014-04-15 Cisco Technology, Inc. System and method for managing flows in a mobile network environment
US9325804B2 (en) * 2010-11-08 2016-04-26 Microsoft Technology Licensing, Llc Dynamic image result stitching
US8667519B2 (en) 2010-11-12 2014-03-04 Microsoft Corporation Automatic passive and anonymous feedback system
US8730297B2 (en) 2010-11-15 2014-05-20 Cisco Technology, Inc. System and method for providing camera functions in a video environment
US10726861B2 (en) 2010-11-15 2020-07-28 Microsoft Technology Licensing, Llc Semi-private communication in open environments
US9143725B2 (en) 2010-11-15 2015-09-22 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US8902244B2 (en) 2010-11-15 2014-12-02 Cisco Technology, Inc. System and method for providing enhanced graphics in a video environment
US9338394B2 (en) 2010-11-15 2016-05-10 Cisco Technology, Inc. System and method for providing enhanced audio in a video environment
US8542264B2 (en) 2010-11-18 2013-09-24 Cisco Technology, Inc. System and method for managing optics in a video environment
US8723914B2 (en) 2010-11-19 2014-05-13 Cisco Technology, Inc. System and method for providing enhanced video processing in a network environment
US9349040B2 (en) 2010-11-19 2016-05-24 Microsoft Technology Licensing, Llc Bi-modal depth-image analysis
US9111138B2 (en) 2010-11-30 2015-08-18 Cisco Technology, Inc. System and method for gesture interface control
US10234545B2 (en) 2010-12-01 2019-03-19 Microsoft Technology Licensing, Llc Light source module
US8872762B2 (en) 2010-12-08 2014-10-28 Primesense Ltd. Three dimensional user interface cursor control
US8553934B2 (en) 2010-12-08 2013-10-08 Microsoft Corporation Orienting the position of a sensor
US8618405B2 (en) 2010-12-09 2013-12-31 Microsoft Corp. Free-space gesture musical instrument digital interface (MIDI) controller
DE102010053895A1 (en) 2010-12-09 2012-06-14 Eads Deutschland Gmbh Environment display device as well as a vehicle with such an environment-presentation device and method for displaying a panoramic image
CN102157011B (en) * 2010-12-10 2012-12-26 Peking University Method for carrying out dynamic texture acquisition and virtuality-reality fusion by using mobile shooting equipment
US8408706B2 (en) 2010-12-13 2013-04-02 Microsoft Corporation 3D gaze tracker
CA2819257C (en) 2010-12-14 2019-09-24 Hologic, Inc. System and method for fusing three dimensional image data from a plurality of different imaging systems for use in diagnostic imaging
US8884968B2 (en) 2010-12-15 2014-11-11 Microsoft Corporation Modeling an object from image data
US9171264B2 (en) 2010-12-15 2015-10-27 Microsoft Technology Licensing, Llc Parallel processing machine learning decision tree training
US8920241B2 (en) 2010-12-15 2014-12-30 Microsoft Corporation Gesture controlled persistent handles for interface guides
USD678308S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD678320S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD678894S1 (en) 2010-12-16 2013-03-26 Cisco Technology, Inc. Display screen with graphical user interface
USD678307S1 (en) 2010-12-16 2013-03-19 Cisco Technology, Inc. Display screen with graphical user interface
USD682854S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen for graphical user interface
USD682293S1 (en) 2010-12-16 2013-05-14 Cisco Technology, Inc. Display screen with graphical user interface
USD682864S1 (en) 2010-12-16 2013-05-21 Cisco Technology, Inc. Display screen with graphical user interface
US9690099B2 (en) * 2010-12-17 2017-06-27 Microsoft Technology Licensing, Llc Optimized focal area for augmented reality displays
US8448056B2 (en) 2010-12-17 2013-05-21 Microsoft Corporation Validation analysis of human target
US8803952B2 (en) 2010-12-20 2014-08-12 Microsoft Corporation Plural detector time-of-flight depth mapping
US8385596B2 (en) 2010-12-21 2013-02-26 Microsoft Corporation First person shooter control with virtual skeleton
US9848106B2 (en) 2010-12-21 2017-12-19 Microsoft Technology Licensing, Llc Intelligent gameplay photo capture
US9823339B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Plural anode time-of-flight sensor
US9821224B2 (en) 2010-12-21 2017-11-21 Microsoft Technology Licensing, Llc Driving simulator control with virtual skeleton
US8994718B2 (en) 2010-12-21 2015-03-31 Microsoft Technology Licensing, Llc Skeletal control of three-dimensional virtual world
US9123316B2 (en) 2010-12-27 2015-09-01 Microsoft Technology Licensing, Llc Interactive content creation
US8488888B2 (en) 2010-12-28 2013-07-16 Microsoft Corporation Classification of posture states
US8401242B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Real-time camera tracking using depth maps
US8401225B2 (en) 2011-01-31 2013-03-19 Microsoft Corporation Moving object segmentation using depth images
US8587583B2 (en) 2011-01-31 2013-11-19 Microsoft Corporation Three-dimensional environment reconstruction
US9247238B2 (en) 2011-01-31 2016-01-26 Microsoft Technology Licensing, Llc Reducing interference between multiple infra-red depth cameras
US8730232B2 (en) 2011-02-01 2014-05-20 Legend3D, Inc. Director-style based 2D to 3D movie conversion system and method
US8724887B2 (en) 2011-02-03 2014-05-13 Microsoft Corporation Environmental modifications to mitigate environmental factors
CN106125921B (en) 2011-02-09 2019-01-15 Apple Inc. Gaze detection in 3D map environment
US8942917B2 (en) 2011-02-14 2015-01-27 Microsoft Corporation Change invariant scene recognition by an agent
US8497838B2 (en) 2011-02-16 2013-07-30 Microsoft Corporation Push actuation of interface controls
US9113130B2 (en) 2012-02-06 2015-08-18 Legend3D, Inc. Multi-stage production pipeline system
US9407904B2 (en) 2013-05-01 2016-08-02 Legend3D, Inc. Method for creating 3D virtual reality from 2D images
US9288476B2 (en) 2011-02-17 2016-03-15 Legend3D, Inc. System and method for real-time depth modification of stereo images of a virtual reality environment
US9241147B2 (en) 2013-05-01 2016-01-19 Legend3D, Inc. External depth map transformation method for conversion of two-dimensional images to stereoscopic images
US9282321B2 (en) 2011-02-17 2016-03-08 Legend3D, Inc. 3D model multi-reviewer system
US8692862B2 (en) 2011-02-28 2014-04-08 Cisco Technology, Inc. System and method for selection of video data in a video conference environment
US9551914B2 (en) 2011-03-07 2017-01-24 Microsoft Technology Licensing, Llc Illuminator with refractive optical element
US9067136B2 (en) 2011-03-10 2015-06-30 Microsoft Technology Licensing, Llc Push personalization of interface controls
US8571263B2 (en) 2011-03-17 2013-10-29 Microsoft Corporation Predicting joint positions
US9470778B2 (en) 2011-03-29 2016-10-18 Microsoft Technology Licensing, Llc Learning from high quality depth measurements
US9842168B2 (en) 2011-03-31 2017-12-12 Microsoft Technology Licensing, Llc Task driven user intents
US10642934B2 (en) 2011-03-31 2020-05-05 Microsoft Technology Licensing, Llc Augmented conversational understanding architecture
US9298287B2 (en) 2011-03-31 2016-03-29 Microsoft Technology Licensing, Llc Combined activation for natural user interface systems
US9760566B2 (en) 2011-03-31 2017-09-12 Microsoft Technology Licensing, Llc Augmented conversational understanding agent to identify conversation context between two humans and taking an agent action thereof
US9597590B2 (en) * 2011-04-01 2017-03-21 Massachusetts Institute Of Technology Methods and apparatus for accessing peripheral content
US8824749B2 (en) 2011-04-05 2014-09-02 Microsoft Corporation Biometric recognition
US8503494B2 (en) 2011-04-05 2013-08-06 Microsoft Corporation Thermal management system
US8620113B2 (en) 2011-04-25 2013-12-31 Microsoft Corporation Laser diode modes
US8670019B2 (en) 2011-04-28 2014-03-11 Cisco Technology, Inc. System and method for providing enhanced eye gaze in a video conferencing environment
US8702507B2 (en) 2011-04-28 2014-04-22 Microsoft Corporation Manual and camera-based avatar control
US9259643B2 (en) 2011-04-28 2016-02-16 Microsoft Technology Licensing, Llc Control of separate computer game elements
US8786631B1 (en) 2011-04-30 2014-07-22 Cisco Technology, Inc. System and method for transferring transparency information in a video environment
US10671841B2 (en) 2011-05-02 2020-06-02 Microsoft Technology Licensing, Llc Attribute state classification
US8888331B2 (en) 2011-05-09 2014-11-18 Microsoft Corporation Low inductance light source module
US9064006B2 (en) 2012-08-23 2015-06-23 Microsoft Technology Licensing, Llc Translating natural language utterances to keyword search queries
US8934026B2 (en) 2011-05-12 2015-01-13 Cisco Technology, Inc. System and method for video coding in a dynamic environment
US9137463B2 (en) 2011-05-12 2015-09-15 Microsoft Technology Licensing, Llc Adaptive high dynamic range camera
US8788973B2 (en) 2011-05-23 2014-07-22 Microsoft Corporation Three-dimensional gesture controlled avatar configuration interface
US9007430B2 (en) * 2011-05-27 2015-04-14 Thomas Seidl System and method for creating a navigable, three-dimensional virtual reality environment having ultra-wide field of view
US8760395B2 (en) 2011-05-31 2014-06-24 Microsoft Corporation Gesture recognition techniques
US8526734B2 (en) 2011-06-01 2013-09-03 Microsoft Corporation Three-dimensional background removal for vision system
US9594430B2 (en) 2011-06-01 2017-03-14 Microsoft Technology Licensing, Llc Three-dimensional foreground selection for vision system
US8597142B2 (en) 2011-06-06 2013-12-03 Microsoft Corporation Dynamic camera based practice mode
US9098110B2 (en) 2011-06-06 2015-08-04 Microsoft Technology Licensing, Llc Head rotation tracking from depth-based center of mass
US8929612B2 (en) 2011-06-06 2015-01-06 Microsoft Corporation System for recognizing an open or closed hand
US10796494B2 (en) 2011-06-06 2020-10-06 Microsoft Technology Licensing, Llc Adding attributes to virtual representations of real-world objects
US9013489B2 (en) 2011-06-06 2015-04-21 Microsoft Technology Licensing, Llc Generation of avatar reflecting player appearance
US9724600B2 (en) 2011-06-06 2017-08-08 Microsoft Technology Licensing, Llc Controlling objects in a virtual environment
US9208571B2 (en) 2011-06-06 2015-12-08 Microsoft Technology Licensing, Llc Object digitization
US8897491B2 (en) 2011-06-06 2014-11-25 Microsoft Corporation System for finger recognition and tracking
US9597587B2 (en) 2011-06-08 2017-03-21 Microsoft Technology Licensing, Llc Locational node device
US9377865B2 (en) 2011-07-05 2016-06-28 Apple Inc. Zoom-based gesture user interface
US9459758B2 (en) 2011-07-05 2016-10-04 Apple Inc. Gesture-based interface with enhanced features
US8881051B2 (en) 2011-07-05 2014-11-04 Primesense Ltd Zoom-based gesture user interface
US8912979B1 (en) 2011-07-14 2014-12-16 Google Inc. Virtual window in head-mounted display
US9030498B2 (en) 2011-08-15 2015-05-12 Apple Inc. Combining explicit select gestures and timeclick in a non-tactile three dimensional user interface
US9509922B2 (en) * 2011-08-17 2016-11-29 Microsoft Technology Licensing, Llc Content normalization on digital displays
US8786730B2 (en) 2011-08-18 2014-07-22 Microsoft Corporation Image exposure using exclusion regions
US9218063B2 (en) 2011-08-24 2015-12-22 Apple Inc. Sessionless pointing user interface
US9122311B2 (en) 2011-08-24 2015-09-01 Apple Inc. Visual feedback for tactile and non-tactile user interfaces
US9274595B2 (en) 2011-08-26 2016-03-01 Reincloud Corporation Coherent presentation of multiple reality and interaction models
US20130271564A1 (en) * 2011-10-10 2013-10-17 Fabio Zaffagnini Mountain, natural park track view with panoramas taken with an acquisition system mounted on a backpack
CN103105926A (en) 2011-10-17 2013-05-15 Microsoft Corporation Multi-sensor posture recognition
US9557836B2 (en) 2011-11-01 2017-01-31 Microsoft Technology Licensing, Llc Depth image compression
US9117281B2 (en) 2011-11-02 2015-08-25 Microsoft Corporation Surface segmentation from RGB and depth images
US8854426B2 (en) 2011-11-07 2014-10-07 Microsoft Corporation Time-of-flight camera with guided light
US8947493B2 (en) 2011-11-16 2015-02-03 Cisco Technology, Inc. System and method for alerting a participant in a video conference
US8724906B2 (en) 2011-11-18 2014-05-13 Microsoft Corporation Computing pose and/or shape of modifiable entities
US8509545B2 (en) 2011-11-29 2013-08-13 Microsoft Corporation Foreground subject detection
US8635637B2 (en) 2011-12-02 2014-01-21 Microsoft Corporation User interface presenting an animated avatar performing a media reaction
US8803800B2 (en) 2011-12-02 2014-08-12 Microsoft Corporation User interface control based on head orientation
CN103959340A (en) * 2011-12-07 2014-07-30 Intel Corporation Graphics rendering technique for autostereoscopic three dimensional display
US9100685B2 (en) 2011-12-09 2015-08-04 Microsoft Technology Licensing, Llc Determining audience state or interest using passive sensor data
US8971612B2 (en) 2011-12-15 2015-03-03 Microsoft Corporation Learning image processing tasks from scene reconstructions
US8879831B2 (en) 2011-12-15 2014-11-04 Microsoft Corporation Using high-level attributes to guide image processing
US8630457B2 (en) 2011-12-15 2014-01-14 Microsoft Corporation Problem states for pose tracking pipeline
US8811938B2 (en) 2011-12-16 2014-08-19 Microsoft Corporation Providing a user interface experience based on inferred vehicle state
US8682087B2 (en) 2011-12-19 2014-03-25 Cisco Technology, Inc. System and method for depth-guided image filtering in a video conference environment
US9342139B2 (en) 2011-12-19 2016-05-17 Microsoft Technology Licensing, Llc Pairing a computing device to a user
US9646453B2 (en) * 2011-12-23 2017-05-09 Bally Gaming, Inc. Integrating three-dimensional and two-dimensional gaming elements
US9720089B2 (en) 2012-01-23 2017-08-01 Microsoft Technology Licensing, Llc 3D zoom imager
US9229534B2 (en) 2012-02-28 2016-01-05 Apple Inc. Asymmetric mapping for tactile and non-tactile user interfaces
AU2013239179B2 (en) 2012-03-26 2015-08-20 Apple Inc. Enhanced virtual touchpad and touchscreen
US9170648B2 (en) * 2012-04-03 2015-10-27 The Boeing Company System and method for virtual engineering
US8898687B2 (en) 2012-04-04 2014-11-25 Microsoft Corporation Controlling a media program based on a media reaction
US9019316B2 (en) 2012-04-15 2015-04-28 Trimble Navigation Limited Identifying a point of interest from different stations
US11284137B2 (en) 2012-04-24 2022-03-22 Skreens Entertainment Technologies, Inc. Video processing systems and methods for display, selection and navigation of a combination of heterogeneous sources
US9743119B2 (en) 2012-04-24 2017-08-22 Skreens Entertainment Technologies, Inc. Video display system
US10499118B2 (en) * 2012-04-24 2019-12-03 Skreens Entertainment Technologies, Inc. Virtual and augmented reality system and headset display
US9210401B2 (en) 2012-05-03 2015-12-08 Microsoft Technology Licensing, Llc Projected visual cues for guiding physical movement
CA2775700C (en) 2012-05-04 2013-07-23 Microsoft Corporation Determining a future portion of a currently presented media program
US9153073B2 (en) * 2012-05-23 2015-10-06 Qualcomm Incorporated Spatially registered augmented video
JP6018707B2 (en) 2012-06-21 2016-11-02 Microsoft Corporation Building an avatar using a depth camera
US9836590B2 (en) 2012-06-22 2017-12-05 Microsoft Technology Licensing, Llc Enhanced accuracy of user presence status determination
US20140195285A1 (en) * 2012-07-20 2014-07-10 Abbas Aghakhani System and method for creating cultural heritage tour program and historical environment for tourists
US9495783B1 (en) * 2012-07-25 2016-11-15 Sri International Augmented reality vision system for tracking and geolocating objects of interest
US9696427B2 (en) 2012-08-14 2017-07-04 Microsoft Technology Licensing, Llc Wide angle depth detection
US9429912B2 (en) 2012-08-17 2016-08-30 Microsoft Technology Licensing, Llc Mixed reality holographic object development
US8451323B1 (en) * 2012-09-12 2013-05-28 Google Inc. Building data models by using light to determine material properties of illuminated objects
TWI590099B (en) * 2012-09-27 2017-07-01 緯創資通股份有限公司 Interaction system and motion detection method
US9007365B2 (en) 2012-11-27 2015-04-14 Legend3D, Inc. Line depth augmentation system and method for conversion of 2D images to 3D images
US9547937B2 (en) 2012-11-30 2017-01-17 Legend3D, Inc. Three-dimensional annotation system and method
US9265458B2 (en) 2012-12-04 2016-02-23 Sync-Think, Inc. Application of smooth pursuit cognitive testing paradigms to clinical drug development
US9681154B2 (en) 2012-12-06 2017-06-13 Patent Capital Group System and method for depth-guided filtering in a video conference environment
US8882310B2 (en) 2012-12-10 2014-11-11 Microsoft Corporation Laser die light source module with low inductance
US10116911B2 (en) * 2012-12-18 2018-10-30 Qualcomm Incorporated Realistic point of view video method and apparatus
US9857470B2 (en) 2012-12-28 2018-01-02 Microsoft Technology Licensing, Llc Using photometric stereo for 3D environment modeling
JP6143469B2 (en) * 2013-01-17 2017-06-07 Canon Inc. Information processing apparatus, information processing method, and program
US9251590B2 (en) 2013-01-24 2016-02-02 Microsoft Technology Licensing, Llc Camera pose estimation for 3D reconstruction
US9052746B2 (en) 2013-02-15 2015-06-09 Microsoft Technology Licensing, Llc User center-of-mass and mass distribution extraction using depth images
US9940553B2 (en) 2013-02-22 2018-04-10 Microsoft Technology Licensing, Llc Camera/object pose from predicted coordinates
US9135516B2 (en) 2013-03-08 2015-09-15 Microsoft Technology Licensing, Llc User body angle, curvature and average extremity positions extraction using depth images
US9380976B2 (en) 2013-03-11 2016-07-05 Sync-Think, Inc. Optical neuroinformatics
CN105188516B (en) 2013-03-11 2017-12-22 Magic Leap, Inc. System and method for augmented and virtual reality
US9092657B2 (en) 2013-03-13 2015-07-28 Microsoft Technology Licensing, Llc Depth image processing
US9274606B2 (en) 2013-03-14 2016-03-01 Microsoft Technology Licensing, Llc NUI video conference controls
US9992021B1 (en) 2013-03-14 2018-06-05 GoTenna, Inc. System and method for private and point-to-point communication between computing devices
US9007404B2 (en) 2013-03-15 2015-04-14 Legend3D, Inc. Tilt-based look around effect image enhancement method
CN107577350B (en) 2013-03-15 2020-10-23 Magic Leap, Inc. Display system and method
US9189034B2 (en) * 2013-03-25 2015-11-17 Panasonic intellectual property Management co., Ltd Electronic device
US9953213B2 (en) 2013-03-27 2018-04-24 Microsoft Technology Licensing, Llc Self discovery of autonomous NUI devices
US9438878B2 (en) 2013-05-01 2016-09-06 Legend3D, Inc. Method of converting 2D video to 3D video using 3D object models
US9442186B2 (en) 2013-05-13 2016-09-13 Microsoft Technology Licensing, Llc Interference reduction for TOF systems
US10295338B2 (en) 2013-07-12 2019-05-21 Magic Leap, Inc. Method and system for generating map data from an image
US9215382B1 (en) * 2013-07-25 2015-12-15 The United States Of America As Represented By The Secretary Of The Navy Apparatus and method for data fusion and visualization of video and LADAR data
US11019258B2 (en) 2013-08-21 2021-05-25 Verizon Patent And Licensing Inc. Aggregating images and audio data to generate content
US9451162B2 (en) * 2013-08-21 2016-09-20 Jaunt Inc. Camera array including camera modules
US9462253B2 (en) 2013-09-23 2016-10-04 Microsoft Technology Licensing, Llc Optical modules that reduce speckle contrast and diffraction artifacts
KR101491324B1 (en) * 2013-10-08 2015-02-06 Hyundai Motor Company Apparatus for taking images for a vehicle
US9443310B2 (en) 2013-10-09 2016-09-13 Microsoft Technology Licensing, Llc Illumination modules that emit structured light
US9674563B2 (en) 2013-11-04 2017-06-06 Rovi Guides, Inc. Systems and methods for recommending content
US9769459B2 (en) 2013-11-12 2017-09-19 Microsoft Technology Licensing, Llc Power efficient laser diode driver circuit and method
US9508385B2 (en) 2013-11-21 2016-11-29 Microsoft Technology Licensing, Llc Audio-visual project generator
EP2887322B1 (en) * 2013-12-18 2020-02-12 Microsoft Technology Licensing, LLC Mixed reality holographic object development
US9971491B2 (en) 2014-01-09 2018-05-15 Microsoft Technology Licensing, Llc Gesture library for natural user input
WO2015126987A1 (en) 2014-02-18 2015-08-27 Merge Labs, Inc. Head mounted display goggles for use with mobile computing devices
USD751072S1 (en) 2014-02-18 2016-03-08 Merge Labs, Inc. Mobile head mounted display
CN104866261B (en) * 2014-02-24 2018-08-10 Lenovo (Beijing) Co., Ltd. Information processing method and device
JP2015194709A (en) * 2014-03-28 2015-11-05 Panasonic Intellectual Property Management Co., Ltd. Image display device
US9384402B1 (en) 2014-04-10 2016-07-05 Google Inc. Image and video compression for remote vehicle assistance
US9986154B2 (en) * 2014-05-21 2018-05-29 Here Global B.V. Developing a panoramic image
US9911454B2 (en) 2014-05-29 2018-03-06 Jaunt Inc. Camera array including camera modules
US20150370067A1 (en) * 2014-06-20 2015-12-24 Filtrest, LLC Devices and Systems For Real-Time Experience Sharing
US11108971B2 (en) 2014-07-25 2021-08-31 Verizon Patent And Licensing Inc. Camera array removing lens distortion
US9363569B1 (en) 2014-07-28 2016-06-07 Jaunt Inc. Virtual reality system including social graph
US9774887B1 (en) 2016-09-19 2017-09-26 Jaunt Inc. Behavioral directional encoding of three-dimensional video
US10186301B1 (en) 2014-07-28 2019-01-22 Jaunt Inc. Camera array including camera modules
US10701426B1 (en) 2014-07-28 2020-06-30 Verizon Patent And Licensing Inc. Virtual reality system including social graph
US10440398B2 (en) 2014-07-28 2019-10-08 Jaunt, Inc. Probabilistic model to compress images for three-dimensional video
WO2016029283A1 (en) * 2014-08-27 2016-03-03 Muniz Samuel System for creating images, videos and sounds in an omnidimensional virtual environment from real scenes using a set of cameras and depth sensors, and playback of images, videos and sounds in three-dimensional virtual environments using a head-mounted display and a movement sensor
US10275935B2 (en) 2014-10-31 2019-04-30 Fyusion, Inc. System and method for infinite synthetic image generation from multi-directional structured image array
US10262426B2 (en) 2014-10-31 2019-04-16 Fyusion, Inc. System and method for infinite smoothing of image sequences
US9746921B2 (en) 2014-12-31 2017-08-29 Sony Interactive Entertainment Inc. Signal generation and detector systems and methods for determining positions of fingers of a user
US10356393B1 (en) * 2015-02-16 2019-07-16 Amazon Technologies, Inc. High resolution 3D content
USD760701S1 (en) 2015-02-20 2016-07-05 Merge Labs, Inc. Mobile head mounted display controller
USD755789S1 (en) 2015-02-20 2016-05-10 Merge Labs, Inc. Mobile head mounted display
US9857939B2 (en) * 2015-02-27 2018-01-02 Accenture Global Services Limited Three-dimensional virtualization
US9558760B2 (en) * 2015-03-06 2017-01-31 Microsoft Technology Licensing, Llc Real-time remodeling of user voice in an immersive visualization system
CN107636702A (en) 2015-05-15 2018-01-26 派克汉尼芬公司 Integrated form Asset Integrity Management System
US9736440B2 (en) * 2015-05-26 2017-08-15 Chunghwa Picture Tubes, Ltd. Holographic projection device capable of forming a holographic image without misalignment
US9813621B2 (en) 2015-05-26 2017-11-07 Google Llc Omnistereo capture for mobile devices
US9877016B2 (en) 2015-05-27 2018-01-23 Google Llc Omnistereo capture and render of panoramic virtual reality content
JP6484349B2 (en) 2015-05-27 2019-03-13 Google LLC Camera rig and 3D image capture
US10038887B2 (en) 2015-05-27 2018-07-31 Google Llc Capture and render of panoramic virtual reality content
WO2016205350A1 (en) * 2015-06-15 2016-12-22 University Of Maryland, Baltimore Method and apparatus to provide a virtual workstation with enhanced navigational efficiency
US9824499B2 (en) 2015-06-23 2017-11-21 Microsoft Technology Licensing, Llc Mixed-reality image capture
US10222932B2 (en) 2015-07-15 2019-03-05 Fyusion, Inc. Virtual reality environment based manipulation of multilayered multi-view interactive digital media representations
US10147211B2 (en) 2015-07-15 2018-12-04 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US10242474B2 (en) 2015-07-15 2019-03-26 Fyusion, Inc. Artificially rendering images using viewpoint interpolation and extrapolation
US11095869B2 (en) 2015-09-22 2021-08-17 Fyusion, Inc. System and method for generating combined embedded multi-view interactive digital media representations
US10852902B2 (en) 2015-07-15 2020-12-01 Fyusion, Inc. Automatic tagging of objects on a multi-view interactive digital media representation of a dynamic entity
EP3323109B1 (en) 2015-07-16 2022-03-23 Google LLC Camera pose estimation for mobile devices
US10701318B2 (en) * 2015-08-14 2020-06-30 Pcms Holdings, Inc. System and method for augmented reality multi-view telepresence
US9609307B1 (en) 2015-09-17 2017-03-28 Legend3D, Inc. Method of converting 2D video to 3D video using machine learning
GB2542434A (en) 2015-09-21 2017-03-22 Figment Productions Ltd A System for Providing a Virtual Reality Experience
US11783864B2 (en) 2015-09-22 2023-10-10 Fyusion, Inc. Integration of audio into a multi-view interactive digital media representation
WO2017075614A1 (en) 2015-10-29 2017-05-04 Oy Vulcan Vision Corporation Video imaging an area of interest using networked cameras
GB2543913B (en) * 2015-10-30 2019-05-08 Walmart Apollo Llc Virtual conference room
US10412280B2 (en) 2016-02-10 2019-09-10 Microsoft Technology Licensing, Llc Camera with light valve over sensor array
US10257932B2 (en) 2016-02-16 2019-04-09 Microsoft Technology Licensing, Llc. Laser diode chip on printed circuit board
US10274737B2 (en) * 2016-02-29 2019-04-30 Microsoft Technology Licensing, Llc Selecting portions of vehicle-captured video to use for display
US10462452B2 (en) 2016-03-16 2019-10-29 Microsoft Technology Licensing, Llc Synchronizing active illumination cameras
WO2017172528A1 (en) 2016-04-01 2017-10-05 Pcms Holdings, Inc. Apparatus and method for supporting interactive augmented reality functionalities
CN109313698B (en) 2016-05-27 2022-08-30 Hologic, Inc. Simultaneous surface and internal tumor detection
JP6917820B2 (en) * 2016-08-05 2021-08-11 Semiconductor Energy Laboratory Co., Ltd. Data processing system
US10681341B2 (en) 2016-09-19 2020-06-09 Verizon Patent And Licensing Inc. Using a sphere to reorient a location of a user in a three-dimensional virtual reality video
US11032536B2 (en) 2016-09-19 2021-06-08 Verizon Patent And Licensing Inc. Generating a three-dimensional preview from a two-dimensional selectable icon of a three-dimensional reality video
US11032535B2 (en) 2016-09-19 2021-06-08 Verizon Patent And Licensing Inc. Generating a three-dimensional preview of a three-dimensional video
US11202017B2 (en) 2016-10-06 2021-12-14 Fyusion, Inc. Live style transfer on a mobile device
US10924691B2 (en) * 2016-11-14 2021-02-16 Sony Corporation Control device of movable type imaging device and control method of movable type imaging device
WO2018125541A1 (en) * 2016-12-30 2018-07-05 Wal-Mart Stores, Inc. Electronic shelf label system
WO2018147329A1 (en) * 2017-02-10 2018-08-16 Panasonic Intellectual Property Corporation of America Free-viewpoint image generation method and free-viewpoint image generation system
US10666923B2 (en) * 2017-02-24 2020-05-26 Immervision, Inc. Wide-angle stereoscopic vision with cameras having different parameters
US10313651B2 (en) 2017-05-22 2019-06-04 Fyusion, Inc. Snapshots at predefined intervals or angles
US10973391B1 (en) * 2017-05-22 2021-04-13 James X. Liu Mixed reality viewing of a surgical procedure
US10560735B2 (en) 2017-05-31 2020-02-11 Lp-Research Inc. Media augmentation through automotive motion
EP3635949A1 (en) * 2017-06-09 2020-04-15 PCMS Holdings, Inc. Spatially faithful telepresence supporting varying geometries and moving users
US20180357819A1 (en) * 2017-06-13 2018-12-13 Fotonation Limited Method for generating a set of annotated images
US11069147B2 (en) 2017-06-26 2021-07-20 Fyusion, Inc. Modification of multi-view interactive digital media representation
WO2019060543A1 (en) 2017-09-21 2019-03-28 Sharevr Hawaii Llc Stabilized camera system
US20200120330A1 (en) * 2018-03-08 2020-04-16 Richard N. Berry System & method for providing a simulated environment
US10592747B2 (en) 2018-04-26 2020-03-17 Fyusion, Inc. Method and apparatus for 3-D auto tagging
US10810871B2 (en) * 2018-06-29 2020-10-20 Ford Global Technologies, Llc Vehicle classification system
US10694167B1 (en) 2018-12-12 2020-06-23 Verizon Patent And Licensing Inc. Camera array including camera modules
EP4014468A4 (en) 2019-08-12 2022-10-19 Magic Leap, Inc. Systems and methods for virtual and augmented reality
US11800231B2 (en) 2019-09-19 2023-10-24 Apple Inc. Head-mounted display
US11159748B1 (en) * 2020-07-14 2021-10-26 Guy Cohen Paz Studio in a box
CN112562433B (en) 2020-12-30 2021-09-07 Central China Normal University Working method of 5G strong interaction remote delivery teaching system based on holographic terminal
WO2022220707A1 (en) * 2021-04-12 2022-10-20 Хальдун Саид Аль-Зубейди Virtual teleport room
US20230128826A1 (en) * 2021-10-22 2023-04-27 Tencent America LLC Generating holographic or lightfield views using crowdsourcing
NL2030186B1 (en) * 2021-12-17 2023-06-28 Dimenco Holding B.V. Autostereoscopic display device presenting 3D-view and 3D-sound

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4868682A (en) * 1986-06-27 1989-09-19 Yamaha Corporation Method of recording and reproducing video and sound information using plural recording devices and plural reproducing devices
US4951040A (en) * 1987-03-17 1990-08-21 Quantel Limited Image transformation processing
US4884217A (en) * 1987-09-30 1989-11-28 E. I. Du Pont De Nemours And Company Expert system with three classes of rules
US5130794A (en) * 1990-03-29 1992-07-14 Ritchey Kurtis J Panoramic display system

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
COMMUNICATIONS OF THE ACM, Vol. 35, No. 6, pp. 64-72, June 1992, CRUZ-NEIRA, "The CAVE: Audio Visual Experience Automatic Virtual Environment". *
CYBERWARE LABORATORY, INC., "Cyberware Color 3D Digitizer", Model 3030, 1990. *
KURZWEIL APPLIED INTELLIGENCE, INC., "Talk Instead of Type", 1985. *

Cited By (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0705044A3 (en) * 1994-09-28 1996-07-17 At & T Corp An interactive scanning device or system
EP0757335A2 (en) * 1995-08-02 1997-02-05 Nippon Hoso Kyokai 3D object graphics display device and method
EP0757335A3 (en) * 1995-08-02 1999-01-07 Nippon Hoso Kyokai 3D object graphics display device and method
WO2000060857A1 (en) * 1999-04-08 2000-10-12 Internet Pictures Corporation Virtual theater
US6778211B1 (en) 1999-04-08 2004-08-17 Ipix Corp. Method and apparatus for providing virtual processing effects for wide-angle video images
US7312820B2 (en) 1999-04-08 2007-12-25 Ipix Corporation Method and apparatus for providing virtual processing effects for wide-angle video images
ES2214115A1 (en) * 2002-10-15 2004-09-01 Universidad De Malaga System for automatically recognizing screening objects in a database, having a motion control module that moves a mobile autonomous agent through the environment and allows the cameras to be directed to the same selected points
EP1536378A2 (en) * 2003-11-28 2005-06-01 Topcon Corporation Three-dimensional image display apparatus and method for models generated from stereo images
EP1536378A3 (en) * 2003-11-28 2006-11-22 Topcon Corporation Three-dimensional image display apparatus and method for models generated from stereo images
US7746377B2 (en) 2003-11-28 2010-06-29 Topcon Corporation Three-dimensional image display apparatus and method
CN102119531A (en) * 2008-08-13 2011-07-06 惠普开发有限公司 Audio/video system
CN102135882A (en) * 2010-01-25 2011-07-27 微软公司 Voice-body identity correlation
CN102135882B (en) * 2010-01-25 2014-06-04 微软公司 Voice-body identity correlation
RU2485593C1 (en) * 2012-05-10 2013-06-20 Federal State Budgetary Educational Institution of Higher Professional Education "Siberian State Geodesy Academy" (FGBOU VPO "SGGA") Method of drawing advanced maps (versions)
EP3057316A1 (en) * 2015-02-10 2016-08-17 DreamWorks Animation LLC Generation of three-dimensional imagery to supplement existing content
US9721385B2 (en) 2015-02-10 2017-08-01 Dreamworks Animation Llc Generation of three-dimensional imagery from a two-dimensional image using a depth map
US9897806B2 (en) 2015-02-10 2018-02-20 Dreamworks Animation L.L.C. Generation of three-dimensional imagery to supplement existing content
US10096157B2 (en) 2015-02-10 2018-10-09 Dreamworks Animation L.L.C. Generation of three-dimensional imagery from a two-dimensional image using a depth map
EP3564785A4 (en) * 2016-12-30 2020-08-12 ZTE Corporation Data processing method and apparatus, acquisition device, and storage medium
US10911884B2 (en) 2016-12-30 2021-02-02 Zte Corporation Data processing method and apparatus, acquisition device, and storage medium
US11223923B2 (en) 2016-12-30 2022-01-11 Zte Corporation Data processing method and apparatus, acquisition device, and storage medium
CN114051151A (en) * 2021-11-23 2022-02-15 广州博冠信息科技有限公司 Live broadcast interaction method and device, storage medium and electronic equipment
CN114051151B (en) * 2021-11-23 2023-11-28 广州博冠信息科技有限公司 Live interaction method and device, storage medium and electronic equipment

Also Published As

Publication number Publication date
US5495576A (en) 1996-02-27

Similar Documents

Publication Publication Date Title
US5495576A (en) Panoramic image based virtual reality/telepresence audio-visual system and method
Ellis Virtual environments and environmental instruments
KR101908033B1 (en) Broad viewing angle displays and user interfaces
US5130794A (en) Panoramic display system
Fisher Virtual environments, personal simulation and telepresence
Arthur et al. Evaluating 3d task performance for fish tank virtual worlds
US5734421A (en) Apparatus for inducing attitudinal head movements for passive virtual reality
US4884219A (en) Method and apparatus for the perception of computer-generated imagery
US6583808B2 (en) Method and system for stereo videoconferencing
US6181371B1 (en) Apparatus for inducing attitudinal head movements for passive virtual reality
Manetta et al. Glossary of virtual reality terminology
US20090238378A1 (en) Enhanced Immersive Soundscapes Production
US20060250391A1 (en) Three dimensional horizontal perspective workstation
WO1996022660A1 (en) Intelligent method and system for producing and displaying stereoscopically-multiplexed images in virtual reality environments
US6836286B1 (en) Method and apparatus for producing images in a virtual space, and image pickup system for use therein
Holloway et al. Virtual environments: A survey of the technology
CN112492380A (en) Sound effect adjusting method, device, equipment and storage medium
Giraldi et al. Introduction to virtual reality
CN110600141B (en) Fusion bionic robot remote care system based on holographic image technology
JPH11195131A (en) Virtual reality method and device therefor and storage medium
KR20010096556A (en) 3D imaging equipment and method
Sammartino Integrated Virtual Reality Game Interaction: The Archery Game
CN112286355B (en) Interactive method and system for immersive content
US20240078767A1 (en) Information processing apparatus and information processing method
Welch et al. Immersive electronic books for surgical training

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): AT BE CH DE DK ES FR GB GR IE IT LU MC NL PT SE

121 Ep: the EPO has been informed by WIPO that EP was designated in this application
122 Ep: PCT application non-entry in European phase