WO2001028309A2 - Method and system for comparing multiple images utilizing a navigable array of cameras - Google Patents

Method and system for comparing multiple images utilizing a navigable array of cameras

Info

Publication number
WO2001028309A2
Authority
WO
WIPO (PCT)
Prior art keywords
images
user
image
environment
camera
Prior art date
Application number
PCT/US2000/028652
Other languages
French (fr)
Other versions
WO2001028309A3 (en)
Inventor
Scott Sorokin
Andrew H. Weber
David C. Worley
Original Assignee
Kewazinga Corp.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Priority claimed from US09/419,274 external-priority patent/US6522325B1/en
Application filed by Kewazinga Corp. filed Critical Kewazinga Corp.
Priority to AU12081/01A priority Critical patent/AU1208101A/en
Priority to EP00973582A priority patent/EP1224798A2/en
Publication of WO2001028309A2 publication Critical patent/WO2001028309A2/en
Publication of WO2001028309A3 publication Critical patent/WO2001028309A3/en
Priority to HK03100632.1A priority patent/HK1048576A1/en

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181: Closed-circuit television systems for receiving images from a plurality of remote sources
    • H04N 13/00: Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10: Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/106: Processing image signals
    • H04N 13/167: Synchronising or controlling image signals
    • H04N 13/189: Recording image signals; Reproducing recorded image signals
    • H04N 13/194: Transmission of image signals
    • H04N 13/20: Image signal generators
    • H04N 13/204: Image signal generators using stereoscopic image cameras
    • H04N 13/239: Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/243: Image signal generators using stereoscopic image cameras using three or more 2D image sensors
    • H04N 13/246: Calibration of cameras
    • H04N 13/296: Synchronisation thereof; Control thereof
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/58: Means for changing the camera field of view without moving the camera body, e.g. nutating or panning of optics or image sensors

Definitions

  • the present invention relates to a telepresence system and, more particularly, to a navigable camera array telepresence system and method of using same for comparing two or more images.
  • the broadcast resulting from these editorial and production efforts provides viewers with limited enjoyment.
  • the broadcast is typically based on filming the venue from a finite number of predetermined cameras.
  • the broadcast contains limited viewing angles and perspectives of the venue.
  • the viewing angles and perspectives presented in the broadcast are those selected by a producer or director during the editorial and production process; there is no viewer autonomy.
  • Although the broadcast is often recorded for multiple viewings, the broadcast has limited content life because each viewing is identical to the first. Because each showing looks and sounds the same, viewers rarely come back for multiple viewings. A viewer fortunate enough to attend a venue in person will encounter many of the same problems. For example, a museum-goer must remain behind the barricades, viewing exhibits from limited angles and perspectives.
  • This system has several drawbacks. For example, in order for a viewer's perspective to move through the venue, the moving vehicle must be actuated and controlled. In this regard, operation of the system is complicated. Furthermore, because the camera views are not contiguous, typically being at right angles to one another, changing camera views results in a discontinuous image.
  • U.S. Patent No. 5,187,571 for Television System For Displaying Multiple Views of A Remote Location issued February 16, 1993 describes a camera system similar to the 360 degree camera systems described above.
  • the system described allows a user to select an arbitrary and continuously variable section of an aggregate field of view. Multiple cameras are aligned so that each camera's field of view merges contiguously with those of adjacent cameras, thereby creating the aggregate field of view.
  • the aggregate field of view may expand to cover 360 degrees.
  • the cameras' views must be contiguous.
  • In order for the camera views to be contiguous, the cameras have to share a common point perspective, or vertex.
  • Thus, U.S. Patent No. 5,187,571 limits a user's view to a single point perspective, rather than allowing a user to experience movement in perspective through an environment. Also, with regard to the system of U.S. Patent No. 5,187,571, in order to achieve the contiguity between camera views, a relatively complex arrangement of mirrors is required. Additionally, each camera seemingly must also be placed in the same vertical plane.
  • a telepresence system includes an array of cameras, each of which has an associated view of an environment and an associated output representing the view.
  • the system also includes a first user interface device having first user inputs associated with movement along a first path in the array.
  • the system further includes a second user interface device having second user inputs associated with movement along a second path in the array.
  • a processing element is coupled to the user interface devices. The processing element receives and interprets the first inputs and selects outputs of cameras in the first path. Similarly, the processing element receives and interprets the second inputs and selects outputs of cameras in the second path independently of the first inputs.
  • a first user and a second user are able to navigate simultaneously and independently through the array.
  • the system may also mix the output by mosaicing or tweening the output images.
  • the telepresence system distinguishes between permissible cameras in the array and impermissible cameras in the array.
  • the telepresence system allows a user to move forward or backward through the environment.
  • Figure 1 is an overall schematic of one embodiment of the present invention.
  • Figure 2a is a perspective view of a camera and a camera rail section of the array according to one embodiment of the present invention.
  • Figures 2b-2d are side plan views of a camera and a camera rail according to one embodiment of the present invention.
  • Figure 2e is a top plan view of a camera rail according to one embodiment of the present invention.
  • Figure 3 is a perspective view of a portion of the camera array according to one embodiment of the present invention.
  • Figure 4 is a perspective view of a portion of the camera array according to an alternate embodiment of the present invention.
  • Figure 5 is a flowchart illustrating the general operation of the user interface according to one embodiment of the present invention.
  • Figure 6 is a flowchart illustrating in detail a portion of the operation shown in Figure 5.
  • Figure 7a is a perspective view of a portion of one embodiment of the present invention illustrating the arrangement of the camera array relative to objects being viewed.
  • Figures 7b-7g illustrate views from the perspectives of selected cameras of the array in Figure 7a.
  • Figure 8 is a schematic view of an alternate embodiment of the present invention.
  • Figure 9 is a schematic view of a server according to one embodiment of the present invention.
  • Figure 10 is a schematic view of a server according to an alternate embodiment of the present invention.
  • Figure 11 is a top plan view of an alternate embodiment of the present invention.
  • Figure 12 is a flowchart illustrating in detail the image capture portion of the operation of the embodiment shown in Figure 11.
  • Figure 13 is a schematic illustrating an array of one embodiment of the present invention.
  • Figure 14 is a flowchart illustrating the image capture process of one embodiment of the present invention.
  • Figure 15 is a schematic illustrating the logical arrangement of frames of an image according to one embodiment of the present invention.
  • Figure 16 is a flowchart illustrating the playback process of one embodiment of the present invention.
  • Figure 17 is a schematic view representing a display according to one embodiment of the present invention.
  • Figure 18a-c are schematics illustrating the logical relationship among frames according to one embodiment of the present invention.
  • Figure 19 is a schematic illustrating the logical arrangement of frames according to one embodiment of the present invention.
  • Figure 20 is a flowchart illustrating the process of harmonizing the duration of images according to one embodiment of the present invention.
  • the present invention relates to a telepresence system that, in preferred embodiments, uses modular, interlocking arrays of microcameras or cameras.
  • the cameras are on rails, with each rail holding a plurality of cameras. These cameras, each locked in a fixed relation to every adjacent camera on the array and dispersed dimensionally in a given environment, transmit image output to an associated storage node, thereby enabling remote viewers to navigate through such environment with the same spatial and visual cues (the changing perspective lines, the moving light reflections and shadows) that characterize an actual in-environment transit.
  • the outputs of these microcameras are linked by tiny (less than half the width of a human hair) Vertical Cavity Surface Emitting Lasers (VCSELs) to optical fibers, fed through area net hubs, buffered on server arrays or server farms (either for recording or (instantaneous) relay) and sent to viewers at remote terminals, interactive wall screens, or mobile image appliances (like Virtual Retinal Displays).
  • the system uses the multiplicity of positioned cameras to move the viewer's perspective from camera node to adjacent camera node in a way that provides the viewer with a sequential visual and acoustical path throughout the extent of the array. This allows the viewer to fluidly track or dolly through a 3-dimensional remote environment, to move through an event, and to make autonomous real-time decisions about where to move and when to linger.
  • a telepresence system 100 according to the present invention is shown in Fig. 1.
  • the telepresence system 100 generally includes an array 10 of cameras 14 coupled to a server 18, which in turn is coupled to one or more users 22, each having a user interface/display device 24.
  • the operation and functionality of the embodiment described herein is provided, in part, by the server and user interface/display device.
  • the camera array 10 is conceptualized as being in an X, Z coordinate system. This allows each camera to have an associated, unique node address comprising an X and Z coordinate (X, Z).
  • a coordinate value corresponding to an axis of a particular camera represents the number of camera positions along that axis the particular camera is displaced from a reference camera.
  • the X axis runs left and right, and the Z axis runs down and up.
  • Each camera 14 is identified by its X, Z coordinate. It is to be understood, however, that other methods of identifying cameras 14 can be used.
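
The addressing scheme can be pictured concretely. Below is a minimal sketch, assuming a rectangular array and using invented names; the patent does not prescribe any particular data structure:

```python
# Hypothetical sketch of the (X, Z) node addressing described above; the
# class and names are invented for illustration, not taken from the patent.

class CameraArray:
    def __init__(self, width, height):
        # node address (X, Z) -> camera output channel (here, just a label)
        self.nodes = {(x, z): f"camera-{x}-{z}"
                      for x in range(width) for z in range(height)}

    def output_at(self, x, z):
        """Output of the camera displaced x positions along X and z positions
        along Z from the reference camera."""
        return self.nodes[(x, z)]

array = CameraArray(width=24, height=5)
print(array.output_at(7, 2))  # -> "camera-7-2"
```
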
  • the array is three-dimensional, located in an X, Y, Z coordinate system.
  • the array 10 comprises a plurality of rails 12, each rail 12 including a series of one or more cameras 14.
  • the outputs from the cameras 14 are coupled to the server 18 by means of local area hubs 16.
  • the local area hubs 16 gather the outputs and, when necessary, amplify the outputs for transmission to the server 18.
  • the local area hubs 16 multiplex the outputs for transmission to the server 18.
  • the communication links 15 may take the form of fiber optics, cable, satellite, microwave transmission, internet, and the like.
  • an electronic storage device 20 is also coupled to the server 18.
  • the server 18 transfers the outputs to the electronic storage device 20.
  • the electronic (mass) storage device 20 transfers each camera's output onto a storage medium or means, such as CD-ROM, DVD, fluorescent multilayered disk (FMD), tape, platter, disk array, or the like.
  • the output of each camera 14 is stored in particular locations on the storage medium associated with that camera 14 or is stored with an indication to which camera 14 each stored output corresponds. For example, the output of each camera 14 is stored in contiguous locations on a separate disk, tape, CD-ROM, or platter.
  • the camera output may be stored in a compressed format, such as JPEG, which is a standard format for storing still color and grayscale photographs in bitmap form; MPEG1, which is a standard format for storing video output at a rate of 30 frames per second; MPEG2, which is a standard format for storing video output at a rate of 60 frames per second (typically used for high bandwidth applications such as HDTV and DVD-ROMs); and the like.
  • the server 18 receives output from the cameras 14 in the array.
  • the server 18 processes these outputs for either storage in the electronic storage device 20, transmission to the users 22, or both.
  • the server 18 is configured to provide the functionality of the system 100 in the present embodiment, it is to be understood that other processing elements may provide the functionality of the system 100.
  • the user interface device is a personal computer programmed to interpret the user input and transmit an indication of the desired current node address, buffer outputs from the array, and provide other of the described functions.
  • the system 100 can accommodate (but does not require) multiple users 22.
  • Each user 22 has associated therewith a user interface device including a user display device (collectively 24).
  • user 22-1 has an associated user interface device and a user display device in the form of a computer 24-1 having a monitor and a keyboard.
  • User 22-2 has associated therewith an interactive wall screen 24-2 which serves as a user interface device and a user display device.
  • the user interface device and the user display device of user 22-3 includes a mobile audio and image appliance 24-3.
  • a digital interactive TV 24-4 is the user interface device and user display device of user 22-4.
  • user 22-5 has a voice recognition unit and monitor 24-5 as the user interface and display devices.
  • user interface devices and user display devices are merely exemplary; for example, other interface devices include a mouse, touch screen, biofeedback devices, as well as those identified in U.S. Provisional Patent Application Serial No. 60/080,413 and the like.
  • each user interface device 24 has associated therewith user inputs. These user inputs allow each user 22 to move or navigate independently through the array 10. In other words, each user 22 enters inputs to generally select which camera outputs are transferred to the user display device.
  • each user display device includes a graphical representation of the array 10. The graphical representation includes an indication of which camera in the array the output of which is being viewed.
  • the user inputs allow each user not only to select particular cameras, but also to select relative movement or navigational paths through the array 10. It is to be understood that as used herein a path is defined by both cameras and time. As such, two users navigating through the same series of cameras may navigate different paths, provided the users do not access the same cameras at the same times. In other words, a linear series of a plurality of cameras provides for a plurality of paths.
  • each user 22 may be coupled to the server 18 by an independent communication link.
  • each communication link may employ different technology.
  • the communication links include an internet link, a microwave signal link, a satellite link, a cable link, a fiber optic link, a wireless link, and the like.
  • the array 10 provides several advantages. For example, because the array 10 employs a series of cameras 14, no individual camera, or the entire array 10 for that matter, need be moved in order to obtain a seamless view of the environment. Instead, the user navigates through the array 10, which is strategically placed through and around the physical environment to be viewed. Furthermore, because the cameras 14 of the array 10 are physically located at different points in the environment to be viewed, a user is able to view changes in perspective, a feature unavailable to a single camera that merely changes focal length.
  • the video chips used in microcameras may be CMOS, CCD and the like, and are produced in a mainstream manufacturing process, by several companies, including Photobit, Pasadena, CA; Sarnoff Corporation, Princeton, NJ; and VLSI Vision, Ltd., Edinburgh, Scotland.
  • One specific suitable camera is the analog color CCD camera manufactured by Sanyo Electric Co. Ltd. under the tradename VCC-5974.
  • the camera outputs are provided to video capture boards, such as the Meteor-II, an analog-to-digital converter for converting analog NTSC video.
  • the capture boards also receive a video synchronizing signal, noted below, so that the output of each camera is synchronized, with each captured frame of one camera corresponding to that of the other. From the capture boards, the camera output is then provided to one or more servers or processing elements for processing.
  • the camera array 10 of the present embodiment comprises a series of modular rails 12 carrying cameras 14.
  • the structure of the rails 12 and cameras 14 will now be discussed in greater detail with reference to Figs. 2a through 2d.
  • Each camera 14 includes registration pins 34.
  • the cameras 14 utilize VCSELs to transfer their outputs to the rail 12. It is to be understood that the present invention is not limited to any particular type of camera 14, however, or even to an array 10 consisting of only one type of camera 14.
  • Each rail 12 includes two sides, 12a, 12b, at least one of which 12b is hingeably connected to the base 12c of the rail 12.
  • the base 12c includes docking ports 36 for receiving the registration pins 34 of the camera 14.
  • the hinged side 12b of the rail 12 is moved against the base 32 of the camera 14, thereby securing the camera 14 to the rail 12.
  • Each rail 12 further includes a first end 38 and a second end 44.
  • the first end 38 includes, in the present embodiment, two locking pins 40 and a protected transmission relay port 42 for transmitting the camera outputs.
  • the second end 44 includes two guide holes 46 for receiving the locking pins 40, and a transmission receiving port 48.
  • each rail 12 is modular and can be functionally connected to another rail to create the array 10.
  • each rail 12 includes communication paths for transmitting the output from each camera 14. Alternatively, a cable couples each camera to the server.
  • the array 10 is shown having a particular configuration, it is to be understood that virtually any configuration of rails 12 and cameras 14 is within the scope of the present invention.
  • the array 10 may be a linear array of cameras 14, a 2-dimensional array of cameras 14, a 3-dimensional array of cameras 14, or any combination thereof.
  • the array 10 need not be comprised solely of linear segments, but rather may include curvilinear sections.
  • individual rails support a single camera and include extension spacers, with varying degrees of freedom, on either end of the rail to change the spacing between cameras or the angle between adjacent cameras.
  • These spacers comprise linear or rotary actuators or electrostrictive polymers controlled by one of the system servers.
  • the array 10 is supported by any of a number of support means.
  • the array 10 can be fixedly mounted to a wall or ceiling; the array 10 can be secured to a moveable frame that can be wheeled into position in the environment or supported from cables.
  • Fig. 3 illustrates an example of a portion of the array 10.
  • the array 10 comprises five rows of rails 12a through 12e.
  • Each of these rails 12a-12e is directed towards a central plane, which substantially passes through the center row 12c. Consequently, for any object placed in the same plane as the middle row 12c, a user would be able to view the object essentially from the bottom, front, and top.
  • the rails 12 of the array 10 need not have the same geometry.
  • some of the rails 12 may be straight while others may be curved.
  • Fig. 4 illustrates the camera alignment that results from utilizing curved rails. It should be noted that rails in Fig. 4 have been made transparent so that the arrangement of cameras 14 may be easily seen.
  • each rail is configured in a step-like fashion or an arc with each camera above (or below) and in front of a previous camera.
  • the user has the option of moving forward through the environment.
  • the spacing of the cameras 14 depends on the particular application, including the objects being viewed, the focal length of the cameras 14, and the speed of movement through the array 10. In general, the closer the cameras and the greater the overlap in views, the more seamless the transition between camera views.
  • the distance between cameras 14 can be approximated by analogy to the distance between exposed frames taken by a motion picture camera dollying linearly through an environment.
  • the speed of movement of the camera through the environment divided by the frames exposed per unit of time results in a frame-distance ratio. For example, as shown by the following equations, in some applications a frame is taken every inch.
  • a conventional movie camera records twenty-four frames per second. When such a camera is moved linearly through an environment at two feet per second, a frame is taken approximately every inch.
  • a frame of the film is analogous to a camera 14 in the present invention.
  • Just as one frame exposed per inch results in a movie having a seamless view of the environment, so too does one camera 14 per inch.
  • the cameras 14 are spaced approximately one inch apart, thereby resulting in a seamless view of the environment.
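
The dolly analogy above reduces to a single ratio. As a check, using the figures given in the text (twenty-four frames per second, two feet per second):

\[
\frac{\text{dolly speed}}{\text{frame rate}} = \frac{2\ \text{ft/s} \times 12\ \text{in/ft}}{24\ \text{frames/s}} = 1\ \text{inch per frame},
\]

so one camera 14 per inch along the array corresponds to one exposed frame per inch of dolly travel.
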
  • the spacing between cameras is greater than one inch, provided the fields of view of adjacent cameras overlap. Again, the greater the degree of overlap, the more seamless the progression between adjacent camera views. As described in greater detail below, the spacing between cameras may be further increased by generating synthetic or mixed images between contiguous cameras. Furthermore, the linear spacing between cameras becomes less important in a curved array, where the angular displacement between cameras is more important. For example, in one embodiment, the array is in a 180 degree arc, with cameras placed at five degree intervals, directed towards the center of the arc. As the radius of the arc increases, the linear distance between the cameras also increases; however, the angular displacement, five degrees, and the overlap in fields of view remain the same.
  • the system maintains the seamless progression from camera to adjacent camera.
  • the array comprises an arc of cameras.
  • the arc extends 110 degrees, with a radius of nine feet, and the cameras placed at approximately seven and a half degree intervals around the arc.
  • the arc has a radius of fifteen feet, with the cameras located every sixteen inches.
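
The relation between the angular and linear spacing in these arcs is the arc-length formula. Checking it against the two examples just given:

\[
s = r\theta:\qquad s = 9\ \text{ft} \times \frac{7.5\pi}{180} \approx 1.18\ \text{ft} \approx 14\ \text{in};\qquad \theta = \frac{16\ \text{in}}{15\ \text{ft} \times 12\ \text{in/ft}} \approx 0.089\ \text{rad} \approx 5.1^\circ,
\]

so the seven-and-a-half-degree arc places cameras roughly fourteen inches apart, and sixteen-inch spacing on the fifteen-foot arc corresponds to roughly five-degree intervals.
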
  • In step 110, the user is presented with a predetermined starting view of the environment corresponding to a starting camera.
  • the operation of the system is controlled, in part, by software residing in the server.
  • the system associates each camera in the array with a coordinate.
  • the system is able to note the coordinates of the starting camera node. The camera output and, thus the corresponding view, changes only upon receiving a user input.
  • When the user determines that he or she wants to move or navigate through the array, the user enters a user input through the user interface device 24.
  • the user inputs of the present embodiment generally include moving to the right, to the left, up, or down in the array. Additionally, a user may jump to a particular camera in the array. In alternate embodiments, a subset of these or other inputs, such as forward, backward, diagonal, over, and under, are used.
  • the user interface device transmits the user input to the server in step 120.
  • decoding the input generally involves determining whether the user wishes to move to the right, to the left, up, or down in the array.
  • the server 18 proceeds to determine whether the input corresponds to moving to the user's right in the array 10. This determination is shown in step 140. If the received user input does correspond to moving to the right, the current node address is incremented along the X axis in step 150 to obtain an updated address.
  • the server 18 determines whether the input corresponds to moving to the user's left in the array 10 in step 160. Upon determining that the input does correspond to moving to the left, the server 18 then decrements the current node address along the X axis to arrive at the updated address. This is shown in step 170.
  • the server 18 determines whether the input corresponds to moving up in the array. This determination is made in step 180. If the user input corresponds to moving up, in step 190, the server 18 increments the current node address along the Z axis, thereby obtaining an updated address.
  • the server 18 determines whether the received user input corresponds to moving down in the array 10. This determination is made in step 200. If the input does correspond to moving down in the array 10, in step 210 the server 18 decrements the current node address along the Z axis.
  • In step 220, the server 18 determines whether the received user input corresponds to jumping or changing the view to a particular camera 14. As indicated in Figure 5, if the input corresponds to jumping to a particular camera 14, the server 18 changes the current node address to reflect the desired camera position. Updating the node address is shown as step 230. In an alternate embodiment, the input corresponds to jumping to a particular position in the array 10, not identified by the user as being a particular camera but by some reference to the venue, such as stage right. It is to be understood that the server 18 may decode the received user inputs in any of a number of ways, including in any order. For example, in an alternate embodiment the server 18 first determines whether the user input corresponds to up or down. In another alternate, preferred embodiment, user navigation includes moving forward, backward, to the left and right, and up and down through a three dimensional array.
  • In step 240, the server 18 causes a message signal to be transmitted to the user display device 24, causing a message to be displayed to the user 22 that the received input was not understood. Operation of the system 100 then continues with step 120, and the server 18 awaits receipt of the next user input.
  • After adjusting the current node address, either by incrementing or decrementing the node address along an axis or by jumping to a particular node address, the server 18 proceeds in step 250 to adjust the user's view. Once the view is adjusted, operation of the system 100 continues again with step 120 as the server 18 awaits receipt of the next user input.
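
The decode-and-update flow of Fig. 5 can be summarized in a short sketch. This is a hypothetical illustration, not code from the patent; the input names and the bounds check are assumptions:

```python
# Hypothetical sketch of the input-decoding loop of Fig. 5 (steps 120-250).

def decode_input(node, user_input, width, height):
    x, z = node
    if user_input == "right":            # steps 140/150: increment along X
        x += 1
    elif user_input == "left":           # steps 160/170: decrement along X
        x -= 1
    elif user_input == "up":             # steps 180/190: increment along Z
        z += 1
    elif user_input == "down":           # steps 200/210: decrement along Z
        z -= 1
    elif isinstance(user_input, tuple):  # steps 220/230: jump to a camera
        x, z = user_input
    else:                                # step 240: input not understood
        raise ValueError("input not understood")
    # Clamping to the array extent is an assumption; the patent handles
    # out-of-range movement with its navigation control algorithm.
    x = max(0, min(x, width - 1))
    z = max(0, min(z, height - 1))
    return (x, z)                        # step 250 then adjusts the view

node = (2, 0)
node = decode_input(node, "right", width=24, height=5)  # -> (3, 0)
node = decode_input(node, (7, 0), width=24, height=5)   # jump to camera (7, 0)
```
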
  • the server 18 continues to update the node address and adjust the view based on the received user input. For example, if the user input corresponded to "moving to the right", then operation of the system 100 would continuously loop through steps 140, 150, and 250, checking for a different input. When a different input is received, the server 18 updates the view accordingly.
  • Fig. 6 is a more detailed diagram of the operation of the system according to steps 140, 150, and 250 of Fig. 5. Moreover, it is to be understood that while Fig. 6 describes more detailed movement in one direction, i.e., to the right, the same detailed movement can be applied in any other direction. As illustrated, the determination of whether the user input corresponds to moving to the right actually involves several determinations. As described in detail below, these determinations include moving to the right through the array at critical, over-critical, or under-critical speed, and moving subject to a navigation control algorithm.
  • the present invention allows a user 22 to navigate through the array 10 at different speeds.
  • the server 18 will apply an algorithm that controls the transition between camera outputs either at critical speed (n nodes/per unit of time), under critical speed (n-1 nodes/per unit of time), or over critical speed (n + 1 nodes/per unit of time).
  • speed of movement through the array 10 can alternatively be expressed as the time to switch from one camera 14 to another camera 14.
  • the server 18 makes the determination whether the user input corresponds to moving to the right at a critical speed.
  • the critical speed is preferably a predetermined speed of movement through the array 10 set by the system operator or designer depending on the anticipated environment being viewed. Further, the critical speed depends upon various other factors, such as focal length, distance between cameras, distance between the cameras and the viewed object, and the like.
  • the speed of movement through the array 10 is controlled by the number of cameras 14 traversed in a given time period. Thus, the movement through the array 10 at critical speed corresponds to traversing some number, "n", camera nodes per millisecond, or taking some amount of time, "s", to switch from one camera 14 to another.
  • the server 18 increments the current node address along the X axis at n nodes per millisecond.
  • the user traverses twenty-four cameras 14 per second.
  • a movie projector displays twenty-four frames per second. Analogizing between the movie projector and the present invention, at critical speed the user traverses (and the server 18 switches between) approximately twenty-four cameras 14 per second, or a camera 14 approximately every 0.04167 seconds.
  • the user 22 may advance not only at critical speed, but also at over the critical speed, as shown in step 140b, or at under the critical speed, as shown in step 140c.
  • the server 18 increments the current node address along the X axis by a unit of greater than n, for example, at n + 2 nodes per millisecond.
  • the step of incrementing the current node address at greater than n nodes per millisecond along the X axis is shown as step 150b.
  • the server 18 proceeds to increment the current node address at a value less than n, for example, at n - 1 nodes per millisecond. This operation is shown as step 150c.
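
A minimal sketch of the three speed cases (steps 150a through 150c) follows; n = 4 and the over-critical increment of n + 2 anticipate the Figs. 7a-7g walk-through described later, and everything else is assumed:

```python
# Hypothetical sketch of steps 150a-150c: the speed cases differ only in
# the per-tick increment applied to the X coordinate.

def next_x(x, speed, n=4):
    increments = {
        "critical": n,      # step 150a: n nodes per unit of time
        "over": n + 2,      # step 150b: more than n nodes per unit of time
        "under": n - 1,     # step 150c: fewer than n nodes per unit of time
    }
    return x + increments[speed]

x = 7
x = next_x(x, "critical")  # 7  -> 11 (cameras 14-7 to 14-11)
x = next_x(x, "under")     # 11 -> 14 (cameras 14-11 to 14-14)
x = next_x(x, "over")      # 14 -> 20 (cameras 14-14 to 14-20)
```
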
  • the shape of the array 10 can also be electronically scaled and the system 100 designed with a "center of gravity" that will ease a user's image path back to a "starting" or "critical position" node or ring of nodes, either when the user 22 releases control or when the system 100 is programmed to override the user's autonomy; that is to say, the active perimeter or geometry of the array 10 can be pre-configured to change at specified times or intervals in order to corral or focus attention in a situation that requires dramatic shaping.
  • the system operator can, by real-time manipulation or via a pre-configured electronic proxy, sequentially activate or deactivate designated portions of the camera array 10. This is of particular importance in maintaining authorship and dramatic pacing in theatrical or entertainment venues, and also for implementing controls over how much freedom a user 22 will have to navigate through the array 10.
  • the system 100 can be programmed such that certain portions of the array 10 are unavailable to the user 22 at specified times or intervals.
  • the server 18 makes the determination whether the user input corresponds to movement to the right through the array that is subject to a navigation control algorithm.
  • the navigation control algorithm causes the server 18 to determine, based upon navigation control factors, whether the user's desired movement is permissible.
  • the navigation control algorithm determines whether the desired movement would cause the current node address to fall outside the permissible range of node coordinates.
  • the permissible range of node coordinates is predetermined and depends upon the time of day, as noted by the server 18.
  • the navigation control factors include time.
  • permissible camera nodes and control factors can be correlated in a table stored in memory.
  • the navigation control factors include time as measured from the beginning of a performance being viewed, also as noted by the server.
  • the system operator can dictate from where in the array a user will view certain scenes.
  • the navigation control factor is speed of movement through the array. For example, the faster a user 22 moves or navigates through the array, the wider the turns must be.
  • the permissible range of node coordinates is not predetermined.
  • the navigation control factors and, therefore, the permissible range are dynamically controlled by the system operator, who communicates with the server via an input device.
  • the server 18 further proceeds, in step 150d, to increment the current node address along a predetermined path.
  • By incrementing the current node address along a predetermined path, the system operator is able to corral or focus the attention of the user 22 on the particular view of the permissible cameras 14, thereby maintaining authorship and dramatic pacing in theatrical and entertainment venues.
  • the server 18 does not move the user along a predetermined path. Instead, the server 18 merely awaits a permissible user input and holds the view at the current node. Only when the server 18 receives a user input resulting in a permissible node coordinate will the server 18 adjust the user's view.
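
A sketch of how the permissibility check might look follows. The text says only that "permissible camera nodes and control factors can be correlated in a table stored in memory," so the table layout, the time-based factor, and all names here are assumptions:

```python
# Hypothetical sketch of the navigation control check: a table correlates a
# control factor (minutes from the start of the performance) with the
# permissible range of X coordinates.

PERMISSIBLE = [
    (0, 30, range(0, 12)),   # first 30 minutes: left half of the array only
    (30, 90, range(0, 24)),  # thereafter: the full array
]

def is_permissible(x, minutes):
    for start, end, xs in PERMISSIBLE:
        if start <= minutes < end:
            return x in xs
    return False

def move_right(x, minutes):
    new_x = x + 1
    # When the desired node is impermissible, the server may hold the view
    # at the current node (or, per step 150d, follow a predetermined path).
    return new_x if is_permissible(new_x, minutes) else x

print(move_right(10, minutes=5))  # -> 11, still inside the permitted range
print(move_right(11, minutes=5))  # -> 11, movement held at the boundary
```
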
  • the user 22 may, at predetermined locations in the array 10, choose to leave the real world environment being viewed. More specifically, additional source outputs, such as computer graphic imagery, virtual world imagery, applets, film clips, and other artificial and real camera outputs, are made available to the user 22. In one embodiment, the additional source output is composited with the view of the real environment. In an alternate embodiment, the user's view transfers completely from the real environment to that offered by the additional source output.
  • the additional source output is stored (preferably in digital form) in the electronic storage device 20.
  • the server 18 transmits the additional source output to the user interface/display device 24.
  • the server 18 simply transmits the additional source output to the user display device 24.
  • the server 18 first composites the additional source output with the camera output and then transmits the composited signal to the user interface/display device 24.
  • the server 18 makes the determination whether the user input corresponds to moving in the array into the source output. If the user 22 decides to move into the additional source output, the server 18 adjusts the view by substituting the additional source output for the updated camera output identified in either of steps 150a-d.
  • the server 18 proceeds to adjust the user's view in step 250.
  • the server 18 "mixes" the existing or current camera output being displayed with the output of the camera 14 identified by the updated camera node address. Mixing the outputs is achieved differently in alternate embodiments of the invention. In the present embodiment, mixing the outputs involves electronically switching at a particular speed from the existing camera output to the output of the camera 14 having the new current node address.
  • the camera outputs are synchronized.
  • a synchronizing signal from a "sync generator” is supplied to the cameras and/or the processors capturing the camera output.
  • the sync generator may take the form of those used in video editing and may comprise, in alternate embodiments, part of the server, the hub, and/or a separate component coupled to the array.
  • the server 18 switches camera outputs approximately at a rate of 24 per second, or one every 0.04167 seconds. If the user 22 is moving through the array 10 at under the critical speed, the outputs of the intermediate cameras 14 are each displayed for a relatively longer duration than if the user is moving at the critical speed. Similarly, each output is displayed for a relatively shorter duration when a user navigates at over the critical speed. In other words, the server 18 adjusts the switching speed based on the speed of the movement through the array 10.
  • the user may navigate at only the critical speed.
  • mixing the outputs is achieved by compositing the existing or current output and the updated camera node output. In yet another embodiment, mixing involves dissolving the existing view into the new view. In still another alternate embodiment, mixing the outputs includes adjusting the frame refresh rate of the user display device. Additionally, based on speed of movement through the array, the server may add motion blur to convey the realistic sense of speed.
  • the server causes a black screen to be viewed instantaneously between camera views.
  • a black screen is analogous to blank film between frames in a movie reel.
  • black screens reduce the physiologic "carrying over" of one view into a subsequent view.
  • the user inputs corresponding to movements through the array at different speeds may include different keystrokes on a keypad, different positions of a joystick, positioning a joystick in a given position for a predetermined length of time, and the like.
  • the decision to move into an additional source output may be indicated by a particular keystroke, joystick movement, or the like.
  • mixing may be accomplished by "mosaicing" the outputs of the intermediate cameras 14.
  • U.S. Pat. No. 5,649,032 entitled System For Automatically Aligning Images To Form A Mosaic Image to Peter J. Burt et al. discloses a system and method for generating a mosaic from a plurality of images and is hereby incorporated by reference.
  • the server 18 automatically aligns one camera output to another camera output, a camera output to another mosaic (generated from previously occurring camera output) such that the output can be added to the mosaic, or an existing mosaic to a camera output.
  • the present embodiment utilizes a mosaic composition process to construct (or update) a mosaic.
  • the mosaic composition comprises a selection process and a combination process.
  • the selection process automatically selects outputs for incorporation into the mosaic and may include masking and cropping functions to select the region of interest in a mosaic.
  • the combination process combines the various outputs to form the mosaic.
  • the combination process applies various output processing techniques, such as merging, fusing, filtering, output enhancement, and the like, to achieve a seamless combination of the outputs.
  • the resulting mosaic is a smooth view that combines the constituent outputs such that temporal and spatial information redundancy are minimized in the mosaic.
  • the mosaic may be formed as the user moves through the system and the output image displayed close to real time.
  • the system may form the mosaic from a predetermined number of outputs or during a predetermined time interval, and then display the images pursuant to the user's navigation through the environment.
  • the server 18 enables the output to be mixed by a "tweening" process.
  • One example of the tweening process is disclosed in U.S. Pat. No. 5,259,040 entitled Method For Determining Sensor Motion And Scene Structure And Image Processing System Therefor to Keith J. Hanna, herein incorporated by reference. Tweening enables the server 18 to process the structure of a view from two or more camera outputs of the view.
  • the server monitors the movement among the intermediate cameras 14 through a scene using local scene characteristics such as brightness derivatives of a pair of camera outputs.
  • a global camera output movement constraint is combined with a local scene characteristic constancy constraint to relate local surface structures with the global camera output movement model and local scene characteristics.
  • the method for determining a model for global camera output movement through a scene and a scene structure model of the scene from two or more outputs of the scene at a given image resolution comprises the following steps:
  • step (c) resetting the initial estimates of the local scene models and the image sensor motion model using the new value of one of the models determined in step (b);
  • step (d) determining a new value of the second of the models using the estimates of the models determined in step (b) by minimizing the difference between the measured error in the outputs and the error predicted by the model;
  • an embodiment of the present invention monitors the user movement among live cameras or storage nodes.
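
The tweening contemplated above relies on the Hanna scene-structure method; the sketch below substitutes the simplest possible intermediate-image generator, a plain linear cross-dissolve between two adjacent camera frames. It illustrates only the idea of synthesizing frames between cameras, not the patented algorithm:

```python
import numpy as np

def tween_frames(frame_a, frame_b, steps):
    """Yield `steps` synthetic frames blending frame_a into frame_b.

    A linear cross-dissolve: far simpler than structure-based tweening,
    but enough to show how synthetic views can pad the gap between two
    adjacent cameras in the array.
    """
    a = frame_a.astype(np.float32)
    b = frame_b.astype(np.float32)
    for i in range(1, steps + 1):
        t = i / (steps + 1)
        yield ((1.0 - t) * a + t * b).astype(np.uint8)

# e.g. two frames from adjacent cameras:
left = np.zeros((480, 640, 3), dtype=np.uint8)
right = np.full((480, 640, 3), 255, dtype=np.uint8)
synthetic = list(tween_frames(left, right, steps=3))
```
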
  • the server 18 also transmits to the user display device 24 outputs from some or all of the intermediate cameras, namely those located between the current camera node and the updated camera node.
  • Fig. 7a illustrates a curvilinear portion of an array 10 that extends along the X axis or to the left and right from the user's perspective.
  • the coordinates that the server 18 associates with the cameras 14 differ only in the X coordinate. More specifically, for purposes of the present example, the cameras 14 can be considered sequentially numbered, starting with the left-most camera 14 being the first, i.e., number "1".
  • the X coordinate of each camera 14 is equal to the camera's position in the array.
  • particular cameras will be designated 14-X, where X equals the camera's position in the array 10 and, thus, its associated X coordinate.
  • Figs. 7a-7g illustrate possible user movement through the array 10.
  • the environment to be viewed includes three objects 702, 704, 706, the first and second of which include numbered surfaces. As will be apparent, these numbered surfaces allow a better appreciation of the change in user perspective.
  • In Fig. 7a, six cameras 14-2, 14-7, 14-11, 14-14, 14-20, 14-23 of the array 10 are specifically identified.
  • the boundaries of each camera's view are identified by the pair of lines 14-2a, 14-7a, 14-11a, 14-14a, 14-20a, 14-23a, radiating from each identified camera 14-2, 14-7, 14-11, 14-14, 14-20, 14-23, respectively.
  • the user 22 navigates through the array 10 along the X axis such that the images or views of the environment are those corresponding to the identified cameras 14-2, 14-7, 14-11, 14-14, 14-20, 14-23.
  • the present example provides the user 22 with the starting view from camera 14-2. This view is illustrated in Fig. 7b.
  • The server 18 has been programmed to recognize the "7" key as corresponding to moving or jumping through the array to camera 14-7.
  • Accordingly, the server 18 changes the X coordinate of the current camera node address to 7, selects the output of camera 14-7, and adjusts the view or image sent to the user 22. Adjusting the view, as discussed above, involves mixing the outputs of the current and updated camera nodes. Mixing the outputs, in turn, involves switching intermediate camera outputs into the view to achieve the seamless progression of the discrete views of cameras 14-2 through 14-7, which gives the user 22 the look and feel of moving around the viewed object.
  • the user 22 now has another view of the first object 702.
  • the view from camera 14-7 is shown in Fig. 7c.
  • the server 18 would omit some or all of the intermediate outputs.
  • the user 22 indicates to the system 100 a desire to navigate to the right at critical speed.
  • the updated camera node address is 14-11.
  • the server 18 causes the mixing of the output of camera 14-11 with that of camera 14-7. Again, this includes switching into the view the outputs of the intermediate cameras (i.e., 14-8, 14-9, and 14-10) to give the user 22 the look and feel of navigating around the viewed object. The user 22 is thus presented with the view from camera 14-11, as shown in Fig. 7d.
  • the user 22 enters a user input, for example, "alt-right arrow," indicating a desire to move to the right at less than critical speed. Accordingly, the server 18 increments the updated camera node address by n-1 nodes, namely 3 in the present example, to camera 14-14. The outputs from cameras 14-11 and 14-14 are mixed, and the user 22 is presented with a seamless view associated with cameras 14-11 through 14-14.
  • Fig. 7e illustrates the resulting view of camera 14-14.
  • the server 18 interprets the user input and increments the current node address by n+2, or 6 in the present example.
  • the updated node address thus corresponds to camera 14-20.
  • the server 18 mixes the outputs of cameras 14-14 and 14-20, which includes switching into the view the outputs of the intermediate cameras 14-15 through 14-19.
  • the resulting view of camera 14-20 is displayed to the user 22.
  • the user 22 now views the second object 704.
  • the user 22 desires to move slowly through the array 10. Accordingly, the user 22 enters "alt-right arrow" to indicate moving to the right at below critical speed.
  • When the server 18 interprets the received user input, it updates the current camera node address along the X axis by 3 to camera 14-23.
  • the server 18 then mixes the outputs of camera 14-20 and 14-23, thereby providing the user 22 with a seamless progression of views through camera 14-23.
  • the resulting view 14-23a is illustrated in Fig. 7g.
  • devices other than cameras may be interspersed in the array.
  • These other devices, such as motion sensors and microphones, provide data to the server(s) for processing.
  • outputs from motion sensors or microphones are fed to the server(s) and used to scale the array.
  • permissible camera nodes are those near the sensor or microphone having a desired output (e.g., where there is motion or sound).
  • navigation control factors include output from other such devices.
  • the outputs from the sensors or microphones are provided to the user.
  • the system 800 generally includes an array of cameras 802 coupled to a server 804, which, in turn, is coupled to one or more user interface and display devices 806 and an electronic storage device 808.
  • a hub 810 collects and transfers the outputs from the array 802 to the server 804.
  • the array 802 comprises modular rails 812 that are interconnected. Each rail 812 carries multiple cameras 814 and a microphone 816 centrally located on the rail 812.
  • the system 800 includes microphones 818 that are physically separate from the array 802. The outputs of both the cameras 814 and microphones 816, 818 are coupled to the server 804 for processing.
  • the server 804 receives the sound output from the microphones 816, 818 and, as with the camera output, selectively transmits sound output to the user. As the server 804 updates the current camera node address and changes the user's view, it also changes the sound output transmitted to the user.
  • the server 804 has stored in memory an associated range of camera nodes with a given microphone, namely the cameras 814 on each rail 812 are associated with the microphone 816 on that particular rail 812. In the event a user attempts to navigate beyond the end of the array 802, the server 804 determines the camera navigation is impermissible and instead updates the microphone node output to that of the microphone 818 adjacent to the array 802.
  • the server 804 might include a database in which camera nodes in a particular area are associated with a given microphone. For example, a rectangular volume defined by the (X, Y, Z) coordinates (0,0,0), (10,0,0), (10,5,0), (0,5,0), (0,0,5), (10,0,5), (10,5,5) and (0,5,5) is associated with a given microphone. It is to be understood that selecting one of the series of microphones based on the user's position (or view) in the array provides the user with a sound perspective of the environment that coincides with the visual perspective.
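
A sketch of the volume-based association just described follows. The rectangular volume is the one given in the text; the dictionary layout and names are hypothetical:

```python
# Hypothetical sketch: camera nodes inside a rectangular volume map to a
# given microphone; the volume is the one given in the text.

MIC_VOLUMES = {
    # microphone id -> ((x_min, y_min, z_min), (x_max, y_max, z_max))
    "mic-816": ((0, 0, 0), (10, 5, 5)),
}

def microphone_for(node):
    x, y, z = node
    for mic, ((x0, y0, z0), (x1, y1, z1)) in MIC_VOLUMES.items():
        if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
            return mic
    return None  # fall back, e.g., to a microphone 818 adjacent to the array

print(microphone_for((4, 2, 1)))  # -> "mic-816"
```
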
  • the server 902, electronic storage device 20, array 10, users (1, 2, 3, ..., N) 22-1 through 22-N, and associated user interface/display devices 24-1 through 24-N are shown therein.
  • the server 902 includes, among other components, a processing means in the form of one or more central processing units (CPU) 904 coupled to associated read only memory (ROM) 906 and a random access memory (RAM) 908.
  • The ROM 906 stores the program that dictates the operation of the server 902, and the RAM 908 stores variables and values used by the CPU 904 during operation.
  • user interface/display devices 24 are also coupled to the CPU 904. It is to be understood that the CPU may, in alternate embodiments, comprise several processing units, each performing a discrete function.
  • Coupled to both the CPU 904 and the electronic storage device 20 is a memory controller 910.
  • The memory controller 910, under direction of the CPU 904, controls accesses (reads and writes) to the storage device 20.
  • While the memory controller 910 is shown as part of the server 902, it is to be understood that it may reside in the storage device 20.
  • the CPU 904 receives camera outputs from the array 10 via bus 912. As described above, the CPU 904 mixes the camera outputs for display on the user interface/display device 24. Which outputs are mixed depends on the view selected by each user 22. Specifically, each user interface/display device 24 transmits across bus 914 the user inputs that define the view to be displayed. Once the CPU 904 mixes the appropriate outputs, it transmits the resulting output to the user interface/display device 24 via bus 916. As shown, in the present embodiment, each user 22 is independently coupled to the server 902.
  • the bus 912 also carries the camera outputs to the storage device 20 for storage. When storing the camera outputs, the CPU 904 directs the memory controller 910 to store the output of each camera 14 in particular locations of memory in the storage device 20.
  • When the image to be displayed has previously been stored in the storage device 20, the CPU 904 causes the memory controller 910 to access the storage device 20 to retrieve the appropriate camera output. The output is thus transmitted to the CPU 904 via bus 918 where it is mixed. Bus 918 also carries additional source output to the CPU 904 for transmission to the users 22. As with outputs received directly from the array 10, the CPU 904 mixes these outputs and transmits the appropriate view to the user interface/display device 24.
  • FIG. 10 shows a server configuration according to an alternate embodiment of the present invention.
  • the server 1002 generally comprises a control central processing unit (CPU) 1004, a mixing CPU 1006 associated with each user 22, and a memory controller 1008.
  • the control CPU 1004 has associated ROM 1010 and RAM 1012.
  • each mixing CPU 1006 has associated ROM 1014 and RAM 1016.
  • the camera outputs from the array 10 are coupled to each of the mixing CPUs 1 through N 1006-1, 1006-N via bus 1018.
  • each user 22 enters inputs in the interface/display device 24 for transmission (via bus 1020) to the control CPU 1004.
  • the control CPU 1004 interprets the inputs and, via buses 1022-1, 1022-N, transmits control signals to the mixing CPUs 1006-1, 1006-N instructing them which camera outputs received on bus 1018 to mix.
  • the mixing CPUs 1006-1, 1006-N mix the outputs in order to generate the appropriate view and transmit the resulting view via buses 1024-1, 1024-N to the user interface/display devices 24-1, 24-N.
  • each mixing CPU 1006 multiplexes outputs to more than one user 22. Indications of which outputs are to be mixed and transmitted to each user 22 come from the control CPU 1004.
  • the bus 1018 couples the camera outputs not only to the mixing CPUs 1006-1, 1006- N, but also to the storage device 20.
  • the storage device 20 stores the camera outputs in known storage locations.
  • the control CPU 1004 causes the memory controller 1008 to retrieve the appropriate images from the storage device 20. Such images are retrieved into the mixing CPUs 1006 via bus 1026. Additional source output is also retrieved to the mixing CPUs 1006-1, 1006-N via bus 1026.
  • the control CPU 1004 also passes control signals to the mixing CPUs 1006-1, 1006-N to indicate which outputs are to be mixed and displayed.
  • the outputs of cameras are provided to networked (e.g., via an Ethernet) personal computers, for example one capture computer per pair of adjacent cameras and one control computer.
  • each capture computer also includes two video capture boards — one per camera coupled to the capture computer.
  • Each capture computer also provides the mixing functionality, such as tweening, between the cameras coupled thereto.
• the control computer causes each capture computer to receive the output from a camera adjacent to one directly coupled to the capture computer so that the capture computer may mix the outputs of the camera directly coupled to it and the adjacent camera.
• the control computer coordinates the operation of the capture computers and other components as described herein.
  • the system retrieves from the array (or the electronic storage device) and simultaneously transmits to the user at least portions of outputs from two cameras.
  • the server processing element mixes these camera outputs to achieve a stereoscopic output.
  • Each view provided to the user is based on such a stereoscopic output.
  • the outputs from two adjacent cameras in the array are used to produce one stereoscopic view.
• in Figs. 7a-7g, one view is the stereoscopic view from cameras 14-1 and 14-2.
  • the next view is based on the stereoscopic output of cameras 14-2 and 14-3 or two other cameras.
  • the user is provided the added feature of a stereoscopic seamless view of the environment.
  • the present invention allows multiple users to simultaneously navigate through the array independently of each other.
• the systems described above distinguish between inputs from the multiple users and select a separate camera output appropriate to each user's inputs.
• the server tracks the current camera node address associated with each user by storing each node address in a particular memory location associated with that user.
  • each user's input is differentiated and identified as being associated with the particular memory location with the use of message tags appended to the user inputs by the corresponding user interface device.
  • two or more users may choose to be linked, thereby moving in tandem and having the same view of the environment.
• each such linking input includes identifying another user by his/her code to serve as a "guide".
  • the server provides the outputs and views selected by the guide user to both the guide and the other user selecting the guide. Another user input causes the server to unlink the users, thereby allowing each user to control his/her own movement through the array.
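The per-user tracking and "guide" linking just described might look like the following sketch, in which message tags on each input select the memory slot holding that user's current node address. The dictionary-based state and field names are assumptions.

    current_node = {}   # user id -> current camera node address
    guide_of = {}       # follower user id -> guide user id

    def handle_input(tagged_input):
        user = tagged_input["user"]          # message tag added by the interface
        if tagged_input.get("link"):
            guide_of[user] = tagged_input["link"]
        if tagged_input.get("unlink"):
            guide_of.pop(user, None)
        if "node" in tagged_input:
            current_node[user] = tagged_input["node"]
        # A linked user sees whatever view the guide currently has selected.
        return current_node[guide_of.get(user, user)]

    handle_input({"user": "A", "node": (2, 3)})
    print(handle_input({"user": "B", "link": "A"}))   # (2, 3): B follows guide A
    print(handle_input({"user": "B", "unlink": True, "node": (0, 0)}))  # (0, 0)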
  • a user may also wish to navigate forward and backward through the environment, thereby moving closer to or further away from an object.
• the use of a zoom lens would entail robotic control by a single user and preclude the simultaneous viewing of different fields of view at that camera node by multiple users.
• One embodiment solves this problem (multiple users being unable to simultaneously view different fields of view from the same camera position in the array) by creating different field of view options at a single camera position.
  • the different field of view options are created with clusters of cameras at each position in the array, each camera having a different field of view lens but substantially the same vertex in the array.
  • the cameras at the same position have essentially the same vertex by employing beam splitters and/or mirrors to enable the different field of view cameras to be physically positioned away from the vertex in the array, yet have each camera field of view from the same perspective or vertex.
• each camera and its associated output has an address and a storage location where the camera outputs are stored, and is accessible based on user inputs indicating which field of view or relative field of view (zoom in or zoom out) the user desires to receive.
• such multiple cameras at a given node or location in the array may be used in any of the embodiments described herein.
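One way to picture the addressing of such a fixed-lens cluster is the sketch below, in which each (node, field-of-view) pair resolves to its own output, so several users can hold different zoom levels at the same vertex simultaneously. The zoom labels and dictionary layout are invented for illustration.

    cluster_outputs = {
        ((0, 0), "wide"): "frame_wide",
        ((0, 0), "normal"): "frame_normal",
        ((0, 0), "tele"): "frame_tele",
    }
    ZOOM_ORDER = ["wide", "normal", "tele"]

    def zoom(node, level, direction):
        """Zoom in or out by stepping through the fixed-lens cluster at node."""
        i = ZOOM_ORDER.index(level)
        i = min(len(ZOOM_ORDER) - 1, i + 1) if direction == "in" else max(0, i - 1)
        return cluster_outputs[(node, ZOOM_ORDER[i])]

    print(zoom((0, 0), "wide", "in"))   # frame_normal; another user may stay wide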
  • Fig. 11 illustrates a top plan view of another embodiment enabling the user to move left, right, up, down, forward or backwards through the environment.
• a plurality of cylindrical arrays (12-1 through 12-n) of differing diameters, each comprising a series of cameras 14, may be situated around an environment comprising one or more objects 1100, one cylindrical array at a time.
  • Cameras 14 situated around the object(s) 1100 are positioned along an X and Z coordinate system.
• an array 12 may comprise a plurality of rings of the same circumference positioned at different heights along the z-axis to form a cylinder of cameras 14 around the object(s) 1100. This also allows each camera in each array 12 to have an associated, unique storage node address comprising an X and Z coordinate, i.e., Array1(X, Z).
  • a coordinate value corresponding to an axis of a particular camera represents the number of camera positions along that axis the particular camera is displaced from a reference camera.
  • the X axis runs around the perimeter of an array 12, and the Z axis runs down and up.
  • Each storage node is associated with a camera view identified by its X, Z coordinate.
• the outputs of the cameras 14 are coupled to one or more local area hubs for gathering and transmitting the outputs to the server 18.
  • each camera requires only one storage location.
  • the camera output may be stored in a logical arrangement, such as a matrix of n arrays, wherein each array has a plurality of (X,Z) coordinates.
• the node addresses may comprise a specific coordinate within an array, i.e., Array1(Xn, Zn), Array2(Xn, Zn) through Arrayn(Xn, Zn).
  • users can navigate the stored images in much the same manner as the user may navigate through an environment using live camera images.
  • a cylindrical array 12-1 is situated around the object(s) located in an environment 1100.
  • the view of each camera 14 is transmitted to server 18 in step 1220.
  • the electronic storage device 20 of the server 18 stores the output of each camera 14 at the storage node address associated with that camera 14. Storage of the images may be effectuated serially, from one camera 14 at a time within the array 12, or by simultaneous transmission of the image data from all of the cameras 14 of each array 12.
  • cylindrical array 12-1 is removed from the environment (step 1240).
• in step 1250, a determination is made as to the availability of additional cylindrical arrays 12 of diameters differing from those already situated. If additional cylindrical arrays 12 are desired, the process repeats beginning with step 1210. When no additional arrays 12 are available for situating around the environment, the process of inputting images into storage devices 20 is complete (step 1260). At the end of the process, a matrix of addressable stored images exists.
  • a user may navigate through the environment. Navigation is effectuated by accessing the input of the storage nodes by a user interface device 24.
• the user inputs generally include moving around the environment or object 1100 by moving to the left or right, moving higher or lower along the z-axis, moving through the environment closer to or further from the object 1100, or some combination of moving around and through the environment. For example, a user may access the image stored in the node address Array3(0,0) to view an object from the camera previously located at coordinate (0,0) of Array3.
• the user may move directly forward, and therefore closer to the object 1100, by accessing the image stored in Array2(0,0) and then Array1(0,0). To move further away from the object and to the right and up, the user may move from the image stored in node address Array1(0,0) and access the images stored in node address Array2(1,1), followed by accessing the image stored in node address Array3(2,2), and so on.
• a user may, of course, move among arrays and/or coordinates in any increments, changing the point perspective of the environment with each node, as sketched below. Additionally, a user may jump to a particular camera view of the environment. Thus, a user may move throughout the environment in a manner similar to that described above with respect to accessing output of live cameras.
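A minimal sketch of such navigation through the stored matrix follows, treating node addresses as (array, x, z) triples as described above. The array count, ring size, and row count are arbitrary example values.

    N_ARRAYS, N_X, N_Z = 3, 24, 4   # 3 nested cylinders, 24 cameras around, 4 rows

    def move(node, direction):
        array, x, z = node
        if direction == "closer":              # step to a smaller-diameter array
            array = max(1, array - 1)
        elif direction == "further":
            array = min(N_ARRAYS, array + 1)
        elif direction == "clockwise":         # x wraps around the cylinder
            x = (x + 1) % N_X
        elif direction == "counterclockwise":
            x = (x - 1) % N_X
        elif direction == "up":
            z = min(N_Z - 1, z + 1)
        elif direction == "down":
            z = max(0, z - 1)
        return (array, x, z)

    node = (3, 0, 0)                           # Array3(0, 0), as in the example
    for step in ("closer", "closer", "further", "clockwise", "up"):
        node = move(node, step)
    print(node)                                # (2, 1, 1), i.e., Array2(1, 1)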
• This embodiment allows a user to access images that are stored in storage nodes as opposed to accessing live cameras. Moreover, this embodiment provides a convenient system and method to allow a user to move forward and backward in an environment. It should be noted that although each storage node is associated with a camera view identified by its X, Z coordinate of a particular array, other methods of identifying camera views and storage nodes can be used. For example, other coordinate systems, such as those noting angular displacement from a fixed reference point as well as coordinate systems that indicate relative displacement from the current camera node, may be used. It should also be understood that the camera arrays 12 may be shapes other than cylindrical.
  • the camera arrays 12 surround the entire environment.
• the foregoing user inputs, namely move clockwise, move counter-clockwise, up, down, closer to the environment, and further from the environment, are merely general descriptions of movement through the environment.
  • movement in each of these general directions is further defined based upon the user input.
• the output generated by the server to the user may be mixed when moving among adjacent storage nodes associated with environment views (along the x axis, z axis, or among juxtaposed arrays) to generate seamless movement throughout the environment. Mixing may be accomplished by, but is not limited to, the processes described above.
  • an array according to the present invention may be used to capture virtually any image for any purpose.
  • One particular use of one embodiment of the present invention is to compare multiple images.
  • the present invention when used to compare images, can allow for a comparison from any one of multiple point perspectives at any given reference point of time.
• Exemplary embodiments, which will now be described with reference to Figs. 15-17, provide a training aid that compares the images of the swings of two golfers: a training professional and a player/trainee.
  • the array is generally in the form of a geodesic dome 1305 having an opening for a golfer to enter and hit a ball. More specifically, the array extends approximately 270° in a horizontal band, 180° in a vertical band from side to side and 150° in a vertical band from the rear at the ground, forward towards the opening.
  • the array not only includes cameras 1310, but also lights 1315, a greenscreen background covering 1320, a greenscreen background flooring 1325, and a supporting rail structure 1330. As is known in the art, other color backgrounds can be used.
  • the plurality of cameras 1310 populate the interior of the dome 1305 supported by the greenscreen 1320 and/or rails 1330. As described in greater detail below, the green covering 1320 and flooring 1325 allow for easier processing of the images.
• the cameras 1310 can be logically organized in rows; for example, the lowest row 1335 can be designated row0, the second row from the bottom 1340 can be designated row1, and the third row from the bottom 1345 can be designated row2. Additionally, the cameras 1310 in each row can be logically numbered, for example, sequentially from the right of the array, clockwise to the left. As described below, such logical arrangement facilitates processing of images and navigation through the array. In alternate embodiments, the cameras 1310 are mounted in configurations other than rows, such as geometric or random patterns, preferably so that the image captured by one camera 1310 overlaps the image captured by each adjacent camera 1310.
  • the array can be coupled to one or more processing elements, storage devices, user interface devices, and other components according to any one of the configurations described above with reference to Figures 1 and 8-10 and equivalents thereto.
• the images of the professional's swing are stored in one storage device and the images of the trainee's swing are stored in a second storage device.
  • the images of the two swings are stored in different layers, levels or partitions within a single storage device, such as a fluorescent multi-layer disk.
• Each of the two storage devices is coupled in parallel to and can be accessed in parallel by the server.
• the cameras 1310 are coupled to the electronic storage devices so the images may be stored, and the server is coupled to the storage devices so images can be retrieved from storage, processed and restored in the storage devices.
  • a user interface device is also coupled to the server so the images can be transmitted to the user.
  • each camera 1310 operates at approximately thirty frames per second. In an alternate embodiment, the cameras 1310 capture the image at sixty frames per second.
• the image from each camera 1310 and for each frame is then processed to separate the image from the background. More specifically, the server (or dedicated processor) mattes out the image from the solid background 1320 (step 1410).
• Such a process is generally known as bluescreening, matting, keying or chromakeying out the image and can be performed by any of a number of known processes, including those provided by the Ultimatte Corporation under the trade name ULTIMATTE, and by PixelCom J. V. under the trade name PRIMATTE. As will be appreciated by those skilled in the art, matting out the image is preferable for better display of the images.
  • the server then digitally stores the matted or keyed out image of each frame from each camera 1310 in an electronic storage device (step 1415).
  • the outputs (or images) captured in each frame of each camera 1310 are temporarily stored.
• the server then processes the temporarily stored frames to matte/key out the golfer's image from each frame and stores the matted/keyed out image, preferably writing over the original (non-keyed) frames.
  • the server processes the frames, keying out the golfer's image, in real time. In such an embodiment, no temporary image need be stored. In another embodiment, no matting process is performed. Once the professional golfer's swing is captured, the system operation is repeated to capture and store the trainee's swing (step 1420).
• Figure 15 depicts one example of a logical representation and addressing scheme of one golfer's swing as stored in one storage device without storing any mixed images. Taking thirty frames per second and the average golf swing lasting less than three seconds, approximately ninety frames will be stored for each camera. As shown, each frame from each camera is stored at a unique location or address in the storage device. In this embodiment, the first and second (rightmost) digits of the address indicate frame number, the third and fourth digits indicate camera number, and the fifth and sixth digits indicate row number.
• the first frame, frame1, taken by the first camera in the first row, row1(1), is stored at address 01 01 01.
• the third frame, frame3, taken by the second camera in the second row, row2(2), is stored at address 02 02 03. It is to be understood that essentially any addressing scheme may be used for storing camera outputs, so long as the software playing back the images is capable of identifying the appropriate camera output in response to user inputs.
  • the addresses can be represented in any notation, such as hexadecimal or binary, and the addresses may or may not be contiguous. Although not required, in the present embodiment, the same logical arrangement is used for the storage of the second golfer's swing in the second storage device.
  • the playback of the images will now be described with reference to Figures 16 and 17 and continuing reference to Figures 13 and 15.
  • the user selects playback on the user terminal (step 1605) and the playback begins. More specifically, the system begins by providing the user a default starting view of the professional and trainee (step 1610).
• the images of the professional and the trainee are displayed side-by-side, as shown in Figure 17, from the same camera 1310 at frame1. Determination of the first frame is described in greater detail below.
  • the user may begin navigating the stored images.
  • the user enters a user input via the user input device, and the server receives and interprets the input in a manner as described above with reference to Figures 5 and 6 (step 1615).
  • the server then accesses and updates in parallel the trainee image (step 1620a) and the professional image (step 1620b).
  • the user inputs include moving to the left or right and up or down in the array; further, each directional movement can be forward in time, at the same point in time, or backward in time. Such movement is achieved by accessing and, where appropriate, stringing together the frames taken by the cameras. More specifically, navigating through the array can be based on the logical arrangement and addressing scheme of frames: to move to the left to the next camera 1310, the third digit of the address of the image to be viewed is incremented; to move up to the next row, the fifth digit of the address is incremented; to move forward in time to the next frame, the first digit of the address is incremented.
• the next image is that associated with frame1 of row1(2) (i.e., the image stored at address 01 02 01), and then the image associated with frame1 of row1(3) (i.e., the image stored at address 01 03 01).
• the next image could be that associated with frame2 of row2(2) (i.e., the image stored at address 02 02 02), and then the image associated with frame3 of row3(3) (i.e., the image stored at address 03 03 03).
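The digit-pair arithmetic described above can be sketched directly, assuming the stated layout (rightmost pair = frame, middle pair = camera, leftmost pair = row). The helper names are hypothetical.

    def pack(row, camera, frame):
        return row * 10000 + camera * 100 + frame

    def unpack(address):
        return address // 10000, (address // 100) % 100, address % 100

    def move(address, left=0, up=0, forward=0):
        row, camera, frame = unpack(address)
        return pack(row + up, camera + left, frame + forward)

    a = pack(1, 1, 1)        # 010101: frame 1 from row1(1)
    a = move(a, left=1)      # next camera to the left -> 010201
    a = move(a, up=1)        # next row up             -> 020201
    a = move(a, forward=1)   # next frame in time      -> 020202
    print(f"{a:06d}")        # prints 020202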
• the server provides an updated view to the user (step 1625). Images of both the professional and trainee are updated synchronously. Changes to the user's view are applied to both the professional's and the trainee's images. Operation of the present embodiment is made efficient by using the same addressing scheme in both the storage device containing the professional's images and the storage device containing the trainee's images. In other words, each frame from each camera is stored at the same address in different storage devices. Therefore, the server receives the user input, determines the next appropriate camera frame/output and corresponding address, mixes the last frame with the updated frame and causes the image stored at that address in each storage device to be provided to the user.
  • the server awaits the next user input (step 1615).
  • the server continuously updates the view based on the previously entered user input until the user enters a different input.
  • the playback preferably occurs at the same rate as the image capture occurred, namely thirty frames per second in the present embodiment. Therefore, when the selected user input is "forward in time" (from any camera(s)), the view is essentially a video playback at the actual speed of the swings. It should be understood that the present invention is independent of the type of cameras and the capture and playback rates.
  • the present embodiment thus allows for enhanced comparison of images and, consequently, improved training.
  • the trainee's swing can be compared to that of the professional in many ways.
  • the swings can be compared at a single point in time, such as at the top of the trainee's back swing, and from any perspective provided by the array, such as front, back, top, etc.
  • the swings can be compared through sequential points in time, throughout a portion or the entirety of the swings, and from a changing perspective.
  • the swings can be compared at actual speed over and over again, each time from a new perspective.
  • the present embodiment allows two images to be compared at any point in time from any perspective.
  • the images are displayed one overlaid on top of another.
  • the images are displayed with differing luminance levels.
• the professional swing image, which remains constant, can be captured and stored with no change in luminance level.
• the trainee swing image, on the other hand, can be stored with a lesser luminance level so that it can be overlaid on top of the professional swing image.
• the camera outputs are temporarily stored in the storage device and retrieved by the server. The server not only processes the outputs to matte out the image (if desired), but also adjusts the luminance level of each image. The server then stores the processed outputs for later retrieval during playback.
  • the luminance levels are adjusted at different points during the system operation, such as when originally retrieved from the cameras or just prior to outputting to the user interface display device.
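The overlay-with-differing-luminance idea might be sketched as follows using numpy. The dimming factor, image sizes, and boolean matte are arbitrary stand-ins for the matted swing images.

    import numpy as np

    def dim(image, factor=0.5):
        """Reduce luminance by scaling pixel values."""
        return (image.astype(np.float32) * factor).astype(np.uint8)

    def overlay(base, top, matte):
        """Composite top over base wherever the matte is set."""
        return np.where(matte, top, base)

    pro = np.full((4, 4, 3), 200, dtype=np.uint8)       # professional's frame
    trainee = np.full((4, 4, 3), 180, dtype=np.uint8)   # trainee's matted frame
    matte = np.zeros((4, 4, 3), dtype=bool)
    matte[1:3, 1:3, :] = True                           # where the trainee appears
    print(overlay(pro, dim(trainee), matte)[1, 1])      # [90 90 90]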
  • the user may separately control the views of the professional's and the trainee's swings.
  • the server discriminates between two sets of user inputs — one relating to each of the two images.
  • the opening in the dome allows the golfers to take a realistic swing and hit an actual ball. Where a greater range of viewing is desired, however, the array need not include an opening for the ball to travel. Instead, the golfers can be completely enclosed in a dome of cameras (entering by way of a door having cameras mounted thereon), thereby allowing viewing from 360°.
  • the server mixes the camera frames/images by electronically switching between frames/images.
  • the server mixes the frames/images in any of the manners described above.
  • mixing includes creating a "tweened" image from the output of adjacent cameras. The tweened image can be created and stored, or depending upon available processing power, created in real time as the view is being presented to the user.
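As one hedged illustration of tweening, the sketch below blends two adjacent cameras' frames with a simple cross-dissolve; actual tweening could also warp pixels based on scene geometry, which is not attempted here.

    import numpy as np

    def tween(frame_a, frame_b, alpha=0.5):
        """Blend two frames; alpha=0 gives frame_a, alpha=1 gives frame_b."""
        mixed = ((1.0 - alpha) * frame_a.astype(np.float32)
                 + alpha * frame_b.astype(np.float32))
        return mixed.astype(np.uint8)

    cam1 = np.full((2, 2), 100, dtype=np.uint8)
    cam2 = np.full((2, 2), 200, dtype=np.uint8)
    print(tween(cam1, cam2)[0, 0])   # 150: halfway between the two views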
  • Figure 18a illustrates the logical relationship of real and mixed images according to one embodiment in which the mixed images are synthesized images that are the product of images (output) from adjacent cameras.
• the logical arrangement of frames containing the real and mixed images can best be illustrated in the three-dimensional representation in which the first axis represents sequential frames, the second axis represents sequential rows, and the third axis represents sequential cameras in each row.
  • sequential frames of the same camera are illustrated along the horizontal axis (i.e., left to right)
• adjacent rows are illustrated along the vertical axis
• adjacent cameras in the same row are illustrated along the axis extending into the page.
  • frames containing real images are illustrated as squares and bear the same logical address as corresponding frames identified in Figure 15.
  • Synthesized frames created by mixing outputs from the same point in time, from two adjacent cameras, in the same row are represented by triangles; synthesized frames created by mixing outputs from the same point in time, from corresponding cameras in adjacent rows are indicated by circles; and synthesized frames created by mixing outputs from the same point in time, from a camera in a given row and from the next camera in an adjacent row are indicated by diamonds.
  • the asterisk indicates a synthesized frame created by mixing the outputs from adjacent cameras, in adjacent rows taken at subsequent points in time (i.e., adjacent frames).
  • the mixed images are labeled with the logical notation wherein an apostrophe (') adjacent to either the second or third pair of digits signifies that the image was created by mixing outputs of adjacent cameras in the same row or corresponding cameras in adjacent rows, respectively.
• the notation 01' 01 01 refers to the image created by mixing frames 01 01 01 and 02 01 01
• 01' 01' 01 refers to the image created by mixing frames 01 01 01 and 02 02 01
• 01' 01' 01' refers to the image created by mixing frames 01 01 01 and 02 02 02.
• certain of the mixed frames, although described as being the product of two particular frames, may be the product of two or more other frames.
  • frame 01' 01' 01 may be created by mixing frames 02 01
• although Figure 18a illustrates only two successive frames of each of two adjacent cameras in each of two adjacent rows, it is to be understood that the logical depiction is readily extensible to multiple frames, cameras and rows. Having described the logical relationship of frames containing real images and synthesized frames containing mixed images, exemplary user navigation will be described with reference to Figures 18b and c, which use the same notation as Figure 18a, and continuing reference to Figure 13.
• a user navigating the array from the first camera in row 1 and moving to the left at the same point in time is sequentially provided the images of frames 01 01 01, 01 01' 01, and 01 02 01.
• the user is sequentially provided frames 01' 02 01 and 02 02 01.
• moving forward in time from the same camera, the user is sequentially provided the image of frame 02 02 02 and subsequent frames 02 02 03, 02 02 04, et seq.
  • a user navigating through the array diagonally to the left and up while moving forward in time is sequentially provided frames 01 01 01,
  • the system identifies one or more reference points of the swings and uses such reference points to synchronize the swings and/or adjust the playback speed of the swings.
  • the system includes a user interface device, through which a user can manually indicate a reference point of a swing, or any number of motion measuring devices, such as motion detectors, range finders, electronic tags (mounted on the golfer or golf club) and the like.
  • various points in the swing can be identified, including the beginning of movement of the golf club during the back swing, the change of direction of the golf club at the end of the back swing, contact of the golf club and the golf ball, the end of the follow-through, when the golf club comes to rest, and the like.
  • Manual indications, as well as indications received from such movement measuring means, of the various points in the swing may be used to synchronize the swings of the professional and the trainee.
  • the system receives the indication and, in essentially real time, tags the corresponding reference frame.
  • the two identified reference frames are used as synchronizing points for the swings. For example, in one embodiment where the reference point is the beginning of the back swing, such reference frames are used as the first frame in the playback and all navigation is performed relative to the two reference frames.
  • a user is able to compare the swings to determine whether the trainee is swinging too fast or too slow.
• a professional golfer may swing in about two seconds. During the two-second swing, cameras operating at thirty frames per second capture sixty frames. A trainee, on the other hand, may swing more slowly, over three seconds. Thus, the trainee swing will take a total of ninety frames. Accordingly, with the playback of the images occurring at the same thirty-frames-per-second rate, the addition of thirty frames to the professional swing will cause the professional's swing to be the same duration, and thus speed, as the trainee's swing; both will be ninety frames in duration.
• these thirty additional frames are preferably mixed images created from successive frames of each camera, uniformly interspersed among the frames of each camera containing real images.
  • the logical arrangement of the frames containing real images and frames containing mixed images of the foregoing example is illustrated in Figure 19.
  • Interspersed among the sixty frames containing real images of the professional swing are thirty frames of mixed images. More specifically, the thirty mixed images are uniformly interspersed between every other pair of frames; a mixed image has been created between frames 1 and 2, not between frames 2 and 3, between frames 3 and 4, not between frames 4 and 5, and so forth.
  • such mixed images created from successive frames from the same camera can be combined in the same embodiment as mixed images created from frames from different cameras.
  • such mixed images interspersed for the purpose of adjusting the speed of the image are used to create other mixed images.
• the mixed images that are interspersed for adjusting the speed of the swing are indicated by an "X", and (using the notation of Figure 18a) mixed images 01 01 01' and 02 01 01' are used to create mixed image 01' 01 01'.
  • the system first captures and stores the image of the professional's swing and the image of the trainee's swing (step 2010). The system then receives a user input via a user interface device indicating the user's desire to harmonize the speeds of two swings (step 2020). The system then proceeds to create the necessary mixed images.
• the system receives indications via the motion measuring device coupled to the system (e.g., server) noting both the beginning and end of the first swing (step 2030).
  • These user indications correspond to particular points in time relative to the start of recording, which, in turn, correspond to particular reference frames that the system tags.
• the system automatically identifies the beginning and end of each swing by input from any of a number of motion measuring devices, such as motion detectors, range finders, electronic tags and the like, and, in other embodiments, through manual input via a user interface device during playback of the images.
  • beginning and end points of a swing need not be precisely defined, but are preferably selected so that the points correspond to the same part of the two swings.
  • the beginning may be the beginning of the golfer's back swing and the end may be when the golf club comes to rest after the golfer's follow-through.
  • the number of frames in the faster swing is subtracted from the number of frames in the slower swing, resulting in the number of mixed images to be added to the faster swing (step 2060).
• where the slower swing included ninety frames and the faster swing sixty frames, thirty mixed frames must be added to the faster swing.
  • the system must also determine the composition of the mixed images (step 2070).
  • the system must determine the "location" of the mixed images.
  • the system evenly intersperses the frames containing the mixed images.
  • the location of the frames is determined by dividing the number of additional mixed images to be added into the number of frames containing real images of the faster swing.
  • sixty original frames divided by thirty additional mixed images equals one added mixed image every two original frames. Where the division results in a non-integer, even distribution can be approximated by rounding the result to the next highest integer.
  • Each mixed image comprises the product of mixing the two adjacent frames containing real images.
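The counting and placement steps of Figure 20 reduce to a short routine; the sketch below computes the number of mixed frames to add and intersperses them evenly, representing each one as a ("tween", ...) placeholder. Function and variable names are assumptions.

    def harmonize(fast_frames, slow_count):
        extra = slow_count - len(fast_frames)      # mixed frames needed (step 2060)
        if extra <= 0:
            return list(fast_frames)
        interval = -(-len(fast_frames) // extra)   # ceiling division: even spread
        result = []
        for i, frame in enumerate(fast_frames):
            result.append(frame)
            # Insert a tween between this real frame and the next one, every
            # `interval` real frames, until all extra frames are placed.
            if i % interval == 0 and i + 1 < len(fast_frames) and extra > 0:
                result.append(("tween", frame, fast_frames[i + 1]))
                extra -= 1
        return result

    frames = [f"f{i}" for i in range(1, 61)]       # sixty real frames
    print(len(harmonize(frames, 90)))              # 90 frames after insertion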
  • the present invention includes other manners of harmonizing the speed of the two swings.
  • blank frames are inserted or repeat frames are inserted.
  • the system accounts for the different speeds by adjusting the playback speed based on the ratio of the lengths of the swings.
• the playback speed of the professional's swing (sixty frames) is two-thirds (60 frames / 90 frames) that of the trainee's swing (ninety frames).
  • the system adjusts the playback speed by accessing and/or refreshing the frames at different rates.
  • a number of frames (equal to the number otherwise to be added to the faster swing in the above embodiments) from the slower swing are dropped from the image.
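The rate-ratio alternative can be expressed in a couple of lines: pace both images to the slower swing's duration and derive each image's playback rate from its frame count. The 30 fps base rate follows the embodiment above; the names are assumed.

    BASE_FPS = 30.0

    def playback_rates(frames_a, frames_b):
        """Per-image frame rates so both images finish at the same time."""
        duration = max(frames_a, frames_b) / BASE_FPS   # slower swing sets pace
        return frames_a / duration, frames_b / duration

    pro_fps, trainee_fps = playback_rates(60, 90)
    print(pro_fps, trainee_fps)   # 20.0 30.0: professional plays at 2/3 speed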
  • the system and method for adjusting the speed of a swing may be separately applied to portions of a swing, thereby synchronizing discrete portions of swings.
  • the different durations of the professional's and trainee's backswings may be harmonized so that upon playback both images arrive at the end of the backswing at the same time.
• the remainder of the swing, i.e., the downswing and follow-through, may likewise be harmonized separately.
  • the process of Figure 20 is performed based on the beginning and end of each portion of the swing to be synchronized.
• in one embodiment, the frames are arranged in a linked list: each data element in the list points to a frame as well as to the previous and successive frames in each of the variable dimensions, such as those illustrated in Figure 18a, including up and down, diagonal, left and right, and forward and back in time.
  • the data elements in the linked list point to either the previous or successive frame in a subset of those dimensions.
  • frames taken from cameras at the boundaries of the array are linked to frames taken at the opposite boundary. For example, the frames from the last camera in a given row of the array of Figure 13 are linked to frames from the first camera in the same row.
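A toy version of that linked arrangement, showing only the left/right dimension of one row with wraparound at the boundaries, might look like this; the node class and sizes are illustrative.

    class FrameNode:
        def __init__(self, row, camera, frame):
            self.key = (row, camera, frame)
            self.neighbors = {}                # direction -> FrameNode

    N_CAMERAS = 4                              # cameras per row in this toy array
    nodes = {(0, c, 0): FrameNode(0, c, 0) for c in range(N_CAMERAS)}

    for c in range(N_CAMERAS):
        # Boundary cameras link around to the opposite boundary of the row.
        nodes[(0, c, 0)].neighbors["right"] = nodes[(0, (c + 1) % N_CAMERAS, 0)]
        nodes[(0, c, 0)].neighbors["left"] = nodes[(0, (c - 1) % N_CAMERAS, 0)]

    print(nodes[(0, 3, 0)].neighbors["right"].key)   # (0, 0, 0): wrapped around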
• although the exemplary embodiments described herein relating to harmonizing the speed and duration of images are concerned with harmonizing two images, the present invention can be used to harmonize multiple images by utilizing the process described with reference to Figure 19 to add frames to all but the longest image. Furthermore, it is to be understood that although the embodiments described herein intersperse a single frame containing a mixed image between frames containing real images, in alternate embodiments multiple frames containing mixed images are interspersed between frames containing real images.
  • images captured and processed according to the present invention may be stored on a portable storage medium, such as a CD-ROM, and played back by a user on hardware separate from that which was used to capture and process the images.
  • the play-back hardware includes software providing the play back functionality, including the ability to interpret user inputs and, in response thereto, locate and display appropriate frames.
  • the playback software locates the frames in any number of ways, including accessing a mapping or linked list of the frames which is stored on the storage medium.

Abstract

A telepresence system (100) uses an array (10) of cameras (14) to provide a first user (22-1) with a first display of an environment and a second user (22-2) with a second display of the environment. Each camera (14) has an associated view of the environment. A first user interface device (24) has first user inputs associated with movement along a first path, and a second user interface device has second user inputs associated with a second path. A processing element interprets the first and the second inputs and independently selects output of the cameras, allowing the first user and the second user to navigate simultaneously and independently through the environment. In alternate embodiments the array (10) includes multiple cameras (14) at one node or perspective, the cameras (14) having different fields of view selectable by the users for navigating forward or backward in the environment. The system also may be used for comparing multiple images or portions of images.

Description

METHOD AND SYSTEM FOR COMPARING MULTIPLE IMAGES UTILIZING A NAVIGABLE ARRAY OF CAMERAS
BACKGROUND OF THE INVENTION
1. Field Of The Invention
The present invention relates to a telepresence system and, more particularly, to a navigable camera array telepresence system and method of using same for comparing two or more images.
2. Description Of Related Art
In general, a need exists for the development of telepresence systems suitable for use with static venues, such as museums, and dynamic venues or events, such as music concerts. The viewing of such venues is limited by time, geographical location, and the viewer capacity of the venue. For example, potential visitors to a museum may be prevented from viewing an exhibit due to the limited hours the museum is open. Similarly, music concert producers must turn back fans due to the limited seating of an arena. In short, limited access to venues reduces the revenue generated.
In an attempt to increase the revenue stream from both static and dynamic venues, such venues have been recorded for broadcast or distribution. In some instances, dynamic venues are also broadcast live. While such broadcasting increases access to the venues, it involves considerable production effort. Typically, recorded broadcasts must be cut and edited, as views from multiple cameras are pieced together. These editorial and production efforts are costly.
In some instances, the broadcast resulting from these editorial and production efforts provides viewers with limited enjoyment. Specifically, the broadcast is typically based on filming the venue from a finite number of predetermined cameras. Thus, the broadcast contains limited viewing angles and perspectives of the venue. Moreover, the viewing angles and perspectives presented in the broadcast are those selected by a producer or director during the editorial and production process; there is no viewer autonomy. Furthermore, although the broadcast is often recorded for multiple viewings, the broadcast has limited content life because each viewing is identical to the first. Because each showing looks and sounds the same, viewers rarely come back for multiple viewings. A viewer fortunate enough to attend a venue in person will encounter many of the same problems. For example, a museum-goer must remain behind the barricades, viewing exhibits from limited angles and perspectives. Similarly, concert-goers are often restricted to a particular seat or section in an arena. Even if a viewer were allowed free access to the entire arena to videotape the venue, such a recording would also have limited content life because each viewing would be the same as the first. Therefore, a need exists for a telepresence system that preferably provides user autonomy while resulting in recordings with enhanced content life at a reduced production cost.
Apparently, attempts have been made to develop telepresence systems to satisfy some of the foregoing needs. One telepresence system is described in U.S. Patent No. 5,708,469 for Multiple View Telepresence Camera Systems Using A Wire Cage Which Surrounds A Plurality Of Multiple Cameras And Identifies The Fields Of View, issued January 13, 1998. The system described therein includes a plurality of cameras, wherein each camera has a field of view that is space-contiguous with and at a right angle to at least one other camera. In other words, it is preferable that the camera fields of view do not overlap each other. A user interface allows the user to jump between views. In order for the user's view to move through the venue or environment, a moving vehicle carries the cameras.
This system, however, has several drawbacks. For example, in order for a viewer's perspective to move through the venue, the moving vehicle must be actuated and controlled. In this regard, operation of the system is complicated. Furthermore, because the camera views are contiguous, typically at right angles, changing camera views results in a discontinuous image.
Other attempts at providing a telepresence system have taken the form of 360 degree camera systems. One such system is described in U.S. Patent No. 5,745,305 for Panoramic Viewing Apparatus, issued April 28, 1998. The system described therein provides a 360 degree view of an environment by arranging multiple cameras around a pyramid shaped reflective element. Each camera, all of which share a common virtual optical center, receives an image from a different side of the reflective pyramid. Other types of 360 degree camera systems employ a parabolic lens or a rotating camera. Such 360 degree camera systems also suffer from drawbacks. In particular, such systems limit the user's view to 360 degrees from a given point perspective. In other words, 360 degree camera systems provide the user with a panoramic view from a single location. Only if the camera system were mounted on a moving vehicle could the user experience simulated movement through an environment.
U.S. Patent No. 5,187,571 for Television System For Displaying Multiple Views of A Remote Location, issued February 16, 1993, describes a camera system similar to the 360 degree camera systems described above. The system described allows a user to select an arbitrary and continuously variable section of an aggregate field of view. Multiple cameras are aligned so that each camera's field of view merges contiguously with those of adjacent cameras, thereby creating the aggregate field of view. The aggregate field of view may expand to cover 360 degrees. In order to create the aggregate field of view, the cameras' views must be contiguous. In order for the camera views to be contiguous, the cameras have to share a common point perspective, or vertex. Thus, like the previously described 360 degree camera systems, the system of U.S. Patent No. 5,187,571 limits a user's view to a single point perspective, rather than allowing a user to experience movement in perspective through an environment. Also, with regard to the system of U.S. Patent No. 5,187,571, in order to achieve the contiguity between camera views, a relatively complex arrangement of mirrors is required. Additionally, each camera seemingly must also be placed in the same vertical plane.
Thus, a need still exists for an improved telepresence system that provides the ability to better simulate a viewer's actual presence in a venue, preferably in real time.
3. Summary of the Invention
These and other needs are satisfied by the present invention. A telepresence system according to one embodiment of the present invention includes an array of cameras, each of which has an associated view of an environment and an associated output representing the view. The system also includes a first user interface device having first user inputs associated with movement along a first path in the array. The system further includes a second user interface device having second user inputs associated with movement along a second path in the array. A processing element is coupled to the user interface devices. The processing element receives and interprets the first inputs and selects outputs of cameras in the first path. Similarly, the processing element receives and interprets the second inputs and selects outputs of cameras in the second path independently of the first inputs. Thus, a first user and a second user are able to navigate simultaneously and independently through the array. In another embodiment, the system may also mix the output by mosaicing or tweening the output images. In a further embodiment of the present invention the telepresence system distinguishes between permissible cameras in the array and impermissible cameras in the array. In yet another embodiment of the present invention the telepresence system allows a user to move forward or backward through the environment.
BRIEF DESCRIPTION OF THE DRAWINGS
Figure 1 is an overall schematic of one embodiment of the present invention.
Figure 2a is a perspective view of a camera and a camera rail section of the array according to one embodiment of the present invention.
Figures 2b-2d are side plan views of a camera and a camera rail according to one embodiment of the present invention.
Figure 2e is a top plan view of a camera rail according to one embodiment of the present invention.
Figure 3 is a perspective view of a portion of the camera array according to one embodiment of the present invention.
Figure 4 is a perspective view of a portion of the camera array according to an alternate embodiment of the present invention.
Figure 5 is a flowchart illustrating the general operation of the user interface according to one embodiment of the present invention.
Figure 6 is a flowchart illustrating in detail a portion of the operation shown in Figure 5.
Figure 7a is a perspective view of a portion of one embodiment of the present invention illustrating the arrangement of the camera array relative to objects being viewed.
Figures 7b-7g illustrate views from the perspectives of selected cameras of the array in Figure 7a.
Figure 8 is a schematic view of an alternate embodiment of the present invention.
Figure 9 is a schematic view of a server according to one embodiment of the present invention.
Figure 10 is a schematic view of a server according to an alternate embodiment of the present invention.
Figure 11 is a top plan view of an alternate embodiment of the present invention.
Figure 12 is a flowchart illustrating in detail the image capture portion of the operation of the embodiment shown in Figure 11.
Figure 13 is a schematic illustrating an array of one embodiment of the present invention.
Figure 14 is a flowchart illustrating the image capture process of one embodiment of the present invention.
Figure 15 is a schematic illustrating the logical arrangement of frames of an image according to one embodiment of the present invention.
Figure 16 is a flowchart illustrating the playback process of one embodiment of the present invention.
Figure 17 is a schematic view representing a display according to one embodiment of the present invention.
Figures 18a-c are schematics illustrating the logical relationship among frames according to one embodiment of the present invention.
Figure 19 is a schematic illustrating the logical arrangement of frames according to one embodiment of the present invention.
Figure 20 is a flowchart illustrating the process of harmonizing the duration of images according to one embodiment of the present invention.
DESCRIPTION OF PREFERRED EMBODIMENTS
1. General Description Of Preferred Embodiments
The present invention relates to a telepresence system that, in preferred embodiments, uses modular, interlocking arrays of microcameras or cameras. The cameras are on rails, with each rail holding a plurality of cameras. These cameras, each locked in a fixed relation to every adjacent camera on the array and dispersed dimensionally in a given environment, transmit image output to an associated storage node, thereby enabling remote viewers to navigate through such environment with the same spatial and visual cues (the changing perspective lines, the moving light reflections and shadows) that characterize an actual in-environment transit. In another preferred embodiment, the outputs of these microcameras are linked by tiny (less than half the width of a human hair) Vertical Cavity Surface Emitting Lasers (VCSELs) to optical fibers, fed through area net hubs, buffered on server arrays or server farms (either for recording or (instantaneous) relay) and sent to viewers at remote terminals, interactive wall screens, or mobile image appliances (like Virtual Retinal Displays). Each remote viewer, through an intuitive graphical user interface (GUI), can navigate effortlessly through the environment, enabling seamless movement through the event.
This involves a multiplexed, electronic switching process (invisible to the viewer) which moves the viewer's point perspective from camera to camera. Rather than relying, per se, on physically moving a camera through space, the system uses the multiplicity of positioned cameras to move the viewer's perspective from camera node to adjacent camera node in a way that provides the viewer with a sequential visual and acoustical path throughout the extent of the array. This allows the viewer to fluidly track or dolly through a 3-dimensional remote environment, to move through an event and make autonomous real-time decisions about where to move and when to linger.
Instead of investing the viewer with the capacity to physically move a robotic camera, which would immediately limit the number of viewers that could simultaneously control their own course, the system lets each viewer navigate via storage nodes containing images of an environment associated with a pre-existing array of cameras. The user can move around the environment in any direction — clockwise or counterclockwise, up, down, closer to or further away from the environment, or some combination thereof. Moreover, image output mixing, such as mosaicing and tweening, effectuates seamless motion throughout the environment.
2. Detailed Description Of Preferred Embodiments
Certain embodiments of the present invention will now be described in greater detail with reference to the drawings. It is understood that the operation and functionality of many of the components of the embodiments described herein are known to one skilled in the art and, as such, the present description does not go into detail on such operation and functionality. A telepresence system 100 according to the present invention is shown in Fig. 1. The telepresence system 100 generally includes an array 10 of cameras 14 coupled to a server 18, which in turn is coupled to one or more users 22 each having a user interface/display device 24. As will be understood by one skilled in the art, the operation and functionality of the embodiment described herein is provided, in part, by the server and user interface/display device. While the operation of these components is not described by way of particular code listings or logic diagrams, it is to be understood that one skilled in the art will be able to arrive at suitable implementations based on the functional and operational details provided herein. Furthermore, the scope of the present invention is not to be construed as limited to any particular code or logic implementation.
In the present embodiment, the camera array 10 is conceptualized as being in an X, Z coordinate system. This allows each camera to have an associated, unique node address comprising an X and Z coordinate (X, Z). In the present embodiment, for example, a coordinate value corresponding to an axis of a particular camera represents the number of camera positions along that axis the particular camera is displaced from a reference camera. In the present embodiment, from the user's perspective the X axis runs left and right, and the Z axis runs down and up. Each camera 14 is identified by its X, Z coordinate. It is to be understood, however, that other methods of identifying cameras 14 can be used. For example, other coordinate systems, such as those noting angular displacement from a fixed reference point as well as coordinate systems that indicate relative displacement from the current camera node, may be used. In another alternate embodiment, the array is three dimensional, located in an X, Y, Z coordinate system.
The array 10 comprises a plurality of rails 12, each rail 12 including a series of one or more cameras 14. The outputs from the cameras 14 are coupled to the server 18 by means of local area hubs 16. The local area hubs 16 gather the outputs and, when necessary, amplify the outputs for transmission to the server 18. In an alternate embodiment, the local area hubs 16 multiplex the outputs for transmission to the server 18. Although the figure depicts the communication links 15 between the cameras 14 and the server 18 as being hardwired, it is to be understood that wireless links may be employed. Thus, it is within the scope of the present invention for the communication links 15 to take the form of fiber optics, cable, satellite, microwave transmission, internet, and the like. Also coupled to the server 18 is an electronic storage device 20. The server 18 transfers the outputs to the electronic storage device 20. The electronic (mass) storage device 20, in turn, transfers each camera's output onto a storage medium or means, such as CD-ROM, DVD, fluorescent multilayered disk (FMD), tape, platter, disk array, or the like. The output of each camera 14 is stored in particular locations on the storage medium associated with that camera 14 or is stored with an indication of which camera 14 each stored output corresponds to. For example, the output of each camera 14 is stored in contiguous locations on a separate disk, tape, CD-ROM, or platter. As is known in the art, the camera output may be stored in a compressed format, such as JPEG, which is a standard format for storing still color and grayscale photographs in bitmap form, MPEG1, which is a standard format for storing video output at 30 frames per second, MPEG2, which is a standard format for storing video output at 60 frames per second (typically used for high bandwidth applications such as HDTV and DVD-ROMs), and the like. Having stored each output allows a user to later view the environment over and over again, each time moving through the array 10 in a new path, as described below. In some embodiments of the present invention, such as those providing only real-time viewing, no storage device is required.
As will be described in detail below, the server 18 receives output from the cameras 14 in the array. The server 18 processes these outputs for either storage in the electronic storage device 20, transmission to the users 22, or both.
It is to be understood that although the server 18 is configured to provide the functionality of the system 100 in the present embodiment, other processing elements may provide the functionality of the system 100. For example, in alternate embodiments, the user interface device is a personal computer programmed to interpret the user input and transmit an indication of the desired current node address, buffer outputs from the array, and provide other of the described functions.
As shown, the system 100 can accommodate (but does not require) multiple users 22. Each user 22 has associated therewith a user interface device including a user display device (collectively 24). For example, user 22-1 has an associated user interface device and a user display device in the form of a computer 24-1 having a monitor and a keyboard. User 22-2 has associated therewith an interactive wall screen 24-2 which serves as a user interface device and a user display device. The user interface device and the user display device of user 22-3 includes a mobile audio and image appliance 24-3. A digital interactive TV 24-4 is the user interface device and user display device of user 22-4. Similarly, user 22-5 has a voice recognition unit and monitor 24-5 as the user interface and display devices. It is to be understood that the foregoing user interface devices and user display devices are merely exemplary; for example, other interface devices include a mouse, touch screen, biofeedback devices, as well as those identified in U.S. Provisional Patent Application Serial No. 60/080,413 and the like.
As described in detail below, each user interface device 24 has associated therewith user inputs. These user inputs allow each user 22 to move or navigate independently through the array 10. In other words, each user 22 enters inputs to generally select which camera outputs are transferred to the user display device. Preferably, each user display device includes a graphical representation of the array 10. The graphical representation includes an indication of which camera in the array the output of which is being viewed. The user inputs allow each user to not only select particular cameras, but also to select relative movement or navigational paths through the array 10. It is to be understood that as used herein a path is defined by both cameras and time. As such, two users navigating through the same series of cameras may navigate different paths, provided the users do not access all cameras simultaneously. In other words, a linear series of plurality of cameras provides for a plurality of paths.
As shown in Fig. 1, each user 22 may be coupled to the server 18 by an independent communication link. Furthermore, each communication link may employ different technology. For example, in alternate embodiments, the communication links include an internet link, a microwave signal link, a satellite link, a cable link, a fiber optic link, a wireless link, and the like.
It is to be understood that the array 10 provides several advantages. For example, because the array 10 employs a series of cameras 14, no individual camera, or the entire array 10 for that matter, need be moved in order to obtain a seamless view of the environment. Instead, the user navigates through the array 10, which is strategically placed through and around the physical environment to be viewed. Furthermore, because the cameras 14 of the array 10 are physically located at different points in the environment to be viewed, a user is able to view changes in perspective, a feature unavailable to a single camera that merely changes focal length.
Cameras
It is to be understood that the present invention does not depend upon any particular type of camera and as such, includes in alternate embodiments, analog or digital, video or still, or full size or microcameras-microlenses mounted on thumbnail-sized CMOS active pixel sensor (APS) microchips. The video chips used in microcameras may be CMOS, CCD and the like, and are produced in a mainstream manufacturing process, by several companies, including Photobit, Pasadena, CA; Sarnoff Corporation, Princeton, NJ; and VLSI Vision, Ltd., Edinburgh, Scotland.
One specific suitable camera is the analog color CCD camera manufactured by Sanyo Electric Co. Ltd. under the tradename VCC-5974. As will be appreciated by those skilled in the art, such an analog camera is used in conjunction with video capture boards, such as those provided by Matrox Electronic Systems under the tradename Meteor-II, which include an analog-to-digital converter for converting analog NTSC video. In various embodiments involving video, the capture boards also receive a video synchronizing signal, noted below, so that the output of each camera is synchronized, with each captured frame of one camera corresponding to that of the other. From the capture boards, the camera output is then provided to one or more servers or processing elements for processing.
Structure of the Array
The structure of the array 10 will now be described in greater detail with reference to Figs. 2a-2e. In general, the camera array 10 of the present embodiment comprises a series of modular rails 12 carrying cameras 14. The structure of the rails 12 and cameras 14 will now be discussed in greater detail with reference to Figs. 2a through 2d. Each camera 14 includes registration pins 34. In one embodiment, the cameras 14 utilize VCSELs to transfer their outputs to the rail 12. It is to be understood that the present invention is not limited to any particular type of camera 14, however, or even to an array 10 consisting of only one type of camera 14. Each rail 12 includes two sides, 12a, 12b, at least one of which 12b is hingeably connected to the base 12c of the rail 12. The base 12c includes docking ports 36 for receiving the registration pins 34 of the camera 14. When the camera 14 is seated on a rail 12 such that the registration pins 34 are fully engaged in the docking ports 36, the hinged side 12b of the rail 12 is moved against the base 32 of the camera 14, thereby securing the camera 14 to the rail 12. Each rail 12 further includes a first end 38 and a second end 44. The first end 38 includes, in the present embodiment, two locking pins 40 and a protected transmission relay port 42 for transmitting the camera outputs. The second end 44 includes two guide holes 46 for receiving the locking pins 40, and a transmission receiving port 48. Thus, the first end 38 of one rail 12 is engageable with a second end 44 of another rail 12. Therefore, each rail 12 is modular and can be functionally connected to another rail to create the array 10.
Once the camera 14 is securely seated to the rail 12, the camera 14 is positioned such that the camera output may be transmitted via a cable or VCSEL to the rail 12. Each rail 12 includes communication paths for transmitting the output from each camera 14. Alternatively, a cable couples each camera to the server.
Although the array 10 is shown having a particular configuration, it is to be understood that virtually any configuration of rails 12 and cameras 14 is within the scope of the present invention. For example, the array 10 may be a linear array of cameras 14, a 2-dimensional array of cameras 14, a 3-dimensional array of cameras 14, or any combination thereof. Furthermore, the array 10 need not be comprised solely of linear segments, but rather may include curvilinear sections.
Furthermore, in an alternate embodiment individual rails support a single camera and include varying degree of freedom extension spacers on either end of the rail to change the spacing between cameras or change the angle between adjacent cameras. These spacers comprise linear or rotary actuators or electrostrictive polymers controlled by one of the system servers.
The array 10 is supported by any of a number of support means. For example, the array 10 can be fixedly mounted to a wall or ceiling; the array 10 can be secured to a moveable frame that can be wheeled into position in the environment or supported from cables.
Fig. 3 illustrates an example of a portion of the array 10. As shown, the array 10 comprises five rows of rails 12a through 12e. Each of these rails 12a-12e is directed towards a central plane, which substantially passes through the center row 12c. Consequently, for any object placed in the same plane as the middle row 12c, a user would be able to view the object essentially from the bottom, front, and top.
As noted above, the rails 12 of the array 10 need not have the same geometry. For example, some of the rails 12 may be straight while others may be curved. For example, Fig. 4 illustrates the camera alignment that results from utilizing curved rails. It should be noted that the rails in Fig. 4 have been made transparent so that the arrangement of cameras 14 may be easily seen.
In an alternate embodiment, each rail is configured in a step-like fashion or an arc with each camera above (or below) and in front of a previous camera. In such an arrangement, the user has the option of moving forward through the environment.
It is to be understood that the spacing of the cameras 14 depends on the particular application, including the objects being viewed, the focal length of the cameras 14, and the speed of movement through the array 10. In general, the closer the cameras and the greater the overlap in views, the more seamless the transition between camera views. In one embodiment the distance between cameras 14 can be approximated by analogy to the distance between exposed frames taken by a motion picture camera dollying linearly through an environment. In general, the speed of movement of the camera through the environment divided by the frames exposed per unit of time results in a frame-distance ratio. For example, as shown by the following equations, in some applications a frame is taken every inch. A conventional movie camera records twenty-four frames per second. When such a camera is moved linearly through an environment at two feet per second, a frame is taken approximately every inch.
(2 ft / sec) ÷ (24 frames / sec) = (24 inches / sec) ÷ (24 frames / sec) = 1 inch per frame,

that is, one frame is exposed for approximately every inch of camera travel.
A frame of the projector is analogous to a camera 14 in the present invention. Thus, where one frame exposed per inch results in a movie having a seamless view of the environment, so too does one camera 14 per inch. Thus, in one embodiment of the present invention the cameras 14 are spaced approximately one inch apart, thereby resulting in a seamless view of the environment.
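By way of illustration only, the dolly analogy reduces to a one-line calculation. The following sketch is a minimal illustration; the helper name and units are assumptions introduced here and are not part of the disclosed system.

    def camera_spacing_inches(dolly_speed_ft_per_sec, frames_per_sec):
        # spacing = distance travelled per second / frames exposed per second
        inches_per_sec = dolly_speed_ft_per_sec * 12
        return inches_per_sec / frames_per_sec

    # 2 ft/sec at 24 frames/sec yields one frame, and hence one camera, per inch.
    print(camera_spacing_inches(2.0, 24))  # 1.0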
In alternate embodiments, the spacing between cameras is greater than one inch, provided the fields of view of adjacent cameras overlap. Again, the greater the degree of overlap, the more seamless the progression between adjacent camera views. As described in greater detail below, the spacing between cameras may be further increased by generating synthetic or mixed images between contiguous cameras. Furthermore, the linear spacing between cameras becomes less important in a curved array, where the angular displacement between cameras is more important. For example, in one embodiment, the array is in a 180 degree arc, with cameras placed at five degree intervals, directed towards the center of the arc. As the radius of the arc increases, the linear distance between the cameras also increases; however, the angular displacement, five degrees, and the overlap in fields of view remain the same. Because the overlap in field of view remains, the system maintains the seamless progression from camera to adjacent camera. In one embodiment the array comprises an arc of cameras. The arc extends 110 degrees, with a radius of nine feet, and the cameras placed at approximately seven and a half degree intervals around the arc. In another embodiment the arc has a radius of fifteen feet, with the cameras located every sixteen inches.
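The arc-spacing relationship can likewise be made explicit. A minimal sketch, using the nine-foot and fifteen-foot arcs mentioned above; the 5.1 degree figure is back-computed from the sixteen-inch spacing and is an assumption, as is the function name.

    import math

    def arc_spacing_inches(radius_ft, interval_deg):
        # Linear distance along the arc between adjacent cameras placed at a
        # fixed angular interval. The angular displacement, and hence the
        # overlap in fields of view, is unchanged as the radius grows.
        return radius_ft * 12 * math.radians(interval_deg)

    print(round(arc_spacing_inches(9, 7.5), 1))   # about 14.1 inches
    print(round(arc_spacing_inches(15, 5.1), 1))  # about 16 inches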
In certain embodiments, it is useful to calibrate the cameras, aligning them in the same horizontal and vertical planes. Such calibration is accomplished in various embodiments using lasers directed from each camera, a grid superimposed on each camera view, and the like, to align each camera relative to a reference point.
Navigation Through the System
The general operation of the present embodiment will now be described with reference to Fig. 5 and continuing reference to Figure 1. As shown in step 110, the user is presented with a predetermined starting view of the environment corresponding to a starting camera. It is to be understood that the operation of the system is controlled, in part, by software residing in the server. As noted above, the system associates each camera in the array with a coordinate. Thus, the system is able to note the coordinates of the starting camera node. The camera output and, thus the corresponding view, changes only upon receiving a user input.
When the user determines that they want to move or navigate through the array, the user enters a user input through the user interface device 24. As described below, the user inputs of the present embodiment generally include moving to the right, to the left, up, or down in the array. Additionally, a user may jump to a particular camera in the array. In alternate embodiments, a subset of these or other inputs, such as forward, backward, diagonal, over, and under, are used. The user interface device, in turn, transmits the user input to the server in step 120.
Next, the server receives the user input in step 130 and proceeds to decode the input. In the present embodiment, decoding the input generally involves determining whether the user wishes to move to the right, to the left, up, or down in the array.
If the received user input does not correspond to moving backward, the server 18 proceeds to determine whether the input corresponds to moving to the user's right in the array 10. This determination is shown in step 140. If the received user input does correspond to moving to the right, the current node address is incremented along the X axis in step 150 to obtain an updated address.
If the received user input does not correspond to moving to the right in the array, the server 18 then determines whether the input corresponds to moving to the user's left in the array 10 in step 160. Upon determining that the input does correspond to moving to the left, the server 18 then decrements the current node address along the X axis to arrive at the updated address. This is shown in step 170.
If the received user input does not correspond to either moving to the right or to the left, the server 18 then determines whether the input corresponds to moving up in the array. This determination is made in step 180. If the user input corresponds to moving up, in step 190, the server 18 increments the current node address along the Z axis, thereby obtaining an updated address.
Next, the server 18 determines whether the received user input corresponds to moving down in the array 10. This determination is made in step 200. If the input does correspond to moving down in the array 10, in step 210 the server 18 decrements the current node address along the Z axis.
Lastly, in step 220 the server 18 determines whether the received user input corresponds to jumping or changing the view to a particular camera 14. As indicated in Figure 5, if the input corresponds to jumping to a particular camera 14, the server 18 changes the current node address to reflect the desired camera position. Updating the node address is shown as step 230. In an alternate embodiment, the input corresponds to jumping to a particular position in the array 10, not identified by the user as being a particular camera but by some reference to the venue, such as stage right. It is to be understood that the server 18 may decode the received user inputs in any of a number of ways, including in any order. For example, in an alternate embodiment the server 18 first determines whether the user input corresponds to up or down. In another alternate, preferred embodiment, user navigation includes moving forward, backward, to the left and right, and up and down through a three-dimensional array.
If the received user input does not correspond to any of the recognized inputs, namely to the right, to the left, up, down, or jumping to a particular position in the array 10, then in step 240 the server 18 causes a message signal to be transmitted to the user display device 24, causing a message to be displayed to the user 22 that the received input was not understood. Operation of the system 100 then continues with step 120, and the server 18 awaits receipt of the next user input.
After adjusting the current node address, either by incrementing or decrementing the node address along an axis or by jumping to a particular node address, the server 18 proceeds in step 250 to adjust the user's view. Once the view is adjusted, operation of the system 100 continues again with step 120 as the server 18 awaits receipt of the next user input.
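For illustration only, the decode-and-update cycle of Fig. 5 can be summarized in a few lines of code. This is a minimal sketch, assuming a two-axis (X, Z) node coordinate and hypothetical input tokens; it is not the disclosed implementation.

    def decode_input(node, user_input):
        # One pass of the Fig. 5 decode cycle: map a user input to an updated
        # (x, z) camera node address, or None if the input is not recognized.
        x, z = node
        if user_input == "right":          # steps 140/150: increment along X
            return (x + 1, z)
        if user_input == "left":           # steps 160/170: decrement along X
            return (x - 1, z)
        if user_input == "up":             # steps 180/190: increment along Z
            return (x, z + 1)
        if user_input == "down":           # steps 200/210: decrement along Z
            return (x, z - 1)
        if isinstance(user_input, tuple):  # steps 220/230: jump to a node
            return user_input
        return None                        # step 240: input not understood

    node = (0, 0)
    for cmd in ["right", "right", "up", (5, 2), "sideways"]:
        updated = decode_input(node, cmd)
        if updated is None:
            print("input not understood")  # message to the display device
        else:
            node = updated                 # step 250: adjust the user's view
    print(node)  # (5, 2)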
In an alternate embodiment, the server 18 continues to update the node address and adjust the view based on the received user input. For example, if the user input corresponded to "moving to the right," operation of the system 100 would continuously loop through steps 140, 150, and 250, checking for a different input. Until a different input is received, the server 18 continuously updates the view accordingly.
It is to be understood that the foregoing user inputs, namely, to the right, to the left, up, and down, are merely general descriptions of movement through the array. Although the present invention is not so limited, in the present preferred embodiment, movement in each of these general directions is further defined based upon the user input. Accordingly, Fig. 6 is a more detailed diagram of the operation of the system according to steps 140, 150, and 250 of Fig. 5. Moreover, it is to be understood that while Fig. 6 describes more detailed movement in one direction, i.e., to the right, the same detailed movement can be applied in any other direction. As illustrated, the determination of whether the user input corresponds to moving to the right actually involves several determinations. As described in detail below, these determinations include moving to the right through the array 10 at different speeds, moving to the right into a composited additional source output at different speeds, and having the user input overridden by the system 100. The present invention allows a user 22 to navigate through the array 10 at different speeds. Depending on the speed (i.e., the number of camera nodes traversed per unit of time) indicated by the user's input, such as movement of a pointing device (or other interface device), the server 18 will apply an algorithm that controls the transition between camera outputs either at critical speed (n nodes per unit of time), under critical speed (n - 1 nodes per unit of time), or over critical speed (n + 1 nodes per unit of time).
It is to be understood that speed of movement through the array 10 can alternatively be expressed as the time to switch from one camera 14 to another camera 14.
Specifically, as shown in step 140a, the server 18 makes the determination whether the user input corresponds to moving to the right at a critical speed. The critical speed is preferably a predetermined speed of movement through the array 10 set by the system operator or designer depending on the anticipated environment being viewed. Further, the critical speed depends upon various other factors, such as focal length, distance between cameras, distance between the cameras and the viewed object, and the like. The speed of movement through the array 10 is controlled by the number of cameras 14 traversed in a given time period. Thus, the movement through the array 10 at critical speed corresponds to traversing some number, "n", camera nodes per millisecond, or taking some amount of time, "s", to switch from one camera 14 to another. It is to be understood that in the same embodiment the critical speed of moving through the array 10 in one dimension need not equal the critical speed of moving through the array in another dimension. Consequently, at critical speed the server 18 increments the current node address along the X axis at n nodes per millisecond (step 150a).
In the present preferred embodiment the user traverses twenty-four cameras 14 per second. As discussed above, a movie camera records twenty-four frames per second. Analogizing between the movie camera and the present invention, at critical speed the user traverses (and the server 18 switches between) approximately twenty-four cameras 14 per second, or a camera 14 approximately every 0.04167 seconds.
As shown in Figure 6, the user 22 may advance not only at critical speed, but also at over the critical speed, as shown in step 140b, or at under the critical speed, as shown in step 140c. Where the user input "I" indicates movement through the array 10 at over the critical speed, the server 18 increments the current node address along the X axis by a unit greater than n, for example, at n + 1 nodes per millisecond. The step of incrementing the current node address at n + 1 nodes per millisecond along the X axis is shown in step 150b. Where the user input "I" indicates movement through the array 10 at under the critical speed, the server 18 proceeds to increment the current node address at a value less than n, for example, at n - 1 nodes per millisecond. This operation is shown as step 150c.
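The three speed cases of steps 150a-150c reduce to a small lookup. A minimal sketch, assuming n = 4 as in the example of Figs. 7a-7g below; the function and modifier names are hypothetical.

    def nodes_per_tick(n, speed):
        # n is the critical-speed increment; n = 4 in the example of
        # Figs. 7a-7g below.
        return {"critical": n,          # step 150a
                "over": n + 1,          # step 150b
                "under": n - 1}[speed]  # step 150c

    x = 7                                # current X coordinate
    x += nodes_per_tick(4, "critical")   # 7 -> 11, as in Fig. 7d
    print(x)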
Scaleable Arrays
The shape of the array 10 can also be electronically scaled, and the system 100 designed with a "center of gravity" that will ease a user's image path back to a "starting" or "critical position" node or ring of nodes, either when the user 22 releases control or when the system 100 is programmed to override the user's autonomy; that is to say, the active perimeter or geometry of the array 10 can be pre-configured to change at specified times or intervals in order to corral or focus attention in a situation that requires dramatic shaping. The system operator can, by real-time manipulation or via a pre-configured electronic proxy, sequentially activate or deactivate designated portions of the camera array 10. This is of particular importance in maintaining authorship and dramatic pacing in theatrical or entertainment venues, and also for implementing controls over how much freedom a user 22 will have to navigate through the array 10.
In the present embodiment, the system 100 can be programmed such that certain portions of the array 10 are unavailable to the user 22 at specified times or intervals. Thus, continuing with step 140d of Fig. 6, the server 18 makes the determination whether the user input corresponds to movement to the right through the array but is subject to a navigation control algorithm. The navigation control algorithm causes the server 18 to determine, based upon navigation control factors, whether the user's desired movement is permissible.
More specifically, the navigation control algorithm, which is programmed in the server 18, determines whether the desired movement would cause the current node address to fall outside the permissible range of node coordinates. In the present embodiment, the permissible range of node coordinates is predetermined and depends upon the time of day, as noted by the server 18. Thus, in the present embodiment, the navigation control factors include time. As will be appreciated by those skilled in the art, permissible camera nodes and control factors can be correlated in a table stored in memory.
In an alternate embodiment, the navigation control factors include time as measured from the beginning of a performance being viewed, also as noted by the server. In such an embodiment, the system operator can dictate from where in the array a user will view certain scenes. In another alternate embodiment, the navigation control factor is speed of movement through the array. For example, the faster a user 22 moves or navigates through the array, the wider the turns must be. In other alternate embodiments, the permissible range of node coordinates is not predetermined. In one embodiment, the navigation control factors and, therefore, the permissible range are dynamically controlled by the system operator, who communicates with the server via an input device.
Having determined that the user input is subject to the navigation control algorithm, the server 18 further proceeds, in step 150d, to increment the current node address along a predetermined path. By incrementing the current node address along a predetermined path, the system operator is able to corral or focus the attention of the user 22 to the particular view of the permissible cameras 14, thereby maintaining authorship and dramatic pacing in theatrical and entertainment venues.
In an alternate embodiment where the user input is subject to a navigation control algorithm, the server 18 does not move the user along a predetermined path. Instead, the server 18 merely awaits a permissible user input and holds the view at the current node. Only when the server 18 receives a user input resulting in a permissible node coordinate will the server 18 adjust the user's view.
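The navigation control check described above amounts to a table lookup keyed by a control factor. A minimal sketch, assuming time of day as the factor; the table contents, ranges, and names are illustrative assumptions only.

    import datetime

    # Illustrative table correlating a control factor (time of day) with the
    # permissible range of X coordinates, as might be stored in memory.
    PERMISSIBLE = [
        (datetime.time(0, 0), datetime.time(12, 0), range(0, 50)),
        (datetime.time(12, 0), datetime.time(23, 59), range(25, 100)),
    ]

    def permit(desired_x, now):
        # Return the desired node if permissible; otherwise None, in which
        # case the server holds the view (or, in the other embodiment above,
        # advances along a predetermined path instead).
        for start, end, allowed in PERMISSIBLE:
            if start <= now < end and desired_x in allowed:
                return desired_x
        return None

    print(permit(30, datetime.time(14, 0)))  # 30: inside the afternoon range
    print(permit(10, datetime.time(14, 0)))  # None: view is held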
Additional Source Output
In addition to moving through the array 10, the user 22 may, at predetermined locations in the array 10, choose to leave the real world environment being viewed. More specifically, additional source outputs, such as computer graphic imagery, virtual world imagery, applets, film clips, and other artificial and real camera outputs, are made available to the user 22. In one embodiment, the additional source output is composited with the view of the real environment. In an alternate embodiment, the user's view transfers completely from the real environment to that offered by the additional source output.
More specifically, the additional source output is stored (preferably in digital form) in the electronic storage device 20. Upon the user 22 inputting a desire to view the additional source output, the server 18 transmits the additional source output to the user interface/display device 24. In the present embodiment, the server 18 simply transmits the additional source output to the user display device 24. In an alternate embodiment, the server 18 first composites the additional source output with the camera output and then transmits the composited signal to the user interface/display device 24.
As shown in step 140e, the server 18 makes the determination whether the user input corresponds to moving in the array into the source output. If the user 22 decides to move into the additional source output, the server 18 adjusts the view by substituting the additional source output for the updated camera output identified in either of steps 150a-d.
Once the current node address is updated in either of steps 150a-d, the server 18 proceeds to adjust the user's view in step 250. When adjusting the view, the server 18 "mixes" the existing or current camera output being displayed with the output of the camera 14 identified by the updated camera node address. Mixing the outputs is achieved differently in alternate embodiments of the invention. In the present embodiment, mixing the outputs involves electronically switching at a particular speed from the existing camera output to the output of the camera 14 having the new current node address.
It is to be understood that in this and other preferred embodiments disclosed herein, the camera outputs are synchronized. As is well known in the art, a synchronizing signal from a "sync generator" is supplied to the cameras and/or the processors capturing the camera output. The sync generator may take the form of those used in video editing and may comprise, in alternate embodiments, part of the server, the hub, and/or a separate component coupled to the array. As described above, at critical speed, the server 18 switches camera outputs approximately at a rate of 24 per second, or one every 0.04167 seconds. If the user 22 is moving through the array 10 at under the critical speed, the outputs of the intermediate cameras 14 are each displayed for a relatively longer duration than if the user is moving at the critical speed. Similarly, each output is displayed for a relatively shorter duration when a user navigates at over the critical speed. In other words, the server 18 adjusts the switching speed based on the speed of the movement through the array 10.
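The relationship between navigation speed and switching duration can be stated explicitly. A minimal sketch; the function name is illustrative.

    def dwell_seconds(cameras_per_second):
        # Duration each intermediate camera output is displayed before the
        # server switches to the next output.
        return 1.0 / cameras_per_second

    print(round(dwell_seconds(24), 5))  # 0.04167 at critical speed
    print(round(dwell_seconds(12), 5))  # longer dwell under critical speed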
Of course, it is to be understood that in a simplified embodiment of the present invention, the user may navigate at only the critical speed.
In another alternate embodiment, mixing the outputs is achieved by compositing the existing or current output and the updated camera node output. In yet another embodiment, mixing involves dissolving the existing view into the new view. In still another alternate embodiment, mixing the outputs includes adjusting the frame refresh rate of the user display device. Additionally, based on speed of movement through the array, the server may add motion blur to convey the realistic sense of speed.
In yet another alternate embodiment, the server causes a black screen to be viewed instantaneously between camera views. Such an embodiment is analogous to blank film between frames in a movie reel. Furthermore, although not always advantageous, such black screens reduce the physiologic "carrying over" of one view into a subsequent view.
It is to be understood that the user inputs corresponding to movements through the array at different speeds may include different keystrokes on a keypad, different positions of a joystick, positioning a joystick in a given position for a predetermined length of time, and the like. Similarly, the decision to move into an additional source output may be indicated by a particular keystroke, joystick movement, or the like.
In another embodiment, mixing may be accomplished by "mosaicing" the outputs of the intermediate cameras 14. U.S. Pat. No. 5,649,032 entitled System For Automatically Aligning Images To Form A Mosaic Image to Peter J. Burt et al. discloses a system and method for generating a mosaic from a plurality of images and is hereby incorporated by reference. The server 18 automatically aligns one camera output to another camera output, a camera output to another mosaic (generated from previously occurring camera output) such that the output can be added to the mosaic, or an existing mosaic to a camera output.
Once the mosaic alignment is complete, the present embodiment utilizes a mosaic composition process to construct (or update) a mosaic. The mosaic composition comprises a selection process and a combination process. The selection process automatically selects outputs for incorporation into the mosaic and may include masking and cropping functions to select the region of interest in a mosaic. Once the selection process selects which output(s) are to be included in the mosaic, the combination process combines the various outputs to form the mosaic. The combination process applies various output processing techniques, such as merging, fusing, filtering, output enhancement, and the like, to achieve a seamless combination of the outputs. The resulting mosaic is a smooth view that combines the constituent outputs such that temporal and spatial information redundancy is minimized in the mosaic. In one embodiment of the present invention, the mosaic may be formed as the user moves through the system and the output image displayed close to real time. In another embodiment, the system may form the mosaic from a predetermined number of outputs or during a predetermined time interval, and then display the images pursuant to the user's navigation through the environment.
In yet another embodiment, the server 18 enables the output to be mixed by a "tweening" process. One example of the tweening process is disclosed in U.S. Pat. No. 5,259,040 entitled Method For Determining Sensor Motion And Scene Structure And Image Processing System Therefor to Keith J. Hanna, herein incorporated by reference. Tweening enables the server 18 to process the structure of a view from two or more camera outputs of the view.
Applying the Hanna patent to the telepresence method/system herein, tweening is now described. The server monitors the movement among the intermediate cameras 14 through a scene using local scene characteristics such as brightness derivatives of a pair of camera outputs. A global camera output movement constraint is combined with a local scene characteristic constancy constraint to relate local surface structures with the global camera output movement model and local scene characteristics. The method for determining a model for global camera output movement through a scene and a scene structure model of the scene from two or more outputs of the scene at a given image resolution comprises the following steps:
(a) setting initial estimates of local scene models and a global camera output movement model;
(b) determining a new value of one of the models by minimizing the difference between the measured error in the outputs and the error predicted by the model;
(c) resetting the initial estimates of the local scene models and the image sensor motion model using the new value of one of the models determined in step (b);
(d) determining a new value of the second of the models using the estimates of the models determined in step (b) by minimizing the difference between the measured error in the outputs and the error predicted by the model;
(e) warping one of the outputs towards the other output using the current estimates of the models at the given image resolution; and
(f) repeating steps (b), (c), (d) and (e) until the differences between the new values of the models and the values determined in the previous iteration are less than a certain value or until a fixed number of iterations have occurred.
It should be noted that where the Hanna patent effectuates the tweening process by detecting the motion of an image sensor (e.g., a video camera), an embodiment of the present invention monitors the user movement among live cameras or storage nodes.
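The iterative structure of steps (a) through (f) can be sketched in code. The scalar "models" and update rules below are crude stand-ins for the image-processing quantities of the Hanna patent, intended only to show the alternating refinement loop; none of the names or formulas are taken from that patent.

    def tween_models(measured_error, tol=1e-3, max_iters=50):
        # Steps (a)-(f) as alternating refinement: re-estimate one model,
        # reset, re-estimate the other, "warp", and repeat to convergence.
        scene, motion = 0.0, 0.0                              # step (a)
        for _ in range(max_iters):                            # step (f)
            scene = 0.5 * (scene + measured_error - motion)   # steps (b), (c)
            motion = 0.5 * (motion + measured_error - scene)  # step (d)
            predicted = scene + motion                        # step (e) stand-in
            if abs(measured_error - predicted) < tol:
                break
        return scene, motion

    print(tween_models(1.0))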
As will be appreciated by those skilled in the art based on the present disclosure, other existing techniques may be applied to the mixing or tweening of outputs in any of the embodiments based on the teachings herein. Such other techniques are described in U.S. Patents U.S. 5,649,032, for System For Automatically Aligning Images To Form A Mosaic Image; U.S. 5,629,988, for System And Method For Electronic Image Stabilization; U.S. 5,581,629, for Method For Estimating The Location Of An Image Target Region From Tracked Multiple Image Landmark Regions; U.S. 5,488,674, for Method For Fusing Images And Apparatus Therefor; and U.S. 5,067,014, for Three-Frame Technique For Analyzing Two Motions In Successive Image Frames Dynamically, each of which is hereby incorporated by reference.
In an alternate embodiment, although not always necessary, to ensure a seamless progression of views, the server 18 also transmits to the user display device 24 outputs from some or all of the intermediate cameras, namely those located between the current camera node and the updated camera node. Such an embodiment will now be described with reference to Figs. 7a-7g. Specifically, Fig. 7a illustrates a curvilinear portion of an array 10 that extends along the X axis or to the left and right from the user's perspective. Thus, the coordinates that the server 18 associates with the cameras 14 differ only in the X coordinate. More specifically, for purposes of the present example, the cameras 14 can be considered sequentially numbered, starting with the left-most camera 14 being the first, i.e., number "1". The X coordinate of each camera 14 is equal to the camera's position in the array. For illustrative purposes, particular cameras will be designated 14-X, where X equals the camera's position in the array 10 and, thus, its associated X coordinate.
In general, Figs. 7a-7g illustrate possible user movement through the array 10. The environment to be viewed includes three objects 702, 704, 706, the first and second of which include numbered surfaces. As will be apparent, these numbered surfaces allow a better appreciation of the change in user perspective.
In Fig. 7a, six cameras 14-2, 14-7, 14-11, 14-14, 14-20, 14-23 of the array 10 are specifically identified. The boundaries of each camera's view are identified by the pair of lines 14-2a, 14-7a, 14-11a, 14-14a, 14-20a, 14-23a radiating from each identified camera 14-2, 14-7, 14-11, 14-14, 14-20, 14-23, respectively. As described below, in the present example the user 22 navigates through the array 10 along the X axis such that the images or views of the environment are those corresponding to the identified cameras 14-2, 14-7, 14-11, 14-14, 14-20, 14-23.
The present example provides the user 22 with the starting view from camera 14-2. This view is illustrated in Fig. 7b. The user 22, desiring to have a better view of the object 702, pushes the "7" key on the keyboard. This user input is transmitted to and interpreted by the server 18.
The server 18 has been programmed to recognize the "7" key as corresponding to moving or jumping through the array to camera 14-7. The server 18 therefore changes the X coordinate of the current camera node address to 7, selects the output of camera 14-7, and adjusts the view or image sent to the user 22. Adjusting the view, as discussed above, involves mixing the outputs of the current and updated camera nodes. Mixing the outputs, in turn, involves switching intermediate camera outputs into the view to achieve the seamless progression of the discrete views of cameras 14-2 through 14-7, which gives the user 22 the look and feel of moving around the viewed object. The user 22 now has another view of the first object 702. The view from camera 14-7 is shown in Fig. 7c. As noted above, if the jump in camera nodes is greater than a predetermined limit, the server 18 would omit some or all of the intermediate outputs.
Pressing the "right arrow" key on the keyboard, the user 22 indicates to the system 100 a desire to navigate to the right at critical speed. The server 18 receives and interprets this user input as indicating such and increments the current camera node address by n=4.
Consequently, the updated camera node address is 14-11. The server 18 causes the mixing of the output of camera 14-11 with that of camera 14-7. Again, this includes switching into the view the outputs of the intermediate cameras (i.e., 14-8, 14-9, and 14-10) to give the user 22 the look and feel of navigating around the viewed object. The user 22 is thus presented with the view from camera 14-11, as shown in Fig. 7d.
Still interested in the first object 702, the user 22 enters a user input, for example, "alt-right arrow," indicating a desire to move to the right at less than critical speed. Accordingly, the server 18 increments the updated camera node address by n-1 nodes, namely 3 in the present example, to camera 14-14. The outputs from cameras 14-11 and 14-14 are mixed, and the user 22 is presented with a seamless view associated with cameras 14-11 through 14-14. Fig. 7e illustrates the resulting view of camera 14-14. With little to see immediately after the first object 702, the user 22 enters a user input such as "shift-right arrow," indicating a desire to move quickly through the array 10, i.e., at over the critical speed. The server 18 interprets the user input and increments the current node address by n+2, or 6 in the present example. The updated node address thus corresponds to camera 14-20. The server 18 mixes the outputs of cameras 14-14 and 14-20, which includes switching into the view the outputs of the intermediate cameras 14-15 through 14-19. The resulting view of camera 14-20 is displayed to the user 22. As shown in Fig. 7f, the user 22 now views the second object 704.
Becoming interested in the third object 706, the user 22 desires to move slowly through the array 10. Accordingly, the user 22 enters "alt-right arrow" to indicate moving to the right at below critical speed. Once the server 18 interprets the received user input, it updates the current camera node address along the X axis by 3 to camera 14-23. The server 18 then mixes the outputs of cameras 14-20 and 14-23, thereby providing the user 22 with a seamless progression of views through camera 14-23. The resulting view 14-23a is illustrated in Fig. 7g.
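The navigation sequence of Figs. 7b-7g can be replayed numerically. A minimal sketch, with the critical-speed increment n = 4 taken from the example above; the input tokens are hypothetical.

    n, x = 4, 2                  # start at camera 14-2 (Fig. 7b)
    steps = [("jump", 7),        # "7" key: jump to camera 14-7 (Fig. 7c)
             ("right", n),       # critical speed: 14-11 (Fig. 7d)
             ("right", n - 1),   # under critical speed: 14-14 (Fig. 7e)
             ("right", n + 2),   # over critical speed: 14-20 (Fig. 7f)
             ("right", n - 1)]   # under critical speed: 14-23 (Fig. 7g)
    for kind, amount in steps:
        x = amount if kind == "jump" else x + amount
        print(f"camera 14-{x}")  # intermediate outputs are mixed in between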
Other Data Devices

It is to be understood that devices other than cameras may be interspersed in the array. These other devices, such as motion sensors and microphones, provide data to the server(s) for processing. For example, in alternate embodiments output from motion sensors or microphones are fed to the server(s) and used to scale the array. More specifically, permissible camera nodes (as defined in a table stored in memory) are those near the sensor or microphone having a desired output, e.g., where there is motion or sound. As such, navigation control factors include output from other such devices. Alternatively, the output from the sensors or microphones are provided to the user. An alternate embodiment in which the array of cameras includes multiple microphones interspersed among the viewed environment and the cameras will now be described with reference to Fig. 8. The system 800 generally includes an array of cameras 802 coupled to a server 804, which, in turn, is coupled to one or more user interface and display devices 806 and an electronic storage device 808. A hub 810 collects and transfers the outputs from the array 802 to the server 804. More specifically, the array 802 comprises modular rails 812 that are interconnected. Each rail 812 carries multiple cameras 814 and a microphone 816 centrally located on the rail 812. Additionally, the system 800 includes microphones 818 that are physically separate from the array 802. The outputs of both the cameras 814 and microphones 816, 818 are coupled to the server 804 for processing.
In general, operation of the system 800 proceeds as described with respect to system 100 of Figures 1-2d and 5-6. Beyond the operation of the previously described system 100, however, the server 804 receives the sound output from the microphones 816, 818 and, as with the camera output, selectively transmits sound output to the user. As the server 804 updates the current camera node address and changes the user's view, it also changes the sound output transmitted to the user. In the present embodiment, the server 804 has stored in memory an associated range of camera nodes with a given microphone, namely the cameras 814 on each rail 812 are associated with the microphone 816 on that particular rail 812. In the event a user attempts to navigate beyond the end of the array 802, the server 804 determines the camera navigation is impermissible and instead updates the microphone node output to that of the microphone 818 adjacent to the array 802.
In an alternate embodiment, the server 804 might include a database in which camera nodes in a particular area are associated with a given microphone. For example, a rectangular volume defined by the (X, Y, Z) coordinates (0,0,0), (10,0,0), (10,5,0), (0,5,0), (0,0,5), (10,0,5), (10,5,5) and (0,5,5) is associated with a given microphone. It is to be understood that selecting one of the series of microphones based on the user's position (or view) in the array provides the user with a sound perspective of the environment that coincides with the visual perspective.
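A minimal sketch of such a database, assuming a single illustrative volume and a fallback microphone; the identifiers and the fallback behavior at the array's edge are assumptions for illustration.

    # Illustrative table associating a rectangular volume of camera node
    # coordinates with a microphone.
    MIC_VOLUMES = [
        ((0, 10), (0, 5), (0, 5), "mic-816"),
    ]

    def microphone_for(node):
        # Select the microphone whose volume contains the camera node, so the
        # sound perspective coincides with the visual perspective.
        x, y, z = node
        for (x0, x1), (y0, y1), (z0, z1), mic in MIC_VOLUMES:
            if x0 <= x <= x1 and y0 <= y <= y1 and z0 <= z <= z1:
                return mic
        return "mic-818"  # fall back to a microphone adjacent to the array

    print(microphone_for((3, 2, 1)))   # mic-816
    print(microphone_for((42, 0, 0)))  # mic-818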
It is to be understood that the server of the embodiments discussed above may take any of a number of known configurations. Two examples of server configurations suitable for use with the present invention will be described with reference to Figures 9 and 10. Turning first to Figure 9, the server 902, electronic storage device 20, array 10, users (1, 2, 3, . . . N) 22-1 - 22-N, and associated user interface/display devices 24-1 - 24-N are shown therein.
The server 902 includes, among other components, a processing means in the form of one or more central processing units (CPU) 904 coupled to associated read only memory (ROM) 906 and a random access memory (RAM) 908. In general, ROM 906 is for storing the program that dictates the operation of the server 902, and the RAM 908 is for storing variables and values used by the CPU 904 during operation. Also coupled to the CPU 904 are the user interface/display devices 24. It is to be understood that the CPU may, in alternate embodiments, comprise several processing units, each performing a discrete function.
Coupled to both the CPU 904 and the electronic storage device 20 is a memory controller 910. The memory controller 910, under direction of the CPU 904, controls accesses (reads and writes) to the storage device 20. Although the memory controller 910 is shown as part of the server 902, it is to be understood that it may reside in the storage device 20.
During operation, the CPU 904 receives camera outputs from the array 10 via bus 912. As described above, the CPU 904 mixes the camera outputs for display on the user interface/display device 24. Which outputs are mixed depends on the view selected by each user 22. Specifically, each user interface/display device 24 transmits across bus 914 the user inputs that define the view to be displayed. Once the CPU 904 mixes the appropriate outputs, it transmits the resulting output to the user interface/display device 24 via bus 916. As shown, in the present embodiment, each user 22 is independently coupled to the server 902.
The bus 912 also carries the camera outputs to the storage device 20 for storage. When storing the camera outputs, the CPU 904 directs the memory controller 910 to store the output of each camera 14 in particular locations of memory in the storage device 20.
When the image to be displayed has previously been stored in the storage device 20, the CPU 904 causes the memory controller 910 to access the storage device 20 to retrieve the appropriate camera output. The output is thus transmitted to the CPU 904 via bus 918 where it is mixed. Bus 918 also carries additional source output to the CPU 904 for transmission to the users 22. As with outputs received directly from the array 10, the CPU 904 mixes these outputs and transmits the appropriate view to the user interface/display device 24.
Figure 10 shows a server configuration according to an alternate embodiment of the present invention. As shown therein, the server 1002 generally comprises a control central processing unit (CPU) 1004, a mixing CPU 1006 associated with each user 22, and a memory controller 1008. The control CPU 1004 has associated ROM 1010 and RAM 1012. Similarly, each mixing CPU 1006 has associated ROM 1014 and RAM 1016.
To achieve the functionality described above, the camera outputs from the array 10 are coupled to each of the mixing CPUs 1 through N 1006-1, 1006-N via bus 1018. During operation, each user 22 enters inputs in the interface/display device 24 for transmission (via bus 1020) to the control CPU 1004. The control CPU 1004 interprets the inputs and, via buses 1022-1, 1022-N, transmits control signals to the mixing CPUs 1006-1, 1006-N instructing them which camera outputs received on bus 1018 to mix. As the name implies, the mixing CPUs 1006-1, 1006-N mix the outputs in order to generate the appropriate view and transmit the resulting view via buses 1024-1, 1024-N to the user interface/display devices 24-1, 24-N. In an alternate related embodiment, each mixing CPU 1006 multiplexes outputs to more than one user 22. Indications of which outputs are to be mixed and transmitted to each user 22 come from the control CPU 1004.
The bus 1018 couples the camera outputs not only to the mixing CPUs 1006-1, 1006-N, but also to the storage device 20. Under control of the memory controller 1008, which in turn is controlled by the control CPU 1004, the storage device 20 stores the camera outputs in known storage locations. Where user inputs to the control CPU 1004 indicate a user's 22 desire to view stored images, the control CPU 1004 causes the memory controller 1008 to retrieve the appropriate images from the storage device 20. Such images are retrieved into the mixing CPUs 1006 via bus 1026. Additional source output is also retrieved to the mixing CPUs 1006-1, 1006-N via bus 1026. The control CPU 1004 also passes control signals to the mixing CPUs 1006-1, 1006-N to indicate which outputs are to be mixed and displayed.
In an embodiment analogous to that of Figure 10, the outputs of cameras are provided to networked (e.g., via an Ethernet) personal computers, for example one capture computer per pair of adjacent cameras and one control computer. In one embodiment, where analog video cameras are used, each capture computer also includes two video capture boards, one per camera coupled to the capture computer. Each capture computer also provides the mixing functionality, such as tweening, between the cameras coupled thereto. Furthermore, the control computer causes each capture computer to receive the output from a camera adjacent to one directly coupled to the capture computer so that the capture computer may mix the outputs of the camera directly coupled to the capture computer and the adjacent camera. For example, if one capture computer is coupled to cameras "1" and "2", and a second capture computer is coupled to cameras "3" and "4", then the second capture computer would also receive the output of camera "2" so that such output could be mixed with that of adjacent camera "3". The control computer coordinates the operation of the capture computers and other components as described herein.
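The pairing of cameras to capture computers, including the extra adjacent output, can be sketched as follows; the function and machine names are illustrative assumptions, not part of the disclosure.

    def capture_assignments(num_cameras):
        # Each capture computer owns a pair of adjacent cameras and also
        # receives the last camera of the previous pair, so that adjacent
        # outputs can be mixed (tweened) on one machine.
        assignments = []
        for i in range(0, num_cameras, 2):
            own = [i + 1, i + 2]           # cameras directly coupled
            extra = [i] if i > 0 else []   # adjacent camera from prior pair
            assignments.append((f"capture-{i // 2 + 1}", own, extra))
        return assignments

    for name, own, extra in capture_assignments(6):
        print(name, "owns", own, "also receives", extra)
    # capture-2 owns [3, 4] and also receives [2], matching the example above.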
Stereoscopic Views

It is to be understood that it is within the scope of the present invention to employ stereoscopic views of the environment. To achieve the stereoscopic view, the system retrieves from the array (or the electronic storage device) and simultaneously transmits to the user at least portions of outputs from two cameras. The server processing element mixes these camera outputs to achieve a stereoscopic output. Each view provided to the user is based on such a stereoscopic output. In one stereoscopic embodiment, the outputs from two adjacent cameras in the array are used to produce one stereoscopic view. Using the notation of Figs. 7a-7g, one view is the stereoscopic view from cameras 14-1 and 14-2. The next view is based on the stereoscopic output of cameras 14-2 and 14-3 or two other cameras. Thus, in such an embodiment, the user is provided the added feature of a stereoscopic seamless view of the environment.
Multiple Users
As described above, the present invention allows multiple users to simultaneously navigate through the array independently of each other. To accommodate multiple users, the systems described above distinguish between inputs from the multiple users and select a separate camera output appropriate to each user's inputs. In one such embodiment, the server tracks the current camera node address associated with each user by storing each node address in a particular memory location associated with that user. Similarly, each user's input is differentiated and identified as being associated with the particular memory location with the use of message tags appended to the user inputs by the corresponding user interface device. In an alternate embodiment, two or more users may choose to be linked, thereby moving in tandem and having the same view of the environment. In such an embodiment, each user identifies another user, by his/her code, to serve as a "guide". In operation, the server provides the outputs and views selected by the guide user to both the guide and the other user selecting the guide. Another user input causes the server to unlink the users, thereby allowing each user to control his/her own movement through the array.
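A minimal sketch of per-user state with guide linking, assuming node addresses keyed by user code; the class and method names are illustrative assumptions.

    class Sessions:
        def __init__(self):
            self.node = {}    # user code -> current camera node address
            self.guide = {}   # user code -> code of the guide being followed

        def view_for(self, user):
            # A linked user sees the view selected by his/her guide.
            leader = self.guide.get(user, user)
            return self.node[leader]

        def link(self, user, guide):
            self.guide[user] = guide

        def unlink(self, user):
            self.guide.pop(user, None)  # resume independent movement

    s = Sessions()
    s.node["A"], s.node["B"] = (4, 0), (9, 2)
    s.link("B", "A")
    print(s.view_for("B"))  # (4, 0): B moves in tandem with guide A
    s.unlink("B")
    print(s.view_for("B"))  # (9, 2): B controls his/her own movement again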
Multiple Arrays
In certain applications, a user may also wish to navigate forward and backward through the environment, thereby moving closer to or further away from an object. Although it is within the scope of the present invention to use cameras with zoom capability, the use of a zoom lens would entail robotic control by a single user and preclude the simultaneous viewing of different fields of view at that camera node by multiple users. One embodiment that solves this problem of preventing multiple users from simultaneously viewing different fields of view from the same camera position in the array entails creating different field of view options at a single camera position. In alternate embodiments the different field of view options are created with clusters of cameras at each position in the array, each camera having a different field of view lens but substantially the same vertex in the array. In one embodiment, the cameras at the same position have essentially the same vertex by employing beam splitters and/or mirrors to enable the different field of view cameras to be physically positioned away from the vertex in the array, yet have each camera field of view from the same perspective or vertex. Where multiple cameras are used at a particular node in an array, each camera and its associated output has an address and a storage location where the camera outputs are being stored, and is accessible based on user inputs indicating which field of view or relative field of view (zoom in or zoom out) the user desires to receive. Additionally, it is to be understood that the use of such multiple cameras at a given node or location in the array may be used in any of the embodiments described herein.

Simply zooming towards an object, while simplifying the background and recomposing the scene, does not provide the visual cues, such as changing perspective lines and changing shadows and reflections, that actually moving forward through the environment provides. One such embodiment in which users can move dimensionally forward and backward through the environment with a changing image point perspective will now be described with respect to Fig. 11 and continuing reference to Fig. 1. As will be understood by those skilled in the art, the arrays described with reference to Fig. 11 may be used with any server, storage device and user terminals described herein.

Fig. 11 illustrates a top plan view of another embodiment enabling the user to move left, right, up, down, forward or backwards through the environment. A plurality of cylindrical arrays (12-1 through 12-n) of differing diameters comprising a series of cameras 14 may be situated around an environment comprising one or more objects 1100, one cylindrical array at a time. Cameras 14 situated around the object(s) 1100 are positioned along an X and Z coordinate system. Accordingly, an array 12 may comprise a plurality of rings of the same circumference positioned at different positions (heights) throughout the Z axis to form a cylinder of cameras 14 around the object(s) 1100. This also allows each camera in each array 12 to have an associated, unique storage node address comprising an X and Z coordinate, i.e., Array1(X, Z). In the present embodiment, for example, a coordinate value corresponding to an axis of a particular camera represents the number of camera positions along that axis the particular camera is displaced from a reference camera.
In the present embodiment, from the user's perspective, the X axis runs around the perimeter of an array 12, and the Z axis runs down and up. Each storage node is associated with a camera view identified by its X, Z coordinate.
As described above, the outputs of the cameras 14 are coupled to one or more servers for gathering and transmitting the outputs to the server 18. In one embodiment, because the environment is static, each camera requires only one storage location. The camera output may be stored in a logical arrangement, such as a matrix of n arrays, wherein each array has a plurality of (X, Z) coordinates. In one embodiment, the node addresses may comprise a specific coordinate within an array, i.e., Array1(Xn, Zn), Array2(Xn, Zn) through Arrayn(Xn, Zn). As described below, users can navigate the stored images in much the same manner as the user may navigate through an environment using live camera images.
The general operation of one embodiment of inputting images in storage device 20 for transmission to a user will now be described with reference to Fig. 12 and continuing reference to Fig. 11. As shown in step 1210, a cylindrical array 12-1 is situated around the object(s) located in an environment 1100. The view of each camera 14 is transmitted to the server 18 in step 1220. Next, in step 1230, the electronic storage device 20 of the server 18 stores the output of each camera 14 at the storage node address associated with that camera 14. Storage of the images may be effectuated serially, from one camera 14 at a time within the array 12, or by simultaneous transmission of the image data from all of the cameras 14 of each array 12. Once the output for each camera 14 of array 12-1 is stored, cylindrical array 12-1 is removed from the environment (step 1240). In step 1250, a determination is made as to the availability of additional cylindrical arrays 12 of diameters differing from those already situated. If additional cylindrical arrays 12 are desired, the process repeats beginning with step 1210. When no additional arrays 12 are available for situating around the environment, the process of inputting images into the storage device 20 is complete (step 1260). At the end of the process, a matrix of addressable stored images exists.
Upon storing all of the outputs associated with the arrays 12-1 through 12-n, a user may navigate through the environment. Navigation is effectuated by accessing the input of the storage nodes by a user interface device 24. In the present embodiment, the user inputs generally include moving around the environment or object 1100 by moving to the left or right, moving higher or lower along the Z axis, moving through the environment closer to or further from the object 1100, or some combination of moving around and through the environment. For example, a user may access the image stored in the node address Array3(0,0) to view an object from the camera previously located at coordinate (0,0) of Array3. The user may move directly forward, and therefore closer to the object 1100, by accessing the image stored in Array2(0,0) and then Array1(0,0). To move further away from the object and to the right and up, the user may move from the image stored in node address Array1(0,0) and access the images stored in node address Array2(1,1), followed by accessing the image stored in node address Array3(2,2), and so on. A user may, of course, move among arrays and/or coordinates by any increments, changing the point perspective of the environment with each node. Additionally, a user may jump to a particular camera view of the environment. Thus, a user may move throughout the environment in a manner similar to that described above with respect to accessing output of live cameras. This embodiment, however, allows the user to access images that are stored in storage nodes as opposed to accessing live cameras. Moreover, this embodiment provides a convenient system and method to allow a user to move forward and backward in an environment. It should be noted that although each storage node is associated with a camera view identified by its X, Z coordinate of a particular array, other methods of identifying camera views and storage nodes can be used. For example, other coordinate systems, such as those noting angular displacement from a fixed reference point as well as coordinate systems that indicate relative displacement from the current camera node, may be used. It should also be understood that the camera arrays 12 may take shapes other than cylindrical.
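A minimal sketch of the matrix of addressable stored images and the navigation path just described; the dimensions and placeholder values are illustrative assumptions.

    # One storage node per camera, addressed as array index plus (X, Z).
    # The string values are placeholders standing in for stored output.
    images = {(a, x, z): f"view-Array{a}({x},{z})"
              for a in range(1, 4)   # three concentric cylindrical arrays
              for x in range(8)      # positions around the perimeter
              for z in range(3)}     # rings stacked along the Z axis

    path = [(3, 0, 0), (2, 0, 0), (1, 0, 0),  # moving directly forward
            (2, 1, 1), (3, 2, 2)]             # back out, to the right and up
    for node in path:
        print(images[node])  # successive views, mixed for seamless movement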
Moreover, it is not essential, although often advantageous, that the camera arrays 12 surround the entire environment.
It is to be understood that the foregoing user inputs, namely, move clockwise, move counter-clockwise, up, down, closer to the environment, and further from the environment, are merely general descriptions of movement through the environment. Although the present invention is not so limited, in the present preferred embodiment, movement in each of these general directions is further defined based upon the user input. Moreover, the output generated by the server to the user may be mixed when moving among adjacent storage nodes associated with environment views (along the X axis, Z axis, or among juxtaposed arrays) to generate seamless movement throughout the environment. Mixing may be accomplished by, but is not limited to, the processes described above. As indicated above, an array according to the present invention may be used to capture virtually any image for any purpose. One particular use of one embodiment of the present invention is to compare multiple images. As will be appreciated from the following description, when used to compare images, the present invention can allow for a comparison from any one of multiple point perspectives at any given reference point of time. Exemplary embodiments, which will now be described with reference to Figs. 13 and 14, provide a training aid that compares the images of the swings of two golfers: a training professional and a player/trainee.
As shown in Fig. 13, the array is generally in the form of a geodesic dome 1305 having an opening for a golfer to enter and hit a ball. More specifically, the array extends approximately 270° in a horizontal band, 180° in a vertical band from side to side and 150° in a vertical band from the rear at the ground, forward towards the opening.
The array not only includes cameras 1310, but also lights 1315, a greenscreen background covering 1320, a greenscreen background flooring 1325, and a supporting rail structure 1330. As is known in the art, other color backgrounds can be used. The plurality of cameras 1310 populate the interior of the dome 1305 supported by the greenscreen 1320 and/or rails 1330. As described in greater detail below, the green covering 1320 and flooring 1325 allow for easier processing of the images.
As also described in detail below, the cameras 1310 can be logically organized in rows; for example, the lowest row 1335 can be designated row0, the second row from the bottom 1340 can be designated row1, and the third row from the bottom 1345 can be designated row2. Additionally, the cameras 1310 in each row can be logically numbered, for example, sequentially from the right of the array, clockwise to the left. As described below, such logical arrangement facilitates processing of images and navigation through the array. In alternate embodiments, the cameras 1310 are mounted in configurations other than rows, such as geometric or random patterns, preferably so that the image captured by one camera 1310 overlaps the image captured by each adjacent camera 1310. Although only the array is depicted in Figure 13, it is to be understood that the array can be coupled to one or more processing elements, storage devices, user interface devices, and other components according to any one of the configurations described above with reference to Figures 1 and 8-10 and equivalents thereto. In the present embodiment, the images of the professional's swing are stored in one storage device and the images of the trainee's swing are stored in a second storage device. In alternate embodiments, the images of the two swings are stored in different layers, levels or partitions within a single storage device, such as a fluorescent multi-layer disk. Each of the two storage devices is coupled in parallel to, and can be accessed in parallel by, the server. Furthermore, the cameras 1310 are coupled to the electronic storage devices so the images may be stored, and the server is coupled to the storage devices so images can be retrieved from storage, processed and restored in the storage devices. A user interface device is also coupled to the server so the images can be transmitted to the user.
The capturing and storing of the images will now be described with reference to Figure 14. Once one of the golfers enters the dome 1305 and the system is activated, the system captures the image of the golfer's swing (step 1405). In the present embodiment, each camera 1310 operates at approximately thirty frames per second. In an alternate embodiment, the cameras 1310 capture the image at sixty frames per second. The image from each camera 1310 and for each frame is then processed to separate the image from the background. More specifically, the server (or a dedicated processor) mattes out the image from the solid background 1320 (step 1410). Such a process is generally known as bluescreening, matting, keying or chromakeying out the image and can be performed by any of a number of known processes, including those provided by the Ultimatte Corporation under the trade name ULTIMATTE, and by PixelCom J. V. under the trade name PRIMATTE. As will be appreciated by those skilled in the art, matting out the image is preferable for better display of the images. The server then digitally stores the matted or keyed-out image of each frame from each camera 1310 in an electronic storage device (step 1415).
Although not required, in the present embodiment the outputs (or images) captured in each frame of each camera 1310 are temporarily stored. The server then processes the temporarily stored frames to matte/key out the golfer's image from each frame and stores the matted/keyed-out image, preferably writing over the original (non-keyed) frames. In an alternate embodiment, the server processes the frames, keying out the golfer's image, in real time. In such an embodiment, no temporary image need be stored. In another embodiment, no matting process is performed. Once the professional golfer's swing is captured, the system operation is repeated to capture and store the trainee's swing (step 1420).
Figure 15 depicts one example of a logical representation and addressing scheme of one golfer's swing as stored in one storage device without storing any mixed images. Taking thirty frames per second and the average golf swing lasting less than three seconds, approximately ninety frames will be stored for each camera. As shown, each frame from each camera is stored at a unique location or address in the storage device. In this embodiment, the first and second (right-most) digits of the address indicate frame number, the third and fourth digits indicate camera number, and the fifth and sixth digits indicate row number. Thus, using the notation rowx(y) to denote the yth camera of row x and the notation framez to denote the zth frame taken, the first frame, frame1, taken by the first camera in the first row, row1(1), is stored at address 01 01 01. Similarly, the third frame, frame3, taken by the second camera in the second row, row2(2), is stored at address 02 02 03. It is to be understood that essentially any addressing scheme may be used for storing camera outputs, so long as the software playing back the images is capable of identifying the appropriate camera output in response to user inputs. In alternate embodiments the addresses can be represented in any notation, such as hexadecimal or binary, and the addresses may or may not be contiguous. Although not required, in the present embodiment the same logical arrangement is used for the storage of the second golfer's swing in the second storage device.
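By way of illustration, the addressing scheme of Figure 15 may be expressed in a few lines of Python; the helper names address() and parse() are introduced here for exposition only.

    def address(row, camera, frame):
        """Pack row, camera and frame numbers into the six-digit address of
        Figure 15: the two right-most digits carry the frame number, the
        middle two the camera number, and the left-most two the row number."""
        return f"{row:02d}{camera:02d}{frame:02d}"

    def parse(addr):
        """Recover (row, camera, frame) from a six-digit address."""
        return int(addr[0:2]), int(addr[2:4]), int(addr[4:6])

    assert address(1, 1, 1) == "010101"   # frame1 of row1(1)
    assert address(2, 2, 3) == "020203"   # frame3 of row2(2)
    assert parse("020203") == (2, 2, 3)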
Having described the capture and storage of the images, the playback of the images will now be described with reference to Figures 16 and 17 and continuing reference to Figures 13 and 15. As an initial step, the user selects playback on the user terminal (step 1605) and the playback begins. More specifically, the system begins by providing the user a default starting view of the professional and trainee (step 1610). In the present embodiment, the images of the professional and the trainee are displayed side-by-side, as shown in Figure 17, from the same camera 1310 at frame1. Determination of the first frame is described in greater detail below.
After the default view is displayed, the user may begin navigating the stored images. As with the embodiments described above, the user enters a user input via the user input device, and the server receives and interprets the input in the manner described above with reference to Figures 5 and 6 (step 1615). The server then accesses and updates in parallel the trainee image (step 1620a) and the professional image (step 1620b).
In the present embodiment, the user inputs include moving to the left or right and up or down in the array; further, each directional movement can be forward in time, at the same point in time, or backward in time. Such movement is achieved by accessing and, where appropriate, stringing together the frames taken by the cameras. More specifically, navigating through the array can be based on the logical arrangement and addressing scheme of frames: to move to the left to the next camera 1310, the camera number of the address of the image to be viewed (the third and fourth digits, counting from the right) is incremented; to move up to the next row, the row number (the fifth and sixth digits) is incremented; and to move forward in time to the next frame, the frame number (the first and second digits) is incremented.
Thus, with reference to Figure 15, starting with the first frame, frame1, of row1(1) (i.e., the image stored at address 01 01 01) and moving to the left with the image frozen at the same point in time, the next image is that associated with frame1 of row1(2) (i.e., the image stored at address 01 02 01), and then the image associated with frame1 of row1(3) (i.e., the image stored at address 01 03 01). Similarly, starting with the first image, frame1, of row1(1) (i.e., the image stored at address 01 01 01) and moving up, to the left and forward in time, the next image could be that associated with frame2 of row2(2) (i.e., the image stored at address 02 02 02), and then the image associated with frame3 of row3(3) (i.e., the image stored at address 03 03 03).
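Building on the address()/parse() helpers sketched above, the incremental navigation just described may be illustrated as follows; the MOVES table and step() function are hypothetical names, and only the three directions used in the example are shown.

    # Direction deltas expressed as (row, camera, frame) increments:
    # "left" advances one camera, "up" one row, "forward" one frame in time.
    MOVES = {"left": (0, 1, 0), "up": (1, 0, 0), "forward": (0, 0, 1)}

    def step(addr, *directions):
        """Apply one or more direction deltas to a six-digit address."""
        row, camera, frame = parse(addr)
        for d in directions:
            dr, dc, df = MOVES[d]
            row, camera, frame = row + dr, camera + dc, frame + df
        return address(row, camera, frame)

    assert step("010101", "left") == "010201"                   # frozen in time
    assert step("010201", "left") == "010301"
    assert step("010101", "up", "left", "forward") == "020202"  # diagonal move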
Once the new camera outputs are accessed and retrieved from the storage devices, the server provides an updated view to the user (step 1625). Images of both the professional and trainee are updated synchronously. Changes to the user's view are applied to both the professional's and the trainee's images. Operation of the present embodiment is made efficient by using the same addressing scheme in both the storage device containing the professional's images and the storage device containing the trainee's images. In other words, each frame from each camera is stored at the same address in different storage devices. Therefore, the server receives the user input, determines the next appropriate camera frame/output and corresponding address, mixes the last frame with the updated frame and causes the image stored at that address in each storage device to be provided to the user.
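A minimal sketch of this parallel access, assuming both swings' frames are held in dictionary-like stores indexed by the common addressing scheme; the function name is hypothetical:

    def updated_view(next_address, pro_store, trainee_store):
        """One address, two parallel stores: because both swings use the
        same addressing scheme, a single computed address retrieves the
        matching professional and trainee frames for synchronous display."""
        return pro_store[next_address], trainee_store[next_address]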
Having displayed the view, the server awaits the next user input (step 1615). In the embodiment of Figures 13-17, the server continuously updates the view based on the previously entered user input until the user enters a different input. Moreover, the playback preferably occurs at the same rate as the image capture occurred, namely thirty frames per second in the present embodiment. Therefore, when the selected user input is "forward in time" (from any camera(s)), the view is essentially a video playback at the actual speed of the swings. It should be understood that the present invention is independent of the type of cameras and the capture and playback rates.
The present embodiment thus allows for enhanced comparison of images and, consequently, improved training. The trainee's swing can be compared to that of the professional in many ways. For example, the swings can be compared at a single point in time, such as at the top of the trainee's back swing, and from any perspective provided by the array, such as front, back, top, etc. Additionally, the swings can be compared through sequential points in time, throughout a portion or the entirety of the swings, and from a changing perspective. The swings can be compared at actual speed over and over again, each time from a new perspective. In sum, the present embodiment allows two images to be compared at any point in time from any perspective.
In an alternate embodiment for comparing multiple images, the images are displayed one overlaid on top of another. In one alternate embodiment utilizing overlaid images, the images are displayed with differing luminance levels. For example, the professional swing image, which remains constant, can be captured and stored with no change in luminance level. The trainee swing image, on the other hand, can be stored with a lesser luminance level so that it can be overlaid on top of the professional swing image. In such an embodiment, the camera outputs are temporarily stored in the storage device and retrieved by the server; the server not only processes the outputs to matte out the image (if desired), but also adjusts the luminance level of each image. The server then stores the processed outputs for later retrieval during playback. In related embodiments the luminance levels are adjusted at different points during the system operation, such as when originally retrieved from the cameras or just prior to outputting to the user interface display device.
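By way of illustration, the luminance adjustment and overlay may be sketched as follows, assuming 8-bit frames held as NumPy arrays; the additive composite shown is one plausible reading only, as the disclosure does not specify the compositing operation, and the function names are hypothetical.

    import numpy as np

    def scale_luminance(frame, factor):
        """Uniformly scale an 8-bit frame's luminance; factor < 1.0 dims it."""
        return (frame.astype(np.float32) * factor).clip(0, 255).astype(np.uint8)

    def overlay(pro_frame, trainee_frame, trainee_factor=0.5):
        """Composite the dimmed trainee frame over the professional frame
        using a simple additive composite clipped to the valid range."""
        dimmed = scale_luminance(trainee_frame, trainee_factor)
        total = pro_frame.astype(np.uint16) + dimmed
        return np.clip(total, 0, 255).astype(np.uint8)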
In yet another alternate embodiment, the user may separately control the views of the professional's and the trainee's swings. In such an embodiment, the server discriminates between two sets of user inputs — one relating to each of the two images. In the embodiment of Figures 13-17, the opening in the dome allows the golfers to take a realistic swing and hit an actual ball. Where a greater range of viewing is desired, however, the array need not include an opening for the ball to travel. Instead, the golfers can be completely enclosed in a dome of cameras (entering by way of a door having cameras mounted thereon), thereby allowing viewing from 360°.
In the embodiment of Figures 13-17, the server mixes the camera frames/images by electronically switching between frames/images. However, in alternate embodiments the server mixes the frames/images in any of the manners described above. For example, in one embodiment, mixing includes creating a "tweened" image from the output of adjacent cameras. The tweened image can be created and stored, or depending upon available processing power, created in real time as the view is being presented to the user.
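A minimal sketch of such tweening, assuming 8-bit frames held as NumPy arrays; a plain weighted cross-fade is shown as the simplest form of mixing, whereas more faithful view interpolation would also warp the images toward a common viewpoint.

    import numpy as np

    def tween(frame_a, frame_b, weight=0.5):
        """Synthesize an intermediate ("tweened") image as a weighted
        blend of the outputs of two adjacent cameras."""
        a = frame_a.astype(np.float32)
        b = frame_b.astype(np.float32)
        return ((1.0 - weight) * a + weight * b).astype(np.uint8)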
Figure 18a illustrates the logical relationship of real and mixed images according to one embodiment in which the mixed images are synthesized images that are the product of images (output) from adjacent cameras. The logical arrangement of frames containing the real and mixed images can best be illustrated in a three-dimensional representation in which the first axis represents sequential frames, the second axis represents sequential rows, and the third axis represents sequential cameras in each row. Thus, as shown in Figure 18a, sequential frames of the same camera are illustrated along the horizontal axis (i.e., left to right), adjacent rows are illustrated along the vertical axis, and adjacent cameras in the same row are illustrated along the axis extending into the page. More specifically, frames containing real images are illustrated as squares and bear the same logical address as corresponding frames identified in Figure 15. Synthesized frames created by mixing outputs from the same point in time, from two adjacent cameras in the same row are represented by triangles; synthesized frames created by mixing outputs from the same point in time, from corresponding cameras in adjacent rows are indicated by circles; and synthesized frames created by mixing outputs from the same point in time, from a camera in a given row and from the next camera in an adjacent row are indicated by diamonds. The asterisk indicates a synthesized frame created by mixing the outputs from adjacent cameras, in adjacent rows, taken at subsequent points in time (i.e., adjacent frames). Furthermore, the mixed images are labeled with the logical notation wherein an apostrophe (') adjacent to either the second or third pair of digits signifies that the image was created by mixing outputs of adjacent cameras in the same row or corresponding cameras in adjacent rows, respectively. For example, the notation 01'01 01 refers to the image created by mixing frames 01 01 01 and 02 01 01; 01'01'01 refers to the image created by mixing frames 01 01 01 and 02 02 01; and 01'01'01' refers to the image created by mixing frames 01 01 01 and 02 02 02. It is to be understood that certain of the mixed frames, although described as being the product of two particular frames, may be the product of two or more other frames. For example, frame 01' 01' 01 may be created by mixing frames 02 01 01 and 01 02 01, or by mixing 01 01 01, 02 01 01, 01 02 01 and 02 02 01.
Although for simplicity Figure 18a illustrates only two successive frames of each of two adjacent cameras in each of two adjacent rows, it is to be understood that the logical depiction is readily extensible to multiple frames, cameras and rows. Having described the logical relationship of frames containing real images and synthesized frames containing mixed images, exemplary user navigation will be described with reference to Figures 18b and c, which use the same notation as Figure 18a, and continuing reference to Figure 13.
Thus, a user navigating the array from the first camera in row 1 and moving to the left at the same point in time is sequentially provided the images of frames 01 01 01, 01 01'01, and 01 02 01. Continuing to navigate the array by moving upward at the same point in time, the user is sequentially provided frames 01'02 01 and 02 02 01. Finally, moving forward in time from the same camera, the user is sequentially provided the image of frame 02 02 02 and subsequent frames, 02 02 03, 02 02 04, et seq. Similarly, as shown in Figure 18c, a user navigating through the array diagonally to the left and up while moving forward in time is sequentially provided frames 01 01 01, 01'01'01', and 02 02 02.
In certain embodiments of the present invention the system identifies one or more reference points of the swings and uses such reference points to synchronize the swings and/or adjust the playback speed of the swings. In such embodiments, the system includes a user interface device, through which a user can manually indicate a reference point of a swing, or any number of motion measuring devices, such as motion detectors, range finders, electronic tags (mounted on the golfer or golf club) and the like. Applying such devices to embodiments of the present invention, various points in the swing can be identified, including the beginning of movement of the golf club during the back swing, the change of direction of the golf club at the end of the back swing, contact of the golf club and the golf ball, the end of the follow-through, when the golf club comes to rest, and the like. Manual indications, as well as indications received from such movement measuring means, of the various points in the swing may be used to synchronize the swings of the professional and the trainee.
In such an embodiment, the system begins recording of a swing at a reference time, t=0. The system then receives an indication, either manual or from one of the motion measuring means, indicating the reference point in the swing. More specifically, the system automatically notes the time of such an indication, t=x, relative to the beginning of recording.
Having an indication of the time (t=x) at which the reference point of the swing occurred, the system identifies the frame corresponding to the reference point essentially by multiplying the time at which the reference point of the swing occurred by the recording speed of the cameras (i.e., x seconds × 30 frames/second = 30x frames). In an alternate embodiment, the system receives the indication and, in essentially real time, tags the corresponding reference frame.
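Expressed as code, with the thirty-frames-per-second recording speed of the present embodiment (the function name is hypothetical):

    FRAMES_PER_SECOND = 30  # recording speed of the cameras

    def reference_frame(t_seconds):
        """Frame corresponding to a reference indication at t = x seconds
        after recording began: x seconds * 30 frames/second = 30x frames."""
        return round(t_seconds * FRAMES_PER_SECOND)

    assert reference_frame(2.0) == 60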
With this process repeated for both the professional swing and the trainee swing, the two identified reference frames are used as synchronizing points for the swings. For example, in one embodiment where the reference point is the beginning of the back swing, such reference frames are used as the first frame in the playback and all navigation is performed relative to the two reference frames.
In the embodiment where the beginnings of the swings are synchronized, a user is able to compare the swings to determine whether the trainee is swinging too fast or too slow.
However, where the trainee and professional swing at different speeds, point-by-point comparison of the swings becomes difficult as the swings diverge and lack synchronization. Use of multiple reference points, however, permits the system to synchronize the swings and compensate for the different swing speeds, thereby allowing essentially point-by-point comparison of the swings.
The operation of one embodiment in which the system uses multiple reference points and compensates for differing swing speeds will now be described with reference to Figures 19-20. Different swing speeds correspond to different time durations of swings, which, in turn, correspond to different numbers of frames. Thus, one manner in which to compensate for different swing speeds is to adjust the number of frames for one of the swings.
For example, a professional golfer may swing in about two seconds. During the two-second swing, cameras operating at thirty frames per second capture sixty frames. A trainee, on the other hand, may swing slower, over three seconds. Thus, the trainee swing will take a total of ninety frames. Accordingly, with the playback of the images occurring at the same thirty-frames-per-second rate, the addition of thirty frames to the professional swing will cause the professional's swing to be the same duration, and thus the same speed, as the trainee's swing; both will be ninety frames in duration. In the present embodiment, these thirty additional frames are preferably mixed images, created from successive frames of each camera, that are uniformly interspersed among the frames of each camera containing real images.
The logical arrangement of the frames containing real images and frames containing mixed images of the foregoing example is illustrated in Figure 19. Interspersed among the sixty frames containing real images of the professional swing are thirty frames of mixed images. More specifically, the thirty mixed images are uniformly interspersed between every other pair of frames; a mixed image has been created between frames 1 and 2, not between frames 2 and 3, between frames 3 and 4, not between frames 4 and 5, and so forth.
It is to be understood that such mixed images created from successive frames from the same camera can be combined in the same embodiment as mixed images created from frames from different cameras. Moreover, in certain embodiments of the present invention, such mixed images interspersed for the purpose of adjusting the speed of the image are used to create other mixed images. For example, in the schematic of Figure 18a, the mixed images that are interspersed for adjusting the speed of the swing are indicated by an "X", and (using the notation of Figure 18a) mixed images 01 01 01' and 02 01 01' are used to create mixed image 01' 01 01'.
The capture of the images and creation of the mixed images of the embodiment of Figure 19 will now be described with regard to Figure 20. The system first captures and stores the image of the professional's swing and the image of the trainee's swing (step 2010). The system then receives a user input via a user interface device indicating the user's desire to harmonize the speeds of two swings (step 2020). The system then proceeds to create the necessary mixed images.
More specifically, during playback of each image, the system receives indications via the motion measuring device coupled to the system (e.g., the server) noting both the beginning and end of the first swing (step 2030). These indications correspond to particular points in time relative to the start of recording, which, in turn, correspond to particular reference frames that the system tags. In alternate embodiments the system automatically identifies the beginning and end of each swing by input from any of a number of motion measuring devices, such as motion detectors, range finders, electronic tags and the like, and in other embodiments via manual input via a user interface device during playback of the images. It should be noted that the "beginning" and "end" points of a swing need not be precisely defined, but are preferably selected so that the points correspond to the same part of the two swings. For example, the beginning may be the beginning of the golfer's back swing and the end may be when the golf club comes to rest after the golfer's follow-through.
Once the system has identified the bounds (i.e., beginning and end) of what the user considers to be the swing, the system determines the number of frames in the first swing (step 2040). In the present embodiment, the system determines the number of frames by noting the relative time between reference points and multiplying by the number of frames per unit time (e.g., x seconds × 30 frames/second = 30x frames). In an alternate embodiment, the system determines the number of frames by incrementing a counter for each frame address in a linked list of frame addresses between the frames corresponding to the beginning and end of the swing. The system proceeds through the same steps to count the number of frames for the second swing (step 2050).
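By way of illustration, the time-based frame count of step 2040 reduces to a one-line computation; frames_in_swing() is a hypothetical name:

    def frames_in_swing(t_begin, t_end, fps=30):
        """Number of frames between the beginning and end indications,
        computed from the relative time between the two reference points."""
        return round((t_end - t_begin) * fps)

    assert frames_in_swing(0.0, 2.0) == 60   # a two-second professional swing
    assert frames_in_swing(0.0, 3.0) == 90   # a three-second trainee swing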
Once the number of frames in each swing containing real images is determined, the number of frames in the faster swing is subtracted from the number of frames in the slower swing, resulting in the number of mixed images to be added to the faster swing (step 2060). In the example of Figure 19, because the slower swing included ninety frames and the faster swing sixty frames, thirty mixed frames must be added to the faster swing.
The system must also determine the composition of the mixed images (step 2070). In the context of the logical depiction of Figure 19, the system must determine the "location" of the mixed images. Preferably, the system evenly intersperses the frames containing the mixed images. In the present embodiment, the location of the frames is determined by dividing the number of additional mixed images to be added into the number of frames containing real images of the faster swing. In the example of Figure 19, then, sixty original frames divided by thirty additional mixed images equals one added mixed image every two original frames. Where the division results in a non-integer, even distribution can be approximated by rounding the result to the next highest integer. Each mixed image comprises the product of mixing the two adjacent frames containing real images. Once the compositions of the mixed images are determined, the system proceeds to create and store the mixed images (step 2080).
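The interspersion of steps 2060-2080 may be sketched as follows, reusing the tween() helper from the earlier sketch, assuming frames held as NumPy arrays and an even division as in the example of Figure 19; harmonize() is a hypothetical name.

    def harmonize(fast_frames, slow_frame_count):
        """Evenly intersperse mixed frames into the faster swing until it
        matches the slower swing's frame count, each inserted frame being
        a mix of its two real neighbors."""
        extra = slow_frame_count - len(fast_frames)   # e.g. 90 - 60 = 30
        spacing = len(fast_frames) // extra           # e.g. 60 // 30 = 2
        out = []
        for i, frame in enumerate(fast_frames):
            out.append(frame)
            # insert between frames 1 and 2, then 3 and 4, and so forth
            if extra > 0 and i % spacing == 0 and i + 1 < len(fast_frames):
                out.append(tween(frame, fast_frames[i + 1]))
                extra -= 1
        return out

With sixty real frames and thirty frames to add, the sketch inserts one mixed frame after every other real frame, yielding the ninety-frame sequence of Figure 19.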
It is to be understood that the present invention includes other manners of harmonizing the speed of the two swings. For example, in alternate embodiments, rather than interleaving mixed images into the faster swing, blank frames are inserted or repeat frames are inserted. In still other alternate embodiments, the system accounts for the different speeds by adjusting the playback speed based on the ratio of the lengths of the swings. For example, in the context of the example of Figure 19, the playback speed of the professional swing (sixty frames) is two-thirds (60 frames/90 frames) that of the trainee swing (ninety frames). Thus, if the trainee swing is played back at thirty frames per second, the professional swing is played back at twenty frames per second, resulting in both swings lasting three seconds (60 frames × 1 second/20 frames = 3 seconds; 90 frames × 1 second/30 frames = 3 seconds). The system adjusts the playback speed by accessing and/or refreshing the frames at different rates. In yet another alternate embodiment, a number of frames (equal to the number otherwise to be added to the faster swing in the above embodiments) from the slower swing are dropped from the image.
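The playback-rate alternative reduces to a ratio computation, sketched here with hypothetical names:

    def playback_rates(frame_counts, base_fps=30):
        """Play each swing at a rate proportional to its length so that
        all last as long as the longest: 60 and 90 frames at a 30 fps
        base rate play back at 20 and 30 fps respectively."""
        duration = max(frame_counts) / base_fps   # seconds, e.g. 3.0
        return [n / duration for n in frame_counts]

    assert playback_rates([60, 90]) == [20.0, 30.0]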
Moreover, it is to be understood that the system and method for adjusting the speed of a swing may be separately applied to portions of a swing, thereby synchronizing discrete portions of swings. For example, the different durations of the professional's and trainee's backswings may be harmonized so that upon playback both images arrive at the end of the backswing at the same time. Furthermore, the remainder of the swing (i.e., the downswing and follow-through) can similarly be synchronized. To achieve synchronization of portions of the swing, the process of Figure 20 is performed based on the beginning and end of each portion of the swing to be synchronized.

Although certain logical storage arrangements of frames have been described herein, it is to be understood that the present invention is not limited to any particular frame addressing scheme. One exemplary addressing scheme is that of the embodiment of Figure 15, wherein successive images are stored at known, contiguous addresses. In alternate embodiments, the system includes various degrees of a linked list of frame addresses. In one such embodiment, each data element in the linked list points to a frame as well as to the previous and successive frames in each of the variable dimensions, such as those illustrated in Figure 18a, including up and down, diagonal, left and right, and forward and back in time. In other such embodiments, the data elements in the linked list point to either the previous or successive frame in a subset of those dimensions. Furthermore, it is preferable that frames taken from cameras at the boundaries of the array are linked to frames taken at the opposite boundary. For example, the frames from the last camera in a given row of the array of Figure 13 are linked to frames from the first camera in the same row.
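By way of illustration, one hypothetical realization of such a linked list of frame addresses, showing only the left/right links of a single row with the boundary wrap-around described above; FrameNode and link_row are names introduced here for exposition only.

    from dataclasses import dataclass, field
    from typing import Dict

    @dataclass
    class FrameNode:
        """One stored frame with links to neighboring frames in each
        navigable dimension (e.g. "left", "right", "up", "down",
        "earlier", "later")."""
        address: str
        neighbors: Dict[str, "FrameNode"] = field(default_factory=dict)

    def link_row(row_nodes):
        """Chain one row's frames left/right, wrapping the last camera
        in the row around to the first, as at the array boundaries."""
        n = len(row_nodes)
        for i, node in enumerate(row_nodes):
            node.neighbors["left"] = row_nodes[(i + 1) % n]
            node.neighbors["right"] = row_nodes[(i - 1) % n]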
Additionally, although the exemplary embodiments described herein relating to harmonizing the speed and duration of images are concerned with harmonizing two images, the present invention can be used to harmonize multiple images by utilizing the process described with reference to Figure 19 to add frames to all but the longest image. Furthermore, it is to be understood that although the embodiments described herein intersperse a single frame containing a mixed image between frames containing real images, in alternate embodiments multiple frames containing mixed images are interspersed between frames containing real images.
It is also to be understood that images captured and processed according to the present invention may be stored on a portable storage medium, such as a CD-ROM, and played back by a user on hardware separate from that which was used to capture and process the images. In such an embodiment, the playback hardware includes software providing the playback functionality, including the ability to interpret user inputs and, in response thereto, locate and display appropriate frames. The playback software locates the frames in any number of ways, including accessing a mapping or linked list of the frames which is stored on the storage medium.
Embodiments Covered
Although the present invention has been described in terms of certain preferred embodiments, other embodiments that are apparent to those of ordinary skill in the art are also intended to be within the scope of this invention. Accordingly, the scope of the present invention is intended to be limited only by the claims appended hereto.

Claims
1. A system for comparing two or more subjects, the system comprising: an array of cameras, the array including a plurality of cameras capable of capturing images from a plurality of perspectives and over a plurality of points in time; one or more storage devices coupled to the array, the storage devices capable of storing a first set of images of a first subject and a second set of images of a second subject, the sets of images captured by the array from a plurality of perspectives and at a plurality of points in time; and one or more processors coupled to the storage devices, the processors programmed to selectively access for display: a first image of the first subject, the first image from one of the perspectives and at one of the points in time, and a second image of the first subject, the second image from another perspective, another point in time, or both another perspective and another point in time; and a first image of the second subject, the first image from one of the perspectives and at one of the points in time, and a second image of the second subject, the second image from another perspective, another point in time, or both another perspective and another point in time; the processors also programmed to mix the first images with the second images, thereby allowing comparison of the first subject and the second subject from changing perspectives or changing points in time.
2. The system of claim 1 wherein the processors selectively access the first and second images in response to user inputs, the user inputs indicating changing perspectives, changing points in time, or both changing perspectives and changing points in time.
3. The system of claim 1 wherein the first and second images of the first subject are from the same perspectives and points in time as the first and second images of the second subject.
4. The system of claim 1 wherein the one or more storage devices includes a first set of one or more devices for storing images of the first subject and a second set of one or more separate devices for storing images of the second subject.
5. The system of claim 1 wherein the images are frames and the storage devices are for storing the frames, the frames being independently accessible.
6. The system of claim 1 further including a user display device coupled to the processors.
7. The system of claim 6 wherein the first and second mixed images of the first subject are displayed adjacent to the first and second mixed images of the second subject.
8. The system of claim 6 wherein the first and second mixed images of the first subject are displayed over the first and second mixed images of the second subject.
9. The system of claim 8 wherein the processors are programmed to display the images of the first subject at a first luminance level and the images of the second subject at a second luminance level.
10. A method of comparing two or more subjects, the method comprising: capturing images of a first subject from a plurality of perspectives over time; capturing images of a second subject from a plurality of perspectives over time; storing the images of the first subject; storing the images of the second subject; accessing a first series of images of the first subject, the first series of images from changing perspectives, points in time, or both perspectives and points in time; accessing a second series of images of the second subject, the second series of images from changing perspectives, points in time, or both perspectives and points in time; mixing the first series of images; and mixing the second series of images, the mixed first series and mixed second series available for viewing.
11. A method of comparing two or more subjects, the method comprising: capturing images of a first subject from a plurality of perspectives over time; capturing images of a second subject from a plurality of perspectives over time; storing the images of the first subject; storing the images of the second subject; accessing a first image of the first subject, the first image of the first subject from one perspective and one point in time; accessing a first image of the second subject, the first image of the second subject from one perspective and one point in time; accessing a second image of the first subject, the second image of the first subject from another perspective or another point in time; accessing a second image of the second subject, the second image of the second subject from another perspective or another point in time; mixing the first and second images of the first subject; and mixing the first and second images of the second subject.

one or more processors coupled to the storage devices, the processors programmed to selectively access for display at least a portion of the first set of images and at least a portion of the second set of images, the portion of the first set of images and the portion of the second set of images being images from any of the perspectives and at any of the points in time, such accessing allowing comparison of the first subject and the second subject from any of the perspectives and at any of the points in time.
12. A method of synchronizing two images, the method comprising: determining a length of a first image relative to a length of a second image; determining a number of frames to be added to a shorter one of the images, the number of frames based on the difference in lengths of the first and second images; adding the number of frames to the shorter image.
13. A method of comparing a plurality of images, the method comprising: determining a length of a portion of a first image; determining a length of a portion of a second image, the length of the portion of one of the images being shorter than the length of the portion of the other image; adding a number of frames to the portion of the shorter image so that the length of the portion of the shorter image approaches the length of the portion of the other image.
14. The method of claim 13 further comprising capturing the first and second images with an array of video cameras, each image captured from multiple cameras in the array over a period of time.
15. The method of claim 14 wherein determining the lengths of the portions of the images includes receiving a manual indication of boundaries of each portion.
16. The method of claim 14 wherein determining the lengths of the portions of the images includes receiving an electronic indication of boundaries of the portions.
17. The method of claim 14 wherein the length of the portion of the shorter image is a first number of frames, the length of the portion of the other image is a second number of frames, and the number of frames to be added to the portion of the shorter image is approximately the second number less the first number.
18. The method of claim 14 wherein the frames added to the portion of the shorter image include images synthesized from two or more adjacent frames of the portion of the shorter image.
19. The method of claim 14 wherein the frames added to the portion of the shorter image include duplicate frames of the portion of the shorter image.
20. The method of claim 14 wherein the frames added to the portion of the shorter image include blank frames.
21. The method of claim 14 further comprising: determining a length of a second portion of a first image; determining a length of a second portion of a second image, the length of the second portion of one image being shorter than the length of the second portion of the other image; adding a number of frames to the second portion of the shorter image so that the length of the second portion of the shorter image approaches the length of the second portion of the other image.
22. A telepresence system for providing a first user with a first display of an environment and a second user with a second display of the environment, the system comprising: a plurality of removable arrays of cameras, each camera having an associated view of the environment and an associated camera output representing the associated view; at least one storage device including a plurality of storage nodes wherein the output of each camera is stored in an associated storage node, the storage nodes are accessible to permit at least one path for viewing the environment; a first user interface device associated with the first user having first user inputs associated with movement along a first path in the environment; a second user interface device associated with the second user having second user inputs associated with movement along a second path in the environment; at least one processing element coupled to the user interface devices for receiving user inputs including moving up, down, clockwise around an environment, counter-clockwise around an environment, and forward and backward through the environment, the processing element configured to interpret received first inputs and select outputs of the storage nodes forming the first path, and interpret received second inputs and select outputs of storage nodes forming the second path independently of the first inputs, thereby allowing the first user and second user to navigate simultaneously and independently through the environment.
23. The telepresence system of claim 22 wherein the outputs of the cameras are accessible by the processing element.
24. The telepresence system of claim 22 wherein each removable array is situated at different lengths from the environment.
25. The telepresence system of claim 23 wherein each array is removed after the cameras in the array have transmitted the output to the associated storage node.
26. The telepresence system of claim 25 wherein each array is of cylindrical shape and of a varying diameter.
27. A telepresence system for providing a first user with a first display of an environment and a second user with a second display of the environment, the system comprising: a plurality of removable arrays of cameras, each camera having an associated view of the environment and an associated camera output representing the associated view, the arrays situated at varying lengths from the environment and including at least one path for viewing the environment, said arrays are removed after the cameras in the array have transmitted the output to an associated storage node; at least one storage device including a plurality of storage nodes wherein the output of each camera is stored in an associated storage node, the storage nodes are accessible to permit at least one path for viewing the environment; a first user interface device associated with the first user having first user inputs associated with movement along a first path in the environment; a second user interface device associated with the second user having second user inputs associated with movement along a second path in the environment; at least one processing element coupled to the user interface devices for receiving user inputs including moving up, down, clockwise around an environment, counter-clockwise around an environment, and forward and backward through the environment, the processing element configured to interpret received first inputs and select outputs of the storage node forming the first path, and interpret received second inputs and select outputs of the storage node forming the second path independently of the first inputs, thereby allowing the first user and second user to navigate simultaneously and independently through the environment.
28. A method of providing users with views of a remote environment, the method comprising: receiving electronic images of the environment from a plurality of arrays of cameras; storing the images of the environment in storage nodes associated with each camera, the storage nodes are accessible to permit at least one path for viewing the environment; removing the array of cameras after storing the image in the associated storage node; receiving a first input from a first user interface device associated with a first user, the first input indicating movement along a first path; receiving a second input from a second user interface device associated with a second user, the second input indicating movement along a second path; obtaining a first mixed image by mixing, with a first processing element, a first image with a second image in accordance with the first input; obtaining a second mixed image by mixing, with a second processing element, a third image with a fourth image in accordance with the second input; providing the first user with the first mixed image thereby simulating movement along the first path; and providing the second user with the second mixed image thereby independently simulating movement along the second path.
29. The method of claim 28 wherein one array at a time is situated around the environment.
30. A telepresence system for allowing a first user, in response to at least one first input, to navigate along a first path through an environment and a second user, in response to at least one second input, to navigate along a second path through the environment independently of the first user, the system comprising: an array of cameras including a first series of camera nodes defining the first path through the environment, wherein each camera node includes at least one camera and wherein the cameras in the first series have progressively different perspectives of the environment along the first path, and a second series of camera nodes defining the second path through the environment, wherein the cameras in the second series have progressively different perspectives of the environment along the second path, wherein at least one of the camera nodes includes multiple cameras having different fields of view of the environment from substantially the same perspective; at least one processing element coupled to the array, the processing element configured to select outputs of cameras in the first series, based on the first inputs, and to cause the selected outputs of cameras in the first series to be sequentially provided to the first user, thereby allowing the first user to progressively navigate through the environment along the first path, the at least one processing element also configured to select outputs of cameras in the second series, based on the second inputs, and to cause the selected outputs of cameras in the second series to be sequentially provided to the second user, thereby allowing the first user and second user to navigate simultaneously and independently through the environment along paths defined by cameras, and forward or backward through the environment based on the fields of view of the cameras at the at least one camera node.
PCT/US2000/028652 1999-10-15 2000-10-16 Method and system for comparing multiple images utilizing a navigable array of cameras WO2001028309A2 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU12081/01A AU1208101A (en) 1999-10-15 2000-10-16 Method and system for comparing multiple images utilizing a navigable array of cameras
EP00973582A EP1224798A2 (en) 1999-10-15 2000-10-16 Method and system for comparing multiple images utilizing a navigable array of cameras
HK03100632.1A HK1048576A1 (en) 1999-10-15 2003-01-24 Method and system for comparing multiple images utilizing a navigable array of cameras

Applications Claiming Priority (4)

Application Number Priority Date Filing Date Title
US09/419,274 US6522325B1 (en) 1998-04-02 1999-10-15 Navigable telepresence method and system utilizing an array of cameras
US09/419,274 1999-10-15
US22895800P 2000-08-29 2000-08-29
US60/228,958 2000-08-29

Publications (2)

Publication Number Publication Date
WO2001028309A2 true WO2001028309A2 (en) 2001-04-26
WO2001028309A3 WO2001028309A3 (en) 2001-09-13

Family

ID=26922813

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2000/028652 WO2001028309A2 (en) 1999-10-15 2000-10-16 Method and system for comparing multiple images utilizing a navigable array of cameras

Country Status (5)

Country Link
EP (1) EP1224798A2 (en)
CN (1) CN1409925A (en)
AU (1) AU1208101A (en)
HK (1) HK1048576A1 (en)
WO (1) WO2001028309A2 (en)


Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP5493456B2 (en) * 2009-05-01 2014-05-14 ソニー株式会社 Image processing apparatus, image processing method, and program
CN102036010A (en) * 2009-09-30 2011-04-27 鸿富锦精密工业(深圳)有限公司 Image processing system and method
CN102065214A (en) * 2009-11-12 2011-05-18 鸿富锦精密工业(深圳)有限公司 Image processing system and method
US9354748B2 (en) 2012-02-13 2016-05-31 Microsoft Technology Licensing, Llc Optical stylus interaction
US8873227B2 (en) 2012-03-02 2014-10-28 Microsoft Corporation Flexible hinge support layer
US8947353B2 (en) 2012-06-12 2015-02-03 Microsoft Corporation Photosensor array gesture detection
US9256089B2 (en) 2012-06-15 2016-02-09 Microsoft Technology Licensing, Llc Object-detecting backlight unit
JP2020197550A (en) * 2019-05-30 2020-12-10 パナソニックi−PROセンシングソリューションズ株式会社 Multi-positioning camera system and camera system


Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5495576A (en) * 1993-01-11 1996-02-27 Ritchey; Kurtis J. Panoramic image based virtual reality/telepresence audio-visual system and method
US5659323A (en) * 1994-12-21 1997-08-19 Digital Air, Inc. System for producing time-independent virtual camera movement in motion pictures and other media
US5703961A (en) * 1994-12-29 1997-12-30 Worldscape L.L.C. Image transformation and synthesis methods
US5729471A (en) * 1995-03-31 1998-03-17 The Regents Of The University Of California Machine dynamic selection of one video camera/image of a scene from multiple video cameras/images of the scene in accordance with a particular perspective on the scene, an object in the scene, or an event in the scene
US6020931A (en) * 1996-04-25 2000-02-01 George S. Sheng Video composition and position system and media signal communication system

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002096096A1 (en) * 2001-05-16 2002-11-28 Zaxel Systems, Inc. 3d instant replay system and method
GB2382485A (en) * 2001-10-11 2003-05-28 Hewlett Packard Co Multiple camera arrangement
GB2382485B (en) * 2001-10-11 2005-10-05 Hewlett Packard Co Multiple camera arrangement
EP1519582A1 (en) * 2002-06-28 2005-03-30 Sharp Kabushiki Kaisha Image data delivery system, image data transmitting device thereof, and image data receiving device thereof
EP1519582A4 (en) * 2002-06-28 2007-01-31 Sharp Kk Image data delivery system, image data transmitting device thereof, and image data receiving device thereof
WO2012056437A1 (en) 2010-10-29 2012-05-03 École Polytechnique Fédérale De Lausanne (Epfl) Omnidirectional sensor array system
US10362225B2 (en) 2010-10-29 2019-07-23 Ecole Polytechnique Federale De Lausanne (Epfl) Omnidirectional sensor array system
US9904327B2 (en) 2012-03-02 2018-02-27 Microsoft Technology Licensing, Llc Flexible hinge and removable attachment
US9852855B2 (en) 2012-03-02 2017-12-26 Microsoft Technology Licensing, Llc Pressure sensitive key normalization
US10963087B2 (en) 2012-03-02 2021-03-30 Microsoft Technology Licensing, Llc Pressure sensitive keys
US9618977B2 (en) 2012-03-02 2017-04-11 Microsoft Technology Licensing, Llc Input device securing techniques
US9619071B2 (en) 2012-03-02 2017-04-11 Microsoft Technology Licensing, Llc Computing device and an apparatus having sensors configured for measuring spatial information indicative of a position of the computing devices
US9678542B2 (en) 2012-03-02 2017-06-13 Microsoft Technology Licensing, Llc Multiple position input device cover
US9710093B2 (en) 2012-03-02 2017-07-18 Microsoft Technology Licensing, Llc Pressure sensitive key normalization
US9766663B2 (en) 2012-03-02 2017-09-19 Microsoft Technology Licensing, Llc Hinge for component attachment
US10013030B2 (en) 2012-03-02 2018-07-03 Microsoft Technology Licensing, Llc Multiple position input device cover
US9075566B2 (en) 2012-03-02 2015-07-07 Microsoft Technology Licensing, LLC Flexible hinge spine
US9870066B2 (en) 2012-03-02 2018-01-16 Microsoft Technology Licensing, Llc Method of manufacturing an input device
WO2014022309A1 (en) 2012-07-30 2014-02-06 Yukich Bartholomew G Systems and methods for creating three-dimensional image media
EP2880862A4 (en) * 2012-07-30 2016-03-23 Bartholomew G Yukich Systems and methods for creating three-dimensional image media
US9824808B2 (en) 2012-08-20 2017-11-21 Microsoft Technology Licensing, Llc Switchable magnetic lock
WO2014035717A1 (en) * 2012-08-30 2014-03-06 Microsoft Corporation Changing perspectives of a microscopic-image device based on a viewer's perspective
US10120420B2 (en) 2014-03-21 2018-11-06 Microsoft Technology Licensing, Llc Lockable display and techniques enabling use of lockable displays
US10324733B2 (en) 2014-07-30 2019-06-18 Microsoft Technology Licensing, Llc Shutdown notifications

Also Published As

Publication number Publication date
HK1048576A1 (en) 2003-04-04
CN1409925A (en) 2003-04-09
EP1224798A2 (en) 2002-07-24
AU1208101A (en) 2001-04-30
WO2001028309A3 (en) 2001-09-13

Similar Documents

Publication Publication Date Title
US7613999B2 (en) Navigable telepresence method and systems utilizing an array of cameras
AU761950B2 (en) A navigable telepresence method and system utilizing an array of cameras
US6741250B1 (en) Method and system for generation of multiple viewpoints into a scene viewed by motionless cameras and for presentation of a view path
US7193645B1 (en) Video system and method of operating a video system
WO2001028309A2 (en) Method and system for comparing multiple images utilizing a navigable array of cameras
US6034716A (en) Panoramic digital camera system
US7224382B2 (en) Immersive imaging system
US6141034A (en) Immersive imaging method and apparatus
US20020190991A1 (en) 3-D instant replay system and method
WO1995007590A1 (en) Time-varying image processor and display device
US5153716A (en) Panoramic interactive system
US20090309975A1 (en) Dynamic Multi-Perspective Interactive Event Visualization System and Method
US20020075295A1 (en) Telepresence using panoramic imaging and directional sound
EP1410621A1 (en) Method and apparatus for control and processing of video images
JP2014529930A (en) Selective capture and display of a portion of a native image
CN2667827Y (en) Quasi-panorama surrounded visual reproducing system
JPH09149296A (en) Moving projector system
WO2002087218A2 (en) Navigable camera array and viewer therefore
US20230353717A1 (en) Image processing system, image processing method, and storage medium
WO1995019093A1 (en) Viewing imaged objects from selected points of view
JP5457668B2 (en) Video display method and video system
JPH07105400A (en) Motion picture reproducing device
JPH1070740A (en) Stereoscopic camera and video transmission system

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AE AL AM AT AU AZ BA BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MD MG MK MN MW MX MZ NO NZ PL PT RO RU SD SE SG SI SK SL TJ TM TR TT UA UG US UZ VN YU ZA ZW

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZW AM AZ BY KG KZ MD RU TJ TM AT BE CH CY DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM GA GN GW ML MR NE SN TD TG

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
WWE Wipo information: entry into national phase

Ref document number: IN/PCT/2002/00473/MU

Country of ref document: IN

WWE Wipo information: entry into national phase

Ref document number: 2000973582

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 008170088

Country of ref document: CN

WWP Wipo information: published in national office

Ref document number: 2000973582

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWW Wipo information: withdrawn in national office

Ref document number: 2000973582

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: JP

DPE2 Request for preliminary examination filed before expiration of 19th month from priority date (pct application filed from 20040101)