US20040165061A1 - Method and apparatus for generating metadata for classifying and searching video databases based on 3-D camera motion - Google Patents

Method and apparatus for generating metadata for classifying and searching video databases based on 3-D camera motion

Info

Publication number
US20040165061A1
US20040165061A1 (application US10/781,018; US78101804A)
Authority
US
United States
Prior art keywords
video
camera motion
motion
image
computing
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US10/781,018
Inventor
Radu Jasinschi
Thumpudi Naveen
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Thomson Licensing SAS
Original Assignee
Thomson Licensing SAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Thomson Licensing SAS
Priority to US10/781,018
Assigned to THOMSON LICENSING S.A. Assignors: JASINSCHI, RADU S.; NAVEEN, THUMPUDI
Publication of US20040165061A1
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 16/00 Information retrieval; Database structures therefor; File system structures therefor
    • G06F 16/70 Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F 16/78 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F 16/783 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F 16/7847 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content
    • G06F 16/786 Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using low-level visual features of the video content using motion, e.g. object motion or camera motion

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Library & Information Science (AREA)
  • Theoretical Computer Science (AREA)
  • Data Mining & Analysis (AREA)
  • Databases & Information Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Image Analysis (AREA)

Abstract

A method of indexing and searching a video database having a plurality of video shots uses 3-D camera motion parameters. For each video shot the 3-D camera motion parameters are estimated, rates of tracking, booming, dollying, panning, tilting, rolling and zooming are computed, and the results are indexed in a metadata index file in the video database according to the types of camera motion. The video database is searched by selecting one of the types of camera motion and submitting a query. The query is processed to identify those video shots in the video database that satisfy the query in order of priority. The highest priority video shots are displayed for the user.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This is a continuation of provisional U.S. Patent Application Serial No. 60/118,204 filed Feb. 1, 1999, now abandoned.[0001]
  • BACKGROUND OF THE INVENTION
  • The present invention relates to video data processing, and more particularly to a method for classifying and searching video databases based on 3-D camera motion. [0002]
  • Video is becoming a central medium for the storage, transmission, and retrieval of dense audio-visual information. This has been accelerated by the advent of the Internet, networking technology, and video standardization by the MPEG group. In order to process and retrieve large amounts of video information efficiently, the video sequence has to be appropriately indexed and segmented according to different levels of its contents. This disclosure deals with one method for video indexing based on the (global) camera motion information. The camera, as it captures a given scene, moves around in 3-D space and consequently induces a corresponding 2-D image motion. For example, a forward-looking camera which moves forward induces in the image plane a dollying motion similar to an optical zoom-in motion, by which image regions increase in size and move out of view as they are approached. This kind of motion is very common in TV broadcast/cable news, sports, documentaries, etc., for which the camera, either optically or physically, zooms in or out or dollies forward and backward with respect to a given scene spot. This indicates the intention to focus the viewer's attention on particular scene parts. An analogously common camera motion is that of panning, for which the camera rotates about a vertical axis, thus inducing an apparent horizontal movement of image features. In this case the camera shows different parts of a scene as seen from a distance. This is also very common in TV programs when the intention is to give the viewer a general view of a scene without pointing to any particular details of it. In addition to dollying and panning, the camera may be tracking (horizontal translational motion), booming (vertical translational motion), tilting (rotation about the horizontal axis) and/or rolling (rotation about the forward axis). Taken together, these camera motions constitute a very general mode of communicating content information about video sequences which may be analyzed at various levels of abstraction. This is important for the storage and retrieval of video content information, which is going to be standardized by MPEG-7 by the year 2001. [0003]
  • What is desired is a general method of indexing and searching of video sequences according to camera motion which is based on full 3-D camera motion information estimated independently of the video contents, e.g., how the camera moves or how many objects there are in a given 3-D scene. [0004]
  • BRIEF SUMMARY OF THE INVENTION
  • Accordingly the present invention provides a method of classifying and searching video databases based on 3-D camera motion that is estimated independently of the video contents. Indexing and searching is realized on a video database made up of shots. Each video shot is assumed to be pre-processed from a long video sequence. For example, the MPEG-7 video test material is divided into CD-ROMs containing roughly 45 minutes of audio-video data (˜650 Mbytes). The shots are either manually or automatically generated. A collection of these shots makes up a video database. Each shot is individually processed to determine the camera motion parameters and afterwards indexed according to different types of camera motion. Finally, the video database is searched according to user specifications of types of camera motion. [0005]
  • The objects, advantages and other novel features of the present invention are apparent from the following detailed description when read in conjunction with the appended claims and attached drawing.[0006]
  • BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWING
  • FIG. 1 is a block diagram view of an overall system for classifying and searching video databases according to the present invention. [0007]
  • FIG. 2 is a block diagram view of a system for video shot querying according to the present invention. [0008]
  • FIG. 3 is a block diagram view of a search system according to the present invention. [0009]
  • FIG. 4 is a plan view of a screen showing the results of a video database search based on camera motion according to the present invention. [0010]
  • FIG. 5 is a graphic view of camera motion modes versus time for a video shot according to the present invention.[0011]
  • DETAILED DESCRIPTION OF THE INVENTION
  • Referring now to FIG. 1 the method of the current invention is summarized by the following steps. Given a video shot from a video database, the method: [0012]
  • 1. Estimates 3-D camera motion; [0013]
  • 2. Computes the amount of motion in the image as induced by the 3-D camera motion; [0014]
  • 3. Indexes the shot by the type of camera motion, e.g., translational (tracking, booming, dollying) or rotational (panning, tilting, rolling), based on the amount of motion and the 3-D camera motion signs; and [0015]
  • 4. Queries (or searches) for sub-shots or shot intervals based on the indexing information from step 3. [0016]
  • It should be remarked that the method may still be applied to indexing/search applications if the 3-D camera motion is obtained through another method than by using the essential matrix, as below, such as by using capture-time metadata information. The details of these four steps are explained below. [0017]
  • The first part of the method is the extraction of the 3-D camera motion. This uses a variant of a method proposed in U.S. patent application Ser. No. 09/064,889 filed Apr. 22, 1998 by Jasinschi et al. entitled “2-D Extended Image Generation from 3-D Data Extracted from a Video Sequence”. In summary, the camera motion is estimated for each consecutive pair of images by the following steps (a code sketch follows the list): [0018]
  • (a) Computing image feature points (corners) via the Kitchen-Rosenfeld corner detection operator, [0019]
  • (b) Computing image intensity contrast or variance variation; at each pixel the image intensity mean and the variance about this mean are computed within a rectangular window; a histogram of the variance for all pixels is computed; assuming that this histogram is unimodal, a mean and variance for this histogram are computed; pixels whose intensity contrast variance lies outside the histogram variance are not used. [0020]
  • (c) Tracking corner points; this uses a hierarchical matching method, as disclosed in the above-identified U.S. patent application. [0021]
  • (d) Pruning the matched corner points by verifying if each corner point has a MATCHINGGOODNESS value that is smaller than a given threshold; the MATCHINGGOODNESS is equal to the product of the image intensity contrast variance with the cross-correlation measure (used in (c)); this pruning method is used instead of the one proposed in the above-identified U.S. patent application, which verifies separately if a corner point has a cornerness value and a cross-correlation value which are separately below given threshold values. [0022]
  • (e) Tessellating the image into eight (8) contiguous rectangular regions; selecting, based on a pseudo-random number generator, one arbitrary corner point per rectangle. [0023]
  • (f) Computing the essential matrix E. [0024]
  • (g) Computing the translation T and rotation R matrices from E. [0025]
  • (h) Repeating steps (e)-(g) for a pre-determined number of times (such as 1000). [0026]
  • (i) Obtaining a single (“best”) T and R. [0027]
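  • The listed steps can be summarized in code. The following is a minimal, illustrative Python sketch, not the patent's implementation: it substitutes OpenCV's corner detector and pyramidal Lucas-Kanade tracker for the Kitchen-Rosenfeld detector and hierarchical matcher of steps (a)-(d), and lets OpenCV's RANSAC-based essential-matrix routine stand in for the repeated random sampling of steps (e)-(i); all function and variable names are illustrative assumptions.
    # Hypothetical sketch of steps (a)-(i) using OpenCV stand-ins.
    import cv2

    def estimate_camera_motion(img1_gray, img2_gray, focal=1.0, pp=(0.0, 0.0)):
        # (a) detect corner-like feature points in the first image
        pts1 = cv2.goodFeaturesToTrack(img1_gray, maxCorners=500,
                                       qualityLevel=0.01, minDistance=7)
        # (c) track the corners into the second image (pyramidal Lucas-Kanade)
        pts2, status, _err = cv2.calcOpticalFlowPyrLK(img1_gray, img2_gray, pts1, None)
        good = status.ravel() == 1          # (d) keep only well-matched corners
        p1 = pts1[good].reshape(-1, 2)
        p2 = pts2[good].reshape(-1, 2)
        # (f)-(h) robust essential-matrix estimation; RANSAC stands in for the
        # repeated random sampling over the eight tessellated image regions
        E, _inliers = cv2.findEssentialMat(p1, p2, focal=focal, pp=pp,
                                           method=cv2.RANSAC, prob=0.999, threshold=1.0)
        # (g), (i) recover a single "best" rotation R and unit translation T from E
        _, R, T, _mask = cv2.recoverPose(E, p1, p2, focal=focal, pp=pp)
        return R, T.ravel()                 # T has unit norm (sum of squares = 1.0)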
  • The second part of the method consists of the computation of the amounts of motion in image coordinates. In the case of no rotational camera motion, the x and y image motion components of the ith feature point (in normalized coordinates) are given by: [0028]
  • v_x^i = f*(x^i*T_z − T_x)/Z^i,
  • v_y^i = f*(y^i*T_z − T_y)/Z^i,
  • where T_x, T_y, T_z are the three translational world (camera) motion components defined with respect to the global 3-D Cartesian world coordinate system OX, OY, OZ with origin at point O, Z^i is the 3-D depth associated with the ith feature point, f is the camera focal length, and x^i, y^i are the feature point image coordinates (they vary between −1 and 1; the image in normalized image coordinates is of size 2×2). [0029]
  • Camera translational motion (tracking or booming) occurs when T_z = 0 and either (or both) T_x ≠ 0, T_y ≠ 0. The amount of translational motion is defined by the “area” in the image induced by the camera motion; this area is given by a vertical (for horizontal—OX) motion or horizontal (for vertical—OY) motion stripe. The thickness of these stripes is proportional to v_x, v_y; in order to obtain a more robust value for these areas, an average over many feature points is taken: an imaginary vertical (horizontal) line is used, say passing through the image center, and the velocity of all feature points close to this line (within a given tolerance distance) is computed; this requires the knowledge of depth values, which are computed as in the above-identified U.S. patent application. This gives the areas for vertical and horizontal translation: [0030]
  • a_x = |T_x/<Z>|,
  • a_y = |T_y/<Z>|,
  • where <Z> is the average depth of the features on the imaginary line(s); the operator <.> takes the average value of a given variable. The sign of the direction of motion is that of T_x, T_y given by: [0031]
  • sign(T_x) = T_x/|T_x|,
  • sign(T_y) = T_y/|T_y|.
  • The convention used is: [0032]
  • (a.1) right tracking: sign(T_x) < 0 [0033]
  • (a.2) left tracking: sign(T_x) > 0 [0034]
  • (b.1) upward booming: sign(T_y) > 0 [0035]
  • (b.2) downward booming: sign(T_y) < 0 [0036]
  • This completes the description of the translational motion amounts. [0037]
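  • As a worked sketch of the translational case, the following hypothetical Python helpers (names and inputs are illustrative assumptions) evaluate the image-velocity model and the tracking/booming amounts and signs defined above, assuming the per-feature depths Z have already been recovered as described.
    import numpy as np

    def image_velocity(x, y, Z, T, f=1.0):
        """Per-feature image velocity for purely translational camera motion."""
        T_x, T_y, T_z = T
        v_x = f * (x * T_z - T_x) / Z
        v_y = f * (y * T_z - T_y) / Z
        return v_x, v_y

    def translational_amounts(T, line_feature_depths):
        """Tracking/booming amounts a_x = |T_x/<Z>|, a_y = |T_y/<Z>| and their signs,
        averaged over features near the imaginary vertical/horizontal lines."""
        T_x, T_y, _T_z = T
        Z_avg = float(np.mean(line_feature_depths))   # <Z>, average feature depth
        a_x = abs(T_x / Z_avg)                        # area swept by horizontal (OX) translation
        a_y = abs(T_y / Z_avg)                        # area swept by vertical (OY) translation
        sign_x = np.sign(T_x)                         # < 0: right tracking, > 0: left tracking
        sign_y = np.sign(T_y)                         # > 0: upward booming, < 0: downward booming
        return a_x, a_y, sign_x, sign_y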
  • Dollying is defined for T_x = T_y = 0 and T_z ≠ 0. The dollying amount of motion is defined by the area spanned by an annulus centered about the image center in normalized coordinates. All feature points in the vicinity of an imaginary circle, centered about the image center, have their image velocities computed; due to pure dollying they move either forward or backward, thus generating a circle of smaller or larger size. It can be shown that the annulus area for a single feature is equal to: [0038]
  • a_z^i = π((v_x^i)² + (v_y^i)² + 2v_x^i x^i + 2v_y^i y^i)
  • Using that, for pure dollying, [0039]
  • v_x^i = (x^i T_z)/Z^i,
  • v_y^i = (y^i T_z)/Z^i,
  • we get that: [0040]
  • a_z^i = π(((x^i)² + (y^i)²)*((T_z/Z^i)² + 2T_z/Z^i))
  • This equation is normalized by dividing by the area of the circle, i.e., by π((x^i)² + (y^i)²). This provides a quantity that is independent of the imaginary circle's area. An average of a_z^i is taken over all the feature points inside a region of confidence defined in a neighborhood of the imaginary circle. Thus the amount of dollying is: [0041]
  • a_z = (T_z)²/<Z²> + 2T_z/<Z>.
  • The sign for the dolly motion is given by that of T_z: [0042]
  • (a.1) dolly forward: sign(T_z) < 0 [0043]
  • (a.2) dolly backward: sign(T_z) > 0 [0044]
  • It should be remarked that a circle in the normalized image coordinate system maps to an ellipse in the un-normalized (raster scan) coordinate system. This is important because, as it is known, dollying is associated with radially symmetric lines which meet at the FOE (FOC) and which are perpendicular to circles of constant image velocity. This completes the description of the dollying motion. [0045]
  • For rotational camera motion the amount of motion for panning and tilting is given by a_pan = Ω_y and a_tilt = Ω_x, where Ω_x = −R_{2,3} and Ω_y = −R_{1,3}, given that R_{i,j} (1 ≤ i, j ≤ 3) is an element of the rotational motion matrix R. Finally, for rolling, a_roll = 2/(2 + tan(Ω_z)), where Ω_z = −R_{1,2}. [0046]
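  • A small Python sketch of the dollying and rotational amounts follows; it simply evaluates the formulas above (the pan/tilt relations are taken as reconstructed here, and all names are illustrative assumptions).
    import numpy as np

    def dolly_amount(T_z, circle_feature_depths):
        """Amount of dollying a_z = (T_z)^2/<Z^2> + 2*T_z/<Z>, averaged over features
        near the imaginary circle; sign(T_z) < 0 is dolly forward, > 0 dolly backward."""
        Z = np.asarray(circle_feature_depths, dtype=float)
        return (T_z ** 2) / np.mean(Z ** 2) + 2.0 * T_z / np.mean(Z)

    def rotational_amounts(R):
        """Pan/tilt/roll amounts from the rotation matrix R; note the 0-based
        indexing, so R[1, 2] is the element written R_{2,3} in the text."""
        omega_x = -R[1, 2]                     # tilt rate, -R_{2,3}
        omega_y = -R[0, 2]                     # pan rate,  -R_{1,3}
        omega_z = -R[0, 1]                     # roll rate, -R_{1,2}
        a_pan = omega_y
        a_tilt = omega_x
        a_roll = 2.0 / (2.0 + np.tan(omega_z))
        return a_pan, a_tilt, a_roll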
  • The focus of expansion (FOE) or the focus of contraction (FOC) is a complement to these amounts of motion; the FOE (FOC) is the (imaginary) point in the image at which the directions of all image motions converge, such that they point away from it (toward it). Its position is defined by: [0047]
  • x_0 = T_x/T_z,
  • y_0 = T_y/T_z.
  • The FOE (FOC) may be used to discriminate points in the scene at which the viewer should focus his attention, say a news speaker or a sports athlete. [0048]
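  • A minimal Python sketch of the FOE/FOC position, again with illustrative names, follows directly from the two equations above and the dolly sign convention:
    def focus_of_expansion(T):
        """Return ("FOE" or "FOC", (x0, y0)) in normalized image coordinates,
        or None when there is no dolly component (T_z close to zero)."""
        T_x, T_y, T_z = T
        if abs(T_z) < 1e-9:
            return None
        label = "FOE" if T_z < 0 else "FOC"    # dolly forward expands, backward contracts
        return label, (T_x / T_z, T_y / T_z)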
  • The description of video data may be at different levels of temporal granularity. The description may be on a frame-by-frame basis or in terms of elementary segments. The frame-by-frame basis description contains the full information about the camera motion. The elementary segment descriptor is based on a building block descriptor. Using the concept of elementary segment descriptor gives flexibility in the resolution of the descriptor. [0049]
  • Given a time window on a given video data, the camera motion descriptor describes the video data in terms of the union of separate elementary segments, say of track, boom, dolly, tilt, roll and pan, or in terms of the union of joint elementary segments, say the joint description of track, boom, dolly, tilt, roll and pan. These two approaches are discussed below. A shot/sub-shot description gives an overall view of the camera motion types and motion amount present in that shot/sub-shot. [0050]
  • FIG. 5 shows an example of a distribution of motion types as they occur over time for given video data. The camera motion descriptor may describe the elementary segments, shown as white rectangles, either as a mixture or non-mixture of these. The mixture mode captures the global information about the camera motion parameters, disregarding detailed temporal information, by jointly describing multiple motion types, even if these motion types occur simultaneously. This level of detail is sufficient for a number of applications. [0051]
  • On the other hand the non-mixture mode captures the notion of pure motion type and their union within certain time intervals. The situations where multiple motion types occur simultaneously are described as a union of the description of pure motion types. In this mode of description the time window of a particular elementary segment may overlap with the time window of another elementary segment. This enhanced level of detail is necessary for a number of applications. [0052]
  • The fractional presence of a motion type (Δ_motion_type) within a given sequence of frames is defined as follows. Let total_duration be the duration of the temporal window for a given description. Then
  • Δ_motion_type = duration_motion_type/total_duration, where duration_motion_type represents the length in time for which the motion type occurs. [0053]
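  • As an illustration, the fractional presence can be computed from per-frame motion-type labels; the following Python sketch (the label format is an assumption) accumulates the duration of each motion type over the temporal window:
    def fractional_presence(frame_labels):
        """frame_labels: one set of motion-type names per frame, e.g.
        [{"PAN_LEFT"}, {"PAN_LEFT", "DOLLY_FORWARD"}, set(), ...].
        Returns Delta_motion_type = duration_motion_type / total_duration."""
        total_duration = len(frame_labels)
        counts = {}
        for labels in frame_labels:
            for motion_type in labels:
                counts[motion_type] = counts.get(motion_type, 0) + 1
        return {m: c / total_duration for m, c in counts.items()}

    # e.g. panning present in 3 of 4 frames gives {"PAN_LEFT": 0.75, ...}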
  • The “amount of motion” parameters describe “how much” of track, boom, dolly, pan, tilt, roll and zoom there is in an image. They depend upon the camera parameters. The amount of motion for a given camera motion type is defined as the fraction of the image, an area expressed in normalized coordinates, that is uncovered or covered due to a given camera motion type. The amount of motion may also be computed as the average of the displacement of feature points in the images. These features may be prominent image points, such as “corner” points detected through a corner point detector, or points describing the shape of simple geometrical objects, such as the corner of a rectangle. These parameters are independent of the video encoding format, frame rate or spatial resolution. [0054]
  • The camera motion descriptor is defined in the following Table: [0055]
    CameraMotionDescriptor
    NumSegmentDescription int
    DescriptionMode int
    Info[NumSegmentDescription] SegmentedCameraMotion
  • The NumSegmentDescription is the number of elementary segments being combined through the union operation. If DescriptionMode=0, this corresponds to the non-mixture mode, and if DescriptionMode=1, this corresponds to the mixture mode. [0056]
  • SegmentedCameraMotion is defined in the following Table: [0057]
    SegmentedCameraMotion
    start_time TimeStamp
    duration (sec.) float
    presence FractionalPresence
    speeds AmountofMotion
    FOE/FOC: horizontal position float
    FOE/FOC: vertical position float
  • The FOE/FOC parameters determine the position of the FOE/FOC when dolly/zoom is present. [0058]
  • The FractionalPresence is defined in the following Table: [0059]
    FractionalPresence
    TRACK_LEFT[0 . . . 1] float
    TRACK_RIGHT[0 . . . 1] float
    BOOM_DOWN[0 . . . 1] float
    BOOM_UP[0 . . . 1] float
    DOLLY_FORWARD[0 . . . 1] float
    DOLLY_BACKWARD[0 . . . 1] float
    PAN_LEFT[0 . . . 1] float
    PAN_RIGHT[0 . . . 1] float
    TILT_UP[0 . . . 1] float
    TILT_DOWN[0 . . . 1] float
    ROLL_CLOCKWISE[0 . . . 1] float
    ROLL_ANTICLOCKWISE[0 . . . 1] float
    ZOOM_IN[0 . . . 1] float
    ZOOM_OUT[0 . . . 1] float
    FIXED[0 . . . 1] float
  • The AmountofMotion is defined in the following Table: [0060]
    AmountofMotion
    TRACK_LEFT[0 . . . 1] float
    TRACK_RIGHT[0 . . . 1] float
    BOOM_DOWN[0 . . . 1] float
    BOOM_UP[0 . . . 1] float
    DOLLY_FORWARD[0 . . . 1] float
    DOLLY_BACKWARD[0 . . . 1] float
    PAN_LEFT[0 . . . 1] float
    PAN_RIGHT[0 . . . 1] float
    TILT_UP[0 . . . 1] float
    TILT_DOWN[0 . . . 1] float
    ROLL_CLOCKWISE[0 . . . 1] float
    ROLL_ANTICLOCKWISE[0 . . . 1] float
    ZOOM_IN[0 . . . 1] float
    ZOOM_OUT[0 . . . 1] float
  • The FractionalPresence and AmountofMotion data structures are expressed in the UML language, as suggested by the MPEG-7 community. The symbol [0 . . . 1] means that the field is optional. The operation of union of elementary segments may be realized with disjoint or overlapping time windows. If the DescriptionMode in CameraMotionDescriptor is 0, then inside each entry in the vector info[.] the “fractional presence” and the “AmountofMotion” have one and only one entry, i.e., for the “fractional presence” one entry with value 1 and the rest with value 0. This way the optional fields allow the descriptor to represent either a mixture of motion types or a single motion type. [0061]
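  • For illustration only, the descriptor tables above can be rendered as data structures; the following hypothetical Python dataclasses mirror the field names and treat the optional [0 . . . 1] fields as dictionaries keyed by motion type, but they are not part of the MPEG-7 syntax itself:
    from dataclasses import dataclass, field
    from typing import Dict, List, Optional

    MOTION_TYPES = [
        "TRACK_LEFT", "TRACK_RIGHT", "BOOM_DOWN", "BOOM_UP",
        "DOLLY_FORWARD", "DOLLY_BACKWARD", "PAN_LEFT", "PAN_RIGHT",
        "TILT_UP", "TILT_DOWN", "ROLL_CLOCKWISE", "ROLL_ANTICLOCKWISE",
        "ZOOM_IN", "ZOOM_OUT", "FIXED",
    ]

    @dataclass
    class SegmentedCameraMotion:
        start_time: float                                          # TimeStamp
        duration: float                                            # seconds
        presence: Dict[str, float] = field(default_factory=dict)   # FractionalPresence (optional fields)
        speeds: Dict[str, float] = field(default_factory=dict)     # AmountofMotion (optional fields)
        foe_foc_horizontal: Optional[float] = None                 # FOE/FOC horizontal position
        foe_foc_vertical: Optional[float] = None                   # FOE/FOC vertical position

    @dataclass
    class CameraMotionDescriptor:
        description_mode: int                                      # 0 = non-mixture, 1 = mixture
        info: List[SegmentedCameraMotion] = field(default_factory=list)

        @property
        def num_segment_description(self) -> int:
            return len(self.info)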
  • The fourth part of this method describes how to index video shots according to camera motion parameters. One set of parameters used for this indexing are the tracking, booming and dollying rates. These are complemented by the signs of the three translational camera motion parameters. Additionally the degree of tracking, booming or dollying is used. For this the ratio between the tracking, booming and dollying rates is computed. For indexing with respect to pure dollying, how much larger the dollying rate a_z is compared to the tracking and booming rates a_x, a_y is determined. Typically a ratio is used that goes from 1.0 to 5.0; a value of 1.0 indexes shots which contain camera dollying but which may also have an equal share of camera tracking and booming, while a value of 3.0 imposes a more stringent indexing of shots containing “strong” camera dollying. For indexing with respect to tracking and booming, how much larger the tracking and booming rates are compared to the dollying rate is determined. Similar ratios between 1.0 and 5.0 are used. [0062]
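  • A hypothetical per-frame classification using such a ratio might look as follows in Python; the exact comparison (here the requested rate must exceed the larger of the other two by the chosen ratio) is an assumption, not a statement of the patent's formula:
    def frame_event_flags(rates, motion="dolly", ratio=3.0):
        """rates: per-frame (a_x, a_y, a_z) tracking/booming/dollying rates.
        Returns a list of 0/1 flags; ratio near 1.0 is permissive, near 5.0 strict."""
        flags = []
        for a_x, a_y, a_z in rates:
            if motion == "dolly":
                hit = a_z >= ratio * max(a_x, a_y)
            elif motion == "track":
                hit = a_x >= ratio * max(a_y, a_z)
            else:  # "boom"
                hit = a_y >= ratio * max(a_x, a_z)
            flags.append(1 if hit else 0)
        return flags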
  • This indexing is realized on metadata files containing camera motion parameters, rates of tracking, booming and dollying, and the FOE (FOC). Given a video shot database, a set of specifications is used, say, indexing the shots for “strong” dolly in. The indexing result is shown in a file containing all the shots in the database with a string of zeros (0) and ones (1), the time intervals for which an event occurs, and a number between 0 and 1 giving the number of 1s with respect to the total number of frames in the shot. The 0/1 string determines if a given frame has an event (panning, zooming), thus 1, or does not have it, thus 0. In order to make the results more consistent, this string of 0s and 1s is post-processed by: 1. Deleting isolated 1s, i.e., flanked on both sides by at least two zeros; 2. Filling in gaps of 1, 2, and 3 contiguous 0s, i.e., the configurations 101, 1001, and 10001; these are transformed to 111, 1111, and 11111, respectively; 3. Removing isolated 1s at the boundaries, i.e., for the string start 100 goes to 000, and for the string end 001 goes to 000. Based on these numbers the shots in the database are rated in decreasing order. [0063]
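  • The three post-processing rules can be written compactly; the following Python sketch (regular expressions are an implementation choice, not the patent's) applies them in the stated order to a 0/1 flag string:
    import re

    def postprocess_flags(flag_string):
        """Clean up a 0/1 string such as "0011011100" using the three rules above."""
        s = flag_string
        # 1. delete isolated 1s flanked on both sides by at least two zeros
        s = re.sub(r"(?<=00)1(?=00)", "0", s)
        # 2. fill gaps of 1, 2 or 3 contiguous 0s between 1s (101 -> 111, 1001 -> 1111, ...)
        s = re.sub(r"(?<=1)0{1,3}(?=1)", lambda m: "1" * len(m.group()), s)
        # 3. remove isolated 1s at the boundaries (leading 100 -> 000, trailing 001 -> 000)
        s = re.sub(r"^100", "000", s)
        s = re.sub(r"001$", "000", s)
        return s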
  • The video shot query is done based on the user specifications, i.e., the user wants the query for “strong” dollying and uses the descriptors extracted as discussed above. More specifically given the metadata files containing the camera motion parameters and the rates of tracking, booming and dollying, and a set of user specifications, the query is realized on a video shot database, as shown in FIG. 2. [0064]
  • A graphical user interface (GUI) is used as an interface for the query. The user specifies: [0065]
  • A. What type of camera motion, tracking, booming or dollying, he wants to query on: [0066]
  • 1. dolly forward, [0067]
  • 2. dolly backward, [0068]
  • 3. track right [0069]
  • 4. track left [0070]
  • 5. boom up [0071]
  • 6. boom down. [0072]
  • One of these six options is clicked by the user in a specially designed box. [0073]
  • B. The degree of tracking, booming or dollying. This degree is given by the ratio between the tracking, booming and dollying rates. For indexing with respect to pure dollying, how much larger the dollying rate is compared to the tracking and booming rates is determined. Typically a ratio is used that goes from 1.0 to 5.0; 1.0 denotes indexing of shots that contain camera dollying but may also have an equal share of tracking and booming, while a value of 3.0 imposes a more stringent indexing of shots containing “strong” dollying. For indexing with respect to tracking or booming, how much larger the tracking or booming rates are compared to the dollying rate is determined. Similar ratios between 1.0 and 5.0 are used. This is chosen in the graphical user interface by a horizontal scrolling bar. Once item A is specified, the user chooses the degree of tracking/booming/dollying by positioning the scrolling bar at the appropriate position. [0074]
  • After this the user submits the query on the system shown in FIG. 3. As a result the GUI displays the four best ranked shots by displaying a thumbnail of each shot, with a timeline of frames showing the highlighted ranked frames. Finally the user plays each of the four shots between the ranked frames. [0075]
  • The query result is shown in a file containing all the shots in the database with a string of zeros and ones. The 0/1 string determines if a given frame has an event (panning or zooming). This string may be further compressed by using techniques, such as run length/arithmetic coding, for efficient storage and transmission. [0076]
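  • As one simple possibility (an assumption, since the text only names the technique), a run-length encoding of the 0/1 string can be sketched in a few lines of Python:
    from itertools import groupby

    def run_length_encode(flag_string):
        """Compress a 0/1 flag string into (symbol, run_length) pairs."""
        return [(symbol, len(list(run))) for symbol, run in groupby(flag_string)]

    # e.g. run_length_encode("0001111100") -> [("0", 3), ("1", 5), ("0", 2)]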
  • FIG. 4 shows the result of a query for dolly forward. The search was done using a video shot database consisting of a total of 52 shots. These shots were manually chosen from the MPEG-7 video test material. For each shot, camera motion parameters were extracted per successive pair of frames. The per-frame processing time varied depending on the quality of the image intensity information, e.g., images with strong contrast and “texture” information were rich in feature points, thus allowing an adequate camera parameter estimation, while other images with poor or almost nonexistent contrast information did not permit an adequate estimation. On average this processing time was about 2 minutes, varying between 1 and 3 minutes. After all the 52 shots were processed, they were indexed. The resulting metadata files were stored. [0077]
  • As an example of a camera parameter metadata file below are the first 3 lines for a shot which has a total of 192 processed frames: [0078]
    192 13
    3 0.124551 −0.279116 0.952146 151.417717 28.582283 111.794757
    68.205243 107.560949 72.439051 0.212731 −0.212731 0.212731
    4 0.121448 −0.545849 0.829035 178.158197 1.841803 90.481436
    89.518564 91.777726 88.222274 0.290051 −0.290051 0.290051
    5 0.006156 −0.411413 0.911428 163.579885 16.420115 93.838807
    86.161193 74.059700 105.940300 0.373067 −0.373067 0.373067
  • For example, the first three columns correspond to the (normalized) T_x, T_y, T_z translational camera motion components (the translational motion is normalized to have the sum of its squares equal to 1.0). This shot shows very strong camera dolly backward; therefore T_z > T_x, T_y. [0079]
  • Following is an example of the indexing classification metadata file for the same shot. [0080]
    192 10
    3 0.109477 0.206756 0.683767 0.002532 0.004782 0.049684 0.001266
    0.002391 0.029065
    4 0.078388 0.335339 0.586273 0.001917 0.008200 0.045036 0.000958
    0.004100 0.026346
    5 0.003976 0.284010 0.712014 0.000096 0.006845 0.053912 0.000048
    0.003423 0.031539.
  • The last three columns correspond to the tracking, booming and dollying rates; the effects of dolly backward show clearly: the dollying rate is larger than the tracking and booming rates. [0081]
  • Given the indexing specifications, the indexing is in almost real-time; it just requires parsing the metadata files. Together with this, the indexed shots are ranked according to the total number of frames/shot; the first four best ranked shots were shown via a GUI. Next, an example of an indexing metadata file is shown; this file resulted from a request for dolly forward for multiple shots: [0082]
    shot03.bmp
    1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    0 0 0 0 0 0 0.774194
    shot04.bmp
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 1 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1 1 1 1 1
    1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    1 1 1 0.387665
    shot05.bmp
    0 0 0 0 0 0 0 0 0 0 1 1 1 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0 1 1 1 1 1 1 1 0 0 0 0 0 0 0 0 0 1 1 0 0 0 0 0
    0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 1 1
    1 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0 0
    0 0.126984
    shot06.bmp
    0 0 0 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1 1
    1 1 1 1 1 1 0.903226.
  • For each shot, the first string identifies it, followed by a string of 0s and 1s; the last (floating point) number is an index that gives the ratio of 1s divided by the total number of 0s and 1s; the latter number could also be used for ranking purposes. These shots are ranked by counting the total number of contiguous 1s; in order to make this ranking more effective we post-process the strings of 0s and 1s, as explained before. After this the shots are ranked. Following is an example of ranking for subsequent shots: [0083]
    shot04 144 227
    shot06 4 31
    shot03 1 25
    shot05 53 59.
  • Shot #04 has the longest string of contiguous 1s, from frame 144 to frame 227, followed by shot #06, shot #03 and shot #05. [0084]
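  • A hypothetical Python sketch of this ranking step, which finds the longest contiguous run of 1s per shot and sorts the shots by it (names and the exact tie-breaking are assumptions), is:
    def rank_shots(shot_flags):
        """shot_flags: {shot_id: post-processed 0/1 flag string}.
        Returns (shot_id, run_start, run_end, run_length) sorted by run length."""
        ranked = []
        for shot_id, flags in shot_flags.items():
            best_len = best_start = 0
            run_len = run_start = 0
            for i, bit in enumerate(flags):
                if bit == "1":
                    if run_len == 0:
                        run_start = i
                    run_len += 1
                    if run_len > best_len:
                        best_len, best_start = run_len, run_start
                else:
                    run_len = 0
            ranked.append((shot_id, best_start, best_start + best_len - 1, best_len))
        ranked.sort(key=lambda entry: entry[3], reverse=True)
        return ranked

    # With the strings above this ranks shot04 first (longest run, frames 144-227).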
  • Thus the present invention provides a method of classifying and searching video databases based on 3-D camera motion parameters which provides a descriptor for indexing video shots according to the occurrence of particular camera motions and their degree. [0085]

Claims (7)

What is claimed is:
1. A method of indexing and searching a video database containing a plurality of video shots comprising the steps of:
for each video shot estimating 3-D camera motion parameters from successive pairs of images in the video shot;
computing a rate of motion for each image from the video shot using the 3-D camera motion parameters;
indexing the video shot by types of camera motion based on the rate of motion and a sign of the 3-D camera motion parameters; and
repeating the estimating, computing and indexing steps for each video shot in the video database.
2. The method as recited in claim 1 further comprising the step of searching for video shots within the video database based on a selected one of the types of camera motion.
3. The method as recited in claim 2 wherein the types of camera motion are selected from the group consisting of tracking, booming, dollying, panning, rolling, tilting and zooming.
4. The method as recited in claim 1 wherein the estimating step comprises the steps of:
computing image feature points from each consecutive pair of images in the video shot;
computing image intensity contrast variation to select pixels from the images to be used;
tracking the image feature points from image to image in the given shot to identify matched feature points;
pruning the matched feature points using the image intensity contrast variation; and
computing iteratively from the matched feature points a best set of matrices representing translation and rotation of the images.
5. The method as recited in claim 4 wherein the computing step comprises the steps of:
computing rates of tracking, booming and dollying from the translation matrix for each image feature point;
computing a focus of interest as a point in each image at which all image motions converge as a function of the translation matrix; and
obtaining a vector descriptor for each consecutive pair of images as a function of the rates of tracking, booming and zooming, and the focus of interest as the rate of motion.
6. The method as recited in claim 5 wherein the indexing step comprises the steps of:
computing how much larger the tracking and booming rates are compared to the dollying rate as a first ratio;
computing how much larger the dollying rate is compared to the tracking and booming rates as a second ratio;
generating an index file for the video shot containing a string of ones and zeros for each of the types of camera motion.
7. The method as recited in claim 6 wherein the searching step comprises the steps of:
querying the video database with a selected one of the types of camera motion;
processing the selected one of the types of camera motion to find the video shots satisfying the selected one of the types of camera motion; and
displaying the video shots satisfying the processing step.
US10/781,018 1999-02-01 2004-02-18 Method and apparatus for generating metadata for classifying and searching video databases based on 3-D camera motion Abandoned US20040165061A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US10/781,018 US20040165061A1 (en) 1999-02-01 2004-02-18 Method and apparatus for generating metadata for classifying and searching video databases based on 3-D camera motion

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
US11820499P 1999-02-01 1999-02-01
US09/495,091 US6748158B1 (en) 1999-02-01 2000-02-01 Method for classifying and searching video databases based on 3-D camera motion
US10/781,018 US20040165061A1 (en) 1999-02-01 2004-02-18 Method and apparatus for generating metadata for classifying and searching video databases based on 3-D camera motion

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US09/495,091 Division US6748158B1 (en) 1999-02-01 2000-02-01 Method for classifying and searching video databases based on 3-D camera motion

Publications (1)

Publication Number Publication Date
US20040165061A1 true US20040165061A1 (en) 2004-08-26

Family

ID=32328593

Family Applications (2)

Application Number Title Priority Date Filing Date
US09/495,091 Expired - Fee Related US6748158B1 (en) 1999-02-01 2000-02-01 Method for classifying and searching video databases based on 3-D camera motion
US10/781,018 Abandoned US20040165061A1 (en) 1999-02-01 2004-02-18 Method and apparatus for generating metadata for classifying and searching video databases based on 3-D camera motion

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US09/495,091 Expired - Fee Related US6748158B1 (en) 1999-02-01 2000-02-01 Method for classifying and searching video databases based on 3-D camera motion

Country Status (1)

Country Link
US (2) US6748158B1 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100384220C (en) * 2006-01-17 2008-04-23 东南大学 Video camera rating data collecting method and its rating plate
US20110170841A1 (en) * 2009-01-14 2011-07-14 Sony Corporation Information processing device, information processing method and program
EP2651130A1 (en) * 2012-04-10 2013-10-16 Acer Incorporated Method for assisting in video compression using rotation operation and image capturing device thereof
CN114996600A (en) * 2022-08-03 2022-09-02 成都经纬达空间信息技术有限公司 Multi-temporal image management database data writing and reading method and device

Families Citing this family (28)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6748158B1 (en) * 1999-02-01 2004-06-08 Grass Valley (U.S.) Inc. Method for classifying and searching video databases based on 3-D camera motion
US7050110B1 (en) * 1999-10-29 2006-05-23 Intel Corporation Method and system for generating annotations video
US6807361B1 (en) * 2000-07-18 2004-10-19 Fuji Xerox Co., Ltd. Interactive custom video creation system
US7751683B1 (en) * 2000-11-10 2010-07-06 International Business Machines Corporation Scene change marking for thumbnail extraction
NO20020417L (en) * 2001-01-25 2002-07-26 Ensequence Inc Selective viewing of video based on one or more themes
US6965645B2 (en) * 2001-09-25 2005-11-15 Microsoft Corporation Content-based characterization of video frame sequences
EP1376471A1 (en) * 2002-06-19 2004-01-02 STMicroelectronics S.r.l. Motion estimation for stabilization of an image sequence
EP1377040A1 (en) * 2002-06-19 2004-01-02 STMicroelectronics S.r.l. Method of stabilizing an image sequence
US7483624B2 (en) * 2002-08-30 2009-01-27 Hewlett-Packard Development Company, L.P. System and method for indexing a video sequence
US7734144B2 (en) * 2002-10-30 2010-06-08 Koninklijke Philips Electronics N.V. Method and apparatus for editing source video to provide video image stabilization
JP2006005682A (en) * 2004-06-17 2006-01-05 Toshiba Corp Data structure of meta-data of dynamic image and reproducing method therefor
JP2006050105A (en) * 2004-08-02 2006-02-16 Toshiba Corp Structure of metadata and its reproducing device and method
JP2006050275A (en) * 2004-08-04 2006-02-16 Toshiba Corp Structure of metadata and its reproduction method
JP4250574B2 (en) * 2004-08-05 2009-04-08 株式会社東芝 Metadata structure and method of reproducing metadata
JP2006080918A (en) * 2004-09-09 2006-03-23 Toshiba Corp Data structure and reproduction device of metadata
JP4133981B2 (en) * 2004-09-09 2008-08-13 株式会社東芝 Metadata and video playback device
JP2006099671A (en) * 2004-09-30 2006-04-13 Toshiba Corp Search table of metadata of moving image
JP2006113632A (en) * 2004-10-12 2006-04-27 Toshiba Corp Data structure of metadata, metadata reproduction device, and method therefor
US7689631B2 (en) * 2005-05-31 2010-03-30 SAP AG Method for utilizing audience-specific metadata
JP2007235324A (en) * 2006-02-28 2007-09-13 Toshiba Corp Information processing apparatus and information processing method executing decryption or encryption
JP4673916B2 (en) * 2006-03-10 2011-04-20 パイオニア株式会社 Information processing apparatus, information processing method, and information processing program
US8117210B2 (en) * 2006-10-06 2012-02-14 Eastman Kodak Company Sampling image records from a collection based on a change metric
WO2008111308A1 (en) * 2007-03-12 2008-09-18 Panasonic Corporation Content imaging device
JP4645707B2 (en) * 2008-09-01 2011-03-09 ソニー株式会社 Content data processing device
US9171578B2 (en) 2010-08-06 2015-10-27 Futurewei Technologies, Inc. Video skimming methods and systems
US20160148648A1 (en) * 2014-11-20 2016-05-26 Facebook, Inc. Systems and methods for improving stabilization in time-lapse media content
US10229325B2 (en) 2017-02-28 2019-03-12 International Business Machines Corporation Motion based video searching system using a defined movement path for an object
DE102018009571A1 (en) * 2018-12-05 2020-06-10 Lawo Holding Ag Method and device for the automatic evaluation and provision of video signals of an event

Citations (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4969036A (en) * 1989-03-31 1990-11-06 Bir Bhanu System for computing the self-motion of moving images devices
US5012270A (en) * 1988-03-10 1991-04-30 Canon Kabushiki Kaisha Image shake detecting device
US5259037A (en) * 1991-02-07 1993-11-02 Hughes Training, Inc. Automated video imagery database generation using photogrammetry
US5267034A (en) * 1991-03-11 1993-11-30 Institute For Personalized Information Environment Camera work detecting method
US5502482A (en) * 1992-08-12 1996-03-26 British Broadcasting Corporation Derivation of studio camera position and motion from the camera image
US5582173A (en) * 1995-09-18 1996-12-10 Siemens Medical Systems, Inc. System and method for 3-D medical imaging using 2-D scan data
US5671335A (en) * 1991-05-23 1997-09-23 Allen-Bradley Company, Inc. Process optimization using a neural network
US5809202A (en) * 1992-11-09 1998-09-15 Matsushita Electric Industrial Co., Ltd. Recording medium, an apparatus for recording a moving image, an apparatus and a system for generating a digest of a moving image, and a method of the same
US5956026A (en) * 1997-12-19 1999-09-21 Sharp Laboratories Of America, Inc. Method for hierarchical summarization and browsing of digital video
US6038393A (en) * 1997-09-22 2000-03-14 Unisys Corp. Software development tool to accept object modeling data from a wide variety of other vendors and filter the format into a format that is able to be stored in OMG compliant UML representation
US6070167A (en) * 1997-09-29 2000-05-30 Sharp Laboratories Of America, Inc. Hierarchical method and system for object-based audiovisual descriptive tagging of images for information retrieval, editing, and manipulation
US6195497B1 (en) * 1993-10-25 2001-02-27 Hitachi, Ltd. Associated image retrieving apparatus and method
US6195122B1 (en) * 1995-01-31 2001-02-27 Robert Vincent Spatial referenced photography
US6208345B1 (en) * 1998-04-15 2001-03-27 Adc Telecommunications, Inc. Visual data integration system and method
US6282362B1 (en) * 1995-11-07 2001-08-28 Trimble Navigation Limited Geographical position/image digital recording and display system
US6337688B1 (en) * 1999-01-29 2002-01-08 International Business Machines Corporation Method and system for constructing a virtual reality environment from spatially related recorded images
US6504569B1 (en) * 1998-04-22 2003-01-07 Grass Valley (U.S.), Inc. 2-D extended image generation from 3-D data extracted from a video sequence
US6748158B1 (en) * 1999-02-01 2004-06-08 Grass Valley (U.S.) Inc. Method for classifying and searching video databases based on 3-D camera motion

Patent Citations (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5012270A (en) * 1988-03-10 1991-04-30 Canon Kabushiki Kaisha Image shake detecting device
US4969036A (en) * 1989-03-31 1990-11-06 Bir Bhanu System for computing the self-motion of moving images devices
US5259037A (en) * 1991-02-07 1993-11-02 Hughes Training, Inc. Automated video imagery database generation using photogrammetry
US5267034A (en) * 1991-03-11 1993-11-30 Institute For Personalized Information Environment Camera work detecting method
US5671335A (en) * 1991-05-23 1997-09-23 Allen-Bradley Company, Inc. Process optimization using a neural network
US5502482A (en) * 1992-08-12 1996-03-26 British Broadcasting Corporation Derivation of studio camera position and motion from the camera image
US5809202A (en) * 1992-11-09 1998-09-15 Matsushita Electric Industrial Co., Ltd. Recording medium, an apparatus for recording a moving image, an apparatus and a system for generating a digest of a moving image, and a method of the same
US6195497B1 (en) * 1993-10-25 2001-02-27 Hitachi, Ltd. Associated image retrieving apparatus and method
US6195122B1 (en) * 1995-01-31 2001-02-27 Robert Vincent Spatial referenced photography
US6292215B1 (en) * 1995-01-31 2001-09-18 Transcenic L.L.C. Apparatus for referencing and sorting images in a three-dimensional system
US5582173A (en) * 1995-09-18 1996-12-10 Siemens Medical Systems, Inc. System and method for 3-D medical imaging using 2-D scan data
US6282362B1 (en) * 1995-11-07 2001-08-28 Trimble Navigation Limited Geographical position/image digital recording and display system
US6038393A (en) * 1997-09-22 2000-03-14 Unisys Corp. Software development tool to accept object modeling data from a wide variety of other vendors and filter the format into a format that is able to be stored in OMG compliant UML representation
US6070167A (en) * 1997-09-29 2000-05-30 Sharp Laboratories Of America, Inc. Hierarchical method and system for object-based audiovisual descriptive tagging of images for information retrieval, editing, and manipulation
US5956026A (en) * 1997-12-19 1999-09-21 Sharp Laboratories Of America, Inc. Method for hierarchical summarization and browsing of digital video
US6208345B1 (en) * 1998-04-15 2001-03-27 Adc Telecommunications, Inc. Visual data integration system and method
US6504569B1 (en) * 1998-04-22 2003-01-07 Grass Valley (U.S.), Inc. 2-D extended image generation from 3-D data extracted from a video sequence
US6337688B1 (en) * 1999-01-29 2002-01-08 International Business Machines Corporation Method and system for constructing a virtual reality environment from spatially related recorded images
US6748158B1 (en) * 1999-02-01 2004-06-08 Grass Valley (U.S.) Inc. Method for classifying and searching video databases based on 3-D camera motion

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100384220C (en) * 2006-01-17 2008-04-23 东南大学 Video camera calibration data collecting method and its calibration plate
US20110170841A1 (en) * 2009-01-14 2011-07-14 Sony Corporation Information processing device, information processing method and program
US9734406B2 (en) * 2009-01-14 2017-08-15 Sony Corporation Information processing device, information processing method and program
EP2651130A1 (en) * 2012-04-10 2013-10-16 Acer Incorporated Method for assisting in video compression using rotation operation and image capturing device thereof
CN114996600A (en) * 2022-08-03 2022-09-02 成都经纬达空间信息技术有限公司 Multi-temporal image management database data writing and reading method and device

Also Published As

Publication number Publication date
US6748158B1 (en) 2004-06-08

Similar Documents

Publication Publication Date Title
US6748158B1 (en) Method for classifying and searching video databases based on 3-D camera motion
US6956573B1 (en) Method and apparatus for efficiently representing, storing and accessing video information
Zhang et al. Content-based video retrieval and compression: A unified solution
Aigrain et al. Content-based representation and retrieval of visual media: A state-of-the-art review
Zhong et al. Spatio-temporal video search using the object based video representation
US7151852B2 (en) Method and system for segmentation, classification, and summarization of video images
Kasturi et al. An evaluation of color histogram based methods in video indexing
US7010036B1 (en) Descriptor for a video sequence and image retrieval system using said descriptor
Doulamis et al. Efficient summarization of stereoscopic video sequences
Peng et al. Keyframe-based video summary using visual attention clues
Yuan et al. Fast and robust short video clip search using an index structure
EP0976089A1 (en) Method and apparatus for efficiently representing, storing and accessing video information
US20040207656A1 (en) Apparatus and method for abstracting summarization video using shape information of object, and video summarization and indexing system and method using the same
Ngo et al. Recent advances in content-based video analysis
Jeannin et al. Motion descriptors for content-based video representation
Jeannin et al. Video motion representation for improved content access
Yoshitaka et al. Violone: Video retrieval by motion example
JP2002513487A (en) Algorithms and systems for video search based on object-oriented content
Ferreira et al. Towards key-frame extraction methods for 3D video: a review
Aner-Wolf et al. Video summaries and cross-referencing through mosaic-based representation
US8692852B2 (en) Intelligent display method for multimedia mobile terminal
Cherfaoui et al. Two-stage strategy for indexing and presenting video
Ioka et al. Estimation of motion vectors and their application to scene retrieval
Taniguchi et al. PanoramaExcerpts: Video cataloging by automatic synthesis and layout of panoramic images
Cai et al. Video anatomy: cutting video volume for profile

Legal Events

Date Code Title Description
AS Assignment

Owner name: THOMSON LICENSING S.A., FRANCE

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:JASINSCHI, RADU S.;NAVEEN, THUMPUDI;REEL/FRAME:015009/0255;SIGNING DATES FROM 20000129 TO 20000410

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION