WO2005098751A1 - Crowd detection - Google Patents

Crowd detection

Info

Publication number
WO2005098751A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
images
vehicle
environment
trajectories
Prior art date
Application number
PCT/IL2005/000382
Other languages
French (fr)
Inventor
Pinchas Reisman
Ofer Mano
Shmuel Avidan
Amnon Shashua
Original Assignee
Mobileye Technologies Limited
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Mobileye Technologies Limited filed Critical Mobileye Technologies Limited
Publication of WO2005098751A1 publication Critical patent/WO2005098751A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/269 Analysis of motion using gradient-based methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/182 Network patterns, e.g. roads or rivers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/30 Subject of image; Context of image processing
    • G06T2207/30241 Trajectory

Definitions

  • CWAS: collision warning/avoidance systems
  • An aspect of some embodiments of the present invention relates to providing a method for determining whether an ensemble of moving objects is present in an environment.
  • an aspect of some embodiments of the invention relates to determining the presence of the ensemble responsive to optical data acquired for the environment.
  • the optical data comprises patterns of optic flow generated responsive to optical data comprised in a sequence of camera images of a scene in the environment.
  • the inventors have determined that in a sequence of camera images of a scene, an ensemble of moving objects in the scene will often generate patterns of optic flow having characteristics associated with the presence of the ensemble in the scene. These characteristics are usually sufficiently correlated with the ensemble so that a degree to which the characteristics are expressed in the sequence of images may be used to indicate the presence or absence of the ensemble in the scene.
  • the sequence of camera images is acquired by a camera moving towards the environment and a degree to which the images exhibit inward optic flow is used to determine whether or not the ensemble is present.
  • optical data in the sequence of camera images is represented as a function of coordinates in a space-time (ST) volume defined by the images.
  • An ST volume is an optionally rectangular volume defined by arraying the images parallel to each other and aligned one behind the other in the order in which they were acquired.
  • a location of a given pixel in the images is determined by a time coordinate and two spatial "image" coordinates.
  • the time coordinate is measured along a t-axis perpendicular to the planes of the camera images.
  • the two spatial image coordinates are measured along spatial axes parallel to the planes of the camera images, which are conventionally x and y orthogonal image axes.
  • Planes parallel to the xt-plane of an ST volume are referred to as epipolar or EPI planes.
  • the x and y image coordinates of a pixel in a camera image acquired at a given time t, as measured along the t-axis, correspond to "real world" x and y-coordinates of a feature in the scene imaged on the pixel at the time t.
  • Pixels in the ST volume that image a same feature in the scene at different times t trace out a line, hereinafter referred to as an "image trajectory", in the ST volume.
  • Image trajectories of features in a scene at a constant distance from a moving camera are located in an EPI plane.
  • real world coordinates are represented by capitalized letters, while camera image coordinates are represented by small letters.
  • the image x-axis and y-axis are defined to correspond respectively to real world X and Y-axes so that for a displacement of the camera along the positive world X-axis or Y-axis, a feature in a camera image corresponding to a stationary feature in the real world displaces along the positive image x-axis or positive image y-axis respectively.
  • the origin of world coordinates is assumed to be at the optical center of the camera.
  • the world X-axis is a horizontal axis parallel to the ground
  • the world Y-axis a vertical axis perpendicular to the ground
  • the world Z-axis coincides with the optic axis of the camera. If the camera is moving in the direction of its optic axis, it is moving along the positive Z-axis. As the camera moves towards the environment, stationary features in the environment that are imaged in the sequence of camera images are characterized by "outward" optic flow, away from the optic axis of the camera.
  • an ensemble of objects moving in different directions in the environment generally provides a plurality of features in the sequence of images that exhibit "inward" optic flow towards the optic axis.
  • the corresponding image trajectories of the features in the ST volume move inward, towards the ST axis.
  • a measure of a degree to which image trajectories in the ST volume exhibit inward optic flow is used to indicate presence of the ensemble in the environment.
  • image trajectories associated with an ensemble of moving objects in an environment often exhibit a relatively high incidence of intersecting image trajectories. For example, a crowd of people crossing a street in both directions at a zebra crossing will produce many image trajectories that cross each other as people pass one another and randomly occlude one another in the sequence of images.
  • a measure of the frequency with which image trajectories in the ST volume intersect is used to indicate presence of the ensemble in the environment.
  • the environment is an automotive environment and the ensemble is a crowd of people.
  • a method of determining the presence of an ensemble of moving objects in an environment comprising: acquiring a plurality of images of a scene in the environment; processing the images to determine optic flow of features in the scene; and determining whether an ensemble of moving objects is present in the environment responsive to the optic flow.
  • the method comprises determining a degree to which the optic flow exhibits inward optic flow.
  • the method comprises determining whether the ensemble is present responsive to the degree of inward optic flow.
  • determining optic flow comprises determining image trajectories of features in the scene and using the image trajectories to determine optic flow.
  • determining image trajectories comprises determining image trajectories that lie in at least one EPI plane of a space time volume defined by the images.
  • the at least one EPI plane comprises a plurality of planes.
  • the method comprises determining a degree to which the image trajectories intersect.
  • the method comprises determining whether the ensemble is present responsive to the degree to which the image trajectories intersect.
  • the images are acquired under conditions for which stationary features in the environment exhibit outward optic flow.
  • the images are acquired by a camera mounted to a vehicle.
  • the vehicle is an automotive vehicle.
  • the ensemble of moving objects is a crowd of people.
  • apparatus for detecting presence of a crowd of people in an environment comprising: a camera that acquires images of a scene in the environment; and a processor that processes the images to determine presence of a crowd of people in accordance with an embodiment of the invention.
  • the apparatus is adapted to be mounted in a vehicle.
  • the vehicle is an automotive vehicle.
  • Fig. 1A schematically shows a vehicle comprising a CWAS moving along a road in an urban environment and image trajectories of features in the environment in an ST volume defined by a sequence of images acquired by the CWAS in accordance with an embodiment of the invention.
  • Fig. 1B shows a plan view of the environment and vehicle shown in Fig. 1A.
  • Fig. 1C schematically shows a plan view of an epipolar (EPI) plane through the ST volume and image trajectories that lie in the plane, in accordance with an embodiment of the invention
  • Fig. 2A schematically shows the vehicle and urban environment shown in Fig. 1A with the addition of a crowd of people present in the path of the vehicle and image trajectories associated with the crowd in an ST volume defined by a sequence of images acquired by the CWAS, in accordance with an embodiment of the invention
  • Fig. 2B shows a plan view of the environment and vehicle shown in Fig. 2A
  • Fig. 2C schematically shows a plan view of an EPI plane through the ST volume and image trajectories associated with the crowd that lie in the plane, in accordance with an embodiment of the invention
  • Fig. 3 shows a flow diagram of an algorithm used to determine presence of a crowd, in accordance with an embodiment of the present invention.
  • Figs. 1A and 1B schematically show perspective and plan views of an urban environment 24 in which a vehicle 20 comprising a CWAS 30, in accordance with an embodiment of the invention, is moving along a road 22. The vehicle is moving towards an intersection 26 that has zebra crosswalks 27 and 28.
  • CWAS 30 optionally comprises a single camera 31 that acquires images of the environment through which vehicle 20 moves and a processor (not shown) for processing the images.
  • Camera 31 has an optic axis 32 and a field of view schematically delineated by lines 34.
  • CWAS 30 and camera 31 are shown greatly enlarged relative to vehicle 20 and mounted on the roof of the vehicle.
  • a CWAS is generally mounted in a substantially less obtrusive location inside a vehicle, such as under the hood, and comprises appropriate optics to enable a camera in the CWAS to image the vehicle's environment.
  • A sequence of images 50, acquired by camera 31 in the time it takes vehicle 20 to move from position P1 to position PN, is schematically shown in Fig. 1A in an inset 60, aligned one behind the other to define an ST volume 52.
  • a first image in the sequence, acquired at time t1 when vehicle 20 is located at P1, is labeled IM1, and an N-th image in the sequence, acquired at a time tN when vehicle 20 is located at position PN, is labeled IMN.
  • Pixels in images 50 comprised in ST volume 52 are located by coordinates measured relative to a coordinate system 41 having a time axis perpendicular to the planes of the images and x and y spatial axes that are parallel to the planes of the images.
  • the x and y-axes correspond respectively to X and Y-axes of coordinate system 40.
  • ST volume 52 has an ST "optic" axis 42 corresponding to optic axis 32 of camera 31.
  • ST optic axis 42 passes through pixels in images 50 that image features in environment 24 lying along camera optic axis 32.
  • the t-axis of ST volume 52 is chosen to coincide with ST axis 42.
  • a given pixel in ST volume 52 is located by a time coordinate along the t-axis, which designates a particular image 50 in which the pixel lies by a time at which the particular image is acquired, and x and y-coordinates, which designate where in the particular image 50 the pixel is located.
  • Pixels in images 50 that image a same feature in urban environment 24 lie along a same line, i.e. an image trajectory, in ST volume 52.
  • image trajectories in ST volume 52 of stationary features in environment 24, except for features that might lie along optic axis 32, are "outward" moving trajectories that veer away from ST axis 42 of the ST volume, i.e. the t-axis of coordinate system 41.
  • image trajectories in ST volume 52 corresponding to features F0, F1, F2, F3, F4 and F5 are schematically shown in the ST volume and are indicated by reference labels T0, T1, T2, T3, T4 and T5 respectively.
  • All features F0-F5 are, by way of example, stationary features in environment 24 and are assumed for convenience of presentation to have a same Y-coordinate equal to zero.
  • Feature F0 lies along the Z-axis
  • features F1 and F2 have negative X-coordinates and lie to the right of the driver of vehicle 20
  • features F3, F4 and F5 have positive X-coordinates and lie to the left of the driver.
  • the Z-coordinates (relative to coordinate system 40) of the features decrease while their respective X and Y-coordinates remain the same.
  • image trajectory T0 corresponding to feature F0 is a straight line lying along ST axis 42. Because features F0-F5 have a same Y-coordinate equal to zero, image trajectories T0-T5 are coplanar and lie in an EPI plane 54 of ST volume 52 that is coincident with the xt-plane of the ST volume.
  • a given feature in environment 24 does not have an image trajectory that is coplanar with an EPI plane of ST volume 52.
  • the projections share characteristics of the image trajectories and in general may be used as approximations of the image trajectories.
  • projections of image trajectories on EPI planes of an ST volume are assumed to be image trajectories and are not distinguished from actual image trajectories.
  • Fig. 1C schematically shows an enlarged plan view of a portion of EPI plane 54 and details of image trajectories T0, T1, T2, T3, T4 and T5.
  • the image trajectories have been generated, by way of example, assuming that at position P1 of vehicle 20, the X and Z-coordinates of features F0, F1, F2, F3, F4, and F5 are respectively (0, 80), (-3.5, 32.6), (-3.5, 43), (8.5, 32.6), (8.5, 43), (10.7, 48).
  • in each set of parentheses, the first and second numbers give the X-coordinate and Z-coordinate, in meters, of the corresponding feature.
  • the small icons along a given trajectory T0-T5 indicate an x-value for the trajectory at a time indicated by a witness line directly below the icon.
  • the distance vehicle 20 has traveled from position P1 at each of the times indicated by a witness line along axis 51 is indicated in meters. It is noted that in addition to each of image trajectories T0-T5 veering away from the ST axis, none of the image trajectories intersect.
  • Figs. 2A and 2B schematically show perspective and plan views of vehicle 20 and urban environment 24 having, in addition to the features shown in Figs. 1A and 1B, a crowd 70 of people that are crossing road 22 at zebra crosswalk 27 from both sides of the road.
  • the crowd comprises six people, three of whom are moving from left to right and three of whom are moving from right to left as seen by the driver (and camera 31) of vehicle 20.
  • the motion of the people in crowd 70 generates features in a sequence of images of the crowd acquired by camera 31 that exhibit inward optic flow and corresponding image trajectories in an ST volume defined by the images that move inward in spite of the motion of vehicle 20.
  • Insets 61 and 62 in Figs. 2A and 2B respectively show enlarged schematic views of crowd 70 at time t1, when vehicle 20 is located at position P1.
  • Fig. 2A schematically shows, in an inset 63, a sequence of images 80 acquired by camera 31 as the vehicle moves from location P1 to location PN, and an ST volume 82 defined by the images.
  • ST volume 82 shows image trajectories TP1, TP2, TP3, TP4, TP5 and TP6 for six features, each of which is associated with a different person in crowd 70, that are imaged by camera 31 in images 80.
  • a feature associated with a given person in crowd 70 may, for example, be a region of the person's body or clothing or something the person is carrying.
  • Image trajectories having an odd subscript are associated with persons in crowd 70 moving from right to left and trajectories having an even subscript are associated with persons in the crowd moving from left to right.
  • Each of the features is assumed, for convenience of presentation, to have a Y-coordinate equal to zero.
  • Image trajectories TP1-TP6 are therefore coplanar and lie in an EPI plane 84 that lies in the xt-plane of ST volume 82.
  • Fig. 2C shows an enlarged plan view of a region of EPI plane 84 and details of image trajectories TP1-TP6.
  • Image trajectories TP1, TP2, TP3, TP4, TP5, and TP6 are generated assuming that the persons they are associated with move at constant velocities equal respectively to -2.8, 1, -1.3, 2, -1 and 1.4 m/s between times t1 and tN, and that at time t1 their respective X-coordinates are, in meters, 1.75, -0.5, 1.2, -2, 0.5, -1.5. Because of the motions of the persons associated with image trajectories TP1-TP6, each of the trajectories initially moves inward towards the t-axis.
  • CWAS 30 determines whether crowd 70 is present in environment 24 responsive to characteristics of image trajectories in ST volume 82.
  • CWAS 30 determines whether crowd 70 is present or not present responsive to a degree to which ST volume 82 exhibits inward moving image trajectories.
  • CWAS 30 determines whether or not crowd 70 is present responsive to a frequency with which image trajectories in the ST volume intersect.
  • CWAS 30 determines a degree to which ST volume 82 exhibits inward moving and/or intersecting trajectories responsive to a degree to which at least one EPI plane in the ST volume exhibits such trajectories.
  • the y-coordinate of the at least one EPI plane in ST volume 82 is determined so that it corresponds to Y and Z-coordinates at which features of people in a crowd in the path of vehicle 20 are expected to be located and which will therefore generate image trajectories in the at least one EPI plane responsive to human motion.
  • the at least one EPI plane comprises a plurality of EPI planes located at appropriate image y-coordinates.
  • the y-coordinates of the planes are determined so that at a distance of about 30 meters from vehicle 20 features in a person's body at locations within a range from about knee height to about shoulder height of the person generate image trajectories in the EPI planes.
  • the planes are evenly spaced. Assume that knee height to head height extends from about 0.25 to about 1.75 meters above ground. Then a central EPI plane of the five EPI planes would have a y-coordinate equal to zero (it would lie in the xt-planes of ST volumes 52 or 82) and the planes would be spaced apart by about 0.5 mm. For each EPI plane, to determine presence of inward moving and intersecting image trajectories in the EPI plane, in accordance with an embodiment of the invention, CWAS 30 generates a "rolling" sample image of the EPI plane.
  • the rolling sample image comprises a line of pixels parallel to the x-axis of ST volume 82, at the y-coordinate of the EPI plane, from a last image 80 acquired by camera 31 and from each of a plurality of "M-1" images 80 preceding the last image.
  • the sample image therefore comprises pixel lines from a total of M images.
  • M is equal to about 20.
  • CWAS 30 identifies image trajectories in the rolling sample image optionally using a Hough transform that maps pixels in the rolling sample image to a Hough accumulator space. For each pixel in the rolling sample image, a gradient of intensity is determined. For pixels having a relatively well defined intensity gradient, CWAS 30 defines a straight line that passes through the pixel and has a slope that is perpendicular to the gradient. A pixel is optionally assumed to have a well defined gradient if its gray level differs from that of its neighbors by a sufficient amount. For example, in some embodiments of the invention, for pixels that have gray levels in a range from 0-255, a gray level difference is required to be greater than about 16 for a pixel to have a well defined gradient.
  • the straight line defined for a given pixel is assumed to approximate an image trajectory along which the pixel lies.
  • whereas an image trajectory along which a pixel lies is, as is shown for example in Figs. 1A, 1C, 2A and 2C, generally not a straight line, for relatively short distances the trajectory can usually be approximated by a straight line.
  • Portions of image trajectories that are comprised in rolling sample images that do not have a large dimension along the t-axis are generally sufficiently short so that they may be reasonably well approximated by straight lines.
  • image trajectories in a rolling sample may advantageously be approximated by straight lines.
  • the parameter, x 0 is the x intercept of the pixel's straight-line trajectory with the last pixel line added to the rolling sample image.
  • CWAS 30 maps the j-th pixel into the Hough accumulator space by increasing a count in the bin that brackets the values (sj, x 0 ) by one.
  • a trajectory in the sample image is a right or left moving trajectory relative to the driver of vehicle 20 if its slope s is positive or negative respectively.
  • a trajectory in the sample image is an outward moving trajectory if its slope s and x0 have opposite signs and is an inward moving trajectory if they have same signs.
  • Relatively high counts for both positive slope Hough bins and negative slope Hough bins that are associated with a same value of x0 indicate that x0 is an intersection point of an outward and an inward moving image trajectory.
  • CWAS 30 uses values accumulated in the Hough accumulator space for a rolling sample image to define probability functions that are used to determine a degree to which inward moving and/or intersecting image trajectories are found in the sample image.
  • the probability functions are used to determine presence of a crowd, such as crowd 70, in the path of vehicle 20.
  • with H(s, x0) denoting the count accumulated in the Hough bin for slope s and intercept x0, the probability functions are optionally given by

    P(s+, x0) = [ Σ(s > KT) H(s, x0) / Σ(s, x0) H(s, x0) ],    3)

    P(s-, x0) = [ Σ(s < -KT) H(s, x0) / Σ(s, x0) H(s, x0) ],    4)

    where the numerator sums run over slope bins beyond the threshold and the denominator sums run over all bins.
  • KT is a predetermined threshold, which is used to increase signal to noise.
  • Optionally, KT has a value equal to about 0.25.
  • the rolling sample image is determined to exhibit inward moving trajectories to a degree that indicates presence of crowd 70 if

    Σ(x0 > 0) P(s+, x0) > KINW    5)

    or

    Σ(x0 < 0) P(s-, x0) > KINW,    6)

  • where KINW is a predetermined threshold that controls the sensitivity of the inward-motion determination. (A code sketch of these criteria appears at the end of this section.)
  • each rolling sample image is processed to generate a Gaussian pyramid of images, using methods known in the art.
  • Image trajectories having large slopes are generally more easily detected in a higher level image of a Gaussian pyramid than in a lower one and in accordance with an embodiment of the invention, higher level Gaussian pyramid images of rolling sample images are used to determine to what extent they exhibit inward motion and intersecting trajectories.
  • a highest level image in the Gaussian pyramid of a rolling sample image is first processed optionally in accordance with equations 3) - 6) to determine if it exhibits inward moving trajectories.
  • if inward motion is not found in the highest level, it is looked for in the next lower level. The process is continued to determine if at some level of the pyramid, including possibly the lowest level of the pyramid (i.e. the original rolling sample image for the EPI plane), inward motion is found. The process of looking for inward motion is stopped at the highest level of the pyramid at which such motion is found. Depending on whether or not inward motion is found in some level of the pyramid, the original rolling sample image and its corresponding EPI plane are determined respectively to exhibit or not exhibit inward motion.
  • the results from all the EPI planes are combined to determine if the images acquired by camera 31 exhibit a degree of inward motion indicative of presence of a crowd.
  • a weighted sum of the results from each of the EPI planes is determined and if the weighted sum is greater than an appropriate threshold the images are determined to exhibit inward motion to a degree indicating presence of a crowd.
  • the camera images are processed to determine, in accordance with an embodiment of the invention, whether or not they exhibit a degree of intersecting trajectories sufficient to indicate presence of a crowd.
  • the rolling sample region for each EPI plane is processed to determine if it exhibits intersecting image trajectories.
  • processing a given rolling sample region for intersections is performed in a same level of the Gaussian pyramid generated for the sample region for which inward motion is found. If inward motion was not found, processing for intersections is performed on the original rolling sample region.
  • each value x0 that labels a Hough space bin is vetted to determine if it is an intersection point of image trajectories.
  • x0 is determined to be an intersection point for at least two image trajectories if the product of the probabilities determined in equations 3) and 4) above satisfies an equation of the form

    P(s+, x0)P(s-, x0) > KC,    7)

    where KC is a predetermined "sensitivity" threshold.
  • NC represents the number of points x0 that satisfy equation 7).
  • the rolling sample region and its associated EPI plane are determined to exhibit a degree of intersecting image trajectories indicative of a crowd if NC is greater than a predetermined threshold.
  • the results from all the EPI planes are combined to determine if the images acquired by camera 31 exhibit a number of intersecting trajectories sufficient to indicate presence of a crowd.
  • the determination is made responsive to whether a weighted sum of the "intersection results" from all the EPI planes is greater than a predetermined threshold.
  • the results from testing the rolling sample regions of the EPI planes at a given time t for inward motion and multiplicity of intersections are processed by CWAS 30 to provide an assessment as to whether at the given time t a crowd is present in front of vehicle 20.
  • CWAS 30 determines whether a crowd such as crowd 70 (Figs. 2A and 2B) is present in accordance with an algorithm 200 similar to that shown in a flow diagram in Fig. 3.
  • CWAS 30 optionally proceeds to a junction 202.
  • at junction 202, if the camera images have been determined to exhibit intersecting trajectories, CWAS 30 proceeds to a junction 203, and if not it proceeds to a junction 204.
  • the CWAS optionally determines whether any of the following three conditions prevail: 1) there is substantial clustering of intersections close to the t-axis (as shown for example in Figs. 2A and 2C for crowd 70); 2) in an immediately preceding decision, CWAS 30 determined there was a crowd present; or 3) a pedestrian detection system comprised in the CWAS determined that many individuals are present in front of vehicle 20. Any of various pedestrian detection systems known in the art may be used in the practice of the present invention to provide an indication if many individuals are present in front of vehicle 20.
  • the pedestrian detection system is a component based detection system such as described in a PCT patent application entitled "Pedestrian Detection" filed on even date with the present application, the disclosure of which is incorporated herein by reference. If at least one of the three conditions exists, CWAS 30 proceeds to a decision block 205 and determines that a crowd is present. If on the other hand none of the conditions are extant, CWAS 30 proceeds to junction 204. At junction 204, if the camera images provided by camera 31 have been determined to exhibit inward motion, CWAS 30 proceeds to decision junction 206.
  • the CWAS optionally, determines if either of the following two conditions prevail: 1) in an immediately preceding decision, CWAS 30 determined there was a crowd present; or 2) a pedestrian detection system comprised in the CWAS determined that many individuals are present in front of vehicle 20. If at least one of the conditions prevails, the CWAS proceeds to a decision block 207 and determines that a crowd is present. If neither of the two conditions are present, CWAS 30 proceeds to a junction 208 and determines if vehicle 20 is or is not stationary. If the vehicle is not stationary, CWAS proceeds to block 209 and determines that a crowd is not present. Any of various methods and devices known in the art may be used to determine if vehicle 20 is moving or not.
  • CWAS 30 optionally determines whether vehicle 20 is moving from an accelerometer it comprises or from signals that it receives from a speedometer system in the vehicle. If at junction 208 the vehicle is stationary, CWAS 30 proceeds to a junction 210. At junction 210, if in an immediately preceding decision CWAS 30 determined that a crowd was not present, the CWAS proceeds to a decision block 211 and determines that a crowd is not currently present. If on the other hand, at junction 210, the preceding decision was that a crowd was present, CWAS 30 proceeds to a junction 212.
  • at junction 212, if the images do not exhibit outward flow, CWAS 30 determines in a decision block 213 that a crowd is currently not present. If the images did exhibit outward flow, CWAS 30 proceeds to a decision block 214 and determines that a crowd is currently present. In using outward flow as a criterion for deciding whether a crowd is present, it is noted that if vehicle 20 is stationary, outward flow can be generated in images acquired by camera 31 only if moving objects are imaged in the images.
  • each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements or parts of the subject or subjects of the verb.
  • the present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention.
  • the described embodiments comprise different features, not all of which are required in all embodiments of the invention.
  • Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of embodiments of the present invention that are described and embodiments of the present invention comprising different combinations of features noted in the described embodiments will occur to persons of the art. The scope of the invention is limited only by the following claims.
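
As a concrete illustration of equations 3) through 7) above, the following minimal sketch (Python with NumPy; the function and parameter names are hypothetical, not from the patent) evaluates the inward-motion and intersection criteria from a Hough accumulator H indexed by slope bin and x0 bin, such as the accumulator described above. KT = 0.25 is the value given in the text; the values of KINW and KC are not specified there, so the defaults below are placeholder assumptions only.

```python
import numpy as np

def crowd_indicators(H, s_vals, x0_vals, K_T=0.25, K_INW=0.3, K_C=1e-4):
    """Evaluate equations 3)-7) for one rolling sample image.

    H       -- Hough accumulator; H[i, j] counts pixels voting for
               slope s_vals[i] and intercept x0_vals[j]
    s_vals  -- bin-center slopes
    x0_vals -- bin-center intercepts, measured from the ST axis (x = 0)
    Returns (inward, n_c): the inward-motion flag of equations 5)-6)
    and the number NC of intersection points per equation 7).
    """
    total = H.sum()
    if total == 0:
        return False, 0
    # Equations 3) and 4): per-x0 fractions of strong right/left slopes.
    P_plus = H[s_vals > K_T, :].sum(axis=0) / total    # P(s+, x0)
    P_minus = H[s_vals < -K_T, :].sum(axis=0) / total  # P(s-, x0)
    # Equations 5) and 6), following the text's sign convention that a
    # slope and intercept of the same sign indicate an inward trajectory.
    inward = (P_plus[x0_vals > 0].sum() > K_INW or
              P_minus[x0_vals < 0].sum() > K_INW)
    # Equation 7): x0 values supported by both slope directions are
    # intersection points of oppositely moving trajectories.
    n_c = int(np.count_nonzero(P_plus * P_minus > K_C))
    return inward, n_c
```

Per the text, the same tests would be repeated on successively lower levels of a Gaussian pyramid of the sample image, and the per-EPI-plane results combined as a thresholded weighted sum.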

Abstract

A method of determining the presence of an ensemble of moving objects in an environment comprising: acquiring a plurality of images of a scene in the environment; processing the images to determine optic flow of features in the scene; and determining whether an ensemble of moving objects is present in the environment responsive to the optic flow.

Description

CROWD DETECTION

RELATED APPLICATIONS

The present application claims benefit under 35 U.S.C. 119(e) of US Patent application 60/560,048 filed on April 8, 2004, the disclosure of which is incorporated herein by reference.

FIELD OF THE INVENTION

The present invention relates to methods of detecting an ensemble of moving objects, such as, for example, a crowd of moving people.

BACKGROUND OF THE INVENTION

Automotive accidents are a major cause of loss of life and dissipation of resources in substantially all societies in which automotive transportation is common. It is estimated that over 10,000,000 people are injured in traffic accidents annually worldwide and that of this number, about 3,000,000 people are severely injured and about 400,000 are killed. A report "The Economic Cost of Motor Vehicle Crashes 1994" by Lawrence J. Blincoe, published by the United States National Highway Traffic Safety Administration, estimates that motor vehicle crashes in the U.S. in 1994 caused about 5.2 million nonfatal injuries, 40,000 fatal injuries and generated a total economic cost of about $150 billion.

The damage and costs of vehicular accidents have generated substantial interest in collision warning/avoidance systems (CWAS) that detect potential accident situations in the environment of a driver's vehicle and alert the driver to such situations with sufficient warning to allow him or her to avoid them or to reduce the severity of their realization. In relatively dense population environments typical of urban environments, it is advantageous for a CWAS to be capable of detecting and alerting a driver to the presence of a person or persons in the path of a vehicle. Systems that may be used to detect the presence of one or a few pedestrians in the path of a vehicle exist. An article, "Pedestrian Detection Using Wavelet Templates", Oren et al., Computer Vision and Pattern Recognition (CVPR), June 1997, describes a global shape-based detection system for detecting presence of a person. The system uses Haar wavelets to represent patterns in images of a scene and a support vector machine classifier to process the Haar wavelets to classify a pattern as representing a person. A component based detection system for detecting a person is described in "Example Based Object Detection in Images by Components", A. Mohan et al., IEEE Transactions on Pattern Analysis and Machine Intelligence, Vol. 23, No. 4, April 2001. The disclosures of the above noted references are incorporated herein by reference.

However, as the density of people in the path of a vehicle increases and the people become a crowd, such as often occurs, for example, at a zebra crossing of a busy street corner, cues useable to determine presence of an individual often become masked and obscured by the commotion of the individuals in the crowd. As a result, systems for detecting one or a few human forms exhibit a tendency to become "confused" by the presence of a crowd in the path of a vehicle and, for such situations, their ability to provide reliable detection of pedestrians can be compromised.

SUMMARY OF THE INVENTION

An aspect of some embodiments of the present invention relates to providing a method for determining whether an ensemble of moving objects is present in an environment. An aspect of some embodiments of the invention relates to determining the presence of the ensemble responsive to optical data acquired for the environment.
In accordance with an embodiment of the invention, the optical data comprises patterns of optic flow generated responsive to optical data comprised in a sequence of camera images of a scene in the environment. The inventors have determined that in a sequence of camera images of a scene, an ensemble of moving objects in the scene will often generate patterns of optic flow having characteristics associated with the presence of the ensemble in the scene. These characteristics are usually sufficiently correlated with the ensemble so that a degree to which the characteristics are expressed in the sequence of images may be used to indicate the presence or absence of the ensemble in the scene. According to an aspect of some embodiments of the invention, the sequence of camera images is acquired by a camera moving towards the environment and a degree to which the images exhibit inward optic flow is used to determine whether or not the ensemble is present. In an embodiment of the invention, optical data in the sequence of camera images is represented as a function of coordinates in a space-time (ST) volume defined by the images.
An ST volume is an optionally rectangular volume defined by arraying the images parallel to each other and aligned one behind the other in the order in which they were acquired. A location of a given pixel in the images is determined by a time coordinate and two spatial "image" coordinates. The time coordinate is measured along a t-axis perpendicular to the planes of the camera images. The two spatial image coordinates are measured along spatial axes parallel to the planes of the camera images, which are conventionally x and y orthogonal image axes. Planes parallel to the xt-plane of an ST volume are referred to as epipolar or EPI planes. The x and y image coordinates of a pixel in a camera image acquired at a given time t, as measured along the t-axis, correspond to "real world" x and y-coordinates of a feature in the scene imaged on the pixel at the time t. Pixels in the ST volume that image a same feature in the scene at different times t trace out a line, hereinafter referred to as an "image trajectory", in the ST volume. Image trajectories of features in a scene at a constant distance from a moving camera are located in an EPI plane.

Hereinafter, to distinguish camera image coordinates from real world coordinates, real world coordinates are represented by capitalized letters, while camera image coordinates are represented by small letters. Optionally, the image x-axis and y-axis are defined to correspond respectively to real world X and Y-axes so that for a displacement of the camera along the positive world X-axis or Y-axis, a feature in a camera image corresponding to a stationary feature in the real world displaces along the positive image x-axis or positive image y-axis respectively. Optionally, the origin of world coordinates is assumed to be at the optical center of the camera. For images of a scene acquired by a land based camera, conventionally, the world X-axis is a horizontal axis parallel to the ground, the world Y-axis a vertical axis perpendicular to the ground and the world Z-axis coincides with the optic axis of the camera. If the camera is moving in the direction of its optic axis, it is moving along the positive Z-axis. As the camera moves towards the environment, stationary features in the environment that are imaged in the sequence of camera images are characterized by "outward" optic flow, away from the optic axis of the camera. In the ST volume, their corresponding image trajectories move outward, away from a line, hereinafter the "ST axis", in the ST volume corresponding to the camera optic axis.

However, an ensemble of objects moving in different directions in the environment generally provides a plurality of features in the sequence of images that exhibit "inward" optic flow towards the optic axis. The corresponding image trajectories of the features in the ST volume move inward, towards the ST axis. In accordance with an embodiment of the invention, a measure of a degree to which image trajectories in the ST volume exhibit inward optic flow is used to indicate presence of the ensemble in the environment. The inventors have noted that image trajectories associated with an ensemble of moving objects in an environment often exhibit a relatively high incidence of intersecting image trajectories. For example, a crowd of people crossing a street in both directions at a zebra crossing will produce many image trajectories that cross each other as people pass one another and randomly occlude one another in the sequence of images.
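
The ST volume and its EPI planes translate directly into array operations. The sketch below (Python with NumPy; the helper names are illustrative, not taken from the patent) stacks a time-ordered sequence of grayscale frames into an ST volume and slices out the EPI plane at a given image row:

```python
import numpy as np

def build_st_volume(frames):
    """Stack a time-ordered sequence of grayscale images (each H x W)
    into an ST volume indexed as volume[t, y, x]."""
    return np.stack(frames, axis=0)

def epi_plane(volume, y):
    """Slice the EPI plane at image row y: a 2-D (t, x) array in which
    a world feature at constant height traces out an image trajectory."""
    return volume[:, y, :]
```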
In accordance with an embodiment of the invention, a measure of the frequency with which image trajectories in the ST volume intersect is used to indicate presence of the ensemble in the environment. In some embodiments of the invention, the environment is an automotive environment and the ensemble is a crowd of people. Optionally, methods in accordance with an embodiment of the invention are used in a CWAS comprised in a moving vehicle to determine whether a crowd is present in the path of the vehicle.

There is therefore provided in accordance with an embodiment of the present invention, a method of determining the presence of an ensemble of moving objects in an environment comprising: acquiring a plurality of images of a scene in the environment; processing the images to determine optic flow of features in the scene; and determining whether an ensemble of moving objects is present in the environment responsive to the optic flow. Optionally, the method comprises determining a degree to which the optic flow exhibits inward optic flow. Optionally, the method comprises determining whether the ensemble is present responsive to the degree of inward optic flow. In some embodiments of the invention, determining optic flow comprises determining image trajectories of features in the scene and using the image trajectories to determine optic flow. Optionally, determining image trajectories comprises determining image trajectories that lie in at least one EPI plane of a space time volume defined by the images. Optionally, the at least one EPI plane comprises a plurality of planes. In some embodiments of the invention the method comprises determining a degree to which the image trajectories intersect. Optionally, the method comprises determining whether the ensemble is present responsive to the degree to which the image trajectories intersect. In some embodiments of the invention, the images are acquired under conditions for which stationary features in the environment exhibit outward optic flow. In some embodiments of the invention, the images are acquired by a camera mounted to a vehicle. Optionally, the vehicle is an automotive vehicle. In some embodiments of the invention, the ensemble of moving objects is a crowd of people.

There is further provided in accordance with an embodiment of the invention apparatus for detecting presence of a crowd of people in an environment comprising: a camera that acquires images of a scene in the environment; and a processor that processes the images to determine presence of a crowd of people in accordance with an embodiment of the invention. Optionally, the apparatus is adapted to be mounted in a vehicle. Optionally, the vehicle is an automotive vehicle.

BRIEF DESCRIPTION OF FIGURES

Non-limiting examples of embodiments of the present invention are described below with reference to figures attached hereto, which are listed following this paragraph. In the figures, identical structures, elements or parts that appear in more than one figure are generally labeled with a same numeral in all the figures in which they appear. Dimensions of components and features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale.

Fig. 1A schematically shows a vehicle comprising a CWAS moving along a road in an urban environment and image trajectories of features in the environment in an ST volume defined by a sequence of images acquired by the CWAS in accordance with an embodiment of the invention;
Fig. 1B shows a plan view of the environment and vehicle shown in Fig. 1A; Fig. 1C schematically shows a plan view of an epipolar (EPI) plane through the ST volume and image trajectories that lie in the plane, in accordance with an embodiment of the invention; Fig. 2A schematically shows the vehicle and urban environment shown in Fig. 1A with the addition of a crowd of people present in the path of the vehicle and image trajectories associated with the crowd in an ST volume defined by a sequence of images acquired by the CWAS, in accordance with an embodiment of the invention; Fig. 2B shows a plan view of the environment and vehicle shown in Fig. 2A; Fig. 2C schematically shows a plan view of an EPI plane through the ST volume and image trajectories associated with the crowd that lie in the plane, in accordance with an embodiment of the invention; and Fig. 3 shows a flow diagram of an algorithm used to determine presence of a crowd, in accordance with an embodiment of the present invention.

DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS

Figs. 1A and 1B schematically show perspective and plan views of an urban environment 24 in which a vehicle 20 comprising a CWAS 30, in accordance with an embodiment of the invention, is moving along a road 22. The vehicle is moving towards an intersection 26 that has zebra crosswalks 27 and 28. CWAS 30 optionally comprises a single camera 31 that acquires images of the environment through which vehicle 20 moves and a processor (not shown) for processing the images. Camera 31 has an optic axis 32 and a field of view schematically delineated by lines 34. For convenience of presentation, CWAS 30 and camera 31 are shown greatly enlarged relative to vehicle 20 and mounted on the roof of the vehicle. Generally a CWAS is mounted in a substantially less obtrusive location inside a vehicle, such as under the hood, and comprises appropriate optics to enable a camera in the CWAS to image the vehicle's environment. Features of urban environment 24 are optionally located relative to a coordinate system 40 having an origin located at the optic center of camera 31, a Z-axis coincident with optic axis 32, a horizontal X-axis and a vertical Y-axis. Vehicle 20 is shown at a first location P1 at a time t1 and at a second location PN at a time tN as it moves along road 22 towards intersection 26.

A sequence of images 50, acquired by camera 31 in the time it takes vehicle 20 to move from position P1 to position PN, is schematically shown in Fig. 1A in an inset 60, aligned one behind the other to define an ST volume 52. A first image in the sequence, acquired at time t1 when vehicle 20 is located at P1, is labeled IM1, and an N-th image in the sequence, acquired at a time tN when vehicle 20 is located at position PN, is labeled IMN. Pixels in images 50 comprised in ST volume 52 are located by coordinates measured relative to a coordinate system 41 having a time axis perpendicular to the planes of the images and x and y spatial axes that are parallel to the planes of the images. The x and y-axes correspond respectively to X and Y-axes of coordinate system 40. ST volume 52 has an ST "optic" axis 42 corresponding to optic axis 32 of camera 31. ST optic axis 42 passes through pixels in images 50 that image features in environment 24 lying along camera optic axis 32. As a matter of convenience, the t-axis of ST volume 52 is chosen to coincide with ST axis 42.
A given pixel in ST volume 52 is located by a time coordinate along the t-axis, which designates a particular image 50 in which the pixel lies by a time at which the particular image is acquired, and x and y-coordinates, which designate where in the particular image 50 the pixel is located. Pixels in images 50 that image a same feature in urban environment 24 lie along a same line, i.e. an image trajectory, in ST volume 52. For stationary features in environment 24, as vehicle 20 approaches intersection 26, the Z-coordinates (relative to coordinate system 40) of the features decrease, while their respective X and Y-coordinates remain the same. As a result, all stationary features in the environment that do not lie along the Z-axis, i.e. optic axis 32, exhibit outward optic flow. The location of pixels in an image 50 of the sequence of images 50 that image a given stationary feature in environment 24 that does not lie on the Z-axis "drifts" away from the center and towards the perimeter of the image as the acquisition time of the image increases. As a result, image trajectories in ST volume 52 of stationary features in environment 24, except for features that might lie along optic axis 32, are "outward" moving trajectories that veer away from ST axis 42 of the ST volume, i.e. the t-axis of coordinate system 41.

To illustrate characteristics of image trajectories of features in environment 24 that are imaged by camera 31, six features in the environment are indicated by reference labels F0, F1, F2, F3, F4 and F5. Image trajectories in ST volume 52 corresponding to features F0, F1, F2, F3, F4 and F5 are schematically shown in the ST volume and are indicated by reference labels T0, T1, T2, T3, T4 and T5 respectively. All features F0-F5 are, by way of example, stationary features in environment 24 and are assumed for convenience of presentation to have a same Y-coordinate equal to zero. Feature F0 lies along the Z-axis, features F1 and F2 have negative X-coordinates and lie to the right of the driver of vehicle 20 and features F3, F4 and F5 have positive X-coordinates and lie to the left of the driver. As vehicle 20 approaches intersection 26, the Z-coordinates (relative to coordinate system 40) of the features decrease while their respective X and Y-coordinates remain the same. As a result, all features except feature F0, which lies along the Z-axis, exhibit outward optic flow and their respective image trajectories T1-T5 in ST volume 52 move outward and veer away from the t-axis as t increases. Image trajectory T0 corresponding to feature F0 is a straight line lying along ST axis 42. Because features F0-F5 have a same Y-coordinate equal to zero, image trajectories T0-T5 are coplanar and lie in an EPI plane 54 of ST volume 52 that is coincident with the xt-plane of the ST volume. It is noted that, in general, a given feature in environment 24 does not have an image trajectory that is coplanar with an EPI plane of ST volume 52. However, many objects in an environment such as environment 24, especially if they have sufficiently large vertical extents and optical properties that change relatively slowly with change in Y-coordinate, generate traces on EPI planes that are projections of image trajectories associated with the objects. The projections share characteristics of the image trajectories and in general may be used as approximations of the image trajectories.
Hereinafter, projections of image trajectories on EPI planes of an ST volume are assumed to be image trajectories and are not distinguished from actual image trajectories.

Fig. 1C schematically shows an enlarged plan view of a portion of EPI plane 54 and details of image trajectories T0, T1, T2, T3, T4 and T5. The image trajectories have been generated, by way of example, assuming that at position P1 of vehicle 20, the X and Z-coordinates of features F0, F1, F2, F3, F4, and F5 are respectively (0, 80), (-3.5, 32.6), (-3.5, 43), (8.5, 32.6), (8.5, 43), (10.7, 48). In each set of parentheses, the first and second numbers give the X-coordinate and Z-coordinate, in meters, of the corresponding feature. Between position P1, at which image IM1 is acquired, and position PN, at which image IMN is acquired, it is assumed that vehicle 20 moves twenty-two meters. Camera 31 is assumed to have a focal length of 10 mm, and values along the x-axis in the image of EPI plane 54 shown in Fig. 1C are given in mm. Units along the t-axis, which passes through x = 0 and coincides with ST axis 42 (also shown in Fig. 1A) and image trajectory T0, are arbitrary. To prevent clutter, t1, tN and equidistant time intervals between time t1 and time tN are indicated by witness lines along an axis 51 parallel to the t-axis at the bottom of the image of EPI plane 54. The small icons along a given trajectory T0-T5 indicate an x-value for the trajectory at a time indicated by a witness line directly below the icon. For convenience of presentation, the distance vehicle 20 has traveled from position P1 at each of the times indicated by a witness line along axis 51 is indicated in meters.

It is noted that, in addition to each of image trajectories T0-T5 veering away from ST axis 42 (i.e. the t-axis in Fig. 1C), none of the image trajectories intersect. The lack of intersection points is due to a fortuitous choice of the locations of features F0-F5. For example, were a feature F6, which is shown in Fig. 1B, added to the group of features F0-F5, its image trajectory would intersect with image trajectory T1 of feature F1 at a time between t1 and tN. However, in general, for a region of an environment in which none of the features is moving, intersections between image trajectories occur less frequently than for a region of an environment comprising an ensemble of moving objects. In addition, since there are no inward moving image trajectories in an environment in which there are no moving features, there are no intersection points between an inward and an outward moving trajectory. All intersection points are only between image trajectories moving in a same, outward direction relative to ST axis 42.

A configuration of image trajectories different from that shown in Figs. 1A and 1C arises for an environment comprising an ensemble of non-stationary features that move in different directions. For example, Figs. 2A and 2B schematically show perspective and plan views of vehicle 20 and urban environment 24 having, in addition to the features shown in Figs. 1A and 1B, a crowd 70 of people that are crossing road 22 at zebra crosswalk 27 from both sides of the road. By way of example, the crowd comprises six people, three of whom are moving from left to right and three of whom are moving from right to left as seen by the driver (and camera 31) of vehicle 20. The motion of the people in crowd 70 generates features in a sequence of images of the crowd acquired by camera 31 that exhibit inward optic flow, and corresponding image trajectories in an ST volume defined by the images that move inward in spite of the motion of vehicle 20. Insets 61 and 62 in Figs. 2A and 2B respectively show enlarged schematic views of crowd 70 at time t1, when vehicle 20 is located at position P1.

Fig. 2A schematically shows, in an inset 63, a sequence of images 80 acquired by camera 31 as the vehicle moves from location P1 to location PN, and an ST volume 82 defined by the images. ST volume 82 shows image trajectories TP1, TP2, TP3, TP4, TP5 and TP6 for six features, each of which is associated with a different person in crowd 70, that are imaged by camera 31 in images 80. A feature associated with a given person in crowd 70 may, for example, be a region of the person's body or clothing or something the person is carrying. Image trajectories having an odd subscript are associated with persons in crowd 70 moving from right to left and trajectories having an even subscript are associated with persons in the crowd moving from left to right. Each of the features is assumed, for convenience of presentation, to have a Y-coordinate equal to zero. Image trajectories TP1-TP6 are therefore coplanar and lie in an EPI plane 84 that lies in the xt-plane of ST volume 82. Fig. 2C shows an enlarged plan view of a region of EPI plane 84 and details of image trajectories TP1-TP6. Image trajectories TP1, TP2, TP3, TP4, TP5, and TP6 are generated assuming that the persons they are associated with move at constant velocities equal respectively to -2.8, 1, -1.3, 2, -1 and 1.4 m/s between times t1 and tN, and that at time t1 their respective X-coordinates are, in meters, 1.75, -0.5, 1.2, -2, 0.5, -1.5. Because of the motions of the persons associated with image trajectories TP1-TP6, each of the trajectories initially moves inward towards the t-axis. In addition, the trajectories exhibit a relatively high concentration of intersection points, which tend to be clustered close to the t-axis (the ST axis corresponding to optic axis 32 (Fig. 2A) of camera 31), and include intersection points between trajectories moving in opposite directions as well as between trajectories moving in same directions.
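
The contrast between the outward trajectories of Fig. 1C and the initially inward trajectories of Fig. 2C can be reproduced numerically with a pinhole projection, x = f*X/Z. The sketch below uses the feature coordinates, pedestrian velocities, focal length and 22 m of vehicle travel given above; the frame rate and the pedestrians' initial distance are not stated in the text, so the values used for them are assumptions.

```python
F_MM = 10.0                 # focal length from the example (mm)
N, TRAVEL = 20, 22.0        # N frames while the vehicle advances 22 m
DZ = TRAVEL / (N - 1)       # vehicle advance per frame (m)

def image_x(X, Z):
    """Pinhole projection: image x (mm) of a world point at (X, Z) in meters."""
    return F_MM * X / Z

# Stationary features F1-F5: (X, Z) at position P1, from the text.
for X, Z0 in [(-3.5, 32.6), (-3.5, 43.0), (8.5, 32.6), (8.5, 43.0), (10.7, 48.0)]:
    xs = [image_x(X, Z0 - k * DZ) for k in range(N)]
    assert abs(xs[-1]) > abs(xs[0])   # outward: |x| grows as Z shrinks

# Pedestrians TP1-TP6: X at t1 (m) and constant velocity (m/s), from the text.
DT = 0.1        # assumed frame interval (~10 frames/s)
Z0_PED = 30.0   # assumed initial distance of the crosswalk (m)
for X0, V in [(1.75, -2.8), (-0.5, 1.0), (1.2, -1.3),
              (-2.0, 2.0), (0.5, -1.0), (-1.5, 1.4)]:
    xs = [image_x(X0 + V * k * DT, Z0_PED - k * DZ) for k in range(N)]
    assert abs(xs[1]) < abs(xs[0])    # initially inward, despite Z shrinking
```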
In accordance with an embodiment of the invention, CWAS 30 determines whether crowd 70 is present in environment 24 responsive to characteristics of image trajectories in ST volume 82. Optionally, CWAS 30 determines whether crowd 70 is present or not present responsive to a degree to which ST volume 82 exhibits inward moving image trajectories. In some embodiments of the invention, CWAS 30 determines whether or not crowd 70 is present responsive to a frequency with which image trajectories in the ST volume intersect. Optionally, CWAS 30 determines a degree to which ST volume 82 exhibits inward moving and/or intersecting trajectories responsive to a degree to which at least one EPI plane in the ST volume exhibits such trajectories. The y-coordinate of the at least one EPI plane in ST volume 82 is determined so that it corresponds to Y and Z-coordinates at which features of people in a crowd in the path of vehicle 20 are expected to be located and which will therefore generate image trajectories in the at least one EPI plane responsive to human motion. In an embodiment of the invention, the at least one EPI plane comprises a plurality of EPI planes located at appropriate image y-coordinates. By way of example, assume that five EPI planes are used to detect crowds, that camera 31 (Figs. 1A and 2A) has a focal length of about 10 mm and that optic axis 32 of the camera is located about a meter from the ground (unlike in Figs. 1A and 2A where for convenience of presentation camera 31 is shown located on the roof).
Optionally, the y-coordinates of the planes are determined so that, at a distance of about 30 meters from vehicle 20, features of a person's body at locations within a range from about knee height to about shoulder height of the person generate image trajectories in the EPI planes. Optionally, the planes are evenly spaced. Assume that knee height to head height extends from about 0.25 to about 1.75 meters above ground. Then a central EPI plane of the five EPI planes would have a y-coordinate equal to zero (it would lie in the xt-planes of ST volumes 52 or 82), the five planes would span a range of about 0.5 mm, and adjacent planes would be spaced apart by about 0.125 mm. For each EPI plane, to determine presence of inward moving and intersecting image trajectories in the EPI plane, in accordance with an embodiment of the invention, CWAS 30 generates a "rolling" sample image of the EPI plane. At any given time t, the rolling sample image comprises a line of pixels parallel to the x-axis of ST volume 82, at the y-coordinate of the EPI plane, from a last image 80 acquired by camera 31 and from each of a plurality of "M-1" images 80 preceding the last image. The sample image therefore comprises pixel lines from a total of M images. Each time a new last image is acquired by camera 31, the line of pixels from the earliest acquired image of the M images is, optionally, discarded and a line of pixels, hereinafter a "last pixel-line", from the new last image is added to the sample image. In some embodiments of the invention, M is equal to about 20. CWAS 30 identifies image trajectories in the rolling sample image, optionally using a Hough transform that maps pixels in the rolling sample image to a Hough accumulator space. For each pixel in the rolling sample image, a gradient of intensity is determined. For pixels having a relatively well defined intensity gradient, CWAS 30 defines a straight line that passes through the pixel and has a slope that is perpendicular to the gradient. A pixel is optionally assumed to have a well defined gradient if its gray level differs from that of its neighbors by a sufficient amount. For example, in some embodiments of the invention, for pixels that have gray levels in a range from 0-255, a gray level difference is required to be greater than about 16 for a pixel to have a well defined gradient. The straight line defined for a given pixel is assumed to approximate an image trajectory along which the pixel lies. Whereas an image trajectory along which a pixel lies is, as is shown for example in Figs. 1A, 1C, 2A and 2C, generally not a straight line, for relatively short distances the trajectory can usually be approximated by a straight line. Portions of image trajectories that are comprised in rolling sample images that do not have a large dimension along the t-axis are generally sufficiently short so that they may be reasonably well approximated by straight lines. For example, for a rolling sample image for which M is about 20 and camera images are acquired at a rate of about 10 images per second, the inventors have found that image trajectories in the rolling sample image may advantageously be approximated by straight lines. The straight-line image trajectory for a pixel is described by an equation optionally of the form

x = s(t − tn) + x0,   1)

where tn is a time at which the last image 80 is acquired, s is the slope that characterizes the straight line and x0 is the x-coordinate of the straight-line trajectory at time tn.
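The rolling sample image described above can be maintained as a simple first-in, first-out buffer of pixel-lines, on which the straight-line fit of equation 1) then operates. The following is a minimal sketch, assuming grayscale frames supplied as numpy arrays; the class name and the parameter y_row (the image row sampled for the EPI plane) are names introduced here for illustration, not taken from the patent.

```python
from collections import deque
import numpy as np

class RollingSampleImage:
    """Holds the last M pixel-lines of one EPI plane (M is about 20 in the text)."""
    def __init__(self, y_row, M=20):
        self.y_row = y_row              # image row corresponding to the EPI plane
        self.lines = deque(maxlen=M)    # the earliest line is discarded automatically

    def add_frame(self, frame):
        """Append the pixel-line of a newly acquired camera image."""
        self.lines.append(np.asarray(frame)[self.y_row, :].copy())

    def image(self):
        """Return the sample image as an (n, width) array; rows follow the
        t-axis, the last row being the "last pixel-line"."""
        return np.stack(list(self.lines))
```

Each new camera image contributes one row; once M rows have accumulated, image() returns an M-row slice of the EPI plane.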
The parameter x0 is the x-intercept of the pixel's straight-line trajectory with the last pixel-line added to the rolling sample image. CWAS 30 determines values for s and x0 for the pixel and maps the pixel into a two-dimensional Hough accumulator space having discrete accumulator bins, each of which defines a range of values for x0 and a range of values for s. Mapping the pixel comprises increasing by one a count in the Hough space bin that brackets the values of x0 and s determined for the pixel. For example, for a j-th pixel having coordinates (tj, xj) and a relatively well defined intensity gradient, the slope s is determined to have a value sj that, optionally, corresponds to a direction perpendicular to the intensity gradient at the coordinates (tj, xj). The x0 intercept is determined to have a value x0,j, optionally in accordance with an expression

x0,j = xj − sj(tj − tn).   2)
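A sketch of this mapping, under stated assumptions, is given below: the intensity gradient is taken by finite differences, the gray-level test uses the threshold of about 16 mentioned above, and the bin edges s_bins and x0_bins are illustrative parameters introduced here. In the full system, x would be measured relative to the image of optic axis 32; plain column indices are used here for simplicity.

```python
import numpy as np

def hough_accumulate(sample, s_bins, x0_bins, grad_thresh=16.0):
    """Map pixels of a rolling sample image (rows = t, columns = x) into an
    (s, x0) accumulator following equations 1) and 2). Illustrative only."""
    sample = sample.astype(float)
    H = np.zeros((len(s_bins) - 1, len(x0_bins) - 1))
    g_t, g_x = np.gradient(sample)      # intensity gradient along t and x
    n_t = sample.shape[0]
    t_n = n_t - 1                       # row index of the last pixel-line
    for tj in range(n_t):
        for xj in range(sample.shape[1]):
            # Only pixels with a relatively well defined gradient are used.
            if np.hypot(g_t[tj, xj], g_x[tj, xj]) < grad_thresh or g_x[tj, xj] == 0:
                continue
            s = -g_t[tj, xj] / g_x[tj, xj]   # slope perpendicular to the gradient
            x0 = xj - s * (tj - t_n)         # equation 2)
            si = np.searchsorted(s_bins, s) - 1
            xi = np.searchsorted(x0_bins, x0) - 1
            if 0 <= si < H.shape[0] and 0 <= xi < H.shape[1]:
                H[si, xi] += 1               # increase the bracketing bin by one
    return H
```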
CWAS 30 maps the j-th pixel into the Hough accumulator space by increasing a count in the bin that brackets the values (sj, x0,j) by one. It is noted that, in accordance with equation 1) given above, a trajectory in the sample image is a right or left moving trajectory relative to the driver of vehicle 20 if its slope s is positive or negative respectively. A trajectory in the sample image is an outward moving trajectory if its slope s and x0 have opposite signs and is an inward moving trajectory if they have same signs. Relatively high counts for both positive-slope Hough bins and negative-slope Hough bins that are associated with a same value of x0 indicate that x0 is an intersection point of an outward and an inward moving image trajectory. In accordance with an embodiment of the invention, CWAS 30 uses values accumulated in the Hough accumulator space for a rolling sample image to define probability functions that are used to determine a degree to which inward moving and/or intersecting image trajectories are found in the sample image. The probability functions are used to determine presence of a crowd, such as crowd 70, in the path of vehicle 20. Let the accumulated count in a Hough space bin having central values s and x0 be represented by H(s, x0) and let a probability of a value for x0 lying on a trajectory having a positive or negative slope be represented by P(s+, x0) and P(s−, x0) respectively. Then, in accordance with an embodiment of the invention,

P(s+, x0) = Σ_{s > K±} H(s, x0) / Σ_{s, x0} H(s, x0),   3)

P(s−, x0) = Σ_{s < −K±} H(s, x0) / Σ_{s, x0} H(s, x0),   4)

where K± is a predetermined threshold, which is used to increase signal-to-noise ratio. By way of example, in some embodiments of the invention, K± has a value equal to about 0.25. Optionally, the rolling sample image is determined to exhibit inward moving trajectories to a degree that indicates presence of crowd 70 if

Σ_{x0 > 0} P(s+, x0) > KINW   5)

or

Σ_{x0 < 0} P(s−, x0) > KINW,   6)

where KINW is a predetermined threshold adjustable to control sensitivity of the inward motion determination. It is noted that the above procedure assumes that trajectories in a rolling sample image are readily detected. The inventors have found that fast moving features, which generate image trajectories having relatively large slopes and leave a sample region quickly, have a tendency to generate artifacts in the rolling sample image and escape detection. To reduce severity of the artifacts and improve efficiency of detecting trajectories, optionally, each rolling sample image is processed to generate a Gaussian pyramid of images, using methods known in the art. Image trajectories having large slopes are generally more easily detected in a higher level image of a Gaussian pyramid than in a lower one and, in accordance with an embodiment of the invention, higher level Gaussian pyramid images of rolling sample images are used to determine to what extent they exhibit inward motion and intersecting trajectories. In accordance with an embodiment of the invention, a highest level image in the Gaussian pyramid of a rolling sample image is first processed, optionally in accordance with equations 3) - 6), to determine if it exhibits inward moving trajectories. If "inward motion" is not found in the highest level, it is looked for in a next lower level. The process is continued to determine if at some level of the pyramid, including possibly the lowest level of the pyramid (i.e. the original rolling sample image for the EPI plane), inward motion is found.
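Equations 3) - 6) translate directly into a few array operations on the accumulator. A minimal sketch follows, reusing the accumulator H from the sketch above; the bin-center arrays and the value of KINW are assumptions (the text gives K± equal to about 0.25 but leaves KINW as a tunable sensitivity threshold).

```python
import numpy as np

def inward_motion(H, s_centers, x0_centers, K_pm=0.25, K_inw=0.5):
    """Evaluate equations 3)-6) on a Hough accumulator H indexed as H[s, x0].
    K_pm follows the text; K_inw is an assumed sensitivity threshold."""
    total = H.sum()
    if total == 0:
        return False, None, None
    P_pos = H[s_centers > K_pm].sum(axis=0) / total    # P(s+, x0), equation 3)
    P_neg = H[s_centers < -K_pm].sum(axis=0) / total   # P(s-, x0), equation 4)
    # Inward trajectories have slope and x0 of the same sign (see equation 1)).
    inward = (P_pos[x0_centers > 0].sum() > K_inw or   # equation 5)
              P_neg[x0_centers < 0].sum() > K_inw)     # equation 6)
    return inward, P_pos, P_neg
```

Returning P(s+, x0) and P(s−, x0) lets the same quantities be reused for the intersection test of equation 7) described below.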
The process of looking for inward motion is stopped at a highest level of the pyramid at which such motion is found. Depending on whether or not inward motion is found in some level of the pyramid, the original rolling sample image and its corresponding EPI plane are determined respectively to exhibit or not exhibit inward motion. In accordance with an embodiment of the invention, following testing of each EPI plane for inward motion, the results from all the EPI planes are combined to determine if the images acquired by camera 31 exhibit a degree of inward motion indicative of presence of a crowd. Optionally, to determine whether the images exhibit inward motion, a weighted sum of the results from each of the EPI planes is determined and, if the weighted sum is greater than an appropriate threshold, the images are determined to exhibit inward motion to a degree indicating presence of a crowd. Following determination as to whether the camera images exhibit inward motion, the camera images are processed to determine, in accordance with an embodiment of the invention, whether or not they exhibit a degree of intersecting trajectories sufficient to indicate presence of a crowd. Optionally, the rolling sample region for each EPI plane is processed to determine if it exhibits intersecting image trajectories. Optionally, processing a given rolling sample region for intersections is performed at the same level of the Gaussian pyramid generated for the sample region as the level at which inward motion was found. If inward motion was not found, processing for intersections is performed on the original rolling sample region. In accordance with an embodiment of the invention, each value x0 that labels a Hough space bin is vetted to determine if it is an intersection point of image trajectories. Optionally, x0 is determined to be an intersection point of at least two image trajectories if the product of the probabilities determined in equations 3) and 4) above satisfies an equation of the form

P(s+, x0)·P(s−, x0) > KC,   7)

where KC is a predetermined "sensitivity" threshold. In accordance with an embodiment of the invention, if NC represents the number of points x0 that satisfy equation 7), the rolling sample region and its associated EPI plane are determined to exhibit a degree of intersecting image trajectories indicative of a crowd if
NC > KNC,   8)

where KNC is an appropriate sensitivity threshold. In accordance with an embodiment of the invention, following testing of the rolling sample regions of the EPI planes for trajectory intersections, the results from all the EPI planes are combined to determine if the images acquired by camera 31 exhibit a number of intersecting trajectories sufficient to indicate presence of a crowd. Optionally, the determination is made responsive to whether a weighted sum of the "intersection results" from all the EPI planes is greater than a predetermined threshold. The results from testing the rolling sample regions of the EPI planes at a given time t for inward motion and multiplicity of intersections are processed by CWAS 30 to provide an assessment as to whether at the given time t a crowd is present in front of vehicle 20.
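The per-plane intersection test of equations 7) and 8) and the weighted combination across EPI planes might be sketched as follows; KC, KNC, the plane weights and the final threshold are all assumed values here, since the text leaves them as predetermined sensitivity parameters.

```python
import numpy as np

def plane_has_intersections(P_pos, P_neg, K_c=0.01, K_nc=3):
    """Equations 7) and 8): vet each x0 bin as an intersection point and
    require enough such points (thresholds assumed)."""
    N_c = int(np.count_nonzero(P_pos * P_neg > K_c))   # points satisfying eq. 7)
    return N_c > K_nc                                  # equation 8)

def combine_planes(plane_results, weights=None, threshold=0.5):
    """Weighted sum of boolean per-EPI-plane results (inward motion or
    intersections), compared with a threshold, as described above."""
    r = np.asarray(plane_results, dtype=float)
    w = np.ones_like(r) if weights is None else np.asarray(weights, dtype=float)
    return float(np.dot(w, r)) / w.sum() > threshold
```

A central EPI plane might plausibly receive a larger weight than peripheral ones; the text does not fix the weighting.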
Optionally, CWAS 30 determines whether a crowd such as crowd 70 (Figs. 2A and 2B) is present in accordance with an algorithm 200 similar to that shown in a flow diagram in Fig. 3. After processing images provided by camera 31 as described above to determine at a time t whether or not the images exhibit inward motion and multiple intersecting trajectories, in a step 201 CWAS 30 optionally proceeds to a junction 202. At 202, if the images exhibit multiple intersections, CWAS 30 proceeds to a junction 203 and if not it proceeds to a junction 204. At junction 203 the CWAS optionally determines whether any of the following three conditions prevails: 1) there is substantial clustering of intersections close to the t-axis (as shown for example in Figs. 2A and 2C for crowd 70); 2) in an immediately preceding decision, CWAS 30 determined there was a crowd present; or 3) a pedestrian detection system comprised in the CWAS determined that many individuals are present in front of vehicle 20. Any of various pedestrian detection systems known in the art may be used in the practice of the present invention to provide an indication that many individuals are present in front of vehicle 20. Optionally, the pedestrian detection system is a component-based detection system such as described in a PCT patent application entitled "Pedestrian Detection" filed on even date with the present application, the disclosure of which is incorporated herein by reference. If at least one of the three conditions exists, CWAS 30 proceeds to a decision block 205 and determines that a crowd is present. If on the other hand none of the conditions is extant, CWAS 30 proceeds to junction 204. At junction 204, if the camera images provided by camera 31 have been determined to exhibit inward motion, CWAS 30 proceeds to decision junction 206. At junction 206, the CWAS optionally determines if either of the following two conditions prevails: 1) in an immediately preceding decision, CWAS 30 determined there was a crowd present; or 2) a pedestrian detection system comprised in the CWAS determined that many individuals are present in front of vehicle 20. If at least one of the conditions prevails, the CWAS proceeds to a decision block 207 and determines that a crowd is present. If neither of the two conditions is present, CWAS 30 proceeds to a junction 208 and determines if vehicle 20 is or is not stationary. If the vehicle is not stationary, CWAS 30 proceeds to block 209 and determines that a crowd is not present. Any of various methods and devices known in the art may be used to determine if vehicle 20 is moving or not. For example, CWAS 30 optionally determines whether vehicle 20 is moving from an accelerometer it comprises or from signals that it receives from a speedometer system in the vehicle. If at junction 208 the vehicle is stationary, CWAS 30 proceeds to a junction 210. At 210, if in an immediately preceding decision CWAS 30 determined that a crowd was not present, the CWAS proceeds to a decision block 211 and determines that a crowd is not currently present. If on the other hand, at junction 210, the preceding decision was that a crowd was present, CWAS 30 proceeds to a junction 212. At 212, if the camera images did not exhibit outward optic flow, optionally determined using methods similar to those used to determine inward flow, CWAS 30 determines in a decision block 213 that a crowd is currently also not present.
If the images did exhibit outward flow, CWAS 30 proceeds to a decision block 214 and determines that a crowd is currently present. In using outward flow as a criterion for deciding whether a crowd is present, it is noted that if vehicle 20 is stationary, outward flow can be generated in images acquired by camera 31 only if moving objects are imaged in the images. In the description and claims of the present application, each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of members, components, elements or parts of the subject or subjects of the verb. The present invention has been described using detailed descriptions of embodiments thereof that are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments of the present invention utilize only some of the features or possible combinations of the features. Variations of embodiments of the present invention that are described, and embodiments of the present invention comprising different combinations of features noted in the described embodiments, will occur to persons of the art. The scope of the invention is limited only by the following claims.
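By way of recapitulation, the decision flow of algorithm 200 can be condensed into a single function. This is a hedged sketch only: the branch taken at junction 204 when inward motion is absent is not spelled out above and is assumed here to fall through to junction 208, and all inputs are booleans produced by the tests already described.

```python
def crowd_decision(intersections, inward, outward, clustered_near_axis,
                   prev_crowd, many_pedestrians, vehicle_stationary):
    """Condensed sketch of algorithm 200 (Fig. 3); returns True if a crowd
    is determined to be present."""
    # Junctions 202/203: multiple intersections plus a corroborating
    # condition lead to block 205 (crowd present).
    if intersections and (clustered_near_axis or prev_crowd or many_pedestrians):
        return True
    # Junctions 204/206: inward motion plus a corroborating condition leads
    # to block 207 (crowd present).
    if inward and (prev_crowd or many_pedestrians):
        return True
    # Junction 208: a moving vehicle with no corroborated flow -> block 209.
    if not vehicle_stationary:
        return False
    # Junctions 210/212: for a stationary vehicle, a previously detected
    # crowd is kept only while the images still show outward flow, which a
    # stationary camera can register only from moving objects
    # (blocks 211, 213, 214).
    return prev_crowd and outward
```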

Claims

1. A method of determining the presence of an ensemble of moving objects in an environment comprising: acquiring a plurality of images of a scene in the environment; processing the images to determine optic flow of features in the scene; and determining whether an ensemble of moving objects is present in the environment responsive to the optic flow.
2. A method according to claim 1 and comprising determining a degree to which the optic flow exhibits inward optic flow.
3. A method according to claim 2 and determining whether the ensemble is present responsive to the degree of inward optic flow.
4. A method according to any of claims 1-3 wherein determining optic flow comprises determining image trajectories of features in the scene and using the image trajectories to determine optic flow.
5. A method according to claim 4 wherein determining image trajectories comprises determining image trajectories that lie in at least one EPI plane of a space time volume defined by the images.
6. A method according to claim 5 wherein the at least one EPI plane comprises a plurality of planes.
7. A method according to any of claims 4-6 and comprising determining a degree to which the image trajectories intersect.
8. A method according to claim 7 and comprising determining whether the ensemble is present responsive to the degree to which the image trajectories intersect.
9. A method according to any of the preceding claims wherein the images are acquired under conditions for which stationary features in the environment exhibit outward optic flow.
10. A method according to any of the preceding claims wherein the images are acquired by a camera mounted to a vehicle.
11. A method according to claim 10 wherein the vehicle is an automotive vehicle.
12. A method according to any of the preceding claims wherein the ensemble of moving objects is a crowd of people.
13. Apparatus for detecting presence of a crowd of people in an environment comprising: a camera that acquires images of a scene in the environment; and a processor that processes the images to determine presence of a crowd of people in accordance with any of claims 1-12.
14. Apparatus according to claim 13 adapted to be mounted in a vehicle.
15. Apparatus according to claim 14 wherein the vehicle is an automotive vehicle.
PCT/IL2005/000382 2004-04-08 2005-04-07 Crowd detection WO2005098751A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
US56004804P 2004-04-08 2004-04-08
US60/560,048 2004-04-08

Publications (1)

Publication Number Publication Date
WO2005098751A1 true WO2005098751A1 (en) 2005-10-20

Family

ID=34966836

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL2005/000382 WO2005098751A1 (en) 2004-04-08 2005-04-07 Crowd detection

Country Status (1)

Country Link
WO (1) WO2005098751A1 (en)


Non-Patent Citations (8)

* Cited by examiner, † Cited by third party
Title
BELL M G H ET AL: "Pedestrian behaviour and exposure to risk", IEEE COLLOQUIUM ON INCIDENT DETECTION, 2 June 1997 (1997-06-02), pages 2 - 1, XP006509974 *
BOGHOSSIAN B A ET AL: "Motion-based machine vision techniques for the management of large crowds", ELECTRONICS, CIRCUITS AND SYSTEMS, 1999. PROCEEDINGS OF ICECS '99. THE 6TH IEEE INTERNATIONAL CONFERENCE ON PAFOS, CYPRUS 5-8 SEPT. 1999, PISCATAWAY, NJ, USA,IEEE, US, vol. 2, 5 September 1999 (1999-09-05), pages 961 - 964, XP010361627, ISBN: 0-7803-5682-9 *
DAVIES A C ET AL: "CROWD MONITORING USING IMAGE PROCESSING", ELECTRONICS AND COMMUNICATION ENGINEERING JOURNAL, INSTITUTION OF ELECTRICAL ENGINEERS, LONDON, GB, vol. 7, no. 1, 1 February 1995 (1995-02-01), pages 37 - 47, XP000500769, ISSN: 0954-0695 *
ENKELMANN W: "OBSTACLE DETECTION BY EVALUATION OF OPTICAL FLOW FIELDS FROM IMAGE SEQUENCES", IMAGE AND VISION COMPUTING, GUILDFORD, GB, vol. 9, no. 3, June 1991 (1991-06-01), pages 160 - 168, XP009033182, ISSN: 0262-8856 *
KOLODKO J ET AL: "On the use of motion as a primitive quantity for autonomous vehicle guidance", INTELLIGENT VEHICLES SYMPOSIUM, 2000. IV 2000. PROCEEDINGS OF THE IEEE DEARBORN, MI, USA 3-5 OCT. 2000, PISCATAWAY, NJ, USA,IEEE, US, 3 October 2000 (2000-10-03), pages 64 - 69, XP010528914, ISBN: 0-7803-6363-9 *
MAURIN B ET AL: "Monitoring crowded traffic scenes", INTELLIGENT TRANSPORTATION SYSTEMS, 2002. PROCEEDINGS. THE IEEE 5TH INTERNATIONAL CONFERENCE ON SEPT. 3-6, 2002, PISCATAWAY, NJ, USA,IEEE, 3 September 2002 (2002-09-03), pages 19 - 24, XP010608255, ISBN: 0-7803-7389-8 *
PINI R ET AL: "Crowd detection in video sequences", INTELLIGENT VEHICLES SYMPOSIUM, 2004 IEEE PARMA, ITALY JUNE 14-17, 2004, PISCATAWAY, NJ, USA,IEEE, 14 June 2004 (2004-06-14), pages 66 - 71, XP010727444, ISBN: 0-7803-8310-9 *
TAKEDA N ET AL: "Moving obstacle detection using residual error of FOE estimation", INTELLIGENT ROBOTS AND SYSTEMS '96, IROS 96, PROCEEDINGS OF THE 1996 IEEE/RSJ INTERNATIONAL CONFERENCE ON OSAKA, JAPAN 4-8 NOV. 1996, NEW YORK, NY, USA, IEEE, US, vol. 3, 4 November 1996 (1996-11-04), pages 1642 - 1647, XP010212538, ISBN: 0-7803-3213-X *

Cited By (141)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8917169B2 (en) 1993-02-26 2014-12-23 Magna Electronics Inc. Vehicular vision system
US8993951B2 (en) 1996-03-25 2015-03-31 Magna Electronics Inc. Driver assistance system for a vehicle
US9436880B2 (en) 1999-08-12 2016-09-06 Magna Electronics Inc. Vehicle vision system
US10683008B2 (en) 2002-05-03 2020-06-16 Magna Electronics Inc. Vehicular driving assist system using forward-viewing camera
US9643605B2 (en) 2002-05-03 2017-05-09 Magna Electronics Inc. Vision system for vehicle
US10351135B2 (en) 2002-05-03 2019-07-16 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US11203340B2 (en) 2002-05-03 2021-12-21 Magna Electronics Inc. Vehicular vision system using side-viewing camera
US10118618B2 (en) 2002-05-03 2018-11-06 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9555803B2 (en) 2002-05-03 2017-01-31 Magna Electronics Inc. Driver assistance system for vehicle
US9834216B2 (en) 2002-05-03 2017-12-05 Magna Electronics Inc. Vehicular control system using cameras and radar sensor
US9008369B2 (en) 2004-04-15 2015-04-14 Magna Electronics Inc. Vision system for vehicle
US9428192B2 (en) 2004-04-15 2016-08-30 Magna Electronics Inc. Vision system for vehicle
US9609289B2 (en) 2004-04-15 2017-03-28 Magna Electronics Inc. Vision system for vehicle
US9948904B2 (en) 2004-04-15 2018-04-17 Magna Electronics Inc. Vision system for vehicle
US10110860B1 (en) 2004-04-15 2018-10-23 Magna Electronics Inc. Vehicular control system
US11847836B2 (en) 2004-04-15 2023-12-19 Magna Electronics Inc. Vehicular control system with road curvature determination
US10015452B1 (en) 2004-04-15 2018-07-03 Magna Electronics Inc. Vehicular control system
US11503253B2 (en) 2004-04-15 2022-11-15 Magna Electronics Inc. Vehicular control system with traffic lane detection
US9191634B2 (en) 2004-04-15 2015-11-17 Magna Electronics Inc. Vision system for vehicle
US10735695B2 (en) 2004-04-15 2020-08-04 Magna Electronics Inc. Vehicular control system with traffic lane detection
US10462426B2 (en) 2004-04-15 2019-10-29 Magna Electronics Inc. Vehicular control system
US9736435B2 (en) 2004-04-15 2017-08-15 Magna Electronics Inc. Vision system for vehicle
US10187615B1 (en) 2004-04-15 2019-01-22 Magna Electronics Inc. Vehicular control system
US10306190B1 (en) 2004-04-15 2019-05-28 Magna Electronics Inc. Vehicular control system
US10071676B2 (en) 2006-08-11 2018-09-11 Magna Electronics Inc. Vision system for vehicle
US11396257B2 (en) 2006-08-11 2022-07-26 Magna Electronics Inc. Vehicular forward viewing image capture system
US11951900B2 (en) 2006-08-11 2024-04-09 Magna Electronics Inc. Vehicular forward viewing image capture system
US11148583B2 (en) 2006-08-11 2021-10-19 Magna Electronics Inc. Vehicular forward viewing image capture system
US11623559B2 (en) 2006-08-11 2023-04-11 Magna Electronics Inc. Vehicular forward viewing image capture system
US10787116B2 (en) 2006-08-11 2020-09-29 Magna Electronics Inc. Adaptive forward lighting system for vehicle comprising a control that adjusts the headlamp beam in response to processing of image data captured by a camera
US9424486B2 (en) 2006-08-16 2016-08-23 Cortexica Vision Systems Limited Method of image processing
US8718321B2 (en) 2006-08-16 2014-05-06 Cortexica Vision Systems Limited Method of image processing
US9953236B1 (en) 2017-03-10 2018-04-24 TuSimple System and method for semantic segmentation using dense upsampling convolution (DUC)
US10067509B1 (en) 2017-03-10 2018-09-04 TuSimple System and method for occluding contour detection
US10147193B2 (en) 2017-03-10 2018-12-04 TuSimple System and method for semantic segmentation using hybrid dilated convolution (HDC)
US10671873B2 (en) 2017-03-10 2020-06-02 Tusimple, Inc. System and method for vehicle wheel detection
US11501513B2 (en) 2017-03-10 2022-11-15 Tusimple, Inc. System and method for vehicle wheel detection
US11587304B2 (en) 2017-03-10 2023-02-21 Tusimple, Inc. System and method for occluding contour detection
US11673557B2 (en) 2017-04-07 2023-06-13 Tusimple, Inc. System and method for path planning of autonomous vehicles based on gradient
US9952594B1 (en) 2017-04-07 2018-04-24 TuSimple System and method for traffic data collection using unmanned aerial vehicles (UAVs)
US10710592B2 (en) 2017-04-07 2020-07-14 Tusimple, Inc. System and method for path planning of autonomous vehicles based on gradient
US10471963B2 (en) 2017-04-07 2019-11-12 TuSimple System and method for transitioning between an autonomous and manual driving mode based on detection of a drivers capacity to control a vehicle
US11557128B2 (en) 2017-04-25 2023-01-17 Tusimple, Inc. System and method for vehicle position and velocity estimation based on camera and LIDAR data
US10552691B2 (en) 2017-04-25 2020-02-04 TuSimple System and method for vehicle position and velocity estimation based on camera and lidar data
US11928868B2 (en) 2017-04-25 2024-03-12 Tusimple, Inc. System and method for vehicle position and velocity estimation based on camera and LIDAR data
US10867188B2 (en) 2017-05-18 2020-12-15 Tusimple, Inc. System and method for image localization based on semantic segmentation
US11885712B2 (en) 2017-05-18 2024-01-30 Tusimple, Inc. Perception simulation for improved autonomous vehicle control
US10481044B2 (en) 2017-05-18 2019-11-19 TuSimple Perception simulation for improved autonomous vehicle control
US10830669B2 (en) 2017-05-18 2020-11-10 Tusimple, Inc. Perception simulation for improved autonomous vehicle control
US10558864B2 (en) 2017-05-18 2020-02-11 TuSimple System and method for image localization based on semantic segmentation
US10474790B2 (en) 2017-06-02 2019-11-12 TuSimple Large scale distributed simulation for realistic multiple-agent interactive environments
US10762635B2 (en) 2017-06-14 2020-09-01 Tusimple, Inc. System and method for actively selecting and labeling images for semantic segmentation
US10493988B2 (en) 2017-07-01 2019-12-03 TuSimple System and method for adaptive cruise control for defensive driving
US10308242B2 (en) 2017-07-01 2019-06-04 TuSimple System and method for using human driving patterns to detect and correct abnormal driving behaviors of autonomous vehicles
US11040710B2 (en) 2017-07-01 2021-06-22 Tusimple, Inc. System and method for using human driving patterns to detect and correct abnormal driving behaviors of autonomous vehicles
US11753008B2 (en) 2017-07-01 2023-09-12 Tusimple, Inc. System and method for adaptive cruise control with proximate vehicle detection
US10737695B2 (en) 2017-07-01 2020-08-11 Tusimple, Inc. System and method for adaptive cruise control for low speed following
US10752246B2 (en) 2017-07-01 2020-08-25 Tusimple, Inc. System and method for adaptive cruise control with proximate vehicle detection
US11958473B2 (en) 2017-07-01 2024-04-16 Tusimple, Inc. System and method for using human driving patterns to detect and correct abnormal driving behaviors of autonomous vehicles
US10303522B2 (en) 2017-07-01 2019-05-28 TuSimple System and method for distributed graphics processing unit (GPU) computation
US11550329B2 (en) 2017-08-08 2023-01-10 Tusimple, Inc. Neural network based vehicle dynamics model
US11029693B2 (en) 2017-08-08 2021-06-08 Tusimple, Inc. Neural network based vehicle dynamics model
US10360257B2 (en) 2017-08-08 2019-07-23 TuSimple System and method for image annotation
US10816354B2 (en) 2017-08-22 2020-10-27 Tusimple, Inc. Verification module system and method for motion-based lane detection with multiple sensors
US11874130B2 (en) 2017-08-22 2024-01-16 Tusimple, Inc. Verification module system and method for motion-based lane detection with multiple sensors
US11573095B2 (en) 2017-08-22 2023-02-07 Tusimple, Inc. Verification module system and method for motion-based lane detection with multiple sensors
US10762673B2 (en) 2017-08-23 2020-09-01 Tusimple, Inc. 3D submap reconstruction system and method for centimeter precision localization using camera-based submap and LiDAR-based global map
US10303956B2 (en) 2017-08-23 2019-05-28 TuSimple System and method for using triplet loss for proposal free instance-wise semantic segmentation for lane detection
US11151393B2 (en) 2017-08-23 2021-10-19 Tusimple, Inc. Feature matching and corresponding refinement and 3D submap position refinement system and method for centimeter precision localization using camera-based submap and LiDAR-based global map
US11846510B2 (en) 2017-08-23 2023-12-19 Tusimple, Inc. Feature matching and correspondence refinement and 3D submap position refinement system and method for centimeter precision localization using camera-based submap and LiDAR-based global map
US10678234B2 (en) 2017-08-24 2020-06-09 Tusimple, Inc. System and method for autonomous vehicle control to minimize energy cost
US11366467B2 (en) 2017-08-24 2022-06-21 Tusimple, Inc. System and method for autonomous vehicle control to minimize energy cost
US11886183B2 (en) 2017-08-24 2024-01-30 Tusimple, Inc. System and method for autonomous vehicle control to minimize energy cost
US11745736B2 (en) 2017-08-31 2023-09-05 Tusimple, Inc. System and method for vehicle occlusion detection
US10311312B2 (en) 2017-08-31 2019-06-04 TuSimple System and method for vehicle occlusion detection
US10783381B2 (en) 2017-08-31 2020-09-22 Tusimple, Inc. System and method for vehicle occlusion detection
US10649458B2 (en) 2017-09-07 2020-05-12 Tusimple, Inc. Data-driven prediction-based system and method for trajectory planning of autonomous vehicles
US10782693B2 (en) 2017-09-07 2020-09-22 Tusimple, Inc. Prediction-based system and method for trajectory planning of autonomous vehicles
US10953880B2 (en) 2017-09-07 2021-03-23 Tusimple, Inc. System and method for automated lane change control for autonomous vehicles
US10953881B2 (en) 2017-09-07 2021-03-23 Tusimple, Inc. System and method for automated lane change control for autonomous vehicles
US11853071B2 (en) 2017-09-07 2023-12-26 Tusimple, Inc. Data-driven prediction-based system and method for trajectory planning of autonomous vehicles
US11294375B2 (en) 2017-09-07 2022-04-05 Tusimple, Inc. System and method for using human driving patterns to manage speed control for autonomous vehicles
US10782694B2 (en) 2017-09-07 2020-09-22 Tusimple, Inc. Prediction-based system and method for trajectory planning of autonomous vehicles
US10656644B2 (en) 2017-09-07 2020-05-19 Tusimple, Inc. System and method for using human driving patterns to manage speed control for autonomous vehicles
US11892846B2 (en) 2017-09-07 2024-02-06 Tusimple, Inc. Prediction-based system and method for trajectory planning of autonomous vehicles
US10671083B2 (en) 2017-09-13 2020-06-02 Tusimple, Inc. Neural network architecture system for deep odometry assisted by static scene optical flow
US10552979B2 (en) 2017-09-13 2020-02-04 TuSimple Output of a neural network method for deep odometry assisted by static scene optical flow
US10387736B2 (en) 2017-09-20 2019-08-20 TuSimple System and method for detecting taillight signals of a vehicle
US10733465B2 (en) 2017-09-20 2020-08-04 Tusimple, Inc. System and method for vehicle taillight state recognition
US11328164B2 (en) 2017-09-20 2022-05-10 Tusimple, Inc. System and method for vehicle taillight state recognition
US11734563B2 (en) 2017-09-20 2023-08-22 Tusimple, Inc. System and method for vehicle taillight state recognition
US11853883B2 (en) 2017-09-30 2023-12-26 Tusimple, Inc. System and method for instance-level lane detection for autonomous vehicle control
US10970564B2 (en) 2017-09-30 2021-04-06 Tusimple, Inc. System and method for instance-level lane detection for autonomous vehicle control
US10962979B2 (en) 2017-09-30 2021-03-30 Tusimple, Inc. System and method for multitask processing for autonomous vehicle computation and control
US10768626B2 (en) 2017-09-30 2020-09-08 Tusimple, Inc. System and method for providing multiple agents for decision making, trajectory planning, and control for autonomous vehicles
US11500387B2 (en) 2017-09-30 2022-11-15 Tusimple, Inc. System and method for providing multiple agents for decision making, trajectory planning, and control for autonomous vehicles
US10410055B2 (en) 2017-10-05 2019-09-10 TuSimple System and method for aerial video traffic analysis
US10739775B2 (en) 2017-10-28 2020-08-11 Tusimple, Inc. System and method for real world autonomous vehicle trajectory simulation
US10666730B2 (en) 2017-10-28 2020-05-26 Tusimple, Inc. Storage architecture for heterogeneous multimedia data
US10812589B2 (en) 2017-10-28 2020-10-20 Tusimple, Inc. Storage architecture for heterogeneous multimedia data
US11435748B2 (en) 2017-10-28 2022-09-06 Tusimple, Inc. System and method for real world autonomous vehicle trajectory simulation
US10528851B2 (en) 2017-11-27 2020-01-07 TuSimple System and method for drivable road surface representation generation using multimodal sensor data
US10528823B2 (en) 2017-11-27 2020-01-07 TuSimple System and method for large-scale lane marking detection using multimodal sensor data
US10657390B2 (en) 2017-11-27 2020-05-19 Tusimple, Inc. System and method for large-scale lane marking detection using multimodal sensor data
US11580754B2 (en) 2017-11-27 2023-02-14 Tusimple, Inc. System and method for large-scale lane marking detection using multimodal sensor data
US10860018B2 (en) 2017-11-30 2020-12-08 Tusimple, Inc. System and method for generating simulated vehicles with configured behaviors for analyzing autonomous vehicle motion planners
US10877476B2 (en) 2017-11-30 2020-12-29 Tusimple, Inc. Autonomous vehicle simulation system for analyzing motion planners
US11782440B2 (en) 2017-11-30 2023-10-10 Tusimple, Inc. Autonomous vehicle simulation system for analyzing motion planners
US11681292B2 (en) 2017-11-30 2023-06-20 Tusimple, Inc. System and method for generating simulated vehicles with configured behaviors for analyzing autonomous vehicle motion planners
US11312334B2 (en) 2018-01-09 2022-04-26 Tusimple, Inc. Real-time remote control of vehicles with high redundancy
US11305782B2 (en) 2018-01-11 2022-04-19 Tusimple, Inc. Monitoring system for autonomous vehicle operation
US11852498B2 (en) 2018-02-14 2023-12-26 Tusimple, Inc. Lane marking localization
US11009356B2 (en) 2018-02-14 2021-05-18 Tusimple, Inc. Lane marking localization and fusion
US11009365B2 (en) 2018-02-14 2021-05-18 Tusimple, Inc. Lane marking localization
US11740093B2 (en) 2018-02-14 2023-08-29 Tusimple, Inc. Lane marking localization and fusion
US10685244B2 (en) 2018-02-27 2020-06-16 Tusimple, Inc. System and method for online real-time multi-object tracking
US11830205B2 (en) 2018-02-27 2023-11-28 Tusimple, Inc. System and method for online real-time multi- object tracking
US11295146B2 (en) 2018-02-27 2022-04-05 Tusimple, Inc. System and method for online real-time multi-object tracking
US10685239B2 (en) 2018-03-18 2020-06-16 Tusimple, Inc. System and method for lateral vehicle detection
US11610406B2 (en) 2018-03-18 2023-03-21 Tusimple, Inc. System and method for lateral vehicle detection
US11074462B2 (en) 2018-03-18 2021-07-27 Tusimple, Inc. System and method for lateral vehicle detection
US11010874B2 (en) 2018-04-12 2021-05-18 Tusimple, Inc. Images for perception modules of autonomous vehicles
US11694308B2 (en) 2018-04-12 2023-07-04 Tusimple, Inc. Images for perception modules of autonomous vehicles
US11500101B2 (en) 2018-05-02 2022-11-15 Tusimple, Inc. Curb detection by analysis of reflection images
US11104334B2 (en) 2018-05-31 2021-08-31 Tusimple, Inc. System and method for proximate vehicle intention prediction for autonomous vehicles
US11948082B2 (en) 2018-05-31 2024-04-02 Tusimple, Inc. System and method for proximate vehicle intention prediction for autonomous vehicles
US11727691B2 (en) 2018-09-12 2023-08-15 Tusimple, Inc. System and method for three-dimensional (3D) object detection
US10839234B2 (en) 2018-09-12 2020-11-17 Tusimple, Inc. System and method for three-dimensional (3D) object detection
US11292480B2 (en) 2018-09-13 2022-04-05 Tusimple, Inc. Remote safe driving methods and systems
US11935210B2 (en) 2018-10-19 2024-03-19 Tusimple, Inc. System and method for fisheye image processing
US10796402B2 (en) 2018-10-19 2020-10-06 Tusimple, Inc. System and method for fisheye image processing
US10942271B2 (en) 2018-10-30 2021-03-09 Tusimple, Inc. Determining an angle between a tow vehicle and a trailer
US11714192B2 (en) 2018-10-30 2023-08-01 Tusimple, Inc. Determining an angle between a tow vehicle and a trailer
US11823460B2 (en) 2019-06-14 2023-11-21 Tusimple, Inc. Image fusion for autonomous vehicle operation
US11810322B2 (en) 2020-04-09 2023-11-07 Tusimple, Inc. Camera pose estimation techniques
US11701931B2 (en) 2020-06-18 2023-07-18 Tusimple, Inc. Angle and orientation measurements for vehicles with multiple drivable sections
CN112767451B (en) * 2021-02-01 2022-09-06 福州大学 Crowd distribution prediction method and system based on double-current convolutional neural network
CN112767451A (en) * 2021-02-01 2021-05-07 福州大学 Crowd distribution prediction method and system based on double-current convolutional neural network
CN113104045B (en) * 2021-03-24 2022-05-31 东风柳州汽车有限公司 Vehicle collision early warning method, device, equipment and storage medium
CN113104045A (en) * 2021-03-24 2021-07-13 东风柳州汽车有限公司 Vehicle collision early warning method, device, equipment and storage medium
US11967140B2 (en) 2022-11-08 2024-04-23 Tusimple, Inc. System and method for vehicle wheel detection

Similar Documents

Publication Publication Date Title
WO2005098751A1 (en) Crowd detection
Kilicarslan et al. Predict vehicle collision by TTC from motion using a single video camera
Mukhtar et al. Vehicle detection techniques for collision avoidance systems: A review
Atev et al. A vision-based approach to collision prediction at traffic intersections
Gandhi et al. Pedestrian collision avoidance systems: A survey of computer vision based recent studies
Gandhi et al. Pedestrian protection systems: Issues, survey, and challenges
US7747039B2 (en) Apparatus and method for automatically detecting objects
JP3463858B2 (en) Perimeter monitoring device and method
EP2993654B1 (en) Method and system for forward collision warning
CN107991671A (en) A kind of method based on radar data and vision signal fusion recognition risk object
Reisman et al. Crowd detection in video sequences
US6556692B1 (en) Image-processing method and apparatus for recognizing objects in traffic
JP2012069121A (en) Protection system for the weak who use road
JP2003067752A (en) Vehicle periphery monitoring device
Rezaei et al. Computer vision for driver assistance
Antony et al. Vision based vehicle detection: A literature review
Kovačić et al. Computer vision systems in road vehicles: a review
Ohn-Bar et al. Partially occluded vehicle recognition and tracking in 3D
Wu et al. Overtaking Vehicle Detection Techniques based on Optical Flow and Convolutional Neural Network.
Chang et al. Stereo-based object detection, classification, and quantitative evaluation with automotive applications
Kilicarslan et al. Direct vehicle collision detection from motion in driving video
Karaduman et al. Approaching car detection via clustering of vertical-horizontal line scanning optical edge flow
Stubbs et al. A real-time collision warning system for intersections
Kilicarslan Motion Paradigm to Detect Pedestrians and Vehicle Collision
Wu et al. Real-time lane and vehicle detection based on a single camera model

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BW BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE EG ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KM KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NA NI NO NZ OM PG PH PL PT RO RU SC SD SE SG SK SL SM SY TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): BW GH GM KE LS MW MZ NA SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HU IE IS IT LT LU MC NL PL PT RO SE SI SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

WWW Wipo information: withdrawn in national office

Country of ref document: DE

122 Ep: pct application non-entry in european phase